Senior AI and ML HPC Cluster Engineer

at NVIDIA
USD 136,000-264,500 per year
SENIOR
✅ On-site

Used Tools & Technologies

Not specified

Required Skills & Competences

Ansible @ 4, CentOS @ 6, Docker @ 4, Kubernetes @ 4, Linux @ 6, Python @ 6, Algorithms @ 4, Machine Learning @ 4, TensorFlow @ 3, Leadership @ 4, Bash @ 6, Networking @ 4, PyTorch @ 3, Puppet @ 4, Salt @ 4, CUDA @ 4, GPU @ 4

Details

NVIDIA is seeking a technical leader on the GPU AI/HPC Infrastructure team to design and implement GPU compute clusters that support demanding deep learning, high performance computing, and computationally intensive workloads. The role focuses on compute, networking, and storage design for large-scale, high-performance workloads, resource utilization in heterogeneous environments, private/public cloud strategy, capacity modeling, and global growth planning.
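
As a rough illustration of the utilization and capacity-modeling side of the role, here is a minimal Python sketch (not from the posting) that snapshots per-GPU utilization and memory on one node through nvidia-smi's CSV query interface; the helper name and report format are assumptions.

```python
# Hypothetical helper: per-GPU utilization/memory snapshot on a single node
# via `nvidia-smi --query-gpu` CSV output. Aggregation across nodes is out
# of scope here and would be handled by whatever monitoring stack is in use.
import csv
import io
import subprocess

def gpu_snapshot():
    """Return one dict per GPU with utilization and memory figures (MiB)."""
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=index,name,utilization.gpu,memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    rows = []
    for fields in csv.reader(io.StringIO(out)):
        idx, name, util, mem_used, mem_total = [f.strip() for f in fields]
        rows.append({
            "index": int(idx),
            "name": name,
            "util_pct": int(util),
            "mem_used_mib": int(mem_used),
            "mem_total_mib": int(mem_total),
        })
    return rows

if __name__ == "__main__":
    for gpu in gpu_snapshot():
        print(f"GPU{gpu['index']} {gpu['name']}: {gpu['util_pct']}% util, "
              f"{gpu['mem_used_mib']}/{gpu['mem_total_mib']} MiB")
```

In practice such per-node snapshots would be collected by an agent and aggregated cluster-wide for the capacity modeling the role describes.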

Responsibilities

  • Provide leadership and strategic guidance on the management of large-scale HPC systems including the deployment of compute, networking, and storage.
  • Develop and improve the ecosystem around GPU-accelerated computing, including developing scalable automation solutions.
  • Build and maintain AI and ML heterogeneous clusters on-premises and in the cloud.
  • Create and cultivate customer and cross-team relationships to reliably sustain the clusters and meet evolving user needs.
  • Support researchers to run their workloads, including performance analysis and optimizations.
  • Conduct root cause analysis and recommend corrective action; proactively detect and resolve issues before they impact users (see the monitoring sketch after this list).
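
A minimal sketch of the proactive monitoring mentioned in the last item, assuming a Slurm-managed cluster with sinfo on the PATH; the function name and report format are illustrative only.

```python
# Hypothetical proactive check: list Slurm nodes in down/drain/fail states,
# together with the recorded reason, by parsing `sinfo` output.
import subprocess

def unhealthy_nodes():
    """Return (nodelist, state, reason) for nodes in down/drain/fail states."""
    out = subprocess.run(
        ["sinfo", "--noheader", "--states=down,drain,fail",
         "--format=%N|%T|%E"],
        capture_output=True, text=True, check=True,
    ).stdout
    nodes = []
    for line in out.splitlines():
        if not line.strip():
            continue
        parts = (line.split("|", 2) + ["", ""])[:3]
        nodelist, state, reason = (p.strip() for p in parts)
        nodes.append((nodelist, state, reason))
    return nodes

if __name__ == "__main__":
    for nodelist, state, reason in unhealthy_nodes():
        print(f"{nodelist}: {state} ({reason or 'no reason recorded'})")
```

Run periodically (cron, a monitoring agent, or a CI job), a check like this surfaces drained or down nodes before researchers hit them.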

Requirements

  • Bachelor’s degree in Computer Science, Electrical Engineering or related field, or equivalent experience.
  • 5+ years of experience designing and operating large-scale compute infrastructure.
  • Experience with AI/HPC advanced job schedulers, such as Slurm, Kubernetes, PBS, RTDA or LSF.
  • Proficient in administering CentOS/RHEL and/or Ubuntu Linux distributions.
  • Solid understanding of cluster configuration management tools such as Ansible, Puppet, Salt.
  • In-depth understanding of container technologies like Docker, Singularity, Podman, Shifter, Charliecloud.
  • Proficiency in Python programming and bash scripting.
  • Applied experience with AI/HPC workflows that use MPI (a minimal example follows this list).
  • Experience analyzing and tuning performance for a variety of AI/HPC workloads.
  • Passion for continual learning and staying ahead of emerging technologies and effective approaches in HPC and AI/ML infrastructure.
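
For the MPI requirement above, here is a minimal hedged example using mpi4py (the binding choice is an assumption; any launcher such as mpirun or srun works): each rank contributes its rank id and an all-reduce sums the values across the communicator.

```python
# Minimal MPI workflow sketch with mpi4py.
# Launch under MPI, e.g.:  mpirun -np 4 python allreduce_demo.py
# (the script name is hypothetical).
from mpi4py import MPI

def main():
    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Sum every rank's id across the communicator;
    # with 4 ranks the result is 0 + 1 + 2 + 3 = 6.
    total = comm.allreduce(rank, op=MPI.SUM)

    if rank == 0:
        print(f"{size} ranks, allreduce(rank) = {total}")

if __name__ == "__main__":
    main()
```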

Nice to have / Ways to stand out

  • Background with NVIDIA GPUs, CUDA programming, NCCL and MLPerf benchmarking.
  • Experience with machine learning and deep learning concepts, algorithms and models.
  • Familiarity with InfiniBand, IPoIB, and RDMA.
  • Understanding of fast, distributed storage systems like Lustre and GPFS for AI/HPC workloads.
  • Familiarity with deep learning frameworks like PyTorch and TensorFlow (see the NCCL all-reduce sketch after this list).
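
To make the NCCL and PyTorch items concrete, here is a small hedged sketch of a multi-GPU all-reduce on PyTorch's NCCL backend; it assumes CUDA GPUs and a torchrun launch, and the script name in the comment is hypothetical.

```python
# Multi-GPU all-reduce over NCCL via torch.distributed.
# Launch with, e.g.:  torchrun --nproc_per_node=8 nccl_allreduce_demo.py
import os

import torch
import torch.distributed as dist

def main():
    # torchrun sets RANK, WORLD_SIZE and LOCAL_RANK in the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Every rank holds a tensor of ones; after the all-reduce each
    # element equals the world size.
    x = torch.ones(4, device="cuda")
    dist.all_reduce(x, op=dist.ReduceOp.SUM)

    if dist.get_rank() == 0:
        print(f"world_size={dist.get_world_size()}, x={x.tolist()}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```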

Compensation and Benefits

  • Base salary range: USD 136,000-264,500 per year.

Location and Schedule

  • Location: Santa Clara, California, United States.
  • Time type: Full time.
  • Applications accepted at least until October 22, 2025.

Equal Opportunity

NVIDIA is committed to fostering a diverse work environment and is an equal opportunity employer. NVIDIA does not discriminate based on any protected characteristic and values diversity in current and future employees.