Senior AI-HPC Cluster Engineer - MLOps

at Nvidia
USD 184,000-356,500 per year
SENIOR
On-site



Required Skills & Competences

CentOS (6), Docker (4), Grafana (3), Kubernetes (4), Linux (6), Prometheus (3), Python (6), Algorithms (4), Machine Learning (4), TensorFlow (4), Leadership (4), Bash (6), Communication (4), Networking (4), Rust (6), PyTorch (4), CUDA (4), GPU (4)

Details

NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. Today, NVIDIA is tapping into the unlimited potential of AI to define the next era of computing. In this role, you will design and implement GPU compute clusters for deep learning and high-performance computing, with a focus on building, operating, and optimizing large-scale GPU-accelerated clusters and tooling to support researchers and production AI/HPC workloads.

Responsibilities

  • Provide leadership and strategic mentorship on the management of large-scale HPC systems including the deployment of compute, networking, and storage.
  • Develop and improve the ecosystem around GPU-accelerated computing including developing scalable automation solutions.
  • Build and nurture customer and cross-team relationships to consistently support clusters and address changing user needs.
  • Support researchers to run their workloads including performance analysis and optimizations.
  • Conduct root cause analysis and recommend corrective actions; proactively identify and resolve issues before they impact users.
  • Build innovative tooling to accelerate researchers' velocity, troubleshooting, and software performance at scale.

Requirements

  • Bachelor’s degree in Computer Science, Electrical Engineering, or a related field, or equivalent experience.
  • Minimum of 6 years of experience crafting and operating large-scale compute infrastructure.
  • Experience with AI/HPC job schedulers and orchestrators, such as Slurm, Kubernetes (K8s) or LSF. Applied experience with AI/HPC workflows that use MPI and NCCL.
  • Proficient in using Linux including CentOS/RHEL and/or Ubuntu distributions.
  • Solid understanding of container technologies like Enroot, Docker and Podman.
  • Proficiency in at least one scripting language (Python, Bash) and at least one compiled language (Go, Rust, C, C++, etc.).
  • Experience analyzing and tuning performance for a variety of AI/HPC workloads. Strong problem-solving skills to analyze complex systems, identify bottlenecks, and implement scalable solutions.
  • Excellent communication and teamwork skills.
  • Passion for continual learning and staying ahead of new technologies and effective approaches in the HPC and AI/ML infrastructure fields.

Ways to stand out from the crowd (additional / nice-to-have)

  • Experience with NVIDIA GPUs, CUDA programming, NCCL and MLPerf benchmarking.
  • Experience with Machine Learning and Deep Learning concepts, algorithms and models.
  • Familiarity with high-speed networking for HPC including InfiniBand, RDMA, RoCE and Amazon EFA.
  • Understanding of fast, distributed storage systems like Lustre and GPFS for AI/HPC workloads.
  • Experience working with deep learning frameworks including PyTorch, MegatronLM and TensorFlow.
  • Familiarity with metrics collection and visualization at scale with Prometheus, OpenSearch and Grafana.

Benefits and additional information

  • Competitive base salary. Base salary range: 184,000 USD - 287,500 USD for Level 4, and 224,000 USD - 356,500 USD for Level 5.
  • Eligible for equity and benefits (link to NVIDIA benefits provided in original listing).
  • Applications accepted at least until September 12, 2025.
  • NVIDIA is an equal opportunity employer and committed to diversity and inclusion.