Used Tools & Technologies
Not specified
Required Skills & Competences
Ansible (3), CentOS (5), Docker (3), Kubernetes (3), Linux (5), Python (3), Algorithms (3), TensorFlow (3), Bash (3), PyTorch (3), Puppet (3), Salt (3), CUDA (3), GPU (3)
Details
NVIDIA is a pioneer in accelerated computing, known for inventing the GPU and driving breakthroughs in gaming, computer graphics, high-performance computing, and artificial intelligence. Our technology powers everything from generative AI to autonomous systems. Within this mission, the Managed AI Superclusters (MARS) team builds and scales the infrastructure, platforms, and tools that enable researchers and engineers to develop the next generation of AI/ML systems. This role helps design solutions that power advanced AI/ML computing workloads and supports internal research clusters both on-premises and in multi-cloud environments.
Responsibilities
- Support day-to-day operations of production on-premises and multi-cloud AI/HPC clusters, ensuring system health, user satisfaction, and efficient resource utilization.
- Directly administer internal research clusters, performing upgrades, incident response, and reliability improvements.
- Develop and improve the ecosystem around GPU-accelerated computing, including building scalable automation solutions.
- Maintain heterogeneous AI/ML clusters on-premises and in the cloud.
- Support researchers running workloads, including performance analysis and optimizations.
- Analyze and optimize cluster efficiency, job fragmentation, and GPU waste to meet internal SLA targets (a sketch of this kind of reporting follows this list).
- Support root cause analysis and suggest corrective actions; proactively find and fix issues before they affect users.
- Triage and support postmortems for reliability incidents affecting users or infrastructure.
- Participate in a shared on-call rotation supported by automation, clear incident response paths, and well-defined workflows.
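As a concrete illustration of the efficiency and automation work above, here is a minimal Python sketch that reports idle GPUs on a Slurm-managed cluster. The `sinfo` flags are standard Slurm; the GRES string format ("gpu:8" or "gpu:a100:8") is a site-specific assumption, not a guarantee.

```python
#!/usr/bin/env python3
"""Report idle-but-GPU-equipped nodes on a Slurm cluster.

A minimal sketch only: assumes `sinfo` is on PATH and that GPUs are
exposed as Slurm GRES strings like "gpu:8" or "gpu:a100:8"; both are
site-specific assumptions.
"""
import subprocess

def idle_gpu_report() -> dict[str, int]:
    # One line per node: node name, generic resources (GRES), and state.
    out = subprocess.run(
        ["sinfo", "-N", "-h", "-o", "%N %G %T"],
        capture_output=True, text=True, check=True,
    ).stdout
    idle_gpus: dict[str, int] = {}
    for line in out.splitlines():
        node, gres, state = line.split()
        # State may carry suffixes such as "idle*" (node not responding).
        if not state.startswith("idle"):
            continue
        gpu = next((g for g in gres.split(",") if g.startswith("gpu")), None)
        if gpu is None:
            continue  # non-GPU node, or GRES reported as "(null)"
        # Count is the last colon-separated field, minus any "(S:...)" tag.
        idle_gpus[node] = int(gpu.split(":")[-1].split("(")[0])
    return idle_gpus

if __name__ == "__main__":
    report = idle_gpu_report()
    print(f"{sum(report.values())} idle GPUs across {len(report)} nodes")
    for node, count in sorted(report.items()):
        print(f"  {node}: {count}")
```

In practice, output like this would feed a dashboard or SLA report rather than be read by hand, but the shape of the task is the same.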
Requirements
- Bachelor’s degree in Computer Science, Electrical Engineering or related field, or equivalent experience.
- Minimum 2 years of experience administering multi-node compute infrastructure.
- Background in managing AI/HPC job schedulers such as Slurm, Kubernetes, PBS, RTDA, BCM, or LSF.
- Proficient in administering CentOS/RHEL and/or Ubuntu Linux distributions.
- Proven understanding of cluster configuration management tools (Ansible, Puppet, Salt, etc.) and container technologies (Docker, Singularity, Podman, Shifter, Charliecloud).
- Experience with Python programming and Bash scripting (see the sketch after this list).
- Passion for continual learning and staying current with emerging technologies and approaches in HPC and AI/ML infrastructure.
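For a flavor of the Python-plus-CLI glue this role expects, here is a small, hedged sketch that flags hot GPUs on a node via `nvidia-smi`. The query fields are standard; the 85 C alert threshold is an invented site policy.

```python
#!/usr/bin/env python3
"""Flag hot GPUs on a single node by shelling out to nvidia-smi.

A sketch of the Python-plus-CLI glue described above. The nvidia-smi
query fields are standard; the 85 C threshold is an assumed site
policy, not an NVIDIA default.
"""
import subprocess

ALERT_TEMP_C = 85  # assumption: adjust to the site's thermal policy

def hot_gpus() -> list[str]:
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=index,temperature.gpu,memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    alerts = []
    for line in out.splitlines():
        # CSV output with nounits: "0, 47, 1024, 81920" (memory in MiB).
        idx, temp, mem_used, mem_total = (int(v) for v in line.split(", "))
        if temp >= ALERT_TEMP_C:
            alerts.append(f"GPU {idx}: {temp} C, {mem_used}/{mem_total} MiB used")
    return alerts

if __name__ == "__main__":
    for alert in hot_gpus():
        print(alert)
```

A check like this would typically be wrapped in an Ansible task or a scheduler health-check hook rather than run interactively.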
Ways to stand out
- Background with NVIDIA GPUs, CUDA programming, NCCL, and MLPerf benchmarking (a toy NCCL example follows this list).
- Experience with AI/ML concepts, algorithms, models, and frameworks (PyTorch, TensorFlow).
- Experience with InfiniBand, IPoIB, and RDMA.
- Understanding of fast, distributed storage systems such as Lustre and GPFS for AI/HPC workloads.
- Applied knowledge of AI/HPC workflows that involve MPI.
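To ground the NCCL/PyTorch items, here is a toy all-reduce sketch using only standard `torch.distributed` APIs; the launch command in the docstring is illustrative.

```python
#!/usr/bin/env python3
"""Toy NCCL all-reduce in PyTorch.

Illustrates the distributed primitives behind the NCCL/PyTorch items
above. Launch with `torchrun --nproc_per_node=<gpus> this_script.py`;
the file name and launch line are illustrative.
"""
import os

import torch
import torch.distributed as dist

def main() -> None:
    # torchrun exports RANK, LOCAL_RANK, and WORLD_SIZE, which the
    # default env:// initialization picks up automatically.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Each rank contributes a tensor filled with its rank id; after the
    # all-reduce every rank holds the sum 0 + 1 + ... + (world_size - 1).
    x = torch.full((4,), float(dist.get_rank()), device="cuda")
    dist.all_reduce(x, op=dist.ReduceOp.SUM)
    print(f"rank {dist.get_rank()}/{dist.get_world_size()}: {x.tolist()}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```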
Compensation and benefits
- Base salary range: 120,000 USD - 189,750 USD (determined based on location, experience, and pay of employees in similar positions).
- Eligible for equity and benefits (link to NVIDIA benefits provided in original posting).
Additional info
- Location: Santa Clara, CA, United States.
- Employment type: Full time.
- Applications accepted at least until January 6, 2026.
- NVIDIA is an equal opportunity employer and states a commitment to diversity and nondiscrimination.