Senior Site Reliability Engineer - GPU Clusters

at Nvidia

📍 Santa Clara, United States

$180,000–$339,200 per year

Senior
On-site


Required Skills & Competences

Ansible @ 6, Docker @ 6, Go @ 4, Kubernetes @ 6, Linux @ 7, Ruby @ 4, IaC @ 6, Terraform @ 6, Python @ 4, GCP @ 4, CI/CD @ 6, Machine Learning @ 4, AWS @ 4, Azure @ 4, Communication @ 7, Networking @ 4

Details

NVIDIA is leading the way in groundbreaking developments in Artificial Intelligence, High-Performance Computing, and Visualization. The GPU, our invention, serves as the visual cortex of modern computers and is at the heart of our products and services. Our work opens up new universes to explore, enables amazing creativity and discovery, and powers what was once science fiction, from artificial intelligence to autonomous cars. NVIDIA is looking for phenomenal people like you to help us accelerate the next wave of artificial intelligence.

We are seeking a highly skilled and experienced Senior Site Reliability Engineer to lead the design, deployment, and management of our large-scale GPU clusters. These clusters will power AI workloads across multiple teams and projects, making a significant impact on the future of machine learning and artificial intelligence at NVIDIA. Join our engineering team and collaborate with researchers, AI engineers, and infrastructure teams to ensure our GPU clusters perform efficiently, scale well, and remain reliable.

Responsibilities

  • Design, deploy and support large-scale, distributed GPU clusters to run high-performance AI and machine learning workloads.
  • Continuously improve infrastructure provisioning, management, and monitoring through automation.
  • Ensure the highest level of uptime and quality of service (QoS) through operational excellence, proactive monitoring, and incident resolution.
  • Support globally distributed cloud environments (AWS, GCP, Azure, or OCI) as well as on-prem infrastructure.
  • Define and implement service level objectives (SLOs) and service level indicators (SLIs) to measure and ensure infrastructure quality (see the Python sketch after this list).
  • Write high-quality Root Cause Analysis (RCA) reports for production-level incidents and work towards preventing future occurrences.
  • Participate in the team's on-call rotation to support critical infrastructure.
  • Drive the evaluation and integration of new GPU technologies (such as GB200) and new cloud technologies to improve system performance.
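
To make the SLO/SLI bullet above concrete, here is a minimal Python sketch of an availability SLI and an error-budget check. The metric counts, the 99.9% target, and the 30-day window are illustrative assumptions, not values from this posting.

```python
"""Minimal SLO / error-budget sketch (illustrative only).

Assumes two counters sampled from a monitoring system, e.g. successful
and total GPU-job scheduling requests over a rolling 30-day window.
The 99.9% availability target is a placeholder, not a value from the posting.
"""

SLO_TARGET = 0.999   # assumed availability objective
WINDOW_DAYS = 30     # assumed rolling evaluation window


def availability_sli(successful: int, total: int) -> float:
    """SLI: fraction of requests that succeeded in the window."""
    return successful / total if total else 1.0


def error_budget_remaining(successful: int, total: int) -> float:
    """Fraction of the error budget left (1.0 = untouched, <= 0 = exhausted)."""
    allowed_failures = (1.0 - SLO_TARGET) * total
    actual_failures = total - successful
    return 1.0 - (actual_failures / allowed_failures) if allowed_failures else 0.0


if __name__ == "__main__":
    ok, all_reqs = 999_400, 1_000_000   # hypothetical sample counts
    print(f"SLI: {availability_sli(ok, all_reqs):.4%}")
    print(f"Error budget remaining: {error_budget_remaining(ok, all_reqs):.1%}")
```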

Requirements

  • Minimum BS degree in Computer Science (or equivalent experience), with 7+ years of software engineering experience, including 3+ years managing GPU clusters or similar high-performance computing environments.
  • Expertise in designing, deploying, and running production-level cloud services.
  • Proficiency with orchestration and containerization tools such as Kubernetes and Docker (see the Python sketch after this list).
  • Experience coding/scripting in at least two high-level programming languages (e.g., Python, Go, Ruby).
  • Strong proficiency with Linux operating systems and TCP/IP fundamentals.
  • Proficient in modern CI/CD techniques, GitOps, and Infrastructure as Code (IaC) using tools such as Terraform or Ansible.
  • Diligent, with strong communication and documentation skills.
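
As a hedged illustration of the Kubernetes and scripting requirements above, the sketch below inventories allocatable GPUs per node using the official `kubernetes` Python client. It assumes a reachable cluster whose GPU nodes expose the standard `nvidia.com/gpu` extended resource (for example via the NVIDIA device plugin); that setup is an assumption for the example, not a requirement stated in the posting.

```python
"""Quick GPU-capacity inventory across cluster nodes (illustrative sketch)."""

from kubernetes import client, config


def gpu_inventory() -> dict[str, int]:
    """Return {node_name: allocatable GPU count} for nodes that advertise GPUs."""
    config.load_kube_config()  # or config.load_incluster_config() when run inside a pod
    v1 = client.CoreV1Api()

    inventory: dict[str, int] = {}
    for node in v1.list_node().items:
        gpus = (node.status.allocatable or {}).get("nvidia.com/gpu")
        if gpus:
            inventory[node.metadata.name] = int(gpus)
    return inventory


if __name__ == "__main__":
    for name, count in sorted(gpu_inventory().items()):
        print(f"{name}: {count} GPU(s)")
```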

Ways to stand out from the crowd

  • Experience managing large-scale Slurm and/or BCM deployments in production environments (see the Slurm sketch after this list).
  • Expertise in modern container networking and storage architectures.
  • Proven track record of defining and driving operational excellence in highly distributed, high-performance environments.
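
For the Slurm item above, a small sketch of the kind of operational tooling involved: it summarizes node states by shelling out to `sinfo` and assumes a host with the Slurm client tools installed and configured. The format fields used (%N for node name, %T for state) are standard `sinfo` options; everything else is illustrative.

```python
"""Summarize Slurm node states via `sinfo` (illustrative sketch)."""

import subprocess
from collections import Counter


def node_state_counts() -> Counter:
    """Return a Counter of node states, e.g. {'idle': 120, 'allocated': 300, 'drain': 4}."""
    out = subprocess.run(
        ["sinfo", "-N", "-h", "-o", "%N %T"],  # node-oriented, no header, "name state" per line
        capture_output=True, text=True, check=True,
    ).stdout
    states = (line.split()[1] for line in out.splitlines() if line.strip())
    return Counter(states)


if __name__ == "__main__":
    for state, count in node_state_counts().most_common():
        print(f"{state:>12}: {count}")
```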