Director of Capacity Engineering – DGX Cloud

at NVIDIA
USD 284,000-425,500 per year
SENIOR
✅ On-site

Used Tools & Technologies

Not specified

Required Skills & Competences

Kubernetes (3) · GCP (4) · Leadership (6) · Azure (4) · Networking (7) · API (4) · CUDA (4) · GPU (4)

Details

Join the team building the backbone of the world’s most sophisticated AI cloud.

NVIDIA DGX Cloud delivers multi-exascale, GPU-accelerated computing on demand. We are looking for a senior engineering leader to own capacity strategy, fleet reliability, and operational excellence as DGX Cloud scales globally. If you thrive on large-scale infrastructure challenges and want to help invent the future of AI computing, we’d love to hear from you!

Responsibilities

  • Lead end-to-end capacity strategy and forecasting for DGX Cloud across regions and cloud partners (Azure, OCI, GCP, etc.).
  • Define and implement golden-image standards for DGX nodes: firmware, CUDA/NVIDIA drivers, NCCL/InfiniBand, NVLink/NVSwitch fabrics.
  • Design, build, and operate automated maintenance and upgrade frameworks with near-zero customer impact, including guardrails, rollback plans, and capacity-buffer management.
  • Own service-level objectives (SLOs) for GPU availability, efficiency, and training/inference reliability; drive continuous improvement and root-cause analysis (a simple availability calculation is sketched after this list).
  • Guide development of orchestration tools and APIs that integrate with NVIDIA tooling and DGX Cloud provisioning systems.
  • Partner with DGX Cloud software, data-center engineering, supply chain, and finance to align capacity, cost, and rollout priorities.
  • Recruit, mentor, and lead an elite team of capacity engineers, SREs, and tooling developers.
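
For illustration only, here is a minimal sketch of how a GPU-availability SLO of the kind described above might be tracked. The 99.5% target, node names, and GPU-hour figures are assumptions made for the example, not numbers from this posting.

```python
from dataclasses import dataclass

# Hypothetical SLO target for illustration; the actual DGX Cloud targets are
# not stated in this posting.
SLO_TARGET = 0.995  # fraction of GPU-hours that must be healthy


@dataclass
class NodeWindow:
    """Observed GPU-hours for one DGX node over a reporting window."""
    node: str
    total_gpu_hours: float      # provisioned GPU-hours in the window
    unhealthy_gpu_hours: float  # hours lost to failures, drains, maintenance


def availability(windows: list[NodeWindow]) -> float:
    """Fleet-wide availability = healthy GPU-hours / total GPU-hours."""
    total = sum(w.total_gpu_hours for w in windows)
    unhealthy = sum(w.unhealthy_gpu_hours for w in windows)
    return (total - unhealthy) / total if total else 1.0


def error_budget_remaining(windows: list[NodeWindow]) -> float:
    """Share of the error budget (1 - SLO_TARGET) still unspent."""
    burned = 1.0 - availability(windows)
    budget = 1.0 - SLO_TARGET
    return max(0.0, 1.0 - burned / budget)


if __name__ == "__main__":
    # Example: two 8-GPU nodes over a 720-hour month (numbers are made up).
    fleet = [
        NodeWindow("dgx-001", total_gpu_hours=8 * 720, unhealthy_gpu_hours=12.0),
        NodeWindow("dgx-002", total_gpu_hours=8 * 720, unhealthy_gpu_hours=40.0),
    ]
    print(f"availability: {availability(fleet):.4%}")
    print(f"error budget remaining: {error_budget_remaining(fleet):.1%}")
```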

Requirements

  • 12+ years of experience in large-scale infrastructure or site-reliability engineering, including 5+ years in senior leadership.
  • Bachelor’s or Master’s degree in an engineering field, or equivalent experience.
  • Deep understanding of GPU-accelerated compute, including DGX systems, NVLink/NVSwitch fabrics, InfiniBand/Ethernet networking, and high-performance storage.
  • Demonstrated success in capacity planning and fleet consistency across multi-region or multi-cloud environments.
  • Expertise in driver/firmware management (CUDA stack, NCCL, OS/kernel dependencies) and distributed training workloads.
  • Proven track record of delivering against strict availability and performance SLOs at hyperscale.

Ways to stand out

  • Experience with hybrid cloud deployments and hyperscale partnerships.
  • Familiarity with Kubernetes GPU scheduling and AI/ML workload patterns (see the sketch after this list).
  • Track record of influencing hardware/system roadmaps (DGX, Grace Hopper, next-gen GPUs) based on capacity insights.
  • Strong interpersonal skills to align executives, engineers, and partners around ambitious capacity targets.
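
As context for the Kubernetes GPU-scheduling point above, here is a minimal sketch using the official kubernetes Python client to request NVIDIA GPUs through the nvidia.com/gpu extended resource (exposed to the scheduler by the NVIDIA device plugin). The namespace, pod name, image tag, and GPU count are placeholders, not details from this posting.

```python
from kubernetes import client, config


def build_gpu_pod(name: str = "nccl-allreduce-test", gpus: int = 8) -> client.V1Pod:
    """Pod spec requesting NVIDIA GPUs via the nvidia.com/gpu extended resource."""
    container = client.V1Container(
        name="worker",
        image="nvcr.io/nvidia/pytorch:24.05-py3",  # placeholder image tag
        command=["nvidia-smi"],
        resources=client.V1ResourceRequirements(
            # GPUs are requested as whole units; they cannot be oversubscribed.
            limits={"nvidia.com/gpu": str(gpus)},
        ),
    )
    return client.V1Pod(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1PodSpec(restart_policy="Never", containers=[container]),
    )


if __name__ == "__main__":
    config.load_kube_config()  # or load_incluster_config() when running in-cluster
    api = client.CoreV1Api()
    api.create_namespaced_pod(namespace="default", body=build_gpu_pod())
```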

Company and compensation

NVIDIA is leading developments in Artificial Intelligence, High-Performance Computing and Visualization. The work enables creativity and discovery across many domains. NVIDIA is looking for extraordinary people to help accelerate the next wave of artificial intelligence.

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 284,000 USD - 425,500 USD. You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until October 19, 2025.

Location & schedule

Santa Clara, California, United States. Full-time position.