Senior MLOps Engineer

at Nvidia
USD 184,000-356,500 per year
SENIOR
On-site


Used Tools & Technologies

Not specified

Required Skills & Competences

Security (4), Go (7), Grafana (4), Kubernetes (4), Prometheus (4), Python (7), ETL (6), Airflow (4), CI/CD (4), MLOps (4), TensorFlow (4), Communication (7), KubeFlow (4), MLFlow (4), Rust (7), Experimentation (4), PyTorch (4), Compliance (4), GPU (4)

Details

NVIDIA is seeking a Senior MLOps Engineer to help design and scale the infrastructure that powers AI research and product development. You will partner closely with research scientists and product teams to accelerate their success on the latest GPU and accelerator platforms. By building robust ML pipelines and scalable systems, you will ensure that hardware innovations translate directly into faster experiments, more efficient training, and reproducible deployments at scale.

Responsibilities

  • Identify infrastructure and software bottlenecks to improve ML job startup time, data load/write time, resiliency, and failure recovery.
  • Translate research workflows into automated, scalable, and reproducible systems that accelerate experimentation.
  • Build CI/CD workflows tailored for ML to support data preparation, model training, validation, deployment, and monitoring.
  • Develop observability frameworks to monitor performance, utilization, and health of large-scale training clusters.
  • Collaborate with hardware and platform teams to optimize models for emerging GPU architectures, interconnects, and storage technologies.
  • Develop guidelines for dataset versioning, experiment tracking, and model governance to ensure reliability and compliance.
  • Mentor and guide engineering and research partners on MLOps patterns, scaling NVIDIA's impact from research to production.
  • Collaborate with NVIDIA Research teams and the DGX Cloud Customer Success team to continuously enhance MLOps automation.

Requirements

  • BS in Computer Science, Information Systems, Computer Engineering, or equivalent experience.
  • 8+ years of experience in large-scale software or infrastructure systems, with 5+ years dedicated to ML platforms or MLOps.
  • Proven track record designing and operating ML infrastructure for production training workloads.
  • Expert knowledge of distributed training frameworks (PyTorch, TensorFlow, JAX) and orchestration systems (Kubernetes, Slurm, Kubeflow, Airflow, MLflow).
  • Strong programming experience in Python plus at least one systems language (Go, C++, Rust).
  • Deep understanding of GPU scheduling, container orchestration, and cloud-native environments.
  • Experience integrating observability stacks (Prometheus, Grafana, ELK) with ML workloads.
  • Familiarity with storage and data platforms that support large-scale training (object stores, feature stores, versioned datasets).
  • Strong communication skills and the ability to collaborate effectively with research teams, translating requirements into scalable engineering solutions.

Ways to stand out

  • Hands-on experience supporting research teams in scaling models on the newest GPU or accelerator hardware.
  • Contributions to open-source MLOps or ML infrastructure projects.
  • Proficiency in optimizing multi-node training jobs across large GPU clusters, plus experience with large-scale ETL and data-pipeline software/infrastructure for structured and unstructured data.
  • Knowledge of security, compliance, and governance requirements for ML in regulated environments.
  • Demonstrated capability connecting research and production by providing guidelines and reliable infrastructure to scientists.

Company & Compensation

NVIDIA is at the forefront of advances in Artificial Intelligence, High-Performance Computing, and Visualization. The company emphasizes enabling research teams to turn ideas into breakthroughs.

  • Base salary ranges: 184,000 USD - 287,500 USD for Level 4; 224,000 USD - 356,500 USD for Level 5.
  • You will also be eligible for equity and benefits.
  • Applications accepted at least until October 6, 2025.

Additional details

  • This is a full-time role based in Santa Clara, CA, United States. The position involves close collaboration with research and platform teams and focuses on scaling ML infrastructure, observability, CI/CD for ML, dataset/versioning practices, and GPU/cluster optimization.