Senior MLOps Engineer, GenAI Framework

at NVIDIA
USD 184,000-356,500 per year
SENIOR
✅ On-site


Used Tools & Technologies

Not specified

Required Skills & Competences

System Administration @ 4, Ansible @ 4, Docker @ 4, Jenkins @ 4, Kubernetes @ 4, Linux @ 4, DevOps @ 4, Python @ 7, R @ 4, GitHub @ 4, GitHub Actions @ 4, CI/CD @ 4, Algorithms @ 4, TensorFlow @ 4, Communication @ 4, Performance Optimization @ 4, Jira @ 4, Debugging @ 4, LLM @ 4, PyTorch @ 4, CUDA @ 4, GPU @ 4

Details

NVIDIA is looking for a dedicated and motivated senior build and continuous integration (CI/CD) engineer for its GenAI Frameworks (Megatron-LM and NeMo Framework) team. Megatron-LM and NeMo Framework are open-source, scalable, cloud-native frameworks built for researchers and developers working on Large Language Models (LLM), Multimodal (MM) models, and Video Generation. These frameworks cover end-to-end model training, including data curation, alignment, customization, evaluation, and deployment, along with tooling to optimize performance and user experience. The role involves enabling GenAI framework software engineers, deep learning algorithm engineers, and research scientists to work efficiently with a variety of deep learning algorithms and software stacks, with a focus on performance optimization and continuous delivery of high-quality software.

Responsibilities

  • Architect and manage continuous integration pipelines and release processes for the Generative AI framework and libraries related to Megatron-LM and NeMo Framework.
  • Design and implement efficient, scalable DevOps solutions for faster software releases with high quality and performance.
  • Work with industry-standard tools (Kubernetes, Docker, Slurm, Ansible, GitLab, GitHub Actions, Jenkins, Artifactory, Jira) in hybrid on-premises and cloud environments.
  • Assist with cluster operations and system administration, including management of servers, team accounts, and clusters.
  • Automate recurring tasks such as accuracy and performance regression detection to accelerate R&D cycles (an illustrative sketch of such a check follows this list).
  • Develop new quality control measures, including code analysis, backward-compatibility checks, and regression testing, while advancing best practices.
  • Collaborate with DL frameworks and libraries teams (CUDA, cuDNN, cuBLAS, PyTorch) and other NVIDIA engineering teams for software, testing, and release infrastructure.
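
For illustration only, a regression-detection task of the kind mentioned above might be automated with a small script run inside a CI job. The sketch below is a minimal, hypothetical example in Python: the metric names, file paths, and tolerances are assumptions made for illustration, not part of NVIDIA's actual Megatron-LM or NeMo CI tooling.

#!/usr/bin/env python3
"""Minimal sketch of an accuracy/performance regression check for a CI job.

All paths, metric names, and tolerances below are illustrative assumptions.
"""
import json
import sys
from pathlib import Path

# Hypothetical tolerances: fail if validation loss rises by more than 1%
# or training throughput drops by more than 5% relative to the baseline.
TOLERANCES = {
    "val_loss": 0.01,        # allowed relative increase (lower is better)
    "tokens_per_sec": 0.05,  # allowed relative decrease (higher is better)
}

def load_metrics(path: str) -> dict:
    """Read a JSON file of metric name -> value produced by a benchmark run."""
    return json.loads(Path(path).read_text())

def check_regressions(baseline: dict, current: dict) -> list[str]:
    """Return a description of every metric that moved outside its tolerance."""
    failures = []
    for metric, tol in TOLERANCES.items():
        base, cur = baseline[metric], current[metric]
        if metric == "val_loss":
            regressed = cur > base * (1 + tol)   # loss should not rise
        else:
            regressed = cur < base * (1 - tol)   # throughput should not drop
        if regressed:
            failures.append(f"{metric}: baseline={base:.4f} current={cur:.4f}")
    return failures

if __name__ == "__main__":
    baseline = load_metrics(sys.argv[1])  # e.g. baseline metrics from the last release
    current = load_metrics(sys.argv[2])   # e.g. metrics from this pipeline run
    failures = check_regressions(baseline, current)
    if failures:
        print("Regression detected:\n" + "\n".join(failures))
        sys.exit(1)                        # non-zero exit fails the CI job
    print("No regressions within tolerance.")

In a GitLab or GitHub Actions pipeline, the script's non-zero exit status is what marks the job as failed, so no additional integration logic is required.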

Requirements

  • BS or MS degree in Computer Science, Computer Architecture, or related technical field, or equivalent experience.
  • 6+ years of industry experience in DevOps and infrastructure engineering.
  • Strong system-level programming skills in Python and shell scripting.
  • Extensive understanding of build/release systems and CI/CD tools such as GitLab, GitHub, and Jenkins.
  • Experience with Linux system administration.
  • Proficiency with containerization and cluster management technologies such as Docker and Kubernetes.
  • Experience with build tools like Make and CMake.
  • Strong background in source code management (SCM) solutions such as GitLab, GitHub, Perforce.
  • Excellent problem-solving, debugging, interpersonal, and written communication skills.
  • Strong teamwork and collaboration capabilities.

Ways to Stand Out

  • Proven track record with GPU accelerated systems at scale.
  • Well-versed in deep learning frameworks such as PyTorch, JAX, or TensorFlow.
  • Expertise in cluster and cloud compute technologies, e.g., Slurm, Lustre, Kubernetes.
  • Experience with software and hardware benchmarking on high-performance computing systems.

Benefits

  • Base salary range is 184,000 USD to 356,500 USD, determined by location, experience, and peer pay.
  • Eligibility for equity and other benefits.
  • Commitment to diversity and equal opportunity employment.