Senior MLOps Engineer, GenAI Framework

at Nvidia

πŸ“ Santa Clara, United States

$180,000-339,200 per year

SENIOR
✅ On-site

Used Tools & Technologies

Not specified

Required Skills & Competences

System Administration (4), Ansible (4), Docker (4), Jenkins (4), Kubernetes (4), Linux (4), DevOps (4), Python (7), GitHub (4), GitHub Actions (4), CI/CD (4), Algorithms (4), TensorFlow (4), Communication (6), Performance Optimization (4), Jira (4), Debugging (4), LLM (4), PyTorch (4), CUDA (4)

Details

NVIDIA is looking for a dedicated and motivated senior build and continuous integration (CI/CD) engineer for its GenAI Frameworks (NeMo, Megatron Core) team. NVIDIA NeMo is an open-source, scalable, cloud-native framework for researchers and developers working on Large Language Models (LLM), Multimodal (MM) models, and Speech AI. NeMo provides end-to-end model training, including data curation, alignment, customization, evaluation, deployment, and tooling to optimize performance and user experience. Building on modern DevOps tools, your work will enable GenAI framework software engineers and deep learning algorithm engineers to work efficiently across a wide variety of deep learning algorithms and software stacks as they pursue performance optimizations and continuously deliver high-quality software.

Does the idea of pushing the boundaries of state-of-the-art research and development excite you? Are you interested in getting exposure to the entire DL SW stack? Then come join our technically diverse team of DL algorithm engineers and performance optimization specialists to unlock unprecedented deep learning performance in every domain.

Responsibilities

  • Architect and lead the build-release continuous integration processes of our Generative AI framework and libraries related to NeMo framework and Megatron Core.
  • Propose, implement, and deploy efficient, scalable DevOps solutions that allow our fast-growing team to release software more frequently while maintaining high quality and top performance.
  • Work with industry standard tools (Kubernetes, Docker, Slurm, Ansible, GitLab, GitHub Actions, Jenkins, Artifactory, Jira).
  • Assist with cluster operations and system administration (managing servers, team accounts, and clusters).
  • Automate recurring tasks, such as detecting DL algorithm accuracy and performance regressions, and design and develop new quality-control measures (e.g., code analysis), while employing and advancing best practices.
  • Work closely with the DL frameworks and libraries (CUDA, cuDNN, cuBLAS) team and with other NVIDIA teams that provide software build, testing, and release infrastructure.
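As a flavor of the CI/CD work described above, a minimal GitHub Actions pipeline that builds a framework container and runs its tests on every pull request might look like the sketch below. This is purely illustrative: the image tag, Dockerfile, and test command are assumptions, not NeMo's actual pipeline.

```yaml
# Illustrative CI workflow: build a container image and run the test suite
# on every pull request and push to main. All names here are hypothetical.
name: ci
on:
  pull_request:
  push:
    branches: [main]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build container image
        run: docker build -t nemo-ci:${{ github.sha }} .
      - name: Run unit tests inside the container
        run: docker run --rm nemo-ci:${{ github.sha }} pytest -x tests/
```

Tagging the image with the commit SHA keeps each CI run reproducible and traceable back to the exact source revision.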

Requirements

  • BS or MS degree in Computer Science, Computer Architecture, or a related technical field, or equivalent experience.
  • 5+ years of industry experience in infrastructure engineering, DevOps.
  • Strong system-level programming skills in languages such as Python and shell scripting.
  • Strong understanding of build/release systems and CI/CD, with experience in solutions like GitLab, GitHub, and Jenkins.
  • Experience with Linux system administration.
  • Proficient with containerization and cluster management technologies like Docker and Kubernetes.
  • Experience with build tools, including Make and CMake.
  • Experience using or deploying source code management (SCM) solutions such as GitLab, GitHub, Perforce, etc.
  • Excellent problem-solving and debugging skills.
  • A great teammate who can collaborate and influence in a dynamic environment, with excellent interpersonal and written communication skills.

Ways to stand out from the crowd

  • Previous experience with GPU accelerated systems.
  • Hands-on experience with DL frameworks (PyTorch, JAX, TensorFlow).
  • Experience with cluster/cloud technologies, e.g., Slurm, Lustre, Kubernetes.
  • Experience with HPC hardware systems such as compute clusters and HPC software performance benchmarking on such systems.