Senior Performance and Development Engineer

at NVIDIA
USD 224,000-425,500 per year
SENIOR
✅ On-site

Used Tools & Technologies

Not specified

Required Skills & Competences

  • Distributed Systems (level 7)
  • Communication (level 4)
  • Parallel Programming (level 7)
  • System Architecture (level 7)
  • PyTorch (level 4)
  • CUDA (level 4)
  • GPU (level 4)

Details

Joining NVIDIA's AI Efficiency Team means contributing to the infrastructure that powers our leading-edge AI research. The team focuses on optimizing the efficiency and resiliency of ML workloads and on developing scalable AI infrastructure tools and services. Our objective is to deliver a stable, scalable environment that gives NVIDIA's AI researchers the resources and scale they need to innovate.

We're redefining the way Deep Learning applications run on tens of thousands of GPUs. Join a team of experts and help build a supercharged AI platform that improves efficiency, resilience, and Model FLOPs Utilization (MFU). In this position, you will collaborate with a diverse team that cuts across many areas of the Deep Learning HW/SW stack to build a highly scalable, fault-tolerant, and optimized AI platform.
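
For context, MFU is simply the model FLOPs throughput a job actually achieves divided by the aggregate peak FLOPs of its hardware. Below is a minimal back-of-the-envelope sketch, assuming the common ~6 × parameters FLOPs-per-token estimate for dense transformer training; every concrete number in it is illustrative, not an NVIDIA figure.

```python
# Back-of-the-envelope Model FLOPs Utilization (MFU) for dense
# transformer training. All concrete numbers are illustrative.

def mfu(params: float, tokens_per_sec: float,
        num_gpus: int, peak_flops_per_gpu: float) -> float:
    """MFU = achieved model FLOPs/s divided by aggregate peak FLOPs/s.

    Uses the common ~6 * params FLOPs-per-token estimate for the
    forward + backward pass of a dense transformer.
    """
    achieved_flops_per_sec = 6.0 * params * tokens_per_sec
    peak_flops_per_sec = num_gpus * peak_flops_per_gpu
    return achieved_flops_per_sec / peak_flops_per_sec

# Example: a 70B-parameter model training at 1M tokens/s across
# 1,024 GPUs, each with ~989 TFLOP/s dense BF16 peak (illustrative).
print(f"MFU = {mfu(70e9, 1.0e6, 1024, 989e12):.1%}")  # ~41.5%
```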

Responsibilities

  • Build AI models, tools, and frameworks that provide real-time application performance metrics that can be correlated with system metrics.
  • Develop automation frameworks that enable applications to predict and recover from system/infrastructure failures, ensuring fault tolerance (a minimal checkpoint/restart sketch follows this list).
  • Collaborate with software teams to pinpoint performance bottlenecks. Design, prototype, and integrate solutions that deliver demonstrable performance gains in production environments.
  • Adapt and enhance communication libraries to seamlessly support innovative network topologies and system architectures.
  • Design or adapt optimized storage solutions to boost Deep Learning efficiency, resilience, and developer productivity.
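
To make the fault-tolerance item concrete: a failed job should resume from its last consistent state instead of restarting from scratch. Here is a minimal single-process checkpoint/restart sketch in PyTorch, assuming a hypothetical checkpoint.pt path; production systems at this scale typically shard checkpoints across ranks and write them asynchronously.

```python
# Minimal checkpoint/restart sketch for fault-tolerant training.
# Path and layout are assumptions for illustration only.
import os
import torch

CKPT = "checkpoint.pt"  # hypothetical path

def save_checkpoint(model, optimizer, step: int) -> None:
    # Write atomically: save to a temp file, then rename, so a crash
    # mid-write never leaves a corrupt "latest" checkpoint behind.
    tmp = CKPT + ".tmp"
    torch.save({"model": model.state_dict(),
                "optim": optimizer.state_dict(),
                "step": step}, tmp)
    os.replace(tmp, CKPT)

def load_checkpoint(model, optimizer) -> int:
    # Returns the step to resume from (0 on a fresh start).
    if not os.path.exists(CKPT):
        return 0
    state = torch.load(CKPT, map_location="cpu")
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optim"])
    return state["step"]
```

A scheduler that relaunches the job after a node failure would call load_checkpoint before re-entering the training loop.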

Requirements

  • BS/MS/PhD (or equivalent experience) in Computer Science, Electrical Engineering or a related field.
  • 12+ years of proven experience in areas including:
    • Analyzing and improving performance of training applications using PyTorch or similar frameworks.
    • Building distributed software applications using collective communication libraries such as MPI, NCCL, or UCC (see the all-reduce sketch after this list).
    • Constructing storage solutions for Deep Learning applications.
    • Building automated, fault tolerant distributed applications.
    • Building tools for bottleneck analysis and automation of fault tolerance in distributed environments.
  • Strong background in parallel programming and distributed systems.
  • Experience analyzing and optimizing large scale distributed applications.
  • Excellent verbal and written communication skills.
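
To make the collective-communication item concrete, here is a minimal all-reduce sketch using torch.distributed on the NCCL backend; the script name and per-node GPU count in the launch command are illustrative.

```python
# Minimal NCCL all-reduce with torch.distributed.
# Launch (illustrative): torchrun --nproc_per_node=8 allreduce_demo.py
import os
import torch
import torch.distributed as dist

def main() -> None:
    # torchrun sets RANK, WORLD_SIZE, LOCAL_RANK and MASTER_ADDR for us.
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

    # Each rank contributes its global rank id; after all_reduce every
    # rank holds the sum 0 + 1 + ... + (world_size - 1).
    t = torch.tensor([float(dist.get_rank())], device="cuda")
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    print(f"rank {dist.get_rank()}: {t.item()}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```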

Ways To Stand Out

  • Deep understanding of HPC and distributed system architecture.
  • Hands-on experience with large state-of-the-art AI models and with performance analysis and profiling of Deep Learning workloads (see the profiler sketch after this list).
  • Comfortable navigating and working with the PyTorch codebase.
  • Proven understanding of CUDA and GPU architecture.
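
As a concrete starting point for the profiling item above, here is a minimal torch.profiler sketch that captures CPU and CUDA activity over a few steps and prints the hottest operations; the model and input shapes are placeholders.

```python
# Bottleneck analysis sketch with torch.profiler (placeholder model).
import torch
from torch.profiler import profile, ProfilerActivity

model = torch.nn.Linear(4096, 4096).cuda()
x = torch.randn(64, 4096, device="cuda")

with profile(activities=[ProfilerActivity.CPU,
                         ProfilerActivity.CUDA]) as prof:
    for _ in range(10):
        model(x).sum().backward()
    torch.cuda.synchronize()

# Sort by total GPU time to surface the most expensive ops/kernels.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
```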

Compensation & Benefits

  • Base salary range (determined by location, experience, and comparable pay):
    • Level 5: USD 224,000 - 356,500
    • Level 6: USD 272,000 - 425,500
  • You will also be eligible for equity and benefits (see NVIDIA benefits page).

Additional Information

  • Applications for this job will be accepted at least until November 4, 2025.
  • NVIDIA is an equal opportunity employer and committed to fostering a diverse work environment. The company does not discriminate on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.