Senior Deep Learning Performance Architect, Compute Energy Efficiency

at Nvidia
USD 184,000-287,500 per year
SENIOR
✅ On-site

Used Tools & Technologies

Not specified

Required Skills & Competences

  • Python @ 7
  • Algorithms @ 4
  • Parallel Programming @ 4
  • PyTorch @ 4
  • CUDA @ 4
  • GPU @ 4

Details

NVIDIA is seeking outstanding Performance Architects with a background in performance and energy efficiency of parallel architectures, performance modeling, architecture simulation, and workload profiling to help analyze and develop the next generation of architectures at scale to accelerate AI and high-performance computing applications.

Responsibilities

  • Develop innovative architectures to extend the state of the art in deep learning performance and efficiency.
  • Prototype key deep learning algorithms and applications.
  • Analyze performance, cost and energy trade-offs by developing analytical models, simulators and test suites.
  • Characterize performance and energy efficiency on scale-out systems and work with architecture and systems teams to identify and evaluate features to increase at-scale efficiency of systems running AI workloads.
  • Understand and analyze the interplay of hardware and software architectures on future algorithms, programming models and applications.
  • Actively collaborate with software, systems and research teams to guide the direction of deep learning hardware and software.

Requirements

  • Master's degree (or equivalent experience) and 5+ years of relevant experience, or PhD and 2+ years of experience, in Computer Science, Electrical Engineering, Computer Engineering, or a related field.
  • Solid foundation in deep learning model architectures and their performance trade-offs.
  • Experience with energy-efficient high-performance analysis, architecture/system co-design and/or simulation, profiling, and visualizations.
  • Strong programming skills in Python, C, C++.
  • Experience with parallel computing architectures, or workload analysis on deep learning accelerators.

Ways to stand out

  • Background in GPU computing and parallel programming models such as CUDA.
  • Experience with deep neural network training, inference, and optimization in leading frameworks (e.g., PyTorch, JAX).

Benefits / Compensation

  • Base salary range: 184,000 USD - 287,500 USD (base salary is determined by location, experience, and the pay of employees in similar positions).
  • You will also be eligible for equity and benefits (see NVIDIA benefits page).

Other details

  • Applications for this job will be accepted at least until August 19, 2025.
  • NVIDIA is an equal opportunity employer and committed to fostering a diverse work environment.