Senior DL Algorithms Engineer - Inference Optimizations

at Nvidia

📍 Santa Clara, United States

$148,000-276,000 per year

SENIOR
✅ On-site


Used Tools & Technologies

Not specified

Required Skills & Competences

  • Software Development: 4
  • Python: 4
  • Algorithms: 7
  • LLM: 4
  • CUDA: 1

Details

NVIDIA is seeking senior engineers with a strong focus on performance analysis and optimization to help us squeeze every last clock cycle out of Deep Learning inference, one of the world's most important workloads today. If you are unafraid of working across every layer of the hardware/software stack, from GPU architecture to Deep Learning frameworks, to achieve peak performance, we want to hear from you! This role offers an opportunity to directly shape the hardware and software roadmap at a fast-growing technology company leading the AI revolution, while helping deep learning users around the globe enjoy ever-faster inference.

Responsibilities

  • Understand, analyze, profile, and optimize deep learning inference workloads on state-of-the-art hardware and software platforms.
  • Collaborate with researchers and engineers across NVIDIA, providing guidance on improving the performance of workloads.
  • Implement production-quality software across NVIDIA's deep learning platform stack, such as TensorRT, TensorRT-LLM and vLLM.
  • Build tools to automate workload analysis, workload optimization, and other critical workflows.

Requirements

  • 5+ years of experience.
  • MSc or PhD in CS, EE, or CSEE, or equivalent experience.
  • Strong background in deep learning and neural networks, in particular training and inference optimizations.
  • Deep understanding of computer architecture, and familiarity with the fundamentals of GPU architecture.
  • Proven experience analyzing, modeling and tuning application performance.
  • Programming skills in C++ and Python.

Ways to stand out from the crowd

  • Strong fundamentals in algorithms.
  • Experience with production deployment of Deep Learning models, especially LLMs and multimodal models.
  • Proven experience with processor and system-level performance modeling.
  • GPU programming experience (CUDA or OpenCL) is a strong plus but not required.

As NVIDIA makes inroads into the data center business, our team plays a central role in getting the most out of our exponentially growing data center deployments and in establishing a data-driven approach to hardware design and system software development. We collaborate with a broad cross-section of teams at NVIDIA, ranging from DL research teams to CUDA kernel and DL framework development teams to silicon architecture teams. As our team grows and we pursue long-term opportunities, the breadth of skills we need is growing as well. NVIDIA is widely considered one of the technology world’s most desirable employers, with some of the most forward-thinking and hardworking people on the planet. If you're creative and autonomous, we want to hear from you!

The base salary range is 148,000 USD - 276,000 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.