Senior AI System Engineer
Required Skills & Competences
- Python @ 3
- Algorithms @ 4
- Technical Proficiency @ 6
- Communication @ 4
- Data Analysis @ 3
- LLM @ 4
- PyTorch @ 4
- CUDA @ 4
- GPU @ 4
At NVIDIA, we are at the forefront of advancing the capabilities of artificial intelligence. We are seeking an ambitious and forward-thinking AI/ML System Performance Engineer to contribute to the development of next‑generation inference optimizations and deliver industry‑leading performance. In this role you will investigate and prototype scalable inference strategies—driving down per‑token latency and maximizing system throughput by applying cross‑stack optimizations that span algorithmic innovations (e.g., attention variants, speculative decoding, inference‑time scaling), system‑level techniques (e.g., model sharding, pipelining, communication overlap), and hardware‑level enhancements.
We collaborate extensively with teams across deep learning research, framework development, compiler and systems engineering, and silicon architecture. Thriving in this high‑impact, interdisciplinary environment demands not only technical proficiency but also a growth mindset and a pragmatic attitude—qualities that fuel our collective success in shaping the future of datacenter technology. Sample projects include Helix Parallelism and Disaggregated Inference.
Responsibilities
- Optimize inference deployment by pushing the Pareto frontier of accuracy, throughput, and interactivity at datacenter scale.
- Develop high‑fidelity performance models to prototype emerging algorithmic techniques and hardware optimizations to drive model‑hardware co‑design for Generative AI.
- Prioritize features to guide future software and hardware roadmaps based on detailed performance modeling and analysis.
- Model end‑to‑end performance impact of emerging GenAI workflows (such as agentic pipelines and inference‑time compute scaling) to understand future datacenter needs.
- Keep up with the latest deep learning research and collaborate with DL researchers, hardware architects, and software engineers across disciplines.
Requirements
- Master’s degree (or equivalent experience) in Computer Science, Electrical Engineering, or a related field.
- 3+ years of hands‑on experience in system evaluation of AI/ML workloads, or in performance analysis, modeling, and optimization for AI.
- Strong background in computer architecture, roofline modeling, queuing theory, and statistical performance analysis techniques.
- Solid understanding of ML fundamentals, model parallelism, and inference serving techniques.
- Proficiency in Python (used for simulator design and data analysis); familiarity with C++ is a plus.
- Experience with GPU computing (CUDA).
- Experience with deep learning frameworks and inference toolchains such as PyTorch, TRT‑LLM, vLLM, and SGLang.
- Growth mindset and a pragmatic “measure, iterate, deliver” approach.
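The roofline modeling mentioned above can be illustrated with a minimal sketch. All hardware numbers below are hypothetical placeholders for illustration, not the specs of any particular GPU:

```python
# Minimal roofline-model sketch: attainable throughput is capped either by
# peak compute or by memory bandwidth times arithmetic intensity.

def roofline_attainable_flops(peak_flops: float,
                              peak_bandwidth: float,
                              arithmetic_intensity: float) -> float:
    """Attainable FLOP/s = min(peak compute, bandwidth * intensity)."""
    return min(peak_flops, peak_bandwidth * arithmetic_intensity)

# Hypothetical accelerator: 1e15 FLOP/s peak, 3e12 B/s memory bandwidth.
PEAK_FLOPS = 1e15
PEAK_BW = 3e12

# The "ridge point" (FLOPs per byte) separates memory-bound kernels
# (below it) from compute-bound kernels (above it).
ridge = PEAK_FLOPS / PEAK_BW

# Batch-1 decode is dominated by matrix-vector products with low
# arithmetic intensity (a few FLOPs per byte): memory-bound.
decode = roofline_attainable_flops(PEAK_FLOPS, PEAK_BW, 2.0)

# Prefill GEMMs can reach hundreds of FLOPs per byte: compute-bound.
prefill = roofline_attainable_flops(PEAK_FLOPS, PEAK_BW, 500.0)
```

Under these assumed numbers, decode attains only a small fraction of peak compute while prefill saturates it, which is why decode-phase latency work centers on memory traffic (weights, KV cache) rather than raw FLOPs.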
Ways to Stand Out
- Comfortable defining metrics, designing experiments, and visualizing large performance datasets to identify resource bottlenecks.
- Proven track record of working in cross‑functional teams spanning algorithms, software, and hardware architecture.
- Ability to distill complex analyses into clear recommendations for both technical and non‑technical stakeholders.
Compensation and Benefits
- Base salary ranges by level:
  - Level 3: 148,000 USD - 235,750 USD
  - Level 4: 184,000 USD - 287,500 USD
- Eligible for equity and company benefits.
Location & Employment Type
- Location: Santa Clara, California, United States.
- Employment type: Full time.
- Applications accepted at least until September 1, 2025.
NVIDIA is committed to fostering a diverse work environment and is an equal opportunity employer. We do not discriminate on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.