Senior Deep Learning Algorithm Engineer - Agentic LLM Inference
at NVIDIA
📍 Santa Clara, United States
$180,000-$339,200 per year
Used Tools & Technologies
Not specified
Required Skills & Competences
Python @ 6, Algorithms @ 4, Communication @ 4, LLM @ 4, CUDA @ 4
Details
At NVIDIA, we are at the forefront of the constantly evolving field of large language models and their application in agentic use cases. As the scale and complexity of these agentic systems continue to increase, we are seeking outstanding engineers to join our team and help shape the future of agentic inference.
Our team is dedicated to pushing the boundaries of what's possible with agentic LLMs by improving the algorithmic performance and efficiency of the systems that serve them. We continually look for ways to improve these systems: developing new inference algorithms and protocols, refining existing models, and seamlessly integrating those advances so that NVIDIA's solutions can efficiently handle large-scale, sophisticated tasks.
Responsibilities
- Research and Development: Explore and incorporate contemporary research on generative AI, agents, and inference systems into the NVIDIA LLM software stack.
- Workload Analysis and Optimization: Conduct in-depth analysis, profiling, and optimization of agentic LLM workloads to significantly reduce request latency and increase request throughput while maintaining workflow fidelity.
- System Design and Implementation: Design and implement scalable systems to accelerate agentic workflows and efficiently handle sophisticated datacenter-scale use cases.
- Collaboration and Communication: Inform future iterations of NVIDIA software, hardware, and systems by engaging with a diverse set of teams at NVIDIA and with external partners, and by formalizing the strategic requirements presented by their workloads.
Requirements
- BS, MS, PhD in Computer Science, Electrical Engineering, Computer Engineering, or a related field (or equivalent experience).
- 3+ years of experience in deep learning and deep learning systems design.
- Proficiency in Python and C++ programming.
- Strong understanding of computer architecture and GPU/parallel datacenter computing fundamentals.
- Proven interest in analyzing, modeling, and tuning application performance.
Ways to stand out from the crowd:
- 2+ years of experience in building large-scale LLM inference systems, especially those involving compound AI.
- Experience with agentic LLM frameworks such as LangChain and LlamaIndex.
- Experience with processor and system-level performance modeling.
- GPU programming experience with CUDA or OpenCL.
NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation that’s fueled by great technology—and amazing people. Today, we’re tapping into the unlimited potential of AI to define the next era of computing. An era in which our GPU acts as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent.