Used Tools & Technologies
Not specified
Required Skills & Competences
- Machine Learning @ 6
- Product Management @ 5
- System Architecture @ 3
- LLM @ 6
- GPU @ 3
Details
NVIDIA has been redefining computer graphics, PC gaming, and accelerated computing for more than 25 years. Today NVIDIA is tapping into the unlimited potential of AI to define the next era of computing. This role is part of the Accelerated Computing team and focuses on GPU product definition for data center workloads, especially Large Language Models (LLMs).
Responsibilities
- Guide the architecture of next-generation GPUs through an intuitive and comprehensive grasp of how GPU architecture affects performance for data center applications, especially Large Language Models (LLMs).
- Drive the discovery of opportunities for innovation in GPU, system, and data center architecture by analyzing the latest data center workload trends, Deep Learning research, analyst reports, the competitive landscape, and token economics.
- Find opportunities where NVIDIA can uniquely address customer needs and translate these into compelling GPU value propositions and product proposals.
- Distill sophisticated analyses into clear recommendations for both technical and non-technical audiences.
Requirements
- 5+ years of total experience in technology; previous product management, AI-related engineering, design, or development experience is highly valued.
- BS, MS, or equivalent experience in engineering, computer science, or another technical field; an MBA is a plus.
- Deep understanding of the fundamentals of GPU architecture, Machine Learning, Deep Learning, and LLM architecture, with the ability to articulate how application performance relates to GPU and data center architecture.
- Ability to develop intuitive models of the economics of data center workloads, including data center total cost of operation and token revenues (a sketch of such a model follows this list).
- Demonstrated ability to fully contribute to the above areas within 3 months.
- Strong desire to learn; motivated to tackle complex problems and able to make sophisticated trade-offs.
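To illustrate the kind of economic model this role calls for, below is a minimal sketch of a token-economics calculation in Python. Every number in it (GPU price, power draw, throughput, utilization, token price) is an illustrative assumption, not NVIDIA data; the point is only the structure: amortized capex plus opex per hour, divided by tokens served, compared against token revenue.

```python
# Minimal sketch of a data center token-economics model.
# All figures below are illustrative assumptions, not NVIDIA data.

def cost_per_million_tokens(
    gpu_capex_usd: float = 30_000.0,      # assumed purchase price per GPU
    amortization_years: float = 4.0,      # assumed depreciation window
    power_kw: float = 1.0,                # assumed GPU plus overhead power draw
    electricity_usd_per_kwh: float = 0.08,
    opex_multiplier: float = 1.5,         # hosting, networking, staff on top of power
    tokens_per_second: float = 5_000.0,   # assumed sustained inference throughput
    utilization: float = 0.6,             # fraction of time serving real traffic
) -> float:
    """Return the total cost of operation per one million tokens served."""
    hours_per_year = 24 * 365
    capex_per_hour = gpu_capex_usd / (amortization_years * hours_per_year)
    opex_per_hour = power_kw * electricity_usd_per_kwh * opex_multiplier
    tco_per_hour = capex_per_hour + opex_per_hour
    tokens_per_hour = tokens_per_second * 3600 * utilization
    return tco_per_hour / tokens_per_hour * 1_000_000

if __name__ == "__main__":
    cost = cost_per_million_tokens()
    price = 2.00  # assumed revenue per million tokens served
    print(f"TCO per 1M tokens: ${cost:.2f}, margin: {100 * (1 - cost / price):.1f}%")
```

In practice a model like this would be extended with networking, cooling, and reliability terms and calibrated against measured throughput, but the same capex-plus-opex-over-tokens structure carries through.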
Ways to stand out (Preferred)
- 2+ years of direct experience developing or deploying large-scale GPU-based AI applications, such as LLMs, for training and inference.
- Ability to quickly develop intuitive, first-principles models of Generative AI workload performance from GPU and system architecture parameters (FLOPS, bandwidths, etc.); see the sketch after this list.
- Comfort and drive to constantly stay updated with the latest in deep learning research (academic papers) and industry news.
- Track record of managing multiple parallel efforts, collaborating with diverse teams, including performance engineers, hardware architects, and product managers.
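As an example of the first-principles performance modeling mentioned above, the sketch below bounds single-GPU LLM decode throughput with a simple roofline argument: each decode step must either stream the full weight set from memory or execute roughly two FLOPs per parameter per token, whichever is slower. All hardware and model figures are illustrative assumptions, and the model deliberately ignores KV-cache traffic, attention cost, and multi-GPU parallelism.

```python
# Minimal first-principles sketch of LLM decode throughput on a single GPU,
# using a roofline-style bound. All hardware and model numbers are
# illustrative assumptions, not vendor specifications.

def decode_tokens_per_second(
    model_params_billions: float = 70.0,   # assumed dense model size
    bytes_per_param: float = 2.0,          # fp16/bf16 weights
    batch_size: int = 16,
    peak_flops: float = 1.0e15,            # assumed peak dense throughput (FLOP/s)
    mem_bandwidth: float = 3.0e12,         # assumed HBM bandwidth (bytes/s)
) -> float:
    # Per decoded token, a dense transformer does roughly 2 FLOPs per parameter
    # and must stream the full weight set from HBM once per batched decode step.
    weight_bytes = model_params_billions * 1e9 * bytes_per_param
    flops_per_token = 2.0 * model_params_billions * 1e9

    time_memory = weight_bytes / mem_bandwidth              # one weight pass per step
    time_compute = batch_size * flops_per_token / peak_flops
    step_time = max(time_memory, time_compute)              # roofline: slower side wins
    return batch_size / step_time

if __name__ == "__main__":
    tps = decode_tokens_per_second()
    print(f"Estimated decode throughput: {tps:,.0f} tokens/s")
```

With these assumed numbers the step is memory-bandwidth bound, which is why raising batch size (up to the compute roofline) or increasing HBM bandwidth lifts decode throughput in such a model.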
Compensation & Benefits
- Base salary ranges (determined by location, experience, and pay of employees in similar positions):
- Level 3: 144,000 USD - 218,500 USD
- Level 4: 168,000 USD - 258,750 USD
- You will also be eligible for equity and benefits. See www.nvidiabenefits.com for details.
Additional Information
- Applications for this job will be accepted at least until September 20, 2025.
- NVIDIA is an equal opportunity employer and is committed to fostering a diverse work environment.