Principal Machine Learning Engineer - Enterprise AI

at Nvidia
USD 272,000-425,500 per year
SENIOR
✅ Hybrid


Used Tools & Technologies

Not specified

Required Skills & Competences

Python (3), TensorFlow (3), NLP (4), LLM (3), PyTorch (3), CUDA (3), GPU (4)

Details

NVIDIA's invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI—the next era of computing—with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. Today, NVIDIA is increasingly known as “the AI computing company.”

Responsibilities

  • Develop Intelligent AI Solutions – Leverage NVIDIA AI technologies and GPUs to build cutting-edge NLP and Generative AI solutions such as Retrieval-Augmented Generation (RAG) pipelines and agentic workflows that solve real-world enterprise and supply-chain problems (a minimal retrieval sketch follows this list).
  • Lead AI Product Development – Guide engineers and researchers in developing large-language-model-powered applications, chatbots, and optimization engines that directly improve chip-design supply-chain efficiency and resilience.
  • Design ML & Optimization Architectures – Create and implement machine-learning and combinatorial-optimization architectures (e.g., using NVIDIA cuOpt) tailored to semiconductor supply-chain use cases such as multi-echelon inventory, yield-constrained scheduling, and supplier risk mitigation.
  • Collaborate Across NVIDIA – Partner with supply-chain operations teams to identify high-impact opportunities, translate requirements into ML solutions, and drive measurable business outcomes.
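For orientation, here is a minimal, illustrative sketch of the retrieval step behind a RAG pipeline like the one described in the first responsibility. It is not NVIDIA's stack: the hashed bag-of-words embed() is a toy stand-in for a real embedding model, the document strings are invented, and the final LLM call is omitted.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy stand-in for a real embedding model: a hashed bag-of-words vector.
    A production RAG pipeline would call an embedding service instead."""
    vec = np.zeros(256)
    for token in text.lower().split():
        vec[hash(token) % 256] += 1.0
    return vec

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Rank documents by cosine similarity to the query embedding."""
    q = embed(query)
    def score(doc: str) -> float:
        d = embed(doc)
        return float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d) + 1e-9))
    return sorted(corpus, key=score, reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Assemble the grounded prompt an LLM would receive in a RAG pipeline."""
    context = "\n\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    docs = ["Fab capacity is allocated quarterly.",
            "Supplier lead times average 14 weeks.",
            "Yield improves after the third production lot."]
    print(build_prompt("What are typical supplier lead times?", docs))
```

In a real deployment the retrieved context would be passed to an LLM endpoint, and agentic workflows would chain several such retrieval and generation steps.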

Requirements

  • Master’s or Ph.D. in Computer Science, Operations Research, Industrial Engineering, or a related technical field, or equivalent experience.
  • 12+ years of experience designing, building, and deploying ML models and systems in production.
  • Demonstrated, hands-on experience applying AI techniques to supply-chain challenges (e.g., demand forecasting, wafer-level yield optimization, capacity planning, material logistics, or supplier risk analytics).
  • Strong knowledge of transformers, attention mechanisms, and modern NLP/GenAI techniques (see the attention sketch after this list).
  • Expert-level Python plus deep-learning frameworks such as PyTorch or TensorFlow; familiarity with CUDA-accelerated libraries (cuOpt, TensorRT-LLM) is a plus.
  • Proven ability to think independently, drive research and development efforts, and mentor multidisciplinary engineering teams.
  • Highly motivated, curious, and eager to push the boundaries of what AI can do for complex supply-chain systems.
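As a refresher on the attention mechanism named above, below is a minimal scaled dot-product attention function in PyTorch, one of the frameworks the posting lists. It is the textbook formula softmax(QK^T / sqrt(d_k))V with arbitrary tensor shapes in the smoke test, not any NVIDIA-specific implementation.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    """Standard attention: softmax(QK^T / sqrt(d_k)) V."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5    # (..., seq_q, seq_k)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = F.softmax(scores, dim=-1)              # attention weights
    return weights @ v                               # (..., seq_q, d_v)

# Tiny smoke test with random tensors (batch=2, heads=4, seq=8, dim=16).
q = torch.randn(2, 4, 8, 16)
k = torch.randn(2, 4, 8, 16)
v = torch.randn(2, 4, 8, 16)
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([2, 4, 8, 16])
```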

Preferred Qualifications

  • Agentic AI Expertise – Practical experience with frameworks such as LangChain or LangGraph and a deep understanding of multi-step reasoning and planning.
  • LLM Inference Optimization – Expertise in accelerating LLM inference (e.g., KV caching; a minimal sketch follows this list) to achieve sub-second latency at scale.
  • End-to-End ML Systems Design – A portfolio showing ownership of the full ML lifecycle, from data ingestion to monitoring and continuous improvement.
  • Research Impact – Publications or patents that advance NLP or supply-chain AI.
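To make the KV-caching idea in the inference-optimization bullet concrete, here is a single-head, single-layer sketch in PyTorch: each decoding step projects only the newest token and appends its key/value to a cache, so earlier projections are never recomputed. The function and variable names are illustrative, and this is not how TensorRT-LLM implements caching.

```python
import torch
import torch.nn.functional as F

def decode_step(x_t, w_q, w_k, w_v, cache):
    """One autoregressive decoding step with a KV cache.

    x_t: (batch, 1, d_model) embedding of the newest token only.
    cache: keys/values of all previously generated tokens, reused as-is.
    """
    q = x_t @ w_q                                   # project only the new token
    k = x_t @ w_k
    v = x_t @ w_v
    if cache["k"] is not None:
        k = torch.cat([cache["k"], k], dim=1)       # append to cached keys
        v = torch.cat([cache["v"], v], dim=1)       # append to cached values
    cache["k"], cache["v"] = k, v
    scores = q @ k.transpose(-2, -1) / k.size(-1) ** 0.5
    weights = F.softmax(scores, dim=-1)
    return weights @ v, cache

# Toy usage: random projections, three decoding steps, cache grows each step.
d = 16
w_q, w_k, w_v = (torch.randn(d, d) for _ in range(3))
cache = {"k": None, "v": None}
for step in range(3):
    x_t = torch.randn(1, 1, d)
    out, cache = decode_step(x_t, w_q, w_k, w_v, cache)
print(cache["k"].shape)  # torch.Size([1, 3, 16]) – one cached key per token
```

The payoff is that per-token decoding cost stays roughly constant in the projection step instead of growing with sequence length, which is one ingredient of sub-second latency at scale.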

Benefits

  • Eligible for equity and benefits.
  • Work at one of the technology world’s most desirable employers with a forward-thinking and dedicated team.

#LI-Hybrid