Senior Research Engineer - Autonomous Vehicles
Required Skills & Competences
- Kubernetes (4), Python (7), Algorithms (3), MLOps (8), TensorFlow (4), Communication (7), Debugging (4), NLP (4), PyTorch (4), CUDA (4), GPU (4)
We are recruiting top research engineers for the Autonomous Vehicles Research team at NVIDIA. The role focuses on software engineering and AI topics such as deep learning, reinforcement learning, and generative modeling for autonomous driving. You will work on AI models for agent behavior, end-to-end AV architectures, AI safety, closed-loop training, and AV foundation models (VLMs, reasoning models). The position balances fundamental publishable research with opportunities to impact products across CUDA, simulation, graphics, NLP, robotics, and more. Strong communication and collaboration with cross-functional teams and domain scientists are essential.
Responsibilities
- Develop large-scale supervised learning and reinforcement learning training frameworks, capable of running on thousands of GPUs, to support multi-modal foundation models for AVs.
- Optimize GPU and cluster utilization for efficient model training and fine-tuning on massive datasets.
- Implement scalable data loaders and preprocessors tailored for multimodal datasets (video, text, sensor data).
- Build and optimize simulation infrastructure (GPU-accelerated simulators) to support training of driving policies at scale.
- Collaborate with researchers to integrate cutting-edge model architectures into scalable training pipelines.
- Develop sim-to-real transfer pipelines and work with AV product teams to deploy to real-world vehicles.
- Propose scalable solutions combining LLMs with policy learning and apply reinforcement learning to fine-tune multimodal LLMs.
- Develop robust monitoring and debugging tools to ensure reliability and performance of training workflows on large GPU clusters.
Requirements
- Bachelor's degree in Computer Science, Robotics, Engineering, or a related field, or equivalent experience.
- 10+ years of full-time industry experience in large-scale MLOps and AI infrastructure.
- Proven experience designing and optimizing distributed training systems using frameworks such as PyTorch, JAX, or TensorFlow.
- Deep familiarity with reinforcement learning algorithms (PPO, SAC, Q-learning) and experience tuning hyperparameters and reward functions.
- Experience with policy learning techniques including reward shaping, domain randomization, and curriculum learning.
- Strong understanding of GPU acceleration and CUDA programming; experience optimizing compute on large GPU clusters and HPC environments.
- Experience with cluster management and orchestration tools such as Kubernetes and job schedulers like SLURM.
- Strong programming skills in Python and a high-performance language such as C++.
- Experience implementing scalable data pipelines, multimodal preprocessing, and simulation-based training for AVs.
Technologies & Tools Mentioned
- Deep Learning, Generative Modeling, Reinforcement Learning
- PyTorch, JAX, TensorFlow
- CUDA, GPU acceleration, HPC, SLURM, Kubernetes
- Python, C++
- Large-scale distributed training, MLOps, multi-modal datasets, simulation infrastructure, LLMs
Benefits & Compensation
- Base salary range (by level and location):
- Level 4: 184,000 USD - 287,500 USD
- Level 5: 224,000 USD - 356,500 USD
- Eligible for equity and additional benefits (see NVIDIA benefits page).
- Applications accepted at least until October 11, 2025.
NVIDIA is an equal opportunity employer and is committed to fostering a diverse work environment.