Deep Learning Scientist, AI Safety

at NVIDIA
USD 120,000-235,750 per year
Middle / Senior
✅ On-site

Used Tools & Technologies

Not specified

Required Skills & Competences

Security (3), Python (3), Algorithms (6), Machine Learning (5), MLOps (3), Hiring (3), Communication (3), LLM (3), PyTorch (3), Compliance (3)

Details

NVIDIA is hiring a Deep Learning Scientist focused on AI safety for large language models (LLMs) and multimodal models. The role centers on content safety, ML fairness, robustness, and AI/model security across research and production engineering teams. You will assess, quantify, and improve safety and inclusivity of LLMs in a scalable way and contribute tools and MLOps practices to support safe deployment.

Responsibilities

  • Develop datasets and models for training and evaluating models and end-to-end systems for content safety and ML fairness.
  • Research and implement techniques for bias detection and mitigation in LLMs and in systems that use LLMs (including retrieval-augmented generation, RAG).
  • Define and track key metrics for responsible LLM behavior and usage.
  • Follow and contribute to MLOps best practices: automation, monitoring, scaling, and safety.
  • Contribute to the MLOps platform and develop safety tools to help ML teams be more effective.
  • Collaborate with engineers, data scientists, and researchers to implement solutions for content safety and ML fairness challenges.

Requirements

  • Master's or PhD in Computer Science, Electrical Engineering, or a related field, or equivalent experience.
  • 3+ years of experience developing and deploying machine learning models in production.
  • Strong understanding of machine learning principles and algorithms.
  • Hands-on programming experience in Python and in-depth knowledge of ML frameworks such as Keras or PyTorch.
  • 2+ years of experience in one or more of: Content Safety, ML Fairness, Robustness, AI Model Security, or related areas.
  • Experience with content-safety problem areas (hate/harassment, sexualized content, harmful/violent content, etc.).
  • Experience working with large multimodal datasets and multimodal models.
  • Strong problem-solving and analytical skills.
  • Excellent collaboration and communication skills; demonstrates trust-building behaviors (humility, transparency, respect, intellectual honesty).

Ways to stand out

  • Experience with alignment and fine-tuning of LLMs (including VLMs or any-to-text workflows).
  • Experience with multimodal and/or multilingual content safety and legal/regulatory compliance.
  • Background addressing robustness issues such as hallucinations, digressions, and generative misinformation.
  • Experience with GenAI security topics: prompt stability, model extraction, confidentiality/data extraction, integrity, availability, and adversarial robustness.
  • Strong publication and research record in AI/ML and safety-related areas.

Compensation & Benefits

  • Base salary ranges (dependent on level, location, and experience):
    • Level 2: 120,000 USD - 189,750 USD
    • Level 3: 148,000 USD - 235,750 USD
  • Eligible for equity and comprehensive benefits.

Additional information

  • Location: Santa Clara, CA, United States.
  • Full-time role. Applications accepted at least until October 12, 2025.
  • NVIDIA is an equal opportunity employer committed to diversity and inclusion.