Senior GenAI Algorithms Engineer

at Nvidia
USD 148,000-287,500 per year
SENIOR
On-site

Used Tools & Technologies

Not specified

Required Skills & Competences

  • Python @ 4
  • GitHub @ 4
  • Algorithms @ 4
  • Distributed Systems @ 4
  • Mathematics @ 6
  • Debugging @ 4
  • API @ 4
  • PyTorch @ 4
  • GPU @ 4

Details

NVIDIA is seeking engineers to design, develop, and optimize Artificial Intelligence solutions for diverse real-world problems. The role focuses on deep learning, particularly large language models (LLMs) and multimodal variants. You will collaborate with internal partners, users, and the open-source community to implement highly optimized AI algorithms, tune performance and accuracy, define APIs, and build coherent toolsets and libraries.

Responsibilities

  • Contribute to the open-source NeMo framework.
  • Develop and maintain state-of-the-art Generative AI models (e.g., large language models, multimodal LLMs).
  • Work on large-scale distributed systems for end-to-end AI training and inference deployment, including data fetching, preprocessing, orchestration of model training and tuning, and model serving (see the sketch after this list).
  • Analyze, influence, and improve AI/DL libraries, frameworks, and APIs following good engineering practices.
  • Research, prototype, and develop tools and infrastructure pipelines.
  • Publish results on GitHub and in scientific publications.
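
As a loose illustration of the distributed-training work described above, here is a minimal PyTorch DistributedDataParallel sketch. The toy model, hyperparameters, and `torchrun` launch are hypothetical stand-ins for illustration only; the posting does not prescribe any particular setup.

```python
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # Intended to be launched with `torchrun --nproc_per_node=<gpus> train.py`,
    # which sets RANK, WORLD_SIZE, and LOCAL_RANK in the environment.
    dist.init_process_group(backend="nccl")  # use "gloo" on CPU-only machines
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    device = f"cuda:{local_rank}"

    # Toy model standing in for a real LLM; DDP replicates it per process
    # and synchronizes gradients during backward().
    model = DDP(nn.Linear(128, 10).to(device), device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):
        x = torch.randn(32, 128, device=device)          # fake "preprocessed" batch
        y = torch.randint(0, 10, (32,), device=device)   # fake labels
        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()  # gradients are all-reduced across ranks here
        opt.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

In practice the data pipeline, model parallelism, and serving layers mentioned in the bullet would replace the fake batches and the toy linear model.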

Requirements

  • PhD or Master's degree (or equivalent experience) and 5+ years of industry experience in Computer Science, AI, Applied Mathematics, or a related field.
  • Strong mathematical fundamentals and solid AI/deep learning algorithms skills.
  • Excellent programming skills, especially in Python, along with strong debugging, performance analysis, test design, and documentation abilities.
  • Experience with AI/DL frameworks (for example, PyTorch, JAX).

Ways to stand out

  • Prior experience with generative AI techniques applied to LLMs and multimodal variants (image, video, speech, etc.).
  • Exposure to large-scale AI training, plus an understanding of compute-system concepts (latency/throughput bottlenecks, pipelining, multiprocessing) and the associated performance analysis and tuning.
  • Hands-on experience with inference and deployment environments (for example, TensorRT, ONNX, Triton); see the sketch after this list.
  • Knowledge of GPU/CPU architecture and related numerical software.
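
As a small example of the inference-and-deployment tooling mentioned above, the sketch below exports a toy PyTorch module to ONNX with `torch.onnx.export`; the exported file can then be loaded by runtimes such as ONNX Runtime, TensorRT, or Triton. The `TinyMLP` module and the file name are hypothetical, chosen only to keep the example self-contained.

```python
import torch
import torch.nn as nn


class TinyMLP(nn.Module):
    """Hypothetical toy module standing in for a real model component."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

    def forward(self, x):
        return self.net(x)


model = TinyMLP().eval()
dummy = torch.randn(1, 16)  # example input that fixes shapes for tracing

# Export to ONNX (requires the `onnx` package to be installed).
torch.onnx.export(
    model,
    dummy,
    "tiny_mlp.onnx",
    input_names=["x"],
    output_names=["y"],
    dynamic_axes={"x": {0: "batch"}},  # allow variable batch size at inference
)
```

Marking the batch dimension dynamic is a common choice so the same exported graph can serve requests of different batch sizes.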

Compensation & Benefits

  • Base salary ranges by level:
    • Level 3: USD 148,000 - 235,750
    • Level 4: USD 184,000 - 287,500
  • Eligible for equity and benefits.

Additional information

  • Applications accepted at least until September 14, 2025.
  • NVIDIA is an equal opportunity employer and is committed to fostering a diverse work environment.