Senior Software Engineer, Deep Learning Inference - Windows

at NVIDIA

📍 Santa Clara, United States

$148,000-276,000 per year

SENIOR
✅ Hybrid

Used Tools & Technologies

Not specified

Required Skills & Competences

  • Software Development @ 6
  • Python @ 4
  • Machine Learning @ 4
  • Communication @ 7
  • Performance Optimization @ 4
  • API @ 4

Details

At NVIDIA, we're at the forefront of innovation, driving advancements in AI and machine learning to solve some of the world's most challenging problems. We're seeking talented and motivated engineers to join our TensorRT team in developing the industry-leading deep learning inference software for NVIDIA AI accelerators.

Responsibilities

  • Design, implement, and optimize TensorRT components that deliver responsive, tightly integrated Generative AI inference applications for PCs and workstations.
  • Develop software in C++, Python, CUDA, and DirectML to accelerate systems that enable seamless and efficient deployment of next-generation AI models (a minimal TensorRT workflow is sketched after this list).
  • Collaborate with deep learning experts and GPU architects throughout the company.
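
For a concrete picture of the workflow these responsibilities center on, the sketch below shows one common TensorRT Python path: building a serialized engine from an ONNX model. The model file name and the FP16 flag are illustrative assumptions, not requirements of the role.

  # Minimal sketch: build a serialized TensorRT engine from an ONNX model.
  # "model.onnx" and the FP16 flag are hypothetical choices for illustration.
  import tensorrt as trt

  logger = trt.Logger(trt.Logger.WARNING)
  builder = trt.Builder(logger)

  # Explicit-batch network definition (required by the ONNX parser in TensorRT 8.x).
  network = builder.create_network(
      1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
  )

  # Parse the ONNX graph into the TensorRT network.
  parser = trt.OnnxParser(network, logger)
  with open("model.onnx", "rb") as f:
      if not parser.parse(f.read()):
          for i in range(parser.num_errors):
              print(parser.get_error(i))
          raise RuntimeError("ONNX parse failed")

  # Build an optimized, serialized engine; FP16 is one common optimization knob.
  config = builder.create_builder_config()
  config.set_flag(trt.BuilderFlag.FP16)
  engine_bytes = builder.build_serialized_network(network, config)

  with open("model.engine", "wb") as f:
      f.write(engine_bytes)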

Requirements

  • BS, MS, PhD or equivalent experience in Computer Science, Computer Engineering or a related field.
  • 5+ years of software development experience on a large codebase or project.
  • Strong proficiency in C++ and Python programming languages.
  • Experience developing deep learning frameworks, compilers, or system software.
  • Foundational knowledge of Machine Learning techniques or GPU optimizations.
  • Excellent problem-solving skills and the ability to learn and work effectively in a fast-paced, collaborative environment.
  • Strong communication skills and the ability to articulate complex technical concepts.

Ways to stand out from the crowd

  • Experience developing a DirectML backend for GPUs or NPUs.
  • Windows application and middleware development using the DirectX or DirectML APIs.
  • Knowledge of GPU programming using CUDA or OpenCL.
  • Experience with deploying AI models in production environments.
  • Knowledge of additional performance optimization tools and techniques, as well as contributions to open-source projects or publications in relevant areas.