Senior System Software Engineer - Dynamo and Triton Inference Server

at NVIDIA
πŸ“ United States
USD 184,000-356,500 per year
SENIOR
✅ Remote

Used Tools & Technologies

Not specified

Required Skills & Competences

Docker @ 4, Kubernetes @ 4, Python @ 4, Algorithms @ 7, Distributed Systems @ 4, TensorFlow @ 7, Hiring @ 4, gRPC @ 4, Protobuf @ 4, Rust @ 4, Debugging @ 4, HTTP @ 4, JSON @ 4, PyTorch @ 7, Agile @ 4, GPU @ 4

Details

We are now looking for a Senior System Software Engineer to work on Dynamo & Triton Inference Server! NVIDIA is hiring software engineers for its GPU-accelerated deep learning software team. Academic and commercial groups around the world are using GPUs to power a revolution in AI, enabling breakthroughs in problems from image classification to speech recognition to natural language processing. We are a fast-paced team building back-end services and software to make the design and deployment of new AI models easier and more accessible to all users.

Responsibilities

In this role, you will develop open-source software to serve inference of trained AI models running on GPUs. You will balance a variety of objectives:

  • Build robust, scalable, high-performance software components to support our distributed inference workloads
  • Work with team leads to prioritize features and capabilities
  • Load-balance asynchronous requests across available resources
  • Optimize prediction throughput under latency constraints
  • Integrate the latest open-source technology

Requirements

  • Master's or PhD, or equivalent experience
  • 6+ years of experience in Computer Science, Computer Engineering, or a related field
  • Ability to work in a fast-paced, agile team environment
  • Excellent Rust, Python, and C++ programming and software design skills, including debugging, performance analysis, and test design
  • Experience with high-scale distributed systems and ML systems

Ways to stand out from the crowd

  • Prior work experience improving performance of AI inference systems
  • Background with deep learning algorithms and frameworks, especially experience with Large Language Models and frameworks such as PyTorch, TensorFlow, TensorRT, and ONNX Runtime
  • Experience building and deploying cloud services using HTTP REST, gRPC, protobuf, JSON, and related technologies
  • Experience with container technologies such as Docker, and with container orchestrators such as Kubernetes
  • Familiarity with the latest AI research and working knowledge of how these systems are efficiently implemented

Benefits

NVIDIA offers equity and benefits. The company is committed to fostering a diverse work environment and is an equal opportunity employer.