Principal Software Engineer - Inference as a Service
at NVIDIA
Santa Clara, United States
USD 248,000-391,000 per year
Required Skills & Competences
Go @ 7, Grafana @ 3, Kubernetes @ 4, Prometheus @ 3, Python @ 7, CI/CD @ 4, Datadog @ 3, Distributed Systems @ 8, Leadership @ 4, Mathematics @ 4, Performance Optimization @ 4, Debugging @ 4, API @ 4, Technical Leadership @ 4, System Architecture @ 7, OpenTelemetry @ 3, GPU @ 4
Details
NVIDIA is building the next era of computing powered by AI. The Software Infrastructure Team within the NVIDIA AI Factory initiative is developing the core infrastructure that powers closed- and open-source AI models. This Principal Software Engineer role is a key technical leadership position focused on designing and building an Inference-as-a-Service platform that manages GPU resources and delivers high-performance, low-latency inference at massive scale.
Responsibilities
- Lead design and development of a scalable, robust, and reliable platform for serving AI models (Inference as a Service).
- Architect and implement systems for dynamic GPU resource management, autoscaling, and efficient scheduling of inference workloads.
- Build and maintain core infrastructure components including load balancing and rate limiting to ensure availability and stability.
- Define and implement APIs for model deployment, monitoring, and management.
- Optimize system performance and latency for various model types (e.g., large language models, computer vision models) to ensure high throughput and responsiveness.
- Integrate deployment, monitoring, and performance telemetry into CI/CD pipelines in collaboration with engineering teams.
- Develop tools and frameworks for real-time observability, performance profiling, and debugging of inference services.
- Drive architectural decisions and best practices for long-term platform evolution and scalability.
- Contribute to NVIDIA's AI Factory initiative by building foundational model-serving platforms.
Requirements
- 15+ years of software engineering experience with deep expertise in distributed systems or large-scale backend infrastructure.
- BS, MS, or PhD in Computer Science, Electrical/Computer Engineering, Physics, Mathematics, another engineering discipline, or a related field (or equivalent experience).
- Strong programming skills in Python, Go, or C++ with a track record of building production-grade, highly available systems.
- Proven experience with container orchestration technologies such as Kubernetes.
- Deep understanding of system architecture for high-performance, low-latency API services.
- Experience designing, implementing, and optimizing systems for GPU resource management.
- Familiarity with modern observability tools (e.g., Datadog, Prometheus, Grafana, OpenTelemetry).
- Demonstrated experience with deployment strategies and CI/CD pipelines.
- Excellent problem-solving skills and ability to work in a fast-paced, collaborative environment.
Ways to stand out
- Experience with specialized inference serving frameworks.
- Open-source contributions to AI/ML, distributed systems, or infrastructure projects.
- Hands-on experience with performance optimization techniques for AI models (e.g., quantization, model compression).
- Expertise in building platforms that support a wide variety of AI model architectures.
- Strong understanding of the full lifecycle of an AI model, from training to deployment and serving.
Compensation & Benefits
- Base salary range: 248,000 USD - 391,000 USD (determined by location, experience, and comparable pay).
- Eligible for equity and company benefits (link to NVIDIA benefits provided in original posting).
Additional info
- Location: Santa Clara, CA, United States.
- Employment type: Full time.
- Application window: Applications accepted at least until August 24, 2025.
- NVIDIA is an equal opportunity employer committed to diversity and inclusion.