Senior Software Engineer - NIM Factory Container and Cloud Infrastructure
Required Skills & Competences
Security @ 4, Docker @ 4, Kubernetes @ 4, Python @ 4, CI/CD @ 4, Communication @ 6, Helm @ 4, SRE @ 4, Microservices @ 4, API @ 4, LLM @ 4, CUDA @ 4, GPU @ 4
Details
NVIDIA is the platform upon which every new AI-powered application is built. We are seeking a Senior Software Engineer focused on container and cloud infrastructure. You will help design and implement our core container strategy for NVIDIA Inference Microservices (NIMs) and our hosted services. You will build enterprise-grade software and tooling for container build, packaging, and deployment. You will help improve reliability, performance, and scale across thousands of GPUs, including support for disaggregated LLM inference and other emerging deployment patterns.
Responsibilities
- Design, build, and harden containers for NIM runtimes and inference backends; enable reproducible, multi-architecture, CUDA-optimized builds.
- Develop Python tooling and services for build orchestration, CI/CD integrations, Helm/Operator automation, and test harnesses; enforce quality with typing, linting, and unit/integration tests.
- Help design and evolve Kubernetes deployment patterns for NIMs, including GPU scheduling, autoscaling, and multi-cluster rollouts.
- Optimize container performance: layer layout, startup time, build caching, runtime memory/IO, network, and GPU utilization; instrument with metrics and tracing.
- Evolve base image strategy, dependency management, and artifact/registry topology.
- Collaborate across research, backend, SRE, and product teams to ensure day-0 availability of new models.
- Mentor teammates and set high engineering standards for container quality, security, and operability.
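The GPU scheduling work described above can be sketched in miniature. The snippet below builds a Kubernetes pod manifest (as a plain Python dict) that requests GPUs through the `nvidia.com/gpu` extended resource exposed by the NVIDIA device plugin. This is an illustrative sketch only, not NVIDIA's internal deployment tooling; the pod name and image reference are hypothetical.

```python
# Minimal sketch: a pod manifest requesting GPUs via the extended resource
# advertised by the NVIDIA device plugin. Illustrative only.

def gpu_pod_spec(name: str, image: str, gpus: int) -> dict:
    """Return a pod manifest requesting `gpus` GPUs for a NIM-style container."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [
                {
                    "name": name,
                    "image": image,
                    "resources": {
                        # GPUs are requested only in limits; Kubernetes treats
                        # extended resources as request == limit.
                        "limits": {"nvidia.com/gpu": str(gpus)},
                    },
                }
            ],
            # Schedule only onto nodes tainted for GPU workloads.
            "tolerations": [
                {"key": "nvidia.com/gpu", "operator": "Exists", "effect": "NoSchedule"}
            ],
        },
    }

spec = gpu_pod_spec("nim-llm", "example.registry/nim/llm:latest", 2)
```

In practice a manifest like this would be templated through Helm or applied via a Kubernetes Operator rather than constructed by hand.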
Requirements
- 10+ years building production software with a strong focus on containers and Kubernetes.
- Strong Python skills for building production-grade tooling and services.
- Experience with Python SDKs/clients for Kubernetes and cloud services.
- Expert knowledge of Docker/BuildKit, containerd/OCI, image layering, multi-stage builds, and registry workflows.
- Deep experience operating workloads on Kubernetes, including GPU scheduling and autoscaling.
- Strong understanding of LLM inference features such as structured output, KV-cache, and LoRA adapters.
- Hands-on experience building and running GPU workloads in Kubernetes, including NVIDIA device plugin, MIG, CUDA drivers/runtime, and resource isolation.
- Excellent collaboration and communication skills; ability to influence cross-functional design.
- Degree in Computer Science, Computer Engineering, or related field (BS or MS) or equivalent experience.
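As a flavor of the Python tooling and registry workflows listed above, the sketch below parses an OCI-style image reference into registry, repository, and tag components. The helper name and defaulting behavior are hypothetical simplifications; real tooling would also handle digests and library-repository defaults.

```python
# Sketch: split an image reference 'registry/repo:tag' into parts.
# Hypothetical helper for illustration; omits digest (@sha256:...) handling.

def parse_image_ref(ref: str) -> dict:
    """Parse an image reference, defaulting registry and tag when absent."""
    first = ref.split("/", 1)[0]
    # A leading component containing '.' or ':' is a registry host.
    if "/" in ref and ("." in first or ":" in first):
        registry, rest = ref.split("/", 1)
    else:
        registry, rest = "docker.io", ref
    if ":" in rest:
        repo, tag = rest.rsplit(":", 1)
    else:
        repo, tag = rest, "latest"
    return {"registry": registry, "repository": repo, "tag": tag}
```

For example, `parse_image_ref("nvcr.io/nvidia/cuda:12.4.0")` yields registry `nvcr.io`, repository `nvidia/cuda`, and tag `12.4.0`.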
Nice to have / Ways to stand out
- Expertise with Helm chart design, Kubernetes Operators, and platform APIs serving many teams.
- Experience with the OpenAI and Hugging Face APIs, and familiarity with different inference backends (vLLM, SGLang, TRT-LLM).
- Background in benchmarking and optimizing inference container performance and startup latency at scale.
- Prior experience designing multi-tenant, multi-cluster, or edge/air-gapped container delivery.
- Contributions to open-source container, Kubernetes, or GPU ecosystems.
Benefits and compensation
With competitive salaries and a generous benefits package, NVIDIA is widely considered one of the technology industry’s most desirable employers. You will be eligible for equity and benefits. Base salary will be determined based on location and experience. Base salary ranges provided in the posting:
- Level 4: 184,000 USD - 287,500 USD
- Level 5: 224,000 USD - 356,500 USD
Applications for this job will be accepted at least until September 21, 2025.
Diversity & inclusion
NVIDIA is committed to fostering a diverse work environment and is an equal opportunity employer. We do not discriminate on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.