Senior Software Engineer - NIM Factory Container and Cloud Infrastructure

at NVIDIA
USD 184,000-356,500 per year
SENIOR
✅ On-site

Used Tools & Technologies

Not specified

Required Skills & Competences

Security @ 4, Docker @ 4, Kubernetes @ 4, Python @ 4, CI/CD @ 4, Communication @ 6, Helm @ 4, SRE @ 4, Microservices @ 4, API @ 4, LLM @ 4, CUDA @ 4, GPU @ 4

Details

NVIDIA is seeking a Senior Software Engineer focused on container and cloud infrastructure to design and implement the core container strategy for NVIDIA Inference Microservices (NIMs) and hosted services. The role involves building enterprise-grade software and tooling for container build, packaging, and deployment; improving reliability, performance, and scale across thousands of GPUs; and supporting emerging deployment patterns, including disaggregated LLM inference.

Responsibilities

  • Design, build, and harden containers for NIM runtimes and inference backends; enable reproducible, multi-arch, CUDA-optimized builds.
  • Develop Python tooling and services for build orchestration, CI/CD integrations, Helm/Operator automation, and test harnesses; enforce quality with typing, linting, and unit/integration tests.
  • Help design and evolve Kubernetes deployment patterns for NIMs, including GPU scheduling, autoscaling, and multi-cluster rollouts (a minimal Python scheduling sketch follows this list).
  • Optimize container performance: layer layout, startup time, build caching, runtime memory/IO, network, and GPU utilization; instrument with metrics and tracing.
  • Evolve base image strategy, dependency management, and artifact/registry topology.
  • Collaborate across research, backend, SRE, and product teams to ensure day-0 availability of new models.
  • Mentor teammates and set high engineering standards for container quality, security, and operability.
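
As an illustration of the GPU scheduling work mentioned above, here is a minimal sketch that uses the official Kubernetes Python client to submit a pod requesting one GPU through the NVIDIA device plugin's nvidia.com/gpu resource. The image name, namespace, labels, and node selector are illustrative assumptions, not details from the posting.

```python
# Minimal sketch: schedule a single-GPU pod with the Kubernetes Python client.
# The image, namespace, labels, and node selector below are placeholders.
from kubernetes import client, config

def submit_gpu_pod() -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside a cluster

    container = client.V1Container(
        name="nim-runtime",
        image="nvcr.io/nim/example-runtime:latest",  # placeholder image
        resources=client.V1ResourceRequirements(
            limits={"nvidia.com/gpu": "1"},  # exposed by the NVIDIA device plugin
        ),
    )
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="nim-gpu-smoke-test", labels={"app": "nim"}),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[container],
            node_selector={"nvidia.com/gpu.present": "true"},  # placeholder selector
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

if __name__ == "__main__":
    submit_gpu_pod()
```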

Requirements

  • 10+ years building production software with a strong focus on containers and Kubernetes.
  • Strong Python skills building production-grade tooling/services.
  • Experience with Python SDKs and clients for Kubernetes and cloud services.
  • Expert knowledge of Docker/BuildKit, containerd/OCI, image layering, multi-stage builds, and registry workflows (see the build-orchestration sketch after this list).
  • Deep experience operating workloads on Kubernetes.
  • Strong understanding of LLM inference features, including structured output, KV-cache, and LoRA adapters.
  • Hands-on experience building and running GPU workloads in Kubernetes, including NVIDIA device plugin, MIG, CUDA drivers/runtime, and resource isolation.
  • Excellent collaboration and communication skills; ability to influence cross-functional design.
  • A degree in Computer Science, Computer Engineering, or a related field (BS or MS), or equivalent experience.
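
As an illustration of the Python build tooling and multi-stage Docker builds called for above, the sketch below drives an image build with the docker-py SDK. The build context, Dockerfile, stage name, tag, and build arguments are illustrative assumptions, not details from the posting.

```python
# Minimal sketch: drive a multi-stage image build from Python with docker-py.
# Paths, tags, stage names, and build args are illustrative placeholders.
import docker

def build_runtime_image() -> str:
    client = docker.from_env()
    image, build_logs = client.images.build(
        path=".",                        # build context
        dockerfile="Dockerfile",         # assumed multi-stage Dockerfile
        target="runtime",                # assumed name of the final build stage
        tag="registry.example.com/nim/runtime:dev",
        buildargs={"CUDA_VERSION": "12.4.1"},  # placeholder build arg
        pull=True,                       # refresh base layers
        rm=True,                         # remove intermediate containers
    )
    for chunk in build_logs:
        if "stream" in chunk:
            print(chunk["stream"], end="")
    return image.id

if __name__ == "__main__":
    print(build_runtime_image())
```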

Ways To Stand Out (Preferred / Nice-to-have)

  • Expertise with Helm chart design systems, Operators, and platform APIs serving many teams.
  • Experience with the OpenAI API and Hugging Face APIs, and an understanding of the differences between inference backends such as vLLM, SGLang, and TRT-LLM (a minimal client sketch follows this list).
  • Background in benchmarking and optimizing inference container performance and startup latency at scale.
  • Prior experience designing multi-tenant, multi-cluster, or edge/air-gapped container delivery.
  • Contributions to open-source container, Kubernetes, or GPU ecosystems.
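
Because the list above calls out the OpenAI API and backends such as vLLM, SGLang, and TRT-LLM, which commonly expose OpenAI-compatible endpoints, here is a minimal sketch that queries such an endpoint with the openai Python package. The base URL, API key, and model name are illustrative assumptions, not details from the posting.

```python
# Minimal sketch: call an OpenAI-compatible inference endpoint.
# The base_url, api_key, and model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # placeholder local endpoint
    api_key="not-used-locally",           # many local servers ignore the key
)

response = client.chat.completions.create(
    model="example/model-name",           # placeholder model identifier
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```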

Compensation & Benefits

  • Base salary ranges by level (location, experience, and internal pay are considered):
    • Level 4: 184,000 USD - 287,500 USD
    • Level 5: 224,000 USD - 356,500 USD
  • Eligible for equity and company benefits (link provided in original posting).

Additional Information

  • Applications for this job will be accepted at least until September 21, 2025.
  • NVIDIA is an equal opportunity employer committed to fostering a diverse work environment.