Solutions Architect, Inference Deployments

at NVIDIA
USD 148,000-235,750 per year
Seniority: Middle
✅ On-site

Used Tools & Technologies

Not specified

Required Skills & Competences

Grafana (3), Kubernetes (3), Prometheus (3), DevOps (3), AWS (2), Networking (2), Microservices (3), AWS Lambda (2), LLM (3), OpenTelemetry (3), GPU (3)

Details

We’re forming a team of innovators to deploy and enhance AI inference solutions at scale, built on NVIDIA GPU technology and Kubernetes. As a Solutions Architect (Inference Focus), you will collaborate closely with engineering, DevOps, and customer success teams to drive enterprise AI adoption and bring generative AI into production.

Responsibilities

  • Help customers craft, deploy, and maintain scalable, GPU-accelerated inference pipelines on Kubernetes for large language models (LLMs) and generative AI workloads.
  • Tune performance using TensorRT / TensorRT-LLM, NVIDIA NIM, and Triton Inference Server to improve GPU utilization and model efficiency.
  • Architect zero-downtime deployments and autoscaling (e.g., Horizontal Pod Autoscaler with custom metrics or an equivalent mechanism).
  • Integrate cloud-native observability and telemetry tools (e.g., OpenTelemetry, Prometheus, Grafana) into inference deployments; a minimal sketch follows this list.
  • Collaborate with cross-functional teams (engineering, product) and offer technical mentorship to customers implementing AI at scale.
  • Troubleshoot and optimize GPU orchestration, including allocation via NVIDIA GPU Operator and partitioning with Multi-Instance GPU (MIG).
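
As a concrete illustration of the serving and observability bullets above, here is a minimal Python sketch of a client-side request to Triton Inference Server wrapped in an OpenTelemetry span. The server address, model name ("llm_demo"), and tensor names and shapes are illustrative assumptions, not details from this posting; it requires the tritonclient[http], numpy, and opentelemetry-sdk packages.

    import numpy as np
    import tritonclient.http as httpclient
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

    # Wire up tracing; a real deployment would export spans to an OTLP
    # collector (and on to Grafana) rather than the console.
    trace.set_tracer_provider(TracerProvider())
    trace.get_tracer_provider().add_span_processor(
        BatchSpanProcessor(ConsoleSpanExporter())
    )
    tracer = trace.get_tracer("inference.client")

    client = httpclient.InferenceServerClient(url="localhost:8000")

    # Build the request: one named FP32 input, one requested output.
    # "llm_demo", "INPUT0", and "OUTPUT0" are hypothetical names.
    batch = np.random.rand(1, 16).astype(np.float32)
    infer_input = httpclient.InferInput("INPUT0", list(batch.shape), "FP32")
    infer_input.set_data_from_numpy(batch)
    requested = httpclient.InferRequestedOutput("OUTPUT0")

    # Each request becomes one span, so per-request latency lands in the
    # tracing backend alongside server-side metrics.
    with tracer.start_as_current_span("triton.infer") as span:
        span.set_attribute("model.name", "llm_demo")
        result = client.infer("llm_demo", inputs=[infer_input], outputs=[requested])

    print(result.as_numpy("OUTPUT0").shape)

Triton also exposes a Prometheus metrics endpoint (port 8002 by default), so client-side spans like this complement, rather than replace, server-side GPU and queue metrics.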

Requirements

  • 5+ years in Solutions Architecture with a proven track record of moving AI inference from POC to production on Kubernetes.
  • Experience architecting GPU allocation using NVIDIA GPU Operator and NVIDIA NIM Operator (see the node-inspection sketch after this list).
  • Ability to troubleshoot complex GPU orchestration and ensure efficient utilization in Kubernetes environments.
  • Proficiency with TensorRT-LLM, Triton, and TensorRT for model optimization and serving.
  • Demonstrated success optimizing LLMs for low-latency inference in enterprise environments.
  • BS or equivalent experience in Computer Science / Engineering.
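
A quick way to sanity-check the GPU Operator point in practice: once the Operator's device plugin is running, GPUs surface as extended node resources. The sketch below, using the official kubernetes Python client, lists them per node; with MIG enabled, slices appear under names such as nvidia.com/mig-1g.5gb (exact names depend on the configured MIG strategy and GPU model).

    from kubernetes import client, config

    # Assumes kubeconfig credentials; inside a pod, use
    # config.load_incluster_config() instead.
    config.load_kube_config()
    v1 = client.CoreV1Api()

    for node in v1.list_node().items:
        allocatable = node.status.allocatable or {}
        # Keep only NVIDIA extended resources: whole GPUs
        # (nvidia.com/gpu) and MIG slices (nvidia.com/mig-*).
        gpus = {name: qty for name, qty in allocatable.items()
                if name.startswith("nvidia.com/")}
        if gpus:
            print(node.metadata.name, gpus)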

Preferred / Ways to stand out

  • Prior experience deploying NVIDIA NIM microservices for multi-model inference.
  • Serverless inference experience and familiarity with FaaS patterns (e.g., Google Cloud Run, AWS Lambda, NVCF) on NVIDIA GPUs; a handler sketch follows this list.
  • NVIDIA Certified AI Engineer or similar certification.
  • Active contributions to Kubernetes SIGs or AI inference projects (e.g., KServe, Dynamo, SGLang or similar).
  • Familiarity with networking concepts that support multi-node inference, such as MPI, LWS (LeaderWorkerSet), or similar.
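
For the serverless/FaaS item, the common pattern is a thin function fronting a GPU-backed endpoint. Below is a hedged Python sketch using the AWS Lambda handler signature; the INFERENCE_URL environment variable and the JSON payload shape are hypothetical stand-ins, not an API from this posting.

    import json
    import os
    import urllib.request

    def handler(event, context):
        """AWS Lambda-style entry point forwarding a prompt downstream."""
        payload = json.dumps({"prompt": event.get("prompt", "")}).encode("utf-8")
        req = urllib.request.Request(
            os.environ["INFERENCE_URL"],  # hypothetical config variable
            data=payload,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        # The GPU-heavy work happens behind the endpoint; keeping the
        # function thin is what makes the FaaS pattern fit inference.
        with urllib.request.urlopen(req, timeout=30) as resp:
            body = json.loads(resp.read().decode("utf-8"))
        return {"statusCode": 200, "body": json.dumps(body)}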

Benefits

  • Base salary range: 148,000 USD - 235,750 USD (base salary determined by location, experience, and comparable employee pay).
  • Eligible for equity and benefits (see company benefits page).
  • Applications accepted at least until August 14, 2025.

NVIDIA is committed to fostering a diverse work environment and is an equal opportunity employer. We do not discriminate on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.