Required Skills & Competences
Grafana (3), Kubernetes (3), Prometheus (3), DevOps (3), Hiring (3), AWS (3), Networking (2), Microservices (3), AWS Lambda (3), LLM (3), OpenTelemetry (3), GPU (3)

Details
We’re forming a team of innovators to roll out and enhance AI inference solutions at scale, built on NVIDIA’s GPU technology and Kubernetes. As a Solutions Architect (Inference Focus), you will work closely with engineering, DevOps, and customer success teams to drive enterprise AI adoption and bring generative AI into production.
Responsibilities
- Help customers craft, deploy, and maintain scalable, GPU-accelerated inference pipelines on Kubernetes for large language models (LLMs) and generative AI workloads.
- Drive performance tuning with TensorRT / TensorRT-LLM, NVIDIA NIM, and Triton Inference Server to improve GPU utilization and model efficiency (see the client-side sketch after this list).
- Collaborate with multi-functional teams (engineering, product) and offer technical mentorship to customers implementing AI at scale.
- Architect zero-downtime deployments, autoscaling (e.g., Kubernetes HPA with custom metrics), and integration with cloud-native observability tools (e.g., OpenTelemetry, Prometheus, Grafana).
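
For context, a minimal client-side sketch of the serving path these responsibilities describe: sending a request to a Triton Inference Server with the tritonclient Python package. The server address, model name ("llm_ensemble"), and tensor names ("input_ids", "output_ids") are illustrative assumptions, not part of this role's actual stack.

```python
# Minimal sketch: querying a Triton Inference Server over HTTP.
# The endpoint, model name, and tensor names below are placeholders.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# One batch of 8 token IDs (placeholder values).
token_ids = np.zeros((1, 8), dtype=np.int32)
infer_input = httpclient.InferInput("input_ids", list(token_ids.shape), "INT32")
infer_input.set_data_from_numpy(token_ids)

# Run inference and read back the (assumed) output tensor.
result = client.infer(
    model_name="llm_ensemble",
    inputs=[infer_input],
    outputs=[httpclient.InferRequestedOutput("output_ids")],
)
print(result.as_numpy("output_ids"))
```

In production, the latency and queue-depth metrics Triton exports to Prometheus are typically what feed the custom-metrics autoscaling mentioned above.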
Requirements
- 5+ years in Solutions Architecture with a proven track record of moving AI inference from POC to production on Kubernetes.
- Experience architecting GPU allocation with the NVIDIA GPU Operator and NVIDIA NIM Operator, troubleshooting complex GPU orchestration, and optimizing with Multi-Instance GPU (MIG) for efficient utilization in Kubernetes environments (see the sketch after this list).
- Proficiency with TensorRT-LLM, Triton, and TensorRT for model optimization and serving.
- Demonstrated success optimizing LLMs for low-latency inference in enterprise environments.
- BS or equivalent experience in Computer Science or Engineering.
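
As a rough illustration of the GPU Operator and MIG context in the requirements above, a hedged sketch that lists the GPU resources each node advertises once the operator's device plugin has registered them. Resource names such as nvidia.com/gpu and nvidia.com/mig-1g.5gb follow the device plugin's naming; kubeconfig-based cluster access is an assumption.

```python
# Sketch: list NVIDIA GPU and MIG resources advertised per node,
# as registered by the GPU Operator's device plugin.
# Assumes a valid kubeconfig for the target cluster.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    allocatable = node.status.allocatable or {}
    gpu_resources = {
        name: qty
        for name, qty in allocatable.items()
        # e.g. nvidia.com/gpu, nvidia.com/mig-1g.5gb
        if name.startswith("nvidia.com/")
    }
    if gpu_resources:
        print(node.metadata.name, gpu_resources)
```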
Preferred / Ways to stand out
- Prior experience deploying NVIDIA NIM microservices for multi-model inference (see the sketch after this list).
- Experience with serverless inference and FaaS patterns (e.g., Google Cloud Run, AWS Lambda, NVCF) on NVIDIA GPUs.
- NVIDIA Certified AI Engineer or similar certification.
- Active contributions to Kubernetes SIGs or AI inference projects (e.g., KServe, Dynamo, SGLang).
- Familiarity with networking concepts that support multi-node inference, such as MPI and LeaderWorkerSet (LWS).
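
For a sense of what deploying NIM microservices looks like from the client side: NIM LLM microservices expose an OpenAI-compatible HTTP API, so a deployed endpoint can be smoke-tested with a plain POST. The endpoint URL and model name below are placeholders, not a specific deployment.

```python
# Sketch: exercising a deployed NIM LLM microservice through its
# OpenAI-compatible chat completions endpoint.
# The endpoint URL and model name are placeholders.
import requests

response = requests.post(
    "http://localhost:8000/v1/chat/completions",  # placeholder endpoint
    json={
        "model": "meta/llama-3.1-8b-instruct",  # placeholder model name
        "messages": [{"role": "user", "content": "Hello"}],
        "max_tokens": 64,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```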
Benefits & Compensation
- Base salary range: 148,000 USD - 235,750 USD (base salary will be determined based on location, experience, and pay of employees in similar positions).
- Eligible for equity and benefits: https://www.nvidia.com/en-us/benefits/
Other details
- Location: Santa Clara, CA, United States.
- Employment type: Full time.
- Applications accepted at least until August 30, 2025.
Equal Opportunity
NVIDIA is committed to fostering a diverse work environment and is an equal opportunity employer. We do not discriminate in hiring or promotion practices on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.