Senior Solutions Architect - AI Infrastructure Engineer
at Nvidia
USD 184,000-287,500 per year
Required Skills & Competences
Ansible @ 4, Docker @ 4, Grafana @ 3, Kubernetes @ 4, Prometheus @ 3, Mathematics @ 4, Mentoring @ 4, Networking @ 4, Performance Optimization @ 3, Debugging @ 4, LLM @ 4, GPU @ 4
Details
Are you an experienced systems architect with an interest in advancing artificial intelligence (AI) and high-performance computing (HPC) in academic and research environments? We are looking for a Solutions Architect to join the higher education and research team. In this role you will work with universities and research institutions to optimize the design and deployment of AI infrastructure. Our team applies expertise in accelerated software and hardware systems to enable groundbreaking advancements in AI, deep learning, and scientific research. This role requires a strong background in building research computing clusters, deploying AI and HPC workloads, and optimizing system performance at scale.
Responsibilities
- Collaborate as a key technical advisor for the design, build-out, and optimization of university-level research computing infrastructures powering GPU-accelerated scientific workflows.
- Work with university research computing centers to optimize hardware utilization using software orchestration tools such as NVIDIA Base Command, Kubernetes, Slurm, and Jupyter notebook environments.
- Implement systems monitoring and management tools to optimize resource utilization and gain insight into the most demanding application workloads at research computing centers.
- Document findings and learnings: build targeted training, write whitepapers, blogs, and wiki articles, and work through hard problems with researchers.
- Collaborate with researchers to gather feature requests and product feedback for product and engineering teams.
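The monitoring responsibility above often comes down to aggregating per-job telemetry to find jobs that waste GPU capacity. A minimal Python sketch of that idea, assuming hypothetical job names, metric fields, and a 50% threshold (none of these come from the posting; in practice the data would come from a tool such as DCGM or Prometheus):

```python
from collections import defaultdict

# Hypothetical telemetry samples: (job_id, gpu_utilization_percent)
# recorded at each scrape interval.
samples = [
    ("job-a", 95), ("job-a", 88), ("job-a", 91),
    ("job-b", 12), ("job-b", 7),  ("job-b", 15),
]

def mean_utilization(records):
    """Average GPU utilization per job across all samples."""
    by_job = defaultdict(list)
    for job_id, util in records:
        by_job[job_id].append(util)
    return {job: sum(v) / len(v) for job, v in by_job.items()}

def underutilized(records, threshold=50.0):
    """Jobs whose mean GPU utilization falls below the threshold --
    candidates for rightsizing or rescheduling."""
    return sorted(job for job, avg in mean_utilization(records).items()
                  if avg < threshold)

print(mean_utilization(samples))
print(underutilized(samples))
```

In a real deployment this aggregation would be a Prometheus query over DCGM-exported metrics rather than an in-memory loop; the sketch only shows the shape of the analysis.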
Requirements
- BS/MS/PhD in Engineering, Mathematics, Physical Sciences, or Computer Science (or equivalent experience).
- 8+ years of relevant work experience.
- Strong experience in designing and deploying GPU-accelerated computing infrastructure.
- In-depth knowledge of cluster orchestration and job scheduling technologies (e.g., Slurm, Kubernetes, Ansible, Open OnDemand).
- Experience with container tools (Docker, Singularity) including at-scale deployment of containerized environments.
- Expertise in systems monitoring, telemetry, and systems performance optimization of research computing environments; familiarity with tools such as Prometheus, Grafana, and NVIDIA DCGM.
- Understanding of datacenter networking technologies (InfiniBand, Ethernet, OFED) and hands-on experience with network configuration.
- Familiarity with power and cooling systems architecture for data center infrastructure.
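For context on the scheduler skills listed above, the basic unit of work on a Slurm cluster is a batch script requesting GPUs. A minimal Python sketch that renders such a script (the partition name, time limit, and command are illustrative placeholders, not from the posting):

```python
def sbatch_script(job_name, gpus, hours, command, partition="gpu"):
    """Render a minimal Slurm batch script requesting GPUs.
    All defaults here are illustrative placeholders."""
    lines = [
        "#!/bin/bash",
        f"#SBATCH --job-name={job_name}",
        f"#SBATCH --partition={partition}",
        f"#SBATCH --gpus={gpus}",
        f"#SBATCH --time={hours:02d}:00:00",
        command,
    ]
    return "\n".join(lines) + "\n"

# Example: an 8-GPU, 12-hour training job submitted via `sbatch`.
print(sbatch_script("llm-train", gpus=8, hours=12,
                    command="srun python train.py"))
```

Real site configurations add accounts, memory, and node constraints, but the `#SBATCH` directive form is the same.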
Preferred / Ways to stand out
- Background in deploying LLM training and inference workflows in a research computing environment.
- Experience deploying and evaluating cluster performance using benchmarks such as MLPerf and/or HPL.
- Experience in delivering technical training, workshops, or mentoring researchers on using HPC/AI systems.
- Application- and systems-level knowledge of OpenMPI and NCCL.
- Experience with debugging and profiling tools such as Nsight Systems, Nsight Compute, Compute Sanitizer, GDB, or Valgrind.
Compensation & Benefits
- Base salary range: 184,000 USD - 287,500 USD (determined based on location, experience, and internal pay parity).
- Eligible for equity and company benefits (see benefits reference link).
Other details
- Location: US (Massachusetts); remote options available.
- Employment type: Full time.
- Applications accepted at least until July 29, 2025.
- NVIDIA is an equal opportunity employer committed to diversity and non-discrimination.