Senior Solutions Architect - AI Infrastructure Engineer
at Nvidia
USD 184,000-287,500 per year
Required Skills & Competences
- Ansible @ 4
- Docker @ 4
- Grafana @ 3
- Kubernetes @ 4
- Prometheus @ 3
- Mathematics @ 4
- Mentoring @ 4
- Networking @ 4
- Performance Optimization @ 3
- Debugging @ 4
- LLM @ 4
- GPU @ 4
Details
Are you an experienced systems architect with an interest in advancing artificial intelligence (AI) and high-performance computing (HPC) in academic and research environments? We are looking for a Solutions Architect to join the higher education and research team. In this role, you will work with universities and research institutions to optimize the design and deployment of AI infrastructure. The team applies expertise in accelerated software and hardware systems to help enable advancements in AI, deep learning, and scientific research. This role requires a strong background in building and deploying research computing clusters, running AI and HPC workloads, and optimizing system performance at scale.
Responsibilities
- Collaborate as a key technical advisor for the design, build-out, and optimization of university-level research computing infrastructures powering GPU-accelerated scientific workflows.
- Work with university research computing teams to optimize hardware utilization with software orchestration tools such as NVIDIA Base Command, Kubernetes, Slurm, and Jupyter notebook environments.
- Implement systems monitoring and management tools to optimize resource utilization and gain insight into the most demanding application workloads at research computing centers.
- Document findings and produce targeted training, whitepapers, blogs, and wiki articles; work through hard problems directly with researchers.
- Collaborate with researchers to gather feature requests and product feedback for product and engineering teams.
Requirements
- BS/MS/PhD in Engineering, Mathematics, Physical Sciences, or Computer Science (or equivalent experience).
- 8+ years of relevant work experience.
- Strong experience designing and deploying GPU-accelerated computing infrastructure.
- In-depth knowledge of cluster orchestration and job scheduling technologies (e.g., Slurm, Kubernetes, Ansible, Open OnDemand).
- Experience with container tools (Docker, Singularity) including at-scale deployment of containerized environments.
- Expertise in systems monitoring, telemetry, and performance optimization for research computing environments; familiarity with Prometheus, Grafana, and NVIDIA DCGM.
- Understanding of datacenter networking technologies (InfiniBand, Ethernet, OFED) and experience with network configuration.
- Familiarity with power and cooling systems architecture for data center infrastructure.
Ways to stand out
- Background in deploying LLM training and inference workflows in a research computing environment.
- Experience deploying and evaluating cluster performance using benchmarks such as MLPerf and/or HPL.
- Experience delivering technical training, workshops, or mentoring researchers on HPC/AI systems.
- Applications and systems-level knowledge of OpenMPI and NCCL.
- Experience with debugging and profiling tools (e.g., Nsight Systems, Nsight Compute, Compute Sanitizer, GDB, Valgrind).
Compensation & Benefits
- Base salary range: 184,000 USD - 287,500 USD (determined by location, experience, and the pay of employees in similar positions).
- You will also be eligible for equity and benefits (link to benefits provided in original posting).
Application
- Applications for this job will be accepted at least until July 29, 2025.
Equal Opportunity
- NVIDIA is committed to fostering a diverse work environment and is an equal opportunity employer. The employer does not discriminate on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.