Senior AI-HPC Storage Engineer

at Nvidia

📍 Santa Clara, United States

$180,000–$339,200 per year

SENIOR
✅ Hybrid

Used Tools & Technologies

Not specified

Required Skills & Competences

CentOS @ 6, Docker @ 4, Linux @ 6, Python @ 6, GCP @ 7, Leadership @ 4, AWS @ 7, Azure @ 7, Bash @ 6

Details

As a member of the GPU AI/HPC Infrastructure team, you will provide leadership in the design and implementation of groundbreaking fast-storage solutions that enable demanding deep learning, high-performance computing, and other computationally intensive workloads. We are seeking an expert to identify architectural changes and/or completely new approaches to fast storage for our GPU compute clusters. As that expert, you will help us tackle the strategic challenges of next-generation storage: designing storage for large-scale, high-performance workloads, evolving our private/public cloud strategy, and modeling capacity and planning growth across our global computing environment.

Responsibilities

  • Research and implement distributed storage services.
  • Design and implement an on-prem AI/HPC infrastructure, supplemented with cloud computing, to support the growing needs of NVIDIA.
  • Design and implement scalable and efficient next-gen storage solutions tailored for data-intensive applications, optimizing performance and cost-effectiveness.
  • Develop tooling to automate management of large-scale infrastructure environments, to automate operational monitoring and alerting, and to enable self-service consumption of resources (a brief sketch follows this list).
  • Document general procedures and practices and perform technology evaluations related to distributed file systems.
  • Collaborate across teams to better understand developers' workflows and gather their infrastructure requirements.
  • Influence and guide methodologies for building, testing, and deploying applications to ensure optimal performance and resource utilization.
  • Support our researchers in running their workflows on our clusters, including performance analysis and optimization of deep learning workflows.
  • Perform root-cause analysis and suggest corrective actions for problems at both small and large scales.
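
The tooling bullet above is easiest to picture with a concrete sketch. The following Bash snippet (Bash being one of the scripting languages listed in the requirements) checks capacity on a parallel-filesystem mount and emits a warning; the mount point, threshold, and alerting path are illustrative assumptions, not details from the posting.

    #!/usr/bin/env bash
    # Hypothetical capacity check for a parallel-filesystem mount (e.g. Lustre or GPFS).
    # Mount point, threshold, and the alert mechanism are illustrative assumptions.
    set -euo pipefail

    MOUNT_POINT="${1:-/lustre}"   # assumed mount point
    THRESHOLD_PCT="${2:-85}"      # warn when usage exceeds this percentage

    # GNU df reports usage as e.g. " 87%"; strip everything but the digits.
    usage_pct=$(df --output=pcent "$MOUNT_POINT" | tail -1 | tr -dc '0-9')

    if [ "$usage_pct" -ge "$THRESHOLD_PCT" ]; then
        # A production version would page an on-call rotation or open a ticket;
        # here we only log to stderr.
        echo "WARNING: $MOUNT_POINT is at ${usage_pct}% (threshold ${THRESHOLD_PCT}%)" >&2
    fi

In practice, checks like this would run from a scheduler or monitoring agent across every cluster filesystem, feeding the operational monitoring and alerting mentioned above.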

Requirements

  • Bachelor’s degree in Computer Science, Electrical Engineering, or a related field, or equivalent experience.
  • 8+ years of experience designing and operating large-scale storage infrastructure.
  • Experience analyzing and tuning performance for a variety of AI/HPC workloads.
  • Experience with one or more parallel or distributed filesystems such as Lustre or GPFS is a must.
  • Proficiency with CentOS/RHEL and/or Ubuntu Linux distributions, including Python programming and Bash scripting.
  • Strong experience operating services in one of the leading cloud environments (AWS, Azure, or GCP).
  • Experience with AI/HPC cluster job schedulers such as SLURM or LSF.
  • In-depth understanding of container technologies such as Docker and Enroot.
  • Experience with AI/HPC workflows that use MPI (an illustrative job script follows this list).
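
For context on the scheduler, container, and MPI items above, here is a minimal, hypothetical SLURM batch script showing an MPI workload launched inside a container. The partition, resource counts, container image, training script, and the --container-image flag (provided by the pyxis/Enroot SLURM plugin) are assumptions about a typical GPU cluster, not requirements stated in this posting.

    #!/usr/bin/env bash
    # Illustrative SLURM batch script for a containerized MPI workload.
    # Partition name, resource counts, and image are hypothetical examples.
    #SBATCH --job-name=mpi-training-demo
    #SBATCH --partition=gpu
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=8
    #SBATCH --gpus-per-node=8
    #SBATCH --time=01:00:00

    # One MPI rank per task, launched inside a container image;
    # --container-image assumes the pyxis/Enroot SLURM plugin is installed.
    srun --mpi=pmix \
         --container-image=nvcr.io/nvidia/pytorch:24.01-py3 \
         python train.py

Submission follows the usual pattern (sbatch job.sh), with the scheduler allocating the GPU nodes and wiring up the MPI ranks.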

Benefits

NVIDIA offers highly competitive salaries and a comprehensive benefits package. We have some of the most resourceful and talented people in the world working for us, and, due to unprecedented growth, our extraordinary engineering teams are growing fast. If you're a creative and autonomous engineer with a real passion for technology, we want to hear from you.