Senior Software Architect - Deep Learning and HPC Communications

at Nvidia

📍 Santa Clara, United States

$180,000-339,200 per year

SENIOR
✅ On-site

Used Tools & Technologies

Not specified

Required Skills & Competences

Linux @ 7, Algorithms @ 4, TensorFlow @ 4, Communication @ 4, Networking @ 4, Parallel Programming @ 4, Debugging @ 4, System Architecture @ 7, PyTorch @ 4, CUDA @ 4

Details

NVIDIA is leading groundbreaking developments in Artificial Intelligence, High Performance Computing, and Visualization. The GPU -- our invention -- serves as the visual cortex of modern computers and is at the heart of our products and services. Our work opens up new universes to explore, enables groundbreaking creativity and discovery, and powers inventions that were once considered science fiction, from artificial intelligence to autonomous cars.

What we are seeking:

We are the GPU Communications Libraries and Networking team at NVIDIA. We build communication libraries like NCCL, NVSHMEM, and UCX that are crucial for scaling Deep Learning and HPC. We’re seeking a Senior Software Architect to help co-design next-gen data center platforms and scalable communications software.

DL and HPC applications have huge compute demands and already run at scales of up to tens of thousands of GPUs. GPUs are connected with high-speed interconnects (e.g. NVLink, PCIe) within a node and with high-speed networking (e.g. InfiniBand, Ethernet) across nodes. Efficient and fast communication between GPUs directly impacts end-to-end application performance. This impact continues to grow with the increasing scale of next-generation systems. This is an outstanding opportunity to advance the state of the art, break performance barriers, and deliver platforms the world has never seen before. Are you ready to build the new and innovative technologies that will help realize NVIDIA's vision?

What you will be doing:

  • Investigate opportunities to improve communication performance by identifying bottlenecks in today's systems.
  • Design and implement new communication technologies to accelerate AI and HPC workloads.
  • Explore innovative hardware and software solutions for our next-generation platforms as part of co-design efforts involving GPU, networking, and software architects.
  • Build proofs-of-concept, conduct experiments, and perform quantitative modeling to evaluate and drive new innovations.
  • Use simulation to explore the performance of large GPU clusters, at scales of hundreds of thousands of GPUs.

What we need to see:

  • M.S./Ph.D. degree in CS/CE or equivalent experience.
  • 5+ years of relevant experience.
  • Excellent C/C++ programming and debugging skills.
  • Experience with parallel programming models (MPI, SHMEM) and at least one communication runtime (MPI, NCCL, NVSHMEM, OpenSHMEM, UCX, UCC).
  • Deep understanding of operating systems, computer architecture, and system architecture.
  • Solid fundamentals in network architecture, topology, algorithms, and communication scaling relevant to AI and HPC workloads.
  • Strong experience with Linux.
  • Ability and flexibility to work and communicate effectively in a multi-national, multi-time-zone corporate environment.

Ways to stand out from the crowd:

  • Expertise in related technology and passion for what you do. Experience with CUDA programming and NVIDIA GPUs. Knowledge of high-performance networks like InfiniBand, RoCE, NVLink, etc.
  • Experience with Deep Learning Frameworks such as PyTorch, TensorFlow, etc. Knowledge of deep learning parallelisms and mapping to the communication subsystem. Experience with HPC applications.
  • Strong collaborative and interpersonal skills and a proven track record of effectively guiding and influencing within a dynamic and multi-functional environment.