Senior Deep Learning Systems Software Engineer - AI Infrastructure

at Nvidia

πŸ“ Santa Clara, United States

$180,000–$339,200 per year

SENIOR
✅ On-site


Used Tools & Technologies

Not specified

Required Skills & Competences

  • Linux @ 4
  • Python @ 6
  • Algorithms @ 7
  • Distributed Systems @ 4
  • Networking @ 4
  • Performance Optimization @ 4
  • PyTorch @ 7

Details

NVIDIA is an industry leader with groundbreaking developments in High-Performance Computing, Artificial Intelligence, and Visualization. The GPU, our invention, serves as the visual cortex of modern computers and is at the heart of our products and services. Our work opens up new universes to explore, enables amazing creativity and discovery, and powers what were once science-fiction inventions, from artificial intelligence to autonomous cars.

NVIDIA is seeking senior engineers focused on performance analysis and optimization to help us squeeze every last clock cycle out of all facets of deep learning, from training to inference, one of today's most important workloads. If you are unafraid to work across all layers of the hardware/software stack, from GPU architecture to deep learning framework, to achieve peak performance, we want to hear from you! This role offers an opportunity to directly impact the hardware and software roadmap at a fast-growing technology company that leads the AI revolution, while helping deep learning users around the globe enjoy ever-higher training speeds.

Responsibilities

  • Understand, analyze, profile, and optimize deep learning workloads on state-of-the-art hardware and software platforms.
  • Build tools to automate workload analysis, workload optimization, and other critical workflows.
  • Collaborate with cross-functional teams to analyze and optimize cloud application performance on diverse GPU architectures.
  • Identify bottlenecks and inefficiencies in application code and propose optimizations to enhance GPU utilization.
  • Drive end-to-end platform optimization from a hardware level to the application and service levels.
  • Design and implement performance benchmarks and testing methodologies to evaluate application performance.
  • Provide guidance and recommendations on optimizing cloud-native applications for speed, scalability, and resource efficiency.
  • Share knowledge and best practices with domain expert teams as they transition applications to distributed environments.

Requirements

  • Master's degree in CS, EE, or CSEE, or equivalent experience.
  • 8+ years of experience in application performance engineering.
  • Experience using large-scale, multi-node GPU infrastructure on-premises or in CSPs.
  • Background in deep learning model architectures and experience with PyTorch and large-scale distributed training.
  • Experience with application profiling tools such as NVIDIA Nsight, Intel VTune, etc.
  • Deep understanding of computer architecture and familiarity with the fundamentals of GPU architecture; experience with NVIDIA's infrastructure and software stacks.
  • Proven experience analyzing, modeling, and tuning DL application performance.
  • Proficiency in Python and C/C++ for analyzing and optimizing application code.

Ways to stand out from the crowd:

  • Strong fundamentals in algorithms and GPU programming experience (CUDA or OpenCL).
  • Understanding of NVIDIA's server and software ecosystem.
  • Hands-on experience in performance optimization and benchmarking on large-scale distributed systems.
  • Hands-on experience with NVIDIA GPUs, HPC storage, networking, and cloud computing.
  • In-depth understanding of storage systems, Linux file systems, and RDMA networking.

NVIDIA is widely considered to be one of the technology world's most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us. If you're creative and autonomous, we want to hear from you.