Senior Machine Learning Infrastructure Engineer - DGX Cloud
Required Skills & Competences
Security (4), System Administration (7), Go (4), Kubernetes (4), Linux (7), Python (4), Algorithms (4), Data Structures (4), Distributed Systems (3), Machine Learning (4), Data Science (4), Hiring (4), Communication (3), Mathematics (4), Networking (3), GPU (4)
Details
NVIDIA is hiring engineers to scale up its AI infrastructure. We expect you to have a strong programming background; knowledge of datacenter hardware, operations, and networking; familiarity with software testing, deployment, and distributed systems; and excellent communication and planning abilities. We also welcome out-of-the-box thinkers who bring new ideas and a strong bias for execution. Expect to be constantly challenged, improving, and evolving. You and the other engineers on this team will help advance NVIDIA's capacity to build and deploy leading infrastructure for a broad range of AI applications at the core of modern data science.
For two decades, we have pioneered visual computing, the art and science of computer graphics. With the invention of the GPU - the engine of modern visual computing - the field has expanded to encompass video games, movie production, product design, medical diagnosis and scientific research. Today, we stand at the beginning of the next era, the AI computing era, ignited by a new computing model, GPU deep learning.
Responsibilities
- Contribute to a comprehensive platform that automates GPU asset provisioning, configuration, and lifecycle management across cloud providers.
- Build end-to-end automation of datacenter operations, break/fix, and lifecycle management for large-scale Machine Learning systems.
- Implement monitoring and health management capabilities for reliability, availability, and scalability of GPU assets, drawing on multiple data streams including GPU hardware diagnostics and cluster/network telemetry (a minimal sketch follows this list).
- Build automated test infrastructure to qualify distributed systems for operation.
- Partner with engineering teams across NVIDIA to integrate software across the stack, from hardware up to AI training applications.
- Constantly innovate, uncovering new problems and delivering their solutions.
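As a rough illustration of the GPU health-management work described above, the following is a minimal Go sketch that periodically samples per-GPU temperature and utilization by shelling out to nvidia-smi. The 30-second interval, the 85 C threshold, and the plain log output are illustrative assumptions rather than details from the posting; a production fleet would feed these samples into cluster telemetry and alerting pipelines instead of a local log.

```go
package main

import (
	"log"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// gpuStat holds one sample from a single GPU.
type gpuStat struct {
	index       int
	temperature int // degrees Celsius
	utilization int // percent
}

// sampleGPUs shells out to nvidia-smi and parses its CSV output.
func sampleGPUs() ([]gpuStat, error) {
	out, err := exec.Command("nvidia-smi",
		"--query-gpu=index,temperature.gpu,utilization.gpu",
		"--format=csv,noheader,nounits").Output()
	if err != nil {
		return nil, err
	}
	var stats []gpuStat
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		fields := strings.Split(line, ",")
		if len(fields) != 3 {
			continue
		}
		idx, _ := strconv.Atoi(strings.TrimSpace(fields[0]))
		temp, _ := strconv.Atoi(strings.TrimSpace(fields[1]))
		util, _ := strconv.Atoi(strings.TrimSpace(fields[2]))
		stats = append(stats, gpuStat{index: idx, temperature: temp, utilization: util})
	}
	return stats, nil
}

func main() {
	const tempThreshold = 85 // illustrative threshold, not from the posting
	for {
		stats, err := sampleGPUs()
		if err != nil {
			log.Printf("sampling failed: %v", err)
		}
		for _, s := range stats {
			if s.temperature >= tempThreshold {
				log.Printf("GPU %d over threshold: %d C (util %d%%)", s.index, s.temperature, s.utilization)
			}
		}
		time.Sleep(30 * time.Second)
	}
}
```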
Requirements
- Highly motivated, with strong communication skills and the ability to work with cross-functional teams, principal engineers, and architects across organizational boundaries and geographies.
- 5+ years software engineering experience on large-scale production systems.
- BS in Computer Science/Engineering/Physics/Mathematics or equivalent experience.
- Expert-level knowledge of systems programming languages (Go, Python) and a solid understanding of data structures and algorithms.
- Strong background in Linux system administration and management.
- Experience with cluster management systems such as Kubernetes and SLURM (see the sketch after this list).
- Understanding of performance, security, and reliability in complex distributed systems, including system-level architecture, data synchronization, fault tolerance, and state management.
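To make the cluster-management requirement concrete, here is a minimal client-go sketch that lists Kubernetes nodes and reports their allocatable nvidia.com/gpu capacity (the resource name exposed by the NVIDIA device plugin). The kubeconfig location and the simple stdout report are assumptions for illustration only, not part of the role description.

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a local kubeconfig; an in-cluster deployment would use rest.InClusterConfig().
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatalf("loading kubeconfig: %v", err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatalf("creating client: %v", err)
	}

	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatalf("listing nodes: %v", err)
	}
	for _, node := range nodes.Items {
		// "nvidia.com/gpu" is the resource advertised by the NVIDIA device plugin.
		gpus := node.Status.Allocatable[corev1.ResourceName("nvidia.com/gpu")]
		fmt.Printf("%s\tallocatable GPUs: %s\n", node.Name, gpus.String())
	}
}
```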
Ways to Stand Out
- Proficiency in architecting and managing large-scale distributed systems independently of any single cloud provider.
- Deep knowledge of datacenter operations and GPU hardware.
- Hands-on experience with RDMA networking.
- Advanced hands-on experience and understanding of Kubernetes and SLURM cluster management.
- Hands-on experience in Machine Learning Operations.
- Experience with Bright Cluster Manager and hardware fleet management systems.
- Proven operational excellence in designing and maintaining AI infrastructure.
Benefits
- Base salary range: 148,000 USD - 287,500 USD, determined by location, experience, and the pay of employees in similar positions.
- Eligibility for equity and benefits.
- NVIDIA is committed to diversity and is an equal opportunity employer.