Senior System Architect, GPU
at Nvidia
Santa Clara, United States
USD 224,000-425,500 per year
Used Tools & Technologies
Not specified
Required Skills & Competences
Python @ 6, Leadership @ 4, Communication @ 7, Networking @ 7, Data Analysis @ 6, System Architecture @ 4
Details
We are looking for a Senior System Architect for our GPU team!
A key part of NVIDIA's strength is to innovate and deliver the highest performance in the world for AI and accelerated computing. We are constantly looking for ways to improve our GPU architecture and maintain our leadership. NVIDIA is seeking a motivated system architect to define future aspects of our GPU by employing pioneering technologies. Your role will be cross-disciplinary, working with software, ASIC design, verification, physical design, VLSI and platform teams. Our system architects excel at pushing the state of the art while making the best engineering trade-offs.
Responsibilities
- Develop GPU architecture innovations and improvements, optimizing along the axes of performance, power efficiency, complexity, area, yield, effort, and schedule.
- Evaluate and benchmark GPU configurations (core counts, memory bandwidth, interconnect topologies) enabled by different advanced packaging technologies, identifying optimal designs for future data center workloads (a simplified sketch follows this list).
- Develop and enhance performance analysis infrastructure, including performance simulators, testbench components and analysis tools, to evaluate configurations under different constraints.
- Implement and maintain high-level functional and performance models. Analyze application workloads and performance simulation results to identify areas of architecture improvements.
- Document decisions in system architecture specifications, working with multi-functional teams across the organization, including ASIC design, software, and VLSI, to review and explore architecture trade-offs, define overall solutions, and track development progress.
- Collaborate with other functional teams (ASIC, floorplan design, package design, systems engineering, etc.) to validate packaging choices against performance, cost, and scalability targets.
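To illustrate the kind of configuration benchmarking this role involves, below is a minimal roofline-style sketch in Python. It is not NVIDIA tooling; the configuration names, peak throughput, and bandwidth numbers are illustrative assumptions only.

```python
# Minimal sketch, not NVIDIA's actual tooling: a roofline-style comparison of
# hypothetical GPU configurations. All names and numbers are illustrative
# assumptions, not real product specifications.
from dataclasses import dataclass


@dataclass
class GpuConfig:
    name: str
    peak_tflops: float   # peak compute throughput, TFLOP/s (assumed)
    mem_bw_tbs: float    # memory bandwidth, TB/s (assumed)


def attainable_tflops(cfg: GpuConfig, arithmetic_intensity: float) -> float:
    """Roofline model: throughput is capped by compute or by memory bandwidth.

    arithmetic_intensity is in FLOP/byte; TB/s * FLOP/byte == TFLOP/s,
    so the units line up directly.
    """
    return min(cfg.peak_tflops, cfg.mem_bw_tbs * arithmetic_intensity)


if __name__ == "__main__":
    configs = [
        GpuConfig("config_a", peak_tflops=1000.0, mem_bw_tbs=3.0),
        GpuConfig("config_b", peak_tflops=800.0, mem_bw_tbs=5.0),
    ]
    # Sweep arithmetic intensities typical of bandwidth- vs compute-bound kernels.
    for ai in (1.0, 50.0, 500.0):
        for cfg in configs:
            print(f"AI={ai:6.1f} FLOP/B  {cfg.name}: "
                  f"{attainable_tflops(cfg, ai):8.1f} TFLOP/s")
```

A sweep like this shows where each hypothetical configuration crosses from bandwidth-bound to compute-bound, which is the kind of trade-off the role weighs before committing to a packaging or memory-subsystem choice.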
Requirements
- Master's/PhD in Computer Engineering, Computer Science or related fields (or equivalent experience)
- A minimum of 8 years of relevant work experience in GPU or CPU System Architecture development
- Proficiency in data analysis (Python, Excel) to correlate configuration changes with performance metrics (see the sketch after this list).
- Deep understanding of accelerated computing and AI data center requirements and trade-offs, including performance bottlenecks, TCO, Power Delivery Network (PDN), data center networking, etc.
- Strong communication and interpersonal skills, as well as the ability to thrive in a dynamic, collaborative, distributed team.
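As a concrete example of correlating configuration changes with performance metrics, here is a minimal Python sketch assuming sweep results are available as a CSV with one row per simulated configuration; the file name and column names are hypothetical.

```python
# Minimal sketch, assuming a performance-simulator sweep has been exported to
# CSV with one row per configuration. File and column names are hypothetical.
import pandas as pd

df = pd.read_csv("sweep_results.csv")  # hypothetical sweep output

params = ["core_count", "mem_bw_gbs", "noc_links"]           # assumed config knobs
metrics = ["achieved_tflops", "dram_util", "energy_per_op"]  # assumed metrics

# Pearson correlation between each configuration knob and each metric:
# a quick first pass before deeper per-workload analysis.
corr = df[params + metrics].corr().loc[params, metrics]
print(corr.round(2))
```

A simple correlation pass like this is only a starting point; in practice the role also digs into per-workload traces to explain why a given knob moves a given metric.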
Ways to stand out from the crowd:
- Experience with GPU architecture, especially in off-chip IO, memory subsystem, and/or Network-on-Chip (NoC)/Interconnect.
- Expertise in analyzing performance scaling and bottlenecks at device and system levels for AI/accelerated computing workloads.
- Knowledgeable in advanced packaging technologies and their costs and benefits.
- Consistent track record of efficiently implementing complex architectural features.
- Exceptional problem-solving skills with a focus on optimizing performance, area, complexity, and power.