Senior Timing CAD Engineer, Applied AI
at NVIDIA
Santa Clara, United States
USD 136,000-212,750 per year
Used Tools & Technologies
Not specified
Required Skills & Competences
Python (6), TensorFlow (6), PyTorch (6), Experimentation (7), Hiring (4), Debugging (4)
Details
NVIDIA's ASIC-PD Methodology organization is building the next generation of AI-assisted timing and constraint sign-off, integrating advanced analytics, orchestration frameworks, and domain-specific reasoning to accelerate design closure for multi-billion transistor chips.
You will join a cross-disciplinary team as an Applied AI Engineer to lead end-to-end solution development spanning data generation, model training, orchestration, and agentic automation for timing and constraint analysis workflows. The role focuses on systems that learn from sign-off data, reason across flows, and assist engineers in achieving faster, more predictable closure.
Responsibilities
- Architect and develop AI-driven solutions for static timing analysis, constraints quality, and closure prediction.
- Integrate heterogeneous data sources (timing reports, constraint graphs, design metadata, silicon correlation) into structured knowledge bases and training pipelines; a minimal data-ingestion sketch follows this list.
- Develop autonomous analysis agents that interact with timing tools (e.g., PrimeTime, NanoTime, Tempus) to perform multi-corner, multi-mode optimization and constraint debugging.
- Implement scalable orchestration across Flow-Server and Digital Engineer platforms to enable AI-in-the-loop decision-making for sign-off readiness.
- Collaborate with methodology and sign-off teams to validate models on live projects, improving coverage, predictability, and engineering productivity.
- Build interpretable AI pipelines using graph neural networks, large language models, and process-aware reasoning engines for timing closure recommendations.
- Own the end-to-end lifecycle from data curation and model training to deployment, monitoring, and continuous improvement in production environments.
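To make the data-integration bullet above concrete, here is a minimal sketch that folds per-path slack records into a NetworkX graph keyed by startpoint/endpoint pairs. The record fields, instance names, and corner labels are illustrative assumptions, not a known NVIDIA schema; a real pipeline would parse them out of sign-off reports.

```python
import networkx as nx

def build_timing_graph(paths):
    """paths: iterable of dicts with 'startpoint', 'endpoint', 'slack',
    'corner', and 'mode' keys -- a simplified stand-in for parsed report data."""
    g = nx.DiGraph()
    for p in paths:
        u, v = p["startpoint"], p["endpoint"]
        if g.has_edge(u, v):
            # Track the worst (most negative) slack per edge across corners/modes.
            g[u][v]["slack"] = min(g[u][v]["slack"], p["slack"])
            g[u][v]["corners"].add((p["corner"], p["mode"]))
        else:
            g.add_edge(u, v, slack=p["slack"], corners={(p["corner"], p["mode"])})
    return g

g = build_timing_graph([
    {"startpoint": "u_core/reg_a", "endpoint": "u_core/reg_b",
     "slack": -0.012, "corner": "ss_0p72v_125c", "mode": "func"},
    {"startpoint": "u_core/reg_a", "endpoint": "u_core/reg_b",
     "slack": 0.034, "corner": "ff_0p88v_m40c", "mode": "func"},
])
print(g["u_core/reg_a"]["u_core/reg_b"])  # worst slack -0.012, two corners seen
```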
Requirements
- BS (or equivalent experience) in Electrical or Computer Engineering with at least 3 years of experience in AI/ML solution development, ideally for EDA, semiconductor, or complex data domains.
- Strong background in VLSI/ASIC design with deep understanding of timing, constraints, STA, or sign-off workflows.
- Proficiency in Python and in PyTorch and/or TensorFlow.
- Experience with graph or agentic AI frameworks (e.g., LangGraph, LangChain, Ray, NetworkX).
- Experience developing data pipelines, knowledge graphs, or process models for structured engineering data.
- Working knowledge of timing tools (PrimeTime, NanoTime, Tempus) and scripting integration with EDA environments (see the batch-mode sketch after this list).
- Experience with AI orchestration frameworks, prompt-based reasoning, and multi-agent automation is highly desirable.
- Strong problem-solving skills, technical depth, and a mindset of experimentation and continuous learning.
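For the EDA-scripting requirement, the common pattern is a thin Python driver that renders a Tcl script and runs the timing shell in batch mode. The sketch below assumes a PrimeTime-style `pt_shell -f <script>` invocation; exact commands, flags, and report redirection vary by tool, version, and site setup.

```python
import pathlib
import subprocess
import tempfile

TCL_TEMPLATE = """\
read_verilog {netlist}
link_design {top}
read_sdc {sdc}
report_timing -max_paths 100 > {report}
exit
"""

def run_sta(netlist, top, sdc, report="timing.rpt"):
    # Render a throwaway Tcl script, then run the shell in batch mode.
    tcl = TCL_TEMPLATE.format(netlist=netlist, top=top, sdc=sdc, report=report)
    with tempfile.NamedTemporaryFile("w", suffix=".tcl", delete=False) as f:
        f.write(tcl)
        script = f.name
    # A production flow would add logging, retries, and report parsing
    # into the knowledge base sketched earlier.
    subprocess.run(["pt_shell", "-f", script], check=True)
    return pathlib.Path(report)
```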
Ways to stand out
- Experience with constraint validation, false-path detection, and timing-exception modeling (a toy false-path check is sketched after this list).
- Prior exposure to AI in physical design automation, silicon/process modeling, or EDA flow automation.
- Contributions to open-source AI or flow automation projects.
- Publications or patents in AI for design automation or semiconductor engineering.
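As one concrete flavor of the false-path bullet: an exception whose from/to points no longer exist or are no longer structurally connected is stale. This toy check reuses the graph from the first sketch; real exception semantics (-through points, clock groups, wildcard matching) are much richer and are not modeled here.

```python
import networkx as nx

def stale_false_paths(g, exceptions):
    """Flag set_false_path from/to pairs with no structural path in the
    netlist graph `g` (the DiGraph built in the earlier sketch).
    exceptions: list of (from_point, to_point) tuples; SDC parsing not shown."""
    stale = []
    for src, dst in exceptions:
        if src not in g or dst not in g or not nx.has_path(g, src, dst):
            stale.append((src, dst))
    return stale
```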
Compensation & Benefits
- Base salary range: USD 136,000 - 212,750 per year (final base salary determined by location, experience, and the pay of employees in comparable positions).
- Eligible for equity and benefits (link to company benefits provided in original posting).
Other details
- Applications accepted at least until October 26, 2025.
- NVIDIA is an equal opportunity employer and values diversity in hiring and promotion practices.