Research-Hardware Codesign Engineer

at OpenAI
USD 230,000-460,000 per year
MIDDLE
✅ Hybrid
✅ Relocation


Used Tools & Technologies

Not specified

Required Skills & Competences

  • Python @ 6
  • Communication @ 6
  • Prioritization @ 6
  • Rust @ 6
  • Debugging @ 3
  • System Architecture @ 3
  • CUDA @ 3
  • PyTorch @ 1

Details

About the Team

OpenAI’s Hardware organization develops silicon and system-level solutions designed for the unique demands of advanced AI workloads. The team is responsible for building the next generation of AI silicon while working closely with software and research partners to co-design hardware tightly integrated with AI models. In addition to delivering production-grade silicon for OpenAI’s supercomputing infrastructure, the team also creates custom design tools and methodologies that accelerate innovation and enable hardware optimized specifically for AI.

About the Role

We’re seeking a Research-Hardware Codesign Engineer to operate at the boundary between model research and silicon/system architecture. You’ll help shape the numerics, architecture, and technology bets of future OpenAI silicon in collaboration with both Research and Hardware.

Your work will include debugging gaps between rooflines and reality, writing quantization kernels, de-risking numerics via model evals, quantifying system architecture tradeoffs, and implementing novel numeric RTL. This is a hands-on role for people who seek out hard problems, get to ground truth, and drive solutions to production. Strong prioritization and clear, honest communication are essential.

Location: San Francisco, CA (Hybrid: 3 days/week onsite). Relocation assistance available.

Compensation: $230K – $460K • Offers Equity

Responsibilities

  • Build on our roofline simulator to track evolving workloads, and deliver analyses that quantify the impact of system architecture decisions and support technology pathfinding.
  • Debug gaps between performance simulation and real measurements; clearly communicate root cause, bottlenecks, and invalid assumptions.
  • Write emulation kernels for low-precision numerics and lossy compression schemes, and give Research the information they need to trade efficiency against model quality.
  • Prototype numerics modules by pushing RTL through synthesis; hand off novel numerics cleanly, or occasionally own an RTL module end-to-end.
  • Proactively pull in new ML workloads, prototype them with rooflines and/or functional simulation, and drive initial evaluation of new opportunities or risks.
  • Understand the whole picture from ML science to hardware optimization, and slice this end-to-end objective into near-term deliverables.
  • Build ad-hoc collaborations across teams with very different goals and areas of expertise, and keep progress unblocked.
  • Communicate design tradeoffs clearly with explicit assumptions and confidence levels; produce a trail of evidence that enables confident execution.

Requirements

  • Exceptional track record of high-quality technical output, and a bias for shipping a prototype now and iterating later in the absence of clear requirements.
  • Strong Python, plus C++ or Rust, with a careful approach to correctness and an intuition for clean extensibility.
  • Experience writing Triton, CUDA, or similar, and an understanding of the resulting mapping of tensor ops to functional units.
  • Working knowledge of PyTorch or JAX; experience in large ML codebases is a plus.
  • Practical understanding of floating point numerics, the ML tradeoffs of reduced precision, and the current state of the art in model quantization.
  • Deep understanding of transformer models, and strong intuition for transformer rooflines and the tradeoffs of sharded training and inference in large-scale ML systems.
  • Experience writing RTL (especially for floating point logic) and an understanding of PPA tradeoffs are a plus.
  • Strong cross-functional communication (e.g., across ML researchers and hardware engineers); ability to break ambiguous early-incubation ideas into concrete areas where progress can be made.

Benefits

  • Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts.
  • Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit).
  • 401(k) retirement plan with employer match.
  • Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks).
  • Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees.
  • 13+ paid company holidays, multiple paid coordinated company office closures throughout the year, plus paid sick or safe time as required by law.
  • Mental health and wellness support; employer-paid basic life and disability coverage.
  • Annual learning and development stipend.
  • Daily meals in offices and meal delivery credits as eligible.
  • Relocation support for eligible employees.
  • Additional taxable fringe benefits such as charitable donation matching and wellness stipends.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. We are an equal opportunity employer and provide reasonable accommodations to applicants with disabilities. Background checks for applicants will be administered in accordance with applicable law.