Software Engineer, Accelerators

at OpenAI
USD 310,000-380,000 per year
Mid-level
✅ On-site
✅ Relocation

Used Tools & Technologies

Not specified

Required Skills & Competences

  • Communication (level 3)
  • Debugging (level 3)
  • PyTorch (level 3)

Details

The Kernels team at OpenAI builds the low-level software that accelerates our most ambitious AI research. We work at the boundary of hardware and software, developing high-performance kernels, distributed system optimizations, and runtime improvements to make large-scale training and inference more efficient. Our work enables OpenAI to push the limits by ensuring models, from LLMs to recommender systems, run reliably on advanced supercomputing platforms. That includes adapting our software stack to new types of accelerators, tuning system performance end-to-end, and removing bottlenecks across every layer of the stack.

Responsibilities

  • Prototype and enable OpenAI's AI software stack on new, exploratory accelerator platforms.
  • Optimize large-scale model performance (LLMs, recommender systems, distributed AI workloads) for diverse hardware environments.
  • Develop kernels, sharding mechanisms, and system scaling strategies tailored to emerging accelerators.
  • Collaborate on optimizations at the model code level (e.g., PyTorch) and below to enhance performance on non-traditional hardware.
  • Perform system-level performance modeling, debug bottlenecks, and drive end-to-end optimization.
  • Work with hardware teams and vendors to evaluate alternatives to existing platforms and adapt the software stack to their architectures.
  • Contribute to runtime improvements, compute/communication overlapping (see the illustrative sketch after this list), and scaling efforts for frontier AI workloads.
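
As a rough illustration of what "compute/communication overlapping" means in practice, the minimal sketch below uses PyTorch's torch.distributed asynchronous collectives to hide an all-reduce behind independent computation. It assumes a process group has already been initialized; the function and tensor names are hypothetical and do not describe OpenAI's actual stack.

    import torch
    import torch.distributed as dist

    def overlapped_allreduce_step(grad, activation, weight):
        # Launch the gradient all-reduce asynchronously; with async_op=True the
        # call returns a work handle instead of blocking the caller.
        work = dist.all_reduce(grad, op=dist.ReduceOp.SUM, async_op=True)

        # Independent computation (here, a matmul for the next layer) proceeds
        # while the collective is in flight on the communication stream.
        out = activation @ weight

        # Synchronize only at the point where the reduced gradient is needed.
        work.wait()
        return out, grad

In larger training runs the same idea is typically extended with gradient bucketing (as in PyTorch DDP) and stream-level scheduling so that both compute and network stay busy.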

Requirements

  • 3+ years of experience working on AI infrastructure, including kernels, systems, or hardware-software co-design.
  • Hands-on experience with accelerator platforms for AI at data center scale (e.g., TPUs, custom silicon, exploratory architectures).
  • Strong understanding of kernels, sharding, runtime systems, or distributed scaling techniques.
  • Familiarity with optimizing LLMs, CNNs, or recommender models for hardware efficiency.
  • Experience with performance modeling, system debugging, and software stack adaptation for novel architectures.
  • Ability to operate across multiple levels of the stack, rapidly prototype solutions, and navigate ambiguity in early hardware bring-up phases.
  • Interest in shaping the future of AI compute through exploration of alternatives to mainstream accelerators.

Benefits

  • Base pay range: $310,000 - $380,000 plus equity and potential performance-related bonuses.
  • Medical, dental, and vision insurance with employer contributions to Health Savings Accounts.
  • Pre-tax accounts (Health FSA, Dependent Care FSA, commuter expenses).
  • 401(k) retirement plan with employer match.
  • Paid parental, medical, and caregiver leave; flexible PTO; paid company holidays and office closures.
  • Mental health and wellness support; employer-paid basic life and disability coverage.
  • Annual learning and development stipend; daily meals in offices and meal delivery credits as eligible.
  • Relocation support for eligible employees.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that artificial general intelligence benefits all of humanity. We push the boundaries of AI capabilities and seek to safely deploy them to the world through our products. OpenAI is an equal opportunity employer and is committed to providing reasonable accommodations to applicants with disabilities.