Required Skills & Competences
Go (4) · Java (4) · Distributed Systems (4) · gRPC (4) · Rust (4) · Debugging (6)
About the Team
Our Inference team brings OpenAI’s most capable research and technology to the world through our products. We empower consumers, enterprises and developers alike to use and access our state-of-the-art AI models, allowing them to do things that they’ve never been able to before. We focus on performant and efficient model inference, as well as accelerating research progression via model inference.
About the Role
We’re looking for a senior engineer to design and build the load balancer that will sit at the very front of our research inference stack — routing the world’s largest AI models with millisecond precision and bulletproof reliability. This system will serve research jobs where requests must stay “sticky” to the same model instance for hours or days and where even subtle errors can directly degrade model performance.
Responsibilities
- Architect and build the gateway / network load balancer that fronts all research jobs, ensuring long-lived connections remain consistent and performant.
- Design traffic stickiness and routing strategies that optimize for both reliability and throughput.
- Instrument and debug complex distributed systems — with a focus on building world-class observability and debuggability tools (distributed tracing, logging, metrics).
- Collaborate closely with researchers and ML engineers to understand how infrastructure decisions impact model performance and training dynamics.
- Own the end-to-end system lifecycle: from design and code to deploy, operate, and scale.
- Work in an outcome-oriented environment where everyone contributes across layers of the stack, from infra plumbing to performance tuning.
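The sticky-routing responsibility above can be sketched with a classic consistent-hash ring: each backend is placed at many points on a 32-bit ring, and a request key is routed to the first point clockwise from the key's hash, so the same key keeps landing on the same backend until membership changes. This is an illustrative sketch in Go, not the actual routing layer described in the role; the backend names and replica count are invented for the example.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// Ring is a minimal consistent-hash ring. Each backend owns several
// "virtual node" points on the ring to smooth out load.
type Ring struct {
	points  []uint32          // sorted hash points on the ring
	backend map[uint32]string // point -> backend name
}

const replicas = 100 // virtual nodes per backend (illustrative value)

func hashKey(s string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(s))
	return h.Sum32()
}

// NewRing places every backend at `replicas` points and sorts the ring.
func NewRing(backends []string) *Ring {
	r := &Ring{backend: make(map[uint32]string)}
	for _, b := range backends {
		for i := 0; i < replicas; i++ {
			p := hashKey(fmt.Sprintf("%s#%d", b, i))
			r.points = append(r.points, p)
			r.backend[p] = b
		}
	}
	sort.Slice(r.points, func(i, j int) bool { return r.points[i] < r.points[j] })
	return r
}

// Route returns the backend for key: the first ring point at or after
// the key's hash, wrapping around at the end of the ring.
func (r *Ring) Route(key string) string {
	h := hashKey(key)
	i := sort.Search(len(r.points), func(i int) bool { return r.points[i] >= h })
	if i == len(r.points) {
		i = 0 // wrap around the ring
	}
	return r.backend[r.points[i]]
}

func main() {
	ring := NewRing([]string{"replica-a", "replica-b", "replica-c"})
	// The same job ID always routes to the same replica ("stickiness").
	fmt.Println(ring.Route("job-42") == ring.Route("job-42")) // true
}
```

The appeal of this scheme for long-lived research jobs is that adding or removing a backend remaps only the keys adjacent to its points, rather than reshuffling every session the way a plain `hash(key) % n` would.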
Requirements / Qualifications
- Deep experience designing and operating large-scale distributed systems, particularly load balancers, service gateways, or traffic routing layers.
- 5+ years of experience designing for, and debugging in practice, the algorithmic and systems challenges of consistent hashing, sticky routing, and low-latency connection management.
- 5+ years experience as a software engineer and systems architect working on high-scale, high-reliability infrastructure.
- Strong debugging mindset and comfort working with tracing, logs, and metrics to untangle distributed failures.
- Comfortable writing and reviewing production code in Rust or similar systems languages (C/C++, Java, Go, Zig, etc.).
- Experience operating in big tech or high-growth environments is valued.
- Ownership mentality: able to take problems end-to-end and build foundational systems that every model interaction depends on.
Nice to have
- Experience with gateway or load balancing systems (e.g., Envoy, gRPC, custom LB implementations).
- Familiarity with inference workloads (e.g., reinforcement learning, streaming inference, KV cache management, etc.).
- Exposure to debugging and operational excellence practices in large production environments.
Compensation & Benefits
- Salary range: $325,000 - $490,000 per year; offers equity.
- Base pay may vary by market location, knowledge, skills, and experience. Total compensation may include equity and performance-related bonuses.
- Benefits include medical, dental, and vision insurance; employer contributions to HSAs; pre-tax FSAs; 401(k) with employer match; paid parental and medical leave; PTO; paid company holidays; mental health and wellness support; employer-paid basic life and disability coverage; annual learning & development stipend; daily meals / meal credits; and other fringe benefits.
- Relocation support for eligible employees.
Other notes
- Background checks will be administered in accordance with applicable law. OpenAI is an equal opportunity employer and is committed to providing reasonable accommodations to applicants with disabilities.