Threat Modeler Lead

at OpenAI
USD 325,000 per year
SENIOR
✅ On-site
✅ Relocation


Used Tools & Technologies

Not specified

Required Skills & Competences

  • Security: 7
  • Prioritization: 4

Details

The Safety Systems team is responsible for ensuring OpenAI's models can be safely deployed to benefit society. This role owns OpenAI's holistic approach to identifying, modeling, and forecasting risks from frontier AI systems, ensuring that evaluation frameworks, safeguards, and taxonomies are robust, high-coverage, and forward-looking. The Threat Modeler Lead will connect technical, governance, and policy perspectives to prioritize and mitigate severe misuse and alignment risks.

Responsibilities

  • Develop and maintain comprehensive threat models across all misuse areas (bio, cyber, attack planning, etc.).
  • Develop plausible and convincing threat models across loss of control, self-improvement, and other alignment risks from frontier AI systems.
  • Forecast risks by combining technical foresight, adversarial simulation, and emerging trends.
  • Partner closely with technical teams on capability evaluations to ensure coverage of severe risks enabled by frontier AI systems.
  • Work with Bio and Cyber Leads to size residual risk from safeguards and translate threat models into actionable mitigation designs.
  • Explain the rationale for high-investment mitigation efforts and help stakeholders understand prioritization decisions.
  • Serve as a central node connecting technical, governance, and policy perspectives on misuse risk.

Requirements

  • Deep experience in threat modeling, risk analysis, or adversarial thinking (e.g., security, national security, or safety).
  • Strong grasp of AI alignment literature and risks from frontier AI systems.
  • Knowledge of how AI evaluations work and ability to connect evaluation results to capability testing and safeguard sufficiency.
  • Experience working across technical and policy domains to drive rigorous, multidisciplinary risk assessments.
  • Ability to communicate complex risks clearly to both technical and non-technical audiences.
  • Systems thinking and anticipation of second-order and cascading risks.

Benefits

  • Competitive base pay (listed at $325K for this role) with equity and possible performance-related bonuses.
  • Medical, dental, and vision insurance with employer contributions to Health Savings Accounts.
  • Pre-tax Health FSA, Dependent Care FSA, and commuter expense accounts.
  • 401(k) retirement plan with employer match.
  • Paid parental leave, medical and caregiver leave, and flexible PTO.
  • 13+ paid company holidays and coordinated office closures.
  • Mental health and wellness support; employer-paid basic life and disability coverage.
  • Annual learning and development stipend.
  • Daily meals in offices and meal delivery credits as eligible.
  • Relocation support for eligible employees.
  • Additional taxable fringe benefits (charitable donation matching, wellness stipends).

About the Team

The Safety Systems team ensures our best models are safely deployed to the real world, advancing OpenAI's mission to build and deploy safe AGI.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. The company values diverse perspectives and is an equal opportunity employer. Background checks and candidate accommodations are handled in accordance with applicable laws and policies.