Lead Research Engineer / Scientist, Cyber

at OpenAI
USD 460,000 per year
SENIOR
✅ On-site
✅ Relocation


Used Tools & Technologies

Not specified

Required Skills & Competences

  • Security: 4
  • Algorithms: 7
  • Data Structures: 7
  • TensorFlow: 6
  • Leadership: 6
  • PyTorch: 6

Details

The Safety Systems team is responsible for ensuring OpenAI's models can be safely deployed to benefit society. The team focuses on AI safety, learning from deployment, and distributing the benefits of AI while ensuring responsible and safe use.

Role summary

As the lead researcher for cybersecurity risks, you will design, implement, and oversee an end-to-end mitigation stack to prevent severe cyber misuse across OpenAI’s products. This role requires technical depth, decisive leadership, and cross-company influence to ensure safeguards are enforceable, scalable, and effective. You will set technical strategy, drive execution, and ensure products cannot be misused for severe harm.

Responsibilities

  • Lead the full-stack mitigation strategy and implement solutions for model-enabled cybersecurity misuse — from prevention to monitoring, detection, and enforcement.
  • Integrate safeguards across products so protections are consistent, low-latency, and scale with usage and new model surfaces.
  • Make decisive technical trade-offs within the cybersecurity risk domain, balancing coverage, latency, model utility, and user privacy.
  • Partner with risk/threat modeling leadership to align mitigation design with anticipated attacker behaviors and high-impact scenarios.
  • Drive rigorous testing and red-teaming, stress-testing the mitigation stack against evolving threats (e.g., novel exploits, tool-use chains, automated attack workflows) and across product surfaces.

Requirements / Qualifications

  • Passion for AI safety and motivation to make cutting-edge AI models safer for real-world use.
  • Demonstrated experience in deep learning and transformer models.
  • Proficiency with frameworks such as PyTorch or TensorFlow.
  • Strong foundation in data structures, algorithms, and software engineering principles.
  • Familiarity with methods for training and fine-tuning large language models, including distillation, supervised fine-tuning, and policy optimization.
  • Experience working collaboratively with cross-functional teams across research, security, policy, product, and engineering.
  • Decisive leadership in high-stakes, ambiguous environments.
  • Significant experience designing and deploying technical safeguards for abuse prevention, detection, and enforcement at scale.
  • (Nice to have) Background knowledge in cybersecurity or adjacent fields.

Benefits and compensation details

  • Estimated base salary: $460,000; the compensation summary indicates offers also include equity.
  • Base pay may vary depending on market location, knowledge, skills, and experience. Total compensation can include equity and performance-related bonuses where eligible.
  • Medical, dental, and vision insurance with employer contributions to Health Savings Accounts.
  • Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses.
  • 401(k) retirement plan with employer match.
  • Paid parental leave, paid medical and caregiver leave, and flexible PTO (with accrued paid time off for non-exempt employees).
  • 13+ paid company holidays and additional company office closures.
  • Mental health and wellness support; employer-paid basic life and disability coverage.
  • Annual learning and development stipend.
  • Daily meals in offices and meal delivery credits as eligible.
  • Relocation support for eligible employees.
  • Additional taxable fringe benefits such as charitable donation matching and wellness stipends.

Other information

  • Background checks will be administered in accordance with applicable law; certain duties require access to secure information systems.
  • OpenAI is an equal opportunity employer and provides reasonable accommodations to applicants with disabilities.