Required Skills & Competences: Machine Learning
About the Team
The Safety Systems team is dedicated to ensuring the safety, robustness, and reliability of AI models as they are deployed in the real world. OpenAI’s charter calls on us to ensure the benefits of AI are distributed widely. Our Health AI team is focused on enabling universal access to high-quality medical information. We work at the intersection of AI safety research and healthcare applications, aiming to create trustworthy AI models that can assist medical professionals and improve patient outcomes.
About the Role
We’re seeking strong researchers who are passionate about advancing AI safety and improving global health outcomes. As a Research Scientist, you will contribute to the development of safe and effective AI models for healthcare applications by implementing practical and general methods to improve the behavior, knowledge, and reasoning of models in these settings. This will require research into safety and alignment techniques that can generalize towards safe and beneficial AGI.
This role is based in San Francisco, CA. The team uses a hybrid work model of 3 days in the office per week and offers relocation assistance to new employees.
Responsibilities
- Design and apply practical and scalable methods to improve safety and reliability of models, including RLHF, automated red teaming, scalable oversight, etc.
- Evaluate methods using health-related data to ensure models provide accurate, reliable, and trustworthy information.
- Build reusable libraries for applying general alignment techniques to models.
- Proactively identify areas of risk and understand the safety of models and systems.
- Work with cross-team stakeholders to integrate methods into core model training and launch safety improvements in OpenAI’s products.
Requirements
- Passion for AI safety and improving global health outcomes; alignment with OpenAI’s charter.
- 4+ years of experience with deep learning research and large language models (LLMs), especially practical alignment topics such as RLHF, automated red teaming, scalable oversight, etc.
- Ph.D. or equivalent degree in computer science, AI, machine learning, or a related field.
- Experience making practical model improvements for AI model deployment.
- Ability to own problems end-to-end, learn missing knowledge as needed, and work collaboratively in a team environment.
- Bonus: experience in health-related AI research or deployments.
Benefits
- Base pay within the range listed separately; total compensation may also include equity and performance-related bonuses.
- Medical, dental, and vision insurance with employer contributions to Health Savings Accounts.
- Pre-tax accounts (Health FSA, Dependent Care FSA, commuter expenses).
- 401(k) retirement plan with employer match.
- Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents); paid medical and caregiver leave (up to 8 weeks).
- Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees.
- 13+ paid company holidays and coordinated office closures; paid sick or safe time as required by law.
- Mental health and wellness support; employer-paid basic life and disability coverage.
- Annual learning and development stipend; daily meals in offices and meal delivery credits as eligible.
- Relocation support for eligible employees.
- Additional taxable fringe benefits (charitable donation matching, wellness stipends) as applicable.
Additional Notes
- Background checks will be administered in accordance with applicable law for US-based candidates.
- OpenAI is an equal opportunity employer and is committed to providing reasonable accommodations to applicants with disabilities.