Used Tools & Technologies
Not specified
Required Skills & Competences
- Security @ 5
- Data Science @ 3
- Leadership @ 3
- Communication @ 3
- Prioritization @ 3
Details
The Intelligence and Investigations team seeks to rapidly identify and mitigate abuse and strategic risks to ensure a safe online ecosystem. The Strategic Intelligence & Analysis (SIA) team provides safety intelligence for OpenAI’s products by monitoring, analyzing, and forecasting real-world abuse, geopolitical risks, and strategic threats. This role focuses on safety and abuse risks in AI-social environments (e.g., Sora content and sharing, group chats, messaging, and AI-assisted brand and creator experiences).
Responsibilities
- Map and prioritize the AI-social risk landscape:
  - Build and continuously refine a picture of how AI is used in social-like products (Sora-powered clips, group chats, messaging assistants, creator tools).
  - Design and maintain harm taxonomies tailored to AI-mediated communication (synthetic harassment, coordinated AI-assisted brigading, synthetic identity/brand misuse, reputational and intimate harms).
  - Maintain a risk register and prioritization framework by severity, prevalence, exposure, and trajectory (a minimal sketch follows the Responsibilities list).
- Detect and deep dive into emerging abuse patterns:
  - Partner with investigations, operations, and product teams to surface new patterns of misuse across Sora, chats, and partner integrations.
  - Run structured deep dives on incidents (synthetic impersonation, scams, targeted harassment, coordinated influence using AI-generated media).
  - Connect individual incidents into system-level stories about actors, incentives, product design weaknesses, and cross-product spillover.
- Turn analysis into actionable risk intelligence:
  - Translate findings into clear, ranked risk lists and concrete mitigation proposals for product, safety, and policy teams.
  - Collaborate with Safety Systems, Integrity, and Product to scope solutions (classification improvements, UX guardrails, friction, enforcement flows, detection signals).
  - Track mitigation effectiveness: follow indicators, pressure-test assumptions, and push for course corrections when needed.
- Build early warning and measurement capabilities:
  - Help define core metrics and signals for AI-social safety (harm prevalence, severity distributions, escalation rates, brand safety issues).
  - Work with data science and visualization colleagues to shape monitoring views and dashboards highlighting leading indicators and unusual changes.
  - Propose targeted probes, structured reviews, and experiments around major launches and feature changes.
- Provide strategic analysis and future-looking perspectives:
  - Produce concise, decision-ready briefs on AI-social risks for leadership, safety forums, and partner teams.
  - Run scenario analyses exploring how AI-social harms might evolve over 6–24 months.
  - Benchmark OpenAI’s AI-social risk profile and mitigations against external incidents and other platforms.
- Shape safety readiness for social-like AI products:
  - Contribute to product readiness and launch reviews by outlining expected abuse modes, risk tradeoffs, and monitoring/response plans.
  - Turn risk insights into practical guidance for internal teams and, where appropriate, external partners.
  - Develop reusable frameworks, playbooks, FAQs, and briefing materials for consistent organizational response.
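To make the "risk register and prioritization framework" responsibility concrete, here is a minimal sketch in Python. Everything in it is a hypothetical illustration rather than an OpenAI tool or prescribed method: the entry names, the 1–5 scales, and the weights are placeholders chosen only to show how entries scored by severity, prevalence, exposure, and trajectory might be ranked.

```python
from dataclasses import dataclass


@dataclass
class RiskEntry:
    """One entry in a hypothetical AI-social risk register (fields and scales are illustrative)."""
    name: str
    severity: int    # 1-5: impact on affected users if the harm occurs
    prevalence: int  # 1-5: how often the harm is currently observed
    exposure: int    # 1-5: how many users or surfaces could plausibly be affected
    trajectory: int  # 1-5: whether the harm appears to be growing

    def priority(self) -> float:
        # Illustrative weights only; a real framework would calibrate them
        # against incident data and policy severity tiers.
        return (0.4 * self.severity + 0.25 * self.prevalence
                + 0.2 * self.exposure + 0.15 * self.trajectory)


register = [
    RiskEntry("Synthetic impersonation in group chats", severity=5, prevalence=2, exposure=3, trajectory=4),
    RiskEntry("AI-assisted brigading of creators", severity=4, prevalence=3, exposure=4, trajectory=3),
    RiskEntry("Scam outreach via messaging assistants", severity=4, prevalence=4, exposure=3, trajectory=2),
]

# Rank entries so the highest-priority risks surface first.
for entry in sorted(register, key=lambda e: e.priority(), reverse=True):
    print(f"{entry.priority():.2f}  {entry.name}")
```

In practice the scoring model matters less than keeping the register current and revisiting the weights as the threat landscape shifts; the sketch only shows the shape of the artifact.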
Requirements
- Significant experience (typically 5+ years) in trust and safety, integrity, security, policy analysis, or intelligence work focused on social media, messaging, online communities, or adjacent environments.
- Demonstrated ability to analyze complex online harms (harassment, coordinated abuse, scams, synthetic media, influence operations, brand safety) and convert analysis into prioritized recommendations.
- Strong analytical skills and comfort working with qualitative and quantitative inputs, including casework, incident reports, OSINT, product context, policy frameworks, and basic metrics (harm prevalence, severity profiles, exposure, escalation rates) developed in partnership with data science.
- Strong adversarial and product intuition; ability to foresee how actors might adapt AI-social tools for misuse and evaluate how product mechanics and UX influence risk.
- Experience designing and using risk frameworks and taxonomies (harm classification schemes, severity/likelihood matrices, prioritization models); a minimal illustration follows this list.
- Proven cross-functional collaboration with product, engineering, data science, operations, legal, and policy teams.
- Excellent written and verbal communication skills, including producing concise, executive-ready briefs.
- Comfort operating in fast-changing, ambiguous environments: identify weak signals, form hypotheses, test quickly, and adjust as the product and threat landscape evolves.
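As a purely hypothetical illustration of the severity/likelihood matrices mentioned above, the sketch below maps 1–5 likelihood and severity scores to a coarse triage band. The product-based scoring and the band thresholds are arbitrary placeholders, not a prescribed scheme.

```python
def risk_band(likelihood: int, severity: int) -> str:
    """Map 1-5 likelihood and severity scores to a coarse triage band.

    Thresholds are illustrative placeholders; real matrices are usually
    calibrated per harm taxonomy and reviewed with policy partners.
    """
    score = likelihood * severity  # simple product matrix, values 1-25
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"


# Example: a "likely" (4) harm with "major" (4) severity lands in the high band.
print(risk_band(4, 4))  # -> high
# A "possible" (3) harm with "minor" (2) severity sits at the medium threshold.
print(risk_band(3, 2))  # -> medium
```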
Benefits
- Base pay varies by market location, knowledge, skills, and experience. Total compensation also includes equity and performance-related bonuses where eligible.
- Medical, dental, and vision insurance with employer contributions to Health Savings Accounts.
- Pre-tax Health FSA, Dependent Care FSA, and commuter expense accounts.
- 401(k) retirement plan with employer match.
- Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave.
- Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees; 13+ paid company holidays and additional office closures.
- Mental health and wellness support; employer-paid basic life and disability coverage.
- Annual learning and development stipend; daily meals in offices and meal delivery credits as eligible; relocation support for eligible employees.
- Additional taxable fringe benefits (charitable donation matching, wellness stipends) as applicable.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring general-purpose artificial intelligence benefits all of humanity. The organization emphasizes safety, diverse perspectives, and reasonable accommodations for applicants with disabilities. Background checks are administered in accordance with applicable law for US-based candidates.