Research Operations & Strategy Lead - Coding & Cybersecurity Data
Required Skills & Competences
- Security: 4
- Python: 7
- Communication: 7
- Prioritization: 4
- Product Management: 6

Details
Anthropic's mission is to create reliable, interpretable, and steerable AI systems. In this role, you will build and scale the data operations that advance Claude's coding and cybersecurity capabilities. You will partner with research teams to design and execute data strategies, manage vendor relationships, and own the data pipeline from requirements through production. This is a zero-to-one role focused on strategy and execution rather than hands-on engineering: think of a technical founder who has evolved from writing code to building the business.
Responsibilities
- Develop and execute data strategies for coding capabilities, cybersecurity evaluations, and agentic AI research.
- Partner with research leaders to translate technical requirements into operational frameworks.
- Build data collection and evaluation systems through internal tools, vendor partnerships, and new approaches.
- Identify, evaluate, and manage specialized contractors and vendors for technical data collection.
- Implement quality control processes to ensure collected data meets training requirements (a minimal illustrative check follows this list).
- Manage multiple complex projects simultaneously, balancing technical needs with delivery timelines.
- Track metrics and communicate progress to stakeholders.
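The quality-control responsibility above is easiest to picture with a concrete check. The sketch below is illustrative only, not an Anthropic tool: it assumes a hypothetical JSONL format with prompt, completion, and language fields, and the field names and checks stand in for whatever schema the research teams' training requirements actually specify.

```python
import ast
import hashlib
import json
from pathlib import Path

REQUIRED_FIELDS = {"prompt", "completion", "language"}  # hypothetical schema

def validate_sample(record: dict) -> list[str]:
    """Return a list of quality issues found in a single collected sample."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
        return issues
    if not record["prompt"].strip() or not record["completion"].strip():
        issues.append("empty prompt or completion")
    # For Python samples, require that the completion at least parses.
    if record["language"] == "python":
        try:
            ast.parse(record["completion"])
        except SyntaxError as exc:
            issues.append(f"completion does not parse: {exc.msg}")
    return issues

def audit_dataset(path: Path) -> dict:
    """Scan a JSONL file, flag bad records, and count duplicate completions."""
    seen_hashes: set[str] = set()
    report = {"total": 0, "flagged": 0, "duplicates": 0}
    for line in path.read_text().splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        report["total"] += 1
        if validate_sample(record):
            report["flagged"] += 1
        digest = hashlib.sha256(record.get("completion", "").encode()).hexdigest()
        if digest in seen_hashes:
            report["duplicates"] += 1
        seen_hashes.add(digest)
    return report

if __name__ == "__main__":
    print(audit_dataset(Path("collected_samples.jsonl")))
```

A real pipeline would layer human review and task-specific checks on top; the point is simply that collected data is validated and de-duplicated before it reaches training.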
Requirements
- 3+ years of experience in technical operations, product management, or entrepreneurial roles building from zero to scale.
- Strong technical foundations: proficiency in Python and understanding of ML workflows and evaluation frameworks (a small illustrative harness follows this list).
- Familiarity with how LLMs work and the ability to describe how models like Claude are trained.
- Strong communication skills to engage with both technical and non-technical stakeholders (internal and external).
- Highly organized with the ability to manage multiple parallel workstreams effectively.
- Comfortable navigating ambiguity and balancing strategic prioritization with rapid, high-quality execution.
- Ability to thrive in fast-paced research environments with shifting priorities and novel technical challenges.
- Passion for AI safety and understanding of the critical importance of high-quality data in building beneficial AI systems.
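The Python and evaluation-framework expectation above can be made concrete with a toy harness. The sketch below is not an Anthropic tool; it simply runs a candidate solution together with its unit tests in a subprocess and reports pass or fail, which is the basic unit behind pass-rate metrics for coding evaluations.

```python
import subprocess
import sys
import tempfile
from pathlib import Path

def passes_tests(solution_code: str, test_code: str, timeout_s: int = 10) -> bool:
    """Run a candidate solution against its unit tests in a separate process.

    Returns True if the combined script exits cleanly. This is a toy harness:
    a production evaluation would sandbox execution and capture richer signals.
    """
    with tempfile.TemporaryDirectory() as tmp:
        script = Path(tmp) / "candidate.py"
        script.write_text(solution_code + "\n\n" + test_code)
        try:
            result = subprocess.run(
                [sys.executable, str(script)],
                capture_output=True,
                timeout=timeout_s,
            )
        except subprocess.TimeoutExpired:
            return False
        return result.returncode == 0

# Example: one task with a model-written solution and a hand-written test.
solution = "def add(a, b):\n    return a + b\n"
tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0\n"
print("pass" if passes_tests(solution, tests) else "fail")
```

A production harness would also isolate untrusted code, record stdout/stderr, and aggregate results across many tasks and samples.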
Nice-to-have / Strong candidates may also have
- Experience at companies training AI models, agents, or creating AI training data, evaluations, or environments.
- Knowledge of AI safety research methodologies and evaluation frameworks.
- Experience with RLHF or similar human-in-the-loop training methods.
- Domain expertise in software engineering or cybersecurity.
- Track record of building and scaling operations teams.
Compensation and Logistics
- Expected base annual salary: $250,000 - $365,000 USD. Total compensation includes equity, benefits, and may include incentive compensation.
- Education: At least a Bachelor's degree in a related field or equivalent experience required.
- Location-based hybrid policy: staff are expected to be in one of the offices at least 25% of the time; some roles may require more office presence.
- Visa sponsorship: Anthropic will make reasonable efforts to sponsor visas when an offer is made and retains an immigration lawyer to assist.
About the Team and Impact
The data strategies and operations you build will directly determine how well models can write code, understand software systems, and identify security vulnerabilities. You will work with researchers on frontier AI capabilities while building operational infrastructure to scale these efforts. Anthropic values big-science, collaborative research focused on steerable, trustworthy AI and hosts frequent research discussions to align on high-impact work.
Benefits
Anthropic offers competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and collaborative office spaces. Anthropic also provides guidance on candidates' AI usage during the application process and encourages applicants from diverse backgrounds to apply.