Research Engineer, Model Evaluations

USD 320,000-405,000 per year
Mid-level
Hybrid


Used Tools & Technologies

Not specified

Required Skills & Competences

Python (6), Machine Learning (3), Leadership (3), Communication (6), Technical Leadership (3)

Details

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. The Model Evaluations team builds the evaluation platform and methodologies that determine how Anthropic understands, measures, and improves model capabilities and safety. This role is a technical leadership position combining research and engineering to design, implement, and scale evaluation systems that directly inform training decisions and the model development roadmap.

Responsibilities

  • Design novel evaluation methodologies to assess model capabilities across domains including reasoning, safety, helpfulness, and harmlessness
  • Lead the design and architecture of Anthropic's evaluation platform to ensure scalability with evolving model capabilities and research needs
  • Implement and maintain high-throughput evaluation pipelines that run during production training and provide real-time insights to guide training decisions
  • Analyze evaluation results to identify patterns, failure modes, and opportunities for model improvement; translate findings into actionable insights
  • Partner with research teams to develop domain-specific evaluations that probe emerging capabilities and potential risks
  • Build infrastructure to enable rapid iteration on evaluation design, supporting both automated and human-in-the-loop assessment approaches
  • Establish best practices and standards for evaluation development across the organization
  • Mentor team members and contribute to the growth of evaluation expertise at Anthropic
  • Coordinate evaluation efforts during critical training runs to ensure comprehensive coverage and timely results
  • Contribute to research publications and external communications about evaluation methodologies and findings

Requirements

  • Experience designing and implementing evaluation systems for machine learning models, particularly large language models
  • Demonstrated technical leadership experience, either formally or through leading complex technical projects
  • Strong programming skills in Python
  • Experience with distributed computing frameworks
  • Skilled at both systems engineering and experimental design; comfortable building infrastructure while maintaining scientific rigor
  • Experience with statistical analysis and drawing conclusions from large-scale experimental data
  • Ability to translate between research needs and engineering constraints, finding pragmatic solutions to complex problems
  • Results-oriented and able to thrive in fast-paced environments where priorities can shift based on research findings
  • Strong communication skills and ability to collaborate with diverse stakeholders
  • Commitment to AI safety and consideration of societal impacts
  • Minimum: Bachelor's degree in a related field or equivalent experience

Strong candidates may also have

  • Experience with evaluation during model training in production environments
  • Familiarity with safety evaluation frameworks and red teaming methodologies
  • Background in psychometrics, experimental psychology, or other measurement/assessment fields
  • Experience with reinforcement learning evaluation or multi-agent systems
  • Contributions to open-source evaluation benchmarks or frameworks
  • Knowledge of prompt engineering and its role in evaluation design
  • Experience managing evaluation infrastructure at scale (thousands of experiments)
  • Published research in machine learning evaluation, benchmarking, or related areas

Representative projects

  • Designing comprehensive evaluation suites assessing models across hundreds of capability dimensions
  • Building real-time evaluation dashboards that surface critical insights during multi-week training runs
  • Developing novel evaluation approaches for emerging capabilities like multi-step reasoning or tool use
  • Creating automated systems to detect regression in model performance or safety properties
  • Implementing efficient evaluation sampling strategies that balance coverage with computational constraints
  • Collaborating with external partners to develop industry-standard evaluation benchmarks
  • Building infrastructure to support human evaluation at scale, including quality control and aggregation systems

Compensation

Annual base salary: $320,000 - $405,000 USD. Total compensation includes equity and benefits, and may include incentive compensation.

Logistics & Other Details

  • Locations: San Francisco, CA; New York City, NY; Seattle, WA
  • Location-based hybrid policy: staff are expected to be in one of Anthropic's offices at least 25% of the time
  • Education: at least a Bachelor's degree in a related field or equivalent experience
  • Visa sponsorship: Anthropic may sponsor visas and retains an immigration lawyer to assist when possible

Why join

The team values large-scale, high-impact AI research, collaborative work, and strong communication. The role contributes directly to building safer, more steerable AI systems and offers opportunities to publish research and influence model development practices.