Research Scientist/Engineer, Honesty

USD 315,000-340,000 per year
MIDDLE
✅ Hybrid

Used Tools & Technologies

Not specified

Required Skills & Competences

Python (3), GitHub (3), Machine Learning (3), Data Science (3), Communication (3)

Details

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

As a Research Scientist/Engineer focused on honesty within the Finetuning Alignment team, you'll spearhead the development of techniques to minimize hallucinations and enhance truthfulness in language models. You will build robust systems that are accurate, reflect their true levels of confidence, and avoid being deceptive or misleading. This work is critical to ensuring our models maintain high standards of accuracy and honesty across diverse domains.

Note: The team is based in New York, so we have a preference for candidates who can be based there.

For this role, we conduct all interviews in Python. We have filled our headcount for 2025; however, we are leaving this form open as an expression of interest, since we expect to grow the team in the future and will review your application when we do. As such, you may not hear back on your application to this team until the new year.

Responsibilities

  • Design and implement novel data curation pipelines to identify, verify, and filter training data for accuracy given the model’s knowledge
  • Develop specialized classifiers to detect potential hallucinations or miscalibrated claims made by the model
  • Create and maintain comprehensive honesty benchmarks and evaluation frameworks (a minimal illustration of such an evaluation appears after this list)
  • Implement techniques to ground model outputs in verified information, such as search and retrieval-augmented generation (RAG) systems
  • Design and deploy human feedback collection specifically for identifying and correcting miscalibrated responses
  • Design and implement prompting pipelines to generate data that improves model accuracy and honesty
  • Develop and test novel RL environments that reward truthful outputs and penalize fabricated claims
  • Create tools to help human evaluators efficiently assess model outputs for accuracy
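As a rough illustration of the evaluation work referenced above, the sketch below scores model claims against gold answers and flags wrong answers asserted with high confidence. It is a minimal, hypothetical example (the Claim schema, exact-match scoring, and the confidence threshold are assumptions for illustration), not Anthropic's actual tooling.

```python
from dataclasses import dataclass


@dataclass
class Claim:
    """One benchmark item: the model's answer, its stated confidence, and the gold answer."""
    question: str
    answer: str
    confidence: float  # model's self-reported probability that its answer is correct
    gold: str


def evaluate_honesty(claims: list[Claim], confidence_threshold: float = 0.8):
    """Return overall accuracy plus claims that are wrong yet asserted with high confidence."""
    correct = [c.answer.strip().lower() == c.gold.strip().lower() for c in claims]
    accuracy = sum(correct) / len(claims)
    # Overconfident errors are the miscalibrated claims a hallucination review would surface.
    flagged = [c for c, ok in zip(claims, correct)
               if not ok and c.confidence >= confidence_threshold]
    return accuracy, flagged


if __name__ == "__main__":
    toy = [
        Claim("Capital of France?", "Paris", 0.97, "Paris"),
        Claim("Year the transistor was invented?", "1955", 0.90, "1947"),
    ]
    acc, flags = evaluate_honesty(toy)
    print(f"accuracy={acc:.2f}, overconfident wrong answers={len(flags)}")
```

A production pipeline would replace exact-match scoring with more robust answer comparison and draw confidences from calibrated model estimates rather than self-reports.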

Requirements

  • MS/PhD in Computer Science, Machine Learning, or a related field (Bachelor’s or equivalent experience acceptable)
  • Strong programming skills in Python (interviewing in Python)
  • Industry experience with language model fine-tuning and classifier training
  • Proficiency in experimental design and statistical analysis for measuring improvements in calibration and accuracy
  • Experience in data science or the creation and curation of datasets for fine-tuning LLMs
  • Understanding of metrics of uncertainty, calibration, and truthfulness in model outputs (see the calibration sketch after this list)
  • Care about AI safety and the accuracy and honesty of both current and future AI systems
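For the calibration requirement above, one widely used metric is expected calibration error (ECE). The sketch below is a generic, textbook-style implementation under the assumption that per-claim confidences and correctness labels are available; it is illustrative only, not a prescribed method for the role.

```python
import numpy as np


def expected_calibration_error(confidences, correct, n_bins: int = 10) -> float:
    """Bin predictions by confidence, compare mean confidence to empirical accuracy
    within each bin, and average the gaps weighted by bin size."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece


# Example: a well-calibrated model would score near 0.
print(expected_calibration_error([0.9, 0.8, 0.6, 0.55], [1, 1, 0, 1]))
```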

Strong candidates may also have

  • Published work on hallucination prevention, factual grounding, or knowledge integration in language models
  • Experience with fact-grounding techniques and building/maintaining factual knowledge bases
  • Background in developing confidence estimation or calibration methods for ML models
  • Familiarity with RLHF specifically applied to improving model truthfulness
  • Experience working with crowd-sourcing platforms and human feedback collection systems
  • Experience developing evaluations of model accuracy or hallucinations

Benefits & Compensation

The expected base compensation for this position is listed below. Our total compensation package for full-time employees includes equity and benefits, and may include incentive compensation.

Annual Salary: $315,000 - $340,000 USD

Anthropic offers competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and office space for collaboration.

Logistics

  • Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience (MS/PhD preferred).
  • Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
  • Visa sponsorship: We do sponsor visas, though not every role or candidate can be successfully sponsored; Anthropic will make reasonable efforts and retains immigration counsel.

How we work / Culture

Anthropic works as a cohesive team on a few large-scale research efforts, values empirical research, collaboration, and communication, and regularly hosts research discussions. The team works on high-impact directions including interpretability, scaling laws, learning from human preferences, and concrete problems in AI safety.

How to apply

The listing invites candidates to apply and includes an application form with fields for resume/CV, cover letter, LinkedIn, publications, GitHub, and key application questions (including an essay on approaches to solving hallucinations).