Cross-Functional Prompt Engineer
at Anthropic
π San Francisco, United States
USD 320,000-405,000 per year
SCRAPED
Used Tools & Technologies
Not specified
Required Skills & Competences ?
Python @ 3 Data Science @ 3 Prioritization @ 3 Product Management @ 3Details
Anthropic is building reliable, interpretable, and steerable AI systems. This role sits at the intersection of research and product, combining rigorous prompt engineering with strategic thinking about model behaviors. As a Cross-functional Prompt Engineer, you will own and shape Claude's behaviors across Anthropic products to ensure consistent, safe, and beneficial user experiences.
Responsibilities
- Author and maintain behavior system prompts for each new Claude model release to ensure consistent and aligned behaviors across products.
- Deliver meta-prompts for critical research synthetic data pipelines to support alignment and training efforts.
- Review production prompt changes from product teams and serve as a resource for challenging prompting problems involving alignment and reputational risks.
- Identify, triage, and prioritize behavioral issues across Claude products; lead incident response for behavioral and policy concerns.
- Develop behavioral evaluations in collaboration with product teams and alignment research to measure and track Claude's behaviors.
- Define and streamline processes for rolling out prompt changes, including launch criteria and review practices.
- Create model-specific prompt guides documenting quirks and optimal prompting strategies for each release.
- Contribute to product evaluations and prompt infrastructure improvements.
- Track how Claude's behaviors compare to competitors, particularly on safety dimensions.
- Scale prompting best practices and define success metrics for production behaviors.
Requirements
- Extensive prompt engineering experience with large language models, including writing and evaluating complex multi-step prompts.
- Deep knowledge of Claude's behaviors, capabilities, and limitations, with strong intuition for what is promptable versus requiring model-layer changes.
- Ability to write Python and create behavioral evaluations from scratch.
- Excellent judgment about desired model behaviors across various inputs.
- Strong technical understanding, including comprehension of agent scaffold architectures and model training processes.
- Core product management skills: prioritization, requirements gathering, stakeholder management, and translating user feedback into actionable specifications.
- Ability to independently drive changes through production systems with strong execution and responsiveness.
- A strong commitment to AI safety, model welfare, and the ethical implications of model behaviors.
- Education: at least a Bachelor's degree in a related field or equivalent experience.
Strong Candidates May Also Have
- Background in philosophy, ethics, or psychology informing thinking about model behaviors and values.
- Experience with RLHF, constitutional AI, or other alignment techniques.
- Track record of writing specifications or guidelines that shape complex system behaviors.
- Experience responding to safety incidents or behavioral issues in production AI systems.
- Formal training in ethics or moral philosophy or published work in AI safety/alignment.
- Experience building and maintaining evaluation frameworks for language models.
- Background in data science with emphasis on data quality and verification.
Compensation & Logistics
- Annual salary: $320,000 - $405,000 USD (expected base compensation). Total compensation for full-time employees includes equity, benefits, and may include incentive compensation.
- Location-based hybrid policy: staff are expected to be in one of Anthropic's offices at least 25% of the time (some roles may require more time in-office).
- Visa sponsorship: Anthropic does sponsor visas and retains an immigration lawyer, though sponsorship is not guaranteed for every role.
- Education requirement: at least a Bachelor's degree in a related field or equivalent experience.
Benefits & Culture
- Competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a collaborative office environment in San Francisco.
- Anthropic emphasizes cross-team collaboration, frequent research discussions, and work on a few large-scale research efforts focused on steerable, trustworthy AI.
Application Notes
- Candidates are encouraged to apply even if they do not meet every qualification listed.
- Applicants are asked to provide either a LinkedIn profile or a Resume/CV and may be requested to submit prompt examples and test cases for evaluation.