Required Skills & Competences
- Automated Testing, Python, Spark, Algorithms, Distributed Systems, MLOps, Communication, Performance Optimization, Cloud Computing
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
This role is on the Pre-training team, focused on developing the next generation of large language models. The Staff Infrastructure Engineer/TLM will work at the intersection of research and production engineering to build scalable, reliable data processing and training infrastructure for large language model development.
Responsibilities
- Design and implement high-performance data processing infrastructure for large language model training
- Develop and maintain core processing primitives (e.g., tokenization, deduplication, chunking) with a focus on scalability (a minimal deduplication sketch follows this list)
 - Build robust systems for data quality assurance and validation at scale
 - Implement comprehensive monitoring systems for data processing infrastructure
 - Create and optimize distributed computing systems for processing web-scale datasets
 - Collaborate with research teams to implement novel data processing architectures
 - Build and maintain documentation for infrastructure components and systems
 - Design and implement systems for reproducibility and traceability in data preparation
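
For illustration only, a minimal sketch of one such primitive: exact deduplication of a document set by content hash in PySpark. The paths, column names, and session settings are assumptions made for the example, not details of any actual pipeline.

```python
# Illustrative sketch (assumed paths/columns): exact dedup by content hash.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("exact-dedup-sketch").getOrCreate()

# Hypothetical Parquet dataset with a 'text' column.
docs = spark.read.parquet("s3://example-bucket/raw-documents/")

# Hash each document's text and keep one row per distinct hash.
deduped = (
    docs.withColumn("content_hash", F.sha2(F.col("text"), 256))
        .dropDuplicates(["content_hash"])
        .drop("content_hash")
)

deduped.write.mode("overwrite").parquet("s3://example-bucket/deduped-documents/")
```

In practice, web-scale pipelines typically pair exact hashing like this with near-duplicate detection (e.g., MinHash/LSH), which is beyond the scope of this sketch.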
 
Requirements
- 7+ years of professional experience (outside of internships)
 - Strong software engineering skills with experience in building distributed systems
 - Expertise in Python
 - Hands-on experience with distributed computing frameworks; Apache Spark experience is required
 - Deep understanding of cloud computing platforms and distributed systems architecture
 - Experience with high-throughput, fault-tolerant system design
 - Strong background in performance optimization and system scaling
 - Excellent problem-solving skills and attention to detail
 - Strong communication skills and ability to work in a collaborative environment
 
Preferred / additional qualifications
- Advanced degree in Computer Science or related field
 - Experience with language model training infrastructure
 - Strong background in distributed systems and parallel computing
 - Expertise in tokenization algorithms and techniques
 - Experience building high-throughput, fault-tolerant systems
 - Deep knowledge of monitoring and observability practices
 - Experience with infrastructure-as-code and configuration management
 - Background in MLOps or ML infrastructure
 
Strong candidates
- Significant experience building and maintaining large-scale distributed systems
 - Passion for system reliability and performance
 - Comfortable solving complex technical challenges at scale and working with ambiguous requirements
 - Able to take ownership and drive solutions independently
 - Interested in contributing to safe and ethical AI systems
 
Sample projects
- Designing and implementing distributed computing architecture for web-scale data processing
 - Building scalable infrastructure for model training data preparation
 - Creating comprehensive monitoring and alerting systems
 - Optimizing tokenization infrastructure for improved throughput
 - Developing fault-tolerant distributed processing systems
 - Implementing new infrastructure components based on research requirements
- Building automated testing frameworks for distributed systems (a brief pytest sketch follows this list)
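
As a hedged illustration of such a framework, the sketch below unit-tests a hypothetical Spark processing primitive (normalize_whitespace) with pytest against a local SparkSession; none of these names come from the posting.

```python
# Illustrative test for a hypothetical Spark text-processing primitive.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F


def normalize_whitespace(df, column="text"):
    """Collapse runs of whitespace in a text column to single spaces."""
    return df.withColumn(column, F.regexp_replace(F.col(column), r"\s+", " "))


@pytest.fixture(scope="module")
def spark():
    # Small local session so the test runs without a cluster.
    session = (
        SparkSession.builder.master("local[2]").appName("primitive-tests").getOrCreate()
    )
    yield session
    session.stop()


def test_normalize_whitespace_collapses_runs(spark):
    df = spark.createDataFrame([("a   b\tc",)], ["text"])
    result = normalize_whitespace(df).collect()
    assert result[0]["text"] == "a b c"
```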
 
Compensation and logistics
- Annual base salary: $340,000 - $425,000 USD
- Total compensation includes equity and benefits, and may include incentive compensation
 - Education: At least a Bachelor's degree in a related field or equivalent experience required
 - Location-based hybrid policy: staff expected to be in one of our offices at least 25% of the time (some roles may require more onsite time)
- Visa sponsorship: we do sponsor visas and will make reasonable efforts, including retaining an immigration lawyer, though sponsorship may not be possible for every role or candidate
 
How we're different
We view AI research as large-scale empirical science and work as a single cohesive team on a few large research efforts. We value impact, collaboration, and communication. Our research directions build on prior work (e.g., GPT-3, circuit-based interpretability, scaling laws, and learning from human preferences).
How to apply
Apply via the application form on the posting. The form requests basic contact details, a resume or LinkedIn profile, preferences about in-office time, and other optional information. Anthropic encourages applicants from diverse backgrounds and emphasizes that candidates need not meet every qualification to apply.