Azure Platform Engineer
Required Skills & Competences
Go @ 7, TypeScript @ 7, Python @ 7, Spark @ 4, NoSQL @ 4, Distributed Systems @ 7, Azure @ 4, MongoDB @ 4, PostgreSQL @ 4
About Perplexity
Perplexity is an AI-powered answer engine founded in December 2022 and growing rapidly as one of the world’s leading AI platforms. Perplexity has raised over $1B in venture investment from investors including Elad Gil, Daniel Gross, Jeff Bezos, Accel, IVP, NEA, NVIDIA, and Samsung. Our objective is to build accurate, trustworthy AI that powers decision-making for people and assistive AI wherever decisions are being made. Perplexity handles more than 780 million queries every month and is scaling rapidly.
About the Role
We are looking for a Staff Data Platform Engineer to lead the strategic migration of our core data workloads to Azure Cosmos DB. You will be the primary architect responsible for moving high-volume, mission-critical data from our existing legacy infrastructure into a globally distributed Cosmos DB environment. This role involves redesigning our data layer to support the next 100x growth of our AI platform. You will own the migration lifecycle — from schema redesign and cutover strategy to performance tuning at scale.
This is a high-impact, senior/staff-level role where you will shape architecture, set standards, and drive long-term technical direction for Perplexity’s data ecosystem.
Responsibilities
Lead the Cosmos DB Migration
- Migration Strategy: Design and execute zero-downtime migration paths from legacy databases (PostgreSQL, MongoDB, or others) to Cosmos DB.
- Architectural Redesign: Lead the "lift and shift" vs. "refactor" decision-making process and redesign relational or legacy NoSQL schemas into optimized, partitioned Cosmos DB models.
- Tooling & Automation: Build or implement data migration pipelines (using Azure Data Factory, Change Data Capture, or custom Spark jobs) to ensure data integrity and consistency during the transition.
- Validation & Cutover: Establish rigorous testing frameworks to validate data parity, latency benchmarks, and failover reliability before final cutover (see the parity-check sketch below).
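As a deliberately simplified illustration of the validation work above, the sketch below samples rows from a legacy PostgreSQL table and checks that matching documents exist, with matching fields, in the target Cosmos DB container. The connection settings, the users table/container, and the id/tenant_id/email fields are illustrative assumptions, not Perplexity's actual schema.

```python
# Minimal parity-check sketch: sample rows from the legacy PostgreSQL table and
# confirm the same records exist (and match) in the target Cosmos DB container.
# Connection strings, table/container names, and the "users" schema are
# illustrative assumptions.
import os
import psycopg2
from azure.cosmos import CosmosClient, exceptions

PG_DSN = os.environ["LEGACY_PG_DSN"]
cosmos = CosmosClient(os.environ["COSMOS_URL"], credential=os.environ["COSMOS_KEY"])
container = cosmos.get_database_client("app").get_container_client("users")

mismatches = []
with psycopg2.connect(PG_DSN) as conn, conn.cursor() as cur:
    # Sample a bounded slice of the source table; a real job would walk the
    # whole keyspace in chunks or hash-compare per partition.
    cur.execute("SELECT id, tenant_id, email FROM users ORDER BY id LIMIT 1000")
    for row_id, tenant_id, email in cur.fetchall():
        try:
            doc = container.read_item(item=str(row_id), partition_key=tenant_id)
        except exceptions.CosmosResourceNotFoundError:
            mismatches.append((row_id, "missing in Cosmos"))
            continue
        if doc.get("email") != email:
            mismatches.append((row_id, "field drift: email"))

print(f"checked 1000 rows, {len(mismatches)} mismatches")
for row_id, reason in mismatches[:20]:
    print(row_id, reason)
```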
Cosmos DB Operations & Scale
- RU Optimization: Post-migration, tune Request Units (RUs) and indexing policies to ensure the new system is more cost-effective and performant than the system it replaces.
- Partitioning Mastery: Implement complex partitioning strategies (including hierarchical partitioning) to eliminate hot spots in high-traffic collections (see the provisioning sketch after this list).
- Global Distribution: Configure multi-region write capabilities to ensure users have sub-10ms latency regardless of their location.
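The following sketch shows the kind of container provisioning this work involves: a hierarchical ("MultiHash") partition key so a hot tenant is spread across many physical partitions, plus a lean indexing policy that indexes only the paths queries actually use, which keeps per-write RU cost down. The database/container names, partition key paths, and RU figure are assumptions, and it presumes an azure-cosmos SDK version recent enough to support hierarchical partition keys.

```python
# Minimal provisioning sketch: hierarchical partition key plus a lean indexing
# policy. Names, paths, and throughput are illustrative assumptions.
import os
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient(os.environ["COSMOS_URL"], credential=os.environ["COSMOS_KEY"])
db = client.create_database_if_not_exists("app")

lean_indexing = {
    "indexingMode": "consistent",
    # Index only the paths queries filter/sort on; exclude everything else
    # to cut per-write RU cost on a high-traffic collection.
    "includedPaths": [{"path": "/tenantId/?"}, {"path": "/userId/?"}, {"path": "/createdAt/?"}],
    "excludedPaths": [{"path": "/*"}],
}

container = db.create_container_if_not_exists(
    id="events",
    # Hierarchical partition key: spreads a large tenant across multiple
    # physical partitions instead of pinning it to a single hot one.
    partition_key=PartitionKey(path=["/tenantId", "/userId"], kind="MultiHash"),
    indexing_policy=lean_indexing,
    offer_throughput=4000,  # manual RU/s; autoscale is the other common choice
)
print("container ready:", container.id)
```

Multi-region writes themselves are enabled at the account level (portal, ARM/Bicep, or the az CLI) rather than in application code; the application then mainly needs sensible preferred regions and consistency settings.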
Requirements
Minimum Qualifications
- 8+ years of software engineering experience, with a proven track record of leading large-scale database migrations.
- Cosmos DB specialist: prior experience operating Cosmos DB at scale.
- Migration experience: successful production migrations with zero or near-zero downtime at a previous high-growth company.
- Systems expert: strong understanding of distributed systems, the CAP theorem, and techniques for managing data consistency across a migration boundary, such as dual-writing and shadow reads (a sketch follows this list).
- Backend proficiency: strong coding skills in Python, Go, or TypeScript, with the ability to build custom migration scripts and service-level abstractions.
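To make the dual-write / shadow-read pattern mentioned above concrete, here is a minimal sketch of a service-layer wrapper that writes to both stores while the legacy database stays authoritative, and that shadow-reads from Cosmos DB to log divergence without affecting responses. The UserStore class, the schema, and the field names are hypothetical, not an actual Perplexity service.

```python
# Minimal dual-write / shadow-read sketch for a cutover window: writes go to
# both stores (legacy remains authoritative); reads are served from legacy
# while the Cosmos result is compared and divergence is logged.
# The UserStore wrapper and schema are hypothetical.
import logging
from typing import Optional
from azure.cosmos import exceptions

log = logging.getLogger("migration.shadow")

class UserStore:
    def __init__(self, pg_conn, cosmos_container):
        self.pg = pg_conn                # psycopg2 connection (legacy, source of truth)
        self.cosmos = cosmos_container   # azure-cosmos ContainerProxy (target)

    def upsert_user(self, user: dict) -> None:
        # Dual-write: legacy first (authoritative), then best-effort to Cosmos.
        with self.pg.cursor() as cur:
            cur.execute(
                "INSERT INTO users (id, tenant_id, email) VALUES (%s, %s, %s) "
                "ON CONFLICT (id) DO UPDATE SET email = EXCLUDED.email",
                (user["id"], user["tenantId"], user["email"]),
            )
        self.pg.commit()
        try:
            self.cosmos.upsert_item(user)
        except exceptions.CosmosHttpResponseError:
            # Don't fail the request during the migration window; reconcile later.
            log.warning("cosmos dual-write failed for user %s", user["id"])

    def get_user(self, user_id: str, tenant_id: str) -> Optional[dict]:
        with self.pg.cursor() as cur:
            cur.execute("SELECT id, tenant_id, email FROM users WHERE id = %s", (user_id,))
            row = cur.fetchone()
        legacy = {"id": row[0], "tenantId": row[1], "email": row[2]} if row else None

        # Shadow read: query the new store too and log divergence, but always
        # return the legacy answer until parity metrics look clean.
        try:
            shadow = self.cosmos.read_item(item=user_id, partition_key=tenant_id)
            if legacy and shadow.get("email") != legacy["email"]:
                log.warning("shadow divergence for user %s", user_id)
        except exceptions.CosmosResourceNotFoundError:
            log.warning("user %s missing from Cosmos", user_id)
        return legacy
```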
Benefits & Compensation
- Compensation: $200K - $300K, plus equity.
- U.S. Benefits: Full-time U.S. employees receive equity, health, dental, vision, retirement, fitness, commuter and dependent care accounts, and more.
- International Benefits: Full-time employees outside the U.S. receive a benefits program tailored to their region.
Team & Locations
- Team: Platform & Infrastructure
- Primary location: San Francisco, with additional locations in Seattle and New York City. This is a hybrid role.