AI Researcher

USD 200,000-300,000 per year
Mid-level
✅ Hybrid


Used Tools & Technologies

Not specified

Required Skills & Competencies

Python @ 6, Algorithms @ 3, Machine Learning @ 3, LLM @ 3, PyTorch @ 3, CUDA @ 3

Details

Perplexity is seeking top-tier AI Research Scientists and Engineers to advance AI products and capabilities. The team builds AI-powered search and agent experiences through Sonar models, Deep Research Agent, Comet Agent, and Search products, handling hundreds of millions of queries and scaling rapidly. Depending on interests and expertise, hires will join one of three specialized teams (Core Research, Agent Products, or Comet Agent) and work on state-of-the-art model development, post-training techniques, and integration into products.

Team Structure

  • Core Research Team (Horizontal): Generate and improve base models powering all products; work on foundational model capabilities, post-training techniques, RL infrastructure, and shared infra.
  • Agent Products Team (Vertical): Fine-tune and optimize models for Deep Research Agent and Labs/Canvas; bridge research and product for agent capabilities.
  • Comet Agent Team (Vertical): Develop and enhance the Comet Agent product, focusing on its specific requirements and optimizations.

Responsibilities

  • Post-train SOTA LLMs using supervised and reinforcement learning techniques (SFT/DPO/GRPO).
  • Leverage rich query/answer datasets to scale model performance across Sonar, Deep Research, Comet, and Search products.
  • Stay current with the latest LLM research in model training, optimization, and personalization techniques.
  • Implement preference optimization and personalization capabilities to enhance user experience.
  • Invent in-house improvements and optimizations to enhance SOTA models; turn research ideas into algorithms and run experiments to launch new models.
  • Own full-stack data, training, and evaluation pipelines required for model development.
  • Build robust and effective training frameworks (on top of Megatron/PyTorch) for post-training LLMs.
  • Implement infrastructure and components to support cutting-edge model training at scale.
  • Integrate models seamlessly into the product ecosystem and collaborate closely with engineering and product teams.
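For orientation on the preference-optimization techniques named above (SFT/DPO/GRPO), the core of the DPO objective can be sketched per example as below. This is an illustrative sketch only, not code from the role: the function name and scalar log-probability inputs are assumptions, and a real implementation would operate on batched per-sequence log-probabilities from the policy and a frozen reference model.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-example Direct Preference Optimization (DPO) loss.

    Inputs are log-probabilities of the chosen (preferred) and
    rejected completions under the trained policy and a frozen
    reference model; beta controls deviation from the reference.
    """
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    # -log sigmoid(margin): loss shrinks as the policy prefers the
    # chosen completion more strongly relative to the reference.
    return math.log1p(math.exp(-margin))
```

Widening the margin between the chosen and rejected completions drives the loss toward zero, which is what pushes the policy toward the preferred responses.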

Requirements

  • Proven experience with large-scale LLMs and deep learning systems.
  • Strong programming skills in Python and PyTorch; versatility is a plus.
  • Experience with post-training techniques and reinforcement learning.
  • Self-starter with willingness to take ownership of tasks and a passion for tackling challenging problems.
  • 2–6 years of experience on relevant projects, depending on seniority level.

Nice-to-have

  • PhD in Machine Learning, AI, Systems, or related areas.
  • Experience in post-training LLMs with SFT/DPO/GRPO.
  • C++ and CUDA programming skills.
  • Experience building LLM training frameworks and RL infrastructure.
  • Academic publications and research impact.
  • Experience with agent systems and multi-step reasoning.
  • Background in personalization and preference learning.

Compensation & Benefits

  • Cash compensation range: $200,000 - $300,000 per year.
  • Equity is part of the total compensation package.
  • Benefits include comprehensive health, dental, and vision insurance for you and your dependents, and a 401(k) plan.

Notes

  • Final offers are determined by multiple factors including experience and expertise and may vary from the listed ranges.