Data Engineering Intern

at Sentry
USD 110,500 per year
USD 53 per hour
INTERN
✅ Hybrid


Used Tools & Technologies

Not specified

Required Skills & Competences

  • Kafka (level 3)
  • Python (level 3)
  • SQL (level 3)
  • GCP (level 3)
  • Airflow (level 3)
  • CI/CD (level 3)
  • Data Engineering (level 3)
  • Streaming Data Processing (level 2)
  • Sentry (level 3)

Details

About Sentry

Bad software is everywhere, and we’re tired of it. Sentry is on a mission to help developers write better software faster so we can get back to enjoying technology.

With more than $217 million in funding and 100,000+ organizations using Sentry, the company builds performance and error monitoring tools that help teams spend less time fixing bugs and more time building products. Sentry embraces a hybrid work model, with Mondays, Tuesdays, and Thursdays set as in-office anchor days to encourage meaningful collaboration.

Data Engineering at Sentry builds and scales the infrastructure that powers analytics, product insights, and operational decision-making. The team designs data pipelines, manages large-scale processing systems, and ensures that teams can access reliable, high-quality data. They partner closely with Business Intelligence, Product, and Engineering.

Responsibilities

  • Work with GCP services (BigQuery, Pub/Sub, Cloud Storage, etc.) to support scalable and reliable data systems
  • Develop and optimize DAGs in Airflow to schedule and automate workflows
  • Write efficient Python and SQL code to process, transform, and analyze large datasets
  • Partner with Data Engineering and Business Intelligence teams to ensure data quality, consistency, and availability across the company
  • Support initiatives to improve the scalability, monitoring, and reliability of data infrastructure
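To illustrate the kind of Python-and-SQL batch transform the responsibilities above describe, here is a minimal, self-contained sketch. It uses the standard-library `sqlite3` module so it runs anywhere; in this role the equivalent work would target GCP services such as BigQuery, typically orchestrated by Airflow. The table name `raw_events` and its columns are hypothetical.

```python
import sqlite3

def daily_event_counts(conn: sqlite3.Connection) -> list[tuple[str, int]]:
    """Aggregate raw event rows into per-day counts -- a typical batch transform step."""
    return conn.execute(
        """
        SELECT substr(created_at, 1, 10) AS day, COUNT(*) AS events
        FROM raw_events
        GROUP BY day
        ORDER BY day
        """
    ).fetchall()

# Demo with an in-memory database and hypothetical sample rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (id INTEGER, created_at TEXT)")
conn.executemany(
    "INSERT INTO raw_events VALUES (?, ?)",
    [
        (1, "2025-01-01T09:00:00"),
        (2, "2025-01-01T10:30:00"),
        (3, "2025-01-02T08:15:00"),
    ],
)
print(daily_event_counts(conn))  # → [('2025-01-01', 2), ('2025-01-02', 1)]
```

In production, a function like this would be one task in an Airflow DAG, with data quality checks and monitoring around it.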

You’ll love this job if

  • You get excited about building systems that move and process large volumes of data efficiently
  • You are curious about how raw data becomes insights and want to contribute to the foundation that makes analytics possible
  • You are a self-starter who enjoys ownership, problem-solving, and learning new technologies
  • You are energized by working in a dynamic environment where priorities evolve as the company grows

Qualifications

  • Currently pursuing a Bachelor’s degree (graduating 2027 or later) in computer science, data engineering, or a related technical discipline, with a 3.0 minimum GPA or equivalent
  • Exposure to Python and SQL for data processing and pipeline development
  • Familiarity with data engineering concepts such as batch and streaming data processing
  • Exposure to tools such as Kafka, Pub/Sub, Airflow, BigQuery, or other GCP services
  • Understanding of software engineering best practices (version control, testing, CI/CD) is a plus
  • Ability to communicate clearly and work collaboratively with technical and non-technical teams

Compensation and benefits

  • The base pay expected for this position is $53.13/hr. Actual compensation will be determined by factors including work location, education, experience, skills, and job-related knowledge.
  • Eligible to participate in Sentry’s employee benefit plans/programs applicable to the position (including incentive compensation, equity grants, paid time off, and group health insurance coverage).

Equal Opportunity

Sentry is committed to providing equal employment opportunities and reasonable accommodations. If you need assistance or an accommodation due to a disability, contact [email protected]. For details on applicant data handling, see Sentry’s Applicant Privacy Policy.