Required Skills & Competences
Docker (4), Grafana (4), Kubernetes (4), DevOps (4), Terraform (4), GCP (4), AWS (4), Azure (4), Communication (4), Experimentation (7), LLM (4)
Details
Grafana Labs is a remote-first, open-source powerhouse. There are more than 20M users of Grafana, the open source visualization tool, around the globe, monitoring everything from beehives to climate change in the Alps. The instantly recognizable dashboards have been spotted everywhere from a NASA launch and Minecraft HQ to Wimbledon and the Tour de France. Grafana Labs also helps more than 3,000 companies -- including Bloomberg, JPMorgan Chase, and eBay -- manage their observability strategies with the Grafana LGTM Stack, which can be run fully managed with Grafana Cloud or self-managed with the Grafana Enterprise Stack, both featuring scalable metrics (Grafana Mimir), logs (Grafana Loki), and traces (Grafana Tempo).
We’re scaling fast and staying true to what makes us different: an open-source legacy, a global collaborative culture, and a passion for meaningful work. Our team thrives in an innovation-driven environment where transparency, autonomy, and trust fuel everything we do.
You may not meet every requirement, and that’s okay. If this role excites you, we’d love you to raise your hand for what could be a truly career-defining opportunity.
This is a remote opportunity; at this time we are only considering applicants based in Canadian time zones.
Role summary
We’re looking for an AI Software Engineer with a strong software engineering background, a quick iteration mindset, and a passion for experimentation – balanced by a focus on shipping and scaling impactful features that deliver value to users. You’ll work closely with cross-functional teams to develop, test, and ship AI-powered features that improve infrastructure and observability quality through automation, while expanding the capabilities of AI agents across the observability stack to assist users with incident response. As the team matures, there’s broad opportunity to expand or redefine this role based on impact and initiative.
Responsibilities
- Build and deliver AI solutions: take ownership of developing high-performance AI features to help users detect, triage, and resolve incidents using observability data and tools.
- Rapid experimentation and iteration: implement a highly iterative process where you quickly prototype, test, and validate with real users, including shipping and evolving LLM- or agent-powered workflows for incident lifecycle management and automated analysis tasks.
- Collaborate cross-functionally: work with data analysts, product managers, and designers to shape AI-driven product features, including integration of agentic components with internal tools, alerting systems, runbooks, and developer workflows.
- Utilize AI tools effectively: use AI and automation tools to enhance both product functionality and your own development workflows.
- Own and communicate: communicate effectively in a dynamic, collaborative environment and take full ownership of the AI solutions you develop, ensuring they are scalable and maintainable.
Requirements
- Experience with LLMs, prompt engineering, and building applications powered by GenAI.
- Proven track record of delivering software that made it into production and is actively used by users.
- Exposure to working in cloud-native environments (e.g., AWS, GCP, Azure).
- Experience using observability tools to understand and troubleshoot system behavior.
- Strong engineering skills: solid experience building production software systems (backend and/or full stack). Self-starter capable of tackling complex engineering problems with minimal supervision.
- Quick iteration and experimentation mindset; comfortable releasing prototypes, collecting feedback, and iterating.
- Proven initiative and collaborative attitude: able to define scope in ambiguous situations and communicate effectively with peers, product managers, and designers.
Bonus Points
- Experience building or working with agent frameworks or multi‑agent workflows.
- Experience with infrastructure/DevOps tooling such as Kubernetes, Docker, or Terraform for deployments.
- Familiarity with model fine-tuning techniques.
- Experience building observability tooling.
Compensation & Rewards
- In Canada, the base compensation range for this role is CAD 164,490 - CAD 197,389. Actual compensation may vary based on level, experience, and skill set as assessed throughout the interview process.
- All roles include Restricted Stock Units (RSUs).
Why You'll Thrive at Grafana Labs
- 100% Remote, Global Culture
- Scaling Organization and meaningful work
- Transparent communication and innovation-driven environment
- Open source roots and empowered teams
- Career growth pathways and in-person onboarding
- Global annual leave policy of 30 days per annum (with 3 reserved Grafana Shutdown Days)
Equal Opportunity Employer
Grafana Labs is an equal opportunity employer and may utilize AI tools in its recruitment process to assist in matching information provided in CVs to job postings.