Used Tools & Technologies
Not specified
Required Skills & Competences
Python @ 7, TensorFlow @ 7, Communication @ 7, PyTorch @ 7
Details
Groq delivers fast, efficient AI inference. Our LPU-based system powers GroqCloud™, giving businesses and developers the speed and scale they need. Headquartered in Silicon Valley, we are on a mission to make high-performance AI compute more accessible and affordable. When real-time AI is within reach, anything is possible. Build fast.
Responsibilities
- Deploy Groq solutions and optimize AI inference workloads.
- Troubleshoot model performance and assist in benchmarking (see the latency sketch after this list).
- Provide hands-on engineering support for AI/ML teams.
- Serve as a bridge between customers and Groq’s engineering teams.
Requirements
- 8+ years in AI/ML engineering or deep learning optimization.
- Experience with AI inference pipelines and hardware acceleration.
- Strong Python and AI framework experience (TensorFlow, PyTorch, ONNX); see the export sketch after this list.
- Fluent in both English and Arabic (written and spoken).
Personal Attributes
- Biased for action.
- Innovative, with the ability to adapt to new challenges and think creatively.
- Highly collaborative, with a customer-first mentality.
- Comfortable in a fast-paced, evolving environment.
- Strong communication and interpersonal skills to work with both internal and external stakeholders.
Attributes of a Groqster
- Humility - Egos are checked at the door.
- Collaborative & Team Savvy - We make up the smartest person in the room, together.
- Growth & Giver Mindset - Learn it all versus know it all, we share knowledge generously.
- Curious & Innovative - Take a creative approach to projects, problems, and design.
- Passion, Grit, & Boldness - no-limit thinking, fueling informed risk-taking.
If this sounds like you, we’d love to hear from you!
Compensation
At Groq, a competitive base salary is part of our comprehensive compensation package, which includes equity and benefits. For this role, the base salary range is 600,000 SAR to 1,000,000 SAR, determined by your skills, qualifications, experience and internal benchmarks.