Senior Solutions Architect, Telco and CDNs
Required Skills & Competences
Marketing @ 4, Docker @ 4, Kubernetes @ 4, DevOps @ 4, Machine Learning @ 7, MLOps @ 4, Communication @ 7, Mathematics @ 4, Networking @ 4, Cloudflare @ 4, GPU @ 4
Details
NVIDIA is seeking a Senior Solutions Architect to unite Generative AI infrastructure with Content Delivery Networks (CDNs) such as Cloudflare and Akamai. The role focuses on developing advanced architectures that optimize AI workloads for edge and distributed environments, ensuring low-latency, high-performance delivery of AI-enabled experiences. You will accelerate inference at the edge, integrate GPU platforms with CDN networks, and help customers build intelligent, scalable solutions. The work centers on NVIDIA's Enterprise Platforms, GPUs, networking, and software stacks, with emphasis on platform solutions such as MLOps, DevOps, Kubernetes, and containers.
Responsibilities
- Drive NVIDIA technology adoption and secure build wins at key Fortune 500 customers in Data Center, Edge, and Cloud deployments as part of the North America Telco & CDN Solutions Architecture Team.
- Promote NVIDIA's Accelerated Compute Platforms incorporating technologies such as virtualization, Kubernetes, Docker, and Run:AI; expand customers' DevOps and MLOps skills by applying NVIDIA architectures, solutions, and blueprints.
- Lead customer proofs of concept (PoCs) of next-generation platforms for deploying industry solutions to key use cases.
- Support the business development team through the sales process for GPU/Network hardware and software products; own the technical relationship and enable customers to build solutions based on NVIDIA technologies.
- Collaborate with NVIDIA Engineering, Product, and Sales teams to contribute to solution development and share customer feedback.
Requirements
- MSc or PhD in Engineering, Mathematics, Physics, or Computer Science (or equivalent experience).
- 8+ years delivering enterprise accelerated computing infrastructure (HPC, Deep Learning, Machine Learning).
- Comprehensive industry expertise in Content Delivery Networks and experience with contemporary datacenter enterprise computing architectures (storage, compute, and software stacks).
- Experience working with enterprise developers and academic research communities supporting computer vision, data analytics, or deep learning deployments.
- Strong verbal and written communication and technical presentation skills; able to think creatively to debug and solve complex problems.
- Self-starter with a passion for growth and continuous learning; able to collaborate across Engineering/Research, Sales, Product, and Marketing teams.
Preferred / Ways to stand out
- Experience architecting large-scale edge and compute clusters.
- Specialty skills deploying large-scale GPU computing clusters to support AI workloads (training and inference).
- Experience building and refining AI Content Delivery Networks, including distributed inference pipelines, edge-accelerated model serving, and low-latency model routing strategies across global regions.
- Proficiency in deploying high-throughput inference workloads at scale using GPU-accelerated clusters, edge nodes, and hybrid on-prem/public-cloud infrastructures.
- Hands-on experience with NVIDIA GPU software stacks (Triton, TensorRT, cuOpt, NIMs) for accelerating inference, caching, and model distribution across geographically distributed nodes.
Benefits
- NVIDIA offers highly competitive salaries, equity eligibility, and a comprehensive benefits package. See https://www.nvidiabenefits.com for details.
- Base salary ranges provided by location and level (see below).
Compensation
- Base salary range for Level 4: 184,000 USD - 287,500 USD.
- Base salary range for Level 5: 224,000 USD - 356,500 USD.
Applications for this job will be accepted at least until December 8, 2025.
NVIDIA is an equal opportunity employer and is committed to fostering a diverse work environment.