Senior DGX Cloud Software Engineer - Infrastructure Automation and Distributed Systems

at NVIDIA
USD 144,000-270,250 per year
SENIOR
✅ On-site


Used Tools & Technologies

Not specified

Required Skills & Competences

Security @ 4, Docker @ 4, Go @ 6, Kubernetes @ 4, Linux @ 4, Python @ 6, Distributed Systems @ 4, Communication @ 4, Mathematics @ 4, Networking @ 4, OpenStack @ 4, SRE @ 4, GPU @ 4

Details

We are seeking Software Engineers with prior experience building and running private and public clouds at production scale. As part of the DGX Cloud team, you will support our customers' AI training and inference development by building the platforms, tools, and services that defend the operational capacity of our bare-metal, accelerated compute infrastructure and codify reliability best practices across the broader DGX Cloud platform ecosystem.

Responsibilities

  • Design, build, and run cloud infrastructure services to meet business goals, performing integrations, migrations, bring-ups, updates, and decommissions as necessary.
  • Participate in the definition of internal service level objectives and error budgets as part of the overall observability strategy.
  • Eliminate toil, or automate it where the return on building and maintaining the automation justifies the effort.
  • Practice sustainable, blameless incident prevention and incident response as a member of an on-call rotation.
  • Consult with peer teams and advise them on systems design best practices.
  • Participate in a supportive culture of values-driven introspection, communication, and self-organization.

Requirements

  • Proficiency in at least one of the following programming languages: Python or Go.
  • Bachelor’s degree in Computer Science or a related technical field involving coding (e.g., physics or mathematics), or equivalent experience.
  • 5+ years of relevant experience in infrastructure and fleet management engineering.
  • Experience with infrastructure automation and distributed systems design, including developing tools for running large-scale private or public cloud systems with fully automated management and active customer consumption in production.
  • Track record of initiating projects, collaborating well, and convincing others to collaborate.
  • In-depth knowledge of one or more of: Linux, Slurm, Kubernetes, local and distributed storage, and systems networking.

Ways to stand out

  • Systematic problem-solving approach, clear communication skills, ownership, and results orientation.
  • Experience with bare metal as a service (BMaaS), Slurm on containers, Kubernetes cluster vending, and multi-cloud infrastructure services.
  • Experience teaching site reliability engineering (SRE) or scale-oriented cloud systems practices.
  • Experience running private or public cloud systems based on Kubernetes, OpenStack, Docker, or Slurm.
  • Knowledge of accelerated compute and communication technologies such as BlueField networking, InfiniBand topologies, NVMesh, and the NVIDIA Collective Communication Library (NCCL).
  • Experience working with centralized security organizations for risk mitigation.
  • Experience or interest in ML/AI-focused roles.

Company Overview

NVIDIA is leading innovations in Artificial Intelligence, High-Performance Computing, and Visualization. Our GPU technology is central to modern computing systems, powering AI, autonomous vehicles, and more. We value creativity, autonomy, and dedication.

Salary

The base salary range is 144,000 USD - 270,250 USD, determined by location, experience, and the pay of employees in comparable roles. Additional equity and benefits are provided.

NVIDIA is an equal opportunity employer committed to diversity and non-discrimination.