Senior GPU Cluster Software Engineer

at Nvidia

πŸ“ Santa Clara, United States

$148,000-276,000 per year

SENIOR
✅ On-site


Used Tools & Technologies

Not specified

Required Skills & Competences

Software Development @ 4, ElasticSearch @ 4, Grafana @ 4, Prometheus @ 4, Redis @ 4, Kibana @ 4, Python @ 6, SQL @ 4, NoSQL @ 4, CI/CD @ 4, Algorithms @ 4, Data Structures @ 4, Distributed Systems @ 4, Machine Learning @ 4, Debugging @ 4, LLM @ 4, PyTorch @ 4, Agile @ 4, OpenTelemetry @ 4

Details

As a member of the System Software team, you'll build profiling solutions for large-scale, real-world applications running on GPU compute clusters, making them work efficiently and improving the experience for customers as well as for the engineers supporting the cluster. Much of our software development focuses on profiling a varied set of applications across different GPU clusters, and on accurately measuring and presenting the user experience on those clusters with actionable input for customers and supporting engineers. Creating a fault-tolerant distributed system while minimizing data loss and limiting time spent on reactive operational work is key to product quality and to dynamic day-to-day work. We promote self-direction on meaningful projects while striving to build an environment that provides the support and mentorship needed to learn and grow.
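
For a concrete flavor of this kind of profiling work, here is a minimal sketch of a per-GPU metrics exporter, assuming the pynvml and prometheus_client Python packages; the metric names and port are illustrative stand-ins, not Nvidia's actual tooling.

```python
import time

import pynvml                         # NVIDIA Management Library bindings
from prometheus_client import Gauge, start_http_server

# Illustrative metric names; a real cluster exporter would define its own schema.
GPU_UTIL = Gauge("gpu_utilization_percent", "GPU compute utilization", ["gpu"])
GPU_MEM = Gauge("gpu_memory_used_bytes", "GPU memory in use", ["gpu"])


def poll_gpus() -> None:
    """Sample per-GPU utilization and memory and publish them as Prometheus gauges."""
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        GPU_UTIL.labels(gpu=str(i)).set(util.gpu)
        GPU_MEM.labels(gpu=str(i)).set(mem.used)


if __name__ == "__main__":
    pynvml.nvmlInit()
    start_http_server(9400)           # expose /metrics for Prometheus to scrape
    try:
        while True:
            poll_gpus()
            time.sleep(15)            # scrape-friendly sampling interval
    finally:
        pynvml.nvmlShutdown()
```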

Responsibilities

  • Work in an agile, fast-paced global environment to gather requirements, architect, design, implement, test, deploy, release, and support large-scale distributed systems infrastructure with monitoring, logging, visualization, and alerting capabilities and committed uptime targets.
  • Build internal profiling tools for real-world ML/DL applications running on HPC GPU clusters, performing failure and efficiency analysis that helps improve current and future generations of GPU clusters and their associated hardware (a toy failure-analysis sketch follows this list).
  • Track state-of-the-art developments in the ML/DL domain, and work with application owners and research teams to add or improve profiling support for current and potential future features.
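
The failure-analysis sketch referenced above is a toy illustration, assuming a Slurm cluster where sacct is on the PATH; the chosen job states and 24-hour window are arbitrary examples, not the team's actual methodology.

```python
import subprocess
from collections import Counter
from datetime import datetime, timedelta

# A subset of sacct states for jobs that did not finish cleanly (illustrative).
BAD_STATES = {"FAILED", "NODE_FAIL", "OUT_OF_MEMORY", "TIMEOUT"}


def failed_jobs_last_day() -> Counter:
    """Tally non-successful Slurm jobs from the last 24 hours, grouped by state."""
    since = (datetime.now() - timedelta(days=1)).strftime("%Y-%m-%dT%H:%M:%S")
    out = subprocess.run(
        ["sacct", "--allusers", "--noheader", "--parsable2",
         f"--starttime={since}", "--format=JobID,State,ExitCode"],
        capture_output=True, text=True, check=True,
    ).stdout
    tally: Counter = Counter()
    for line in out.splitlines():
        if not line.strip():
            continue
        job_id, state, _exit = line.split("|", 2)
        if "." in job_id:              # skip job steps, count whole jobs once
            continue
        base_state = state.split()[0]  # e.g. "CANCELLED by 123" -> "CANCELLED"
        if base_state in BAD_STATES:
            tally[base_state] += 1
    return tally


if __name__ == "__main__":
    for state, count in failed_jobs_last_day().most_common():
        print(f"{state:15s} {count}")
```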

Requirements

  • BS or higher in Computer Science or a related field (or equivalent experience) and 5+ years of software development experience (in Python).
  • Experience with GitLab (or another source code management system): branching and release workflows, CI/CD pipelines, etc.
  • Solid understanding of algorithms, data structures, and runtime/space complexity.
  • Experience working with distributed system software architecture.
  • Basic understanding of HPC GPU clusters and Slurm.
  • Basic understanding of machine learning concepts and terminology.
  • Background with SQL and NoSQL databases (Prometheus, Elasticsearch, OpenSearch, Redis, etc.).
  • Experience with distributed data pipelines, telemetry, visualization (Kibana, Grafana, etc.), and alerting (PagerDuty, etc.); see the toy telemetry query sketch after this list.
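
The toy telemetry query referenced above is sketched below, assuming a Prometheus server at a hypothetical URL and the illustrative gpu_utilization_percent metric from the earlier exporter sketch; the PromQL query and threshold are arbitrary examples of turning telemetry into actionable input.

```python
import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"  # hypothetical endpoint
QUERY = "avg by (instance) (gpu_utilization_percent)"       # illustrative metric name
UTILIZATION_FLOOR = 20.0                                     # arbitrary threshold


def underutilized_nodes() -> list[tuple[str, float]]:
    """Ask Prometheus for per-node average GPU utilization and flag low outliers."""
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query", params={"query": QUERY}, timeout=10
    )
    resp.raise_for_status()
    flagged = []
    for sample in resp.json()["data"]["result"]:
        node = sample["metric"].get("instance", "unknown")
        value = float(sample["value"][1])   # value is [timestamp, "value-as-string"]
        if value < UTILIZATION_FLOOR:
            flagged.append((node, value))
    return sorted(flagged, key=lambda pair: pair[1])


if __name__ == "__main__":
    for node, value in underutilized_nodes():
        print(f"{node}: {value:.1f}% average GPU utilization")
```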

Ways to stand out from the crowd:

  • Experience debugging functional and performance issues in HPC GPU clusters.
  • Background in running and instrumenting distributed LLM training on a multi-GPU HPC cluster.
  • Knowledge of LLM training features and libraries: checkpointing, parallelism, PyTorch, Megatron-LM, NCCL.
  • Experience with HPC schedulers such as Slurm.
  • Background with OpenTelemetry (see the instrumentation sketch below).
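
As a hint of what instrumenting training can look like, here is a minimal single-process sketch that wraps PyTorch training steps in OpenTelemetry spans; the console exporter, tiny model, and attribute names are hypothetical stand-ins for a real distributed LLM training job.

```python
import time

import torch
from torch import nn
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Minimal tracing setup: spans are printed to the console instead of being
# shipped to a collector, as a production deployment would do.
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(
    BatchSpanProcessor(ConsoleSpanExporter())
)
tracer = trace.get_tracer("training-profiler-sketch")  # hypothetical instrumentation name

# A tiny stand-in model; a real job would wrap a distributed LLM training loop.
model = nn.Linear(256, 256)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

for step in range(3):
    with tracer.start_as_current_span("train_step") as span:
        start = time.perf_counter()
        batch = torch.randn(32, 256)
        loss = model(batch).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        span.set_attribute("train.step", step)
        span.set_attribute("train.loss", loss.item())
        span.set_attribute("train.step_seconds", time.perf_counter() - start)
```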