Principal Engineer, Distributed Machine Learning
at NVIDIA
Santa Clara, United States
$272,000-419,800 per year
Required Skills & Competences
Software Development @ 8, Kubernetes @ 4, Python @ 4, Scala @ 4, Spark @ 4, GitHub @ 4, Algorithms @ 4, Distributed Systems @ 4, Machine Learning @ 4, TensorFlow @ 4, API @ 4, HTTP @ 4, PyTorch @ 4, XGBoost @ 4, Agile @ 3
Details
NVIDIA is looking for a Principal Engineer to join our Distributed Machine Learning team focused on GPU-accelerated Apache Spark. Data scientists often apply machine learning (ML) and deep learning (DL) algorithms over large datasets to train AI models. To accelerate and scale model training, several libraries (e.g., XGBoost, RAPIDS cuML, PyTorch, and TensorFlow) have been extended for distributed training on GPU-accelerated compute clusters. NVIDIA plans to work with open source communities to make GPU-accelerated distributed ML/DL even more widely applicable and easier to use. We aim to address the key limitations of existing solutions, including performance and usability, so that data scientists can build AI models that achieve business goals faster, more reliably, and at lower cost. Come join NVIDIA to design and develop GPU-accelerated distributed machine learning solutions.
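To make the distributed GPU training described above concrete, here is a minimal sketch using XGBoost's PySpark estimator. The DataFrame schema, file path, worker count, and session settings are illustrative assumptions rather than details from the posting, and the device parameter assumes XGBoost 2.0 or later.

    # Illustrative sketch only: distributed, GPU-accelerated XGBoost training on a
    # Spark cluster via XGBoost's PySpark estimator. Paths, column names, and
    # worker count are placeholders.
    from pyspark.sql import SparkSession
    from xgboost.spark import SparkXGBClassifier

    spark = SparkSession.builder.appName("gpu-xgboost-sketch").getOrCreate()

    # Assumes a DataFrame with a vector "features" column and a numeric "label" column.
    train_df = spark.read.parquet("/data/train.parquet")  # placeholder path

    classifier = SparkXGBClassifier(
        features_col="features",
        label_col="label",
        num_workers=4,     # one XGBoost worker per Spark task
        device="cuda",     # each worker trains on a GPU (XGBoost 2.0+)
    )
    model = classifier.fit(train_df)         # training is distributed across executors
    predictions = model.transform(train_df)  # distributed inference with the fitted model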
Responsibilities
- Design and develop new user-friendly APIs and libraries to optimally use existing DL/ML frameworks in GPU-enabled Spark clusters for distributed DL/ML training and inference at scale.
- Design and develop GPU-accelerated ML libraries for distributed training and inference on Spark clusters, e.g., to improve our existing spark-rapids-ml open source library (illustrated in the sketch after this list).
- Demonstrate superior performance of developed solutions on industry standard benchmarks and datasets.
- Make technical contributions to enhance capabilities of open source projects such as RAPIDS, XGBoost, spark-rapids-ml, and Apache Spark.
- Work with NVIDIA partners and customers to deploy distributed ML algorithms in the cloud or on-premises.
- Keep up with published advances in distributed ML systems and algorithms.
- Provide technical mentorship to a team of engineers.
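The spark-rapids-ml library mentioned above follows Spark MLlib's estimator/transformer pattern, so GPU acceleration largely amounts to changing an import. A minimal sketch of that drop-in usage follows; the import path, dataset, and parameters are assumptions for illustration only.

    # Illustrative sketch only: spark-rapids-ml used as a drop-in, GPU-accelerated
    # replacement for a pyspark.ml estimator. Dataset and parameters are placeholders.
    from pyspark.sql import SparkSession

    # CPU baseline would be:
    #   from pyspark.ml.clustering import KMeans
    # GPU-accelerated path with the same estimator/transformer API:
    from spark_rapids_ml.clustering import KMeans

    spark = SparkSession.builder.appName("spark-rapids-ml-sketch").getOrCreate()
    df = spark.read.parquet("/data/features.parquet")  # placeholder; expects a "features" vector column

    kmeans = KMeans(k=8, featuresCol="features")  # same parameters as the MLlib estimator
    model = kmeans.fit(df)            # fitting runs on GPUs across the cluster
    clustered = model.transform(df)   # adds a prediction column, as in Spark MLlib

Keeping API parity with MLlib is what lets existing Spark pipelines adopt GPU acceleration without rewrites, which speaks directly to the usability gap the posting calls out.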
Requirements
- BS, MS, or PhD in Computer Science, Computer Engineering, or closely related field (or equivalent experience).
- 12+ years of work or research experience in software development.
- 5+ years of experience as a technical lead in distributed machine learning and/or deep learning.
- 3+ years of open source development experience.
- 3+ years of hands-on experience with Spark MLlib, XGBoost, and/or PyTorch.
- Knowledge of internals of Apache Spark MLlib.
- Experience with Kubernetes, YARN, Spark, and/or Ray for distributed ML orchestration.
- Proven technical skills in designing, implementing and delivering high-quality distributed systems.
- Excellent programming skills in C++, Scala, and Python.
- Familiarity with agile software development practices.
Nice to Have
- Familiarity with NVIDIA libraries (RAPIDS cuML, Spark-RAPIDS, NVTabular) is a plus.
- Familiarity with NVIDIA GPUs and CUDA is also a strong plus.
- Familiarity with Horovod, Petastorm and other existing/past distributed learning libraries is desirable.
- Experience working with multi-functional teams across organizational boundaries and geographies.
NVIDIA is widely considered to be one of the technology world's most desirable employers. We have some of the most brilliant and talented people in the world working for us. If you are passionate about what you do, creative and autonomous, we want to hear from you!