Applied Deep Learning Research Scientist, Sparsity
at Nvidia
Santa Clara, United States
$160,000-287,500 per year
Used Tools & Technologies
Not specified
Required Skills & Competences
Python
Details
We are seeking DL researchers and/or computer architects to help future accelerators and neural networks take advantage of sparsity. This is an applied research position focused on developing sparsity-related techniques (in isolation and combined with reduced precision) and transferring the most promising ones to SW and HW products. The work entails investigation, publication, and collaboration with researchers and practitioners both internal and external to NVIDIA.
Responsibilities
- Researching methods (adjusting model architectures, training procedures, etc.) to enable sparsity in neural networks while maintaining the quality of the results.
- Proposing hardware features to enable sparsity and studying their impact on DL acceleration and efficiency.
Requirements
- MS or PhD degree in computer science, computer engineering, electrical engineering, or a related field, or equivalent experience.
- 3+ years of relevant work experience.
- Experience with neural network pruning and sparsity, training networks for various tasks, and exploring model architectures.
- Experience with modern DL training frameworks and/or inference engines.
- Background in computer architecture, performance analysis and optimization.
- Fluency in Python, C++, or ideally both.
- Experience with GPU computing and CUDA is not required but is a big plus.
Benefits
- The base salary range is 160,000 USD - 287,500 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. You will also be eligible for equity and benefits.