Senior Software Engineer, Real-Time AI and Rendering - Holoscan SDK
Required Skills & Competences
Python @ 4 · Algorithms @ 4 · Parallel Programming @ 7 · API @ 4 · CUDA @ 4 · GPU @ 4
Details
At NVIDIA, we are building the future of real-time AI for sensor-driven applications. The Holoscan Platform is an open-source framework for sensor AI, enabling developers to build, optimize, and deploy GPU-accelerated pipelines that process multimodal sensor data in real time. Holoscan is used across domains—from healthcare imaging to robotics, industrial automation, astronomy, and earth observation—wherever high-throughput sensor processing, AI inference, and visualization meet.
This role focuses on bringing generative and multimodal foundation-model capabilities into Holoscan by enabling GPU-resident generative methods and building the next-generation Holoscan SDK APIs and GPU-accelerated pipelines for real-time perception, simulation, and visualization.
Responsibilities
- Architect the next generation of Holoscan SDK by developing intuitive, scalable APIs for real-time sensor, imaging, and multimodal data processing—balancing developer usability with peak GPU performance.
- Prototype GPU-accelerated algorithms for computer vision, imaging, sensor fusion, and low-latency rendering—translating research into production-grade software.
- Build and optimize core GPU libraries for accelerated I/O, streaming, decoding, and visualization, employing CUDA, Vulkan, and GPU-resident data paths.
- Contribute to real-time visualization frameworks for medical, robotic, or industrial applications—integrating Vulkan, OpenGL, or Omniverse/RTX-based rendering back-ends.
- Benchmark performance rigorously, profiling and optimizing across the full pipeline (Sensor → AI → Render → Display; Sensor → AI → Robotic Control).
- Combine generative models with the Holoscan Sensor Bridge (HSB), Isaac Sim, Isaac Lab, and Omniverse to create real-time “AI-powered virtual sensors”.
- Prototype and optimize neural field (NeRF/SDF/Gaussian) operators for real-time scene reconstruction, view synthesis, and 3D perception within Holoscan’s streaming architecture.
- Integrate and optimize vision-language and multimodal foundation models for real-time, GPU-resident sensor pipelines.
Requirements
- Strong communicator and collaborator able to work across AI, compute, graphics, and visualization domains.
- Expert programming skills in modern C++ and proven Python skills for prototyping and tooling.
- Deep passion for real-time AI, computer vision, and sensor-driven systems, and for high-performance visualization and rendering.
- 8+ years of experience building and shipping complex, high-performance imaging, sensor, or rendering software.
- Master’s/PhD or equivalent experience in Computer Science, Applied Math, Electrical or Computer Engineering, or related fields.
- Familiarity with multimodal or vision-language models and an understanding of how to adapt them to streaming or real-time workloads is a strong plus.
- Familiarity with GPU processing and rendering pipelines, synchronization, GPU memory management, and multi-GPU rendering is a plus.
- Experience designing APIs and frameworks that scale and provide good developer experience.
Preferred / Ways to Stand Out
- Experience adapting VLMs or multimodal foundation models to real-time sensor or video pipelines.
- Background integrating real-time GPU-accelerated processing and visualization pipelines (e.g., CUDA ↔ Vulkan interop).
- Hands-on expertise with CUDA C/C++ and deep knowledge of GPU architecture and parallel programming paradigms.
- Knowledge of Omniverse Kit or other GPU rendering frameworks for real-time visualization.
- Acute understanding of low-latency streaming pipelines for multimodal sensor fusion.
Benefits
- Base salary range (location- and level-dependent):
- Level 4: 184,000 USD - 287,500 USD
- Level 5: 224,000 USD - 356,500 USD
- Eligibility for equity and company benefits (see NVIDIA benefits page).
- NVIDIA is an equal opportunity employer committed to diversity and inclusion.
Applications for this job will be accepted at least until December 12, 2025.