Get in-depth performance insights for deep learning model-based applications targeting CPU, GPU, and NPU.
The OpenVINO toolkit streamlines development, integration, and deployment of performant DL models in domains like computer vision, LLMs, and GenAI.
But given the ubiquity of heterogeneous compute environments, there’s a good chance your models and model-based apps must run on multiple hardware targets. Optimizing for each can take a bit of sleuthing.
This session addresses that issue, showing you how to configure and run analysis on your OpenVINO workloads to uncover bottlenecks on target hardware—CPU, GPU, NPU—using Intel® VTune™ Profiler and Intel® Advisor.
Topics covered:
- How the OpenVINO framework boosts AI application performance.
- Using Intel Advisor for performance modeling and CPU/GPU roofline generation.
- Using VTune Profiler for performance analysis across CPU (memory bottlenecks), GPU (Xe Vector Engine (XVE) utilization issues), and NPU (memory bandwidth).
- An overview of the Instrumentation and Tracing Technology (ITT) API utilities—native to VTune Profiler—that ship with the OpenVINO framework.
Sign up today.
Skill level: Intermediate
Intel, the Intel logo, OpenVINO, the OpenVINO logo, and VTune are trademarks of Intel Corporation or its subsidiaries.
Rupak Roy
Software Technical Consulting Engineer, Intel
Cory Levels
Software Technical Consulting Engineer, Intel