Thursday, August 1, 2024, from 9:00am to 11:00am PT
Build, optimize, and deploy AI apps on an AI PC, taking advantage of diverse processors, including CPUs, GPUs (both integrated and discrete), and neural processing units (NPUs). The Intel® Distribution of OpenVINO™ Toolkit connects with PyTorch to deliver new capabilities and expanded features for deploying optimized models on AI PCs.
Several modes of inferencing are available. You can use the OpenVINO toolkit as a standalone AI inference runtime: developers convert PyTorch models to the OpenVINO IR format and load them into the OpenVINO runtime for optimized inference. You can use the OpenVINO PyTorch frontend, which loads PyTorch models directly into OpenVINO. Alternatively, you can use the PyTorch 2.0 torch.compile API with OpenVINO as a TorchDynamo backend, enabling optimized inferencing through native PyTorch APIs.
The workshop provides examples of each of these approaches, giving you the opportunity to gain hands-on experience with Intel® Tiber™ Developer Cloud (ITDC).
The topics covered include:
This workshop is geared to all levels of programmers, from novice to advanced. An ITDC account is required to participate in hands-on activities. If you don’t have one, get one here.
Join us at this workshop to discover the techniques enabled by the Intel Distribution of OpenVINO Toolkit when coupled with PyTorch.
Presenter
AI PC Evangelist, Intel
Q&A moderator
Manager Developer Evangelist Team, Intel