Learn to develop high-performance applications and enable deep learning inference from edge to cloud.
In this training session, Sri Harsha Gajavalli discusses the Intel® Distribution of OpenVINO™ Toolkit. The toolkit helps accelerate AI workloads, including computer vision, audio, speech, language, and recommendation systems. It enables high-performance applications and algorithms to be deployed across a range of Intel® architectures, including CPUs, integrated and discrete GPUs, VPUs, and FPGAs, from edge to cloud.
November 28, 2021, 11:00 a.m.–12:00 p.m. India Standard Time (GMT+5:30)
This webinar demonstrates how to use the Deep Learning Workbench to analyze and optimize your model and more.
Explore AI inferencing and how to run different models using OpenVINO™ toolkit.
Listen in for the latest announcements spanning new products, developer tools, and technologies. Hear about Intel's focus on empowering an open ecosystem, ensuring that developers can choose the tools and environments they prefer, and building trust and partnership across cloud service providers, open-source communities, startups, and others.
Take the stress out of training models with done-for-you integration and extensions ready and optimized for inferencing with OpenVINO™ toolkit.
In this webinar, you will learn:
The OpenVINO Notebooks repo on GitHub is a collection of ready-to-run Jupyter Notebooks that showcase AI models and use cases demonstrating the capabilities of the OpenVINO™ Toolkit. One example is a notebook in which you train a model using TensorFlow*, then run it both natively in TensorFlow and with the OpenVINO™ Toolkit. This tutorial shows how to modify that notebook to train the same model on a different dataset.
Find out how you can accelerate AI workloads for computer vision, audio, speech, language, and recommendation systems using OpenVINO™ toolkit. Watch this self-paced video training series to advance your skills in AI and deep learning. The training walks you through the OpenVINO™ toolkit workflow, including support for deploying accelerated deep learning algorithms in your application. Learn about Intel® DevCloud, a cloud-based development sandbox that lets you prototype and experiment with AI inference workloads on the latest Intel® hardware. Discover tools and demos for the different stages of your development journey.
Length: 16 modules, averaging 20 minutes each
OpenVINO™ toolkit integration with TensorFlow (OVTF) delivers OpenVINO™ toolkit inline optimizations and the runtime needed for an enhanced level of TensorFlow compatibility. It is designed for developers who want to use the OpenVINO™ toolkit to enhance inference performance in their applications with minimal code modifications. Learn how OpenVINO™ toolkit integration with TensorFlow helps accelerate inference across many AI models on a variety of Intel® silicon.
In this course, you will learn the basics of the OpenVINO™ Execution Provider for ONNX* Runtime. After finishing this course, you will be able to develop deep learning applications that leverage the OpenVINO Execution Provider for ONNX Runtime. We will introduce you to the environment and various sample applications.
Length: 10 modules, averaging 2 minutes each
Learn more about Intel's reference software stack for video and sensor analytics. Find out how Intel® Edge Insights for Industrial enables you to integrate video and time-series data analytics on edge compute nodes and run concurrent workloads on ready-to-use containerized analytics pipelines. Learn to support acceleration and distribution of video analytics on CPUs, GPUs, and VPUs.
Length: 10 modules, averaging 2m each
Please download and install the OpenVINO™ toolkit before you start the training.
Stay informed on the latest Edge AI training resources from Intel.
Developing an application using OpenVINO™ toolkit? Tell us about your application to be eligible for co-marketing benefits.