Get Trained on Intel® Distribution of OpenVINO™ Toolkit

Learn to develop high performance applications and enable deep learning inference from edge to cloud.

Harness Generative AI Acceleration with OpenVINO™ Toolkit

Delve into the world of generative AI models, including Stable Diffusion and GPT, and explore how we've optimized these models to run on Intel’s wide variety of hardware. Part of DevCon 2023.
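
For a flavor of what this acceleration looks like in practice, here is a minimal sketch (not taken from the session) that runs Stable Diffusion on the OpenVINO™ runtime through the optimum-intel extension; the package install, model ID, and prompt are illustrative assumptions:

    # Minimal sketch, assuming the optimum-intel extension is installed
    # (pip install optimum[openvino]); the model ID and prompt are placeholders.
    from optimum.intel import OVStableDiffusionPipeline

    # Downloads the model and exports it to OpenVINO IR on first use.
    pipe = OVStableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", export=True
    )
    image = pipe("a photo of an astronaut riding a horse").images[0]
    image.save("astronaut.png")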

Begin

Beyond the Continuum: The Importance of Quantization in Deep Learning

Quantization is a valuable process in deep learning that maps continuous values to a smaller set of discrete, finite values. In this talk, we will explore the different types of quantization techniques that can be applied to deep learning models. In addition, we will give an overview of the Neural Network Compression Framework (NNCF) and how it complements the OpenVINO™ toolkit to achieve outstanding performance. Part of DevCon 2023.
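
For a concrete sense of how NNCF fits alongside the OpenVINO™ toolkit, here is a minimal post-training quantization sketch; the model file, input shape, and random calibration data are placeholders, not material from the talk:

    # Minimal sketch: post-training INT8 quantization of an OpenVINO IR model
    # with NNCF. File names, the input shape, and the calibration data are
    # illustrative placeholders.
    import nncf
    import numpy as np
    import openvino.runtime as ov

    core = ov.Core()
    model = core.read_model("model_fp32.xml")  # hypothetical FP32 model

    # A few hundred representative samples are usually enough for calibration.
    calibration_data = [np.random.rand(3, 224, 224).astype(np.float32) for _ in range(300)]
    calibration_dataset = nncf.Dataset(calibration_data, lambda x: np.expand_dims(x, 0))

    # Maps the model's continuous FP32 weights and activations onto a discrete INT8 grid.
    quantized_model = nncf.quantize(model, calibration_dataset)
    ov.serialize(quantized_model, "model_int8.xml")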

Begin

How To Build a Smart Queue Management System Step by Step? From Zero to Hero

Join us for a step-by-step tutorial on how to create an intelligent retail queue management system using the OpenVINO™ toolkit and YOLOv8. We'll walk you through the process of integrating these powerful open-source tools to develop an end-to-end solution that can be deployed in retail checkout environments. Whether you're an experienced developer or new to AI, this session will provide practical tips and best practices for building intelligent systems using OpenVINO. By the end of the presentation, you'll have the knowledge and resources to build your own solution. Part of DevCon 2023.

What you’ll learn:

  • Step-by-step easy-to-follow Jupyter Notebook tutorial
  • Real-time detection and tracking of people for efficient queue management and staffing optimization
  • Optimized for multi-model workloads across various Intel processors
  • Where to find resources: open-source code, dataset, videos, and a blog available on GitHub for easy customization and extension to your specific needs
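
To make the YOLOv8-plus-OpenVINO™ combination above concrete, here is a minimal sketch (not the tutorial notebook itself); the model names, the frame path, and the person class index are assumptions:

    # Minimal sketch: export a YOLOv8 detector to OpenVINO IR with the
    # ultralytics package and count detected people, as a queue-monitoring
    # application might. Model names and the frame path are placeholders.
    from ultralytics import YOLO

    YOLO("yolov8n.pt").export(format="openvino")     # writes yolov8n_openvino_model/
    detector = YOLO("yolov8n_openvino_model/")       # inference runs on the OpenVINO backend

    results = detector("checkout_camera_frame.jpg")  # hypothetical camera frame
    people = sum(int(c) == 0 for c in results[0].boxes.cls)  # class 0 = person in COCO
    print(f"People currently in the queue area: {people}")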

Begin

Bringing Together Scientific Data and Custom AI Models with OpenVINO™ Model Server

The integration of AI in laboratory environments is rapidly changing the way clinical pharmaceutical scientists innovate and extract insights. However, optimizing and deploying scientific pipelines is often challenging due to laboratory settings and requirements. In this talk, we will demonstrate how to efficiently build and deploy scientific AI models using open-source technologies. We will walk through an end-to-end case study, inviting Beckman Coulter Life Sciences to share how they leveraged OpenVINO™ toolkit optimizations and OpenVINO™ Model Server to unlock AI performance for their CellAI toolbox. Part of DevCon 2023.
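
For orientation, here is a minimal client-side sketch of calling an OpenVINO™ Model Server instance over its TensorFlow-Serving-compatible REST API; the host, port, model name, and input shape are placeholders, not details from the case study:

    # Minimal sketch: send one (random) image batch to an OpenVINO Model Server
    # endpoint and read back the predictions. All names below are hypothetical.
    import numpy as np
    import requests

    payload = {"instances": np.random.rand(1, 3, 224, 224).tolist()}  # placeholder batch
    resp = requests.post(
        "http://localhost:9001/v1/models/cell_classifier:predict",    # hypothetical server/model
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()
    predictions = resp.json()["predictions"]
    print(f"Received {len(predictions)} prediction(s)")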

Begin

Overview: Intel® Developer Cloud for the Edge – Bare-metal Development

Intel® Developer Cloud for the Edge is designed to help you prototype, evaluate, and benchmark AI and edge solutions on Intel® hardware with immediate worldwide access. Within the Developer Cloud, the Bare-metal platform enables you to develop or import your computer vision or edge AI application using a JupyterLab environment on a bare-metal infrastructure.

In this session, you will learn:

  • Intel Developer Cloud for the Edge Overview
  • Software portfolio
  • Hardware portfolio
  • Best-known methods (BKMs) for software and hardware combinations

Begin

AI Application Benchmarking on Intel® Hardware through Red Hat OpenShift™ Data Science Platform

In this session, developers will learn how to access the Intel® Developer Cloud for the Edge - Container Playground through REST APIs from within the Red Hat* OpenShift* Data Science platform (RHODS) and benchmark AI applications using the OpenVINO™ toolkit on Intel® CPUs, integrated GPUs, and discrete GPUs.
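
As a rough illustration of what such a benchmark boils down to, here is a minimal sketch that times the same OpenVINO™ IR model on every device the runtime reports; the model path and input shape are placeholders:

    # Minimal sketch: average per-inference latency of one model on each
    # available device (e.g. CPU, GPU). Model path and input shape are placeholders.
    import time
    import numpy as np
    import openvino.runtime as ov

    core = ov.Core()
    model = core.read_model("model.xml")                  # hypothetical IR model
    dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)

    for device in core.available_devices:                 # depends on the hardware present
        request = core.compile_model(model, device).create_infer_request()
        start = time.perf_counter()
        for _ in range(100):
            request.infer([dummy])
        avg_ms = (time.perf_counter() - start) / 100 * 1000
        print(f"{device}: {avg_ms:.2f} ms average latency")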

Begin

Deep Learning Workbench: Simplifying the Development and Deployment of AI Models

Profile and optimize your neural network on various Intel® hardware configurations hosted in the cloud, with no hardware setup on your end, and integrate the optimized model in the user-friendly JupyterLab* environment. The session will cover:

  • Introduction to the Deep Learning Workbench (DLWB) and its features
  • Best practices for optimizing AI models using DLWB
  • The new Deep Learning Workbench - Cloud version

Begin

Develop, Benchmark, and Deploy Cloud-native Applications Using Intel® Developer Cloud for the Edge – Container Playground

Join Meghana Rao, Intel AI Technology Evangelist, for live workflows and walkthroughs of Intel® Developer Cloud for the Edge's Container Playground.

Begin

Scale AI with (6 More!) Optimized, Domain-specific Reference Kits

Save your spot to get an overview of the newest Intel/Accenture kits, including demos and how to get them for free. The new kits include modeling solutions for traffic camera object detection, computational fluid dynamics, structured data generation, structural damage assessment, NLP for semantic search, and sensor data anomaly detection.

Begin

Food Waste Deceleration with AI Acceleration Using OpenVINO™ Toolkit

In this webinar, Anisha Udayakumar, Intel AI Evangelist, explores how she used AI and OpenVINO™ toolkit to help reduce the food waste that happens in the fresh food and produce section at the grocery store – and how you can apply these same techniques to your own projects.

Begin

Analyze & Optimize Neural Networks with Deep Learning Workbench

This webinar demonstrates how to use the Deep Learning Workbench to analyze and optimize your model and more.

Begin

Run Object Detection and Human Pose Estimation in Real Time

Explore AI inferencing and how to run different models using OpenVINO™ toolkit.
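
As a starting point for the kind of real-time pipeline shown in the webinar, here is a minimal sketch of running an OpenVINO™ IR model on webcam frames; the model file and its 256x256 input layout are illustrative assumptions, and decoding the raw output into keypoints or boxes is model-specific:

    # Minimal sketch: per-frame inference on a webcam stream with the OpenVINO
    # runtime. The model path and input size are placeholders; post-processing
    # of the raw output depends on the specific model.
    import cv2
    import numpy as np
    import openvino.runtime as ov

    core = ov.Core()
    compiled = core.compile_model(core.read_model("pose_or_detection_model.xml"), "CPU")
    output_layer = compiled.output(0)

    cap = cv2.VideoCapture(0)                  # default webcam
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        blob = cv2.resize(frame, (256, 256)).transpose(2, 0, 1)[np.newaxis].astype(np.float32)
        raw = compiled([blob])[output_layer]   # model-specific decoding and drawing go here
    cap.release()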

Begin

Hear the Replay of the Live Broadcast from Fort Mason with Pat and Intel Senior Leaders

Listen in to get the latest announcements spanning new products, developer tools, and technologies, and hear about Intel’s focus on empowering an open ecosystem, ensuring developers can choose the tools and environments they prefer, and building trust and partnership across cloud service providers, open-source communities, startups, and others.

Begin

Take the Stress Out of Going from Training to Inferencing with the OpenVINO™ Toolkit

Take the stress out of going from training to inferencing with done-for-you integrations and extensions, ready and optimized for the OpenVINO™ toolkit.
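
For context, here is a minimal sketch of one common path from a trained model to OpenVINO™ inference, exporting through ONNX; the pretrained torchvision model is an assumption standing in for whatever model you trained yourself, not the webinar's own example:

    # Minimal sketch: take a trained PyTorch model to OpenVINO inference via ONNX.
    # The pretrained torchvision model is a stand-in for your own trained model.
    import torch
    import torchvision
    import openvino.runtime as ov

    model = torchvision.models.resnet50(weights="IMAGENET1K_V1").eval()
    dummy = torch.randn(1, 3, 224, 224)
    torch.onnx.export(model, dummy, "resnet50.onnx")

    core = ov.Core()
    compiled = core.compile_model(core.read_model("resnet50.onnx"), "CPU")
    logits = compiled([dummy.numpy()])[compiled.output(0)]
    print(logits.shape)                        # (1, 1000) ImageNet class scores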

Begin

Download OpenVINO™ Toolkit

Please download and install the OpenVINO™ toolkit before you start the training.

Download now

Stay Connected

Stay informed on the latest Edge AI training resources from Intel.

Get connected