AI Developer Webinar Series

We’ll be covering the latest in frameworks, optimization tools, and new product launches throughout the year. This is your chance to expand your AI developer toolkit in just one hour of your day. Bring your questions for our Intel experts to answer live during each webinar. Sign up for one or more sessions below and begin sharpening your AI skills.

Join us for our FREE AI webinar series

Register for one or more webinars below. If you missed a webinar, not to worry: click here for the AI on-demand webinars. Remember, we will be adding new AI webinars regularly, so bookmark this page!

Select your webinar(s):

  Sign me up for all the webinars


November 19
9:00 AM PST

Getting Started with PyTorch with Optimizations for Intel® Architecture


Learn how to get started with PyTorch* on Intel® architecture. This session will introduce the PyTorch programming model and then cover the optimizations in the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN). Explore ways to use Vector Neural Network Instructions (VNNI) on Intel® Xeon® Scalable Processors using PyTorch.


You Will Learn:

  • What the PyTorch* programming model looks like and how it differs from TensorFlow*
  • How to take advantage of Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) optimizations to boost PyTorch performance
  • How to leverage Vector Neural Network Instructions (VNNI) on Intel® Xeon® Scalable Processors for Int8 inference
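
To give a flavor of what Int8 inference rests on ahead of the session, here is a minimal pure-Python sketch of symmetric int8 quantization — the mapping from FP32 values to 8-bit integers that VNNI-accelerated inference relies on. The function names and values are illustrative only, not Intel's or PyTorch's implementation.

```python
# Illustrative sketch (not Intel's implementation): symmetric int8
# quantization of the kind Int8 inference with VNNI depends on.
# An FP32 tensor is mapped to int8 using a single scale factor.

def quantize_int8(values):
    """Map FP32 values to int8 in [-127, 127] with a symmetric scale."""
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate FP32 values from the int8 representation."""
    return [v * scale for v in q]

weights = [0.82, -1.27, 0.05, 0.4]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Each restored value lies within one quantization step of the original.
assert all(abs(a - b) <= scale for a, b in zip(weights, restored))
```

The small round-trip error shown by the final assertion is the "low loss of accuracy" trade-off that makes 8-bit inference attractive.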

Mingfei Ma

Mingfei Ma is a Deep Learning Software Engineer in the Graphics and Software Group at Intel Corporation. He focuses on performance optimization of deep learning applications on Intel architecture. Mingfei received a Master’s degree in Control Science and Technology from Harbin Institute of Technology. His interests include computer graphics, high-performance computing, and natural language processing.


December 5
9:00 AM PST

Accelerating TensorFlow* Inference with Intel® Deep Learning Boost on Intel® Xeon® Scalable Processors


In this webinar, we summarize how to obtain a TensorFlow* FP32 inference graph, how to convert and fuse many of its operations to 8-bit precision in TensorFlow* using the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN), and how to use the Intel® Model Zoo as a one-stop shop for many of these models.


Intel introduced Intel® Deep Learning Boost (Intel® DL Boost), a new set of embedded processor technologies designed to accelerate deep learning applications. Intel DL Boost includes new Vector Neural Network Instructions (VNNI) that perform computation in 8-bit precision, which reduces memory usage by 4x and increases the rate of arithmetic operations executed per second compared to 32-bit floating point, with low loss of accuracy. The process starts with a pre-trained floating-point model and produces a quantized version of it that exploits Intel DL Boost instructions to accelerate inference performance.
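
As a rough preview of the arithmetic pattern VNNI accelerates, the pure-Python sketch below multiplies 8-bit values and accumulates them into a wide integer sum — the pattern the VNNI instruction fuses into a single hardware operation. The numbers and function names here are illustrative only, not Intel's implementation.

```python
# Rough pure-Python sketch of the arithmetic pattern Intel DL Boost
# accelerates: multiply 8-bit values and accumulate into a 32-bit sum.
# (The VNNI hardware instruction fuses this whole loop body into one op.)
# Names and values are illustrative, not Intel's implementation.

def int8_dot(activations, weights):
    """Dot product of int8 vectors, accumulated in a wide integer."""
    assert all(-128 <= v <= 127 for v in activations + weights)
    acc = 0  # plays the role of the int32 accumulator
    for a, w in zip(activations, weights):
        acc += a * w
    return acc

acts = [12, -7, 100, 3]
wts = [5, 5, -2, 40]
result = int8_dot(acts, wts)

# Memory footprint: int8 storage is 4x smaller than FP32 (1 byte vs 4).
fp32_bytes = len(acts) * 4
int8_bytes = len(acts) * 1
assert fp32_bytes == 4 * int8_bytes
```

The 4x storage comparison at the end is the memory-reduction claim from the paragraph above, made concrete.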

Niroop Ammbashankar

Niroop works for Intel’s Machine Learning Performance Group as a Deep Learning Software Engineer. His graduate studies concentrated on computer vision, medical imaging, and robotics. His earlier work experience includes a self-driving telemedicine robot, 3D laser scanners, 3D mesh representations, the Intel® RealSense™ depth camera, the Intel Alloy virtual reality headset, and Intel autonomous driver assistance system tools, among others. Currently, he works on CPU optimizations for TensorFlow*.


December 10
9:00 AM PST

How PlaidML Compiles Machine Learning Kernels


Learn how PlaidML’s compiler is structured to enable state-of-the-art optimization of machine learning workloads.


As discussed in the “PlaidML Tensor Compiler” introductory webinar, you can use PlaidML to replace kernel libraries with PlaidML’s extensible, high-performance compiler. PlaidML’s philosophy is that optimal kernels can be produced automatically from hardware descriptions if the constraints inherent to ML problems are appropriately represented. PlaidML uses a nested polyhedral model to represent operations at a granularity suited to the loop-restructuring optimizations that machine learning workloads need. This webinar will show you how PlaidML transforms high-level semantic descriptions of ML operations into optimized ML kernels.
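
To illustrate what "loop restructuring" means here, the sketch below hand-writes one such transformation — tiling a matrix-multiply loop nest — which a tensor compiler like PlaidML derives automatically rather than by hand. Both versions compute the same result; this is an illustrative example, not PlaidML's generated code.

```python
# Hand-written sketch of the kind of loop restructuring a tensor
# compiler like PlaidML automates. Both functions compute the same
# matrix product; the second restructures the loop nest into tiles,
# a transformation the nested polyhedral model makes systematic.
# This is an illustration, not PlaidML output.

def matmul_naive(A, B, n):
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

def matmul_tiled(A, B, n, tile=2):
    C = [[0] * n for _ in range(n)]
    # Iterate over tiles first, then elements within each tile --
    # on real hardware this improves cache locality.
    for ii in range(0, n, tile):
        for jj in range(0, n, tile):
            for kk in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):
                    for j in range(jj, min(jj + tile, n)):
                        for k in range(kk, min(kk + tile, n)):
                            C[i][j] += A[i][k] * B[k][j]
    return C

n = 4
A = [[i + j for j in range(n)] for i in range(n)]
B = [[i * j + 1 for j in range(n)] for i in range(n)]
assert matmul_naive(A, B, n) == matmul_tiled(A, B, n)
```

Choosing good tile sizes for a given device is exactly the kind of decision the compiler makes from a hardware description.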

You Will Learn:

  • How PlaidML compiles ML kernels
  • How PlaidML enables the development and deployment of novel operations and optimization techniques

Tim Zerrell

Tim Zerrell is a Deep Learning Software Engineer in Intel’s Artificial Intelligence Products Group. He works on PlaidML, focusing on representing the mathematics of machine learning effectively for performing hardware and software optimizations. He received a Master’s degree in Mathematics from the University of Washington and a Bachelor’s degree in Mathematics from Pomona College. In his free time he enjoys hiking in his local Pacific Northwest wilderness.

Denise Kutnick

Denise Kutnick is a Deep Learning Software Engineer within Intel’s Artificial Intelligence Products Group. In her role, Denise works on the development and community engagement of PlaidML, an extensible, open-source deep learning tensor compiler. Denise holds a bachelor’s degree in Computer Science from Florida Atlantic University and a master’s degree in Computer Science from Georgia Institute of Technology.


December 12
9:00 AM PST

Non-Visual AI Inference on the Edge


This webinar will explore the compute capabilities of Intel® Xeon® Scalable Processors, Processor Graphics, and the Intel® Movidius™ Myriad™ X VPU for non-visual workloads. Using the functionality of the Intel® Distribution of OpenVINO™ toolkit, you will learn how to deploy speech recognition acoustic models using the Kaldi framework and speech feature vectors.


AI inference on the PC allows trained models to be deployed to Intel® Xeon® Scalable Processors, Processor Graphics, and the Intel® Myriad™ VPU. In previous webinars, we demonstrated how the Intel® Distribution of OpenVINO™ toolkit can ingest image recognition and classification models from various frameworks (e.g., TensorFlow*, PyTorch*) and deploy them on these hardware platforms in a performant manner. In this webinar, we will focus on new capabilities of the OpenVINO™ toolkit for deploying speech recognition models trained with the Kaldi framework to the CPU, Processor Graphics, and the VPU. You will walk away knowing how to convert a speech recognition model to Intermediate Representation (IR) using the Model Optimizer and run inference on multiple hardware targets using the Inference Engine plugins.

Learning Objectives:

  • Understand the requirements to create IR for speech recognition models: a Kaldi-trained model (e.g., a .nnet1 or .nnet2 model), a Kaldi class counts file, etc.
  • See how to use the Model Optimizer to create the IR files (.xml and .bin)
  • Use the Inference Engine to deploy to the CPU, GPU, and VPU
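
For context on the "speech feature vectors" mentioned above: acoustic models consume features computed over short, overlapping frames of audio. The sketch below shows only that framing step in pure Python — it is an illustration of the concept, not part of the OpenVINO™ toolkit or Kaldi, and the frame/hop sizes are just the commonly cited 25 ms / 10 ms convention scaled to toy numbers.

```python
# Illustrative sketch (not OpenVINO or Kaldi code): acoustic models
# consume feature vectors computed over short, overlapping frames of
# audio. This shows the framing step only; a real pipeline (e.g. Kaldi)
# would then compute MFCC or filterbank features from each frame.

def frame_signal(samples, frame_len, hop):
    """Split a waveform into overlapping frames of frame_len samples."""
    frames = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frames.append(samples[start:start + frame_len])
    return frames

# 1 second of dummy "audio" at a toy 100 Hz sample rate:
signal = list(range(100))
# 25-sample frames with a 10-sample hop (the common 25 ms / 10 ms setup,
# scaled down for illustration):
frames = frame_signal(signal, frame_len=25, hop=10)
assert len(frames) == 8 and all(len(f) == 25 for f in frames)
```

Each frame would then be turned into one feature vector, and the sequence of vectors is what the deployed acoustic model actually scores.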

Rudy Cazabon

Rudy holds a Bachelor’s degree in Space Science with a minor in Mechanical Engineering from the Florida Institute of Technology, with graduate studies in Aerospace and Astronautics at Georgia Tech and Management Science at Stanford. Rudy has run a technical consultancy in 3D graphics, VR, and computer vision; he is an active volunteer in STEM K-12 programs and participates in academic venues such as ACM SIGGRAPH.


Webinar Series Moderator

Meghana Rao

Artificial Intelligence Developer Evangelist at Intel

Bio: Meghana Rao is an Artificial Intelligence Developer Evangelist at Intel. In her role, she works closely with universities and developers to evangelize Intel’s AI portfolio and solutions, helping them understand machine learning and deep learning concepts and build models and POCs using Intel-optimized frameworks and libraries such as Caffe*, TensorFlow*, and the Intel® Distribution for Python*. She has a Bachelor’s degree in Computer Science and Engineering and a Master’s degree in Engineering and Technology Management, with past experience in embedded software development, Windows* app development, and UX design methodologies.

Enter your info to sign up

(*) All fields are required


By submitting this form, you are confirming you are an adult 18 years or older and you agree to share your personal information with Intel to use for this business request. You also agree to subscribe to stay connected to the latest Intel technologies and industry trends by email and telephone. You may unsubscribe at any time. Intel’s web sites and communications are subject to our Privacy Notice and Terms of Use.

You will receive an email confirmation to attend your selected webinar(s).