AI Developer Webinar Series

We’ll be covering the latest in frameworks, optimization tools, and new product launches throughout the year. This is your chance to expand your AI developer toolkit in just one hour of your day. Bring your questions for our Intel experts to answer live during each webinar. Sign up for one or more sessions below and start sharpening your AI skills.

Join us for our FREE AI webinar series

Register for one or more webinars below. If you missed a webinar, not to worry! Click here for the AI on-demand webinars. Remember, we will be adding new AI webinars regularly, so bookmark this page!

Select your webinar(s):

  Sign me up for all the webinars


November 6
9:00 AM PST

Develop Windows*-based AI Applications Using Windows Machine Learning (AI on PC)


In this webinar, we introduce you to the basics of Windows Machine Learning (WinML), show you how to use existing trained models (such as ONNX models) in your Windows-based applications, demonstrate how to target different devices (CPU, GPU, and so on), and walk through the process of incorporating a trained model into a Windows-based UWP application.


We will also discuss how to use the WinML APIs to load models, set up sessions, bind a model, and evaluate its inputs and outputs.

Praveen Kundurthy

Praveen Kundurthy is a Developer Evangelist at Intel Corporation. He has more than 15 years of development experience with C++, C#, and Python, and his main interests are artificial intelligence, Windows* programming, and game development. Praveen has been with Intel for more than nine years. He works closely with the developer community, trains developers on Intel tools and technologies, helps them understand how technologies can be applied by developing proofs of concept, and writes blog posts and technical articles for the Intel Developer Zone website. He has a Master of Science in Computer Engineering and experience in multiple technologies, such as Alexa* for PC, game programming and game optimization, Windows* programming, Android programming, and storage technologies.


November 14
9:00 AM PST

Maximize the Use of CPU Resources for XGBoost* Training


Learn how to speed up your boosting algorithm workloads on CPU with Intel® Data Analytics Acceleration Library (Intel® DAAL), a highly optimized library for Intel® CPUs.


Gradient boosting has many real-world applications as a general-purpose, supervised learning technique for regression, classification, and page-ranking problems. It’s a common choice for large problem sizes, yet an efficient training implementation of this method is quite complex: execution time is affected by multiple kernel dependencies, irregular memory access patterns, and many other issues.

Join us for a webinar to learn about the optimizations that have been made to XGBoost* and how to take advantage of them in your workloads. We’ll also present example training workloads that compare the performance of the latest XGBoost* implementation in an end-to-end pipeline.
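For readers who want a concrete picture before the session, here is a minimal pure-Python sketch of the boosting loop itself: gradient boosting with decision stumps under squared loss. All function names and data below are illustrative; XGBoost* adds regularization, histogram-based split finding, and the CPU optimizations this webinar covers.

```python
# Minimal gradient-boosting sketch (squared loss, decision stumps).
# Illustrative only -- not the XGBoost* implementation.

def fit_stump(x, residuals):
    """Find the threshold split on 1-D feature x that best fits residuals."""
    best = None
    for threshold in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= threshold]
        right = [r for xi, r in zip(x, residuals) if xi > threshold]
        if not left or not right:
            continue
        lmean = sum(left) / len(left)
        rmean = sum(right) / len(right)
        sse = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, threshold, lmean, rmean)
    _, t, lv, rv = best
    return lambda xi: lv if xi <= t else rv

def boost(x, y, rounds=20, lr=0.3):
    """Each round fits a stump to the residuals (the negative gradient of
    squared loss) and adds a damped copy of it to the ensemble."""
    pred = [0.0] * len(y)
    stumps = []
    for _ in range(rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(x, residuals)
        stumps.append(stump)
        pred = [pi + lr * stump(xi) for pi, xi in zip(pred, x)]
    return lambda xi: sum(lr * s(xi) for s in stumps)

x = [1, 2, 3, 4, 5, 6]
y = [1.0, 1.2, 0.9, 3.1, 3.0, 2.9]  # a noisy step function
model = boost(x, y)
```

After a few rounds the ensemble closely tracks the step in the data; real libraries run this loop over millions of rows and many features, which is where the memory-access and kernel-dependency issues above come in.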

Abdulmecit Gungor

Abdulmecit Gungor received a Bachelor’s degree in Electronics Engineering with a minor in Mathematics from City University of Hong Kong, along with The S. H. Ho Foundation academic achievement award. He worked as a research engineer and then completed his Master’s degree at Purdue University. His interests are NLP application development for real-life problems, text mining, and statistical machine learning.


November 19
9:00 AM PST

Getting Started with PyTorch with Optimizations for Intel® Architecture


Learn how to get started with PyTorch* on Intel® Architecture. This session will introduce the PyTorch programming model and then cover the optimizations in the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN). Explore ways to use Vector Neural Network Instructions (VNNI) on Intel® Xeon® Scalable processors using PyTorch.


You Will Learn:

  • What the PyTorch* programming model looks like and how it differs from TensorFlow*
  • How to take advantage of Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) optimizations to boost PyTorch performance
  • How to leverage Vector Neural Network Instructions (VNNI) on Intel® Xeon® Scalable processors for int8 inference
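To preview the first point, here is a minimal pure-Python sketch of the define-by-run idea behind PyTorch’s programming model: the computation graph is recorded while ordinary Python executes, rather than declared ahead of time as in classic TensorFlow*. The `Var` class below is our own illustration, not a PyTorch API.

```python
# Sketch of "define-by-run": the graph is built as Python runs,
# so ordinary control flow (loops, ifs) just works.
class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        self._parents = parents  # (parent, local_gradient) pairs

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Var(self.value * other.value,
                   parents=((self, other.value), (other, self.value)))

    def __add__(self, other):
        return Var(self.value + other.value,
                   parents=((self, 1.0), (other, 1.0)))

    def backward(self, grad=1.0):
        # Reverse-mode pass over the graph recorded during the forward run.
        self.grad += grad
        for parent, local in self._parents:
            parent.backward(grad * local)

x = Var(3.0)
y = x * x + x   # graph is recorded while this line executes
y.backward()    # dy/dx = 2x + 1 = 7
```

In PyTorch the same idea is implemented by tensors with `requires_grad` and a `backward()` call; the contrast with a statically declared graph is the substance of the first bullet above.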

Mingfei Ma

Mingfei Ma is a Deep Learning Software Engineer in the Graphics and Software Group at Intel Corporation. He focuses on performance optimization of deep learning applications on Intel® Architecture. Mingfei received a Master’s degree in Control Science and Technology from Harbin Institute of Technology. His interests are computer graphics, high-performance computing, and natural language processing.


December 5
9:00 AM PST

Accelerating TensorFlow* Inference with Intel® Deep Learning Boost on Intel® Xeon® Scalable Processors


In this webinar, we show how to obtain a TensorFlow* FP32 inference graph, convert and fuse many of its ops to 8-bit precision in TensorFlow* using the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN), and use the Intel Model Zoo as a one-stop shop for many of these models.


Intel introduced Intel® Deep Learning Boost (Intel® DL Boost), a new set of embedded processor technologies designed to accelerate deep learning applications. Intel DL Boost includes new Vector Neural Network Instructions (VNNI) that perform computation in 8-bit precision, which reduces memory usage by up to 4x and increases the rate of arithmetic operations executed per second compared to floating-point precision, with low loss of accuracy. The process starts with a pre-trained model in floating-point precision and produces a quantized version of the model that exploits Intel DL Boost instructions to accelerate inference performance.
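As a rough illustration of the arithmetic involved (not Intel’s implementation), the sketch below quantizes two small vectors to symmetric int8, accumulates their dot product in integers, which is the multiply-accumulate pattern VNNI accelerates in hardware, and dequantizes once at the end. All helper names and values are illustrative.

```python
# Symmetric int8 quantization of a dot product -- illustrative only.

def quantize(values, num_bits=8):
    """Map floats onto the signed integer range [-127, 127]."""
    max_abs = max(abs(v) for v in values)
    scale = max_abs / (2 ** (num_bits - 1) - 1)
    q = [round(v / scale) for v in values]
    # (a full implementation would also clamp and handle all-zero inputs)
    return q, scale

def int8_dot(a, b):
    qa, sa = quantize(a)
    qb, sb = quantize(b)
    acc = sum(x * y for x, y in zip(qa, qb))  # integer multiply-accumulate
    return acc * sa * sb                      # dequantize once at the end

a = [0.5, -1.0, 0.25, 0.75]
b = [1.0, 0.5, -0.5, 0.25]
exact = sum(x * y for x, y in zip(a, b))  # full-precision reference
approx = int8_dot(a, b)                   # int8 approximation
```

The int8 result lands very close to the full-precision one, which is the "low loss of accuracy" trade-off described above: 4x less memory per value in exchange for small rounding error.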

Niroop Ammbashankar

Niroop works in Intel’s Machine Learning Performance Group as a Deep Learning Software Engineer. His graduate studies concentrated on computer vision, medical imaging, and robotics. His earlier work includes a self-driving telemedicine robot, 3D laser scanners, 3D mesh representations, the Intel® RealSense™ depth camera, the Intel Alloy virtual reality headset, and Intel® autonomous driver-assistance system tools, among others. Currently, he works on CPU optimizations for TensorFlow*.


December 10
9:00 AM PST

How PlaidML Compiles Machine Learning Kernels


Learn how PlaidML’s compiler is structured to enable state-of-the-art optimization of machine learning workloads.


As discussed in the “PlaidML Tensor Compiler” introductory webinar, you can use PlaidML to replace kernel libraries with PlaidML’s extensible, high-performance compiler. PlaidML’s philosophy is that optimal kernels can be produced automatically from hardware descriptions if the constraints inherent to ML problems are appropriately represented. PlaidML uses a nested polyhedral model to represent operations at a granularity suited to the loop-restructuring optimizations needed by machine learning workloads. This webinar will show you how PlaidML transforms high-level semantic descriptions of ML operations into optimized ML kernels.

You Will Learn:

  • How PlaidML compiles ML kernels
  • How PlaidML enables the development and deployment of novel operations and optimization techniques
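As a small, hand-written illustration of the kind of loop restructuring at stake, the sketch below tiles a matrix multiply. A compiler such as PlaidML derives transformations like this automatically from a high-level operation description; the tile size and function names here are our own illustrative choices.

```python
# Loop tiling for matrix multiply -- illustrative only; a tensor
# compiler would choose tile sizes from the hardware description
# (cache and register capacities) rather than a fixed constant.

def matmul_naive(A, B, n):
    """Reference multiply: one big i/j/k loop nest."""
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

def matmul_tiled(A, B, n, tile=2):
    """Same computation, restructured so each small tile of A, B,
    and C is reused while it is still hot in cache/registers."""
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):          # loop over tiles...
        for jj in range(0, n, tile):
            for kk in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):      # ...then over
                    for j in range(jj, min(jj + tile, n)):  # elements within
                        for k in range(kk, min(kk + tile, n)):  # one tile
                            C[i][j] += A[i][k] * B[k][j]
    return C

n = 3
A = [[float(i * n + j) for j in range(n)] for i in range(n)]
B = [[float((i + j) % n) for j in range(n)] for i in range(n)]
```

Both versions compute the same result; only the iteration order (and hence the memory-access pattern) differs, which is exactly the class of transformation the webinar describes PlaidML performing automatically.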

Tim Zerrell

Tim Zerrell is a Deep Learning Software Engineer in Intel’s Artificial Intelligence Products Group. He works on PlaidML, focusing on representing the mathematics of machine learning effectively for performing hardware and software optimizations. He received a Master’s degree in Mathematics from the University of Washington and a Bachelor’s degree in Mathematics from Pomona College. In his free time he enjoys hiking in his local Pacific Northwest wilderness.

Denise Kutnick

Denise Kutnick is a Deep Learning Software Engineer within Intel’s Artificial Intelligence Products Group. In her role, Denise works on the development and community engagement of PlaidML, an extensible, open-source deep learning tensor compiler. Denise holds a bachelor’s degree in Computer Science from Florida Atlantic University and a master’s degree in Computer Science from Georgia Institute of Technology.


December 12
9:00 AM PST

Non-Visual AI Workloads on the Intel® Movidius™ Myriad™ X VPU


This webinar will explore the flexibility of both the compute capabilities of the Intel® Movidius™ Myriad™ X VPU and the Intel® Distribution of OpenVINO™ toolkit for developing solutions based on non-visual AI models in various domains, such as speech recognition. The webinar explores speech-recognition acoustic model inference based on Kaldi* neural networks and speech feature vectors.


The Intel® Movidius™ Myriad™ X VPU provides exceptional inference compute capabilities in a small form factor with a low power-consumption profile. The Intel® Distribution of OpenVINO™ toolkit enables developers to ingest AI models from various frameworks, such as TensorFlow* and PyTorch*, and target those models to execute on the VPU in a performant manner. This combination of a mature toolkit and a highly capable VPU has allowed developers all over the world to deploy AI vision solutions quickly; this webinar applies the same combination to non-visual AI models, walking through speech-recognition acoustic model inference based on Kaldi* neural networks and speech feature vectors.

Rudy Cazabon

Rudy holds a Bachelor’s degree in Space Science with a minor in Mechanical Engineering from the Florida Institute of Technology, with graduate studies in Aerospace and Astronautics at Georgia Tech and in Management Science at Stanford. Rudy has run a technical consultancy in 3D graphics, VR, and computer vision; he is an active volunteer in STEM K-12 programs and participates in academic venues such as ACM SIGGRAPH.


Webinar Series Moderator

Meghana Rao

Artificial Intelligence Developer Evangelist at Intel

Bio: Meghana Rao is an Artificial Intelligence Developer Evangelist at Intel. In her role, she works closely with universities and developers to evangelize Intel’s AI portfolio and solutions, helping them understand machine learning and deep learning concepts and build models and proofs of concept (POCs) using Intel-optimized frameworks and libraries such as Caffe*, TensorFlow*, and the Intel® Distribution for Python*. She has a Bachelor’s degree in Computer Science and Engineering and a Master’s degree in Engineering and Technology Management, with past experience in embedded software development, Windows* app development, and UX design methodologies.

Enter your info to sign up

(*)All fields are required


By submitting this form, you are confirming you are an adult 18 years or older and you agree to share your personal information with Intel to use for this business request. You also agree to subscribe to stay connected to the latest Intel technologies and industry trends by email and telephone. You may unsubscribe at any time. Intel’s web sites and communications are subject to our Privacy Notice and Terms of Use.

You will receive an email confirmation to attend your selected webinar(s).