AI Developer Webinar Series

We’ll be covering the latest in frameworks, optimization tools, and new product launches throughout the year. This is your chance to expand your AI developer toolkit in just one hour of your day. Bring your questions for our Intel experts to answer live during each webinar. Sign up for one or more sessions below and begin sharpening your AI skills.

Join us for our FREE AI webinar series

Register for one or more webinars below. If you missed a webinar, not to worry! Click here for the AI on-demand webinars. Remember, we will be adding new AI webinars regularly, so bookmark this page!

Select your webinar(s):

  Sign me up for all the webinars


August 29
9:00 AM PDT

Using FPGAs for Datacenter Acceleration


Learn how to deploy deep learning inference tasks on FPGAs using the Intel® Distribution of OpenVINO™ toolkit and more.


Field-programmable gate arrays (FPGAs) are user-customizable integrated circuits that can be deployed in datacenters to accelerate algorithms. In this webinar, explore Intel solutions to accelerate various workloads. Discover how deep learning inference tasks can be deployed on FPGAs using the Intel® Distribution of OpenVINO™ toolkit and the Intel® FPGA Deep Learning Acceleration Suite. See how to use the Acceleration Stack for Intel® Xeon® CPU with FPGA to develop and deploy workload optimizations on Intel® Programmable Acceleration Cards. Examine ways to develop custom Accelerator Functional Units for the FPGA.
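A key idea behind acceleration stacks like the one above is heterogeneous dispatch: each operation runs on the accelerator when it has a supported implementation there, and falls back to the CPU otherwise. The following is a toy sketch of that idea in pure Python; all names here are illustrative and not part of any Intel API.

```python
# Toy sketch of heterogeneous dispatch: try a preferred accelerator backend
# for each operation, falling back to the CPU when the accelerator lacks
# support. Purely illustrative -- not the OpenVINO or Acceleration Stack API.

def cpu_matmul(a, b):
    """Reference CPU implementation of a matrix multiply."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

# Pretend our "FPGA" image only implements convolution, not matmul.
FPGA_SUPPORTED = {"conv2d"}

def dispatch(op_name, cpu_impl, *args):
    """Run op on the accelerator if supported there, else fall back to CPU."""
    if op_name in FPGA_SUPPORTED:
        backend = "fpga"
        result = cpu_impl(*args)  # stand-in: a real stack would invoke the FPGA
    else:
        backend = "cpu"
        result = cpu_impl(*args)
    return backend, result

# matmul is not in the FPGA's supported set, so it falls back to the CPU.
backend, out = dispatch("matmul", cpu_matmul, [[1, 2]], [[3], [4]])
```

A real deployment expresses the same fallback declaratively (e.g., a device priority list), but the partitioning decision per operation is the same shape as this sketch.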

You Will Learn:

  • How to leverage the solutions provided by Intel to accelerate various workloads
  • How deep learning inference tasks can be deployed on FPGAs using the Intel Distribution of OpenVINO toolkit and the Intel FPGA Deep Learning Acceleration Suite
  • Ways to develop custom Accelerator Functional Units for the FPGA

Steven Elzinga

Steven Elzinga is an Application Engineer in the Customer Training group of Intel’s Programmable Solutions Group, focusing on deep learning acceleration techniques for the FPGA. His FPGA experience also includes embedded systems and real-time video processing as an IP and system developer. Steven holds a bachelor’s degree in electrical engineering from the University of Utah and a master’s degree in electrical engineering from the University of Colorado.


September 26
9:00 AM PDT

Optimizing TensorFlow for High Performance on New Backend Devices Using nGraph


In this webinar, we introduce the nGraph compiler and runtime and present their benefits in terms of graph compilation and backend abstraction. We focus on how a TensorFlow computation graph is transformed into nGraph IR, optimized, and executed on a specific backend.


Explore how the architecture of the nGraph-TensorFlow bridge enables easy integration of hardware backends (such as CPU, GPU, and NNP) and software backends (such as homomorphic encryption and PlaidML). Walk through various features of nGraph using TensorFlow deep learning models and see how easy it is for end users to make minimal changes to their TensorFlow scripts to enable nGraph for optimizations and migration to new backends. Watch demonstrations of debugging tools, techniques to visualize the graph compilation process, and troubleshooting.
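To give a flavor of the graph-level rewrites such a compiler performs before handing work to a backend, here is a toy constant-folding pass over a small expression graph. This is pure Python and purely illustrative; it is not nGraph’s actual IR or API.

```python
# Toy constant-folding pass over an expression graph, illustrating one kind
# of graph-level optimization a deep learning compiler applies. Graphs are
# nested tuples: (op, lhs, rhs); leaves are numbers or symbolic input names.

def fold(node):
    """Recursively replace add/mul subtrees with all-constant inputs."""
    if not isinstance(node, tuple):
        return node  # a constant or a symbolic input like "x"
    op, lhs, rhs = node
    lhs, rhs = fold(lhs), fold(rhs)
    if isinstance(lhs, (int, float)) and isinstance(rhs, (int, float)):
        return lhs + rhs if op == "add" else lhs * rhs
    return (op, lhs, rhs)

# (x * (2 + 3)) folds to (x * 5); the symbolic input x is left alone.
graph = ("mul", "x", ("add", 2, 3))
folded = fold(graph)
```

Real compilers apply many such passes (folding, fusion, layout assignment) over a much richer IR, but each pass has this same rewrite-the-graph shape.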

You Will Learn:

  • How compilers designed specifically for deep learning can achieve significant performance increases, even on existing hardware
  • That deep learning compilers like nGraph and PlaidML are easy to use with TensorFlow
  • That these compilers require minimal changes to previously written TensorFlow scripts

Avijit Chakraborty

Avijit Chakraborty is a Principal Engineer in the Artificial Intelligence Products Group at Intel Corporation. He currently leads the nGraph integration effort with TensorFlow*. Before joining Intel in 2017, he was a software architect and led software development teams in deep learning frameworks, neuromorphic computing frameworks, and cellular modem technologies at Qualcomm Research and Development.


December 10
9:00 AM PDT

How PlaidML Compiles Machine Learning Kernels


Learn how PlaidML’s compiler is structured to enable state-of-the-art optimization of machine learning workloads.


As discussed in the “PlaidML Tensor Compiler” introductory webinar, you can use PlaidML to replace kernel libraries with PlaidML’s extensible and high-performance compiler. PlaidML’s philosophy is that optimal kernels can be automatically produced from hardware descriptions if the constraints inherent to ML problems are appropriately represented. PlaidML utilizes a Nested Polyhedral Model to represent operations at a granularity suited for the loop-restructuring optimizations needed by machine learning workloads. This webinar will show you how PlaidML transforms high-level semantic descriptions of ML operations into optimized ML kernels.
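A classic example of the loop restructuring mentioned above is tiling: splitting a loop nest into cache-sized blocks without changing its result. The pure-Python toy below shows the transformation itself; PlaidML derives such schedules automatically from its polyhedral representation, and this sketch is not its API.

```python
# Toy illustration of loop tiling, one loop-restructuring optimization a
# tensor compiler applies: the tiled nest visits the same iterations as the
# naive one, just in a more cache-friendly order, so results are identical.

def matmul_naive(a, b, n):
    c = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                c[i][j] += a[i][k] * b[k][j]
    return c

def matmul_tiled(a, b, n, tile=2):
    c = [[0] * n for _ in range(n)]
    for ii in range(0, n, tile):              # outer loops walk over tiles
        for jj in range(0, n, tile):
            for kk in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):   # inner loops stay
                    for j in range(jj, min(jj + tile, n)):  # within one tile
                        for k in range(kk, min(kk + tile, n)):
                            c[i][j] += a[i][k] * b[k][j]
    return c

n = 4
a = [[i + j for j in range(n)] for i in range(n)]
b = [[i * j for j in range(n)] for i in range(n)]
assert matmul_naive(a, b, n) == matmul_tiled(a, b, n)  # same result, new order
```

Choosing the tile size per target (cache size, vector width) is exactly the kind of decision a compiler can make from a hardware description rather than a hand-tuned kernel library.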

You Will Learn:

  • How PlaidML compiles ML kernels
  • How PlaidML enables the development and deployment of novel operations and optimization techniques

Tim Zerrell

Tim Zerrell is a Deep Learning Software Engineer in Intel’s Artificial Intelligence Products Group. He works on PlaidML, focusing on representing the mathematics of machine learning in ways that enable hardware and software optimizations. He received a master’s degree in mathematics from the University of Washington and a bachelor’s degree in mathematics from Pomona College. In his free time he enjoys hiking in his local Pacific Northwest wilderness.

Denise Kutnick

Denise Kutnick is a Deep Learning Software Engineer within Intel’s Artificial Intelligence Products Group. In her role, Denise works on the development and community engagement of PlaidML, an extensible, open-source deep learning tensor compiler. Denise holds a bachelor’s degree in Computer Science from Florida Atlantic University and a master’s degree in Computer Science from Georgia Institute of Technology.


Webinar Series Moderator

Meghana Rao

Artificial Intelligence Developer Evangelist at Intel

Bio: Meghana Rao is an Artificial Intelligence Developer Evangelist at Intel. In her role, she works closely with universities and developers in evangelizing Intel’s AI portfolio and solutions, helping them understand machine learning and deep learning concepts and build models and POCs using Intel-optimized frameworks and libraries such as Caffe*, TensorFlow*, and the Intel® Distribution for Python*. She has a bachelor’s degree in computer science and engineering and a master’s degree in engineering and technology management, with past experience in embedded software development, Windows* app development, and UX design methodologies.

Enter your info to sign up

(*) All fields are required


By submitting this form, you are confirming you are an adult 18 years or older and you agree to share your personal information with Intel to use for this business request. You also agree to subscribe to stay connected to the latest Intel technologies and industry trends by email and telephone. You may unsubscribe at any time. Intel’s web sites and communications are subject to our Privacy Notice and Terms of Use.

You will receive an email confirmation to attend your selected webinar(s).