
                  Intel® AI Developer Program

Join us for our FREE AI webinar series

Register for one or more webinars below. If you missed a webinar, not to worry: every session is also available on demand further down this page. We will be adding new AI webinars regularly, so bookmark this page!

Select your webinar(s):

  Sign me up for all the webinars


July 10
9:00 AM PDT

Going Deep with Reinforcement Learning Coach


Learn how to get up to speed with Reinforcement Learning Coach. Dive into ways to extend Reinforcement Learning Coach to add your own Agent and Environment.


Walk through how to use OpenAI Gym and RL Coach on the CLI and through Python. Examine how to track your progress with Coach Dashboard. Delve into some of the existing agents and environments while exploring the extensibility of RL Coach for your own project. Lastly, discover how to utilize Amazon SageMaker and Intel CPUs to run your reinforcement learning jobs.
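
For a taste of the Python walkthrough, here is a minimal sketch that trains a ClippedPPO agent on the OpenAI Gym CartPole environment, following the pattern in RL Coach's published examples; the module paths assume the rl-coach pip package and may shift between releases.

    # Minimal RL Coach sketch: train a ClippedPPO agent on CartPole.
    # Assumes `pip install rl-coach`; the CLI equivalent is roughly
    # `coach -p CartPole_ClippedPPO -r`.
    from rl_coach.agents.clipped_ppo_agent import ClippedPPOAgentParameters
    from rl_coach.environments.gym_environment import GymVectorEnvironment
    from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager
    from rl_coach.graph_managers.graph_manager import SimpleSchedule

    graph_manager = BasicRLGraphManager(
        agent_params=ClippedPPOAgentParameters(),
        env_params=GymVectorEnvironment(level='CartPole-v0'),
        schedule_params=SimpleSchedule()
    )
    graph_manager.improve()  # runs the training/evaluation schedule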

You Will Learn:

  • Review Reinforcement Learning basics
  • Dive into Examples of OpenAI Gym and RL Coach
  • Explore Coach Dashboard
  • Integrate your own RL Agent and Environment
  • Discover ways to integrate Amazon SageMaker

Michael Zephyr

Michael Zephyr is an AI Developer Evangelist within the Intel Architecture, Graphics and Software Group at Intel. He works on promoting various Intel technologies that pertain to machine learning and artificial intelligence and regularly speaks at universities and conferences to help spread knowledge of AI. Michael holds a bachelor's degree in Computer Science from Oregon State University and a master's degree in Computer Science from the Georgia Institute of Technology. He can often be found playing board games or video games and lounging with his wife and cat in his free time.


July 17
9:00 AM PDT

The PlaidML Tensor Compiler


Tensor compilers provide an opportunity to bypass the challenges of using kernel libraries to optimize ML workloads. Learn more about PlaidML, Intel’s open-source tensor compiler, in this webinar.


A naive implementation of a convolution can be written in 18 lines of Python, while kernel libraries typically devote tens of thousands of lines of code to implementing optimized variants of convolution, often in architecture-specific ways or even assembly in the case of cuDNN*. In cases like this where specialized code is needed to achieve performance goals, massive engineering resources are required, maintenance and development costs are high, and lock-in is common. Compilers were originally created to automate construction of general-purpose machine code. Nowadays, machine learning is increasingly using compilation, reducing engineering constraints and enabling automated special-case performance upgrades for workloads too rare or novel to see human optimization. PlaidML is a tensor compiler that can be used as a component in existing ML stacks to boost performance and to enable performance portability.
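
To make that comparison concrete, here is roughly what such a naive convolution looks like in plain Python with NumPy; this is our own illustration for scale, not PlaidML code, and it ignores padding, stride, and performance entirely.

    import numpy as np

    def naive_conv2d(x, w):
        # x: input of shape (H, W, CI); w: filters of shape (KH, KW, CI, CO).
        H, W, CI = x.shape
        KH, KW, _, CO = w.shape
        out = np.zeros((H - KH + 1, W - KW + 1, CO))
        for i in range(out.shape[0]):              # output rows
            for j in range(out.shape[1]):          # output columns
                for co in range(CO):               # output channels
                    for ki in range(KH):           # kernel rows
                        for kj in range(KW):       # kernel columns
                            for ci in range(CI):   # input channels
                                out[i, j, co] += x[i + ki, j + kj, ci] * w[ki, kj, ci, co]
        return out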

You Will Learn:

  • How to use PlaidML in an existing TensorFlow* program through demonstration (a minimal sketch follows this list)
  • About the PlaidML internal architecture and its role in the broader ML ecosystem
  • About the technical details we will discuss further in our upcoming webinar, “How PlaidML Compiles Machine Learning Kernels”
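
As a quick preview (the webinar itself demonstrates the TensorFlow* integration), the commonly documented way to try PlaidML is to install it as a Keras backend before Keras is imported; this sketch assumes `pip install plaidml-keras` and that `plaidml-setup` has been run once to choose a device.

    # Sketch: route an existing Keras model through PlaidML instead of
    # the default backend. Assumes `pip install plaidml-keras` and a
    # device chosen via `plaidml-setup`.
    import plaidml.keras
    plaidml.keras.install_backend()  # must run before importing keras

    import numpy as np
    import keras
    from keras.applications import MobileNet

    model = MobileNet()  # compiled and executed by PlaidML from here on
    print(model.predict(np.zeros((1, 224, 224, 3), dtype="float32")).shape)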

Tim Zerrell

Tim Zerrell is a Deep Learning Software Engineer in Intel’s Artificial Intelligence Products Group. He works on PlaidML, focusing on representing the mathematics of machine learning effectively for performing hardware and software optimizations. He received a Master’s degree in Mathematics from the University of Washington and a Bachelor’s degree in Mathematics from Pomona College. In his free time he enjoys hiking in his local Pacific Northwest wilderness.

Denise Kutnick

Denise Kutnick is a Deep Learning Software Engineer within Intel’s Artificial Intelligence Products Group. In her role, Denise works on the development and community engagement of PlaidML, an extensible, open-source deep learning tensor compiler. Denise holds a bachelor’s degree in Computer Science from Florida Atlantic University and a master’s degree in Computer Science from Georgia Institute of Technology.


July 24
9:00 AM PDT

Getting Started with NLP Architect


Learn how to get up to speed with NLP Architect on your natural language processing workload. NLP Architect is an open-source Python library from the Intel® AI Lab for exploring state-of-the-art deep learning topologies and techniques for natural language processing and natural language understanding.


Walk through how to install NLP Architect and supported deep learning frameworks. Take a look at the NLP Architect Model Zoo and see how to deploy any of its existing pre-trained models using a REST API. Run through an end-to-end workload using Aspect-Based Sentiment Analysis, from loading and cleaning data to training and inference. Lastly, we’ll explore NLP Architect ready solutions for your NLP/NLU workloads.
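
As a rough preview of the REST-deployment step, here is how a client might query a locally running NLP Architect model server; the port and endpoint path below are illustrative assumptions, so check the NLP Architect documentation for the exact server invocation.

    # Sketch: query a pre-trained NLP Architect model served over REST.
    # Assumes `pip install nlp-architect` and a demo server running
    # locally; the port and /inference path are illustrative guesses.
    import requests

    payload = {"docs": [{"id": 1,
                         "doc": "The food was great but the service was slow."}]}
    response = requests.post("http://localhost:8080/inference",
                             json=payload,
                             headers={"Content-Type": "application/json"})
    print(response.json())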

You Will Learn:

  • How to use the Installation Guide for NLP Architect
  • How to deploy pre-trained models using a REST API
  • About an end-to-end example using Aspect-Based Sentiment Analysis
  • More about NLP Architect Solutions

Abdulmecit Gungor

Abdulmecit Gungor received a bachelor's degree in Electronics Engineering and a minor in Mathematics from City University of Hong Kong with The S. H. Ho Foundation academic achievement award. He worked as a research engineer, then completed his master's degree at Purdue. His interests are NLP application development for real-life problems, text mining, and statistical machine learning.

Peter Izsak

Peter is a Deep Learning Data Scientist and a member of Intel AI Lab, a research team within Intel’s AI Product Group. He holds a BSc and MSc in Computer Science and Information Retrieval from the Technion – Israel Institute of Technology. Peter is leading the development of NLP Architect – an open source library of NLP/NLU models developed within Intel AI Lab, working on novel NLP/NLU research using deep learning approaches and optimizing neural networks.


August 6
9:00 AM PDT

What are FPGAs and How Do I Use Them?


This introductory course is a high-level overview of what a field programmable gate array (FPGA) is, why FPGAs are important as inference accelerators, and how easily they can be adopted into compute clusters.


Learn how heterogeneous parallel computing is used to solve complex problems. Discover how FPGAs are used for efficient compute offload to overcome the limitations of scaling systems. Walk through the different programming models that exist for FPGAs. See how the Acceleration Stack for Intel® Xeon® CPU with FPGAs can be deployed transparently into data centers and cloud systems to take advantage of FPGA-based acceleration.

You Will Learn:

  • About FPGAs and what programming models exist for FPGAs
  • How FPGAs are used for efficient compute offload
  • How the Acceleration Stack for Intel® Xeon® CPU with FPGAs is used in data center and cloud systems

Bill Jenkins

Bill Jenkins serves as a principal application engineer inside the Programmable Solutions Group at Intel Corporation. He focuses on the acceleration of a variety of workloads in data center, cloud, and edge applications using field programmable gate arrays (FPGAs). He has also been one of the key driving forces behind the adoption of higher-level programming models for FPGAs to enable software developers and scientists to target the FPGA.


August 13
9:00 AM PDT

nGraph: Unlocking Next-generation Performance with Deep Learning Compilers


The rapid growth of deep learning in large-scale real-world applications has led to a rapid increase in demand for high-performance training and inference solutions. This demand is reflected in increased investment in deep learning performance by hardware manufacturers, and includes a proliferation of new application-specific accelerators.


But performance isn’t driven by hardware alone. In the software realm, a new class of deep learning compilers has emerged, which brings to bear both classic and novel compiler techniques in order to maximize the performance of deep learning systems. Recently developed deep learning compilers include NNVM/TVM from the University of Washington, Glow from Facebook, XLA from Google, and nGraph from Intel. These deep learning compilers unlock a wealth of optimizations that encompass the whole data-flow graph. This approach achieves substantial speedups over the approach favored by existing frameworks, in which an interpreter orchestrates the invocation of per-op compute kernels that must be optimized specifically for the framework and hardware target. This webinar will offer a comprehensive overview of Intel’s nGraph deep learning compiler.
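
For TensorFlow users, a low-friction way to try nGraph at the time of this webinar was the nGraph-TensorFlow bridge, which registers itself on import so that supported subgraphs are compiled and executed by nGraph; a minimal sketch, assuming `pip install ngraph-tensorflow-bridge` and a TensorFlow 1.x environment:

    # Sketch: enable nGraph for an existing TensorFlow 1.x program.
    import tensorflow as tf
    import ngraph_bridge  # importing the bridge activates nGraph

    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[5.0, 6.0], [7.0, 8.0]])
    c = tf.matmul(a, b)

    with tf.Session() as sess:
        print(sess.run(c))  # supported ops run through the nGraph compiler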

Adam Procter

Adam Procter is a deep learning software engineer in the Artificial Intelligence Products Group at Intel, where he works on the core design of the Intel nGraph deep learning compiler. He holds a PhD in computer science from the University of Missouri, where his research focused on programming language semantics, high-assurance computing, and techniques for compiling functional programming languages to reconfigurable hardware.


August 15
9:00 AM PDT

How PlaidML Compiles Machine Learning Kernels


Learn how PlaidML’s compiler is structured to enable state-of-the-art optimizations of machine learning workloads.


As discussed in the “PlaidML Tensor Compiler” introductory webinar, you can use PlaidML to replace kernel libraries with PlaidML’s extensible, high-performance compiler. PlaidML’s philosophy is that optimal kernels can be automatically produced from hardware descriptions if the constraints inherent to ML problems are appropriately represented. PlaidML utilizes a Nested Polyhedral Model to represent operations at a granularity suited for the loop restructuring optimizations needed by machine learning workloads. This webinar will show you how PlaidML transforms high-level semantic descriptions of ML operations into optimized ML kernels.
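
To give a feel for the kind of loop restructuring such a model enables (our own generic illustration, not PlaidML's actual output), compare a naive matrix-multiply loop nest with a cache-friendly tiled version:

    import numpy as np

    def matmul_naive(A, B, C):
        # Textbook triple loop: poor cache reuse for large matrices.
        M, K = A.shape
        _, N = B.shape
        for i in range(M):
            for j in range(N):
                for k in range(K):
                    C[i, j] += A[i, k] * B[k, j]

    def matmul_tiled(A, B, C, T=32):
        # Same arithmetic, restructured into T x T tiles so blocks of
        # A, B, and C stay resident in cache across the inner loops.
        # This is the style of transformation a tensor compiler derives
        # automatically from a high-level operation description.
        M, K = A.shape
        _, N = B.shape
        for i0 in range(0, M, T):
            for j0 in range(0, N, T):
                for k0 in range(0, K, T):
                    for i in range(i0, min(i0 + T, M)):
                        for j in range(j0, min(j0 + T, N)):
                            for k in range(k0, min(k0 + T, K)):
                                C[i, j] += A[i, k] * B[k, j]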

You Will Learn:

  • How PlaidML compiles ML kernels
  • How PlaidML enables the development and deployment of novel operations and optimization techniques

Tim Zerrell

Tim Zerrell is a Deep Learning Software Engineer in Intel’s Artificial Intelligence Products Group. He works on PlaidML, focusing on representing the mathematics of machine learning effectively for performing hardware and software optimizations. He received a Master’s degree in Mathematics from the University of Washington and a Bachelor’s degree in Mathematics from Pomona College. In his free time he enjoys hiking in his local Pacific Northwest wilderness.

Denise Kutnick

Denise Kutnick is a Deep Learning Software Engineer within Intel’s Artificial Intelligence Products Group. In her role, Denise works on the development and community engagement of PlaidML, an extensible, open-source deep learning tensor compiler. Denise holds a bachelor’s degree in Computer Science from Florida Atlantic University and a master’s degree in Computer Science from Georgia Institute of Technology.


August 29
9:00 AM PDT

Using FPGAs for Datacenter Acceleration


Learn how to deploy deep learning inference tasks on FPGAs using the Intel® Distribution of OpenVINO™ toolkit and more.


Field-programmable gate arrays, or FPGAs, are user-customizable integrated circuits that can be deployed in data centers to accelerate algorithms. In this webinar, explore Intel solutions to accelerate various workloads. Discover how deep learning inference tasks can be deployed on FPGAs using the Intel® Distribution of OpenVINO™ toolkit and the Intel® FPGA Deep Learning Acceleration Suite. See how to use the Acceleration Stack for Intel® Xeon® CPU with FPGAs to develop and deploy workload optimizations on Intel® Programmable Acceleration Cards. Examine ways to develop custom Accelerator Functional Units for the FPGA.
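
As a preview of the deployment flow, here is a minimal sketch of loading a model onto an FPGA through the OpenVINO Python API, using the heterogeneous plugin so that unsupported layers fall back to the CPU; the calls shown match the 2019-era releases, and the file names are placeholders.

    # Sketch: target an FPGA with CPU fallback via the OpenVINO
    # inference engine Python API (2019-era). model.xml/model.bin are
    # placeholder IR files produced by the Model Optimizer.
    from openvino.inference_engine import IECore, IENetwork

    ie = IECore()
    net = IENetwork(model="model.xml", weights="model.bin")

    # HETERO schedules supported layers on the FPGA, the rest on CPU.
    exec_net = ie.load_network(network=net, device_name="HETERO:FPGA,CPU")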

You Will Learn:

  • How to leverage the solutions provided by Intel to accelerate various workloads
  • How deep learning inference tasks can be deployed on FPGAs using the Intel Distribution of OpenVINO toolkit and the Intel FPGA Deep Learning Acceleration Suite
  • Ways to develop custom Accelerator Functional Units for the FPGA

Karl Qi

Karl Qi is an Application Engineer in the Customer Training group at Intel Programmable Solutions Group. He has been with the group for nine years, and his current focus is on high-level design and deep learning acceleration techniques for the FPGA.


Webinars Now On-Demand

Did you miss a live webinar? Not to worry. All of the webinars have been recorded and are available to watch at your convenience. Check the box(es) next to the webinar(s) you would like to view, enter your info to sign up if you have not already registered, and you will be emailed a link to view them.

Introduction to Reinforcement Learning Coach


Join us for a webinar introducing Reinforcement Learning Coach (RL Coach), a comprehensive framework that enables reinforcement learning (RL) agent development, training, and evaluation. Learn the basics of what reinforcement learning is, what exactly RL Coach is, and how you can get started using it.

Michael Zephyr is an AI Developer Evangelist within the Intel Architecture, Graphics and Software Group at Intel. He works on promoting various Intel technologies that pertain to machine learning and artificial intelligence and regularly speaks at universities and conferences to help spread knowledge of AI. Michael holds a bachelor's degree in Computer Science from Oregon State University and a master's degree in Computer Science from the Georgia Institute of Technology. He can often be found playing board games or video games and lounging with his wife and cat in his free time.

Introduction to the Intel® Distribution of OpenVINO™ Toolkit and WinML*


In this webinar you will learn how real-time inference on the PC for visual workloads such as object detection, recognition, and tracking are now easily developed with Intel® Distribution of the OpenVINO™ toolkit and Windows Machine Learning API.

Rudy Cazabon holds a bachelor’s degree in Space Science (minor in Mechanical Engineering) from the Florida Institute of Technology, with graduate studies in Aerospace and Astronautics from Georgia Tech and Management Science from Stanford. Rudy has run a technical consultancy in 3D graphics, VR, and computer vision; he is an active volunteer in STEM K-12 programs and participates in academic venues such as ACM SIGGRAPH.

Introduction to NLP Architect


This webinar focuses on introducing the audience to Natural Language Processing (NLP) Architect, a Python library from the Intel® AI Lab for exploring state-of-the-art deep learning topologies.

You will learn:

  • Intel’s AI Portfolio
  • What is Natural Language Processing
  • What is Deep Learning
  • Deep Learning Techniques with Natural Language Processing
  • How can NLP Architect be used
  • NLP Architect Library Overview

Abdulmecit Gungor received a bachelor's degree in Electronics Engineering and a minor in Mathematics from City University of Hong Kong with The S. H. Ho Foundation academic achievement award. He worked as a research engineer, then completed his master's degree at Purdue. His interests are NLP application development for real-life problems, text mining, and statistical machine learning.

Sulaimon Ibrahim is a member of Intel’s Technical Developer Evangelist team, focused on highlighting, training, and showcasing Intel products and tools to developers worldwide. He currently focuses on artificial intelligence, developing coursework for Intel’s developer ecosystem and delivering training for both industry and academic developers interested in using Intel’s optimized frameworks and libraries. Sulaimon has been in the tech industry for over seven years and holds a master’s degree in Computer Science with research in data mining.

Deep-dive and Use-cases with the Intel® Distribution of OpenVINO™ toolkit


A previous webinar introduced the inference engine to the developer community as the component to use to develop real-time applications. This webinar takes a deep dive into the capabilities of the inference engine and the API that enables creation and deployment of these applications. A selection of use cases will be reviewed in the context of the models/topologies used and the various hardware targets they employed.

You will learn:

  • A review of the OpenVINO toolkit
  • A deep dive into the inference engine API via a code walkthrough (a flavor of which is sketched after this list)
  • A survey of use-cases and applications developed for various hardware targets
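
To give a flavor of that walkthrough, here is a minimal synchronous-inference sketch against the 2019-era inference engine Python API; the model files, input image, and CPU target are placeholders.

    # Sketch: synchronous inference with the OpenVINO inference engine
    # Python API (2019-era). Paths and the device are placeholders, and
    # preprocessing is reduced to a resize and layout change.
    import cv2
    import numpy as np
    from openvino.inference_engine import IECore, IENetwork

    ie = IECore()
    net = IENetwork(model="model.xml", weights="model.bin")
    input_name = next(iter(net.inputs))
    n, c, h, w = net.inputs[input_name].shape
    exec_net = ie.load_network(network=net, device_name="CPU")

    image = cv2.imread("input.jpg")
    blob = cv2.resize(image, (w, h)).transpose((2, 0, 1))  # HWC -> CHW
    blob = blob.reshape((n, c, h, w)).astype(np.float32)

    results = exec_net.infer({input_name: blob})
    print({name: out.shape for name, out in results.items()})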

Rudy Cazabon, AI Developer Evangelist at Intel

Rudy holds a bachelor’s degree in Space Science (minor in Mechanical Engineering) from the Florida Institute of Technology, with graduate studies in Aerospace and Astronautics from Georgia Tech and Management Science from Stanford. He has run a technical consultancy in 3D graphics, VR, and computer vision; he is an active volunteer in STEM K-12 programs and participates in academic venues such as ACM SIGGRAPH.

Introduction to the 2nd Gen Intel® Xeon® Scalable Processor


We will introduce the new 2nd Gen Intel® Xeon® Scalable processor with Intel® Deep Learning Boost and its benefits for deep learning inference applications. We will demonstrate performance improvements with both the new embedded AI acceleration and software optimizations. We will also share examples of real-world deployments, including pointers for deploying deep learning on Intel Xeon processors.
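
One quick way to see the software-optimization half of that story on your own machine is to check whether your TensorFlow 1.x build was compiled with the Intel MKL-DNN primitives; the helper used below is internal to TensorFlow and has moved between versions, so treat it as a diagnostic hint rather than a stable API.

    # Sketch: check for an Intel-optimized (MKL-DNN) TensorFlow 1.x build.
    import tensorflow as tf
    from tensorflow.python import pywrap_tensorflow  # internal; location varies

    print("TensorFlow:", tf.__version__)
    print("MKL enabled:", pywrap_tensorflow.IsMklEnabled())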

Banu Nagasundaram is a product marketing manager with the Artificial Intelligence Products Group at Intel, where she drives overall Intel AI products positioning and AI benchmarking strategy and acts as the technical marketer for AI products including Intel Xeon and Intel Nervana Neural Network Processors. Previously, Banu was a product marketing engineer with the Data Center Group at Intel, where she supported performance marketing for Xeon Phi, Intel FPGA, and Xeon for AI; was a design engineer on the exascale supercomputing research team with Intel Federal; and worked at Qualcomm doing design verification of mobile processors. Banu holds an MS in electrical and computer engineering from the University of Florida and is working toward an MBA at UC Berkeley’s Haas School of Business.

Indu Kalyanaraman is an AI Performance Marketing Manager for Data Center products at Intel. She is responsible for driving performance analysis and product positioning for machine learning and deep learning workloads on Intel Xeon and other data center products. In her previous roles at Intel, Indu worked on workstation and storage/Ethernet performance marketing; she also has broad engineering experience, including managing a processor validation team. Indu holds an MS in Electrical and Computer Engineering from The Ohio State University.

Accelerating Deep Learning Workloads in the Cloud and Data Centers


Get started using Intel-optimized AI environments for your workloads in the cloud or in your data center.

Collaboration between Intel, major cloud service providers, and hardware OEM partners has resulted in pre-configured Intel-optimized deep learning environments. This webinar will get you started using them for your workloads, whether in the cloud or in your own data center.

You Will Learn:

  • Intel’s AI software and hardware strategy
  • How to launch pre-configured virtual machines with Intel optimized Deep Learning Frameworks in the cloud (AWS, Azure and GCP)
  • How to run TensorFlow* CNN benchmarks with pre-configured Intel environments (sketched after this list)
  • Technical details of the Intel optimized solutions to accelerate deployment to the Data Center via OEM partners (Dell, Lenovo, HPE, Inspur)
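
As a rough sketch of the benchmark step, the commonly used harness is the tf_cnn_benchmarks script from the TensorFlow benchmarks repository; the flags follow its documented options, but the thread counts and batch size are placeholders to tune for your own CPU.

    # Sketch: run tf_cnn_benchmarks on CPU with MKL optimizations.
    # Assumes the TensorFlow benchmarks repo is cloned locally; tune
    # the thread counts and batch size for your own processor.
    import subprocess

    subprocess.run([
        "python", "tf_cnn_benchmarks.py",
        "--device=cpu", "--mkl=True", "--data_format=NHWC",
        "--model=resnet50", "--batch_size=32",
        "--num_inter_threads=2", "--num_intra_threads=28",
    ], cwd="benchmarks/scripts/tf_cnn_benchmarks", check=True)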

Ravi Panchumarthy

Ravi Panchumarthy, PhD, is a Machine Learning Engineer at Intel's Artificial Intelligence Products Group. He collaborates with Intel's customers and partners to build and optimize AI solutions. He also works with cloud service providers adopting Intel's AI optimizations for cloud instances and services. Ravi has a PhD in Computer Science & Engineering from University of South Florida. His dissertation focused on developing novel non-Boolean computing techniques for computer vision applications using nanomagnetic field-based computing. He holds two patents and several peer-reviewed publications in journals and conferences.

Webinar Series Moderator

Meghana Rao

Artificial Intelligence Developer Evangelist at Intel

Meghana Rao is an Artificial Intelligence Developer Evangelist at Intel. In her role, she works closely with universities and developers in evangelizing Intel’s AI portfolio and solutions, helping them understand machine learning and deep learning concepts and build models and POCs using Intel-optimized frameworks and libraries like Caffe*, TensorFlow*, and the Intel® Distribution of Python*. She has a bachelor’s degree in Computer Science and Engineering and a master’s degree in Engineering and Technology Management, with past experience in embedded software development, Windows* app development, and UX design methodologies.

Enter your info to sign up

(*) All fields are required


By submitting this form, you are confirming you are an adult 18 years or older and you agree to share your personal information with Intel to use for this business request. You also agree to subscribe to stay connected to the latest Intel technologies and industry trends by email and telephone. You may unsubscribe at any time. Intel’s web sites and communications are subject to our Privacy Notice and Terms of Use.