AI Developer Webinar Series

We’ll cover the latest in frameworks, optimization tools, and new product launches throughout the year. This is your chance to expand your AI developer toolkit in just one hour of your day. Bring your questions for our Intel experts to answer live during each webinar. Sign up for one or more sessions below and start sharpening your AI skills.

Webinars Now On-Demand

Did you miss a live webinar? Not to worry. All of the webinars have been recorded and are available to watch at your convenience. Check the box(es) next to the webinar(s) you would like to view, enter your info to sign up if you have not already registered, and you will be emailed a link to view them.

Select your webinar(s):

  Sign me up for all the webinars

Introduction to Reinforcement Learning Coach

Reinforcement Learning Coach (RL Coach) is a comprehensive framework that enables reinforcement learning (RL) agent development, training, and evaluation.

Join us for a webinar introducing Reinforcement Learning Coach. Learn the basics of reinforcement learning, what exactly RL Coach is, and how you can get started using it.

Michael Zephyr is an AI Developer Evangelist within the Intel Architecture, Graphics and Software Group at Intel. He works on promoting various Intel technologies that pertain to machine learning and artificial intelligence and regularly speaks at universities and conferences to help spread knowledge of AI. Michael holds a bachelor's degree in Computer Science from Oregon State University and a master's degree in Computer Science from the Georgia Institute of Technology. He can often be found playing board games or video games and lounging with his wife and cat in his free time.

Introduction to the Intel® Distribution of OpenVINO™ Toolkit and WinML* (AI on PC)

In this webinar you will learn how real-time inference on the PC for visual workloads such as object detection, recognition, and tracking are now easily developed with Intel® Distribution of the OpenVINO™ toolkit and Windows Machine Learning API.

Rudy Cazabon holds a bachelor’s degree in Space Science (with a minor in Mechanical Engineering) from the Florida Institute of Technology, with graduate studies in Aerospace and Astronautics from Georgia Tech and Management Science from Stanford. Rudy has run a technical consultancy in 3D graphics, VR, and computer vision; he is an active volunteer in STEM K-12 programs and participates in academic venues such as ACM SIGGRAPH.

Introduction to NLP Architect

This webinar focuses on introducing the audience to Natural Language Processing (NLP) Architect, a Python library from the Intel® AI Lab for exploring the state-of-the-art deep learning topologies.

You will learn:

  • Intel’s AI portfolio
  • What natural language processing is
  • What deep learning is
  • Deep learning techniques for natural language processing
  • How NLP Architect can be used
  • An overview of the NLP Architect library

Abdulmecit Gungor received a bachelor’s degree in electronic engineering with a minor in mathematics from City University of Hong Kong, where he earned the S. H. Ho Foundation academic achievement award. He worked as a research engineer before completing his master’s degree at Purdue University. His interests include NLP application development for real-life problems, text mining, and statistical machine learning.

Sulaimon Ibrahim is a member of Intel’s Technical Developer Evangelist team, focused on highlighting, training on, and showcasing Intel products and tools to developers worldwide. He currently focuses on artificial intelligence, developing coursework for Intel’s developer ecosystem and delivering training for both industry and academic developers interested in using Intel’s optimized frameworks and libraries. Sulaimon has been in the tech industry for over seven years and holds a master’s degree in Computer Science with research in data mining.

Deep Dive and Use Cases with the Intel® Distribution of OpenVINO™ Toolkit (AI on PC)

A previous webinar introduced the inference engine to the developer community as the component to use for developing real-time applications. This webinar takes a deep dive into the capabilities of the inference engine and the API that enables creation and deployment of such applications. We will also review a selection of use cases in the context of the models and topologies used and the various hardware targets they employed.
You will learn:

  • A review of the OpenVINO toolkit
  • A deep dive into the inference engine API via a code walkthrough
  • A survey of use-cases and applications developed for various hardware targets

Rudy Cazabon, AI Developer Evangelist at Intel

Rudy holds a bachelor’s degree in Space Science (with a minor in Mechanical Engineering) from the Florida Institute of Technology, with graduate studies in Aerospace and Astronautics from Georgia Tech and Management Science from Stanford. He has run a technical consultancy in 3D graphics, VR, and computer vision; he is an active volunteer in STEM K-12 programs and participates in academic venues such as ACM SIGGRAPH.

Introduction to the 2nd Gen Intel® Xeon® Scalable Processor

We will introduce the new 2nd Gen Intel® Xeon® Scalable processor with Intel® Deep Learning Boost and its benefits for deep learning inference applications. We will demonstrate performance improvements from both the new embedded AI acceleration and software optimizations. We will also share examples of real-world deployments, including pointers for deploying deep learning on Xeon.

Banu Nagasundaram is a product marketing manager with the Artificial Intelligence Products Group at Intel, where she drives overall Intel AI products positioning and AI benchmarking strategy and acts as the technical marketer for AI products including Intel Xeon and Intel Nervana Neural Network Processors. Previously, Banu was a product marketing engineer with the Data Center Group at Intel, where she supported performance marketing for Xeon Phi, Intel FPGA, and Xeon for AI; was a design engineer on the exascale supercomputing research team with Intel Federal; and worked at Qualcomm doing design verification of mobile processors. Banu holds an MS in electrical and computer engineering from the University of Florida and is working toward an MBA at UC Berkeley’s Haas School of Business.

Indu Kalyanaraman is an AI Performance Marketing Manager for Data Center products at Intel. She is responsible for driving performance analysis and product positioning for machine learning and deep learning workloads on Xeon and other data center products. In her previous roles at Intel, Indu worked on workstation and storage/Ethernet performance marketing; she also has broad engineering experience, including managing a processor validation team. Indu holds an MS in Electrical and Computer Engineering from The Ohio State University.

Accelerating Deep Learning Workloads in the Cloud and Data Centers

Get started using Intel optimized AI environments for your workloads in the cloud or in your data center.
Collaboration between Intel, major cloud service providers, and hardware OEM partners has resulted in pre-configured, Intel-optimized deep learning environments. This webinar will get you started using these environments for your workloads, whether in the cloud or in your data center.

You Will Learn:

  • Intel’s AI software and hardware strategy
  • How to launch pre-configured virtual machines with Intel optimized Deep Learning Frameworks in the cloud (AWS, Azure and GCP)
  • How to run TensorFlow* CNN benchmarks with pre-configured Intel environments
  • Technical details of the Intel optimized solutions to accelerate deployment to the Data Center via OEM partners (Dell, Lenovo, HPE, Inspur)

Ravi Panchumarthy

Ravi Panchumarthy, PhD, is a Machine Learning Engineer at Intel's Artificial Intelligence Products Group. He collaborates with Intel's customers and partners to build and optimize AI solutions. He also works with cloud service providers adopting Intel's AI optimizations for cloud instances and services. Ravi has a PhD in Computer Science & Engineering from University of South Florida. His dissertation focused on developing novel non-Boolean computing techniques for computer vision applications using nanomagnetic field-based computing. He holds two patents and several peer-reviewed publications in journals and conferences.

Accelerating AI inference using Intel® Deep Learning Boost and OpenVINO™ Toolkit

In this webinar, discover how to accelerate AI and deep learning inference on 2nd Generation Intel® Xeon® Scalable processors using Intel® DL Boost and the Intel® Distribution of OpenVINO™ toolkit.

You Will Learn:

  • About the Intel® DL Boost technology in 2nd Generation Intel® Xeon® Scalable processors
  • How to use the capabilities of the Calibration tool
  • How to deploy an Int8 quantized model on 2nd Generation Intel® Xeon® Scalable processors using the Inference Engine
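To give a taste of what Int8 quantization involves before the session (the Calibration tool automates this at full model scale), here is a minimal, illustrative sketch of symmetric int8 quantization in plain Python; the function names are ours, not part of the OpenVINO™ toolkit:

```python
# Illustrative sketch of symmetric int8 quantization, the numeric idea behind
# Intel DL Boost int8 inference. This is NOT the Calibration tool -- just a
# minimal example of mapping float weights to 8-bit integers and back.

def quantize_int8(values):
    """Map floats to int8 using a symmetric scale from the max magnitude."""
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.02, -1.27, 0.63, 0.5, -0.31]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Round-trip error is bounded by half a quantization step (scale / 2).
max_err = max(abs(w - r) for w, r in zip(weights, restored))
assert max_err <= scale / 2 + 1e-9
```

The trade-off the Calibration tool manages for you is exactly this one: a coarser scale means faster 8-bit arithmetic at the cost of a bounded loss of precision.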

Meghana Rao

Meghana Rao is an Artificial Intelligence Developer Evangelist at Intel. In her role, she works closely with universities and developers in evangelizing Intel’s AI portfolio and solutions, helping them understand machine learning and deep learning concepts and building models and POCs using Intel-optimized frameworks and libraries like Caffe*, TensorFlow*, and the Intel® Distribution of Python*. She has a Bachelor’s degree in Computer Science and Engineering and a Master’s degree in Engineering and Technology Management, with past experience in embedded software development, Windows* app development, and UX design methodologies.

Going Deep with Reinforcement Learning Coach

Learn how to get up to speed with Reinforcement Learning Coach. Dive into ways to extend Reinforcement Learning Coach to add your own Agent and Environment.

Walk through how to use OpenAI Gym and RL Coach on the CLI and through Python. Examine how to track your progress with Coach Dashboard. Delve into some of the existing agents and environments while exploring the extensibility of RL Coach for your own project. Lastly, discover how to utilize Amazon SageMaker and Intel CPUs to run your reinforcement learning jobs.

You Will Learn:

  • Reinforcement learning basics
  • Examples of OpenAI Gym and RL Coach
  • The Coach Dashboard
  • How to integrate your own RL agent and environment
  • Ways to integrate Amazon SageMaker
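As a primer, the value-learning loop at the heart of many RL agents can be sketched with tabular Q-learning on a toy corridor environment. This is a from-scratch illustration of the agent/environment loop that RL Coach manages for you, not RL Coach’s API:

```python
# Minimal tabular Q-learning on a toy 5-state corridor: the agent starts at
# state 0 and earns reward 1 for reaching state 4. A from-scratch sketch of
# the agent/environment interaction loop -- not the RL Coach API.
import random

N_STATES = 5           # states 0..4; reaching state 4 ends the episode
ACTIONS = [-1, +1]     # move left or right
ALPHA, GAMMA = 0.5, 0.9

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Deterministic environment: clamp to the corridor, reward at the end."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for episode in range(200):
    s, done = 0, False
    while not done:
        a = random.choice(ACTIONS)   # random exploration; Q-learning is off-policy
        nxt, r, done = step(s, a)
        best_next = 0.0 if done else max(q[(nxt, a2)] for a2 in ACTIONS)
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
        s = nxt

# The learned greedy policy moves right from every non-terminal state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
assert all(a == +1 for a in policy.values())
```

RL Coach wraps this same loop behind configurable agents, environments, and schedules, which is what the webinar covers extending.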

Michael Zephyr

Michael Zephyr is an AI Developer Evangelist within the Intel Architecture, Graphics and Software Group at Intel. He works on promoting various Intel technologies that pertain to machine learning and artificial intelligence and regularly speaks at universities and conferences to help spread knowledge of AI. Michael holds a bachelor's degree in Computer Science from Oregon State University and a master's degree in Computer Science from the Georgia Institute of Technology. He can often be found playing board games or video games and lounging with his wife and cat in his free time.

The PlaidML Tensor Compiler

A naive implementation of a convolution can be written in 18 lines of Python, while kernel libraries typically devote tens of thousands of lines of code to implementing optimized variants of convolution, often in architecture-specific ways or even assembly in the case of cuDNN*. In cases like this where specialized code is needed to achieve performance goals, massive engineering resources are required, maintenance and development costs are high, and lock-in is common. Compilers were originally created to automate construction of general-purpose machine code. Nowadays, machine learning increasingly relies on compilation, reducing engineering constraints and enabling automated special-case performance upgrades for workloads too rare or novel to see human optimization. PlaidML is a tensor compiler that can be used as a component in existing ML stacks to boost performance and to enable performance portability.
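For reference, the kind of naive Python convolution the paragraph mentions looks like this (single channel, "valid" padding, stride 1). It is exactly the loop nest a tensor compiler optimizes; this version favors clarity over speed:

```python
# A naive 2D convolution in plain Python: the clear-but-slow loop nest that
# kernel libraries and tensor compilers like PlaidML exist to accelerate.

def conv2d(image, kernel):
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = [[0.0] * (iw - kw + 1) for _ in range(ih - kh + 1)]
    for y in range(ih - kh + 1):
        for x in range(iw - kw + 1):
            acc = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    acc += image[y + ky][x + kx] * kernel[ky][kx]
            out[y][x] = acc
    return out

# A 3x3 averaging kernel applied to a 4x4 image yields a 2x2 output.
image = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
kernel = [[1 / 9] * 3 for _ in range(3)]
result = conv2d(image, kernel)
```

An optimized kernel computes the same output while tiling, vectorizing, and parallelizing this loop nest per architecture, which is where the tens of thousands of lines go.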

You Will Learn:

  • How to use PlaidML in an existing TensorFlow* program through demonstration
  • About the PlaidML internal architecture and its role in the broader ML ecosystem
  • A preview of the technical details we’ll discuss further in our upcoming webinar, “How PlaidML Compiles Machine Learning Kernels”

Tim Zerrell

Tim Zerrell is a Deep Learning Software Engineer in Intel’s Artificial Intelligence Products Group. He works on PlaidML, focusing on representing the mathematics of machine learning effectively for performing hardware and software optimizations. He received a Master’s degree in Mathematics from the University of Washington and a Bachelor’s degree in Mathematics from Pomona College. In his free time he enjoys hiking his local Pacific Northwest wilderness.

Denise Kutnick

Denise Kutnick is a Deep Learning Software Engineer within Intel’s Artificial Intelligence Products Group. In her role, Denise works on the development and community engagement of PlaidML, an extensible, open-source deep learning tensor compiler. Denise holds a bachelor’s degree in Computer Science from Florida Atlantic University and a master’s degree in Computer Science from Georgia Institute of Technology.

Getting Started with NLP Architect

Walk through how to install NLP Architect and supported deep learning frameworks. Take a look at the NLP Architect Model Zoo and see how to deploy any of its existing pre-trained models using a REST API. Run through an end-to-end workload using aspect-based sentiment analysis, from loading and cleaning data to training and inference. Lastly, we’ll explore NLP Architect ready solutions for your NLP/NLU workloads.

You Will Learn:

  • How to use the Installation Guide for NLP Architect
  • How to deploy pre-trained models using a REST API
  • About an end-to-end example using aspect-based sentiment analysis
  • More about NLP Architect solutions

Abdulmecit Gungor

Abdulmecit Gungor received a bachelor’s degree in electronic engineering with a minor in mathematics from City University of Hong Kong, where he earned the S. H. Ho Foundation academic achievement award. He worked as a research engineer before completing his master’s degree at Purdue University. His interests include NLP application development for real-life problems, text mining, and statistical machine learning.

Peter Izsak

Peter is a Deep Learning Data Scientist and a member of Intel AI Lab, a research team within Intel’s AI Product Group. He holds a BSc and MSc in Computer Science and Information Retrieval from the Technion – Israel Institute of Technology. Peter is leading the development of NLP Architect – an open source library of NLP/NLU models developed within Intel AI Lab, working on novel NLP/NLU research using deep learning approaches and optimizing neural networks.

What are FPGAs and How Do I Use Them?

This introductory course is a high-level overview of what a field programmable gate array (FPGA) is, why FPGAs are important as inference accelerators, and how easily they can be adopted into compute clusters.

Learn how heterogeneous parallel computing is used to solve complex problems. Discover how FPGAs are used for efficient compute offload to overcome the limitations of scaling systems. Walk through the different programming models that exist for FPGAs. See how the Acceleration Stack for Intel® Xeon® CPU with FPGAs can be deployed transparently into data centers and cloud systems to take advantage of FPGA-based acceleration.

You Will Learn:

  • About FPGAs and what programming models exist for FPGAs
  • How FPGAs are used for efficient compute offload
  • How the Acceleration Stack for Intel® Xeon® CPU with FPGAs is used in data center and cloud systems

Bill Jenkins

Bill Jenkins serves as a principal application engineer inside the Programmable Solutions Group at Intel Corporation. He focuses on the acceleration of a variety of workloads in data center, cloud and edge applications using field programmable gate arrays (FPGAs). He has also been one of the key driving forces behind adoption of higher level programming models for FPGAs to enable software developers and scientists to target the FPGA.

nGraph: Unlocking Next-generation Performance with Deep Learning Compilers

The rapid growth of deep learning in large-scale real-world applications has led to a rapid increase in demand for high-performance training and inference solutions. This demand is reflected in increased investment in deep learning performance by hardware manufacturers, and includes a proliferation of new application-specific accelerators.

But performance isn’t driven by hardware alone. In the software realm, a new class of deep learning compilers has emerged, which brings to bear both classic and novel compiler techniques in order to maximize the performance of deep learning systems. Recently developed deep learning compilers include NNVM/TVM from the University of Washington, Glow from Facebook, XLA from Google, and nGraph from Intel. These deep learning compilers unlock a wealth of optimizations that encompass the whole data-flow graph. This approach achieves substantial speedups over the approach favored by existing frameworks, in which an interpreter orchestrates the invocation of per-op compute kernels that must be optimized specifically for the framework and hardware target. This webinar will offer a comprehensive overview of Intel’s nGraph deep learning compiler.
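To give a flavor of whole-graph optimization, here is a toy constant-folding pass over a tiny dataflow graph in plain Python. Real compilers such as nGraph implement far richer passes over far larger graphs; none of these names come from nGraph’s API:

```python
# A toy flavor of what deep learning compilers do: optimize the whole
# dataflow graph before execution rather than dispatching ops one by one.
# Here, a constant-folding pass collapses subgraphs whose inputs are all
# constants. Vastly simplified relative to any real compiler.

class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, tuple(inputs), value

def fold_constants(node):
    """Recursively replace all-constant subtrees with a single constant."""
    if node.op in ("const", "input"):
        return node
    folded = [fold_constants(i) for i in node.inputs]
    if all(n.op == "const" for n in folded):
        fn = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}[node.op]
        return Node("const", value=fn(folded[0].value, folded[1].value))
    return Node(node.op, folded)

# (x * (2 + 3)) becomes (x * 5): one multiply at runtime instead of add + mul.
x = Node("input")
graph = Node("mul", [x, Node("add", [Node("const", value=2), Node("const", value=3)])])
optimized = fold_constants(graph)
```

An interpreter invoking per-op kernels would execute both the add and the multiply on every run; seeing the whole graph lets the compiler remove the add entirely, and the same principle powers fusion, layout selection, and the other optimizations the webinar surveys.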

Adam Procter

Adam Procter is a deep learning software engineer in the Artificial Intelligence Products Group at Intel, where he works on the core design of the Intel nGraph deep learning compiler. He holds a PhD in computer science from the University of Missouri, where his research focused on programming language semantics, high-assurance computing, and techniques for compiling functional programming languages to reconfigurable hardware.

Using FPGAs for Datacenter Acceleration

Learn how to deploy deep learning inference tasks on FPGAs using the Intel® Distribution of OpenVINO™ toolkit and more.

Field-programmable gate arrays, or FPGAs, are user-customizable integrated circuits that can be deployed in data centers to accelerate algorithms. In this webinar, explore Intel solutions to accelerate various workloads. Discover how deep learning inference tasks can be deployed on FPGAs using the Intel® Distribution of OpenVINO™ toolkit and the Intel® FPGA Deep Learning Acceleration Suite. See how to use the Acceleration Stack for Intel® Xeon® CPU with FPGAs to develop and deploy workload optimizations on Intel® Programmable Acceleration Cards. Examine ways to develop custom Accelerator Functional Units for the FPGA.

You Will Learn:

  • How to leverage the solutions provided by Intel to accelerate various workloads
  • How deep learning inference tasks can be deployed on FPGAs using the Intel Distribution of OpenVINO toolkit and the Intel FPGA Deep Learning Acceleration Suite
  • Ways to develop custom Accelerator Functional Units for the FPGA

Steven Elzinga

Steven Elzinga is an Application Engineer in the Customer Training group at Intel Programmable Solutions Group focusing on deep learning acceleration techniques for the FPGA. His FPGA experience also includes embedded systems and real time video processing as an IP and system developer. Steven holds a bachelor’s degree in electrical engineering from the University of Utah and a master’s degree in electrical engineering from the University of Colorado.

AI Prototyping with the Intel® Neural Compute Stick 2

Learn how to prototype AI inference for edge devices using the Intel® Neural Compute Stick 2 with the Intel® Distribution of OpenVINO™ toolkit. The Intel® Neural Compute Stick 2 is a low-cost device with great deep learning performance in a low-power form factor. The knowledge you gain of the Intel® Distribution of OpenVINO™ toolkit extends to Intel’s other AI inference products on CPUs, GPUs, and FPGAs.

In this webinar you will learn:

  • How to get started with the Intel® Neural Compute Stick 2 and the Intel® Distribution of OpenVINO™ toolkit
  • The value and flexibility of AI prototyping with the Intel® Neural Compute Stick 2
  • The new capabilities offered by the latest release of the Intel® Distribution of OpenVINO™ toolkit
  • How the Intel® NCS 2 can easily be used with many ARM*-based boards, such as the Raspberry Pi, as well as Intel architecture-based UP boards from AAEON
  • How the Intel® Distribution of OpenVINO™ toolkit optimizes trained TensorFlow, Caffe, MXNet, PyTorch, and PaddlePaddle models for Intel’s vision architectures: the Intel® Movidius™ Myriad™ X vision processing unit (VPU) inside the Intel® NCS 2, as well as Intel CPUs, Intel integrated GPUs, and Intel FPGAs
  • Where to find extended resources on training for AI inference using the Intel® Distribution of OpenVINO™ toolkit

Jay Burris

Jay Burris has been working in the embedded, automotive, and Internet of Things industries for 20 years. He is passionate about advocating for AI developers using Intel technologies such as the Intel® Neural Compute Stick 2, the Intel® Distribution of OpenVINO™ toolkit, and Vision Accelerator Design products and kits for vision inferencing in edge IoT products.

Develop Windows*-based AI Applications Using Windows Machine Learning (AI on PC)

In this webinar we introduce the basics of Windows Machine Learning (WinML) concepts, show you how to use existing trained models (such as ONNX models) in your Windows-based applications, demonstrate how to target different devices (CPU, GPU, etc.), and walk through the process of incorporating a trained model into a Windows-based UWP application.

We will also discuss how to use the WinML APIs in loading the models, setting up the sessions, binding a model and evaluating the inputs and outputs.

Praveen Kundurthy

Praveen Kundurthy is a Developer Evangelist at Intel Corporation. Praveen has more than 15 years of development experience with C++, C#, and Python, and his main interests are artificial intelligence, Windows* programming, and game development. He has been with Intel for more than nine years. He works closely with the developer community, trains developers on Intel tools and technologies, helps them understand how technologies can be applied by developing proofs of concept, and writes blog posts and technical articles for the Intel Developer Zone website. He has a Master of Science in Computer Engineering, with experience in multiple technologies such as Alexa* for PC, game programming and game optimizations, Windows* programming, Android programming, and storage technologies.

Maximize the Use of CPU Resources for XGBoost* Training

Learn how to speed up your boosting algorithm workloads on CPU with Intel® Data Analytics Acceleration Library (Intel® DAAL), a highly optimized library for Intel® CPUs.

Gradient boosting has many real-world applications as a general-purpose, supervised learning technique for regression, classification, and page-ranking problems. It’s a common choice for large problem sizes, yet training implementations of this method are quite complex because of multiple kernel dependencies that impact execution time, irregular memory access, and other issues.

Join us for a webinar to learn about the optimizations that have been made to XGBoost* and how to take advantage of them in your workloads. We’ll also show example training workloads that compare the performance of the latest XGBoost* implementation in an end-to-end pipeline.
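To see the technique in miniature before the webinar, here is a plain-Python sketch of gradient boosting with one-level regression stumps. XGBoost’s actual implementation adds regularization, histogram-based splits, and the kernel-level optimizations that Intel DAAL accelerates; all names below are ours:

```python
# Gradient boosting in miniature: repeatedly fit a one-level decision stump
# to the current residuals, shrink it by a learning rate, and add it to the
# ensemble. An illustration of the technique, not XGBoost's algorithm.

def fit_stump(xs, residuals):
    """Find the split on x that best reduces squared error, one level deep."""
    best = None
    for threshold in xs:
        left = [r for x, r in zip(xs, residuals) if x <= threshold]
        right = [r for x, r in zip(xs, residuals) if x > threshold]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lmean) ** 2 for r in left) + sum((r - rmean) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, threshold, lmean, rmean)
    _, threshold, lmean, rmean = best
    return lambda x: lmean if x <= threshold else rmean

def boost(xs, ys, n_rounds=50, lr=0.3):
    """Each round fits a stump to the residuals and shrinks it by lr."""
    stumps = []
    predict = lambda x: sum(lr * s(x) for s in stumps)
    for _ in range(n_rounds):
        residuals = [y - predict(x) for x, y in zip(xs, ys)]
        stumps.append(fit_stump(xs, residuals))
    return predict

xs = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [0, 0, 0, 0, 5, 5, 5, 5]   # a step function
model = boost(xs, ys)
```

Even this toy version shows why training is expensive: every round scans all candidate splits over all samples, which is exactly the kernel that optimized implementations restructure for cache behavior and parallelism.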

Abdulmecit Gungor

Abdulmecit Gungor received a bachelor’s degree in electronic engineering with a minor in mathematics from City University of Hong Kong, where he earned the S. H. Ho Foundation academic achievement award. He worked as a research engineer before completing his master’s degree at Purdue University. His interests include NLP application development for real-life problems, text mining, and statistical machine learning.


Webinar Series Moderator

Meghana Rao

Artificial Intelligence Developer Evangelist at Intel

Bio: Meghana Rao is an Artificial Intelligence Developer Evangelist at Intel. In her role, she works closely with universities and developers in evangelizing Intel’s AI portfolio and solutions, helping them understand machine learning and deep learning concepts and building models and POCs using Intel-optimized frameworks and libraries like Caffe*, TensorFlow*, and the Intel® Distribution of Python*. She has a Bachelor’s degree in Computer Science and Engineering and a Master’s degree in Engineering and Technology Management, with past experience in embedded software development, Windows* app development, and UX design methodologies.

Enter your info to sign up

(*) All fields are required

Please enter a first name.
First name must be at least 2 characters long.
First name must be less than 250 characters long.
Please enter a last name.
Last name must be at least 2 characters long.
Last name must be less than 250 characters long.
Please enter an email address.
Please enter a valid email address.
Email Address must be less than 250 characters.
Please select a country.
Please select

By submitting this form, you are confirming you are an adult 18 years or older and you agree to share your personal information with Intel to use for this business request. You also agree to subscribe to stay connected to the latest Intel technologies and industry trends by email and telephone. You may unsubscribe at any time. Intel’s web sites and communications are subject to our Privacy Notice and Terms of Use.

You will receive an email confirmation to attend your selected webinar(s).