Code Ahead of the Curve with FREE Technical Trainings

Sign up today to attend LIVE SESSIONS covering the latest overviews, insights, and how-to’s on topics that drive our cross-architecture, heterogeneous-compute world—oneAPI, AI, HPC, rendering & ray tracing, video & media, IoT, and more.


Intro to oneTBB: A Modern C++ Library for Task-Based Parallelism on CPUs

Wednesday, June 7, 2023 | 9:00 AM PDT

Get an overview of oneAPI Threading Building Blocks, including how it simplifies the work of adding parallelism to complex applications across accelerated CPU architectures.

 

Since 2007, Threading Building Blocks (TBB) has been a widely used C++ template library for parallel programming on CPUs. With the introduction of oneAPI, it was modernized and improved for accelerated architectures and renamed to oneAPI Threading Building Blocks (oneTBB). It is a foundational layer for oneAPI languages and libraries.

This webinar introduces oneTBB and provides guidance on how to move existing code bases from TBB to oneTBB. You will learn:

  • The main features in oneTBB and how this performance library fits into the oneAPI ecosystem
  • oneTBB’s improvements and modernizations compared to TBB
  • Why it’s a good idea to migrate to oneTBB, including how to do it

Sign up today.

Skill level: All

Featured software

Download oneTBB standalone or as part of the Intel® oneAPI Base Toolkit


Code samples

Download a variety of samples on GitHub for oneTBB, including:


  • oneTBB Flow Graph shows how to split a computational kernel between CPU and GPU using an asynchronous node and a functional node.
  • TBB Resumable Tasks shows how to split a computational kernel between CPU and GPU using resumable tasks and parallel_for.

 
Pavel Kumbrasev
Middleware Engineer, Intel


Streamline AI Solutions for Data Generation & Large Language Models

Wednesday, June 14, 2023 | 9:00 AM PDT

Learn about the final set of open source AI reference kits, purpose-built to help you overcome the challenges of AI acceleration along the development pipeline.

 

Incorporating AI into an organization’s workloads, or scaling up existing AI infrastructure, is skill-heavy and computationally intensive: it requires robust models trained on massive datasets, plus powerful hardware to run them.

Not every organization has the necessary resources to accomplish this.

This session focuses on a solution: a collection of open source AI reference kits from Accenture and Intel, designed to make AI more accessible to organizations and optimized to improve training and inference times.

Specifically, the hour will be dedicated to the kits that target data generation and large language models: text data generation, image data generation, and voice data generation.

Key takeaways:

  • An introduction to these reference solutions, including how they address business-specific problems and speed up end-to-end AI pipelines with out-of-box optimizations
  • An overview of the kits designed for data generation and large language models
  • A demo of one or more of the kits in action

Sign up today.

Skill level: Intermediate

Featured software

Each AI reference kit is built using the Intel® AI Analytics Toolkit, a set of familiar Python tools and frameworks that accelerate end-to-end data science and analytics pipelines.


Get the reference code

Download this session’s showcased reference kits from GitHub:


  • Text Data Generation for generating synthetic text similar to the provided source dataset using a large language model (LLM)
  • Image Data Generation for generating synthetic images using generative adversarial networks (GANs)
  • Voice Data Generation for generating speech from input text using transfer learning with vocoder models

 
Pramod Pai
AI Software Solutions Engineer, Intel


Why oneMKL? Speed Up Math Computation on Latest Hardware

Wednesday, June 21, 2023 | 9:00 AM PDT

If complex math forms the underpinnings of your applications and solutions, sign up for this session focused on the power and performance delivered by oneMKL.

 

With 20 years of maturity under its belt, Intel’s Math Kernel Library remains the fastest and most widely used math library for Intel-based systems, a distinction it maintains through continual optimizations that deliver best-in-class performance.

This session focuses on its most recent iteration: Intel® oneAPI Math Kernel Library (oneMKL), optimized for implementing fast math-processing routines targeting heterogeneous, multiarchitecture compute.

It will cover a lot, including:

  • How to use oneMKL to take best advantage of the latest built-in hardware acceleration engines, such as Intel® AVX-512 and Intel® AMX, along with the bfloat16 data type commonly used for machine learning
  • An illustration, with syntax specifics, of how function domains (BLAS, LAPACK, FFT, RNG, PARDISO) take advantage of 4th Gen Intel® Xeon® Scalable processors and the Intel® Max Series product family
  • How oneMKL supports the latest OpenMP standard and its expansion into SYCL, the open, standards-based C++ framework for cross-architecture compute
  • How to map CUDA math library calls (e.g., cuBLAS, cuFFT, and cuRAND) to oneMKL, including a demo of how to do it

Skill level: Intermediate & Expert

Featured software

Download the Intel® oneAPI Math Kernel Library standalone or as part of the Intel® oneAPI Base Toolkit


Code Samples (GitHub):



 
Robert Mueller-Albrech
Product Marketing Engineer, Intel


PyTorch 2.0: A Technical Deep Dive into What’s New and Its DL Compiler

Wednesday, June 28, 2023 | 9:00 AM PDT

Get an introduction to the new features of PyTorch 2.0, including a deep dive into the framework’s deep-learning compiler technologies.

 

Initially released by Facebook (now Meta) in the fall of 2016, PyTorch has become one of the most popular deep learning frameworks for compute-intensive training and inference.

This session focuses on PyTorch 2.0. Released on March 15, 2023, it offers the same eager-mode development and user experience while fundamentally changing and supercharging how PyTorch operates at the compiler level.

Sign up to hear engineers from Meta and Intel discuss:

  • What’s new in PyTorch 2.0
  • Its deep-learning compiler stack
  • TorchInductor, the DL compiler backend that supports training and multiple backend targets
  • Intel-contributed features to the new release, such as technologies from Intel® Extension for PyTorch and INT8 inference optimizations provided by oneDNN

They’ll also provide a live demo.

Sign up today.

Skill level: Intermediate

Featured software



 
Bin Bao
Software Engineer, Meta

Jiong Gong
Principal Engineer, Intel

Eikan Wang
Staff Engineer, Intel


Register to save your seat


By submitting this form, you are confirming you are age 18 years or older and you agree to share your personal data with Intel for this business request. Intel may contact you for marketing-related communications. To learn about Intel’s practices, including how to manage your preferences and settings, visit Intel’s Privacy and Cookies notices.