Code Ahead of the Curve with FREE Technical Trainings

Sign up to attend LIVE SESSIONS focused on today’s relevant technology topics—AI, machine learning, HPC and cloud computing, computer graphics, and more.

Watch past sessions on demand.


Elevate Your AI Expertise with Intel® Gaudi® Accelerators

Wednesday, August 28, 2024 | 9:00 AM PT

Put the Intel® Gaudi® 2 AI accelerator through its paces, streamlining training and inference and enhancing GenAI integration and deployment.

 

Count on the Intel® Gaudi® 2 AI accelerator and the Open Platform for Enterprise AI (OPEA) to greatly simplify integration and deployment of GenAI solutions. Working with validated GenAI examples, this session explores ChatQnA and Copilot implementations and surveys the ways that Intel Gaudi 2 AI accelerators fit into the pipeline to deliver top performance.

OPEA delivers the advanced technology infrastructure for launching GenAI solutions. Topics cover these areas of interest:

  • Discover tips and techniques for using Intel Gaudi AI Accelerators to build and deploy innovative GenAI solutions.
  • Examine the ChatQnA and Copilot examples and evaluate the characteristics of validated GenAI solutions.
  • Unlock new opportunities with OPEA to drive AI initiatives and take full advantage of the capabilities of Intel Gaudi Accelerators.

Sign up today.

Skill level: Intermediate to Expert

Get the software

Get the supporting code


1 Intel® Gaudi® 2 Enables a Lower Cost Alternative for AI Compute and GenAI

Intel, the Intel logo, Gaudi, and the Gaudi logo are trademarks of Intel Corporation or its subsidiaries.


 
Greg Serochi
Principal AI & Ecosystem-Enabling Manager, Intel® Gaudi® Accelerators

Ezequiel Lanza


Optimizing Distributed Training and Inference for Intel® Data Centers

Wednesday, September 4, 2024 | 9:00 AM PT

Master expert techniques for distributing AI workloads across Intel® Data Center CPUs and GPUs, improving training and inference.

 

The complexity of deep learning models is surging, warranting enhanced training and inference in distributed compute environments. This session focuses on the essential techniques for balancing distributed AI workloads across Intel® Data Center CPUs and GPUs, meeting data center challenges with gains in efficiency and performance.

Within the session, explore Intel® Extension for PyTorch, which optimizes neural network operations on Intel® hardware, and learn how DeepSpeed can be integrated to perform training operations at scale.
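As a minimal sketch of the kind of optimization the session covers, the snippet below applies Intel Extension for PyTorch's `ipex.optimize` to a toy model. The model and tensor shapes are illustrative assumptions, not session material, and the code falls back to stock PyTorch when the extension is not installed:

```python
import torch
import torch.nn as nn

# Toy stand-in for a real network; layer sizes are illustrative only.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
model.eval()

try:
    # Intel Extension for PyTorch returns a module optimized for Intel hardware.
    import intel_extension_for_pytorch as ipex
    model = ipex.optimize(model)
except ImportError:
    pass  # stock PyTorch still runs the model, just without the optimizations

with torch.no_grad():
    out = model(torch.randn(8, 16))
print(tuple(out.shape))  # (8, 4)
```

The same pattern extends to training: wrap the optimized module in PyTorch DDP and hand the training loop to DeepSpeed for scale-out.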

Topics covered include:

  • Tackle model scalability in a distributed environment skillfully, handling workloads efficiently across Intel Data Center CPUs and GPUs.
  • Gain familiarity with essential Intel tools that simplify operations, including PyTorch Distributed Data Parallel (DDP), the Intel® oneAPI Collective Communications Library (oneCCL), and the DeepSpeed library, which streamlines network training at scale.
  • Deploy practical solutions that maximize hardware efficiency and perfect strategies that ensure top performance for AI development.
  • Review sample code and benchmarking milestones, using tools such as IPEX-LLM, to illustrate performance achievements.

Sign up today.

Skill level: All skill levels

Get the software

Download code samples


Intel, the Intel logo, OpenVINO, and the OpenVINO logo are trademarks of Intel Corporation or its subsidiaries.


 
Alex Sin
AI Software Solutions Engineer

Yuning Qiu
AI Software Solutions Engineer


Creating Inventive GenAI with Intel® AI PCs

Wednesday, September 11, 2024 | 9:00 AM PT

Discover effective techniques for deploying LLMs to AI PCs and achieving top performance.

 

The potential of AI PCs has barely been realized, and innovative opportunities abound. With the launch of Intel® Core™ Ultra processors, systems now include integrated NPUs in addition to CPUs and GPUs. Using LLMs as a model for building GenAI solutions, this session explains NPU architecture, the significance of LLMs, and the use of the Intel® NPU Acceleration Library.

This session is designed for developers at all levels of experience who are intent on gaining AI development acumen, learning the latest techniques for using LLMs to create inventive GenAI solutions, and exploring the capabilities of Intel AI PCs and NPUs. Code examples demonstrate techniques for optimizing for the resources of Intel® AI PCs and taking advantage of the unique features that make them suitable for AI development.

Topics covered include:

  • Understand large language models, the advantages of local inference, and the challenges encountered.
  • See how AI workloads are accelerated on Intel Core Ultra processors and how Intel NPUs benefit operations.
  • Discover techniques for quick prototyping of LLMs using the Intel Core Ultra processor with the Intel NPU Acceleration Library.
  • Learn how to deploy models to NPUs with the OpenVINO™ toolkit and the NPU plugin.
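The prototyping flow in the bullets above can be sketched roughly as follows. The model, its shapes, and the CPU fallback are illustrative assumptions; the Intel NPU Acceleration Library's `compile` call targets the integrated NPU when the library and hardware are present:

```python
import torch
import torch.nn as nn

# Illustrative model; a real workflow would load an LLM instead.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 8)).eval()

try:
    # Compile the module for the integrated NPU when the library is available.
    import intel_npu_acceleration_library as npu_lib
    model = npu_lib.compile(model)
except ImportError:
    pass  # without the library (or an NPU), inference simply runs on the CPU

with torch.no_grad():
    out = model(torch.randn(1, 32))
print(tuple(out.shape))  # (1, 8)
```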

Sign up today.

Skill level: All skill levels

Featured software


Intel, the Intel logo, Intel Core Ultra, OpenVINO, and the OpenVINO logo are trademarks of Intel Corporation or its subsidiaries.


 
Alessandro Palla
Machine Learning Engineer, Intel Corporation


Using ONNX and OpenVINO™ Toolkit to Accelerate AI PCs

Wednesday, September 11, 2024 | 11:00 AM PT

The OpenVINO™ toolkit excels at optimizing ONNX models, making it an ideal tool for coaxing the best behavior from a heterogeneous network, including operations involving multi-threaded inference, model quantization, and graph partitioning.

 

Optimizing a diverse network composed of heterogeneous components can be simplified substantially by applying OpenVINO™ optimizations to Open Neural Network Exchange (ONNX) models. ONNX offers numerous benefits to developers, delivering a common infrastructure for supporting machine learning and providing standardized operators and a common format. For Intel systems that include a mix of CPUs, integrated GPUs, and NPUs, model inferencing is streamlined, as demonstrated in this session.

Using OpenVINO as a backend, models can be inferenced and deployed with the ONNX Runtime APIs. The session shows the performance gains achieved through the simple process of using the OpenVINO Execution Provider on an AI PC and evaluating the results.

Topics covered include:

  • Learn the characteristics of an AI PC and the benefits these systems offer developers.
  • Understand the techniques for inferencing and deploying ONNX models on an AI PC.
  • Evaluate performance of ONNX models on AI PC systems with a combination of OpenVINO, ONNX, and OpenVINO™ Execution Provider for ONNX Runtime*.
  • Learn how to build a standalone app for an AI PC with OpenVINO Execution Provider for ONNX Runtime.

Sign up today.

Skill level: All skill levels

Featured software


Intel, the Intel logo, OpenVINO, and the OpenVINO logo are trademarks of Intel Corporation or its subsidiaries.


 
Dmitriy Pastushenkov
Software Enabling and Optimization Engineer


Register to save your seat


Intel strives to provide you with a great, personalized experience, and your data helps us to accomplish this.


By submitting this form, you are confirming you are age 18 years or older and you agree to share your personal data with Intel for this business request.

By submitting this form, you are confirming you are age 18 years or older. Intel may contact me for marketing-related communications. To learn about Intel's practices, including how to manage your preferences and settings, you can visit Intel's Privacy and Cookies notices.
