Code Ahead of the Curve with FREE Technical Trainings

Sign up to attend LIVE SESSIONS focused on today’s relevant technology topics—AI, machine learning, HPC and cloud computing, computer graphics, and more.

Watch past sessions on demand.


Build Next-Gen Portable, Power-Efficient AI on the Intel AI PC

Wednesday, June 5, 2024 | 9:00 AM PT

Learn how to use the OpenVINO™ toolkit on the new AI PC to build flexible, low-power, AI-assisted apps that you can take on the go without needing the Internet or the cloud.


What is an AI PC and how do developers exploit its AI-acceleration capabilities across the included CPU, GPU, and NPU?

This session delivers answers, unpacking fundamental and advanced techniques for tapping into the ever-expanding potential of AI using OpenVINO on the Intel AI PC.

Key takeaways:

  • An overview of Intel’s AI PC.
  • Approaches for making current and next-generation AI and GenAI models more performant and power-efficient.
  • Why you should consider running AI and GenAI models on client and edge devices, including tips on low-power implementations.
  • Techniques for optimizing and deploying AI applications on the AI PC’s different compute engines using OpenVINO.
  • How to access and use the OpenVINO Notebooks repository, including its functionalities and applications.

Includes live demos showcasing how to seamlessly transition AI/GenAI apps across compute engines with OpenVINO for popular use cases, such as background blurring on video calls, object detection, and GenAI-powered image generation and chatbots.
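The session's theme of moving an app between the AI PC's compute engines can be sketched in plain Python. Below is a minimal, illustrative device-fallback helper; the device names mirror OpenVINO's conventions, but the selection logic and preference order here are assumptions for illustration, not OpenVINO's own API.

```python
def pick_device(available, preferred=("NPU", "GPU", "CPU")):
    """Return the first preferred device that the machine reports.

    In an OpenVINO app, the available list would come from
    ov.Core().available_devices, and the chosen name would be passed to
    core.compile_model(model, device). This helper is a stand-in sketch.
    """
    for device in preferred:
        if device in available:
            return device
    raise RuntimeError("no supported inference device found")

# A machine whose runtime reports only CPU and GPU falls back to the GPU.
print(pick_device(["CPU", "GPU"]))  # -> GPU
```

The same app code can then run unchanged on a machine with or without an NPU: only the compile target returned by the helper differs.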

Sign up today.

Skill level: All

Featured software


Intel, the Intel logo, OpenVINO, and the OpenVINO logo are trademarks of Intel Corporation or its subsidiaries.


Ria Cheruvu
Blockchain Developer, Data Scientist, Co-founder, Bilic, Ltd.


Prototype and Deploy LLM Applications on Intel NPUs

Wednesday, June 12, 2024 | 9:00 AM PT

Learn how to effectively integrate large language models with Intel neural processing units, one of the compute engines available in the new Intel AI PC.


Model size plus the limited hardware resources of client devices (disk, RAM, CPU) make it increasingly challenging to deploy LLMs on laptops compared to cloud-based solutions.

The Intel AI PC addresses this challenge by combining a CPU, GPU, and NPU in one device.

This session focuses on the NPU, showcasing how to prototype and deploy LLM applications on it locally.

Key learnings:

  • How NPU architecture works, including features, advantages, and capabilities in accelerating neural network computations on Intel® Core™ Ultra processors (the backbone of Intel’s AI PCs).
  • Practical aspects of deploying performant LLM apps on Intel NPU—from initial setup to optimization and system partitioning—using OpenVINO toolkit and its NPU plugin.
  • Large language models: what they are and the advantages and challenges of local inference.
  • Fast LLM prototyping on Intel Core Ultra processors using the Intel NPU Acceleration Library.

Includes real-world examples and case studies (like chatbots and RAG) that showcase the seamless integration of LLM applications with NPUs, including how this synergy can unlock performance and efficiency.
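One of the session's topics, system partitioning, can be illustrated with a toy sketch: operations the NPU plugin supports run on the NPU, and everything else falls back to the CPU. The supported-op set below is invented for illustration and does not reflect OpenVINO's actual NPU coverage.

```python
# Illustrative only: a hypothetical subset of ops an NPU plugin might accelerate.
NPU_SUPPORTED = {"MatMul", "Add", "Softmax", "LayerNorm"}

def partition(ops, supported=NPU_SUPPORTED):
    """Split a model's op list into (npu_ops, cpu_ops) by plugin support."""
    npu, cpu = [], []
    for op in ops:
        (npu if op in supported else cpu).append(op)
    return npu, cpu

model_ops = ["MatMul", "Gelu", "Add", "TopK", "Softmax"]
npu_ops, cpu_ops = partition(model_ops)
print(npu_ops)  # -> ['MatMul', 'Add', 'Softmax']
print(cpu_ops)  # -> ['Gelu', 'TopK']
```

In a real deployment this split is handled by the runtime (for example via OpenVINO's heterogeneous execution), not hand-written, but the principle is the same: keep supported subgraphs on the accelerator and fall back gracefully elsewhere.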

Sign up today.

Skill level: All

Featured software


Intel, the Intel logo, OpenVINO, the OpenVINO logo, and Intel Core are trademarks of Intel Corporation or its subsidiaries.


Alessandro Palla
Machine Learning Engineer, Intel Corporation


De-Risking LLMs: How Prediction Guard Delivers Trustworthy AI on Gaudi® 2

Wednesday, June 26, 2024 | 9:00 AM PT

Learn how the AI integration company cracked the code on trustworthy, high-performance LLM applications, achieving 2x throughput gains, cost efficiencies, and more.


Large language models promise to revolutionize how enterprises operate, but making them production-ready means solving for privacy risks, security vulnerabilities, and performance bottlenecks.

Not so easy.

This session focuses on how AI startup Prediction Guard found a solution to these challenges by using the processing power of Intel® Gaudi® 2 AI accelerators in the Intel® Tiber™ Developer Cloud1.

Topics include:

  • Prediction Guard’s pioneering work hosting open source LLMs like Llama 2 and Neural-Chat in a secure, privacy-preserving environment with filters for PII, prompt-injection attacks, toxic outputs, and factual inconsistencies.
  • How they optimized batching, model replication, tensor shaping, and hyperparameters for 2x throughput gains and industry-leading time-to-first-token for streaming.
  • Architectural insights and best practices for capitalizing on LLMs.
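The batching optimization mentioned above can be sketched simply: grouping incoming requests so that one accelerator pass serves many users at once. This is a minimal sketch of the idea, not Prediction Guard's actual implementation; the batch size of 4 is an arbitrary placeholder.

```python
def make_batches(requests, max_batch=4):
    """Group requests into fixed-size batches.

    Larger batches amortize per-pass overhead on the accelerator, raising
    throughput at the cost of some per-request latency; tuning max_batch
    against time-to-first-token is exactly the kind of trade-off the
    session discusses.
    """
    return [requests[i:i + max_batch] for i in range(0, len(requests), max_batch)]

# Ten queued prompts become three accelerator passes.
print(make_batches(list(range(10))))  # -> [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```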

Sign up today.

Skill level: Expert

Featured software

This session showcases the Intel Tiber Developer Cloud: Learn more | Sign up

Download code samples


1 Formerly Intel® Developer Cloud
Intel, the Intel logo, and Gaudi are trademarks of Intel Corporation or its subsidiaries.


Daniel Whitenack
Prediction Guard


Register to save your seat

Required fields (*)


Intel strives to provide you with a great, personalized experience, and your data helps us to accomplish this.


By submitting this form, you are confirming you are age 18 years or older and you agree to share your personal data with Intel for this business request.

By submitting this form, you are confirming you are age 18 years or older. Intel may contact me for marketing-related communications. To learn about Intel's practices, including how to manage your preferences and settings, you can visit Intel's Privacy and Cookies notices.
