Discover effective techniques for deploying LLMs to AI PCs and achieving top performance.
The potential of AI PCs has barely been realized, and innovative opportunities abound. With the launch of Intel® Core™ Ultra processors, systems now include integrated NPUs in addition to CPUs and GPUs. Using LLMs as a case study for building GenAI solutions, this session explains NPU architecture, the significance of LLMs, and how to use the Intel® NPU Acceleration Library.
Designed for developers at all levels of experience who want to build AI development acumen, learn the latest techniques for using LLMs to create inventive GenAI solutions, and explore the capabilities of Intel AI PCs and NPUs. Code examples demonstrate how to optimize for the resources of Intel® AI PCs and take advantage of the unique features that make them well suited to AI development.
Topics covered include:
- Understand large language models (LLMs), the advantages of local inference, and the challenges it presents.
- See how Intel Core Ultra processors accelerate AI workloads and the benefits Intel NPUs bring to these operations.
- Discover techniques for quickly prototyping LLMs on Intel Core Ultra processors with the Intel NPU Acceleration Library (see the first sketch after this list).
- Learn how to deploy LLMs to the NPU with the OpenVINO™ toolkit and its NPU plugin (see the second sketch after this list).
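To give a flavor of the prototyping workflow, here is a minimal sketch that uses the Intel NPU Acceleration Library to offload a Hugging Face transformers model to the NPU. The model ID, prompt, and generation parameters are illustrative choices, not part of the session materials, and the library's API may shift between releases; treat this as a starting point rather than a definitive recipe.

```python
# Minimal sketch: prototype an LLM on the NPU with the Intel NPU Acceleration Library.
# Assumes an Intel Core Ultra system with the NPU driver installed, plus the
# intel-npu-acceleration-library, torch, and transformers packages.
import torch
import intel_npu_acceleration_library
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # illustrative; any small causal LM works

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, use_cache=True).eval()

# Compile the PyTorch model for the NPU; int8 quantization trades a little
# accuracy for a footprint that fits the NPU's memory budget.
model = intel_npu_acceleration_library.compile(model, dtype=torch.int8)

# Generate text, streaming tokens to stdout as they are produced.
inputs = tokenizer("What is an NPU?", return_tensors="pt")
streamer = TextStreamer(tokenizer, skip_special_tokens=True)
model.generate(**inputs, max_new_tokens=128, streamer=streamer)
```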
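Deployment through OpenVINO follows the toolkit's usual device-plugin pattern: compile a model for the "NPU" device and run inference as on any other target. Below is a minimal sketch assuming a recent OpenVINO release with NPU support and a model already exported to OpenVINO IR; the model path and dummy input are hypothetical placeholders.

```python
# Minimal sketch: run an OpenVINO IR model on the NPU device plugin.
# Assumes OpenVINO 2023.2 or newer and an installed NPU driver.
import numpy as np
import openvino as ov

core = ov.Core()
print(core.available_devices)  # "NPU" appears when the driver is set up

# Load an IR model (hypothetical path) and compile it for the NPU plugin.
model = core.read_model("model/openvino_model.xml")
compiled = core.compile_model(model, device_name="NPU")

# Run a single inference on dummy data shaped like the model's first input
# (assumes a static input shape).
input_tensor = np.zeros(compiled.inputs[0].shape, dtype=np.float32)
result = compiled(input_tensor)
print(result[compiled.outputs[0]].shape)
```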
Sign up today.
Skill level: All skill levels
Featured software
- Intel® NPU Acceleration Library
- OpenVINO™ toolkit
Intel, the Intel logo, Intel Core Ultra, OpenVINO, and the OpenVINO logo are trademarks of Intel Corporation or its subsidiaries.
Alessandro Palla
Machine Learning Engineer, Intel Corporation