Thursday, June 27, 2024, from 9:00am to 11:00am PT
This workshop demystifies Intel NPUs (neural processing units), providing examples with large language models (LLMs) and case studies. It explains the fundamental architecture of NPUs and reveals the capabilities of the technology, offering a clear picture of the role of neural processors in an AI system and the acceleration benefits they provide.
Real-world examples show how AI applications integrate LLMs with Intel NPUs, including chatbots, retrieval-augmented generation (RAG), Stable Diffusion, and speech-to-text. Developers gain insight into how this technology improves performance and efficiency, allowing AI workloads to run effectively on PCs rather than in the cloud.
Areas covered in the workshop include:
Hands-on demonstrations of these techniques require an Intel® Tiber™ Developer Cloud account. If you don’t have one, get one here.
Both novice developers and experienced professionals interested in NPUs will benefit from this workshop.
Presenter
Machine Learning Engineer, Intel
Q&A
Deep Learning R&D Architect, Intel