Thursday, September 26, 2024, from 9:00am to 11:00am PT
Discover effective techniques for harnessing Hugging Face tools to unlock the power of AI PCs. This hands-on session shows you how to make inference more efficient, streamline LLM development, and build richer multimodal chat.
Learn to develop practical LLM applications on the next generation of PCs built for AI workloads and optimized for peak performance. Extend your knowledge to create sophisticated multimodal chat systems that integrate text, image, and audio inputs fluidly. Master techniques for optimizing AI models with quantization to make effective use of AI PC hardware resources.
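As a small taste of the quantization topic, here is a minimal sketch of weight-only quantization with Hugging Face tooling via the Optimum Intel OpenVINO backend; the model ID, the 4-bit setting, and the prompt are illustrative assumptions rather than the workshop's exact materials.

```python
# Minimal sketch: 4-bit weight-only quantization with Optimum Intel (OpenVINO).
# The model ID and prompt below are assumptions for illustration only.
from transformers import AutoTokenizer
from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed example model

# Export the model to OpenVINO IR and compress its weights to 4 bits.
quant_config = OVWeightQuantizationConfig(bits=4)
model = OVModelForCausalLM.from_pretrained(
    model_id, export=True, quantization_config=quant_config
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Run a short generation to confirm the quantized model still responds.
inputs = tokenizer("What makes an AI PC different?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```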
The session gives you a thorough understanding of LLM inference and multimodal chat using tools from Hugging Face. Topics include:
This workshop illustrates the unique capabilities of AI PCs and clarifies the role and deployment of Intel® CPUs, GPUs, and NPUs through real-world examples. Hands-on demonstrations of these techniques require an Intel® Tiber™ Developer Cloud account. If you don’t have one, get one here.
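As a small illustration of how those devices appear to software, the sketch below uses OpenVINO's Python API to list the inference devices available on a machine; on an AI PC this typically includes the CPU, the integrated GPU, and the NPU. It is an assumption about tooling the workshop is likely to touch, not a required exercise.

```python
# Minimal sketch: enumerate the inference devices OpenVINO sees on this machine.
# On an AI PC the list typically includes 'CPU', 'GPU', and 'NPU'.
import openvino as ov

core = ov.Core()
for device in core.available_devices:
    print(device, core.get_property(device, "FULL_DEVICE_NAME"))
```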
The workshop suits experienced professionals interested in AI PCs as well as AI enthusiasts, and demonstrates the exciting potential of the latest generation of PCs for handling demanding AI applications.
Presenter
Developer Evangelist for AI and oneAPI, Intel
Q&A moderator
Technical Developer Evangelist, Intel