Get best practices from Intel and Hugging Face for optimizing multi-node, distributed transformer model training and inference on 4th Gen Intel® Xeon® Processors.
Transformer models are powerful neural networks that have become the de facto standard for delivering state-of-the-art results on tasks such as natural language processing (NLP), computer vision, and online recommendations. (Fun fact: people use transformers every time they run a web search on Google or Bing.)
But there’s a challenge: Training these deep learning models at scale requires a large amount of computing power. This can make the process time-consuming, complex, and costly.
This session shares a solution: end-to-end optimization of transformer training and inference.
Join your hosts from Intel and Hugging Face (creator of the transformers library) to learn:
- How to run multi-node, distributed CPU fine-tuning for transformers with hyperparameter optimization using Hugging Face transformers, its Accelerate library, and Intel® Extension for PyTorch (see the first sketch after this list)
- How to streamline inference optimization, including model quantization and distillation, using Optimum Intel, the interface between the transformers library and Intel® tools and libraries (see the second sketch below)
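
To give a flavor of the fine-tuning workflow, here is a minimal sketch, not the exact code from the session. It assumes a recent transformers release (one that supports the `use_ipex` and `ddp_backend` training arguments) plus the intel_extension_for_pytorch and oneccl_bindings_for_pytorch packages; the checkpoint and dataset are illustrative, and the hyperparameter search step (e.g., `Trainer.hyperparameter_search`) is omitted for brevity:

```python
# Minimal CPU fine-tuning sketch (illustrative model/dataset; argument
# names can vary across transformers releases).
import oneccl_bindings_for_pytorch  # noqa: F401  # registers the "ccl" distributed backend

from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

checkpoint = "distilbert-base-uncased"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

dataset = load_dataset("glue", "sst2")

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True,
                     padding="max_length", max_length=128)

encoded = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="sst2-finetuned",
    no_cuda=True,            # train on CPU
    use_ipex=True,           # apply Intel Extension for PyTorch optimizations
    bf16=True,               # bfloat16 autocast on Xeon CPUs with AMX/AVX-512 BF16
    ddp_backend="ccl",       # oneCCL collectives for multi-node CPU training
    per_device_train_batch_size=32,
    num_train_epochs=1,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
)
trainer.train()
```

On multiple nodes, a script like this is typically launched with MPI (for example, `mpirun -f hostfile -n 4 -ppn 1 python finetune.py` with Intel® MPI) so that each rank joins the oneCCL process group; the exact launcher flags depend on your cluster setup.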
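And here is a sketch of post-training dynamic quantization with Optimum Intel and Intel® Neural Compressor; the class and argument names shown may vary across optimum-intel releases, and the checkpoint name is again illustrative:

```python
# Post-training dynamic quantization sketch with Optimum Intel.
from neural_compressor.config import PostTrainingQuantConfig
from optimum.intel import INCQuantizer
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative fine-tuned model
)

# Dynamic quantization computes activation scales at runtime,
# so no calibration dataset is required.
quantization_config = PostTrainingQuantConfig(approach="dynamic")

quantizer = INCQuantizer.from_pretrained(model)
quantizer.quantize(
    quantization_config=quantization_config,
    save_directory="quantized-sst2",  # INT8 model and config are written here
)
```

The resulting INT8 model can then be reloaded for inference with `INCModelForSequenceClassification.from_pretrained("quantized-sst2")`.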
You’ll also see a showcase of transformer performance on the latest Intel® Xeon® Scalable processors.
Sign up today.
Skill level: Intermediate
Featured software
Get the Intel® Extension for PyTorch as part of the Intel® AI Analytics Toolkit or standalone.

Julien Simon
Chief Evangelist at Hugging Face