AI inference on the PC allows trained models to be deployed to Intel® Xeon® Scalable Processors, Intel® Processor Graphics, and the Intel® Myriad™ VPU. In previous webinars, we demonstrated how the Intel® Distribution of OpenVINO™ Toolkit can ingest Image Recognition and Classification models from various frameworks, e.g. TensorFlow* and PyTorch*, and deploy them on these hardware platforms with high performance. In this webinar, we will focus on new capabilities of the OpenVINO™ Toolkit for deploying Speech Recognition models trained with the Kaldi Framework to the CPU, Processor Graphics, and the VPU. You will walk away knowing how to convert a Speech Recognition model to Intermediate Representation (IR) using the Model Optimizer and run inference on multiple hardware targets using the Inference Engine plugins.
- Understand the requirements for creating IR from Speech Recognition models – a Kaldi-trained model (e.g. a .nnet1 or .nnet2 model), the Kaldi class counts file, etc.
- See how to use the Model Optimizer to create the IR files (.xml and .bin).
- Use the Inference Engine to deploy to CPU, GPU and VPU.
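The two steps above can be sketched in a few lines of Python. This is a minimal illustration, not the webinar's exact walkthrough: the Model Optimizer command in the comment and the file names `ir/model.xml` / `ir/model.bin` are placeholder assumptions, and some attribute names (such as `input_info`) vary between OpenVINO releases.

```python
# Sketch of the convert-then-deploy flow, assuming the OpenVINO Python API
# (openvino.inference_engine) is installed. The IR files are assumed to have
# been produced by a Model Optimizer command along these lines (paths are
# placeholders):
#
#   python mo.py --framework kaldi --input_model model.nnet \
#                --counts model.counts --output_dir ir/
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="ir/model.xml", weights="ir/model.bin")

# The same network can be loaded onto a different plugin just by changing
# device_name: "CPU", "GPU" (Processor Graphics), or "MYRIAD" (VPU).
exec_net = ie.load_network(network=net, device_name="CPU")

# Look up the network's input layer and its expected shape.
input_name = next(iter(net.input_info))
shape = net.input_info[input_name].input_data.shape

# Feed a batch of acoustic feature vectors (zeros here as a stand-in)
# and run a synchronous inference request.
features = np.zeros(shape, dtype=np.float32)
result = exec_net.infer(inputs={input_name: features})
```

Swapping `device_name` is all that changes between hardware targets; the IR itself is device-independent, which is the portability point the session emphasizes.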
Rudy holds a Bachelor’s degree in Space Science with a minor in Mechanical Engineering from the Florida Institute of Technology, with graduate studies in Aerospace and Astronautics at Georgia Tech and Management Science at Stanford. Rudy has run a technical consultancy in 3D graphics, VR, and computer vision; he is an active volunteer in STEM K-12 programs and participates in academic venues such as ACM SIGGRAPH.