TensorFlow* on Modern Intel® Architectures Webinar

Register Today!

The availability of open source deep learning frameworks like TensorFlow* is making artificial intelligence (AI) available to everyone. Every day, researchers and engineers use AI to solve business, engineering, and even societal problems. Intel and Google engineers have been working hand-in-hand to optimize TensorFlow* for Intel® Xeon® and Xeon Phi™ processors. This webinar describes a number of changes made to TensorFlow* to ensure it takes advantage of key performance features in Intel processors. These changes are implemented in such a way that existing Python*-based topologies can see dramatic performance improvements with no modifications at the model level.

We will discuss key performance challenges encountered while optimizing TensorFlow* as well as optimization techniques deployed to solve these challenges. We also contrast the exercise of optimizing a deep learning framework such as TensorFlow* with optimizing common applications in HPC and other domains. Finally, we provide instructions on how data scientists and engineers can download, build and run TensorFlow* for best performance on Intel® Xeon® and Xeon Phi™ processors.
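As a hedged illustration of the kind of setup the webinar covers, the snippet below shows one common way to obtain an Intel-optimized TensorFlow* build and set CPU threading environment variables. The package name and values are drawn from Intel's general published guidance, not from this webinar, and should be tuned for your machine:

```shell
# Sketch only: install an Intel® MKL-optimized TensorFlow* build and set
# common OpenMP/MKL threading knobs for Intel® Xeon® processors.
pip install intel-tensorflow   # Intel-optimized TensorFlow* wheel (illustrative)

# Threading hints often recommended for CPU training/inference;
# the values below are examples, not universal settings.
export OMP_NUM_THREADS=16                                 # roughly the number of physical cores
export KMP_AFFINITY=granularity=fine,verbose,compact,1,0  # pin threads to cores
export KMP_BLOCKTIME=1                                    # ms a thread spins before sleeping
```

The right values depend on core count, topology, and workload, which is exactly the kind of tuning discussed in the session.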

What attendees can expect to learn:

  • TensorFlow* Overview.
  • Deep Learning Optimizations on CPU: Performance Challenges and Solutions.
  • Speeding TensorFlow* Performance on Intel® Xeon® and Xeon Phi™.

By submitting this form, you are confirming you are an adult 18 years or older and you agree to Intel contacting you with marketing-related emails or by telephone. You may unsubscribe at any time. Intel's web sites and communications are subject to our Privacy Notice and Terms of Use.

About Our Speaker

ElMoustapha Ould-Ahmed-Vall

Senior Principal Engineer in Artificial Intelligence Products Group (AIPG)

Moustapha (ElMoustapha Ould-Ahmed-Vall) is currently a Senior Principal Engineer in the Artificial Intelligence Products Group (AIPG) at Intel, working on machine learning and deep learning performance optimizations. He was a key contributor to a number of performance features in successive Intel processors, most notably the architecture of Intel® AVX-512 and its performance evaluation. He was also involved with the Haswell (HSW) new instructions (the bit manipulation instructions and Intel® AVX2) and has conducted extensive performance analysis and tuning work on many high-performance applications since joining Intel in 2007. He has more than 170 patents granted or pending in the areas of computer architecture, performance optimization, HPC, cloud, and machine learning, and over 20 peer-reviewed research publications.

Moustapha received his Diplôme d'Ingénieur in Computer Science from the Université de Technologie de Compiègne, France, in 2003. He received his Master of Science (M.S.) and Doctor of Philosophy (Ph.D.) degrees in Electrical and Computer Engineering from the Georgia Institute of Technology, USA, in 2004 and 2007, respectively.