Code Ahead of the Curve with FREE Technical Trainings

Sign up to attend LIVE SESSIONS focused on today’s relevant technology topics—AI, machine learning, HPC and cloud computing, computer graphics, and more.

Watch past sessions on demand.

MLPerf 2023 Results: How Intel Optimized Model Performance on the Latest CPUs

Wednesday, September 27, 2023 | 9:00 AM PDT

Get an overview of the new (and impressive) MLPerf benchmarking results for 4th Gen Intel® Xeon® Scalable processors … and how they were accomplished.

 

MLPerf is a benchmarking suite that measures the real-world performance of machine-learning systems on a variety of ML tasks (image classification, object detection, machine translation, and others) in an architecture-neutral, representative, and reproducible manner.

This session focuses on Intel’s results from its 2023 MLPerf v3.0 submissions specific to the data center category. The workload tasks were run on 4th Gen Intel® Xeon® Scalable processors. (Fun fact: Intel remains the only data center CPU vendor to have MLPerf Inference results on a broad set of models.)

Topics covered include:

  • The remarkable gains of 4th Gen over 3rd Gen Intel Xeon processors across AI workloads, due in large part to specialized AI hardware accelerators like Intel® AMX
  • How additional joint submissions with other customers (all on 4th Gen Xeon) were competitive with NVIDIA GPUs
  • Key learnings to boost model performance on 4th Gen Xeon CPUs, such as platform configuration, memory balancing, and recommended BIOS settings (these can be checked using Intel® System Health Inspector)
  • Methodologies and tools Intel used to optimize model performance for this submission
  • Tips for how you can use MLPerf to benchmark your own model performance
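
For a concrete sense of that last tip, below is a minimal, hypothetical Python harness built on the loadgen bindings from the MLCommons inference repository; run_model() and the sample counts are placeholders, not details from Intel's submission.

    # Minimal, hypothetical loadgen harness; assumes the mlperf_loadgen Python
    # bindings from the MLCommons inference repository are installed.
    # run_model() and the sample counts are placeholders for your own model and data.
    import array
    import numpy as np
    import mlperf_loadgen as lg

    def run_model(sample_indices):
        # Placeholder: run your model on the given samples, one output per sample.
        return [np.zeros(1, dtype=np.float32) for _ in sample_indices]

    def issue_queries(query_samples):
        # Loadgen calls this with a batch of queries; report results when done.
        outputs = run_model([q.index for q in query_samples])
        responses, keep_alive = [], []
        for q, out in zip(query_samples, outputs):
            buf = array.array("B", out.tobytes())
            keep_alive.append(buf)  # keep buffers alive until responses are reported
            addr, _ = buf.buffer_info()
            responses.append(lg.QuerySampleResponse(q.id, addr, out.nbytes))
        lg.QuerySamplesComplete(responses)

    def flush_queries():
        pass

    settings = lg.TestSettings()
    settings.scenario = lg.TestScenario.Offline
    settings.mode = lg.TestMode.PerformanceOnly

    sut = lg.ConstructSUT(issue_queries, flush_queries)
    qsl = lg.ConstructQSL(1024, 1024, lambda idx: None, lambda idx: None)  # dummy sample loader/unloader
    lg.StartTest(sut, qsl, settings)
    lg.DestroyQSL(qsl)
    lg.DestroySUT(sut)

Loadgen generates the query traffic for the selected scenario and writes the summary logs from which throughput and latency metrics are read.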

Skill level: All

Featured software

Download the following standalone:




See the results and code


 
Yuning Qiu
AI Software Solutions Engineer


Enhance and Accelerate Azure Machine Learning Workloads

Wednesday, October 18, 2023 | 9:00 AM PDT

Speed up, scale, and efficiently manage your cloud-based machine learning workloads for lower cost and better resource utilization with a new Intel-Azure solution.

 

Two problems machine learning engineers and data scientists continually face are (1) efficiently managing AI pipelines from development to deployment and (2) running those pipelines in a way that reduces costs and resource use.

Microsoft and Intel have collaborated to create a solution that addresses both by incorporating Intel AI optimizations into the Azure Machine Learning platform. Sign up for this session to learn what it is and how to take advantage of it.

Key takeaways:

  • An overview of the solution, which integrates Intel optimizations for Python frameworks such as PyTorch, TensorFlow, and Scikit-learn on the Azure ML platform
  • How to enable these framework optimizations in your Azure workloads to reduce cloud costs and development time, increase resource utilization, and improve ML pipeline speed (see the sketch after this list)
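
As a rough illustration (not taken from the session) of what these framework optimizations look like in code, here is a minimal sketch assuming the publicly available scikit-learn-intelex and intel-extension-for-pytorch packages; the model and data are placeholders.

    # Illustrative only: the model and data below are placeholders.
    import numpy as np

    # scikit-learn: patch stock estimators with Intel Extension for Scikit-learn
    from sklearnex import patch_sklearn
    patch_sklearn()  # must run before importing sklearn estimators

    from sklearn.cluster import KMeans
    X = np.random.rand(100_000, 16).astype(np.float32)
    KMeans(n_clusters=8, n_init=10).fit(X)  # now routed through the optimized implementation

    # PyTorch: apply Intel Extension for PyTorch optimizations for inference
    import torch
    import intel_extension_for_pytorch as ipex

    model = torch.nn.Sequential(
        torch.nn.Linear(16, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2)
    ).eval()
    model = ipex.optimize(model, dtype=torch.bfloat16)  # bf16 path can use Intel AMX on 4th Gen Xeon

    with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
        model(torch.randn(32, 16))

In an Azure Machine Learning job, calls like these would sit inside your training or scoring script; the session shows how the optimizations are surfaced on the platform itself.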

Includes a demo.

Sign up today.

Skill level: Intermediate

Featured software

Get the following Intel extensions standalone from GitHub or as part of the Intel® oneAPI AI Analytics Toolkit




Download code samples


 
Rachel Oberman
Intel AI Technical Consulting Engineer

Savita Mittal
Principal Software Engineer, Microsoft


OpenMP Offload – Solving Linear Systems using oneMKL on GPU

Wednesday, October 25, 2023 | 9:00 AM PDT

Learn how to solve Fortran linear systems targeting GPUs by using the Intel® oneAPI Math Kernel Library (oneMKL) and OpenMP.

 

This session addresses the challenge of speeding up oneMKL linear algebra kernels on GPUs through OpenMP offload. (This is important because it allows you to maintain the same code base for CPU and GPU OpenMP acceleration.)

You’ll learn the technique via a demonstration of solving a linear system with a 1000x1000 complex matrix by calling the GPU version of the oneMKL subroutine on Intel® Data Center GPU Max Series GPUs.

Key takeaways:

  • An overview of OpenMP, including its offload capabilities and how it gives Fortran users a native way to take advantage of Intel GPU hardware
  • How to dispatch a oneMKL math kernel, such as solving a complex number matrix, to the latest Intel GPU
  • Compiling the code example using the latest Intel® Fortran Compiler
  • Using OpenMP data constructs to manage data movement between the host and target devices
  • How to monitor and optimize host-to-target data movement performed by the OpenMP runtime library

Sign up today.

Skill level: Intermediate

Featured software

Download the following standalone or as part of the Intel® oneAPI Base Toolkit




Download code samples


 
Shiquan Su
Software Technical Consulting Engineer

Barbara Perz
Technical Consulting Engineer


How INESC-ID Achieved 9x Acceleration for Epistasis Disease Detection using oneAPI

Wednesday, November 15, 2023 | 9:00 AM PST

Find out how Lisbon-based R&D innovator INESC-ID significantly sped up its computationally crushing bioinformatics application on the latest Intel® CPUs using oneAPI tools.

 

Regardless of whether medical science is your thing, this session is a good one for anyone interested in materially improving the performance of computationally challenging, algorithm-heavy applications on Intel CPUs.

Sign up to hear how two researchers accomplished it for a critical epistasis disease-detection application that uses datasets spanning millions of genetic markers in multiple high-order combinations; this effectively expands datasets into the hundreds of trillions of samples that must be quickly and accurately evaluated.

Find out how they leveraged oneAPI tools to transform their legacy code into multi-architecture state-of-the-art code, plus the results they achieved on Intel CPUs, including Intel® Xeon® CPU Max Series with HBM.

Sign up today.

Skill level: Intermediate

Featured software

Get the following tools standalone or as part of the Intel® oneAPI Base Toolkit




Download code samples


 
Aleksandar Ilic
Assistant Professor, Universidade de Lisboa and Senior Researcher SIPS GROUP, INESC-ID

Ricardo Nobre
Researcher at INESC-ID | High-Performance Computing Architectures and Systems (HPCAS)


Register to save your seat


Intel strives to provide you with a great, personalized experience, and your data helps us to accomplish this.


By submitting this form, you are confirming you are age 18 years or older and you agree to share your personal data with Intel for this business request.

By submitting this form, you are confirming you are age 18 years or older. Intel may contact me for marketing-related communications. To learn about Intel's practices, including how to manage your preferences and settings, you can visit Intel's Privacy and Cookies notices.
