If you’re looking to sharpen your technical skills, get expert answers to specific questions, or dive into an entirely new area of development, these free webinars can help. Attend them live or view archived webinars on demand.
Boost Application Performance using Intel® Parallel Studio XE
Wednesday, September 13, 2017 9 AM PDT
It’s no longer a nice-to-have. Code modernization—i.e., developing applications to take advantage of multiple cores (Intel® Xeon Phi™ has up to 72 of them) and other heterogeneous computing hardware—is the expectation.
So how do you do it?
This technical webinar dives into the answers, including how Intel® Parallel Studio XE can help get the job done.
Join us. And be sure to download the free trial of Intel Parallel Studio XE.
Presenter: Kevin O’Leary
Kevin O’Leary is a Lead Technical Consulting Engineer in Intel’s Developer Products Division. His specialty is performance optimization using Intel® Advisor and Intel® VTune™ Amplifier. He was previously a senior developer of Intel’s Parallel Studio product. Before joining Intel he spent many years as a debugger engineer for IBM/Rational Software.
Increase Performance for Demanding Workloads on Intel® Xeon® Processors
Wednesday, September 20, 2017 9 AM PDT
The latest Intel® Xeon® processors include many advancements—such as Intel® AVX-512—for optimizing performance of even the most demanding computational tasks like scientific simulations and modeling, financial analytics, audio/video processing, and cryptography.
But how do you tap into these advances quickly and easily?
Join us for an overview of how Intel® Parallel Studio XE (download the free trial) can optimize your performance-critical modules, from finding memory access bottlenecks to uncovering threading and vectorization opportunities. We’ll also share tips for optimizing your code.
Presenter: Alex Shinsel
Alex Shinsel has been a Technical Consulting Engineer in Intel’s Software and Services Group for just over a year, with a focus on software optimization using performance analysis tools like Intel® VTune™ Amplifier XE and Intel® Advisor. She loves solving challenging problems and helping customers learn how to solve theirs. Alex graduated with honors in 2016 from Pacific University in Oregon with a B.S. in Computer Science.
Better Threaded Performance and Scalability with Intel® VTune™ Amplifier + OpenMP*
Wednesday, September 27, 2017 9 AM PDT
The terms “threading” and “scalability” are not just on the short list of coding buzzwords; they’re absolutely necessary for competitive applications and solutions, from enterprise cloud/network to HPC.
Intel® VTune™ Amplifier is a code-profiling tool with a friendly analysis interface that not only provides accurate profiling data but also helps you mine and interpret it.
Interested?
Join us for discussion and demonstrations.
Presenter: Anoop Madhusoodhanan Prabha
Anoop Madhusoodhanan Prabha is a Software Engineer in Intel's Software and Services Group, where he works as a Technical Consulting Engineer on the C/C++ compiler support team. Since joining Intel in 2009, he has optimized various customer applications by enabling multithreading, vectorization, and other microarchitectural tunings. He has experience working with OpenMP, Cilk™ Plus, Intel® TBB, CUDA, etc. His current interests are processor and GPU architecture, heterogeneous computing, and high-performance computing. He has an M.S. degree in Electrical Engineering from the State University of New York at Buffalo, US. His e-mail is anoop.madhusoodhanan.prabha@intel.com
Memory Access Profiling: Find and Fix Common Performance Bottlenecks
Wednesday, October 4, 2017 9 AM PDT
Intel® VTune™ Amplifier not only provides advanced, accurate profiling capabilities with very low overhead … it also gives you the tools to mine and interpret your data, all in a single, friendly analysis interface.
Which is precisely what this webinar is focused on.
Join us for an hour of demonstrations and how-to’s.
Be sure to download the free trial.
Presenter: Jackson Marusarz
Jackson Marusarz is a technical consulting engineer (TCE) in Intel's Developer Products Division. As the lead TCE for Intel® VTune™ Amplifier, Jackson’s main focus is on software performance analysis and tuning for both serial and multi-threaded applications. His time is split between figuring out how to analyze and tune software, and how to create tools that help others do the same.
Is Python* Almost as Fast as Native Code? Believe It!
Wednesday, October 11, 2017 9 AM PDT
One of the biggest draws of Python* is that it’s easy to learn and use. But it’s also notorious for being too slow for high-performance, compute-intensive applications.
Intel® Distribution for Python* (a free download) addresses these fundamental performance challenges, delivering the speed of compiled languages with full optimization for a wide range of Intel® processors.
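The performance gap often comes down to interpreter overhead in hot loops. A common first step, sketched below with a generic dot-product example (not taken from the webinar), is replacing Python-level loops with NumPy array operations, which the Intel Distribution routes to optimized native libraries such as Intel® MKL:

```python
import numpy as np

# Pure-Python loop: one interpreter dispatch per element.
def dot_py(x, y):
    total = 0.0
    for xi, yi in zip(x, y):
        total += xi * yi
    return total

x = np.linspace(0.0, 1.0, 100_000)
y = np.linspace(1.0, 2.0, 100_000)

# Same result, but a single call into native code (MKL-backed in the
# Intel Distribution) instead of 100,000 interpreted iterations.
assert np.isclose(dot_py(x, y), np.dot(x, y))
```

The speedup from the `np.dot` form grows with array size, since the fixed cost of the native call is amortized over all elements.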
Join us to see Intel Distribution for Python in action, taking an application to the next level of performance using native libraries, performance analysis, and optimization of Python/C/C++ code.
Presenter: Nathan Greeneltch, PhD
Nathan joined the Technical Computing, Analyzers and Runtimes (TCAR) group in 2017 as a technical consulting engineer (TCE). His role is to help drive customer engagements for Python as well as Intel’s libraries, leveraging the synergies between Python and MKL. Before joining the TCAR team, Nathan spent three years on the processor development side of Intel, where he was a machine learning practitioner in the defects division, identifying and predicting failure areas in coming generations of Intel processors. Nathan has a PhD in physical chemistry from Northwestern University, where he worked on nanoscale lithography of metal waveguides for amplification of laser-initiated vibrational signals in small molecules.
Speed Up Small-Matrix Multiplication using New Intel® Math Kernel Library Capabilities
Wednesday, October 18, 2017 9 AM PDT
A major focus of Intel® Math Kernel Library—one of five free Intel® Performance Libraries—is to dramatically improve small-matrix multiplication run-time performance in compute-intense applications such as those used in applied mathematics, physics, and engineering.
In this webinar, we’ll look at new Intel® MKL capabilities, compare and contrast them for different matrix multiplication use cases, and show you code samples and benchmark results for different problem sizes.
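As a rough illustration of the workload the webinar targets (sizes are hypothetical), here is a minimal NumPy sketch of batched small-matrix multiplication; with the NumPy shipped in Intel® Distribution for Python, matrix products dispatch to Intel MKL, though other NumPy builds may use a different BLAS:

```python
import numpy as np

# Many small matrices: the regime where small-GEMM optimizations matter,
# since per-call overhead would otherwise dominate the arithmetic.
rng = np.random.default_rng(0)
batch, n = 1000, 8
a = rng.standard_normal((batch, n, n))
b = rng.standard_normal((batch, n, n))

# One call multiplies all 1000 pairs; an MKL-backed NumPy can map this
# to optimized batched GEMM kernels instead of 1000 Python-level calls.
c = np.matmul(a, b)

# Sanity-check one pair against an explicit triple loop.
ref = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        for k in range(n):
            ref[i, j] += a[0, i, k] * b[0, k, j]
assert np.allclose(c[0], ref)
```

Grouping the small products into one batched call is the key design choice: it amortizes call overhead and lets the library schedule the whole batch at once.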
Presenter: Murat Guney
Murat E. Guney received his B.S./M.S. degrees from Middle East Technical University, and M.S./Ph.D. degrees from Georgia Institute of Technology. His main interests are high-performance/parallel computing, performance optimizations, sparse solvers, and numerical methods. He is currently a Software Engineer for the Intel Math Kernel Library.
Accelerating Lossless Data Compression Code for Cloud and Edge Applications
Wednesday, October 25, 2017 9 AM PDT
Compressing and decompressing data is increasingly important to save storage space and improve communication efficiency. But compression and decompression take extra processor resources, and in data-intensive applications this can greatly affect overall system performance. An optimized implementation of compression algorithms plays a critical role in minimizing system-performance impact.
Intel® Integrated Performance Primitives (Intel® IPP) includes a specialized domain of highly optimized lossless data compression functions, such as ZLIB, BZIP, and LZO. The latest Intel IPP also introduces the LZ4 algorithm to support fast compression.
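The speed-versus-ratio trade-off behind choosing an algorithm can be sketched with Python’s standard-library `zlib`, which implements the same ZLIB (DEFLATE) format that Intel IPP accelerates; note IPP itself ships optimized C functions and a patched zlib, not a Python API, so this is illustrative only:

```python
import zlib

# Illustrative, repetitive payload; real workloads vary in redundancy.
data = b"sensor_reading,42.0,ok\n" * 5000

fast = zlib.compress(data, level=1)   # favors speed
small = zlib.compress(data, level=9)  # favors compression ratio

# Higher levels spend more CPU time to squeeze out more redundancy.
assert len(small) <= len(fast) < len(data)

# Lossless: decompression restores the input exactly.
assert zlib.decompress(small) == data
```

The same reasoning applies when picking among ZLIB, LZO, and LZ4: fast algorithms like LZ4 trade some ratio for throughput, which suits latency-sensitive cloud and edge workloads.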
Want to find out more?
Join us for a review of Intel IPP usage, performance, and features, including how applications can choose a compression algorithm based on their workload characteristics.
Be sure to download Intel IPP—one of five free Intel® Performance Libraries.
Presenter: Shaojuan Zhu
Shaojuan Zhu is a Technical Consulting Engineer at Intel supporting Intel performance libraries: DAAL, IPP and MKL. She has ten years of experience developing media products and supporting performance solutions on Intel architectures. Her expertise and interests include biologically inspired neural networks, machine learning and media processing. She holds a Ph.D. in Electrical and Computer Engineering from Oregon Health and Science University.
Parallel Programming Standards Update: MPI*, OpenMP* and Intel® TBB
Wednesday, November 1, 2017 9 AM PDT
Two decades is a millennium in technology years. And yet … Message Passing Interface* (MPI*), Open Multi-Processing* (OpenMP*) and Intel® Threading Building Blocks (Intel® TBB) have made the cut, helping the global developer community parallelize code for 25, 20 and 11 years, respectively.
Pretty impressive. And they remain popular largely because they’re based on open standards with open source implementations, and they offer intuitive approaches to parallelism.
Join us for this webinar.
If you haven’t yet, be sure to download both Intel® MPI Library and Intel TBB—part of the free Intel® Performance Libraries.
Presenter: Henry Gabb
Henry Gabb is a principal engineer in the Developer Products Division of the Intel Software and Services Group. He first joined Intel in 2000 to help drive parallel computing inside and outside the company. He transferred to Intel Labs in 2010 to become the program manager for various research programs in academia, including the Universal Parallel Computing Research Centers at the University of California at Berkeley and the University of Illinois at Urbana-Champaign. Prior to joining Intel, Henry was Director of Scientific Computing at the U.S. Army Engineer Research and Development Center MSRC, a Department of Defense high-performance computing facility. Henry holds a BS in biochemistry from Louisiana State University, an MS in medical informatics from the Northwestern Feinberg School of Medicine, and a PhD in molecular genetics from the University of Alabama at Birmingham School of Medicine. He has published extensively in computational life science and high-performance computing. Henry recently rejoined Intel after spending four years working on a second PhD in information science at the University of Illinois at Urbana-Champaign, where he established an expertise in applied informatics and machine learning for problems in healthcare and chemical exposure.
Better, Faster and More Scalable: The March To Exascale
Wednesday, November 8, 2017 9 AM PDT
Clusters continue to scale in density, providing more nodes with more cores and more threads, all interconnected by high-speed fabric. Developing, tuning, and scaling Message Passing Interface* (MPI*) applications is now essential.
According to the Exascale Computing Project, exascale supercomputers will process a quintillion (10^18) calculations each second—more realistically simulating the processes involved in precision, compute-intense usages (e.g., medicine, manufacturing, and climate).
As part of the exascale race, the MPICH* source base from Argonne National Laboratory (not only a high-performance, widely portable implementation of MPI but also the basis for the Intel® MPI Library) has been updated.
Join us to learn more.
Presenter: Dmitry Durnov
Dmitry Durnov is a senior software engineer on the Intel® MPI team at Intel Corporation. He is one of its lead developers, and his current main focus is full-stack optimization of the Intel MPI product for new Intel platforms (Intel® Xeon® Scalable processors, Intel® Xeon Phi™, and Intel® Omni-Path Architecture).