Learn to develop high-performance applications and enable deep learning inference from edge to cloud.
The integration of AI into laboratory environments is rapidly changing the way clinical and pharmaceutical scientists innovate and extract insights. However, optimizing and deploying scientific pipelines is often challenging given the constraints and requirements of laboratory settings.
In this talk, we will demonstrate how to efficiently build and deploy scientific AI models using open-source technologies. We will walk through an end-to-end case study, inviting Beckman Coulter Life Sciences to share how they leveraged OpenVINO™ optimizations and OpenVINO™ Model Server to unlock AI performance for their CellAI toolbox.
What you’ll learn:
September 7, 8AM-9AM PT
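As background for the deployment side of the talk above, here is a minimal sketch of a client calling OpenVINO™ Model Server over its TensorFlow-Serving-compatible REST API. The server address and the model name "cell_classifier" are illustrative assumptions, not details from the case study.

```python
# Hedged sketch: querying an OpenVINO Model Server over REST.
# Host/port and the model name "cell_classifier" are assumptions.
import json
import urllib.request


def build_predict_request(instances):
    """Serialize input tensors into the OVMS/TF-Serving JSON body."""
    return json.dumps({"instances": instances}).encode("utf-8")


def predict(host, model_name, instances):
    """POST inputs to /v1/models/<name>:predict and return the predictions."""
    req = urllib.request.Request(
        url=f"http://{host}/v1/models/{model_name}:predict",
        data=build_predict_request(instances),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["predictions"]


# Example call (requires a running Model Server instance):
# predict("localhost:8000", "cell_classifier", [[0.1, 0.2, 0.3]])
```

The same server also exposes a gRPC endpoint; REST is shown here because it needs only the standard library.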
Join us for a step-by-step tutorial on how to create an intelligent retail queue management system using the OpenVINO™ toolkit and YOLOv8. We'll walk you through the process of integrating these powerful open-source tools to develop an end-to-end solution that can be deployed in retail checkout environments. Whether you're an experienced developer or new to AI, this session will provide practical tips and best practices for building intelligent systems using OpenVINO. By the end of the presentation, you'll have the knowledge and resources to build your own solution.
What you’ll learn:
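As a taste of the session above, here is a minimal sketch of the counting logic such a queue-management system might use. The detection tuple layout, the ROI coordinates, and the model file name "yolov8n.xml" are illustrative assumptions, not the session's actual code.

```python
# Hedged sketch of queue-counting logic for a retail checkout system.
# The OpenVINO inference step is shown but not executed here.

PERSON_CLASS_ID = 0  # "person" in the COCO label set used by YOLOv8


def box_center(box):
    """Center (x, y) of an (x1, y1, x2, y2) bounding box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)


def count_in_queue(detections, roi, conf_threshold=0.5):
    """Count person detections whose box center falls inside the queue ROI.

    detections: iterable of (box, class_id, confidence) tuples (assumed layout)
    roi: (x1, y1, x2, y2) rectangle marking the checkout queue area
    """
    rx1, ry1, rx2, ry2 = roi
    count = 0
    for box, class_id, conf in detections:
        if class_id != PERSON_CLASS_ID or conf < conf_threshold:
            continue
        cx, cy = box_center(box)
        if rx1 <= cx <= rx2 and ry1 <= cy <= ry2:
            count += 1
    return count


def load_detector(model_path="yolov8n.xml", device="CPU"):
    """Compile a YOLOv8 model exported to OpenVINO IR (requires openvino)."""
    import openvino as ov  # deferred so the helpers above stay dependency-free
    core = ov.Core()
    return core.compile_model(core.read_model(model_path), device)
```

In practice the detections would come from post-processed YOLOv8 outputs; the helper works with any detector that yields boxes, classes, and confidences.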
Intel® Developer Cloud for the Edge is designed to help you prototype, evaluate, and benchmark AI and edge solutions on Intel® hardware with immediate worldwide access. Within the Developer Cloud, the Bare-metal platform enables you to develop or import your computer vision or edge AI application using a JupyterLab environment on a bare-metal infrastructure.
In this session, you will learn:
In this session, developers will learn how to access the Intel® Developer Cloud for the Edge Container Playground through REST APIs from within the Red Hat* OpenShift* Data Science platform (RHODS), and how to benchmark AI applications using the OpenVINO™ toolkit on Intel® CPUs, integrated GPUs, and discrete GPUs.
Profile and optimize your neural network on various Intel® hardware configurations hosted in the cloud, without any hardware setup on your end, and integrate the optimized model in the user-friendly JupyterLab* environment. The session will cover:
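For a sense of what profiling across hardware targets involves, here is a minimal, dependency-free timing helper; the OpenVINO™ toolkit also ships a dedicated `benchmark_app` tool for this. The model path and device string in the comment are illustrative assumptions.

```python
# Hedged sketch: measuring per-inference latency when comparing hardware
# targets. measure_latency works with any callable; the OpenVINO usage is
# only sketched in the comment below and is not executed here.
import time


def measure_latency(fn, n_runs=100, warmup=10):
    """Return (mean_ms, min_ms) over n_runs calls, after warmup calls."""
    for _ in range(warmup):
        fn()
    times = []
    for _ in range(n_runs):
        start = time.perf_counter()
        fn()
        times.append((time.perf_counter() - start) * 1000.0)
    return sum(times) / len(times), min(times)


# With OpenVINO (assumed model path and device, not executed here):
#   import openvino as ov
#   compiled = ov.Core().compile_model("model.xml", "GPU")
#   mean_ms, min_ms = measure_latency(lambda: compiled(example_input))
```

Reporting the minimum alongside the mean helps separate steady-state latency from scheduling noise.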
Delve into the world of transformer models, including Stable Diffusion and GPT, and explore how we've optimized these models to run on Intel's wide variety of hardware. Part of DevCon 2023.
Join Meghana Rao, Intel AI Technology Evangelist, for live workflows and walkthroughs of Intel® Developer Cloud for the Edge's Container Playground.
Save your spot to get an overview of the newest Intel/Accenture kits, including demos and how to get them for free. The new kits include modeling solutions for traffic camera object detection, computational fluid dynamics, structured data generation, structural damage assessment, NLP for semantic search, and sensor data anomaly detection.
In this webinar, Anisha Udayakumar, Intel AI Evangelist, explores how she used AI and OpenVINO™ to help reduce the food waste that happens in the fresh food and produce section at the grocery store – and how you can apply these same techniques to your own projects.
This webinar demonstrates how to use the Deep Learning Workbench to analyze and optimize your model and more.
Explore AI inferencing and how to run different models using the OpenVINO™ toolkit.
Listen in to get the latest announcements spanning new products, developer tools, and technologies, along with Intel's focus on empowering an open ecosystem: ensuring developers can choose the tools and environments they prefer, and building trust and partnership across cloud service providers, open-source communities, startups, and others.
Take the stress out of training models with done-for-you integrations and extensions, ready and optimized for inference with the OpenVINO™ toolkit.
In this webinar, you will learn:
The OpenVINO™ Notebooks repo on GitHub is a collection of ready-to-run Jupyter* notebooks that showcase features, AI models, and use cases of the OpenVINO™ toolkit. One of the examples is a notebook in which you train a model using TensorFlow*, then run it both in native TensorFlow and with the OpenVINO™ toolkit. This tutorial shows how to modify that notebook to train the same model on a different dataset.
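The TensorFlow-to-OpenVINO handoff the notebook describes can be sketched roughly as follows. This is a minimal outline assuming a recent `openvino` package with `convert_model`/`save_model`; the SavedModel directory and IR file names are illustrative assumptions, not paths from the notebook.

```python
# Hedged sketch: convert a trained TensorFlow SavedModel to OpenVINO IR,
# then run the converted model through the OpenVINO runtime. Paths are
# illustrative assumptions; neither function is executed at import time.

def convert_and_save(saved_model_dir, ir_path="model.xml"):
    """Convert a TensorFlow SavedModel to OpenVINO IR (requires openvino)."""
    import openvino as ov  # deferred import: only needed for conversion
    ov_model = ov.convert_model(saved_model_dir)
    ov.save_model(ov_model, ir_path)  # writes model.xml + model.bin
    return ir_path


def run_ir(ir_path, inputs, device="CPU"):
    """Run the converted model through the OpenVINO runtime."""
    import openvino as ov
    compiled = ov.Core().compile_model(ir_path, device)
    return compiled(inputs)
```

Because the IR is framework-independent, swapping in a different training dataset (as the tutorial does) leaves this conversion step unchanged.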
Find out how you can accelerate AI workloads for computer vision, audio, speech, language, and recommendation systems using the OpenVINO™ toolkit. Watch this self-paced video training series to advance your skills in AI and deep learning. The training takes you through the OpenVINO™ toolkit workflow, including support for deploying accelerated deep learning algorithms in your application. Learn about Intel® DevCloud, a cloud-based development sandbox that lets you prototype and experiment with AI inference workloads on the latest Intel® hardware. Discover tools and demos for the different stages of your development journey.
Length: 16 modules, averaging 20m each
In this course, you will learn the basics of the OpenVINO™ Execution Provider for ONNX* Runtime. After finishing this course, you will be able to develop deep learning applications that leverage the OpenVINO Execution Provider for ONNX Runtime. We will introduce you to the environment and various sample applications.
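The core idea of the course above is selecting OpenVINO as the execution provider when building an ONNX Runtime session, which might look like the sketch below. It assumes the `onnxruntime-openvino` package; the model path and `device_type` value are illustrative assumptions.

```python
# Hedged sketch: an ONNX Runtime session backed by the OpenVINO Execution
# Provider. Requires the onnxruntime-openvino package; the model path and
# device_type are assumptions, and nothing here runs at import time.

def make_session(model_path="model.onnx", device_type="CPU_FP32"):
    """Build an InferenceSession that routes ops to the OpenVINO EP."""
    import onnxruntime as ort  # deferred so the sketch imports cleanly
    return ort.InferenceSession(
        model_path,
        providers=["OpenVINOExecutionProvider"],
        provider_options=[{"device_type": device_type}],
    )


# Usage (requires a model file):
#   session = make_session("resnet50.onnx")
#   outputs = session.run(None, {"input": input_tensor})
```

Existing ONNX Runtime code needs only this provider change; the `run` call and the rest of the application stay the same.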
Length: 10 modules, averaging 2m each
Learn more about Intel's reference software stack for video and sensor analytics. Find out how Intel® Edge Insights for Industrial enables you to integrate video and time-series data analytics on edge compute nodes and run concurrent workloads on ready-to-use containerized analytics pipelines. Learn to support acceleration and distribution of video analytics on CPUs, GPUs, and VPUs.
Length: 10 modules, averaging 2m each
Please download and install the OpenVINO™ toolkit before you start the training.
Developing an application using OpenVINO™ toolkit? Tell us about your application to be eligible for co-marketing benefits.