- Overview
Learn how to deploy a deep neural network to an edge device, and how to deploy applications to the edge.
Hi. I'm Meghana Rao, and this is the AI from the Data Center to the Edge video series. In this episode, we show you how to deploy a deep neural network to an edge device, be it a CPU based on Intel® architecture, integrated graphics, the Intel® Neural Compute Stick, or an FPGA.
We introduce the Intel® Distribution of OpenVINO™ toolkit and the Python* workflow to deploy applications to the edge. Lastly, we introduce the toolkit's capability for low-precision inference on 2nd generation Intel® Xeon® Scalable processors.
Let's take a closer look at the contents covered in this chapter. The input to the deployment process is the trained model, which is a frozen graph. We begin by introducing the capabilities of the Intel Distribution of OpenVINO toolkit. The two main components that are addressed are the Model Optimizer and inference engine.
The Model Optimizer is used to create hardware-agnostic intermediate representation files. The inference engine deploys these intermediate representation files to the target hardware. The targets are CPUs based on Intel® architecture, integrated graphics, or the Intel Neural Compute Stick, which use the corresponding MKL-DNN, clDNN, or Intel® Movidius™ Myriad plug-in provided for the inference engine.
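In the Python API, this plug-in selection comes down to a single device-name argument when the network is loaded. Here is a minimal sketch, assuming the classic `openvino.inference_engine` API (OpenVINO 2021 and earlier, the API this course era uses); the file paths and input data are placeholders:

```python
# Sketch: load an intermediate representation (IR) pair onto a device
# plug-in and run one inference. Assumes the classic Python API
# (openvino.inference_engine, OpenVINO 2021 and earlier).
try:
    from openvino.inference_engine import IECore
except ImportError:
    IECore = None  # toolkit not installed; this remains a sketch

def run_inference(xml_path, bin_path, input_data, device="CPU"):
    """Load an IR pair on the chosen device and run one inference.

    device can be "CPU" (MKL-DNN plug-in), "GPU" (clDNN plug-in), or
    "MYRIAD" (Neural Compute Stick); the inference engine dispatches
    to the matching plug-in automatically.
    """
    ie = IECore()
    net = ie.read_network(model=xml_path, weights=bin_path)
    exec_net = ie.load_network(network=net, device_name=device)
    input_blob = next(iter(net.input_info))  # first (often only) input
    return exec_net.infer(inputs={input_blob: input_data})
```

Because the IR files are hardware-agnostic, retargeting the same model from a CPU to the Neural Compute Stick is just a change of the `device` string.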
The toolkit supports both C++ and Python. The course shows a basic workflow of a Python application to infer at runtime. Lastly, the course shows how to use the Intel Distribution of OpenVINO toolkit to perform post-training quantization, converting a 32-bit floating-point (FP32) model into INT8 for lower-precision inference, which results in better inference speeds with minimal loss of accuracy.
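The reason INT8 inference loses so little accuracy is that the conversion is a simple linear mapping whose error is bounded by half a quantization step. The toy sketch below illustrates symmetric linear quantization on a handful of hypothetical weight values (in practice, the toolkit's calibration step derives scales from real data statistics):

```python
# Toy sketch of symmetric linear quantization, the idea behind
# converting FP32 weights to INT8 for lower-precision inference.
# Weight values below are hypothetical, for illustration only.

def quantize_int8(values):
    """Map FP32 values to INT8 using a single symmetric scale."""
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate FP32 values from INT8 codes."""
    return [x * scale for x in q]

weights = [0.82, -1.73, 0.05, 1.20, -0.41]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, restored))

# The round-trip error never exceeds half a quantization step.
assert max_err <= scale / 2
```

Because the error per value is capped at `scale / 2`, well-calibrated networks keep nearly all of their accuracy while the INT8 arithmetic runs much faster on hardware with dedicated low-precision instructions.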
Thanks for watching AI from the Data Center to the Edge. Make sure to check out the registration links, and complete the lectures and notebooks listed in the resources for this course. Join me in the next episode to learn more about how you can obtain an optional course completion certificate.