The Deep Learning Workbench (DL Workbench) included in the Intel® Distribution of OpenVINO™ toolkit allows developers to easily profile deep learning models, experiment with optimizations such as INT8 calibration, and compare performance across configurations.
For more introductory information about DL Workbench, refer to the previous blog post, New Deep Learning Workbench Profiler Provides Performance Insights.
In the latest release (R3.1) of the Intel® Distribution of OpenVINO™ toolkit, DL Workbench offers Model Optimizer integration via the Import Original Model feature, allowing you to import popular DL models from the Internet, or your own custom-built models, in their original framework format. As a result, it is no longer necessary to point DL Workbench to IR files generated by Model Optimizer offline, although that option still exists if needed. Additional highlights of this release are described below.
This release supports many more models, in both FP32 and FP16 data types, across several industry-recognized frameworks, including TensorFlow*, Caffe*, Apache MXNet*, PyTorch*, and PaddlePaddle* (via ONNX*), as well as optimized INT8 versions of models. Currently, the Linux* version of DL Workbench runs with support for CPUs, integrated GPUs, and Intel® Movidius™ VPUs, but only CPUs can benefit from INT8 or Winograd optimizations.
The DL Workbench backend is a Docker* container, so developers can use the pre-built image on Docker* Hub or build their own. 1) The easiest way to get started with the current version of DL Workbench is to run this command (Linux example):
sudo docker run -p 127.0.0.1:5665:5665 --name workbench --privileged -v /dev/bus/usb:/dev/bus/usb -v /dev/dri:/dev/dri -e PORT=5665 -e PROXY_HOST_ADDRESS=0.0.0.0 -it openvino/workbench:latest
2) Point a web browser running on the Linux system to http://127.0.0.1:5665 and voilà! You have your DL Workbench profiler.
Step-by-step installation and run instructions are provided here.
3) Next, import a compatible dataset in the correct format. Currently, ImageNet* (for classification) and Pascal VOC* (for object detection) are supported. You can also import your own dataset, as long as it is converted to the correct file-structure and annotation-file format (ImageNet* or VOC*).
If you don't have a dataset handy, DL Workbench can auto-generate one for you in the ImageNet* (file-structure and annotation file) format. Keep in mind that, if this option is used, the images will consist of random noise, which is sufficient for performance measurements but inadequate for accuracy measurements.
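To make the auto-generated layout concrete, here is a minimal sketch of producing such a random-noise dataset yourself. It assumes an ImageNet-style convention of a folder of images plus a plain-text annotation file with one `<filename> <class_id>` line per image; the PPM format is used only because it can be written with the Python standard library, whereas a real dataset would typically use JPEG images.

```python
import os
import random

def generate_noise_dataset(root, num_images=4, num_classes=2, size=32):
    """Write random-noise PPM images plus an ImageNet-style annotation file."""
    os.makedirs(root, exist_ok=True)
    annotation_lines = []
    for i in range(num_images):
        name = f"noise_{i:04d}.ppm"
        # Binary PPM (P6): text header, then raw RGB bytes of random noise.
        pixels = bytes(random.randrange(256) for _ in range(size * size * 3))
        with open(os.path.join(root, name), "wb") as f:
            f.write(f"P6\n{size} {size}\n255\n".encode())
            f.write(pixels)
        # Annotation line: "<filename> <class_id>".
        annotation_lines.append(f"{name} {i % num_classes}")
    with open(os.path.join(root, "annotation.txt"), "w") as f:
        f.write("\n".join(annotation_lines) + "\n")
    return annotation_lines

lines = generate_noise_dataset("noise_dataset")
print(len(lines))
```

As the blog notes, images like these are fine for throughput and latency measurements but meaningless for accuracy, since the labels do not correspond to any real content.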
Importing a model from the more than 40 choices available within the Model Zoo is highly recommended, since these include both models developed by Intel and models that have been vetted, tested, and, in some cases, optimized (INT8) and popularized via the Internet. Note, however, that you can also use the Import Original Model button. In this case, DL Workbench now automatically runs the Model Optimizer command under the hood, prompting you to import the relevant model files. Following is an example for the Caffe* ssd-mobilenet model:
As you step through the DL Workbench screens, you will also be prompted to enter the input image size as well as the --mean_values, --scale_values, and --reverse_input_channels pre-processing switches that are ordinarily passed to the Model Optimizer command-line tool. As discussed in the previous blog post, such pre-processing switches (mean values, scale values, input channel order) must be explicitly defined and passed to Model Optimizer (and now DL Workbench) in order to generate a correct IR that actually reflects how the model was trained.
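For reference, this is roughly the Model Optimizer command line that the DL Workbench prompts map onto for a Caffe ssd-mobilenet model. The file names and the 127.5 mean/scale values are illustrative assumptions (verify them against how your model was actually trained); the block prints the command rather than executing it, since running it requires an OpenVINO installation with Model Optimizer on the path.

```shell
# Hypothetical sketch of the Model Optimizer invocation DL Workbench
# assembles under the hood; mean/scale values here are assumptions.
MO_CMD='python3 mo.py \
  --input_model mobilenet-ssd.caffemodel \
  --input_proto deploy.prototxt \
  --input_shape [1,3,300,300] \
  --mean_values [127.5,127.5,127.5] \
  --scale_values [127.5,127.5,127.5] \
  --reverse_input_channels \
  --data_type FP16'
# Print the command instead of running it, so the sketch works
# without an OpenVINO installation.
echo "$MO_CMD"
```

The --reverse_input_channels switch is needed because Caffe models are typically trained on BGR images, while most image readers supply RGB.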
In both the Model Zoo and Original Model scenarios, DL Workbench invokes Model Optimizer under the hood, and in the latter case, if the model is supported by DL Workbench, entering the pre-processing switches is seamless. For the Original Model, you have the option to import the framework types OpenVINO (IR), Caffe*, Apache MXNet*, ONNX*, and TensorFlow*. OpenVINO (IR) handles the case where the original model is not recognized by DL Workbench, or where you wish to have full control over the Model Optimizer process; in that case, you run Model Optimizer on the model manually and then import the generated *.xml and *.bin files. In addition, when you choose Import from Model Zoo, DL Workbench automatically determines whether the model is of the Classification or Object Detection type.
This version of DL Workbench provides several helpful tips throughout, an example being the Optimize Tips shown below.
Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.
Notice revision #20110804