Image matting is a computer vision problem that becomes more difficult when foreground and background colors are similar. Researchers from the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign and engineers from Adobe* Research introduced a deep image matting algorithm at CVPR'17.
The algorithm (Figure 1) uses deep learning to separate the foreground from the background intelligently, but inference is slow on CPUs, which makes it impractical for large-scale deployment running locally on client machines.
Figure 1. Inference per tile algorithm
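The per-tile inference shown in Figure 1 can be sketched as follows. This is a hedged illustration only, assuming a fixed 320x320 tile size matching the network input; `infer_tile` is a hypothetical placeholder standing in for the actual matting network.

```python
import numpy as np

TILE = 320  # tile edge length, matching the 320x320 network input

def infer_tile(tile):
    # Hypothetical stand-in for one forward pass of the matting network.
    # A real implementation would run the CNN here; we return a dummy
    # single-channel alpha matte of the same spatial size.
    return np.zeros(tile.shape[:2], dtype=np.float32)

def matte_per_tile(image):
    """Run inference tile by tile and stitch the alpha mattes together."""
    h, w = image.shape[:2]
    alpha = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h, TILE):
        for x in range(0, w, TILE):
            tile = image[y:y + TILE, x:x + TILE]
            alpha[y:y + TILE, x:x + TILE] = infer_tile(tile)
    return alpha

# Example: a 640x960 RGB image is processed as 2 x 3 = 6 tiles.
image = np.zeros((640, 960, 3), dtype=np.uint8)
alpha = matte_per_tile(image)
print(alpha.shape)  # (640, 960)
```

Edge tiles smaller than 320x320 are handled here by NumPy's slicing, which simply yields a smaller tile at the image border.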
Bringing deep learning inference to clients is a trend for future use cases: it helps meet latency requirements, reduces cost, eliminates dependence on network bandwidth (the application can work offline), and gives consumers trust and privacy.
We ran deep image matting on a 320x320 image and found that one inference takes approximately 2.35 seconds, which is not acceptable for client-side inference. Figure 2 shows the total time taken by the deep image matting algorithm, measured with the Intel® VTune™ Amplifier on a 7th generation Intel® Core™ i7 processor with integrated graphics, while Figure 3 shows the algorithm's memory utilization, captured with Windows* Performance Analyzer.
Figure 2. Deep image matting CPU using Intel® Math Kernel Library (Intel® MKL) BLAS - 2.35 seconds
Figure 3. Deep image matting memory utilization
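A single-inference latency figure like the 2.35 seconds above can be measured for any model with a simple wall-clock timer. A minimal sketch using Python's standard library; `run_inference` is a hypothetical stand-in for the matting network's forward pass.

```python
import time

def run_inference(image):
    # Hypothetical stand-in for one forward pass on a 320x320 image.
    time.sleep(0.01)  # simulate compute work
    return image

def time_single_inference(image, warmup=1):
    # Warm-up runs exclude one-time costs such as lazy initialization
    # and cold caches, so the measured run reflects steady-state latency.
    for _ in range(warmup):
        run_inference(image)
    start = time.perf_counter()
    run_inference(image)
    return time.perf_counter() - start

elapsed = time_single_inference(object())
print(f"one inference: {elapsed:.3f} s")
```

`time.perf_counter()` is used rather than `time.time()` because it is a monotonic, high-resolution clock intended for interval measurement.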
Using the Intel® Distribution of OpenVINO™ toolkit and the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN), Intel sped up the deep image matting algorithm by 5.3 times and reduced its memory consumption by 3 times. Figure 4 shows the total time to complete one iteration of the deep image matting algorithm, measured with Intel VTune Amplifier.
Figure 4. Deep image matting using Intel® Distribution of OpenVINO™ toolkit
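As a quick back-of-envelope check, applying the reported 5.3x speedup to the 2.35-second baseline gives the expected optimized latency:

```python
baseline_s = 2.35   # one CPU inference with Intel MKL BLAS (Figure 2)
speedup = 5.3       # reported speedup with the OpenVINO toolkit
optimized_s = baseline_s / speedup
print(f"{optimized_s:.2f} s per inference")  # ~0.44 s
```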
Figure 5 shows the memory utilization of the deep image matting algorithm in Windows Performance Analyzer.
Figure 5. Deep image matting memory utilization
The Intel Distribution of OpenVINO toolkit lets algorithms use an optimized pipeline on Intel hardware for the best performance.
Bringing intelligence to clients helps developers reduce cost as well as meet latency and security requirements. The Intel Distribution of OpenVINO toolkit, used with the Intel MKL-DNN library, reduces the deep image matting algorithm's compute time by 5.3 times through optimized deep learning libraries, and reduces its memory utilization for deep learning inference by 3 times.
Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.
Notice revision #20110804