Error loading model into plugin

Hi

I have converted my models to FP32 IR successfully and was able to do inference without any issues. I also converted the same models to FP16 IR but when I try to load these FP16 models I get this error with CPU option:

Error loading model into plugin: The plugin does not support models of FP16 data type.

When I selected the GPU option, I got this message:

Error loading model into plugin: [NOT_IMPLEMENTED] Input image format FP16 is not supported yet...

I'm running it on Windows 10. Do I need to change anything else to use FP16 IR models?

Thanks

Hi, the CPU plugin currently only supports FP32. If you would like to use an FP16 model, please specify another device: Intel GPU, Movidius, and FPGA support FP16. Which version of CVSDK are you using? For very old versions, I am afraid the implementation might not be finished. Please try the latest version, and send a private message if you have further requests.
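For reference, here is a minimal sketch of selecting the GPU plugin and loading an FP16 IR, based on the Inference Engine 1.0 C++ API used in that era's samples; the plugin search path and model file names are placeholders, and exact signatures may differ between releases:

    #include <inference_engine.hpp>
    #include <iostream>

    using namespace InferenceEngine;

    int main() {
        // Ask the dispatcher for the GPU (clDNN) plugin instead of the
        // CPU (MKL-DNN) one; the search path here is a placeholder.
        InferenceEnginePluginPtr enginePtr =
            PluginDispatcher({"", "/opt/intel/computer_vision_sdk/inference_engine/lib/ubuntu_16.04/intel64"})
                .getSuitablePlugin(TargetDevice::eGPU);

        // Read the FP16 IR produced by the Model Optimizer.
        CNNNetReader reader;
        reader.ReadNetwork("model_fp16.xml");   // placeholder file names
        reader.ReadWeights("model_fp16.bin");
        CNNNetwork network = reader.getNetwork();

        // Loading the network into the plugin is the step that fails when
        // the chosen device does not support the model's precision.
        ResponseDesc resp;
        StatusCode status = enginePtr->LoadNetwork(network, &resp);
        if (status != OK) {
            std::cerr << "Error loading model into plugin: " << resp.msg << std::endl;
            return 1;
        }
        return 0;
    }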

I have CVSDK 1.0 Beta 2017R3 (version: 1.0.5852). I think that was the latest version available for me to download.

Also, could you please let me know how to send a private message?

Thanks

Quote:

Murugappan, Indrajit wrote:

I have CVSDK 1.0 Beta 2017R3 (version: 1.0.5852). I think that was the latest version available for me to download.

Also, could you please let me know how to send a private message?

Thanks

I met the same problem. I converted a Caffe model to an FP16 IR and selected the GPU option.

InferenceEngine: 
	API version ............ 1.0
	Build .................. 5852
[ INFO ] Parsing input parameters
[ INFO ] No extensions provided
[ INFO ] Loading plugin
/opt/intel/computer_vision_sdk_2017.1.163/inference_engine/lib/ubuntu_16.04/intel64
GPU

	API version ............ 0.1
	Build .................. prod-02709
	Description ....... clDNNPlugin

[ INFO ] Loading network files
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ ERROR ] [NOT_IMPLEMENTED] Input image format FP16 is not supported yet...

I altered the test code:

            /** Set the precision of input data provided by the user, should be called before load of the network to the plugin **/
            item.second->setInputPrecision(Precision::U8);

but then I got this error:

InferenceEngine: 
	API version ............ 1.0
	Build .................. 5852
[ INFO ] Parsing input parameters
[ INFO ] No extensions provided
[ INFO ] Loading plugin
/opt/intel/computer_vision_sdk_2017.1.163/inference_engine/lib/ubuntu_16.04/intel64
GPU

	API version ............ 0.1
	Build .................. prod-02709
	Description ....... clDNNPlugin

[ INFO ] Loading network files
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ INFO ] Start inference (1 iterations)
[ ERROR ] [PARAMETER_MISMATCH] Failed to set Blob with precision not corresponding user input precision
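That PARAMETER_MISMATCH usually means the blob handed to the plugin does not carry the precision that was set on the input: after calling setInputPrecision(Precision::U8), the input blob itself must hold U8 data. A sketch, assuming the same 1.0-era API (the make_shared_blob signature and getDims() accessor are taken from that generation of samples and may vary between releases):

    #include <inference_engine.hpp>

    using namespace InferenceEngine;

    // 'info' is the InputInfo::Ptr obtained from getInputsInfo()
    // ('item.second' in the sample above). The precision must be set
    // before the network is loaded into the plugin.
    Blob::Ptr makeU8InputBlob(const InputInfo::Ptr &info) {
        info->setInputPrecision(Precision::U8);
        SizeVector dims = info->getDims();
        TBlob<uint8_t>::Ptr blob =
            make_shared_blob<uint8_t, const SizeVector>(Precision::U8, dims);
        blob->allocate();
        // ... copy the raw 8-bit image bytes into blob->data() here ...
        return blob;
    }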
