Error in inference

Hello everyone,
I used the Model Optimizer to generate the .xml file, but I got an error at inference.

>python 6d_Inference.py -m E:\yolo6d\openvino2\yolo6d_graph.xml -i E:\yolo6d\000000.jpg  -d MYRIAD
[ INFO ] Initializing plugin for MYRIAD device...
[ INFO ] Reading IR...
[ INFO ] Loading IR to the plugin...
Traceback (most recent call last):
  File "6d_Inference.py", line 223, in <module>
    sys.exit(main() or 0)
  File "6d_Inference.py", line 119, in main
    exec_net = plugin.load(network=net, num_requests=2)
  File "ie_api.pyx", line 551, in openvino.inference_engine.ie_api.IEPlugin.load
  File "ie_api.pyx", line 561, in openvino.inference_engine.ie_api.IEPlugin.load
RuntimeError: [VPU] Unsupported precision I32for data strided_slice_1/stack_1/Output_0/Data__const
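
For reference, the failure happens in the standard IR-loading sequence. A minimal sketch of what the script does around the failing call (assuming the usual IEPlugin flow shown in the traceback; paths are illustrative):

import os
from openvino.inference_engine import IENetwork, IEPlugin

model_xml = r"E:\yolo6d\openvino2\yolo6d_graph.xml"
model_bin = os.path.splitext(model_xml)[0] + ".bin"

plugin = IEPlugin(device="MYRIAD")                    # Initializing plugin for MYRIAD
net = IENetwork(model=model_xml, weights=model_bin)   # Reading IR
exec_net = plugin.load(network=net, num_requests=2)   # fails here with the I32 error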

What should I do?

Thank you

Dear yu, jia,

You probably forgot to pass --data_type FP16 when you generated your IR with Model Optimizer. FP32 is not supported on VPU (as your error indicates). Per-device precision support is documented in Supported Devices.
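
For example, appending the flag to your existing Model Optimizer invocation (an illustrative sketch, not your exact paths):

>python mo_tf.py --input_model yolo6d_graph.pb --data_type FP16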

Thanks,

Shubha

Hi, Shubha R,

I used the command below to generate the IR.

python mo_tf.py --input_model E:\yolo6d\yolo6d_graph.pb --data_type FP16

But I still get the following error: RuntimeError: [VPU] Unsupported precision I32for data strided_slice/stack/Output_0/Data__const

I don't know what caused the error. What should I do?

Thanks,

yujia

Dear yu, jia,

For YOLO (v3, I presume? v2 is also OK), what you have above is not the right command. Please step through our detailed documentation on Model Optimizer TensorFlow YOLO. If you follow those steps to the letter, you will get it working.

And yes, if you wish to run on NCS2 you must add --data_type FP16.
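
For a YOLOv2-style model the documented invocation looks roughly like this (a sketch; the custom-operations .json name and path come from the documentation's YOLOv2 example and may differ for your setup):

>python mo_tf.py --input_model yolo6d_graph.pb --tensorflow_use_custom_operations_config <MO_DIR>\extensions\front\tf\yolo_v1_v2.json --batch 1 --data_type FP16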

Thanks,

Shubha

Dear Shubha,

I read the YOLO conversion documentation in detail, and now I use the following command to generate the IR files.

>python mo_tf.py --input_model E:\yolo6d\yolo6d_graph.pb --tensorflow_use_custom_operations_config E:\yolo6d\yolo_6.json --data_type FP16

It successfully generates the IR files, but I still get the previous error at inference. My inference command is as follows.

>python 6d_Inference.py -m E:\yolo6d\openvino2\yolo6d_graph.xml -i E:\yolo6d\000000.jpg  -d MYRIAD

RuntimeError: [VPU] Unsupported precision I32for data strided_slice/stack/Output_0/Data__const

My model is a modified YOLOv2 network trained with TensorFlow. I used TensorFlow's freeze-graph code to generate the yolo6d_graph.pb file, selecting the network's output node as the freeze node. The IR files are generated successfully, but I get the above error at inference.
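
The freeze step follows the usual TensorFlow 1.x pattern, roughly like this (a sketch; the checkpoint and output node names below are placeholders, not the actual ones):

import tensorflow as tf
from tensorflow.python.framework import graph_util

with tf.Session() as sess:
    # Restore the trained weights (placeholder checkpoint name).
    saver = tf.train.import_meta_graph("yolo6d.ckpt.meta")
    saver.restore(sess, "yolo6d.ckpt")
    # Fold variables into constants, keeping only the network output node.
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["output_node_name"])
    with tf.gfile.GFile("yolo6d_graph.pb", "wb") as f:
        f.write(frozen.SerializeToString())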

Thanks, 

yujia

Dear yu, jia,

Can you attach your model as a *.zip which also contains your short inference script? Also, I hope you are using the latest and greatest OpenVINO, which is 2019 R2.

Thanks,

Shubha

Dear Shubha,

I have uploaded my model. My OpenVINO version is 2019 R2.

Thanks,

Yujia

Attachment: yolov2_tf.zip (application/zip, 179.44 MB)

Dear yu, jia,

Thank you! I promise to take a look.

Shubha
