TF model (MO) optimization problem

I am trying to run the Model Optimizer (MO) on my DeepLab/MobileNetV2 model. I am aware of this thread, but the solution there doesn't help me. Intel, please assist ;-)

Model: non-trained, exported with the export script from the official DeepLab repo; the input node is input:0 and the output is segmap:0. Link

python mo.py --input_model /data/1.pb --input_shape "(1,513,513,3)" --log_level=DEBUG --data_type FP32 --output segmap --input input --scale 1 --model_name test --framework tf --output_dir ./

And the error:

[ ERROR ]  Stopped shape/value propagation at "GreaterEqual" node.

tensorflow.python.framework.errors_impl.InvalidArgumentError: Input 0 of node GreaterEqual was passed int64 from add_1_port_0_ie_placeholder:0 incompatible with expected int32.


Thanks a lot! 



Hello Alex,


Please use the command below to generate the IR files:

python mo_tf.py --input_model 1.pb --input 0:MobilenetV2/Conv/Conv2D --output ArgMax --input_shape [1,513,513,3]


BTW, the IR file only contains the main workload of the DeepLab model. If you need to run inference on the whole model, please use TF operations to finish the remaining pre/post-processing; you can refer to my repo on GitHub for DeepLabV3-MobileNetV2:
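For reference, the pre/post-processing cut away from the IR can be reproduced outside the network. Below is a minimal NumPy sketch of those steps, assuming MobileNetV2-style normalization of pixels to [-1, 1] and a per-pixel argmax over class logits for the segmentation map (the exact steps in the official export script may differ, so treat this as an illustration rather than the repo's actual code):

```python
import numpy as np

def preprocess(image):
    """image: (513, 513, 3) uint8 array (resizing/padding omitted here).
    MobileNetV2-style normalization maps [0, 255] to [-1, 1] and adds a batch dim."""
    x = image.astype(np.float32) * (2.0 / 255.0) - 1.0
    return x[np.newaxis, ...]  # -> (1, 513, 513, 3)

def postprocess(logits):
    """logits: (1, H, W, num_classes) raw network output.
    The segmentation map is the per-pixel argmax over the class axis."""
    return np.argmax(logits, axis=-1)[0]  # -> (H, W) class indices

# Toy example: 21 classes as in PASCAL VOC, which DeepLab is commonly trained on.
img = np.zeros((513, 513, 3), dtype=np.uint8)
batch = preprocess(img)
fake_logits = np.random.rand(1, 513, 513, 21).astype(np.float32)
segmap = postprocess(fake_logits)
print(batch.shape, segmap.shape)  # (1, 513, 513, 3) (513, 513)
```

The same two functions can wrap the IR inference call: feed `preprocess(...)` to the network input and run `postprocess(...)` on the raw output tensor.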

Hello Fiona,

Thanks for your help and the GitHub repo - it makes much more sense now.

Can you please also explain one more thing: I'm currently getting a " cannot open shared object file: No such file or directory" error, and there's no such file anywhere under /opt/intel/. Do I have to run the build myself? Is the compiled file not part of the distribution?

Thanks, Alex.

Hello Alex,

In the OpenVINO release, that shared library is built from our inference "extension" sample. You have to build this sample first; the dynamic library will then be available under ${INTEL_CVSDK_DIR}/deployment_tools/inference_engine/samples/intel64/Release/lib/
