Error when trying to optimize the vgg_16 model



Hi,

 

I downloaded a VGG-16 model as a .ckpt file and used the freeze_graph tool to convert it into a .pb file. I then ran "python mo_tf.py --input_model frozen_model_vgg_16.pb --output_dir \opmodel --mean_values=[103.939,116.779,123.68]" to optimize it and generate the .xml and .bin files. The program stops with this error:

[ ERROR ]  Elementwise operation vgg_16/dropout6/dropout/mul has inputs of different data types: float32 and int32
[ ERROR ]  Elementwise operation vgg_16/dropout7/dropout/mul has inputs of different data types: float32 and int32
[ ERROR ]  List of operations that cannot be converted to Inference Engine IR:
[ ERROR ]      RandomUniform (2)
[ ERROR ]          vgg_16/dropout6/dropout/random_uniform/RandomUniform
[ ERROR ]          vgg_16/dropout7/dropout/random_uniform/RandomUniform
[ ERROR ]      Floor (2)
[ ERROR ]          vgg_16/dropout6/dropout/Floor
[ ERROR ]          vgg_16/dropout7/dropout/Floor
[ ERROR ]  Part of the nodes was not converted to IR. Stopped.

 

I don't know why this happens. Are my parameters wrong, or did I make a mistake when freezing the model? I would appreciate it if someone could provide assistance.
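
For reference, my freeze step looked roughly like this (the file names and the output node name are approximate, from memory):

python3 freeze_graph.py \
    --input_graph vgg_16_graph.pb \
    --input_binary=true \
    --input_checkpoint vgg_16.ckpt \
    --output_node_names vgg_16/fc8/squeezed \
    --output_graph frozen_model_vgg_16.pb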


Hello Q, Y.

Please note that only the non-frozen version of the VGG-16 model is officially supported by the OpenVINO toolkit. Your frozen graph was built in training mode, so it still contains dropout subgraphs, and their RandomUniform and Floor nodes (the ones listed in the error output) cannot be converted to Inference Engine IR.

I've tested this on my end, and it works fine. Here are the steps to follow:

1) Download the non-frozen VGG-16 model from here: https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_conver...

2) Convert it to the .pb format per this example: https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_conver...

The export_inference_graph.py command should be changed slightly, as follows (a sketch of what this script does internally is shown after the steps):

python3 tf_models/research/slim/export_inference_graph.py --labels_offset 1 \
    --model_name vgg_16 \
    --output_file vgg_16_inference_graph.pb

3) Follow the remaining steps from the guide above, and once you reach the mo_tf.py command, change it as follows:

<MODEL_OPTIMIZER_INSTALL_DIR>/mo_tf.py --input_model ./vgg_16_inference_graph.pb --input_checkpoint ./vgg_16.ckpt -b 1 --mean_values [103.94,116.78,123.68] --scale 1

As you can see, you need to provide the additional arguments -b (the batch size, which the exported graph leaves undefined) and --scale, as well as --input_checkpoint pointing at the original .ckpt file.
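
For reference, here is a minimal sketch of what export_inference_graph.py effectively does for VGG-16. It assumes the slim "nets" package (from tensorflow/models/research/slim) is on PYTHONPATH, and the output file name is just an example:

import tensorflow as tf
from nets import vgg  # from tensorflow/models/research/slim

with tf.Graph().as_default() as graph:
    # Batch dimension left undefined, which is why mo_tf.py needs -b 1.
    images = tf.placeholder(tf.float32, [None, 224, 224, 3], name='input')
    # 1001 ImageNet classes minus the labels_offset of 1; is_training=False
    # builds the graph without dropout, so no RandomUniform/Floor nodes
    # end up in the exported GraphDef.
    vgg.vgg_16(images, num_classes=1000, is_training=False)
    with tf.gfile.GFile('vgg_16_inference_graph.pb', 'wb') as f:
        f.write(graph.as_graph_def().SerializeToString())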

If you face a missing pywrap_tensorflow error while running the mo_tf.py command, try downgrading TensorFlow to version 1.5 with:

sudo pip3 install tensorflow==1.5
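
Once conversion succeeds, you can sanity-check the generated IR with the Inference Engine Python API. A minimal sketch, assuming the default output file names and the 2020.1 Python API (older releases use a slightly different API):

from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model='vgg_16_inference_graph.xml',
                      weights='vgg_16_inference_graph.bin')
exec_net = ie.load_network(network=net, device_name='CPU')
print('Inputs:', list(net.inputs.keys()))
print('Outputs:', list(net.outputs.keys()))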

Hope this helps.
