BiasAdd operation has unsupported `data_format`=NCHW

Hi,

I'm trying to convert a TensorFlow 1.14 frozen graph.

The network has a couple of convolutions, each followed by batch norm.

The TensorFlow format is channels-first, i.e. (N, C, H, W).
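For context, BiasAdd broadcasts the per-channel bias over a different axis depending on `data_format`. A rough NumPy sketch (illustrative only, not TensorFlow or Model Optimizer code) of the two layouts:

```python
import numpy as np

# Rough sketch of BiasAdd semantics (illustrative, not TF code):
# NHWC broadcasts the bias over the last axis, NCHW over axis 1.
def bias_add(x, bias, data_format="NHWC"):
    if data_format == "NHWC":
        return x + bias                       # broadcast over last axis
    if data_format == "NCHW":
        return x + bias.reshape(1, -1, 1, 1)  # broadcast over channel axis
    raise ValueError(data_format)

x = np.zeros((1, 2, 4, 4), dtype=np.float32)  # toy NCHW tensor
b = np.array([1.0, 2.0], dtype=np.float32)    # one bias per channel
y = bias_add(x, b, data_format="NCHW")
```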

The error message I get:

[ ERROR ]  List of operations that cannot be converted to Inference Engine IR:
[ ERROR ]      BiasAdd (6)
[ ERROR ]          Conv2D/Conv2D/BiasAdd
[ ERROR ]          Conv2D_1/Conv2D/BiasAdd
[ ERROR ]          Conv2D_2/Conv2D/BiasAdd
[ ERROR ]          Conv2D_3/Conv2D/BiasAdd
[ ERROR ]          Conv2D_4/Conv2D/BiasAdd
[ ERROR ]          Conv2D_5/Conv2D/BiasAdd

The converter is run with:

python mo_tf.py --input_model frozen_model.pb --input FacetDots 
--input_shape "[1, 2, 400, 400]" --disable_nhwc_to_nchw --data_type FP32 --log_level DEBUG

Earlier I also got the following error:
openvino BiasAdd operation has unsupported `data_format`=NCHW

on openvino 2019.2

 


Running without --log_level DEBUG, the error changes to:
[ ERROR ]  BiasAdd operation has unsupported `data_format`=NCHW
[ ERROR ]  BiasAdd operation has unsupported `data_format`=NCHW
[ ERROR ]  BiasAdd operation has unsupported `data_format`=NCHW
[ ERROR ]  BiasAdd operation has unsupported `data_format`=NCHW
[ ERROR ]  BiasAdd operation has unsupported `data_format`=NCHW
[ ERROR ]  BiasAdd operation has unsupported `data_format`=NCHW
[ ERROR ]  List of operations that cannot be converted to Inference Engine IR:
[ ERROR ]      BiasAdd (6)
[ ERROR ]          Conv2D/Conv2D/BiasAdd
[ ERROR ]          Conv2D_1/Conv2D/BiasAdd
[ ERROR ]          Conv2D_2/Conv2D/BiasAdd
[ ERROR ]          Conv2D_3/Conv2D/BiasAdd
[ ERROR ]          Conv2D_4/Conv2D/BiasAdd
[ ERROR ]          Conv2D_5/Conv2D/BiasAdd
[ ERROR ]  Part of the nodes was not converted to IR. Stopped.

Dear Becktor, Jonathan,

Quoting your command and the earlier error:

python mo_tf.py --input_model frozen_model.pb --input FacetDots --input_shape "[1, 2, 400, 400]" --disable_nhwc_to_nchw --data_type FP32 --log_level DEBUG

openvino BiasAdd operation has unsupported `data_format`=NCHW

 

Why did you use --disable_nhwc_to_nchw? TensorFlow is normally NHWC, though the Inference Engine converts everything to NCHW. If you are passing a frozen TensorFlow pb into the Model Optimizer, you should leave --input_shape as [N, H, W, C]; --input_shape "[1, 2, 400, 400]" is incorrect.

Please post your thoughts here. Glad to help!

Shubha

Hey, thanks for the reply.

I don't think I made it clear.

We build our TensorFlow layers with the channels-first flag, so our model is channels-first (N, C, H, W), which is supported on GPU and with a TensorFlow built against MKL. That in turn is why I pass the --disable_nhwc_to_nchw flag to the Model Optimizer.
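To illustrate the flag's purpose (my own sketch, not Model Optimizer internals): by default the Model Optimizer assumes NHWC TensorFlow inputs and permutes them to NCHW; applied to a model that is already NCHW, that permutation would scramble the layout, which is what --disable_nhwc_to_nchw avoids.

```python
import numpy as np

# The layout permutation applied by default to TensorFlow inputs
# (sketch only): NHWC -> NCHW.
nhwc = np.zeros((1, 400, 400, 2), dtype=np.float32)
nchw = np.transpose(nhwc, (0, 3, 1, 2))
assert nchw.shape == (1, 2, 400, 400)

# Applying the same permutation to a tensor that is already NCHW
# yields the wrong shape, hence the --disable_nhwc_to_nchw flag.
already_nchw = np.zeros((1, 2, 400, 400), dtype=np.float32)
wrong = np.transpose(already_nchw, (0, 3, 1, 2))
```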

It seems to work for most layers except the BiasAdd operation.

Also, what is the timeline for conversion of the new TensorFlow 2.0 SavedModel format?

 

Jonathan

Dear Becktor, Jonathan,

OK, now I understand the issue. Thanks for explaining. Well, R3 was just released today. Can you try again with 2019 R3? If the problem persists, let me know and I will file a bug; please attach your model as a *.zip to this ticket and give me your full Model Optimizer command. As for TensorFlow 2.0, we hope that full support will land in R4, scheduled for release toward the end of the year or early next year.

Hope it helps,

Thanks,

Shubha

 

Dear Shubha,

2019 R3 seems to have fixed it!

Thanks!

Jonathan

Dearest Becktor, Jonathan,

Bravo! Thanks for sharing your great news with the OpenVINO community.

Thanks,

Shubha

 

I had the same issue on Linux, and updating to the 2019.3 release fixed it for me. Thanks to Shubha for the tip.
