I'm trying to perform inference with a very simple model defined as a TensorFlow frozen graph (attached) and converted to IR format with the Model Optimizer (IR files also attached). Unfortunately, when executing my code I get the following error:
Exception: [GNAPlugin] in function void GNAPluginNS::GNAPlugin::LoadNetwork(InferenceEngine::ICNNNetwork &): The plugin does not support layer: matrix_multiplication_explicit/MatMul:Gemm
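For reference, my inference script is essentially the following minimal sketch (the file names and the IECore-based API usage are assumptions reflecting a typical OpenVINO Python workflow; my actual code is equivalent):

from openvino.inference_engine import IECore
import numpy as np

ie = IECore()
# Read the IR produced by the Model Optimizer
net = ie.read_network(model="matrix_mul_explicit.xml",
                      weights="matrix_mul_explicit.bin")

# Loading the network onto the GNA device is where the exception is raised
exec_net = ie.load_network(network=net, device_name="GNA")

inputs = {
    "input_x_float": np.random.rand(1, 8, 8).astype(np.float32),
    "input_y_float": np.random.rand(1, 8, 1).astype(np.float32),
}
result = exec_net.infer(inputs=inputs)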
AFAIU this means that my model contains a GEMM layer that is not supported by the GNA plugin. Is there any way to force the Model Optimizer NOT to emit a GEMM layer in the IR and use an operation that the GNA plugin supports instead?
BTW, I'm currently using the following command to convert the .pb file to IR:
python mo_tf.py --input_model matrix_mul_explicit.pb --input "input_x_float,input_y_float" --input_shape "(1,8,8),(1,8,1)"
Thanks in advance for any help.