[security_barrier_camera_demo] Applied a new LPR Model but does not indicate the same result

Dear Sir or Madam,

 

I am trying to change the LPR model from the default model to one I have trained in PyTorch.

First I exported the model to ONNX, and that step seems to have worked well.

But after converting the ONNX model to IR format (xml/bin) and changing the source code in (net_wrappers.hpp), I got different values compared with the original PyTorch results.

What should I do, or how can I check this with OpenVINO?
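For reference, the ONNX-to-IR conversion step was along these lines (the install path and model file name below are placeholders for my setup, and the shape matches the input dims quoted further down):

```shell
:: Sketch of the ONNX -> IR conversion with the 2019 R2 Model Optimizer
:: (adjust the install path and model file name to your environment).
python "C:\Program Files (x86)\IntelSWTools\openvino_2019.2.275\deployment_tools\model_optimizer\mo.py" ^
    --input_model lpr_model.onnx ^
    --input_shape [1,3,60,126] ^
    --data_type FP32
```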

In the new model, the input is [layer id="0" name="x"],

and the outputs are [layer id="35" name="142"] and [layer id="38" name="145"]:

 

input [layer id="0" name="x" ]

                    <dim>1</dim>
                    <dim>3</dim>
                    <dim>60</dim>
                    <dim>126</dim>

 

output 1 [layer id="35" name="142"]

                    <dim>1</dim>
                    <dim>182</dim>
                    <dim>15</dim>

 

output 2 [layer id="38" name="145"]

                    <dim>1</dim>
                    <dim>53</dim>
                    <dim>15</dim>

 

Sincerely,

JS 


As you have probably noticed, the Open Model Zoo demo was developed for a model with two inputs and a single output. As far as I can see from your comments, your model has a single input but two outputs. You may need to check the output blobs from your model inference to see what results you actually get. How do you see the difference in results? Which version of OpenVINO did you use to convert your model to IR?

Dear vladimir-dudnik (Intel),

Thank you for your comment.
Firstly, I installed OpenVINO with [w_openvino_toolkit_p_2019.2.242.exe],
which created the openvino_2019.2.275 folder. The Inference Engine reports:

------------------------------------------
[ INFO ] InferenceEngine:
        API version ............ 2.0
        Build .................. 27579
        Description ....... API
------------------------------------------

The version of the Model Optimizer is:
------------------------------------------
Version of Model Optimizer is: 2019.2.0-436-gf5827d4
------------------------------------------

As you said in your comment,
I had noticed that the original model has two inputs and one output.
So I tried to ignore the second input in the Lpr function in (net_wrappers.hpp)
and changed it like this:

        LprInputName = "x";
        LprInputSeqName = "x";
        LprOutputName = "142";
        LprOutputName2 = "145";

Can anybody tell me what else I should check?

Sincerely,
JS
 

Dear vladimir-dudnik (Intel),

I have one more question related to this issue.

I want to check whether the input and outputs are correct,
so I want to check each of their values.

As the input size is [1,3,60,126] and the output sizes are [1,182,15] and [1,53,15],
I might check the tensors of the value, but I could have not gotten any clue to check the values.

Can you help me with how to check these?

Sincerely,
JS
 

Hi JS,

I might check the tensors of the value, but I could have not gotten any clue to check the values.

Sorry, I did not get this phrase.

You should be able to dump every single value in any output blob. I do not know the specifics of your model, but in many cases a model's output blob contains a set of floating-point numbers. Could you print to the console the contents of your output blobs, both for the model in IR format inferred with the Inference Engine and for the model inferred in the source framework (PyTorch, as you said before)?

If there is a significant difference in the output blob values, then you may want to dump the values of all model layers, starting from the input blob. Ensure that you provide the same input to the IR model as you do to the PyTorch model (taking into account possible differences in data format, such as RGB vs BGR, 8-bit vs 32-bit float, planar vs interleaved, NCHW vs NHWC, and so on; you should know the format of your model's inputs and outputs). If the inputs are the same but the outputs still differ, go through every intermediate layer and dump its data to see where the difference comes from.

Regards,
  Vladimir

Dear vladimir-dudnik (Intel),

Thank you for your comment.

I might check the tensors of the value, but I could have not gotten any clue to check the values.
=> I was trying to say that I want to check the output values, but I do not know how to do it.

So can you give me advice on how to dump all the model layers and check their values?

Sincerely,
JS

P.S. I tried to check the values with the code below, but the values were significantly different.

        /* jslee

                    <dim>1</dim>
                    <dim>88</dim>
                    <dim>1</dim>
                    <dim>1</dim>

        std::cout << std::max_element(Values_u.begin(), Values_u.at(182*1)) - Values_u.begin() << std::endl;

        for (int i = 0; i < maxSequenceSizePerPlate; i++) {
            if (data[i] == -1) {
                break;
            }
            result += items[static_cast<std::vector<std::string>::size_type>(data[i])];
        }
        return result;
        */

        const auto Values_u = inferRequest.GetBlob(LprOutputName)->buffer().as<float*>();
        const auto Values_b = inferRequest.GetBlob(LprOutputName2)->buffer().as<float*>();

        float temp_max;
        int temp_max_i;
        int temp_max_j;
        bool init = false;  // standard C++ bool instead of Windows boolean/TRUE/FALSE

        // Per-column maximum over the first output blob [1,182,15],
        // indexed here as Values_u[i + j * 182].
        for (int j = 0; j < 15; j++) {

            for (int i = 0; i < 182; i++) {
                if (!init) {
                    temp_max = Values_u[i + j * 182];
                    temp_max_i = i;
                    temp_max_j = j;
                    init = true;
                }
                if (temp_max < Values_u[i + j * 182]) {
                    temp_max = Values_u[i + j * 182];
                    temp_max_i = i;
                    temp_max_j = j;
                }
                if (i == 181)
                {
                    std::cout << "Values_u max:" << temp_max << ", max_i:" << temp_max_i << ", max_j:" << temp_max_j << std::endl;
                    init = false;
                }

            }
        }

        // Same per-column maximum over the second output blob [1,53,15].
        for (int j = 0; j < 15; j++) {
            for (int i = 0; i < 53; i++) {
                if (!init) {
                    temp_max = Values_b[i + j * 53];
                    temp_max_i = i;
                    temp_max_j = j;
                    init = true;
                }
                if (temp_max < Values_b[i + j * 53]) {
                    temp_max = Values_b[i + j * 53];
                    temp_max_i = i;
                    temp_max_j = j;
                }
                if (i == 52)
                {
                    std::cout << "Values_b max:" << temp_max << ", max_i:" << temp_max_i << ", max_j:" << temp_max_j << std::endl;
                    init = false;
                }

            }
        }
        
