Two models on NCS2 stick, same input.


TCE Open Date: Wednesday, February 12, 2020 - 16:38



Hello.

I have two models loaded onto one Myriad (Movidius NCS2) device on a USB stick.

Both models have the same input.

Can I supply the input once? 

Currently I create two nets, A and B, assign the same data twice, and transfer the same data twice to the USB device:

 

executable_networkA = core.LoadNetwork(networkA, device_name); 
executable_networkB = core.LoadNetwork(networkB, device_name); 

infer_requestA = executable_networkA.CreateInferRequest(); 
infer_requestB = executable_networkB.CreateInferRequest(); 

input_blobA = infer_requestA.GetBlob(networkA.getInputsInfo().begin()->first); 
input_blobB = infer_requestB.GetBlob(networkB.getInputsInfo().begin()->first); 

auto *dataA = input_blobA->buffer().as<PrecisionTrait<Precision::FP16>::value_type *>();
auto *dataB = input_blobB->buffer().as<PrecisionTrait<Precision::FP16>::value_type *>();
dataA[..] = data;
dataB[..] = data;

Is it possible to have input_blobB point to the internal (on the USB/VPU) memory assigned to input_blobA?

And thereby only transfer the input data once?

 

Thanks and Please have a Good day.

/Brian

 

NB: This post was also posted in the Watercooler forum (?!). I have reposted it here.


AAhh ha

In section "6) Prepare input" we are presented with the shared-blob option:

auto roiBlob = InferenceEngine::make_shared_blob(inputBlob, cropRoi);
infer_request2->SetBlob(input_name, roiBlob);

I will try that and report back, probably next week.


The ROI approach did not work for me. Maybe because I was using batch=4?

I succeeded with another variant of make_shared_blob():

 

input_blob = infer_requests[0].GetBlob(networks[0].getInputsInfo().begin()->first);
..
..

auto *dat = input_blob->buffer().as<PrecisionTrait<Precision::FP16>::value_type *>();
InferenceEngine::Blob::Ptr shared_blob = InferenceEngine::make_shared_blob(
    input_blob->getTensorDesc(), dat,
    input_n_channels * input_height * input_width * input_batchsize);
infer_requests[model_indx].SetBlob(networks[model_indx].getInputsInfo().begin()->first, shared_blob);

 

 

Have a nice one

/brian

 


Hi brian,

I am glad you could figure this out!

Please, feel free to reach out again if you have additional questions.

Best Regards,

David
