MTCNN

Hello

MTCNN inference sample: how to?

I converted the Caffe model to IR, but I do not know how to use it.

Please help

 


Hi vladmir,

Perhaps you mean MTCNN? One of OpenVINO's face-detection samples should work fine. For instance, see inference_engine/samples/interactive_face_detection_demo/README.md for how to run the sample from the command line. Note that you must build the samples first.

 

Good day!
Thanks for the comment - corrected.
Yes, you are right. Face detection in OpenVINO works fine, except for one thing: I did not manage to detect faces smaller than 60x60 pixels, while MTCNN handles them successfully.
Perhaps I am missing something, and I would be glad to hear an expert's opinion.
As for MTCNN, I would like to get an example in C++.
Thanks

Hi,

I converted caffe-mtcnn-o, caffe-mtcnn-p, and caffe-mtcnn-r to mtcnn-o.xml, mtcnn-p.xml, and mtcnn-r.xml, but I encountered the following error while running interactive_face_detection_demo:

[ ERROR ] Face Detection network output layer(prob1) should be DetectionOutput, but was SoftMax.

I run the demo like this:

./intel64/Release/interactive_face_detection_demo -m ~/openvino_models/mtcnn/p/fp16/mtcnn-p.xml  ~/openvino_models/mtcnn/r/fp16/mtcnn-r.xml  ~/openvino_models/mtcnn/o/fp16/mtcnn-o.xml -i /dev/video0 -d MYRIAD

How do I load these three models simultaneously with this demo? Do I need to modify the source code?

Can someone help me?

Thanks.

Dear liu, feifei,

Your question is reasonable. Public models should be supported according to this doc.

Where did you get the caffe-mtcnn models? Also, can you give me the Model Optimizer command you used in each case?

Thanks,

Shubha

I downloaded the models with the Model Downloader script in the deployment tools; the path is ~/intel/openvino/deployment_tools/tools/model_downloader/downloader.py.

python3 downloader.py --name mtcnn-p
python3 downloader.py --name mtcnn-r
python3 downloader.py --name mtcnn-o

Then I used the following commands to convert the Caffe models to IR files:

python3 mo.py --framework=caffe --input_shape=[1,3,720,1280] --input=data --output=prob1 --input_model ~/Desktop/mtcnn/p/mtcnn-p.caffemodel --input_proto ~/Desktop/mtcnn/p/mtcnn-p.prototxt --data_type=FP16

python3 mo.py --framework=caffe --input_shape=[1,3,24,24]  --input=data --output=prob1 --input_model ~/Desktop/mtcnn/r/mtcnn-r.caffemodel --input_proto ~/Desktop/mtcnn/r/mtcnn-r.prototxt --data_type=FP16

python3 mo.py --framework=caffe --input_shape=[1,3,48,48]  --input=data --output=prob1 --input_model ~/Desktop/mtcnn/o/mtcnn-o.caffemodel --input_proto ~/Desktop/mtcnn/o/mtcnn-o.prototxt --data_type=FP16

For the Model Optimizer parameters, I referred to the file ~/intel/openvino/deployment_tools/tools/model_downloader/list_topologies.yml.

Is there a problem with my procedure? How can I use MTCNN on the NCS2?

Thanks.
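One detail worth checking: the conversion above freezes PNet's input shape at [1,3,720,1280], but PNet is fully convolutional and is normally run on the original image resized to several scales (an image pyramid), so a fixed shape only covers one scale. As a rough illustration, here is a sketch of how that scale pyramid is usually computed; the defaults (minimum face size 20, scale factor 0.709, PNet cell size 12) are the conventional MTCNN values, not anything taken from this thread:

```python
# Sketch: compute the image-pyramid scales PNet is normally run at.
# Assumed MTCNN defaults (not from this thread):
#   min_face_size = 20, scale factor = 0.709, PNet receptive field = 12.

def pyramid_scales(height, width, min_face_size=20, factor=0.709, cell_size=12):
    """Return the list of scales to resize the image by before running PNet."""
    scales = []
    # Initial scale maps the smallest detectable face onto the 12x12 cell.
    m = cell_size / min_face_size
    min_side = min(height, width) * m
    while min_side >= cell_size:
        scales.append(m)
        m *= factor
        min_side *= factor
    return scales

if __name__ == "__main__":
    # For a 720x1280 frame this yields 11 scales, from 0.6 downwards.
    for s in pyramid_scales(720, 1280):
        print(round(s, 4))
```

Each scale would need its own PNet inference (or a reshaped network), which is one reason the single fixed-shape IR above cannot reproduce full MTCNN behaviour on its own.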

Dear liu, feifei,

Everything you did was just fine. What happens when you run the object detection demo? Don't use SSD; just run the C++ object_detection_demo, which you will find in the samples kit.

Can you try it and report back here ?

Thanks,

Shubha

Dear Shubha:

When I run the object detection demo with MTCNN, a segmentation fault (core dumped) occurs:

./intel64/Release/object_detection_demo -i ~/Desktop/1.bmp -m ~/mtcnn/p/mtcnn-p.xml ~/mtcnn/r/mtcnn-r.xml ~/mtcnn/o/mtcnn-o.xml -d CPU
[ INFO ] InferenceEngine: 
	API version ............ 1.6
	Build .................. custom_releases/2019/R1.1_28dfbfdd28954c4dfd2f94403dd8dfc1f411038b
Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     /home/liuliu/Desktop/1.bmp
[ INFO ] Loading plugin

	API version ............ 1.6
	Build .................. 23780
	Description ....... MKLDNNPlugin
[ INFO ] Loading network files:
	/home/liuliu/openvino_models/mtcnn/p/mtcnn-p.xml
	/home/liuliu/openvino_models/mtcnn/p/mtcnn-p.bin
Segmentation fault (core dumped)

But I can run the benchmark_app demo with MTCNN as follows:

./intel64/Release/benchmark_app -i ~/Documents/face/ -m ~/mtcnn/p/mtcnn-p.xml ~/mtcnn/r/mtcnn-r.xml ~/mtcnn/o/mtcnn-o.xml -d CPU

Apart from the benchmark_app demo, which demo can be used to run MTCNN?

As I mentioned before, the interactive_face_detection_demo can't run MTCNN either.

Thanks.

Dear liu, feifei,

Glad that you at least got benchmark_app to run. This looks like a bug: the object detection demo should definitely work and should not core dump. As for the interactive face detection demo, it is not really advertised to work with MTCNN. But since the downloader categorizes MTCNN under the object_detection folder, the object detection demo should work fine.

Sorry about the trouble you're having. 

I will file a bug straight away on your behalf.

For now, there is no other OpenVINO sample you can use.

Thanks,

Shubha

 

Hi,

If you want to use MTCNN, you need to feed the right input into each of its three models (PNet, RNet, ONet) and apply the right post-processing to the bounding boxes that each net produces.

The input to PNet should be the original image at several different scales. PNet's candidate face boxes should then be filtered with NMS (non-maximum suppression) and refined with the predicted regression offsets. After that, each face box should be fed to RNet, again followed by NMS and refinement, and the result fed to the last net, ONet.

The three nets in MTCNN have different functions; the pipeline is quite different from SSD.

You can look at the examples on GitHub, e.g. ncnn+mtcnn.
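To make the "NMS" and "refine" steps above concrete, here is a minimal NumPy sketch of both. This is an illustrative implementation, not code from any MTCNN repository; real implementations differ in details such as box/offset ordering, "union" vs "min" IoU modes, and thresholds per stage:

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression; boxes are rows of [x1, y1, x2, y2]."""
    order = scores.argsort()[::-1]  # indices, highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # Intersection of box i with every remaining box.
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        # Drop boxes that overlap box i too much; keep the rest for later rounds.
        order = rest[iou <= iou_threshold]
    return keep

def refine(boxes, offsets):
    """Shift each box by its regression offsets, scaled by the box size."""
    w = (boxes[:, 2] - boxes[:, 0])[:, None]
    h = (boxes[:, 3] - boxes[:, 1])[:, None]
    return boxes + offsets * np.hstack([w, h, w, h])
```

The same nms/refine pair is applied after each stage (PNet, RNet, ONet), with the surviving refined boxes cropped from the original image and resized to the next net's input size (24x24 for RNet, 48x48 for ONet, matching the shapes used in the mo.py commands earlier in this thread).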

Dear hua, wei,

Thank you for your detailed explanation of why SSD-style processing won't work for MTCNN models. I think the non-SSD sample (which I suggested) should work, but unfortunately it crashes. I need to reproduce this and file a bug on the issue.

Thanks,

Shubha

 

Hi liu, feifei and Shubha,

Are you able to use any sample to run detection with MTCNN?

I have converted the Caffe model, but when I click on the "object detection demo" link, it takes me to an empty page.

Any help would be appreciated.

Regards,

Raj Shah

Dear Shah, Raj,

In OpenVINO, the samples are not downloadable from a link. Please download the latest 2019 R2 package, follow the installation instructions, build the C++ samples, and go from there. By the way, OpenVINO ships both C++ and Python samples and demos.

Thanks

Shubha
