Intel Deep Learning Inference Engine

Where should I get the faster-rcnn.xml model to run the Intel Inference Engine object detection sample?


Hi Hasbullah,

We will share detailed instructions on Monday. Meanwhile, have you tried the flow on other topologies? Any questions regarding Model Optimizer usage to convert a Caffe model to XML on classification topologies?

Regards,
Iliya

Quote:

Ilya (Intel) wrote:

We will share detailed instruction on Monday. [...]

Hi Ilya,

Yep, and it is now working well with the image classification example.

However, when converting the Faster R-CNN caffemodel to XML for the image_detection example, I got this error:

*** Check failure stack trace: ***
    @     0x7f5193c10e6d  (unknown)
    @     0x7f5193c12ced  (unknown)
    @     0x7f5193c10a5c  (unknown)
    @     0x7f5193c1363e  (unknown)
    @     0x7f519410938e  caffe::ReadNetParamsFromTextFileOrDie()
    @     0x7f519410fc3b  readTopology_
    @     0x7f5195c3e53d  Model2OpenVX::CaffeNetworkDescriptor::CaffeNetworkDescriptor()
    @     0x7f5195c3a6fe  Model2OpenVX::CaffeNet::init()
    @     0x7f51969ef72e  Model2OpenVX::FrameworkManager::GenerateIRFile()
    @     0x7f5197137af7  main
    @     0x7f519618fb35  __libc_start_main
    @     0x7f5197138ed7  (unknown)
Aborted (core dumped)

regards,

Hasbullah

Best Reply

To get an XML for the Faster-RCNN model, you can create one yourself with Intel Caffe (with a patch applied):

1. First of all, download the caffemodel and its deployment prototxt file:

2. Download Intel Caffe:

https://github.com/intel/caffe

You can use the master branch. I used revision b0ef32..., which proved to work.

  • Apply faster-rcnn.patch (attached)
  • Add the ModelOptimizer adapters to this Caffe
  • Then build Caffe with all the changes applied.

You can check that this Caffe version scores the model by running caffe test like this:

tools/caffe test --model test.prototxt --weights coco_vgg16_faster_rcnn_final.caffemodel

3. Use ModelOptimizer to produce the XML/BIN pair:

ModelOptimizer -w 'coco_vgg16_faster_rcnn_final.caffemodel' -d 'test.prototxt' -i -p FP32 -f 1 --target XEON -b 1

Please follow these steps and let us know if anything fails (or is unclear).
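Put together, the steps above can be sketched as a dry-run script. This is only an illustration: the clone directory, the make invocation, and the file layout are assumptions, and the `run` helper merely echoes each command so the sequence can be reviewed before anything is executed for real.

```shell
#!/bin/sh
# Dry-run sketch of the Faster-RCNN conversion flow described above.
# Paths and the build command are illustrative assumptions; `run`
# echoes each command instead of executing it.
run() { echo "+ $*"; }

run git clone https://github.com/intel/caffe intel-caffe
run git -C intel-caffe apply faster-rcnn.patch      # patch attached in this thread
run make -C intel-caffe -j4 all                     # rebuild Caffe with the patch applied
run tools/caffe test --model test.prototxt --weights coco_vgg16_faster_rcnn_final.caffemodel
run ModelOptimizer -w coco_vgg16_faster_rcnn_final.caffemodel -d test.prototxt -i -p FP32 -f 1 --target XEON -b 1
```

Changing the body of `run` from `echo "+ $*"` to `"$@"` would execute the flow instead of printing it.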


Hi Hasbullah,

Thank you for your interest in the Deployment Toolkit! Glad to hear that you were able to use the classification sample successfully. Were you able to execute Faster RCNN with the instructions above?

Regards,

Iliya

Hi Ilya,

I downloaded the Intel Caffe master branch, then followed the steps you stated above. Unfortunately, I got the following error:

*** Check failure stack trace: ***
    @     0x7f78ed1bee6d  (unknown)
    @     0x7f78ed1c0ced  (unknown)
    @     0x7f78ed1bea5c  (unknown)
    @     0x7f78ed1c163e  (unknown)
    @     0x7f78f5869939  caffe::LayerRegistry<>::CreateLayer()
    @     0x7f78f58fd27d  caffe::Net<>::Init()
    @     0x7f78f58ff38e  caffe::Net<>::Net()
    @     0x7f78f598ead6  createNet_
    @     0x7f78f7529770  Model2OpenVX::CaffeNet::init()
    @     0x7f78f82de72e  Model2OpenVX::FrameworkManager::GenerateIRFile()
    @     0x7f78f8a26af7  main
    @     0x7f78f7a7eb35  __libc_start_main
    @     0x7f78f8a27ed7  (unknown)
Aborted (core dumped)


Is it a problem with the master branch?

Your help is kindly appreciated.

Quote:

hasbullah p. wrote:

i downloaded the intel caffe master branch.. [...] is it the master branch problem ?

Could you please provide us with the full log, not only the error stack trace?

ModelOptimizer -w /opt/intel/deep_learning_sdk_2017.1.0.2778/deployment_tools/inference_engine/bin/intel64/Release/vgg/coco_vgg16_faster_rcnn_final.caffemodel -d /opt/intel/deep_learning_sdk_2017.1.0.2778/deployment_tools/inference_engine/bin/intel64/Release/vgg/coco_vgg16_faster_rcnn_final.prototxt -i -p FP32 -f 1 --target APLK -b 1
Start working...

Framework plugin: CAFFE
Target type: APLK
Network type: LOCALIZATION
Batch size: 1
Precision: FP32
Layer fusion: true
Output directory: Artifacts
Custom kernels directory:
Network input normalization: 1
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:505] Reading dangerously large protocol message.  If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons.  To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 553272331
name: "VGG_ILSVRC_16_layers"
state {
  phase: TEST
}
layer {
  name: "input"
  type: "Input"
  top: "data"
  top: "im_info"
  input_param {
    shape {
      dim: 1
      dim: 3
      dim: 224
      dim: 224
    }
    shape {
      dim: 1
      dim: 3
    }
  }
}
layer {
  name: "conv1_1"
  type: "Convolution"
  bottom: "data"
  top: "conv1_1"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
  }
}
layer {
  name: "relu1_1"
  type: "ReLU"
  bottom: "conv1_1"
  top: "conv1_1"
}
layer {
  name: "conv1_2"
  type: "Convolution"
  bottom: "conv1_1"
  top: "conv1_2"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
  }
}
layer {
  name: "relu1_2"
  type: "ReLU"
  bottom: "conv1_2"
  top: "conv1_2"
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1_2"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2_1"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2_1"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  convolution_param {
    num_output: 128
    pad: 1
    kernel_size: 3
  }
}
layer {
  name: "relu2_1"
  type: "ReLU"
  bottom: "conv2_1"
  top: "conv2_1"
}
layer {
  name: "conv2_2"
  type: "Convolution"
  bottom: "conv2_1"
  top: "conv2_2"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  convolution_param {
    num_output: 128
    pad: 1
    kernel_size: 3
  }
}
layer {
  name: "relu2_2"
  type: "ReLU"
  bottom: "conv2_2"
  top: "conv2_2"
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2_2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv3_1"
  type: "Convolution"
  bottom: "pool2"
  top: "conv3_1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
  }
}
layer {
  name: "relu3_1"
  type: "ReLU"
  bottom: "conv3_1"
  top: "conv3_1"
}
layer {
  name: "conv3_2"
  type: "Convolution"
  bottom: "conv3_1"
  top: "conv3_2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
  }
}
layer {
  name: "relu3_2"
  type: "ReLU"
  bottom: "conv3_2"
  top: "conv3_2"
}
layer {
  name: "conv3_3"
  type: "Convolution"
  bottom: "conv3_2"
  top: "conv3_3"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
  }
}
layer {
  name: "relu3_3"
  type: "ReLU"
  bottom: "conv3_3"
  top: "conv3_3"
}
layer {
  name: "pool3"
  type: "Pooling"
  bottom: "conv3_3"
  top: "pool3"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv4_1"
  type: "Convolution"
  bottom: "pool3"
  top: "conv4_1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
  }
}
layer {
  name: "relu4_1"
  type: "ReLU"
  bottom: "conv4_1"
  top: "conv4_1"
}
layer {
  name: "conv4_2"
  type: "Convolution"
  bottom: "conv4_1"
  top: "conv4_2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
  }
}
layer {
  name: "relu4_2"
  type: "ReLU"
  bottom: "conv4_2"
  top: "conv4_2"
}
layer {
  name: "conv4_3"
  type: "Convolution"
  bottom: "conv4_2"
  top: "conv4_3"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
  }
}
layer {
  name: "relu4_3"
  type: "ReLU"
  bottom: "conv4_3"
  top: "conv4_3"
}
layer {
  name: "pool4"
  type: "Pooling"
  bottom: "conv4_3"
  top: "pool4"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv5_1"
  type: "Convolution"
  bottom: "pool4"
  top: "conv5_1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
  }
}
layer {
  name: "relu5_1"
  type: "ReLU"
  bottom: "conv5_1"
  top: "conv5_1"
}
layer {
  name: "conv5_2"
  type: "Convolution"
  bottom: "conv5_1"
  top: "conv5_2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
  }
}
layer {
  name: "relu5_2"
  type: "ReLU"
  bottom: "conv5_2"
  top: "conv5_2"
}
layer {
  name: "conv5_3"
  type: "Convolution"
  bottom: "conv5_2"
  top: "conv5_3"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
  }
}
layer {
  name: "relu5_3"
  type: "ReLU"
  bottom: "conv5_3"
  top: "conv5_3"
}
layer {
  name: "conv5_3_relu5_3_0_split"
  type: "Split"
  bottom: "conv5_3"
  top: "conv5_3_relu5_3_0_split_0"
  top: "conv5_3_relu5_3_0_split_1"
}
layer {
  name: "rpn_conv/3x3"
  type: "Convolution"
  bottom: "conv5_3_relu5_3_0_split_0"
  top: "rpn/output"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "rpn_relu/3x3"
  type: "ReLU"
  bottom: "rpn/output"
  top: "rpn/output"
}
layer {
  name: "rpn/output_rpn_relu/3x3_0_split"
  type: "Split"
  bottom: "rpn/output"
  top: "rpn/output_rpn_relu/3x3_0_split_0"
  top: "rpn/output_rpn_relu/3x3_0_split_1"
}
layer {
  name: "rpn_cls_score"
  type: "Convolution"
  bottom: "rpn/output_rpn_relu/3x3_0_split_0"
  top: "rpn_cls_score"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 24
    pad: 0
    kernel_size: 1
    stride: 1
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "rpn_bbox_pred"
  type: "Convolution"
  bottom: "rpn/output_rpn_relu/3x3_0_split_1"
  top: "rpn_bbox_pred"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 48
    pad: 0
    kernel_size: 1
    stride: 1
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "rpn_cls_score_reshape"
  type: "Reshape"
  bottom: "rpn_cls_score"
  top: "rpn_cls_score_reshape"
  reshape_param {
    shape {
      dim: 0
      dim: 2
      dim: -1
      dim: 0
    }
  }
}
layer {
  name: "rpn_cls_prob"
  type: "Softmax"
  bottom: "rpn_cls_score_reshape"
  top: "rpn_cls_prob"
}
layer {
  name: "rpn_cls_prob_reshape"
  type: "Reshape"
  bottom: "rpn_cls_prob"
  top: "rpn_cls_prob_reshape"
  reshape_param {
    shape {
      dim: 0
      dim: 24
      dim: -1
      dim: 0
    }
  }
}
layer {
  name: "proposal"
  type: "SimplerNMS"
  bottom: "rpn_cls_prob_reshape"
  bottom: "rpn_bbox_pred"
  bottom: "im_info"
  top: "rois"
  simpler_nms_param {
    max_num_proposals: 300
    pre_nms_topn: 6000
    post_nms_topn: 150
    scale: 8
    scale: 16
    scale: 32
  }
}
layer {
  name: "roi_pool5"
  type: "ROIPooling"
  bottom: "conv5_3_relu5_3_0_split_1"
  bottom: "rois"
  top: "pool5"
  roi_pooling_param {
    pooled_h: 7
    pooled_w: 7
    spatial_scale: 0.0625
  }
}
layer {
  name: "fc6"
  type: "InnerProduct"
  bottom: "pool5"
  top: "fc6"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  inner_product_param {
    num_output: 4096
  }
}
layer {
  name: "relu6"
  type: "ReLU"
  bottom: "fc6"
  top: "fc6"
}
layer {
  name: "fc7"
  type: "InnerProduct"
  bottom: "fc6"
  top: "fc7"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  inner_product_param {
    num_output: 4096
  }
}
layer {
  name: "relu7"
  type: "ReLU"
  bottom: "fc7"
  top: "fc7"
}
layer {
  name: "fc7_relu7_0_split"
  type: "Split"
  bottom: "fc7"
  top: "fc7_relu7_0_split_0"
  top: "fc7_relu7_0_split_1"
}
layer {
  name: "cls_score"
  type: "InnerProduct"
  bottom: "fc7_relu7_0_split_0"
  top: "cls_score"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  inner_product_param {
    num_output: 81
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "bbox_pred"
  type: "InnerProduct"
  bottom: "fc7_relu7_0_split_1"
  top: "bbox_pred"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  inner_product_param {
    num_output: 324
    weight_filler {
      type: "gaussian"
      std: 0.001
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "cls_prob"
  type: "Softmax"
  bottom: "cls_score"
  top: "cls_prob"
}
F0607 11:11:16.852372  6666 layer_factory.hpp:118] Check failed: registry.count(type) == 1 (0 vs. 1) Unknown layer type: SimplerNMS (known types: AbsVal, Accuracy, AnnotatedData, ArgMax, BNLL, BatchNorm, BatchReindex, Bias, Concat, ContrastiveLoss, Convolution, Crop, CustomConvolution, CustomDeconvolution, CustomInnerProduct, CustomLRN, CustomPooling, CustomRelU, CustomSoftmax, CustomSoftmaxWithLoss, CustomTanH, Data, Deconvolution, DetectionEvaluate, DetectionOutput, Dropout, DummyData, ELU, Eltwise, Embed, EuclideanLoss, Exp, Filter, Flatten, HDF5Data, HDF5Output, HingeLoss, Im2col, ImageData, InfogainLoss, InnerProduct, Input, LRN, LSTM, LSTMUnit, Log, MVN, MemoryData, MultiBoxLoss, MultinomialLogisticLoss, Normalize, PReLU, Parameter, Permute, Pooling, Power, PriorBox, RNN, ReLU, Reduction, Reshape, SPP, Scale, Sigmoid, SigmoidCrossEntropyLoss, Silence, Slice, SmoothL1Loss, Softmax, SoftmaxWithLoss, Split, TanH, Threshold, Tile, VideoData, WindowData)
*** Check failure stack trace: ***
    @     0x7f78ed1bee6d  (unknown)
    @     0x7f78ed1c0ced  (unknown)
    @     0x7f78ed1bea5c  (unknown)
    @     0x7f78ed1c163e  (unknown)
    @     0x7f78f5869939  caffe::LayerRegistry<>::CreateLayer()
    @     0x7f78f58fd27d  caffe::Net<>::Init()

CentOS uname:

Linux localhost.localdomain 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
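The `Unknown layer type: SimplerNMS` failure above lists every layer the Caffe binary was built with, so the log itself can be checked mechanically. The sketch below is not an official tool: `has_layer` and the log filename `modeloptimizer.log` are made-up names for illustration.

```shell
# Sketch: check whether a layer type appears in Caffe's "known types"
# list inside a captured failure log. `has_layer` and modeloptimizer.log
# are illustrative names, not part of any Intel tool.
has_layer() {
  # $1 = layer type name, $2 = file holding the captured error output
  grep -q "known types:.*$1" "$2"
}

# If this prints "patch missing", faster-rcnn.patch was likely not
# applied before Caffe was built, which matches the failure above.
if has_layer SimplerNMS modeloptimizer.log 2>/dev/null; then
  echo "patched build"
else
  echo "patch missing"
fi
```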

