Wakeup from sleep mode

I have an NCS2 application where the system needs to be put into sleep mode for a period of time to save battery life. Currently my application gets an "async_infer_request_curr" error when the system wakes up from sleep.

What is the proper procedure to set up the NCS2 before sleep and after wakeup?

My application is modified from the object_detection_demo_ssd_async sample program.

I tried stopping the thread that calls the NCS2 and restarting the thread. It did not seem to work.

Environment: OpenVINO 2019 R1, Windows 10, NCS2, object detection

Any suggestion would be greatly appreciated.

Thanks,

Terry

 


Hi Terry,

Can you provide the modifications you made to the object detection demo so I can look through them?

Best Regards,

Sahira 

Hi Sahira,

Thanks very much for taking a look at this.

Below are the two key functions in C++. The application calls the C++ library from a C# program.

The first function (onInit()) initializes OpenVINO.

The second function is called with an image to get the boxes and classes.

Please let me know if more clarification is needed.

When the system is in sleep mode, does the NCS2 lose power or does it maintain power? Is there an easy way to reset the NCS2?

Thanks,

Terry

 

  

//
//   Function 1
//

          
    void vinoLib::onInit(int boxMax, bool displayWindowsOn, bool useCPU , bool useClassify )

    {
        isDisplay = displayWindowsOn;
        // --------------------------- 1. Load Plugin for inference engine -------------------------------------
        slog::info << "Loading plugin" << slog::endl;
        
#ifdef _RELEASE
        std::string pluginDir = "C:\\Program Files (x86)\\IntelSWTools\\openvino\\deployment_tools\\inference_engine\\bin\\intel64\\Release";

       //        std::string pluginDir = "C:\\Intel\\computer_vision_sdk_2018.5.445\\deployment_tools\\inference_engine\\bin\\intel64\\Debug";
#else
        std::string pluginDir = "C:\\Program Files (x86)\\IntelSWTools\\openvino\\deployment_tools\\inference_engine\\bin\\intel64\\Debug";

        //std::string pluginDir = "C:\\Intel\\computer_vision_sdk_2018.5.445\\deployment_tools\\inference_engine\\bin\\intel64\\Debug";
#endif
        mBoxMax = boxMax;

        /** Load extensions for the plugin **/

        
        InferencePlugin plugin;
        std::string xml ;
        std::string bin  ;
        std::string  labelFileName;

         

        
        if (useCPU)
        {
            // the first pluginDir assignment was dead code; only this one takes effect
            pluginDir = "C:\\Program Files(x86)\\IntelSWTools\\openvino_2019.1.087\\inference_engine\\lib\\intel64";
            // Loading Plugin
            std::cout << std::endl;
            std::cout << "[INFO] - Loading VINO Plugin..." << std::endl;
            plugin = PluginDispatcher({ "", pluginDir , "" }).getPluginByDevice("CPU");
            plugin.AddExtension(std::make_shared<Extensions::Cpu::CpuExtensions>());
            printPluginVersion( plugin, std::cout);

            const std::string& deviceName = "CPU";


            // useClassify currently selects the same model either way,
            // so the if/else with identical branches is collapsed to one set
            xml = "C:\\VC\\TrainedNetwork\\IR_CPU\\frozen_inference_graph.xml";
            bin = "C:\\VC\\TrainedNetwork\\IR_CPU\\frozen_inference_graph.bin";
            labelFileName = "C:\\VC\\TrainedNetwork\\IR_CPU\\label_name.pbtxt";

        } 
        else
        {

            const std::string& deviceName = "MYRIAD";
            plugin = PluginDispatcher({ pluginDir, "" }).getPluginByDevice(deviceName);
            pluginDir = "C:\\VC\\TrainedNetwork\\IR";

            printPluginVersion(plugin, std::cout);
            xml = "C:\\VC\\TrainedNetwork\\IR\\frozen_inference_graph.xml";
            bin = "C:\\VC\\TrainedNetwork\\IR\\frozen_inference_graph.bin";
            labelFileName = "C:\\VC\\TrainedNetwork\\IR\\label_name.pbtxt";
        }

        cv::Mat curr_frame = cv::imread("C:\\VC\\runTime\\vino.bmp");
        // --------------------------- 2. Read IR Generated by ModelOptimizer (.xml and .bin files) ------------
        slog::info << "Loading network files" << slog::endl;
        CNNNetReader netReader;
        /** Read network model **/
        netReader.ReadNetwork(xml);
        /** Set batch size to 1 **/
        slog::info << "Batch size is forced to  1." << slog::endl;
        netReader.getNetwork().setBatchSize(1);
        /** Extract model name and load it's weights **/

        netReader.ReadWeights(bin);
         
        /** Read labels **/
        std::ifstream inputFile(labelFileName);
        std::copy(std::istream_iterator<std::string>(inputFile),
        std::istream_iterator<std::string>(),
        std::back_inserter(labels));
        // -----------------------------------------------------------------------------------------------------

                /** SSD-based network should have one input and one output **/
            // --------------------------- 3. Configure input & output ---------------------------------------------
            // --------------------------- Prepare input blobs -----------------------------------------------------
        slog::info << "Checking that the inputs are as the demo expects" << slog::endl;
        InputsDataMap inputInfo(netReader.getNetwork().getInputsInfo());
        if (inputInfo.size() != 1) {
            throw std::logic_error("This demo accepts networks having only one input");
        }
        InputInfo::Ptr& input = inputInfo.begin()->second;
        inputName = inputInfo.begin()->first;
        input->setPrecision(Precision::U8);

        //temp
        //input->getInputData()->setLayout(Layout::NCHW);
        if (useCPU == false)
        {
            input->getInputData()->setLayout(Layout::NCHW);
        }
        else
        {
            input->getPreProcess().setResizeAlgorithm(ResizeAlgorithm::RESIZE_BILINEAR);
            input->getInputData()->setLayout(Layout::NHWC);
        }

        // testing for cpu
        // input->getInputData()->setLayout(Layout::NCHW);


                // --------------------------- Prepare output blobs -----------------------------------------------------
        slog::info << "Checking that the outputs are as the demo expects" << slog::endl;
        OutputsDataMap outputInfo(netReader.getNetwork().getOutputsInfo());
        if (outputInfo.size() != 1) {
            throw std::logic_error("This demo accepts networks having only one output");
        }
        DataPtr& output = outputInfo.begin()->second;
        outputName = outputInfo.begin()->first;
        const int num_classes = netReader.getNetwork().getLayerByName(outputName.c_str())->GetParamAsInt("num_classes");
        if (labels.size() != num_classes) {
            if (labels.size() == (num_classes - 1))  // if network assumes default "background" class, having no label
                labels.insert(labels.begin(), "fake");
            else
                labels.clear();
        }

        const SizeVector outputDims = output->getTensorDesc().getDims();
        // check the rank before indexing into the dims
        if (outputDims.size() != 4) {
            throw std::logic_error("Incorrect output dimensions for SSD");
        }
        maxProposalCount = outputDims[2];
        objectSize = outputDims[3];
        if (objectSize != 7) {
            throw std::logic_error("Output should have 7 as a last dimension");
        }
        output->setPrecision(Precision::FP32);
        output->setLayout(Layout::NCHW);

        // -----------------------------------------------------------------------------------------------------

    // --------------------------- 4. Loading model to the plugin ------------------------------------------
        slog::info << "Loading model to the plugin" << slog::endl;
        network = plugin.LoadNetwork(netReader.getNetwork(), {});
        // -----------------------------------------------------------------------------------------------------

                // --------------------------- 5. Create infer request -------------------------------------------------
        async_infer_request_next = network.CreateInferRequestPtr();
        async_infer_request_curr = network.CreateInferRequestPtr();
        // -----------------------------------------------------------------------------------------------------

    }
 

 

 

 

//
//   Function 2
//

 

 

    int vinoLib::onOneFrameWithImage(int mFrameNum, unsigned char* img_pointer, unsigned int  width, unsigned int height, int step)

    {
        static cv::Mat curr_frame;
        static cv::Mat next_frame;
        static bool isLastFrame = false;
        //static bool isAsyncMode = false;  // execution is always started using SYNC mode
        //static bool isModeChanged = false;  // set to TRUE when execution mode is changed (SYNC<->ASYNC)
        //static bool isDisplay = true;

        try {
            // --------------------------- 6. Do inference ---------------------------------------------------------
            // slog::info << "Start inference " << slog::endl;

            typedef std::chrono::duration<double, std::ratio<1, 1000>> ms;
            auto total_t0 = std::chrono::high_resolution_clock::now();
            auto wallclock = std::chrono::high_resolution_clock::now();
            double ocv_decode_time = 0, ocv_render_time = 0;

            mFrameNext = mFrameNum;
            mTransferReadyFlag = false; // flag for start of one process

            auto t0 = std::chrono::high_resolution_clock::now();

            //
            // 1.get image from image pointer 
            //
            next_frame = cv::Mat(height, width, CV_8UC3, (void*)img_pointer, step);

            //cv::imwrite("c:\\vc\\debug\\CPPtest1.bmp", next_frame);
            //std::string  s = "p:\\img" + std::to_string(mFrameNext) + ".bmp";
            //    next_frame = cv::imread(s);

            // test to avoid error on sync mode
            //isAsyncMode = true;
            if (!isAsyncMode)
            {
                    curr_frame = next_frame.clone();
                    mFrameCurrent = mFrameNext;
                    mFrameNumProcessing = mFrameCurrent;
            }

            if (curr_frame.rows == 0)
                curr_frame = next_frame.clone();
                //curr_frame = cv::Mat(height, width, CV_8UC3, (void*)img_pointer, step);
 
            if (isAsyncMode) {
                if (isModeChanged) {
                    mFrameNumProcessing = mFrameCurrent;
                    // std::string  s = "c:\\t\\y11\\image\\img" + std::to_string(mFrameCurrent) + ".bmp";
                    // curr_frame = cv::imread(s);                     
                    frameToBlob(curr_frame, async_infer_request_curr, inputName);
                }

                if (!isLastFrame) {
                    mFrameNumProcessing = mFrameNext;
                    // std::string  s = "c:\\t\\y11\\image\\img" + std::to_string(mFrameNext) + ".bmp";
                    // next_frame = cv::imread(s);
                    frameToBlob(next_frame, async_infer_request_next, inputName);
                }
            }
            else if (!isModeChanged) {
                mFrameNumProcessing = mFrameCurrent;
                // std::string  s = "c:\\t\\y11\\image\\img" + std::to_string(mFrameCurrent) + ".bmp";
                // curr_frame = cv::imread(s);

                frameToBlob(curr_frame, async_infer_request_curr, inputName);
            }

            auto t1 = std::chrono::high_resolution_clock::now();
            ocv_decode_time = std::chrono::duration_cast<ms>(t1 - t0).count();
            t0 = std::chrono::high_resolution_clock::now();
            //Main sync point:
            //in the truly Async mode we start the NEXT infer request, while waiting for the CURRENT to complete
            //in the regular mode we start the CURRENT request and immediately wait for it's completion

            if (isAsyncMode) {
                if (isModeChanged) {
                    async_infer_request_curr->StartAsync();
                }
                if (!isLastFrame) {
                    async_infer_request_next->StartAsync();
                }
            }
            else if (!isModeChanged) {
                async_infer_request_curr->StartAsync();
            }

            if (OK == async_infer_request_curr->Wait(IInferRequest::WaitMode::RESULT_READY)) {
                t1 = std::chrono::high_resolution_clock::now();
                ms detection = std::chrono::duration_cast<ms>(t1 - t0);

                t0 = std::chrono::high_resolution_clock::now();
                ms wall = std::chrono::duration_cast<ms>(t0 - wallclock);
                wallclock = t0;

                // slog::info << "Total Inference time 1: " << detection.count() << "ocv + render: " << (ocv_decode_time + ocv_render_time) << slog::endl;

                t0 = std::chrono::high_resolution_clock::now();
                //isDisplay = true;
                if (isDisplay)
                {
                    std::ostringstream out;

                    out << "OpenCV cap/render time: " << std::fixed << std::setprecision(2)
                        << (ocv_decode_time + ocv_render_time) << " ms";
                    cv::putText(curr_frame, out.str(), cv::Point2f(0, 25), cv::FONT_HERSHEY_TRIPLEX, 0.6, cv::Scalar(0, 255, 0));
                    out.str("");
                    out << "Wallclock time " << (isAsyncMode ? "(TRUE ASYNC):      " : "(SYNC, press Tab): ");
                    out << std::fixed << std::setprecision(2) << wall.count() << " ms (" << 1000.f / wall.count() << " fps)";
                    cv::putText(curr_frame, out.str(), cv::Point2f(0, 50), cv::FONT_HERSHEY_TRIPLEX, 0.6, cv::Scalar(0, 0, 255));
                    if (!isAsyncMode) {  // In the true async mode, there is no way to measure detection time directly
                        out.str("");
                        out << "Detection time  : " << std::fixed << std::setprecision(2) << detection.count()
                            << " ms ("
                            << 1000.f / detection.count() << " fps)";
                        cv::putText(curr_frame, out.str(), cv::Point2f(0, 75), cv::FONT_HERSHEY_TRIPLEX, 0.6,
                            cv::Scalar(255, 0, 0));
                    }
                }

                //---------------------------Process output blobs--------------------------------------------------
                //Processing results of the CURRENT request
                const float *detections = async_infer_request_curr->GetBlob(outputName)->buffer().as<PrecisionTrait<Precision::FP32>::value_type*>();

                int countInFrame = 0;
                for (int i = 0; i < maxProposalCount; i++) {
                    float image_id = detections[i * objectSize + 0];
                    int label = static_cast<int>(detections[i * objectSize + 1]);
                    float confidence = detections[i * objectSize + 2];
                    float xmin = detections[i * objectSize + 3] * width;
                    float ymin = detections[i * objectSize + 4] * height;
                    float xmax = detections[i * objectSize + 5] * width;
                    float ymax = detections[i * objectSize + 6] * height;

                    if (isDisplay)
                    {
                        if (image_id < 0) {
                            std::cout << "Only " << i << " proposals found" << std::endl;
                            break;
                        }
                    }
                    //if (FLAGS_r) {
                    //    std::cout << "[" << i << "," << label << "] element, prob = " << confidence <<
                    //        "    (" << xmin << "," << ymin << ")-(" << xmax << "," << ymax << ")"
                    //        << ((confidence > FLAGS_t) ? " WILL BE RENDERED!" : "") << std::endl;
                    //}

                    const float confidence_threshold = 0.15f;
                    if (confidence > confidence_threshold) {
                        /** Drawing only objects when > confidence_threshold probability **/
                        if (isDisplay)
                        {
                            std::ostringstream conf;
                            conf << ":" << std::fixed << std::setprecision(3) << confidence;
                            cv::putText(curr_frame,
                                (label < labels.size() ? labels[label] : std::string("label #") + std::to_string(label))
                                + conf.str(),
                                cv::Point2f(xmin, ymin - 5), cv::FONT_HERSHEY_COMPLEX_SMALL, 1,
                                cv::Scalar(0, 0, 255));
                            cv::rectangle(curr_frame, cv::Point2f(xmin, ymin), cv::Point2f(xmax, ymax), cv::Scalar(0, 0, 255));
                        }
                        // store to array

                        boxStruc b = boxStruc(
                            (int)xmin,
                            (int)ymin,
                            (int)(xmax - xmin),
                            (int)(ymax - ymin),
                            confidence,
                            label,
                            0,
                            mFrameCurrent);
                        //mFrameNumProcessing  );

                        mBoxList1.push_back(b);
                        ++countInFrame;
                    }
                }

                // no box found
                if (countInFrame == 0)
                {
                    boxStruc b = boxStruc(
                        (int)0,
                        (int)0,
                        (int)(0),
                        (int)(0),
                        0,
                        0,
                        0,
                        mFrameCurrent);
                    //mFrameNumProcessing  );
                    mBoxList1.push_back(b);
                }

                //    slog::info << "frame num in core===> " << mFrameCurrent << slog::endl;

            }
            else
            {
                //std::cout << "----------->missed frame ";
                slog::info << "----------->missed frame ";
                //////////////
                // reset async
                /////////////

                //async_infer_request_next.unique();

                //async_infer_request_curr.unique();

                //async_infer_request_next.reset();
                //async_infer_request_curr.reset();
                //Sleep(3000);

                //// --------------------------- 5. Create infer request -------------------------------------------------
                //async_infer_request_next = network.CreateInferRequestPtr();
                //async_infer_request_curr = network.CreateInferRequestPtr();
                //// -----------------------------------------------------------------------------------------------------

            }

            if (isDisplay)
            {
                cv::imshow("Detection results", curr_frame);
            }
            t1 = std::chrono::high_resolution_clock::now();
            ocv_render_time = std::chrono::duration_cast<ms>(t1 - t0).count();

            //if (isLastFrame) {
            //    break;
            //}

            if (isModeChanged) {
                isModeChanged = false;
            }

            //Final point:
            //in the truly Async mode we swap the NEXT and CURRENT requests for the next iteration
            //curr_frame.release();
            
            curr_frame = next_frame;
            mFrameCurrent = mFrameNext;
            // next_frame = cv::Mat();

            if (isAsyncMode) {
                async_infer_request_curr.swap(async_infer_request_next);

            }

            if (isDisplay)
            {
                const int key = cv::waitKey(1);
                //if (27 == key)  // Esc
                //    break;
                if (9 == key) {  // Tab
                    isAsyncMode ^= true;
                    isModeChanged = true;
                }

                if (32 == key)  // Space: toggle display
                {
                    isDisplay = !isDisplay;
                }
            }

            //if (isChageMode == true)
            //{
            //    isChageMode = false;
            //    isModeChanged = true;
            //    if (newMode == true)
            //    {
            //        isAsyncMode = true;
            //    }
            //    else
            //    {
            //        isAsyncMode = false;
            //    }
            //}

            mTransferReadyFlag = true; // flag for end of one process
        //}
        // -----------------------------------------------------------------------------------------------------
            // auto total_t1 = std::chrono::high_resolution_clock::now();
            // ms total = std::chrono::duration_cast<ms>(total_t1 - total_t0);
            // std::cout << "Total Inference time: " << total.count() << std::endl;
            // slog::info << "Total Inference time 1: " << total.count()   <<  "ocv + render: " << (ocv_decode_time + ocv_render_time) << slog::endl;
            ///** Show performace results **/  
            //if (FLAGS_pc) {
            //    printPerformanceCounts(*async_infer_request_curr, std::cout);
        //     }
        }
        catch (const std::exception& error) {
            std::cerr << "[ ERROR ] " << error.what() << " [Frame]  " << mFrameNext << std::endl;
            
            if (isModeChanged) {
                isModeChanged = false;
            }

            //////////////
            // reset async
            /////////////

            // the .unique() calls here were no-ops (their return values were
            // discarded); reset() alone releases the requests
            async_infer_request_next.reset();
            async_infer_request_curr.reset();
            Sleep(3000);

            // --------------------------- 5. Create infer request -------------------------------------------------
            async_infer_request_next = network.CreateInferRequestPtr();
            async_infer_request_curr = network.CreateInferRequestPtr();
            // -----------------------------------------------------------------------------------------------------

                                                                        

            return -1;
        }

        return mFrameNumProcessing;
    }

Hi Terry,

Thank you for providing your code - let me look into this for you. In the meantime, can you please try upgrading to the latest version of OpenVINO and try running your model again?

 

Also, depending on your system settings, the USB ports may still be providing power to the NCS2 while the system is in sleep mode.

Best Regards,

Sahira 

Hi Sahira,

The latest version doesn't seem to make a difference.

You can easily reproduce the problem by putting the computer into sleep mode and waking it up after a few minutes. You are right, the USB ports do not shut off power in sleep mode.

I'm hoping to ship the next batch of products with this option, so it would be nice if I can find a solution to this.

Thanks,

Terry 

