Code Sample: Use LibRealSense and OpenCV* to stream RGB and Depth Data

Introduction 

In this document I will show you how to use LibRealSense and OpenCV to stream RGB and depth data. This article assumes you have already downloaded and installed both LibRealSense and OpenCV, and have them set up properly on Ubuntu*. I will be using Ubuntu 16.04 with the Eclipse* Neon™ IDE, though earlier versions will most likely work fine; Neon just happens to be the version of Eclipse I was working with when this sample was created.

In this article I make the following assumptions that the reader:

  1. Is somewhat familiar with the Eclipse IDE. The reader should know how to open Eclipse and create a brand new empty C++ project.
  2. Is familiar with C++.
  3. Knows how to get around Linux*.
  4. Knows what GitHub* is and knows how to at least download a project from a GitHub repository.

In the end you will have a nice starting point: a code base you can build upon to create your own LibRealSense / OpenCV applications.

Conventions 

LRS = LibRealSense. I get tired of writing it out. It’s that simple. So, if you see LRS, you know what it means.

Software Requirements 

Supported Cameras 

  • RealSense R200

In theory, all the Intel® RealSense™ cameras (R200, F200, SR300) should work with this code sample; however, it was only tested with the R200.

Setting up the Eclipse Project 

As mentioned, I’m going to assume that the reader is already familiar with opening Eclipse and creating a brand new empty C++ project.

What I would like to show you is the various C++ header and linker settings I used for creating my Eclipse project.

Header file includes 

The following image shows which header directories I’ve included. If you followed the steps for installing LRS, your LibRealSense header files should be in the proper location. The same goes for OpenCV.

[Image: Header file includes]

Library file includes 

This image shows the libraries that the application links against: one LRS library and three OpenCV libraries. Again, I’m assuming you have already set up LRS and OpenCV properly.

[Image: Library file includes]

The main.cpp source code file contents 

Here is the source code for the example application.

/////////////////////////////////////////////////////////////////////////////

// License: Apache 2.0. See LICENSE file in root directory.

// Copyright(c) 2016 Intel Corporation. All Rights Reserved.

//

//

//

/////////////////////////////////////////////////////////////////////////////

// Authors
// * Rudy Cazabon
// * Rick Blacker
//
// Dependencies
// * LibRealSense
// * OpenCV
//
/////////////////////////////////////////////////////////////////////////////
// This code sample shows how you can use LibRealSense and OpenCV to display
// both an RGB stream as well as Depth stream into two separate OpenCV
// created windows.
//
/////////////////////////////////////////////////////////////////////////////

#include <librealsense/rs.hpp>
#include <opencv2/opencv.hpp>
#include <opencv2/highgui.hpp>

using namespace std;
using namespace rs;


// Window size and frame rate
int const INPUT_WIDTH      = 320;
int const INPUT_HEIGHT     = 240;
int const FRAMERATE        = 60;

// Named windows
const char* const WINDOW_DEPTH = "Depth Image";
const char* const WINDOW_RGB   = "RGB Image";


context      _rs_ctx;
device&      _rs_camera = *_rs_ctx.get_device( 0 );
intrinsics   _depth_intrin;
intrinsics  _color_intrin;
bool         _loop = true;


// Enable the color and depth streams and start the camera. Returns true on success.

bool initialize_streaming( )
{
       bool success = false;
       if( _rs_ctx.get_device_count( ) > 0 )
       {
             _rs_camera.enable_stream( rs::stream::color, INPUT_WIDTH, INPUT_HEIGHT, rs::format::rgb8, FRAMERATE );
             _rs_camera.enable_stream( rs::stream::depth, INPUT_WIDTH, INPUT_HEIGHT, rs::format::z16, FRAMERATE );
             _rs_camera.start( );

             success = true;
       }
       return success;
}




/////////////////////////////////////////////////////////////////////////////
// If the left mouse button was clicked on either image, stop streaming and close windows.
/////////////////////////////////////////////////////////////////////////////
static void onMouse( int event, int x, int y, int, void* window_name )
{
       if( event == cv::EVENT_LBUTTONDOWN )
       {
             _loop = false;
       }
}


/////////////////////////////////////////////////////////////////////////////
// Create the depth and RGB windows, set their mouse callbacks.
// Required if we want to create a window and have the ability to use it in
// different functions
/////////////////////////////////////////////////////////////////////////////
void setup_windows( )
{
       cv::namedWindow( WINDOW_DEPTH, 0 );
       cv::namedWindow( WINDOW_RGB, 0 );

       cv::setMouseCallback( WINDOW_DEPTH, onMouse, WINDOW_DEPTH );
       cv::setMouseCallback( WINDOW_RGB, onMouse, WINDOW_RGB );
}


/////////////////////////////////////////////////////////////////////////////
// Called every frame; gets the data from the streams and displays it using OpenCV.
/////////////////////////////////////////////////////////////////////////////
bool display_next_frame( )
{

       _depth_intrin       = _rs_camera.get_stream_intrinsics( rs::stream::depth );
       _color_intrin       = _rs_camera.get_stream_intrinsics( rs::stream::color );
	   

       // Create depth image
       cv::Mat depth16( _depth_intrin.height,
                                  _depth_intrin.width,
                                  CV_16U,
                                  (uchar *)_rs_camera.get_frame_data( rs::stream::depth ) );

       // Create color image
       cv::Mat rgb( _color_intrin.height,
                            _color_intrin.width,
                            CV_8UC3,
                            (uchar *)_rs_camera.get_frame_data( rs::stream::color ) );

       // Convert the 16-bit depth image into an 8-bit image that imshow() can
       // display, scaling so that depths up to 1000 mm map into the 0-255 range.
       cv::Mat depth8u = depth16;
       depth8u.convertTo( depth8u, CV_8UC1, 255.0/1000 );

       imshow( WINDOW_DEPTH, depth8u );
       cv::waitKey( 1 );

       cv::cvtColor( rgb, rgb, cv::COLOR_BGR2RGB );
       imshow( WINDOW_RGB, rgb );
       cv::waitKey( 1 );

       return true;
}

/////////////////////////////////////////////////////////////////////////////
// Main function
/////////////////////////////////////////////////////////////////////////////
int main( ) try
{
       rs::log_to_console( rs::log_severity::warn );

       if( !initialize_streaming( ) )
       {
             std::cout << "Unable to locate a camera" << std::endl;
             rs::log_to_console( rs::log_severity::fatal );
             return EXIT_FAILURE;
       }

       setup_windows( );

       // Loop until someone left clicks on either of the images in either window.
       while( _loop )
       {
             if( _rs_camera.is_streaming( ) )
                    _rs_camera.wait_for_frames( );

             display_next_frame( );
       }


       _rs_camera.stop( );
       cv::destroyAllWindows( );
	   

       return EXIT_SUCCESS;

}
catch( const rs::error & e )
{
       std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n    " << e.what() << std::endl;
       return EXIT_FAILURE;
}
catch( const std::exception & e )
{
       std::cerr << e.what() << std::endl;
       return EXIT_FAILURE;
}

Source code explained 

Overview 

The structure is pretty simplistic. It’s a single source code file containing everything we need for the sample, with the header includes at the top. Because this is a sample application, we are not going to worry too much about “best practices” in defensive software engineering. Yes, we could have better error checking; however, the goal here is to make this sample application as easy to read and comprehend as possible.

Constants 

Here you can see the constant values for the width, height, and framerate. These are basic values dictating the size of the image we want to stream, the size of the window we want to display the stream in, and the framerate we want. After that we have two string constants, which are used for naming our OpenCV windows.

Global variables 

While I’m not a fan of global variables per se, in a streaming app such as this I don’t mind bending the rules a little bit. And while simple streaming such as what is in this sample app may not be resource intensive, other things we could bring to the app could be. So, if we can squeeze out any performance now, it could be beneficial down the road.

  • _rs_ctx is used to return a device (camera). Notice here that we are hard coding getting the first device. There are ways to detect all devices, however that is out of scope for this article.
  • _rs_camera is the RealSense device (camera) that we are streaming from.
  • _depth_intrin is a LRS intrinsics object that contains information about the current depth frame. In this case we are mostly interested in the size of the image.
  • _color_intrin is a LRS intrinsics object that contains information about the current color frame. In this case we are mostly interested in the size of the image.
  • _loop is simply used to know when to stop the processing of images. Initially set to true, it is set to false when a user clicks on an image in an OpenCV window.

I want to point out that _depth_intrin and _color_intrin are not strictly necessary. They are not the product of calculations of any type. They are simply used for collecting intrinsic data in the display_next_frame( ) function, making the code easier to read when creating the OpenCV Mat objects. They are global so we don’t have to create these two variables every single frame.

Functions 

main(…)

Obviously, as the name implies, this is the main function. We don’t need any command line arguments, so I’ve chosen not to include any parameters. The first thing that happens shows how you can use LRS to log to the console; here we are asking LRS to print out any warnings. Next we initialize the streams by calling initialize_streaming( ). If no camera is found, we print a message and exit. After that we make a call to setup_windows( ). At this point everything is set up and we can begin streaming. This is done in the while loop: while _loop is true, we check whether the camera is streaming, and if so wait for the next set of frames, then call display_next_frame( ) to fetch the frame data and display it.

Once _loop has been set to false, we fall out of the while loop, stop the camera and tell OpenCV to close all its windows. At this point, the app will then quit.

initialize_streaming(…)

This is where we initially set up the camera for streaming. We will have two streams: one depth, one color. The images will be the size specified in the constants. We also must specify the format of each stream and the framerate. For future expansion, it might be better to add some kind of error checking/handling here; however, to keep things simple, we have chosen not to do anything fancy and assume the happy path.
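Should you want that error handling later, one lightweight pattern is to funnel the setup calls through a wrapper that converts exceptions into a boolean status. This is purely a sketch; try_initialize and setup_steps are illustrative names, not part of the sample or of LRS:

```cpp
#include <functional>
#include <iostream>
#include <stdexcept>

// Hypothetical hardening sketch: run the camera-setup steps (the
// enable_stream()/start() calls) inside a try/catch so a failure surfaces
// as a clean boolean instead of an unhandled exception.
bool try_initialize( const std::function<void()>& setup_steps )
{
       try
       {
              setup_steps( );
              return true;
       }
       catch( const std::exception& e )
       {
              std::cerr << "Stream setup failed: " << e.what( ) << std::endl;
              return false;
       }
}
```

initialize_streaming( ) could then pass its enable_stream( ) and start( ) calls in as the lambda body.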

setup_windows(…)

This is a pretty easy function to understand. We tell OpenCV to create two new named windows, using the string constants WINDOW_DEPTH and WINDOW_RGB for the names. Once we have created them, we associate the mouse callback function onMouse with each.

onMouse(…)

onMouse will be triggered any time a user clicks on the body of a window, specifically, where the image is being displayed. We are using this function as an easy way to stop the application. All it does is check whether the event was a left button click; if so, it sets the Boolean flag _loop to false, which causes the code to exit the while loop in the main function.
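Note that the window name we passed as user data to setMouseCallback( ) comes back through the void* parameter, so the callback could also tell the two windows apart if we ever needed to. The sample doesn’t use this, but as a sketch (clicked_window is a hypothetical helper, not part of the sample):

```cpp
#include <cstring>

// Hypothetical helper: recover which window was clicked from the user-data
// pointer that setMouseCallback( ) hands back to the callback. The names
// match the sample's WINDOW_DEPTH and WINDOW_RGB constants.
const char* clicked_window( void* window_name )
{
       const char* name = static_cast<const char*>( window_name );
       if( std::strcmp( name, "Depth Image" ) == 0 )
              return "depth";
       if( std::strcmp( name, "RGB Image" ) == 0 )
              return "rgb";
       return "unknown";
}
```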

display_next_frame(…)

This function is responsible for displaying the LRS data in the OpenCV windows. We start off by getting the intrinsics data from the camera. Next we create the depth and RGB OpenCV Mat objects. We specify their dimensions and format, and point their pixel buffers at the current frame of the corresponding camera stream: the depth Mat wraps the camera’s depth data, and the color Mat wraps the camera’s color stream.
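Because the Mats simply wrap the camera’s frame buffers, the pixel layout is plain row-major memory. As an illustration of that layout (rgb_pixel_offset is an illustrative helper, not part of the sample), pixel ( x, y ) of the 3-channel, 8-bit color image sits at this byte offset:

```cpp
#include <cstddef>

// Illustrative only: byte offset of pixel ( x, y ) in a tightly packed,
// row-major, 3-channel, 8-bit buffer, like the color frame the sample wraps.
std::size_t rgb_pixel_offset( int x, int y, int width, int channels = 3 )
{
       return ( static_cast<std::size_t>( y ) * width + x ) * channels;
}
```

For the sample’s 320x240 color stream, each row is 320 * 3 = 960 bytes, so pixel ( 0, 1 ) starts at byte 960.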

The next thing we do is create a new Mat object, depth8u. This is used to scale the 16-bit depth values into the 0-255 range required by OpenCV’s imshow( ) function, which cannot display 16-bit depth images.
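Per pixel, that convertTo( ) call multiplies each 16-bit depth value (millimeters for the z16 format) by 255.0/1000 and saturates the result, so anything deeper than one meter clips to white. Roughly the same math in a standalone helper (depth_to_display is illustrative, not part of the sample; OpenCV’s exact rounding may differ slightly):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Illustrative per-pixel version of convertTo( depth8u, CV_8UC1, 255.0/1000 ):
// scale a raw depth value so 0-1000 mm maps to 0-255, clipping anything deeper.
uint8_t depth_to_display( uint16_t depth_mm, double max_mm = 1000.0 )
{
       long scaled = std::lround( depth_mm * 255.0 / max_mm );
       return static_cast<uint8_t>( std::min( scaled, 255L ) );
}
```

Raising max_mm (say, to 2000.0) trades contrast for a longer visible range.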

Once we have converted the depth image, we display it using the OpenCV function imshow. We are telling it which named window to use via the WINDOW_DEPTH constant and giving it the depth image. cv::waitKey( 1 ) tells OpenCV to pause briefly to allow other processing, such as key presses, to take place. After the depth window, we move on to the color/RGB window. cvtColor swaps the red and blue channels, converting the camera’s RGB frame into the BGR channel order that OpenCV windows expect. Once that has completed, we show the image and call waitKey again.
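That cvtColor( ) call is a pure channel swap; per pixel it amounts to the following (swap_rb is an illustrative stand-in, not an OpenCV function):

```cpp
#include <array>
#include <utility>

// Illustrative per-pixel equivalent of cv::COLOR_BGR2RGB (the same swap as
// COLOR_RGB2BGR): exchange the first and third channels of a 3-byte pixel.
std::array<unsigned char, 3> swap_rb( std::array<unsigned char, 3> px )
{
       std::swap( px[0], px[2] );
       return px;
}
```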

Wrap up 

In this article, I’ve attempted to show you just how easy it is to stream data from a RealSense camera using the LibRealSense open source library and display it into a window using OpenCV. While this sample is simple, it does help form a base application from which you can create more complex applications using OpenCV.
