Developer's Guide


Introduction

Intel RealSense Cross Platform API (librealsense) is a cross-platform library for Linux, Windows, and Mac for capturing data from the Intel RealSense F200, SR300, R200, LR200 and ZR300 cameras. This effort was initiated to better support researchers, creative coders, and app developers in domains such as robotics, virtual reality, and the Internet of Things. Several often-requested features of RealSense devices are implemented in this project, including multi-camera capture.

Developer kits containing the necessary hardware to use this library are available for purchase at this link. This project is separate from the production software stack available in the Intel RealSense SDK, in that this library encompasses only camera capture functionality, without additional computer vision algorithms.

The Intel RealSense Cross Platform API is experimental and not an official Intel product. It is subject to incompatible API changes in future updates.

A comprehensive suite of sample and tutorial applications is provided in the /examples subdirectory. New users are best served by the tutorial series of apps, which are designed to progressively introduce API features.

Developer kits require USB 3.0; RealSense cameras are not backwards-compatible with USB 2.0. Not all USB host chipsets are compatible with this library, although it has been validated against recent generations of the Intel Host Controller chipset.

Basic Concepts

The primary use case for the Cross Platform API is video streaming from the device's different imagers. The flow can be broken down into the following sequence (a complete sketch follows the list):

  1. Create rs::context object:
    rs::context ctx;
  2. Get a pointer to an rs::device object from the context:
    rs::device * dev = ctx.get_device(0);
  3. Configure stream types you are interested in:
    // Configure depth to run at VGA resolution at 30 frames per second
    dev->enable_stream(rs::stream::depth, 640, 480, rs::format::z16, 30);
  4. Start streaming:
    dev->start();
  5. Wait (synchronously) until the next coherent set of frames:
    dev->wait_for_frames();
  6. Access relevant frame data:
    const void * frame_data = dev->get_frame_data(rs::stream::depth);
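
Putting these steps together, a minimal synchronous sketch might look like the following (the center-pixel depth readout and iteration count are illustrative choices, not prescribed by the API):

    #include <librealsense/rs.hpp>
    #include <cstdint>
    #include <iostream>

    int main() try
    {
        rs::context ctx;
        if (ctx.get_device_count() == 0) { std::cerr << "No device detected" << std::endl; return 1; }
        rs::device * dev = ctx.get_device(0);

        // Configure depth to run at VGA resolution at 30 frames per second
        dev->enable_stream(rs::stream::depth, 640, 480, rs::format::z16, 30);
        dev->start();

        for (int i = 0; i < 30; ++i)
        {
            dev->wait_for_frames();
            // z16 depth data is an array of 16-bit values; scale the center pixel to meters
            const uint16_t * depth = (const uint16_t *)dev->get_frame_data(rs::stream::depth);
            std::cout << "Distance at image center: "
                      << depth[240 * 640 + 320] * dev->get_depth_scale() << " m" << std::endl;
        }

        dev->stop();
        return 0;
    }
    catch (const rs::error & e) { std::cerr << e.what() << std::endl; return 1; }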

To get or set various camera controls, the Cross Platform API exposes the rs::option API. For example, to turn the infrared emitter on, the user could call:
dev->set_option(rs::option::r200_emitter_enabled, 1.0f);
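
Options can also be queried through the same API. A short sketch, checking support before touching a control (not every option exists on every device):

    // Verify the device actually exposes the control before using it
    if (dev->supports_option(rs::option::r200_emitter_enabled))
    {
        dev->set_option(rs::option::r200_emitter_enabled, 1.0f);
        double value = dev->get_option(rs::option::r200_emitter_enabled);
        std::cout << "Emitter enabled: " << value << std::endl;
    }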

Asynchronous Concepts

When blocking synchronization is not required, the Cross Platform API also provides asynchronous APIs for capturing frames and motion data.

To get a callback every time a frame of a certain stream becomes available, perform the following steps:

  1. Configure stream types you are interested in:
    // Configure depth to run at VGA resolution at 30 frames per second
    dev->enable_stream(rs::stream::depth, 640, 480, rs::format::z16, 30);
  2. Provide a callback function (either function name or a lambda):
    dev->set_frame_callback(rs::stream::depth, [](rs::frame f)
    {
        std::cout << f.get_timestamp() << " ms" << std::endl;
    });
  3. Start streaming:
    dev->start();
  4. No further action is required. The main thread can simply sleep and the application will receive a callback from a backend thread asynchronously:
    std::this_thread::sleep_for(std::chrono::seconds(10));

Notes regarding callbacks:

  • A call to dev->stop(); will block until all pending callbacks have returned and all frames have been released. As a direct side effect, it is illegal to call stop from the callback thread, as doing so would cause a deadlock.
  • The rs::frame object can be moved out of the callback thread without causing a deep-copy. It is common practice to move the frame into a queue and process it later from the main message loop, as in the sketch below.
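
A minimal sketch of this queue pattern follows; the queue, mutex, and function names here are illustrative, not part of the library:

    #include <librealsense/rs.hpp>
    #include <mutex>
    #include <queue>

    std::mutex frames_mutex;
    std::queue<rs::frame> frames_queue;

    // Register with dev->set_frame_callback(rs::stream::depth, on_depth_frame);
    // runs on a backend thread, so per the note above, never call dev->stop() from here.
    // Moving the frame into the queue avoids a deep-copy.
    void on_depth_frame(rs::frame f)
    {
        std::lock_guard<std::mutex> lock(frames_mutex);
        frames_queue.push(std::move(f));
    }

    // Call periodically from the main message loop
    void process_pending_frames()
    {
        std::lock_guard<std::mutex> lock(frames_mutex);
        while (!frames_queue.empty())
        {
            rs::frame f = std::move(frames_queue.front());
            frames_queue.pop();
            // ... process f.get_data() here ...
        }
    }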


ZR300 Asynchronous APIs

The Intel RealSense ZR300 camera is an evolution of the R200 device and is adapted for the needs of localization and motion tracking. In addition to R200 capabilities, the ZR300 provides the following:

  1. Wide field of view fish-eye sensor
  2. Accelerometer samples at 250 Hz, with a range of ±4 g
  3. Gyroscope samples at 200 Hz, with a range of ±1000 degrees/second
  4. Two GPIOs for external sensors
  5. A common microcontroller providing high-frequency timestamps for all connected components. Using microcontroller timestamps, the library is able to synchronize between the different video and motion streams.

When using ZR300 for motion tracking, you are strongly encouraged to work with asynchronous APIs to reduce latencies.

The flow for enabling video streaming together with motion tracking is as follows; a complete sketch appears after the list. (To enable motion alone, without video streaming, skip step 5.)

  1. Define a motion callback:
    auto motion_callback = [](rs::motion_data entry)
    {
        // entry.axes contains the motion data
        // entry.timestamp_data.timestamp provides the timestamp
        // entry.timestamp_data.source_id allows discriminating between different sources (gyro / accel / ...)
    };
  2. If you wish to receive timestamps from external sources (GPIOs), define a timestamp callback:
    auto timestamp_callback = [](rs::timestamp_data entry){};
  3. Enable motion tracking and register the callbacks (with or without the timestamp callback):
    dev->enable_motion_tracking(motion_callback, timestamp_callback);
  4. To make all components synchronize with each other, set the special option fisheye_strobe to 1:
    dev->set_option(rs::option::fisheye_strobe, 1);
    This must be done before streaming starts; it makes the camera signal the microcontroller on every frame, producing high-quality microcontroller timestamps.
  5. Enable streaming and set frame callbacks as in the previous example. It is recommended to check the frame timestamp domain in the callback, keeping only frames that successfully received a microcontroller timestamp:
    if (frame.get_frame_timestamp_domain() == rs::timestamp_domain::microcontroller) // Timestamp is good
  6. Start motion and video streaming simultaneously:
    dev->start(rs::source::all_sources);
    Unless they are started together, there may be a gap between the motion and video data.
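
Putting the whole flow together, a minimal sketch (the fish-eye resolution, format, and console output are illustrative choices):

    #include <librealsense/rs.hpp>
    #include <chrono>
    #include <iostream>
    #include <thread>

    int main() try
    {
        rs::context ctx;
        rs::device * dev = ctx.get_device(0);

        // Steps 1 + 2: motion callback and (optional) timestamp callback
        auto motion_callback = [](rs::motion_data entry)
        {
            std::cout << "IMU sample at " << entry.timestamp_data.timestamp << std::endl;
        };
        auto timestamp_callback = [](rs::timestamp_data entry) {};

        // Step 3: enable motion tracking and register the callbacks
        dev->enable_motion_tracking(motion_callback, timestamp_callback);

        // Step 4: have each video frame signal the microcontroller
        dev->set_option(rs::option::fisheye_strobe, 1);

        // Step 5: enable a video stream and register a frame callback
        dev->enable_stream(rs::stream::fisheye, 640, 480, rs::format::raw8, 30);
        dev->set_frame_callback(rs::stream::fisheye, [](rs::frame f)
        {
            // Keep only frames that received a microcontroller timestamp
            if (f.get_frame_timestamp_domain() == rs::timestamp_domain::microcontroller)
                std::cout << "Fish-eye frame at " << f.get_timestamp() << " ms" << std::endl;
        });

        // Step 6: start motion and video together to avoid a gap between them
        dev->start(rs::source::all_sources);
        std::this_thread::sleep_for(std::chrono::seconds(10));
        dev->stop(rs::source::all_sources);
        return 0;
    }
    catch (const rs::error & e) { std::cerr << e.what() << std::endl; return 1; }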


Samples

The library is provided with a range of useful examples, tools and tutorials. All sources are located under the /examples folder.

  • cpp-alignimages: Demonstrates how to use C++ and OpenGL to access and display all of the streams that the camera is capable of, using intrinsic and extrinsic camera information to perform stream-to-stream alignment.
  • cpp-callback: Demonstrates how to use C++ and OpenGL to access and display the color and depth streams of the camera, using the callback notification method for frame availability.
  • cpp-callback-2: Builds on cpp-callback, and demonstrates how to implement support for callback notification in a multithreaded application.
  • cpp-capture: Demonstrates how to use C++ and OpenGL to access and display all of the streams the camera is capable of.
  • cpp-config-ui: Demonstrates how to use C++ and OpenGL to access and display all of the streams the camera is capable of, and includes interactive controls for each stream's capabilities and adjustment options.
  • cpp-enumerate-devices: Demonstrates how to use C++ to enumerate the camera's stream capabilities and options, and display the camera version and features on the console.
  • cpp-headless: Captures 30 frames and writes the last frame to disk.
  • cpp-motion-module: Demonstrates how to use C++ to access the ZR300 camera's motion module data stream.
  • cpp-multicam: Demonstrates how to use C++ and OpenGL to access and display the streams from multiple cameras connected to the computer.
  • cpp-pointcloud: Demonstrates how to access multiple sensor streams (color and depth) from the camera using C++ and display them as a point cloud using OpenGL.
  • cpp-restart: Demonstrates how to use C++ and OpenGL to access and display streams from the camera and dynamically adjust which streams are in use through restarts.
  • cpp-stride: Builds on cpp-callback-2, and demonstrates how to implement callback notification in a multithreaded application with low latency and zero copy when working with aligned data.
  • cpp-tutorial-1-depth: Demonstrates how to access a single-sensor depth stream from the camera using C++ and display it as text data in the console.
  • cpp-tutorial-2-streams: Demonstrates how to access multiple sensor streams (color and depth) from the camera using C++ and display them using OpenGL.
  • cpp-tutorial-3-pointcloud: Demonstrates how to access multiple sensor streams (color and depth) from the camera using C++ and display them as a point cloud using OpenGL.
  • c-tutorial-1-depth: Demonstrates how to access a single-sensor depth stream from the camera using C and display it as text data in the console.
  • c-tutorial-2-streams: Demonstrates how to access multiple sensor streams (color and depth) from the camera using C and display them using OpenGL.
  • c-tutorial-3-pointcloud: Demonstrates how to access multiple sensor streams (color and depth) from the camera using C and display them as a point cloud using OpenGL.