LibRealSense use

Hello,

I've seen this on the product information:

"The Camera is intended solely for use by developers with the Intel® RealSense SDK, solely for the purposes of developing applications using Intel RealSense technology."

Does this mean that the LibRealSense API cannot be used and only the RealSense SDK can, or is LibRealSense also fine?

Thank you!

 

Best Reply

Hi Jessica,

You can use the LibRealSense API, but as mentioned in the project's GitHub README, this library only covers camera capture functionality, without additional computer vision algorithms. The functionality offered is as follows:

  1. Native streams: depth, color, infrared and fisheye.
  2. Synthetic streams: rectified images, depth aligned to color and vice versa, etc.
  3. Intrinsic/extrinsic calibration information.
  4. Majority of hardware-specific functionality for individual camera generations (UVC XU controls).
  5. Multi-camera capture across heterogeneous camera architectures (e.g. mixing R200 and F200 in the same application).
  6. Motion-tracking sensor acquisition (ZR300 only).

So as long as you're planning to use just these features, you can use the LibRealSense API.
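As a rough illustration of item 1 (native streams), a minimal depth-capture sketch with the legacy librealsense 1.x C++ API (the generation that supports R200/F200/ZR300) might look like the following. This is a sketch based on the upstream tutorials, not a definitive implementation, and it requires a connected camera to run:

```cpp
#include <librealsense/rs.hpp>  // legacy librealsense 1.x header
#include <cstdio>
#include <cstdint>

int main() try
{
    rs::context ctx;
    if (ctx.get_device_count() == 0)
    {
        std::printf("No RealSense device connected.\n");
        return 1;
    }

    rs::device * dev = ctx.get_device(0);

    // Enable a native depth stream and start streaming
    dev->enable_stream(rs::stream::depth, rs::preset::best_quality);
    dev->start();

    // Block until a coherent set of frames is available
    dev->wait_for_frames();

    // Depth data arrives as 16-bit values, one per pixel
    const uint16_t * depth =
        reinterpret_cast<const uint16_t *>(dev->get_frame_data(rs::stream::depth));

    // Intrinsics give the stream resolution (item 3 in the list above)
    rs::intrinsics intrin = dev->get_stream_intrinsics(rs::stream::depth);
    std::printf("Center depth value: %u (frame %dx%d)\n",
                depth[intrin.width / 2 + (intrin.height / 2) * intrin.width],
                intrin.width, intrin.height);

    return 0;
}
catch (const rs::error & e)
{
    std::printf("librealsense error: %s\n", e.what());
    return 1;
}
```

Note that this is the 1.x-era API; the later librealsense2 library uses a different (`rs2::`) namespace and pipeline model.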

Quote:

Rishabh Banga wrote:

You can use the LibRealSense API, but as mentioned in their Github's README, this library only encompasses camera capture functionality without additional computer vision algorithms.

 

Thanks so much, Rishabh. Do you have any opinion on which would be better to use, the librealsense API or the official SDK, in the case that librealsense meets my required capabilities?

 

Quote:

Jessica N. wrote:

Thanks so much Rishabh, do you have any opinion as to which would be better to use the librealsense API or the official SDK, in the case that the librealsense meets my required capabilities?

Hi Jessica,

Happy to help! That really depends on you. The librealsense API is a slightly less stable version of the official SDK, with limited capabilities but far better support. So if you think you can manage everything on your own (research, code integration, etc.), you can go ahead with the official SDK; otherwise, the librealsense API would be the perfect choice for you.

Hello

 

Most of the tutorials only use the Ubuntu platform for librealsense, but I have also read that it can be used with Windows as well. Has anyone used it with Windows, or does anyone know the steps for installation and use?

My use case would be hand tracking with multiple cameras for added accuracy.

Any comments or help would be very appreciated.

Cheers

Hi Pooja,

Yes, just clone the C++ code locally and compile it with Visual Studio (https://github.com/IntelRealSense/librealsense).

Inside this repository you have some examples to get you started.
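The steps above can be sketched roughly as follows. The solution filename and MSBuild invocation are assumptions based on the repository layout of the legacy 1.x library at the time; you can equally just open the `.sln` file in the Visual Studio IDE:

```shell
# Fetch the sources (Git for Windows, run from a Developer Command Prompt)
git clone https://github.com/IntelRealSense/librealsense.git
cd librealsense

# Build from the command line with MSBuild
# (solution name assumed; alternatively open it in the Visual Studio IDE)
msbuild librealsense.sln /p:Configuration=Release /p:Platform=x64
```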

If you are interested in using C# and .NET, I'm porting the library to C#, but it's not finished yet. Stay tuned :)

Sorry, I didn't see the hand tracking requirement.

Librealsense doesn't provide hand tracking out of the box. You will have to get the streams and use them together with OpenCV for that.

Here is one example on github: https://github.com/bobdavies2000/handSample
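As a hedged sketch of that hand-off from librealsense to OpenCV: a legacy 1.x frame buffer can be wrapped in a `cv::Mat` without copying. The 640x480 stream size, the `z16`/`bgr8` formats, and the display loop are illustrative assumptions, not requirements, and this needs a connected camera plus OpenCV to run:

```cpp
#include <librealsense/rs.hpp>   // legacy librealsense 1.x
#include <opencv2/opencv.hpp>

int main()
{
    rs::context ctx;
    if (ctx.get_device_count() == 0) return 1;

    rs::device * dev = ctx.get_device(0);
    dev->enable_stream(rs::stream::depth, 640, 480, rs::format::z16, 30);
    dev->enable_stream(rs::stream::color, 640, 480, rs::format::bgr8, 30);
    dev->start();

    while (true)
    {
        dev->wait_for_frames();

        // Wrap the raw frame buffers; no pixel data is copied
        cv::Mat depth(480, 640, CV_16UC1,
                      const_cast<void *>(dev->get_frame_data(rs::stream::depth)));
        cv::Mat color(480, 640, CV_8UC3,
                      const_cast<void *>(dev->get_frame_data(rs::stream::color)));

        // From here, any OpenCV pipeline (thresholding, contours, etc.)
        // can be applied to build hand tracking on top of the streams.
        cv::imshow("depth", depth * 16);  // scale 16-bit depth for visibility
        cv::imshow("color", color);
        if (cv::waitKey(1) == 27) break;  // Esc to quit
    }
    return 0;
}
```

Because the `cv::Mat` wrappers point into librealsense's internal buffers, clone them (`depth.clone()`) if you need the pixels to outlive the next `wait_for_frames()` call.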
