The Intel® RealSense™ SDK has been discontinued. No ongoing support or updates will be available.
In a previous article, Best UX Practices for Intel® RealSense™ Camera (F200) Applications, we shared a series of 15 short videos recorded by members of the Experience Design and Development team within Intel's Perceptual Computing (PerC) Group. Those videos cover best practices for developing natural user interface (NUI) applications for the user-facing F200 and SR300 cameras with the Intel® RealSense™ SDK.
To help you implement those UX best practices, we've created six more short videos covering the technical best practices for using the software and hardware to best effect. Topics include Boundary Boxes, Capture Volume, Interaction Zone, Occlusion, Speed and Precision, and World Space.
Intel® RealSense™ Camera (F200 or SR300) Tech Tips: Capture Volume
The capture volume, or field of view, differs between the color and depth cameras. Developers also need to keep in mind how users interact with the camera on different form factors. Understanding how to determine the field of view for both the color and depth cameras, as well as the camera's effective range, will help you provide visual feedback that shows users when they move out of the camera's detection zone. In this video, we show some of the APIs for getting this data in real time.
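Once you have queried the camera's field of view and effective range, deciding whether a tracked point is still inside the capture volume is a simple frustum test. The sketch below illustrates that test; the FOV and range numbers are illustrative placeholders, not values from the SDK — query the device at run time for the real ones.

```python
import math

def in_capture_volume(x, y, z, h_fov_deg, v_fov_deg, min_range, max_range):
    """Check whether a camera-space point (meters, +z pointing away from
    the camera) lies inside the pyramidal frustum defined by the camera's
    horizontal/vertical field of view and its effective depth range."""
    if not (min_range <= z <= max_range):
        return False
    half_h = math.radians(h_fov_deg / 2.0)
    half_v = math.radians(v_fov_deg / 2.0)
    # At depth z, the visible half-width is z * tan(half FOV angle).
    return abs(x) <= z * math.tan(half_h) and abs(y) <= z * math.tan(half_v)

# Illustrative numbers only: a 90 x 59 degree FOV and a 0.2-1.2 m range.
print(in_capture_volume(0.0, 0.0, 0.5, 90, 59, 0.2, 1.2))  # centered, in range
print(in_capture_volume(0.8, 0.0, 0.5, 90, 59, 0.2, 1.2))  # outside horizontal FOV
```

When the test fails, the app can raise the visual feedback discussed in the video rather than silently losing the user.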
Intel® RealSense™ (F200 or SR300) Tech Tips: Interaction Zone
The fidelity and capture volumes of the color and depth cameras differ within the F200 camera, and the interaction zone depends on the algorithm you want to use. Detecting the interaction zone and operating within it may not be obvious to the end user. This video shows how developers can detect the interaction zones for the hand and face modules through the alerts built into the SDK middleware, and build an effective visual feedback mechanism that tells end users when they need to adjust to the interaction zone.
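In practice, handling these alerts usually comes down to mapping each alert to a corrective prompt for the user. A minimal sketch of that mapping follows; the alert names are modeled on the SDK hand module's alert identifiers but are treated as plain strings here so the sketch runs without the SDK.

```python
# Map interaction-zone alerts to on-screen guidance. Alert names mirror
# the style of the SDK's hand-module alerts; treat them as placeholders.
FEEDBACK = {
    "ALERT_HAND_TOO_CLOSE":      "Move your hand away from the camera",
    "ALERT_HAND_TOO_FAR":        "Move your hand closer to the camera",
    "ALERT_HAND_OUT_OF_BORDERS": "Move your hand back into view",
    "ALERT_HAND_DETECTED":       None,  # inside the zone: no prompt needed
}

def feedback_for(alert_name):
    """Return a user-facing prompt for an alert, or None when no
    corrective action is required."""
    return FEEDBACK.get(alert_name)

print(feedback_for("ALERT_HAND_TOO_FAR"))
```

The same pattern extends to the face module's alerts; only the table of prompts changes.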
Intel® RealSense™ (F200 or SR300) Tech Tips: Boundary Boxes
Object interaction is a key component of most RealSense apps, so it is important to understand the problems users can run into when objects are placed incorrectly on the UI. In this video, we introduce the bounding boxes the SDK supports for handling objects effectively within the interaction zones, and show you how to use the SDK APIs to implement bounding boxes in your apps as a visual feedback mechanism.
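A common use of a bounding box as feedback is to warn the user before a tracked object drifts off screen. The check itself is simple 2D geometry, sketched below with assumed pixel coordinates (the real box would come from the SDK's tracking data).

```python
def near_boundary(box, frame_w, frame_h, margin=20):
    """Return True when a tracked object's 2D bounding box (x, y, w, h,
    in pixels) comes within `margin` pixels of the frame edge - the cue
    to show a warning before tracking is lost entirely."""
    x, y, w, h = box
    return (x < margin or y < margin or
            x + w > frame_w - margin or
            y + h > frame_h - margin)

print(near_boundary((300, 200, 80, 80), 640, 480))  # well inside the frame
print(near_boundary((590, 200, 80, 80), 640, 480))  # spills past the right edge
```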
Intel® RealSense™ (F200 or SR300) Tech Tips: Occlusion
Because RealSense modalities involve non-tactile interaction, it is hard for users to tell when a hand or face is occluded by another body part or by an object. In this video, we discuss the supported, partially supported, and unsupported occlusion scenarios for the hand and face, the alert mechanisms available in the SDK, and how to use them to give end users visual feedback when occlusion or loss of tracking occurs.
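One way to surface loss-of-tracking alerts is a small state holder that sets a message when tracking is lost and clears it on recovery. This is an illustrative pattern, not SDK code; the alert names are modeled on the hand module's style and stand in as plain strings.

```python
class TrackingFeedback:
    """Turn occlusion/loss alerts into a message the UI can display,
    and clear it when tracking resumes. Alert names are placeholders
    modeled on the SDK's hand-module alert identifiers."""
    def __init__(self):
        self.message = None

    def on_alert(self, alert_name):
        if alert_name == "ALERT_HAND_NOT_DETECTED":
            self.message = "Hand lost - make sure it isn't blocked"
        elif alert_name == "ALERT_HAND_DETECTED":
            self.message = None  # tracking recovered, clear the warning
        return self.message

fb = TrackingFeedback()
print(fb.on_alert("ALERT_HAND_NOT_DETECTED"))
print(fb.on_alert("ALERT_HAND_DETECTED"))
```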
Intel® RealSense™ (F200 or SR300) Tech Tips: Speed and Precision
The precision you can get with each Intel® RealSense™ SDK algorithm varies with the speed of interaction. In this video, we provide guidance on using the SDK to accommodate different speeds of operation and the precision to expect at each, and introduce some of the utilities that help improve precision. We also show how to implement alerts specific to the speed of operation and how to translate them into visual feedback when they are raised.
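The speed/precision trade-off is typically handled by smoothing the tracked coordinates. As a stand-in for the SDK's smoothing utilities, here is a minimal exponential filter: a lower alpha gives steadier (more precise) output at the cost of lag, so fast gestures favor a higher alpha than fine pointing does.

```python
def smooth(samples, alpha=0.5):
    """Exponentially smooth a sequence of tracked coordinate samples.
    alpha in (0, 1]: lower values damp jitter more but respond slower."""
    out = []
    value = None
    for s in samples:
        # First sample seeds the filter; later ones blend with history.
        value = s if value is None else alpha * s + (1 - alpha) * value
        out.append(value)
    return out

noisy = [100, 104, 96, 102, 98]
print(smooth(noisy, alpha=0.25))  # jitter is damped toward ~100
```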
Intel® RealSense™ (F200 or SR300) Tech Tips: World Space
When developing RealSense apps, it is important for developers to understand how to translate world space (the area the camera can see) to screen space and vice versa. In this video, we demonstrate the Projection tool that is installed as part of the SDK and walk through a visualization of the translation between screen space and world space, as well as color-to-depth projection.
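Underneath, this translation is a pinhole projection using the camera's calibrated intrinsics. The sketch below shows the round trip with assumed intrinsics (focal lengths and principal point are illustrative; in a real app they come from the device's calibration, not hard-coded values).

```python
def world_to_screen(x, y, z, fx, fy, cx, cy):
    """Project a camera-space point (meters) to pixel coordinates with a
    pinhole model: scale by focal length, divide by depth, shift to the
    principal point."""
    return (fx * x / z + cx, fy * y / z + cy)

def screen_to_world(u, v, z, fx, fy, cx, cy):
    """Back-project a pixel plus its depth value to camera space."""
    return ((u - cx) * z / fx, (v - cy) * z / fy, z)

# Assumed intrinsics: fx = fy = 600 px, principal point at image center.
u, v = world_to_screen(0.1, 0.0, 0.5, 600, 600, 320, 240)
print(u, v)                                             # lands right of center
print(screen_to_world(u, v, 0.5, 600, 600, 320, 240))   # recovers (0.1, 0.0, 0.5)
```

Color-to-depth projection adds one more step: the two cameras have different intrinsics and a small offset between them, which is why the SDK's Projection interface, rather than hand-rolled math, should do this mapping in production code.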