The Intel® RealSense™ SDK has been discontinued. No ongoing support or updates will be available.
Download Demo Files ZIP 35KB
TouchDesigner*, created by Derivative*, is a popular platform/program used worldwide for interactivity and real-time animations during live performances, as well as for rendering 3D animation sequences, building mapping, installations and, recently, VR work. The support of the Intel® RealSense™ camera in TouchDesigner* makes it an even more versatile and powerful tool. Also useful is the ability to import objects and animations into TouchDesigner* from other 3D packages using .fbx files, as well as taking in rendered animations and images.
In this two-part article I explain how the Intel® RealSense™ camera is integrated into and can be used in TouchDesigner*. The demos in Part 1 use the Intel® RealSense™ camera TOP node. The demos in Part 2 use the CHOP node. In Part 2, I also explain how to create VR and full-dome sequences in combination with the Intel® RealSense™ camera, and I show how TouchDesigner*’s Oculus Rift node can be used in conjunction with the camera. Both Part 1 and Part 2 include animations and downloadable TouchDesigner* files (.toe files) that you can use to follow along. To get the .toe files, click on the button at the top of the article. In addition, a free noncommercial copy of TouchDesigner* is available; it is fully functional, except that the highest resolution is limited to 1280 by 1280.
Note: There are currently two types of Intel® RealSense™ cameras, the short range F200, and the longer-range R200. The R200 with its tiny size is useful for live performances and installations where a hidden camera is desirable. Unlike the larger F200 model, the R200 does not have finger/hand tracking and doesn’t support "Marker Tracking." TouchDesigner* supports both the F200 and the R200 Intel® RealSense™ cameras.
To quote from the TouchDesigner* web page, "TouchDesigner* is a revolutionary software platform which enables artists and designers to connect with their media in an open and freeform environment. Perfect for interactive multimedia projects that use video, audio, 3D, controller inputs, internet and database data, DMX lighting, environmental sensors, or basically anything you can imagine, TouchDesigner* offers a high performance playground for blending these elements in infinitely customizable ways."
I asked Malcolm Bechard, senior developer at Derivative, to comment on using the Intel® RealSense™ camera with TouchDesigner*:
"Using TouchDesigner*’s procedural node-based architecture, Intel® RealSense™ camera data can be immediately brought in, visualized, and then connected to other nodes without spending any time coding. Ideas can be quickly prototyped and developed with an instant-feedback loop. Being a native node in TouchDesigner* means there is no need to shutdown/recompile an application for each iteration of development. The Intel® RealSense™ camera augments TouchDesigner* capabilities by giving the users a large array of pre-made modules such as gesture, hand tracking, face tracking and image (depth) data, with which they can build interactions. There is no need to infer things such as gestures by analyzing the lower-level hand data; it’s already done for the user."
Using the Intel® RealSense™ Camera in TouchDesigner*
TouchDesigner* is a node-based platform/program that uses Python* as its main scripting language. There are six distinct categories of nodes that perform different operations and functions: TOP nodes (textures), SOP nodes (geometry), CHOP nodes (animation/audio data), DAT nodes (tables and text), COMP nodes (3D geometry nodes and nodes for building 2D control panels), and MAT nodes (materials). The programmers at Derivative*, consulting with Intel® programmers, designed two special nodes, the Intel® RealSense™ camera TOP node and the Intel® RealSense™ camera CHOP node, to integrate the Intel® RealSense™ camera into the program.
Note: This article is aimed at those familiar with using TouchDesigner* and its interface. If you are unfamiliar with TouchDesigner* and plan to follow along with this article step-by-step, I recommend that you first review some of the documentation and videos available here: Learning TouchDesigner*
Note: When using the Intel® RealSense™ camera, it is important to pay attention to its range for best results. On this Intel® web page you will find the range of each camera and best operating practices for using it.
Intel® RealSense™ Camera TOP Node
The TOP nodes in TouchDesigner* perform many of the same operations found in a traditional compositing program. The Intel® RealSense™ camera TOP node adds to these capabilities, making use of the 2D and 3D data that the Intel® RealSense™ camera feeds into it. The Intel® RealSense™ camera TOP node has a number of setup settings for acquiring different forms of data.
- Color. The video from the Intel® RealSense™ camera color sensor.
- Depth. A calculation of the depth of each pixel. 0 means the pixel is 0 meters from the camera, and 1 means the pixel is the maximum distance or more from the camera.
- Raw depth. Values taken directly from the Intel® RealSense™ SDK. Once again, 0 means the pixel is 0 meters from the camera, and 1 means it is at the maximum range or more away from the camera.
- Visualized depth. A gray-scale image from the Intel® RealSense™ SDK that can help you visualize the depth. It cannot be used to actually determine a pixel’s exact distance from the camera.
- Depth to color UV map. The UV values from a 32-bit floating RG texture (note, no blue) that are needed to remap the depth image to line up with the color image. You can use the Remap TOP node to align the images to match.
- Color to depth UV map. The UV values from a 32-bit floating RG texture (note, no blue) that are needed to remap the color image to line up with the depth image. You can use the Remap TOP node to align the two.
- Infrared. The raw video from the infrared sensor of the Intel® RealSense™ camera.
- Point cloud. Literally a cloud of points in 3D space (x, y, and z coordinates) or data points created by the scanner of the Intel® RealSense™ camera.
- Point cloud color UVs. Can be used to get each point’s color from the color image stream.
Note: You can download this .toe file, RealSensePointCloudForArticle.toe, to use as a simple starting template for creating animated 3D geometry from the data of the Intel® RealSense™ camera. This file can be modified and changed in many ways. Together, the three Intel® RealSense™ camera TOP settings (Point Cloud, Color, and Point Cloud Color UVs) can create a 3D geometry composed of points (particles) with the color image mapped onto it. This creates many exciting possibilities.
Intel® RealSense™ Camera CHOP Node
Note: There is also an Intel® RealSense™ camera CHOP node that controls the 3D tracking/position data, which we will discuss in Part 2 of this article.
Demo 1: Using the Intel® RealSense™ Camera TOP Node
Click on the button on top of the article to get the First TOP Demo: settingUpRealNode2b_FINAL.toe
Demo 1, part 1: You will learn how to set up the Intel® RealSense™ camera TOP node and then connect it to other TOP nodes.
- Open the Add Operator/OP Create dialog window.
- Under the TOP section, click RealSense.
- On the Setup parameters page for the Intel® RealSense™ camera TOP node, for Image select Color from the drop-down menu. In the Intel® RealSense™ camera TOP node, the image of what the camera is pointing to shows up, just as in a video camera.
- Set the resolution of the Intel® RealSense™ Camera to 1920 by 1080.
- Create a Level TOP and connect it to the Intel® RealSense™ camera TOP node.
- In the Pre parameters page of the Level TOP Node, choose Invert and slide the slider to 1.
- Connect the Level TOP node to an HSV To RGB TOP node and then connect that to a Null TOP node.
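For readers who want to see the math behind these two TOPs, here is a rough plain-Python sketch (not TouchDesigner*'s actual implementation) of what an Invert level of 1 and an HSV-to-RGB conversion do to a single normalized pixel, using Python's standard colorsys module:

```python
import colorsys

def invert(pixel):
    """Level TOP 'Invert' slider at 1.0: each channel becomes 1 - value."""
    return tuple(1.0 - c for c in pixel)

def hsv_to_rgb(pixel):
    """HSV To RGB TOP: reinterpret the incoming channels as
    hue, saturation, and value, and convert them to RGB."""
    h, s, v = pixel
    return colorsys.hsv_to_rgb(h, s, v)

# A pure-red camera pixel, inverted and then reinterpreted as HSV:
pixel = (1.0, 0.0, 0.0)
inverted = invert(pixel)        # (0.0, 1.0, 1.0)
result = hsv_to_rgb(inverted)   # hue 0 at full saturation/value -> (1.0, 0.0, 0.0)
```

Chaining the two nodes this way recolors the camera image, which is exactly why the result makes an interesting texture in the next steps.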
Next we will put this created image into the Phong MAT (Material) so we can texture geometries with it.
Using the Intel® RealSense™ Camera Data to Create Textures for Geometries
Demo 1, part 2: This exercise shows you how to use the Intel® RealSense™ camera TOP node to create textures and how to add them into a MAT node that can then be assigned to the geometry in your project.
- Add a Geometry (geo) COMP node into your scene.
- Add a Phong MAT node.
- Take the Null TOP node and drag it onto the Color Map parameter of your Phong MAT node.
- On the Render parameter page of your Geo COMP, in the Material parameter, type phong1 to make it use the phong1 node as its material.
Creating the Box SOP and Texturing it with the Just Created Phong Shader
Demo 1, part 3: You will learn how to assign the Phong MAT shader you created using the Intel® RealSense™ camera data to a box Geometry SOP.
- Go into the geo1 node, to its child level (/project1/geo1).
- Create a Box SOP node, a Texture SOP node, and a Material SOP node.
- Delete the Torus SOP node that was there, then connect the box1 node to the texture1 node and the texture1 node to the material1 node.
- In the Material parameter of the material1 node, enter ../phong1, which refers it to the phong1 MAT node you created at the parent level.
- To put the texture on each face of the box, in the texture1 node's parameters set Texture/Texture Type to face and set the Texture/Offset to .5 .5 .5.
Animating and Instancing the Box Geometry
Demo 1, part 4: You will learn how to rotate a Geometry SOP using the Transform SOP node and a simple expression. Then you will learn how to instance the Box geometry. We will end up with a screen full of rotating boxes with the textures from the Intel® RealSense™ camera TOP node on them.
- To animate the box rotating on the x-axis, insert a Transform SOP node after the Texture SOP node.
- Put an expression into the x component (first field) of the Rotate parameter in the transform1 SOP node. Use an expression based on elapsed time rather than frames, so the rotation keeps going and does not start repeating when the frames on the timeline run out; multiply by 10 to increase the speed.
- To make the boxes instance, go up to the parent level (/project1) and, on the Instance page parameters of the geo1 COMP node, turn Instancing to On.
- Add a Grid SOP node and a SOP to DAT node.
- Set the grid parameters to 10 Rows and 10 Columns, and set the size large enough to spread the instances apart.
- In the SOP to DAT node parameters, for SOP put grid1 and make sure Extract is set to Points.
- On the Instance page parameters of the geo1 COMP, for Instance CHOP/DAT enter sopto1.
- Fill in the TX, TY, and TZ parameters with P(0), P(1), and P(2) respectively to specify which columns from the sopto1 node to use for the instance positions.
- If you prefer to see the image in the Intel® RealSense™ camera unfiltered, disconnect or bypass the Level TOP node and the HSV to RGB TOP node.
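To make the math behind these steps concrete, here is a hedged plain-Python sketch. The rotation is time-based rather than frame-based (in TouchDesigner* the parameter expression would be something along the lines of absTime.seconds * 10, though the exact expression used in the original demo is not shown above), and the grid function approximates the point positions the Grid SOP exposes through the SOP to DAT node as the P(0), P(1), P(2) columns. The grid size of 20 is an illustrative assumption:

```python
def rotate_x(seconds, speed=10.0):
    """Time-based rotation in degrees: independent of the timeline's
    frame range, so it never jumps when the timeline loops. The
    multiply-by-10 matches the speed-up described in the article."""
    return (seconds * speed) % 360.0

def grid_positions(rows=10, cols=10, size=20.0):
    """Approximate a 10 x 10 Grid SOP: XY point positions centered on
    the origin, one instance position per box. size=20.0 is assumed."""
    pts = []
    for r in range(rows):
        for c in range(cols):
            x = (c / (cols - 1) - 0.5) * size
            y = (r / (rows - 1) - 0.5) * size
            pts.append((x, y, 0.0))
    return pts

print(rotate_x(3.0))          # 30.0 degrees after 3 seconds
print(len(grid_positions()))  # 100 instance positions, one per box
```

Because the instancing reads one row of the DAT per copy, a 10 x 10 grid gives you exactly 100 rotating, textured boxes.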
Rendering or Performing the Animation Live
Demo 1, part 5: You will learn how to set up a scene to be rendered and either performed live or rendered out as a movie file.
- To render the project, add in a Camera COMP node, a Light COMP node, and a Render TOP node. By default the camera will render all the Geometry components in the scene.
- Translate your camera about 20 units back on the z-axis. Leave the light at the default setting.
- Set the resolution of the render to 1920 by 1080. By default the background of a render is transparent (alpha of 0).
- To make this an opaque black behind the squares, add in a Constant TOP node and change the Color to 0,0,0 so it is black while leaving the Alpha as 1. You can choose another color if you want.
- Add in an Over TOP node and connect the Render TOP node to its first input and the Constant TOP node to its second input. This makes the background pixels of the render (0, 0, 0, 1), which is no longer transparent.
Another way to change the alpha of a TOP to 1 is to use a Reorder TOP and set its Output Alpha parameter to One.
If you prefer to render out the animation instead of playing it in real time in a performance, choose the Export Movie dialog box under File in the top bar of the TouchDesigner* program. In the TOP Video parameter, enter null2 for this particular example; otherwise, enter whichever TOP node you want to render.
Demo 1, part 6: One of the things that makes TouchDesigner* a special platform is the ability to do real-time performance animations with it. This makes it especially good when paired with the Intel® RealSense™ Camera.
- Add a Window COMP node and, in its Operator parameter, enter the node whose output you want to display.
- Set the resolution to 1920 by 1080.
- Choose the monitor you want in the Location parameter. The Window COMP node lets you perform the entire animation in real time on the monitor or projector you specify.
Demo 2: Using the Intel® RealSense™ Camera TOP Node Depth Data
The Intel® RealSense™ camera TOP node has a number of other settings that are useful for creating textures and animation.
In demo 2, we use the depth data to apply a blur on an image based on depth data from the camera. Click on the button on top of the article to get this file: RealSenseDepthBlur.toe
First, create an Intel® RealSense™ camera TOP and set its Image parameter to Depth. The depth image has pixels that are 0 (black) if they are close to the camera and 1 (white) if they are far away. The range of the pixel values is controlled by the Max Depth parameter, which is specified in meters. By default it has a value of 5, which means pixels 5 or more meters from the camera will be white, and a pixel with a value of 0.5 will be 2.5 meters from the camera. Depending on how far the camera is from you, changing this value to something smaller may work better. For this example we changed it to 1.5 meters.
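The mapping between normalized depth pixels and real distances is a simple linear scale. The sketch below (plain Python, not part of TouchDesigner*) mirrors the arithmetic just described:

```python
def depth_to_meters(pixel_value, max_depth=5.0):
    """Convert a normalized Depth-image pixel (0..1) to meters.
    max_depth mirrors the Max Depth parameter on the RealSense TOP."""
    return pixel_value * max_depth

def meters_to_depth(meters, max_depth=5.0):
    """Inverse mapping, clamped at 1.0, since anything at or beyond
    Max Depth renders as white."""
    return min(meters / max_depth, 1.0)

print(depth_to_meters(0.5))                  # 2.5 meters at the default Max Depth of 5
print(meters_to_depth(1.2, max_depth=1.5))   # about 0.8 with Max Depth set to 1.5
```

That 0.8 figure is exactly the value we will feed the Threshold TOP in the next step.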
Next we want to process the depth a bit to remove objects outside our range of interest, which we will do using a Threshold TOP.
- Create a Threshold TOP and connect it to the realsense1 node. We want to cull out pixels that are beyond a certain distance from the camera, so set the Comparator parameter to Greater and set the Threshold parameter to 0.8. This makes pixels that are greater than 0.8 (1.2 meters or more, with Max Depth in the Intel® RealSense™ camera TOP set to 1.5) become 0, and all other pixels become 1.
- Create a Multiply TOP and connect the realsense1 node to the first input and the thresh1 node to the second input. Multiplying the pixels we want by 1 leaves them as-is; multiplying the others by 0 makes them black. The multiply1 node now has pixels greater than 0 only in the part of the image whose blur we want to control next.
- Create a Movie File in TOP, and select a new image for its File parameter. In this example we select Metter2.jpg from the TouchDesigner* Samples/Map directory.
- Create a Luma Blur TOP and connect moviefilein1 to the first input of lumablur1 and multiply1 to the second input of lumablur1.
- In the parameters for lumablur1, set White Value to 0.4, Black Filter Width to 20, and White Filter Width to 1. This makes pixels where the first input is 0 have a blur filter width of 20, and pixels with a value of 0.4 or greater have a blur width of 1.
The result is an image in which the pixels where the user is located stay sharp, while the other pixels are blurred.
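Put together, the Threshold, Multiply, and Luma Blur steps amount to simple per-pixel math. The sketch below models it in plain Python on one row of depth pixels; the linear falloff in blur_width is an assumption about how the Luma Blur TOP interpolates between its Black and White Filter Widths, not a documented formula:

```python
def threshold_mask(depth, cutoff=0.8):
    """Threshold step as described above: depth values greater than the
    cutoff (1.2 m with Max Depth at 1.5) become 0, the rest become 1."""
    return [0.0 if d > cutoff else 1.0 for d in depth]

def multiply(depth, mask):
    """Multiply TOP: masked-out pixels go to 0, the rest pass through."""
    return [d * m for d, m in zip(depth, mask)]

def blur_width(luma, black_width=20.0, white_width=1.0, white_value=0.4):
    """Assumed Luma Blur behavior: luma 0 -> black_width, luma at or
    above white_value -> white_width, linear in between."""
    t = min(luma / white_value, 1.0)
    return black_width + (white_width - black_width) * t

depth_row = [0.2, 0.5, 0.9, 1.0]     # one row of normalized depth pixels
mask = threshold_mask(depth_row)      # [1.0, 1.0, 0.0, 0.0]
print(multiply(depth_row, mask))      # [0.2, 0.5, 0.0, 0.0]
print(blur_width(0.0), blur_width(0.4))  # 20.0 1.0
```

The masked image drives the blur width per pixel, which is why the person close to the camera stays in focus while the background smears.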
Demo 3: Using the Intel® RealSense™ Camera TOP Node Depth Data with the Remap TOP Node
Click on the button at the top of the article to get this file: RealSenseRemap.toe
Note: The depth and color sensors of the Intel® RealSense™ camera are in different physical locations, so by default their resulting images do not line up. For example, if your hand is positioned in the middle of the color image, it won't be in the middle of the depth image; it will be off to the left or right a bit. The UV remap fixes this by shifting the pixels around so they align on top of each other. Notice the difference between the aligned and unaligned TOPs.
Demo 4: Using Point Cloud in the Intel® RealSense™ Camera TOP Node
Click on the button on top of the article to get this file: PointCloudLimitEx.toe
In this exercise you learn how to create animated geometry using the Intel® RealSense™ camera TOP node's Point Cloud setting and the Limit SOP node. Note that this technique is different from the point cloud example file shown at the beginning of this article. The previous example uses GLSL shaders, which makes it possible to generate far more points, but it is more complex and outside the scope of this article.
- Create a RealSense™ TOP node and set its Image parameter to Point Cloud.
- Create a TOP to CHOP node and connect it to a Select CHOP node.
- Connect the Select CHOP node to a Math CHOP node.
- In the topto1 CHOP node's TOP parameter, enter realsense1.
- In the Select CHOP node's Channel Names parameter, enter r g b, leaving a space between the letters.
- In the math1 CHOP node, enter a value in the Multiply parameter to scale the point positions.
- On the Range parameters page, for To Range, enter: 1 and 7.
- Create a Limit SOP node.
To quote from the information on the www.derivative.ca online wiki page, "The Limit SOP creates geometry from samples fed to it by CHOPs. It creates geometry at every point in the sample. Different types of geometry can be created using the Output Type parameter on the Channels Page."
- On the limit1 SOP's Channels parameters page, enter r in the X Channel, g in the Y Channel, and b in the Z Channel.
Note: Switching the r, g, and b channels to different X, Y, or Z channels changes the geometry being generated, so you might want to try that later. On the Output parameters page, for Output Type, select Sphere at Each Point from the drop-down. Create a SOP to DAT node; in its parameters page, for SOP, put in limit1 or drag your limit1 SOP into the parameter, and keep the default setting of Points in the Extract parameter. Create a Render TOP node, a Camera COMP node, and a Light COMP node. Create a Reorder TOP, set its Output Alpha to One, and connect it to the Render TOP.
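The Math CHOP's Range page performs a standard linear remap from its From Range to its To Range. In plain Python (a sketch, not TouchDesigner*'s code), the mapping used above, with the default 0 to 1 input mapped onto the 1 to 7 To Range, looks roughly like this:

```python
def remap(value, from_range=(0.0, 1.0), to_range=(1.0, 7.0)):
    """Linearly map a value from from_range onto to_range, mirroring
    the Math CHOP Range page. (1, 7) matches the To Range entered above."""
    f0, f1 = from_range
    t0, t1 = to_range
    return t0 + (value - f0) * (t1 - t0) / (f1 - f0)

print(remap(0.0))  # 1.0
print(remap(0.5))  # 4.0
print(remap(1.0))  # 7.0
```

So a channel value of 0.5 coming out of the Select CHOP becomes 4, spreading the point cloud across a larger volume before the Limit SOP turns the samples into geometry.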
In Part 2 of this article we will discuss the Intel® RealSense™ camera CHOP and how to create content both rendered and in real-time for performances, Full Dome shows, and VR. We will also show how to use the Oculus Rift CHOP node. Hand tracking, face tracking and marker tracking will be discussed.
About the Author
Audri Phillips is a visualist/3D animator based in Los Angeles, with a wide range of experience that includes over 25 years working in the visual effects/entertainment industry in studios such as Sony*, Rhythm and Hues*, Digital Domain*, Disney*, and DreamWorks* feature animation. Starting out as a painter, she was quickly drawn to time-based art. Always interested in using new tools, she has been a pioneer of using computer animation/art in experimental film work, including immersive performances. Now she has taken her talents into the creation of VR. Samsung* recently curated her work into their new Gear Indie Milk VR channel.
Her latest immersive works/animations include multimedia animations for "Implosion a Dance Festival" 2015 at the Los Angeles Theater Center, and three full-dome concerts in the Vortex Immersion dome, one with the well-known composer/musician Steve Roach. She has a fourth upcoming full-dome concert, "Relentless Universe," on November 7th, 2015. She also created animated content for the dome show for the TV series "Constantine*," shown at the 2014 Comic-Con convention. Several of her full-dome pieces, "Migrations" and "Relentless Beauty," have been juried into "Currents," the Santa Fe International New Media Festival, and the Jena FullDome Festival in Germany. She exhibits in the Young Projects gallery in Los Angeles.
She writes online content and a blog for Intel®. Audri is an Adjunct professor at Woodbury University, a founding member and leader of the Los Angeles Abstract Film Group, founder of the Hybrid Reality Studio (dedicated to creating VR content), a board member of the Iota Center, and she is also an exhibiting member of the LA Art Lab. In 2011 Audri became a resident artist of Vortex Immersion Media and the c3: CreateLAB.