By Audri Phillips
Published: 10/27/2015, Last Updated: 10/27/2015
The Intel® RealSense™ SDK has been discontinued. No ongoing support or updates will be available.
Download Demo Files (ZIP, 35 KB)
TouchDesigner*, created by Derivative*, is a popular platform/program used worldwide for interactivity and real-time animations during live performances, as well as for rendering 3D animation sequences, building mapping, installations, and, recently, VR work. The support of the Intel® RealSense™ camera in TouchDesigner* makes it an even more versatile and powerful tool. Also useful is the ability to import objects and animations into TouchDesigner* from other 3D packages using .fbx files, as well as taking in rendered animations and images.
In this two-part article I explain how the Intel® RealSense™ camera is integrated into and can be used in TouchDesigner*. The demos in Part 1 use the Intel® RealSense™ camera TOP node. The demos in Part 2 use the CHOP node. In Part 2, I also explain how to create VR and full-dome sequences in combination with the Intel® RealSense™ camera, and I show how TouchDesigner*'s Oculus Rift node can be used in conjunction with the Intel® RealSense™ camera. Both Part 1 and Part 2 include animations and downloadable TouchDesigner* (.toe) files, which can be used to follow along. To get the .toe files, click on the button at the top of the article. In addition, a free noncommercial copy of TouchDesigner*, which is fully functional (except that the highest resolution is limited to 1280 by 1280), is available.
Note: There are currently two types of Intel® RealSense™ cameras, the short range F200, and the longer-range R200. The R200 with its tiny size is useful for live performances and installations where a hidden camera is desirable. Unlike the larger F200 model, the R200 does not have finger/hand tracking and doesn’t support "Marker Tracking." TouchDesigner* supports both the F200 and the R200 Intel® RealSense™ cameras.
To quote from the TouchDesigner* web page, "TouchDesigner* is a revolutionary software platform which enables artists and designers to connect with their media in an open and freeform environment. Perfect for interactive multimedia projects that use video, audio, 3D, controller inputs, internet and database data, DMX lighting, environmental sensors, or basically anything you can imagine, TouchDesigner* offers a high performance playground for blending these elements in infinitely customizable ways."
I asked Malcolm Bechard, senior developer at Derivative, to comment on using the Intel® RealSense™ camera with TouchDesigner*:
"Using TouchDesigner*’s procedural node-based architecture, Intel® RealSense™ camera data can be immediately brought in, visualized, and then connected to other nodes without spending any time coding. Ideas can be quickly prototyped and developed with an instant-feedback loop.Being a native node in TouchDesigner* means there is no need to shutdown/recompile an application for each iteration of development.The Intel® RealSense™ camera augments TouchDesigner* capabilities by giving the users a large array of pre-made modules such as gesture, hand tracking, face tracking and image (depth) data, with which they can build interactions. There is no need to infer things such as gestures by analyzing the lower-level hand data; it’s already done for the user."
TouchDesigner* is a node-based platform/program that uses Python* as its main scripting language. There are six distinct families of nodes that perform different operations and functions: TOP nodes (textures), SOP nodes (geometry), CHOP nodes (animation/audio data), DAT nodes (tables and text), COMP nodes (3D geometry nodes and nodes for building 2D control panels), and MAT nodes (materials). The programmers at Derivative*, consulting with Intel® programmers, designed two special nodes, the Intel® RealSense™ camera TOP node and the Intel® RealSense™ camera CHOP node, to integrate the Intel® RealSense™ camera into the program.
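Because every node and parameter can also be reached from Python*, a network can be inspected or driven from TouchDesigner*'s textport. As a minimal illustration (using a generic built-in operator rather than one of the demo nodes; the color parameter names follow the labels in the parameter dialog):

    # Run from the TouchDesigner textport.
    # Create a Constant TOP inside /project1, tint it red, and read the value back.
    c = op('/project1').create(constantTOP, 'constant_demo')
    c.par.colorr = 1.0   # red channel of the Color parameter
    c.par.colorg = 0.0
    c.par.colorb = 0.0
    print(c.path, c.par.colorr.eval())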
Note: This article is aimed at those familiar with using TouchDesigner* and its interface. If you are unfamiliar with TouchDesigner* and plan to follow along with this article step-by-step, I recommend that you first review some of the documentation and videos available here: Learning TouchDesigner*
Note: When using the Intel® RealSense™ camera, it is important to pay attention to its range for best results. On this Intel® web page you will find the range of each camera and best operating practices for using it.
The TOP nodes in TouchDesigner* perform many of the same operations found in a traditional compositing program. The Intel® RealSense™ camera TOP node adds to these capabilities by making the 2D and 3D data that the Intel® RealSense™ camera captures available to the rest of the network. The Intel® RealSense™ camera TOP node has a number of setup settings for acquiring different forms of data.
Note: You can download the .toe file, RealSensePointCloudForArticle.toe, to use as a simple beginning template for creating a 3D animated geometry from the data of the Intel® RealSense™ camera. This file can be modified and changed in many ways. Together, the three Intel® RealSense™ camera TOP nodes—the Point Cloud, the Color, and the Point Cloud Color UVs—can create a 3D geometry composed of points (particles) with the color image mapped onto it. This creates many exciting possibilities.
Note: There is also an Intel® RealSense™ camera CHOP node that controls the 3D tracking/position data that we will discuss in Part 2 of this article.
Click on the button on top of the article to get the First TOP Demo: settingUpRealNode2b_FINAL.toe
Demo 1, part 1: You will learn how to set up the Intel® RealSense™ camera TOP node and then connect it to other TOP nodes.
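If you would rather script this first step than drag nodes from the OP Create dialog, a rough Python* sketch follows. It assumes the RealSense TOP's operator class is exposed as realsenseTOP and that its Image menu parameter is named image; those names are assumptions, so check the parameter dialog in your build.

    # Minimal sketch: create the Intel RealSense camera TOP and wire it
    # through a Level TOP into a Null TOP that ends the chain.
    proj = op('/project1')
    rs = proj.create(realsenseTOP, 'realsense1')   # class name is an assumption
    level = proj.create(levelTOP, 'level1')
    out = proj.create(nullTOP, 'null1')
    rs.par.image = 'color'                         # assumed menu token for the Color image
    level.inputConnectors[0].connect(rs)
    out.inputConnectors[0].connect(level)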
Next we will put this created image into the Phong MAT (Material) so we can texture geometries with it.
Demo 1, part 2: This exercise shows you how to use the Intel® RealSense™ camera TOP node to create textures and how to add them into a MAT node that can then be assigned to the geometry in your project. Create a Phong MAT node, phong1, and put the image you created into its Color Map parameter; you will then point your geometry at this MAT to make it use the phong1 node as its material.
Demo 1, part 3: You will learn how to assign the Phong MAT shader you created using the Intel® RealSense™ camera data to a box Geometry SOP.
1. Go inside the geo1 node to its child level (/project1/geo1).
2. Connect the box1 node to the texture1 node and the material1 node.
3. In the material1 node enter: ../phong1, which will refer it to the phong1 MAT node you created in the parent level.
4. In the texture1 node, for Texture/Texture Type put face, and for Texture/Offset put .5 .5 .5.
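If you are scripting the setup instead, the same relative reference can be made from Python*. The parameter names below (material, type, offsetx, and so on) are assumptions taken from the parameter labels, so verify them in the parameter dialog.

    # Point the Material SOP inside geo1 at the phong1 MAT one level up.
    # '../phong1' is a path relative to material1's parent, /project1/geo1.
    op('/project1/geo1/material1').par.material = '../phong1'   # parameter name assumed
    tex = op('/project1/geo1/texture1')
    tex.par.type = 'face'        # Texture/Texture Type (name assumed)
    tex.par.offsetx = 0.5        # Texture/Offset (names assumed)
    tex.par.offsety = 0.5
    tex.par.offsetz = 0.5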
Demo 1, part 4: You will learn how to rotate a Geometry SOP using the Transform SOP node and a simple expression. Then you will learn how to instance the Box geometry. We will end up with a screen full of rotating boxes with the textures from the Intel® RealSense™ camera TOP node on them.
1. Create a Transform SOP, transform1, inside geo1 and enter an expression in one of the Rotate parameters of the transform1 SOP node. This expression is not dependent on the frames, so it will keep going and not start repeating when the frames on the timeline run out. I multiplied by 10 to increase the speed: absTime.seconds*10
2. Go back up to the parent level (/project1), and in the Instance page parameters of the geo1 COMP node, change Instancing to On.
3. Create a Grid SOP with 10 Rows and 10 Columns, and set its size to 20 and 20.
4. Create a SOP to DAT node; for its SOP parameter enter grid1, and make sure Extract is set to Points.
5. In the geo1 COMP, for Instance CHOP/DAT enter: sopto1.
6. For the TX, TY, and TZ parameters enter P(0), P(1), and P(2) respectively to specify which columns from the sopto1 node to use for the instance positions.
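The same settings can be filled in from Python* if you prefer. The parameter names used here (instancing, instanceop, instancetx, and so on) are assumptions based on the labels on the geo1 COMP's Instance page, so confirm them in your build.

    # Rotate the box with an expression that never loops, then turn on instancing.
    rot = op('/project1/geo1/transform1').par.ry   # one of the Rotate fields (name assumed)
    rot.expr = 'absTime.seconds*10'
    rot.mode = ParMode.EXPRESSION

    geo = op('/project1/geo1')
    geo.par.instancing = True        # Instancing: On (parameter names assumed)
    geo.par.instanceop = 'sopto1'    # Instance CHOP/DAT
    geo.par.instancetx = 'P(0)'      # columns from the sopto1 DAT used
    geo.par.instancety = 'P(1)'      # for the instance positions
    geo.par.instancetz = 'P(2)'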
Demo 1, part 5: You will learn how to set up a scene to be rendered and either performed live or rendered out as a movie file.
Another way to change the alpha of a TOP to 1 is to use a Reorder TOP and set its Output Alpha parameter to Input 1 and One.
If you prefer to render out the animation instead of playing it in real time in a performance, choose the Export Movie dialog under File in the top bar of the TouchDesigner* program. In the TOP Video parameter, enter null2 for this particular example; otherwise enter whichever TOP node you want to render.
Demo 1, part 6: One of the things that makes TouchDesigner* a special platform is the ability to do real-time performance animations with it. This makes it especially good when paired with the Intel® RealSense™ Camera.
In this example, the output you would use for a live performance is the null2 TOP node.
The Intel® RealSense™ camera TOP node has a number of other settings that are useful for creating textures and animation.
In demo 2, we use the depth data from the camera to control how much an image is blurred. Click on the button on top of the article to get this file: RealSenseDepthBlur.toe
First, create an Intel® RealSense™ camera TOP and set its Image parameter to Depth. The depth image has pixels that are 0 (black) if they are close to the camera and 1 (white) if they are far away from the camera. The range of the pixel values is controlled by the Max Depth parameter, which is specified in meters. By default it has a value of 5, which means pixels 5 or more meters from the camera will be white. A pixel with a value of 0.5 will be 2.5 meters from the camera. Depending on how far the camera is from you, changing this value to something smaller may be better. For this example we changed it to 1.5 meters.
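The relationship is simply the normalized pixel value multiplied by Max Depth; a quick check of the numbers above (depth_to_meters is just a throwaway helper for illustration):

    # Depth pixels are normalized 0..1; multiplying by Max Depth (in meters)
    # gives the distance from the camera.
    def depth_to_meters(pixel_value, max_depth=5.0):
        return pixel_value * max_depth

    print(depth_to_meters(0.5))                  # 2.5 m with the default Max Depth of 5
    print(depth_to_meters(0.8, max_depth=1.5))   # 1.2 m once Max Depth is set to 1.5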
Next we want to process the depth a bit to remove objects outside our range of interest, which we will do using a Threshold TOP.
Create a Threshold TOP and connect it to the realsense1 node. We want to cull out pixels that are beyond a certain distance from the camera, so set the Comparator parameter to Greater and set the Threshold parameter to 0.8. This makes pixels that are greater than 0.8 (which is 1.2 meters or farther, if Max Depth in the Intel® RealSense™ camera TOP is set to 1.5) become 0 and all other pixels become 1.
Create a Multiply TOP; connect the realsense1 node to the first input and the thresh1 node to the second input. Multiplying the pixels we want to keep by 1 leaves them as they are, while multiplying the others by 0 makes them black. The multiply1 node now has pixels greater than 0 only for the part of the image within range, and this is what will control the blur we do next. Create a Movie File In TOP and a Luma Blur TOP, then connect moviefilein1 to the first input of lumablur1 and multiply1 to the second input of lumablur1. In lumablur1, set White Value to 0.4, Black Filter Width to 20, and White Filter Width to 1. This makes pixels where the control image is 0 have a blur filter width of 20, and pixels with a value of 0.4 or greater have a blur width of 1. The result is an image where the pixels where the user is located are not blurred while the other pixels are blurry.
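For reference, here is a rough Python* sketch of the same network built from the textport. The operator classes (realsenseTOP, thresholdTOP, multiplyTOP, moviefileinTOP, lumablurTOP) are real TouchDesigner* classes, but the parameter names are assumptions based on their labels, so treat this as a sketch rather than the exact recipe behind RealSenseDepthBlur.toe.

    # Sketch of the depth-controlled blur network described above.
    # Parameter names marked "assumed" are guesses from the parameter labels.
    proj = op('/project1')
    rs = proj.create(realsenseTOP, 'realsense1')
    thresh = proj.create(thresholdTOP, 'thresh1')
    mult = proj.create(multiplyTOP, 'multiply1')
    movie = proj.create(moviefileinTOP, 'moviefilein1')
    blur = proj.create(lumablurTOP, 'lumablur1')

    rs.par.image = 'depth'               # assumed: Image set to Depth
    rs.par.maxdepth = 1.5                # assumed: Max Depth in meters

    thresh.inputConnectors[0].connect(rs)
    thresh.par.comparator = 'greater'    # assumed menu token
    thresh.par.threshold = 0.8

    mult.inputConnectors[0].connect(rs)      # depth image
    mult.inputConnectors[1].connect(thresh)  # 0/1 mask from the threshold

    blur.inputConnectors[0].connect(movie)   # image to blur
    blur.inputConnectors[1].connect(mult)    # per-pixel blur control
    blur.par.whitevalue = 0.4                # assumed: White Value
    blur.par.blackwidth = 20                 # assumed: Black Filter Width
    blur.par.whitewidth = 1                  # assumed: White Filter Width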
Click on the button on top of the article to get this file: RealSenseRemap.toe
Note: The depth and color sensors of the Intel® RealSense™ camera are physically offset from each other, so by default their resulting images in the Intel® RealSense™ camera TOP node do not line up. For example, if your hand is positioned in the middle of the color image, it won't be in the middle of the depth image; it will be off a bit to the left or right. The UV remap fixes this by shifting the pixels around so they align on top of each other. Notice the difference between the aligned and unaligned TOPs.
Click on the button on top of the article to get this file: PointCloudLimitEx.toe
In this exercise you learn how to create animated geometry using the Intel® RealSense™ camera TOP node Point Cloud setting and the Limit SOP node. Note that this technique is different from the Point Cloud example file shown at the beginning of this article. The earlier example uses GLSL shaders, which makes it possible to generate far more points, but it is more complex to build and outside the scope of this article.
1. Create an Intel® RealSense™ camera TOP node and set its Image parameter to Point Cloud.
2. Create a TOP to CHOP node; in the topto1 CHOP node parameter, TOP, enter: realsense1.
3. For the channel names in the topto1 node, enter: r g b, leaving a space between the letters.
4. Create a Math CHOP; in the math1 CHOP node, for the Multiply parameter, enter: 4.2.
To quote from the information on the www.derivative.ca online wiki page, "The Limit SOP creates geometry from samples fed to it by CHOPs. It creates geometry at every point in the sample. Different types of geometry can be created using the Output Type parameter on the Channels Page."
5. Create a Limit SOP; on the limit1 Channels parameter page, enter r in the X Channel, g in the Y Channel, and b in the Z Channel.
Note: Switching the r, g, and b to different X, Y, or Z channels changes the geometry being generated. So you might want to try this later: in the Output parameter page, for Output Type, select Sphere at Each Point from the drop-down.
6. Create a SOP to DAT node. In the parameters page, for SOP put in limit1, or drag your limit1 node into the parameter. Keep the default setting of Points in the Extract parameter.
7. Create a Render TOP node, a Camera COMP node, and a Light COMP node.
8. Create a Reorder TOP, set Output Alpha to Input 1 and One, and connect it to the Render TOP.
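If you want to rough out the same chain from the textport, a hedged Python* sketch follows. The operator classes are real TouchDesigner* classes; the parameter names (image, top, gain, chop, chanx, and so on) are assumptions drawn from the labels used in the steps above, so check them in the parameter dialogs.

    # Sketch of the point-cloud-to-Limit-SOP chain described above.
    proj = op('/project1')
    rs = proj.create(realsenseTOP, 'realsense1')
    topto = proj.create(toptoCHOP, 'topto1')
    math1 = proj.create(mathCHOP, 'math1')
    limit = proj.create(limitSOP, 'limit1')

    rs.par.image = 'pointcloud'     # assumed menu token for Point Cloud
    topto.par.top = 'realsense1'    # the TOP whose pixels become the r, g, b channels

    math1.inputConnectors[0].connect(topto)
    math1.par.gain = 4.2            # assumed name for the Multiply parameter

    limit.par.chop = 'math1'        # assumed: the CHOP the Limit SOP samples
    limit.par.chanx = 'r'           # assumed names for the X/Y/Z Channel fields
    limit.par.chany = 'g'
    limit.par.chanz = 'b'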
In Part 2 of this article we will discuss the Intel® RealSense™ camera CHOP and how to create content both rendered and in real-time for performances, Full Dome shows, and VR. We will also show how to use the Oculus Rift CHOP node. Hand tracking, face tracking and marker tracking will be discussed.
Audri Phillips is a visualist/3D animator based out of Los Angeles, with a wide range of experience that includes over 25 years working in the visual effects/entertainment industry in studios such as Sony*, Rhythm and Hues*, Digital Domain*, Disney*, and DreamWorks* feature animation. Starting out as a painter, she was quickly drawn to time-based art. Always interested in using new tools, she has been a pioneer in the use of computer animation and art in experimental film work, including immersive performances. Now she has taken her talents into the creation of VR. Samsung* recently curated her work into their new Gear Indie Milk VR channel.
Her latest immersive works/animations include multimedia animations for "Implosion a Dance Festival" 2015 at the Los Angeles Theater Center and three full-dome concerts in the Vortex Immersion dome, one with the well-known composer/musician Steve Roach. She has a fourth upcoming fulldome concert, "Relentless Universe," on November 7, 2015. She also created animated content for the dome show for the TV series "Constantine*," shown at the 2014 Comic-Con convention. Several of her fulldome pieces, "Migrations" and "Relentless Beauty," have been juried into "Currents," the Santa Fe International New Media Festival, and the Jena FullDome Festival in Germany. She exhibits in the Young Projects gallery in Los Angeles.
She writes online content and a blog for Intel®. Audri is an Adjunct professor at Woodbury University, a founding member and leader of the Los Angeles Abstract Film Group, founder of the Hybrid Reality Studio (dedicated to creating VR content), a board member of the Iota Center, and she is also an exhibiting member of the LA Art Lab. In 2011 Audri became a resident artist of Vortex Immersion Media and the c3: CreateLAB.