The Intel® Skeletal Hand Tracking Library Experimental Release (Official Intel)

Intel® is excited to present the pre-beta version of the Skeletal Hand Tracking Library. This library aims to enrich and re-imagine the human-computer interface by extending the ubiquity and interaction of the human hand into virtual 3D environments.

The Skeletal Hand Tracking library is designed to track the 3D pose of the user’s hand based on depth data from the Intel® Perceptual Computing SDK on a system with a Creative Interactive Gesture Camera™. In each frame, the library provides the full 6-DOF position and orientation of 17 bones of the hand; this data is suitable for enabling physically-based or similar interactions within a virtual 3D environment.

Learn more and get started here:

http://software.intel.com/en-us/articles/the-intel-skeletal-hand-trackin...

-Scott, Intel Corporation


For clarification, the webpage states:

> The Skeletal Hand Tracking library is accessible through a simple C API, with supporting C++ code to facilitate quick application integration

Does this library have to be accessed through the Perceptual Computing SDK, or will anything that produces a depth-map output work?  For example, the Linux driver for the PerC camera produces a raw depth map.  Can that be used with this library?

 

The library is a Windows dynamic-link library (.dll, .lib, .h), and as such won't run on Linux.

Will it run without the PerC SDK, with just the camera driver?

Bravo, and nice job bringing this out!!!

This would be a nice add-on for the PerC SDK, as it delivers more reliable coordinates for each finger. Would love to see this as an official module in the SDK  ;-)

I have tried the sample applications quite a bit and I must say that this is quite impressive for a 0.1 version!  Good job, guys.

Would love to see some more improvements in upcoming versions in areas such as:
1. Lower latency
2. Palm orientation accuracy
3. Finger bending accuracy, etc.

This will definitely be a big thing as the reliability and accuracy improve over time. Keep up the good work, guys!

cheers
SMing 

Thanks for the information. This library is not available in C#, is it? I tried using the DLL in my C# console program but it errored out.

How different is this library from PerC hand tracking? Both use depth data from the camera. Is it an enhanced version, different algorithms, or what?

Shyam - I am using the Skeletal Hand Tracking library from .NET right now.  Use the P/Invoke mechanism of your choice to get into C++ and everything will work.

Bob Davies

Thanks, Bob, for the information, but I am not able to add the DLL to my project. When I try to add it, it errors out saying to please check that it is a valid assembly or COM component. My project build type is x64 and I am using the 64-bit DLL. What else do I need to configure to make it work?

Excellent!! I have to try it out..

Abhishek Nandy

Awesome!! Just what I have been waiting for. Gonna try this very soon, thanks!

Shyam - in your C# code, use a P/Invoke declaration to call into a native C++ DLL.  There are plenty of examples on the web.  Then, in the C++ function that you invoke, call the tracklib.init function just like in the examples that came with the download.

Bob Davies
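
To make Bob's suggestion concrete, here is a minimal sketch of what such a native bridge DLL could look like. The exported function names and the internal hskl calls are placeholders (the only hskl signature quoted in this thread is hsklSetHandMeasurements, further below), so treat this as an illustration of the P/Invoke pattern rather than the actual API:

// bridge.cpp - build as a native Win32/x64 DLL; C# reaches these exports via P/Invoke.
// All hskl-specific calls are placeholders - copy the real init/update/query calls
// from the samples that ship with the library download.
#include "hskl.h"   // header from the library download

static hskl_tracker g_tracker;   // handle type taken from hskl.h as quoted later in the thread

extern "C" __declspec(dllexport) int BridgeInit()
{
    // Create and initialize the tracker exactly as the bundled sample code does.
    // Return an error code rather than letting exceptions cross the managed boundary.
    return 0;
}

extern "C" __declspec(dllexport) void BridgeSetHandSize(float width, float length)
{
    // Signature quoted from hskl.h elsewhere in this thread; width/length in meters.
    hsklSetHandMeasurements(g_tracker, width, length);
}

On the C# side, the matching declarations are plain DllImport statements, e.g. [DllImport("bridge.dll")] static extern int BridgeInit();, called after copying bridge.dll and the hskl DLL next to your executable.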

Hi everyone.

Can I use this library in an application that I want to submit to the Perceptual Computing Challenge?

Please do not use the library for your contest entries.  This is a pre-beta version, so use it to learn and provide feedback to Intel.

Does that mean it is officially prohibited to use the library for the Perceptual Computing Challenge?

Hi! Does your library only support your Creative Interactive Gesture Camera, or does it also support other sensors like the Kinect, Asus Xtion, etc.?

Prerna - the library only supports the Intel camera at this point.  

Hi! Will this work on a Mac?

Hi, we don't have a Mac version of the library at this time.

Quote:

Kevin E. wrote:

Have you guys seen this?

http://cvrlcode.ics.forth.gr/handtracking/

Yes, good work.  Same goal of turning depth data into the pose of the user's hand.  Our approach is a bit different: we're heavily based on technology similar to what you would find in rigid-body physics engines used in games and professional simulation.  We then incorporate depth data to fit a 3D rigged hand model.  For further details, including a recent research paper and videos, do a web search for: dynamics based 3d skeletal hand tracking.

Is it possible to get more control over the actual tracking performance? I.e., to assign more computing power and precision to the tracking algorithm?
Or does it already automatically scale based on free hardware resources?
I noticed one paper said it uses one x86 core, but I think on modern multi-core systems we could get away with utilizing more cores.

Also, I have a more critical request. At the moment, in both-hands mode the library is only designed to be used with the user facing the camera (i.e. the camera is on the monitor).

However, my application of the library requires the camera to sit on the user's head, looking down on the forearms and hands. In both-hands mode, the library then always mixes up the two hands and tries to map the left to the right hand and vice versa. Could there be a mode switch that assumes the camera is sitting at the user's eyes?

Hey guys, any chance of including this in Unity3D?

Quote:

ChaosGrid wrote:

...  my application of the library requires the camera to sit on the user's head, looking down on the forearms and hands. In both-hands mode, the library then always mixes up the two hands and tries to map the left to the right hand and vice versa. Could there be a mode switch that assumes the camera is sitting at the user's eyes?

Such functionality isn't there now, but that would be a small modification to the existing code.  I'll check to see if we can push out a new version of the library for you.  Stand by.

Quote:

ChaosGrid wrote:

Is it possible to get more control over the actual tracking performance? I.e., to assign more computing power and precision to the tracking algorithm?
Or does it already automatically scale based on free hardware resources?
I noticed one paper said it uses one x86 core, but I think on modern multi-core systems we could get away with utilizing more cores.

Conceptually yes, this is possible.  Admittedly, there is nothing in the code to do this now.  Many parts of the algorithm are well suited to parallel (multi-core) processing, so such scaling could be possible.  We'd have to give more thought to which algorithms we would want to run, which configurations (start poses) we would want to try, etc., when given additional processing power.

Quote:

Pedro Monteiro Kayatt wrote:

Hey guys, any chance of including this in Unity3D?

Hi,  

The demo with the Snake dropping gold coins out of its mouth was using Unity with this 3D skeletal hand tracking system.  This wasn't done using a production-quality plugin.  Getting the hand pose into Unity was implemented using a TCP/IP connection between a C++ program running the tracking system and a Unity script pulling the data.

We did find it a bit harder to tweak the interaction and physics within Unity.  Even when the hand tracking is all working perfectly, we still have tunneling issues in the collision detection in the system.  The coins would pile up on the hand OK, but when you move the hand they sometimes pass through.  Perhaps a Unity expert out there could figure this out.  :)
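
For anyone curious what such a bridge might look like, below is a minimal sketch of the native sending side using Winsock. The port number and the flat per-bone float layout are illustrative assumptions, not what the demo actually used; a Unity-side script would simply connect and read the same number of bytes each frame.

// pose_sender.cpp - minimal sketch: stream hand-pose floats to Unity over TCP.
// The port and the layout (17 bones x 7 floats: position xyz + orientation quaternion)
// are assumptions for illustration only.
#define WIN32_LEAN_AND_MEAN
#include <winsock2.h>
#include <vector>
#pragma comment(lib, "ws2_32.lib")

int main()
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;

    SOCKET listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(12345);            // arbitrary port; match it in the Unity script
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(listener, (sockaddr*)&addr, sizeof(addr));
    listen(listener, 1);

    SOCKET client = accept(listener, nullptr, nullptr);   // the Unity script connects here

    std::vector<float> pose(17 * 7, 0.0f);
    for (;;)
    {
        // ... run the tracker and copy each bone's position/orientation into 'pose' ...
        int bytes = (int)(pose.size() * sizeof(float));
        if (send(client, (const char*)pose.data(), bytes, 0) <= 0) break;
    }

    closesocket(client);
    closesocket(listener);
    WSACleanup();
    return 0;
}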

Wow, very good job, I need to try this out ASAP...

shin

Quote:

Stan Melax (Intel) wrote:

Quote:

Pedro Monteiro Kayatt wrote:

Hey guys, any chance of including this in Unity3D?

Hi,  

The demo with the Snake dropping gold coins out of its mouth was using Unity with this 3D skeletal hand tracking system.  This wasn't done using a production-quality plugin.  Getting the hand pose into Unity was implemented using a TCP/IP connection between a C++ program running the tracking system and a Unity script pulling the data.

We did find it a bit harder to tweak the interaction and physics within Unity.  Even when the hand tracking is all working perfectly, we still have tunneling issues in the collision detection in the system.  The coins would pile up on the hand OK, but when you move the hand they sometimes pass through.  Perhaps a Unity expert out there could figure this out.  :)

Still very nice that you guys managed to make this wrapper through TCP!  I could not find the demo of the Snake; is it closed?

In fact, I had similar problems in Unity in the past; maybe I can take a look into it. I just need to make a day with 30 hrs and everything would be amazing :P

Thank you for this support, what you guys are doing is really amazing!

I used the library for a university project; you can take a look here:

https://www.youtube.com/watch?v=UbL1EpL6M_I 

Hi Guys

Please have a look at http://www.youtube.com/watch?v=0P2XJcEPkFU. It is a short promotional video of my application for the Intel Perceptual Computing Challenge 2013. The application fully utilizes the hand tracking and voice control capabilities of Creative's Interactive Gesture Camera.

The application is built on the Intel-sponsored Havok 2012 PC XS technology.  The hand skeletal tracking library is used to get fingertip points and the palm normal. I found it far better than the standard PerC SDK gesture->QueryNodeData().  These data are inputs to my physical model of the hand (skeleton inverse kinematics, rigid-body dynamic driving, constraints, etc.).

The main goal of my application is to show not only full immersion in a physical world, but interaction with autonomous AI objects as well (following a finger, helibots landing on the hand, dynamic avoidance of the hand while following prescribed agent behavior; a character can jump onto, surf, and walk on the virtual hand, go from pinky to thumb, etc.). The virtual hand in the physical world induces changes in the AI/animation system as well.

 Hope you will like it. 

 Ivan

This library is very impressive and precise for hand tracking.
But it sometimes gets lost when the hands approach the face, or when a hand goes out of the sensor range and returns.

Will there be an update to the Intel Skeletal Hand Tracking Library? When is it planned?
What about integrating it into the Intel Perceptual Computing SDK?

I am an associate lecturer for an Augmented Reality course. I bought the Creative depth camera as soon as I saw the hskl library and its super demo. Now I have several questions, because after investigating the camera I feel tricked. Paying 10 times the cost of the webcam that the camera really is, I thought the money would go to the developers behind the library. As it turns out, there are no sources for the hskl library, and its interface is very ascetic: there is no ability to set hand parameters like different finger lengths. The library is adapted only to the finger lengths of its developer, and that doesn't look professional. In a nutshell, I am pretty disappointed, as the marketing pushes the camera strongly. I am now trying to disassemble the DLL to find these damned hardcoded params. I do not suggest spending your money on the camera now! Very raw state!

Yes, there is a function to change the hand parameters.  See inside the hskl.h header file:

void  hsklSetHandMeasurements ( hskl_tracker tracker, float width, float length ); /* Width, length in meters. Defaults to 0.08 and 0.19 */

Or, in the inline C++ convenience wrapper header hsklu.h (which connects hskl to the library that gets the camera depth data), this call is passed through to the corresponding C interface via the method:

hskl::Tracker::SetHandMeasurements(float width, float height)
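
For example, assuming a tracker has already been created the way the bundled samples do it (the creation call itself isn't quoted here), a hand slightly larger than the defaults could be configured like this:

// 'handle' is an hskl_tracker created as in the samples; values are in meters
hsklSetHandMeasurements(handle, 0.085f, 0.20f);   // defaults are 0.08 and 0.19

// or through the C++ convenience wrapper object
tracker.SetHandMeasurements(0.085f, 0.20f);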

The viewer program lets the user manually adjust the hand size using WASD keys to scale the underlying hand model.  

The API/library only supports the two scale factors (overall length and width) and does not provide access to modify individual finger geometry or tweak the allowable joint ranges, etc.  Yes, the underlying technology is based on fitting a 3D articulated rigid-body model to a point cloud.  However, rather than being a separate data file, this model was included inside the DLL.

This hand skeleton (hskl) library was provided as an experimental release and is not part of the main SDK.  It was intended for those interested in early access to full motion-capture hand pose estimation.  The link on the forum that leads to this page mentions the intended audience:  "...At this time, we invite highly-skilled 3D graphics and interaction C/C++ developers to download this library and explore interaction possibilities..."  The documentation (page 2) is quite up front about the limits of the tracking technology:  "The output of tracking library will not always match the pose of the user’s hand".

The hand rendered in the provided samples is the hand model used for tracking.  The download does not include any higher-quality visual demos where a skinned mesh model is rendered using the hand pose provided by the tracking library.  However, if one has the art content, it should not be difficult to use the hskl skeleton pose to drive it.

 

 

Thank you, Stan, for the quick answer. Are there any plans to publish the sources of hskl under some open source license? I do not want to tweak any marching-cubes silhouette recognition algorithm inside, but my finger proportions are different from the ones hardcoded in the DLL, and uniform scaling of all fingers doesn't help. If the hand model were put in a separate file, that would make the library much more versatile.

Quote:

Stan Melax (Intel) wrote:

 "...At this time, we invite highly-skilled 3D graphics and interaction C/C++ developers to download this library and explore interaction possibilities..." 

I do not quite understand how to read the mentioned slogan. I am a 3D graphics developer with 20+ years of experience. I have explored the library, and it seems that for some reason Intel wants the library to remain a toy instead of doing a real job; otherwise, publish the sources.

Good day, I am using the skeleton library for hand tracking. My question is:

apart from tracking the hand, can it measure how much the hand's joints bend, for example, in degrees?

And can that information be saved in a database?

Hi Anatol D.

The reason I quoted the announcement part "At this time, we invite highly-skilled.. developers" was only to explain that *this* library has a minimal amount of documentation and samples, and no provided plugins for commonly used tools (such as Unity or Maya).  Also, the label 'experimental', and the fact that this is not part of the official SDK, were meant to manage expectations.  There are certainly many ways the technology in its current form is missing some important pieces (object interference, better recovery, two hands in close proximity, etc.).  I certainly never meant to imply anything about anyone's skill level.  On the contrary, I would assume that anyone on this forum thread would have to be a fairly hardcore developer who knows motion capture, skeletal animation, etc.

I'll see what I can do about opening up the hand model data.  You are correct that the proportions vary more than the limited scaling factors provided can express.  For example, males tend to have longer ring fingers than females.  Some people have fingers that are longer than their palms.  While tweaking the model does bring some improvements, there is a point where the noise in the depth data itself surpasses the discrepancy in the model.  Also, the code in this library is limited to convex bone pieces, which limits how good a fit there can be, given the hand has some small regions of soft tissue (skin-like behavior) and concavity in the palm.  So while improvements are needed, they will have to be more than just improving the tracking model.

A camera's depth data cannot see that a ball is spinning.  Similarly, the system will be challenged when trying to track a clenched fist being rotated.  A suggested path to improvement is to include some more traditional image processing and computer vision techniques - use the RGB or IR data to detect internal edges and features and track known markers.  These are areas where I begin to defer to other experts.  My background is more in rigid-body dynamics and 3D graphics.

I'm not the spokesperson for the hardware or the official SDK.  Unfortunately, this forum might not be followed by the individuals in the know. You do have some good questions.  Please do ask all your questions regarding the hardware and software roadmap on the other forums, and mention your developer needs and technology requests/expectations there as well.

Additionally, I can inquire internally about what can be done in the meantime about providing the requested improvements.  The original model came from a 3ds Max file marked up with additional properties to specify physical constraints.  It was exported to a custom XML format.  The ASCII for the XML model file is simply an ASCII substring inside the .dll file.  This was only done to make things more convenient - so developers would only have to copy one file (instead of two) into their own project's bin directory.  There was never any intention to hide the model data - it's just 3D data that can be queried from the API.

Unfortunately, this mesh data used for tracking cannot be modified in code.  (Casting the vertices pointer the API provides to non-const doesn't help; the internal system uses a separate copy that is offset to each bone's center of mass.)  The only effective way is to modify the input geometry.  I just tested editing the XML model description within the DLL's binary directly (changed the palm position z value from ~2.42 to 3.00 to add a gap between the wrist and the palm) and yes, it looks like the XML internal to the DLL can be tweaked.  Probably OK as long as you don't change the file size.  However, it might not be easy to understand parts of the file format; for example, the joint limit parameters aren't explained.  Note that I am certainly not recommending this approach.  It is not easy, nor do I suspect it would provide all the improvements that you would hope to achieve.

 
