I have been looking through the documentation, and there is a lot to be excited about. An SDK that does the heavy lifting of picking up and sorting out the more exotic kinds of user input, so I can focus on the other areas of the product, sounds like a dream come true.
The only thing holding me back from ordering a camera and diving into the SDK is that there is apparently only very limited face and emotion tracking (a forum post even mentioned that some eye tracking features were removed in v4). However, I distinctly remember an image from the press package showing an oval mesh overlaid on a face with a coordinate system attached, implying real-time orientation and position tracking, but I couldn't find any actual methods in the help that expose those values. (Have I overlooked them while browsing the documentation? In that case I apologise, and if it is not too much trouble, please point me to the right page :D )
The camera has a depth sensor, so I imagine that implementing an Intel-optimised tracking algorithm in the SDK would not be technically out of the question. Is anything like that planned for the near future?
I am thinking of something similar to the methods presented in "Realtime Performance-Based Facial Animation" [Weise et al., SIGGRAPH 2011] (or its 2013 follow-up by Hao Li), or "Online Modeling for Realtime Facial Animation" [Bouaziz et al., 2013], or an RGB-only solution akin to Mixamo's Face Plus, Seeing Machines' faceAPI, or Image Metrics' Live Driver (which I suspect are based on the Active Appearance Models (AAM) or On-line Appearance Models (OAM) approach, for example as described in "Hierarchical On-line Appearance-Based Tracking for 3D Head Pose, Eyebrows, Lips, Eyelids and Irises" [Orozco et al., Image and Vision Computing 31, 2013]).
Thank you and best regards,