A unique experiment in exploring the 3D space of a Space Shuttle. The user can choose how she would like to navigate the 3D space: either controlling everything with head movement alone, or with a combination of head and hand movements.
While building this prototype I encountered numerous issues. The biggest one was that Cinder, the framework I am using, had no connectors or blocks that interface with the Perceptual SDK and expose all of its functionality. After researching the available options and looking at connectors for other frameworks, I decided the best approach was to write my own connector between Cinder and the Perceptual SDK. I developed a CinderBlock that interfaces with the UtilPipeline layer directly and opens up options that were previously unavailable through the pxcupipeline interface. The CinderBlock also includes a sample application that demonstrates all of the SDK's functions on a 3x3 multi-window screen.

Technically, the most challenging aspect of development was that the SDK's feature detection works in a very rough way: both finger tracking and face tracking are prone to jumps between frames. This meant the raw data could not be used as input directly; I had to develop some kind of filtering. The method I chose, which I call `integration`, takes only a tiny fraction of the given input each frame and adds it to a slowly changing variable. This method turned out to be very robust in terms of smoothness and user-friendliness.

I have open sourced the CinderBlock, and it is available at https://bitbucket.org/zsero/cinder-intelcam. I did not use any other tool or interface, just pure C++ and UtilPipeline.