Touch-less Volumetric Rendering
A prototype that lets doctors and scientists use state-of-the-art volume rendering techniques without touching a keyboard or mouse. This brings these techniques into settings where they were previously impractical: for example, a doctor could interact with a 3D volumetric model of a CT or MRI scan directly in the operating room.
While building this prototype I encountered numerous issues. The biggest was that for my framework, Cinder, there were no connectors or blocks to interface with the Perceptual SDK and expose its full functionality. After researching the available options and studying connectors for other frameworks, I decided the best approach was to write my own connector between Cinder and the Perceptual SDK. I developed a CinderBlock that interfaces with the UtilPipeline layer directly, opening up capabilities that were not available through the pxcupipeline interface. The CinderBlock includes a sample application that demonstrates all of the SDK's functions in a 3x3 multi-window layout.

Technically, the most challenging aspect of development was that the SDK's feature detection works in a very rough way: both finger tracking and face tracking are prone to jumps between frames. This meant the raw values could not be used as input directly, and I had to develop some kind of filtering. The method I chose, 'integration', takes only a tiny fraction of the raw input each frame and accumulates it into a slowly changing variable. This method turned out to be very robust in terms of smoothness and user-friendliness.

I have open sourced the CinderBlock at https://bitbucket.org/zsero/cinder-intelcam. I did not use any other tool or interface, just pure C++ and UtilPipeline.
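The 'integration' filtering described above can be sketched as a simple leaky integrator in plain C++. The class name, member names, and gain value below are illustrative assumptions, not taken from the actual CinderBlock; the idea is only that each frame moves the smoothed value a tiny step toward the raw tracker input, so single-frame jumps barely affect it.

```cpp
#include <cassert>
#include <cmath>

// Illustrative sketch of the 'integration' smoothing approach:
// each frame, only a small fraction (the gain) of the difference
// between the raw tracked value and the current state is absorbed,
// so jumpy finger/face tracking data produces a smooth output.
class IntegratingFilter {
public:
    explicit IntegratingFilter(float gain = 0.05f)
        : mGain(gain), mValue(0.0f) {}

    // Call once per frame with the raw tracked coordinate.
    // Returns the slowly changing, smoothed value.
    float update(float rawInput) {
        mValue += mGain * (rawInput - mValue); // tiny step toward the input
        return mValue;
    }

    float value() const { return mValue; }

private:
    float mGain;  // fraction of the error absorbed per frame (0 < gain <= 1)
    float mValue; // accumulated, smoothed state
};
```

With a gain of 0.05, a sudden jump in the raw input from 0 to 100 moves the filtered value by only 5 on the first frame, then converges smoothly over subsequent frames; a one-frame tracking glitch is almost entirely suppressed.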