We’re fresh back from GDC and wow…what a great conference! We had so much fun meeting the other contestants, making fun of Lee, and showing off Stargate Gunship to hordes of Stargate fans. Any day you can make a fanboi literally squeak in delight – that’s a good day.
But one of the real high points of the conference was a real-world field test of a technology we’ve been tinkering with for the last several weeks: tongue tracking.
I know that might come as a surprise, but consider the following:
- The tongue is easily identifiable as a landmark with sharp edges and a definite ‘point.’
- For the typical user, the tongue is a highly agile appendage, offering far greater accuracy and lower latency than landmarks like the nose or chin.
- The tongue moves independently of other landmarks like the eyes or head.
In short, the tongue provides unique advantages in the perceptual environment and could provide an entirely untapped resource for full-facial input modalities.
To give credit where it’s due, we really must cite the decades-long and pioneering work of Dr. G. Simmons, who says of his own work, “You can't go through life and leave things the way they are. We can all make a difference, and if I die today, I know I made a difference." (Read more at: http://www.brainyquote.com/quotes/authors/g/gene_simmons.html#6H0IHxJPUkSGvFRh.99)
Dr. Simmons has been an advocate of glossal and lingual efficacy since the early '70s, but technology has long been the limiting factor in his work. Tools like the perceptual camera now make this a genuine path for further exploration. In our case, we created a special gesture, the SDS, which activates the tongue-tracking feature. As you’ll see in the video, once tracking is activated, the glossal translation is easily mapped, in our case, to the motion of the camera within the scene.
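For the curious, the mapping described above can be sketched roughly as follows. This is a hypothetical illustration, not our actual implementation: the `TongueCameraController` class, its parameters, and the landmark format are all assumptions, and the real perceptual-camera SDK calls that produce the tongue-tip landmark are not shown.

```python
# Hypothetical sketch: mapping tongue-tip displacement to camera motion.
# Assumes some upstream tracker hands us a normalized 2D tongue-tip
# landmark each frame; all names here are illustrative, not SDK APIs.
from dataclasses import dataclass


@dataclass
class Vec2:
    x: float
    y: float


class TongueCameraController:
    """Maps tongue-tip translation (relative to a neutral position
    captured when the activation gesture fires) onto camera motion."""

    def __init__(self, gain: float = 2.0, deadzone: float = 0.02):
        self.gain = gain          # sensitivity multiplier
        self.deadzone = deadzone  # ignore tiny jitter near neutral
        self.neutral = None       # neutral tongue position, set on activation
        self.active = False

    def activate(self, tongue_tip: Vec2) -> None:
        # Called when the activation gesture (the "SDS" in the post)
        # is detected; records the current tip position as neutral.
        self.neutral = tongue_tip
        self.active = True

    def camera_delta(self, tongue_tip: Vec2) -> Vec2:
        # Translate tongue displacement into a per-frame camera offset.
        if not self.active:
            return Vec2(0.0, 0.0)
        dx = tongue_tip.x - self.neutral.x
        dy = tongue_tip.y - self.neutral.y
        if abs(dx) < self.deadzone:
            dx = 0.0
        if abs(dy) < self.deadzone:
            dy = 0.0
        return Vec2(dx * self.gain, dy * self.gain)
```

The deadzone keeps the camera steady while the tongue hovers near its neutral position, and the gain trades off precision against range of motion.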
We hope this work can be a stepping-stone to greater use of alternate input methods and look forward to possible future collaboration with Dr. Simmons and others in this exciting field.