Interview with Developer Martin Wojtczyk

Busy developer Martin Wojtczyk generously gave us a few minutes of his time this week to talk about his current work with perceptual computing and Android technology. Along with his wife Devy, he recently demoed several Perceptual Computing projects the two entered in Phases 1 and 2 of the Intel Perceptual Computing Challenge and worked on during multiple Intel-hosted hackathons in Sacramento and San Francisco. These include an application for medical doctors who need to look up patient information and diagnostic images under sterile conditions, an application that uses the PerC SDK to control Google Earth touch-free via gestures and voice, and a LEGO self-driving car.

Tell us about your background.

In 2004 I earned a master's degree in computer science from the Technische Universität München (Munich, Germany). My thesis was on an experimental mobile lab robot. Afterwards I continued my work in a PhD program at the same university and implemented several real-world service robot scenarios. One of these projects brought me to Bayer HealthCare in Berkeley, CA, where I installed the mobile lab robot in a real biotech pilot plant to carry out cell experiments. I received my PhD in computer science (Dr. rer. nat.) in 2010.

What got you into coding - your "lightbulb moment", so to speak?

As a little kid I always liked to push buttons. We got our first computer when I was 13. I was fascinated by computer games and enjoyed playing them, but soon I wanted to know how I could create games by myself. I enrolled in programming classes at school and also started teaching myself programming languages from books.

Tell us about a few of the projects you’ve been working on.

THE LEGO SELF-DRIVING CAR – is a perceptual robot for hobbyists and enthusiasts, as well as for educational institutions and teachers, to engage kids and teenagers with science and technology by using LEGO and computers – tools they know and love – to create an autonomous mobile robot. When the PerC SDK detects a person in front of the camera, the robotic LEGO car awakens, opens its eyes, and greets the person with a welcome message. The robot drives around the room, explores the area, avoids tables, chairs, and stairs, and simultaneously generates a map, which is displayed on the screen. This is just a starting point to get kids and teenagers involved in robotics, image processing, speech recognition, speech synthesis, and human-robot interfaces.
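The explore-and-avoid behavior described above can be sketched as a simple reactive loop. This is a hypothetical illustration, not the robot's actual code: the depth readings, command names, and threshold are all invented stand-ins for whatever the PerC SDK camera and LEGO motor interface actually provide.

```python
# Hypothetical sketch of a reactive explore-and-avoid loop.
# The depth input, command strings, and threshold are assumptions
# for illustration, not the actual PerC SDK or LEGO interfaces.

SAFE_DISTANCE_CM = 40  # assumed clearance threshold

def avoidance_step(depth_cm):
    """Decide a motion command from a single forward depth reading."""
    if depth_cm < SAFE_DISTANCE_CM:
        return "turn_left"   # obstacle (table, chair, stair edge) ahead
    return "forward"

def explore(depth_readings):
    """Map a stream of depth readings to motion commands."""
    return [avoidance_step(d) for d in depth_readings]
```

A real implementation would run this per camera frame and feed the commands to the drive motors, while a mapping module accumulates the traversed path.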

CHEESE! – is a fun photo experience for the desktop/laptop computer, or for amusement parks like the Disneyland and Disney World resorts, media companies like Pixar and Warner Brothers, or businesses like the Disney Stores that want consumers to engage with their products. The main menu of Cheese invites the user to select one of several fun photo scenarios. It may put your own image next to that of a superstar or your childhood hero. It lets you create a fun comic experience with comic effects or put funny objects on or around your face. When ready, say "Cheese!" and the smile on your face will trigger the picture.
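The "say Cheese!, then smile" trigger can be modeled as a tiny two-step state machine. The sketch below is an assumption about the control flow, not the app's actual code; in the real application the per-frame booleans would come from the PerC SDK's voice recognition and face analysis modules.

```python
# Hypothetical sketch of the "say Cheese!, then smile" trigger logic.
# In the real app the two inputs would come from the PerC SDK's voice
# and face modules; here they are plain per-frame booleans.

class CheeseTrigger:
    def __init__(self):
        self.armed = False  # becomes True after the user says "Cheese!"

    def update(self, heard_cheese, smile_detected):
        """Return True on the frame where the photo should be taken."""
        if heard_cheese:
            self.armed = True
        if self.armed and smile_detected:
            self.armed = False  # one photo per "Cheese!"
            return True
        return False
```

Arming on the voice command first means a stray smile before "Cheese!" never fires the shutter.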

PERCEPTUAL PATIENT – For surgeons, dentists, and other professionals who would benefit from using a computer with gloved or soiled hands. Perceptual Patient is an application for medical doctors who need to look up patient information and diagnostic images under sterile conditions. The application allows a doctor to open a patient's record via voice or gesture and to browse through a patient's data set without touching a screen or keyboard. We hope to partner with large medical device companies like Cerner or Siemens.
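The touch-free open-and-browse interaction can be sketched as a dispatch from recognized phrases to record-browsing actions. Everything here is invented for illustration: the command phrases, the sample record store, and the state dictionary are assumptions, not the application's real data model.

```python
# Hypothetical sketch of mapping recognized voice commands to record
# actions; the phrases and the record store are invented for illustration.

RECORDS = {"doe": ["x-ray.png", "mri.png"]}  # assumed sample data

def handle_command(command, state):
    """Advance a tiny browse state from a recognized phrase."""
    if command.startswith("open "):
        name = command.split(" ", 1)[1]
        state.update(patient=name, page=0)
    elif command == "next" and state.get("patient"):
        last = len(RECORDS[state["patient"]]) - 1
        state["page"] = min(state["page"] + 1, last)  # clamp at last image
    elif command == "previous" and state.get("patient"):
        state["page"] = max(state["page"] - 1, 0)     # clamp at first image
    return state
```

Gestures would feed the same dispatcher, so a swipe and the word "next" behave identically.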

PRETTY PICTURE PRESENTER – For big department stores (e.g., Nordstrom, Home Depot, GAP), real estate companies in high-foot-traffic areas, car showrooms (e.g., Mercedes, Toyota, Honda), or any other business that relies on a stellar presentation to sell a large inventory of products. Pretty Picture Presenter is an application that presents a business's large inventory of images in an attractive and interactive way, using an OpenGL-accelerated image carousel with reflections and highlights. The carousel can be controlled by gestures and speech input to advance and select content.
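Behind the OpenGL rendering, advancing and selecting in a carousel reduces to rotating an index modulo the item count. The event names below are assumptions for illustration; the real application maps PerC SDK gesture and speech events onto equivalent actions.

```python
# Minimal sketch of carousel navigation driven by gesture/voice events.
# The event names ("swipe_left" etc.) are assumptions for illustration.

class Carousel:
    def __init__(self, items):
        self.items = items
        self.index = 0  # currently highlighted item

    def handle(self, event):
        """Rotate the highlight, or return the chosen item on 'select'."""
        if event == "swipe_left":
            self.index = (self.index + 1) % len(self.items)
        elif event == "swipe_right":
            self.index = (self.index - 1) % len(self.items)
        elif event == "select":
            return self.items[self.index]
        return None
```

The modular arithmetic makes the carousel wrap around at both ends, which matches the circular visual layout.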

What are you currently working on?

I am currently working on a native Android controller for my perceptual robot, to eventually replace the laptop with an Android phone or tablet.

What excites you about perceptual computing right now? Android?

Perceptual computing is a novel and experimental field where early adopters and implementers can have a huge impact and create solutions that make a difference.

Android fascinates me as the most widespread mobile operating system. With mobile processors becoming smaller and more powerful, and a plethora of sensors built into smartphones, I would like to see perceptual computing integrated into Android devices for affordable robot creations.

Anything else you'd like to add?

My wife Devy and I have been self-employed, working on our startup, for the past year, and we appreciate the opportunities to work with Intel during programming challenges and hackathons, and to showcase our applications at the Intel Developer Forum in September 2013.

Thanks again to Martin for his time; we look forward to seeing what he and Devy come up with!

