Duo 3D

On my Kickstarter radar - just wanted to share yet another motion gesture hardware thing @ http://duo3d.com


Looks like a stereo-optic hand tracker, something akin to the Leap (which uses active stereo optic).

Keep in mind that neither this (nor Leap) produce a full 3D depth map - they are fine for basic finger and hand movements, but provide less user immersion.
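
For intuition, here's a minimal sketch (plain Python, all numbers purely illustrative) of the pinhole-stereo triangulation these trackers rely on: depth is recovered only at features matched between the two views, which is why you get z at hand and finger keypoints rather than a dense per-pixel depth map.

# Pinhole stereo: Z = f * B / d, with focal length f (pixels),
# baseline B (metres), and disparity d (pixels) of a feature
# matched in both images. The values below are made up.
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Triangulated depth of one matched feature point."""
    if disparity_px <= 0:
        raise ValueError("feature must be matched with positive disparity")
    return f_px * baseline_m / disparity_px

# e.g. a fingertip that appears 24 px apart between the two imagers
print(depth_from_disparity(f_px=700.0, baseline_m=0.03, disparity_px=24.0))  # ~0.875 m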

The way to think of this is as a spectrum, starting at low cost and low user immersion and moving toward higher user immersion (at higher cost) across the various technologies:

- 2D imager finger tracking (Point Grab)

- Active 2D imager (2D imager plus LED - announced by PixArt, others)

- Stereo Optic (example above looks like this)

- Active stereo optic (Leap)

- Time-of-Flight (PerC camera for short range) and Structured Light (Kinect for long range)

So at the top you have basic finger movement at the lowest cost, and at the bottom you have head plus fingers plus background subtraction, etc. (I think what Lee is doing on the Developer's Challenge with the virtual meeting room is amazing, and a good example of full immersion). What you use will depend on the user experience you want to provide at what budget - a typical feature/cost trade-off.

I guess the thing is that most of the motion gesture based apps you see out there are mostly about swiping for navigation.

The few apps that truly require a depth cam are - as you mentioned - instant green screening and 3D scanning(ish). Interpolated z suffices for almost all the other use cases.
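
For the green screening case specifically, a true depth cam makes it almost trivial - here's a rough sketch with numpy (array shapes and thresholds are assumptions, not from any SDK): keep the colour pixels whose per-pixel depth falls inside the subject's range and drop the rest.

import numpy as np

def green_screen(color_img, depth_m, near=0.3, far=1.0):
    """Keep HxWx3 colour pixels whose HxW depth (metres) lies in
    [near, far]; everything else is zeroed out as background."""
    mask = (depth_m > near) & (depth_m < far)
    out = np.zeros_like(color_img)
    out[mask] = color_img[mask]
    return out

# assigning to out[~mask] instead would swap in any backdrop you like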

I wonder if stronger use cases that truly require a depth cam will come out of the Challenge.

Quote:

Mitch R. wrote:

Looks like a stereo-optic hand tracker, something akin to the Leap (which uses active stereo optic).

Keep in mind that neither this (nor Leap) produce a full 3D depth map - they are fine for basic finger and hand movements, but provide less user immersion.

The way to think of this is as a spectrum, starting at low cost and low user immersion and moving toward higher user immersion (at higher cost) across the various technologies:

- 2D imager finger tracking (Point Grab)

- Active 2D imager (2D imager plus LED - announced by PixArt, others)

- Stereo Optic (example above looks like this)

- Active stereo optic (Leap)

- Time-of-Flight (PerC camera for short range) and Structured Light (Kinect for long range)

So at the top you have basic finger movement at the lowest cost, and at the bottom you have head plus fingers plus background subtraction, etc. (I think what Lee is doing on the Developer's Challenge with the virtual meeting room is amazing, and a good example of full immersion). What you use will depend on the user experience you want to provide at what budget - a typical feature/cost trade-off.

Yes, there are plenty of other items that are being done outside the contest and not posted on YouTube, such as:

 - face and eye tracking

 - 3D face recognition

 - copying facial emotions into an avatar

 - biometrics

 - Virtual desktop buttons (i.e. virtual keyboard - see the toy sketch after this list)

 - 3D object learning and tracking

 - 3D planar surface decomposition

 - Virtual Reality

 - No fewer than four companies (probably more) are prototyping the camera for wearable computing (a small module version of the camera will be sampling later this year)

 - Robotic control

 - etc.
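
A toy sketch of the virtual desktop buttons idea above (the Button type, geometry, and thresholds are all hypothetical - nothing here is from the PerC SDK): a fingertip position reported by the camera in metres is simply hit-tested against a 3D volume sitting on the desk.

from dataclasses import dataclass

@dataclass
class Button:
    x: float        # left edge of the key rectangle on the desk plane (m)
    y: float        # top edge (m)
    w: float        # width (m)
    h: float        # height (m)
    press_z: float  # fingertip must drop below this height to press (m)

def is_pressed(btn, tip_x, tip_y, tip_z):
    """True when the tracked fingertip is inside the key's press volume."""
    inside = btn.x <= tip_x <= btn.x + btn.w and btn.y <= tip_y <= btn.y + btn.h
    return inside and tip_z <= btn.press_z

# e.g. a 4 cm virtual key that registers within 1 cm of the desk surface
key_a = Button(x=0.10, y=0.20, w=0.04, h=0.04, press_z=0.01)
print(is_pressed(key_a, tip_x=0.12, tip_y=0.22, tip_z=0.005))  # True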

By and large it seems many contest entries went with "mouse replacement", since that is where most people's experience lies (and probably due to the quick timeframe for Phase 1 - most of the work above is happening at companies with longer development cycles and larger teams). The whole point of PerC is to re-think the UI and what you can do when a computer sees in 3D, and the result will be a lot more than simple gestures.
