I am planning to design an app using the Perceptual Computing SDK that would be installed on a moving target such as a vehicle, where vibrations of the target might affect gesture tracking and performance. Any suggestions on how I can maintain the performance of the system? The vibrations of the target cannot be reduced.
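One common mitigation (a general signal-processing technique, not a PCSDK feature) is to low-pass filter the tracked coordinates, for example with an exponential moving average, which attenuates high-frequency vibration jitter at the cost of a small amount of latency. A minimal sketch in Python, where the smoothing factor `alpha` is an assumed tuning parameter you would adjust for your vibration profile:

```python
# Exponential moving average (EMA) filter to damp high-frequency
# vibration jitter in tracked gesture coordinates.
# alpha is a tuning parameter (assumption): smaller = smoother but laggier.

class EmaFilter:
    def __init__(self, alpha=0.2):
        self.alpha = alpha
        self.state = None  # last smoothed (x, y, z), None until first sample

    def update(self, point):
        """Feed one raw tracked position; returns the smoothed position."""
        if self.state is None:
            self.state = tuple(point)
        else:
            self.state = tuple(
                self.alpha * new + (1 - self.alpha) * old
                for new, old in zip(point, self.state)
            )
        return self.state
```

You would feed each per-frame hand or head position through `update()` before passing it to gesture logic; a Kalman filter is the heavier-weight alternative if simple smoothing is not enough.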
I have some questions about the camera.
I'm doing some research with this camera, and my environment is Win7 64-bit and MATLAB.
1. I can only read the RGB camera at 640x480 resolution.
I want to confirm: is the RGB camera's resolution 1280x720?
2. I want to build a 3D point cloud by depth image.
The depth camera returns values from 0 to 32001 for every pixel.
Do these represent the real Z value in space?
(My question is: do I have to do some transform to map them to 3D space?)
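In general, raw depth values have to be back-projected through the camera intrinsics before they form a metric point cloud, and saturated/invalid pixels (e.g. the 32001 ceiling) should be masked out. A minimal pinhole-model sketch in Python/NumPy, where `fx`, `fy`, `cx`, `cy` and the assumption that depth is in millimeters are placeholders for the calibration your SDK actually reports:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy, invalid=32001):
    """Back-project a depth image (raw units, assumed mm) to an Nx3 point cloud.

    fx, fy, cx, cy: pinhole intrinsics in pixels (assumed values --
    substitute the calibration reported by your camera/SDK).
    Pixels equal to `invalid` (saturated) or zero are dropped.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth.astype(np.float64)
    mask = (z > 0) & (z < invalid)                  # keep valid depths only
    x = (u - cx) * z / fx                           # pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x[mask], y[mask], z[mask]], axis=1)
```

The SDK's own projection/UV-map facilities, if available from your environment, will be more accurate than hand-entered intrinsics because they account for lens distortion.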
Since Intel announced the winning demos in the 2013 Perceptual Computing Challenge, there have been questions posted here regarding the number of winners, the judging process, and the score that each demo submission received.
The results are out now; some got lucky, and many more, like me, were left without a prize. However, that is the way contests run, and that is fine.
We are very excited by both the volume of submissions and the quality of effort put forth by the community. Congratulations are in order to everyone for an effort you should all be proud of. To clarify the awarding of prizes: At the outset of the contest we established clear scoring thresholds that would determine if a submission qualified for a prize. Prizes are awarded to submissions that place highest in their categories AND meet the minimum qualifying scoring thresholds.
Intel has announced the winners of the 2013 Perceptual Computing Challenge. The chip shot can be found here:
It was a fantastic contest, and we all look forward to seeing more of the winners soon!
My application needs to read depth streams from multiple cameras, and needs to distinguish in code between cameras (since they are positioned separately).
My PCSDK-based application has a multi-process architecture: two or more processes need to access the camera at the same time, say one process for the RGB image stream, one for face tracking, and another for gesture recognition. The same question applies to camera sharing across multiple applications.
It seems that the Unity plug-in doesn't support multiple cameras. Is that right, or did I miss something?
If that's right, are there any plans to add multiple camera support soon?
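If the SDK allows only one process to own the camera at a time, a common workaround (a generic design sketch, not a documented PCSDK feature) is to have a single capture process own the device and fan frames out to consumer processes over IPC. A minimal Python `multiprocessing` sketch with a simulated frame source standing in for the camera:

```python
import multiprocessing as mp

def capture_process(queues, n_frames=5):
    """Single owner of the camera: grabs frames and fans them out.

    Frames are simulated integers here; in a real app this loop
    would read RGB/depth frames from the camera SDK.
    """
    for frame_id in range(n_frames):
        frame = frame_id            # placeholder for real frame data
        for q in queues:            # broadcast to every consumer
            q.put(frame)
    for q in queues:
        q.put(None)                 # sentinel: end of stream

def consumer_process(name, q, results):
    """One consumer, e.g. face tracking or gesture recognition."""
    count = 0
    while True:
        frame = q.get()
        if frame is None:
            break
        count += 1                  # stand-in for per-frame processing
    results.put((name, count))

if __name__ == "__main__":
    q1, q2 = mp.Queue(), mp.Queue()
    results = mp.Queue()
    procs = [
        mp.Process(target=capture_process, args=([q1, q2],)),
        mp.Process(target=consumer_process, args=("face", q1, results)),
        mp.Process(target=consumer_process, args=("gesture", q2, results)),
    ]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    counts = dict(results.get() for _ in range(2))
    print(counts)
```

Shared memory would be a better transport than queues for full-rate video frames, but the ownership pattern is the same: one grabber, many subscribers.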
Hi all, I'm interested in the latest PCL release, 1.7.1. They say it supports Intel Perceptual Computing SDK cameras.
Does anyone know how to use it? Much appreciated.