How can I get the real-world 3D coordinates from the image? Right now I can get the depth data, but I don't know how to convert it into real-world coordinates. Which SDK function can be used for this?
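Conceptually, converting a depth pixel to real-world coordinates is a pinhole back-projection using the depth camera's intrinsics (focal lengths fx, fy and principal point cx, cy). A minimal sketch of the math, assuming illustrative intrinsic values (query the real ones from your camera or the SDK's projection/calibration interface rather than hard-coding them):

```cpp
#include <cassert>
#include <cmath>

struct Point3D { double x, y, z; };

// Back-project pixel (u, v) with depth z_mm (millimeters) into
// camera-space coordinates using a pinhole model.
// fx, fy, cx, cy are the depth camera's intrinsics -- the values
// used below in the example are ILLUSTRATIVE, not real calibration.
Point3D depthToWorld(int u, int v, double z_mm,
                     double fx, double fy, double cx, double cy) {
    Point3D p;
    p.z = z_mm;
    p.x = (u - cx) * z_mm / fx;   // horizontal offset scales with depth
    p.y = (v - cy) * z_mm / fy;   // vertical offset scales with depth
    return p;
}
```

For example, a pixel at the principal point maps to (0, 0, z), and pixels further from the center map to larger lateral offsets in proportion to their depth.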
I'm trying to save images (in PXCImage format) taken by the Creative Camera onto my hard drive. Is there a way to do this?
Can anyone help me?
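One approach, once you have copied the raw RGB pixels out of the PXCImage into a plain buffer, is to write them to disk yourself in a simple format. A minimal sketch of a binary PPM (P6) writer; note this assumes you already have a tightly packed width*height*3 RGB buffer, and that the SDK's color stream may deliver BGR order, in which case you would swap channels first:

```cpp
#include <cstdio>
#include <cstdint>

// Write a 24-bit RGB buffer to a binary PPM (P6) file.
// 'rgb' must be width*height*3 bytes, row-major, RGB order.
// Returns true on success.
bool writePPM(const char* path, const uint8_t* rgb, int width, int height) {
    FILE* f = fopen(path, "wb");
    if (!f) return false;
    fprintf(f, "P6\n%d %d\n255\n", width, height);   // PPM header
    size_t expected = (size_t)width * height * 3;
    size_t written = fwrite(rgb, 1, expected, f);
    fclose(f);
    return written == expected;
}
```

PPM files open in most image viewers and converters, so this is an easy way to verify the capture pipeline before moving to PNG/JPEG via a proper image library.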
Why can't I run the demos Augmented Farm and DepthBall?
My OS is Windows 7 64-bit, and I am new to Intel Perceptual Computing.
When using multiple Intel depth cameras, what can be done to avoid IR interference? Another topic I found on this forum indicates that the angle between the two cameras should be greater than or equal to 90 degrees:
Just a quick post to point out some errors I've found in the myFirstApp Tutorial. Please correct me if I'm wrong.
-The path they tell you to add under Properties -> VC++ Directories (page 7) does not match the path shown in the screenshot. After some fiddling around, the path depicted in the screenshot turned out to be the correct one:
$(PCSDK_DIR)/lib/$(Platform) and NOT $(PCSDK_DIR)/lib/$(PlatformName)
Dear Intel Real Sense (RS) Developers,
I am working with the latest version and samples, and I have also kept an earlier SDK version, 7383 Gold. The earlier version has the original emotion/smile recognition capability.
I am doing research in affective computing and would appreciate any update as to when the emotion capabilities will be reinstated in the RS SDK.
Thanking you in advance,
I am working with the Perceptual Computing SDK. My goal is to get the RGB data and depth data (x, y, z) of every frame and save them to files; in other words, for each image I need both its RGB data and its depth data. But the image and depth resolutions are not the same: for example, the raw-stream sample outputs a 640x480 color image while the depth stream outputs only 320x240. How can I get the depth data (x, y, z) for every pixel of the color image? I don't know which SDK function can do this.
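The exact color-to-depth correspondence should come from the SDK's UV map / projection interface, since the two sensors are physically offset. But as a rough first approximation you can map each 640x480 color pixel to the nearest 320x240 depth pixel by scaling the coordinates. A minimal sketch (the function name and the nearest-neighbor scheme are illustrative, not an SDK API):

```cpp
#include <cstdint>
#include <vector>

// Rough nearest-neighbor lookup: map a color pixel (colorX, colorY)
// in a colorW x colorH image to the corresponding pixel in a
// depthW x depthH depth map by scaling the coordinates.
// NOTE: this ignores the physical offset between the two sensors;
// for an exact mapping use the SDK's UV map / projection interface.
uint16_t depthAtColorPixel(const std::vector<uint16_t>& depth,
                           int depthW, int depthH,
                           int colorX, int colorY,
                           int colorW, int colorH) {
    int dx = colorX * depthW / colorW;   // e.g. 640 -> 320: halve x
    int dy = colorY * depthH / colorH;   // e.g. 480 -> 240: halve y
    if (dx >= depthW) dx = depthW - 1;   // clamp the bottom/right edge
    if (dy >= depthH) dy = depthH - 1;
    return depth[dy * depthW + dx];
}
```

With 640x480 color and 320x240 depth, each depth pixel simply covers a 2x2 block of color pixels; the clamping handles the last row/column when the ratio is not exact.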
I'm new to Intel Perceptual Computing and quite new to C++ (last used it at least 10 years ago).
All the samples work fine.
I'm trying to use the SDK without the utility library.
I read the SDK documentation and everything was clear; there was nothing I could not understand. I tried to implement my own utility classes to start catching gestures, but I ran into a problem whose cause I can't work out, and I hope someone can help me.
My library initializes those instances: