Is there a simple way to get the depth information for a point given its coordinates in the RGB image (a kind of inverse of the UV map)?
Not for now. We have a new projection interface in the beta 3 release, but the implementation of the color-coordinates-to-depth-coordinates mapping is not ready yet.
Is it not possible to use the UV map to map depth to color?
Thanks, David, for your response. Actually, I downloaded the new SDK yesterday and saw this new interface. I hope it will be completed for the final release. By the way, do you know a way to access the calibration data of the camera so that I could try it myself?
@yosun: what I want to do is the inverse. I want to get the depth of a point in the color picture, i.e. get the coordinates in the depth image from the coordinates in the color image. It could be done by scanning the UV map to find the RGB coordinates, but I think there could be a better way to do this.
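The scanning approach mentioned above can be sketched as a brute-force nearest-neighbor search over the UV map. This is only an illustration: the array layout assumed here (one normalized `(u, v)` color coordinate per depth pixel) is a common convention in RGB-D SDKs, not necessarily this SDK's exact format.

```python
import numpy as np

def color_to_depth_bruteforce(uv_map, color_xy, color_size):
    """Find the depth pixel whose UV entry maps closest to a given
    color pixel, by scanning the whole UV map. Assumed layout:
    uv_map[y, x] = normalized (u, v) color coordinates for the
    depth pixel (x, y)."""
    h, w, _ = uv_map.shape
    # Convert normalized UV entries to color-pixel coordinates.
    mapped = uv_map * np.array(color_size, dtype=np.float32)
    # Squared distance from every depth pixel's mapped position
    # to the requested color pixel.
    d2 = np.sum((mapped - np.array(color_xy, dtype=np.float32)) ** 2, axis=2)
    idx = int(np.argmin(d2))
    return idx % w, idx // w   # (x, y) in the depth image

# Tiny synthetic example: a regular UV map for a 4x4 depth image
# mapped onto an 8x8 color image.
ys, xs = np.mgrid[0:4, 0:4]
uv = np.stack([(xs + 0.5) / 4.0, (ys + 0.5) / 4.0], axis=2).astype(np.float32)
print(color_to_depth_bruteforce(uv, (5, 3), (8, 8)))  # -> (2, 1)
```

Scanning the full map per query is O(width x height), which is why a direct inverse mapping in the SDK would be preferable.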
Any recommendation on how this new interface might work? The mapping from color to depth isn't unique: there are many possible mappings from color to depth.
Yes, indeed, 3D reconstruction from the RGB image needs knowledge of the depth at each point. In my software, I currently need to invert only a few points, not the full image. I've implemented it with an iterative method: I use the last known depth for the point I'm processing, and if the point is not near the edge of an object (where the depth changes abruptly), it refines automatically with every frame.
This works quite well for me. But to develop it, I needed the camera calibration, which is not directly available through the SDK. I first hard-coded the values, and then found a way to read them from customized properties. We really need these values documented to help developers; there is no reason to hide them. The same applies to the accelerometer values, which would be incredibly valuable for calibrating the camera/screen system (but that is off-topic here...).
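The iterative method described above can be sketched as follows. All intrinsics and the color-to-depth translation below are made-up illustration values under an assumed pinhole model, not the camera's real calibration.

```python
import numpy as np

FX_C, FY_C, CX_C, CY_C = 600.0, 600.0, 320.0, 240.0   # color intrinsics (assumed)
FX_D, FY_D, CX_D, CY_D = 580.0, 580.0, 320.0, 240.0   # depth intrinsics (assumed)
T_CD = np.array([0.025, 0.0, 0.0])  # assumed color->depth translation (meters)

def color_pixel_to_depth_pixel(u, v, z):
    """Given a color pixel (u, v) and a depth guess z (meters),
    deproject with the color intrinsics, shift into the depth
    camera's frame, and reproject with the depth intrinsics."""
    p = np.array([(u - CX_C) * z / FX_C, (v - CY_C) * z / FY_C, z])
    p = p + T_CD
    return FX_D * p[0] / p[2] + CX_D, FY_D * p[1] / p[2] + CY_D

def refine_depth(u, v, depth_image, z0=1.0, iterations=4):
    """Start from the last known depth z0 and refine: each pass maps
    the color pixel into the depth image using the current estimate,
    then reads the depth actually measured there. Away from object
    edges (where depth is locally smooth) this settles quickly."""
    z = z0
    for _ in range(iterations):
        xd, yd = color_pixel_to_depth_pixel(u, v, z)
        xi, yi = int(round(xd)), int(round(yd))
        h, w = depth_image.shape
        if not (0 <= xi < w and 0 <= yi < h):
            break  # fell outside the depth frame; keep the last estimate
        z = depth_image[yi, xi]
    return z
```

On a flat surface a single iteration already lands on the right depth pixel; near depth discontinuities the loop can oscillate, which matches the edge caveat above.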
>>...But to develop it, I required the calibration of the camera...
I've calibrated a camera (for a different tracking application) and I used the following values:
- Focal length of the lens
- Size of the sensor
- Size of the target (calibrator)
- Distance to the target (calibrator)
Once all these values are known, simple trigonometric equations can be used. However, in my case the size of the target is always known (!) and the distance (camera to target) can be calculated with some accuracy. The accuracy of the calculation depends on how far the target is from the camera; if it is too far, the accuracy is poor.
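The trigonometry referred to above reduces to the pinhole similar-triangles relation: an object of known size H at distance D projects to size h on the sensor, with focal length f, so h / f = H / D. The numbers below are illustrative, not a real calibration.

```python
def distance_to_target(focal_mm, target_height_mm, image_height_mm):
    """Camera-to-target distance from the known target size and its
    measured size on the sensor: D = f * H / h."""
    return focal_mm * target_height_mm / image_height_mm

def focal_length(distance_mm, target_height_mm, image_height_mm):
    """Inverse use: calibrate the focal length when the distance
    is known instead: f = D * h / H."""
    return distance_mm * image_height_mm / target_height_mm

# A 200 mm target imaged at 2 mm on the sensor through a 10 mm lens:
print(distance_to_target(10.0, 200.0, 2.0))  # -> 1000.0 (mm)
```

This also shows why accuracy degrades with distance: at long range the image size h becomes small, so a fixed measurement error in h produces a large relative error in D.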
To perform a correct reprojection, you need three calibrations: the intrinsics of the depth camera, the intrinsics of the color camera, and the extrinsics (rotation and translation) between the two.
All these parameters are available in the camera, but are not officially exposed by the SDK.
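How the three calibrations combine for depth-to-color reprojection (the UV map is essentially this computation, precomputed per pixel) can be sketched as below. All matrices are illustrative assumptions, not values read from the camera.

```python
import numpy as np

K_DEPTH = np.array([[580.0,   0.0, 320.0],
                    [  0.0, 580.0, 240.0],
                    [  0.0,   0.0,   1.0]])   # 1) depth intrinsics (assumed)
K_COLOR = np.array([[600.0,   0.0, 320.0],
                    [  0.0, 600.0, 240.0],
                    [  0.0,   0.0,   1.0]])   # 2) color intrinsics (assumed)
R_DC = np.eye(3)                              # 3) extrinsics: depth->color
T_DC = np.array([-0.025, 0.0, 0.0])           #    rotation and translation (m)

def depth_to_color(xd, yd, z):
    """Deproject a depth pixel with K_DEPTH, move the 3D point into
    the color camera's frame with (R_DC, T_DC), then project it with
    K_COLOR."""
    p = z * np.linalg.inv(K_DEPTH) @ np.array([xd, yd, 1.0])
    p = R_DC @ p + T_DC
    q = K_COLOR @ p
    return q[0] / q[2], q[1] / q[2]
```

Note that this direction (depth to color) is well defined, while the reverse needs a depth value before the color pixel can be deprojected, which is exactly why color-to-depth is the harder mapping discussed in this thread.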
Thanks for these details.
>>...All these parameters are available in the camera, but are not officially exposed by the SDK...
That is why my calculations are based on basic laws of optics.
Just here to support the wish for RGB-coordinates-to-depth-coordinates support in the SDK.
It would be very useful, since we could use RGB markers to improve depth tracking of points of interest.