How to get 3D metric position given a pixel position + depth ?

Hi all,

I am wondering how I could get the 3D position (in metric scale) of a point relative to the camera's reference frame.

Is the function PXCProjection::ProjectImageToRealWorld supposed to do this?

Thanks in advance


Yes, PXCProjection::ProjectImageToRealWorld will help you to get the real world coordinates.
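The deprojection being discussed can be sketched with the usual pinhole-camera math. This is a minimal sketch, not the SDK's actual implementation: the helper name is mine, and fx, fy, cx, cy stand for the depth camera's intrinsics (focal lengths and principal point), which the SDK handles internally.

```cpp
// Pinhole-model deprojection sketch: a depth-image pixel (u, v) with
// metric depth z is lifted to a 3D point in the camera coordinate frame.
// fx, fy, cx, cy are the depth camera intrinsics (assumed known).
struct Point3D { float x, y, z; };

Point3D deprojectPixel(float u, float v, float z,
                       float fx, float fy, float cx, float cy) {
    Point3D p;
    p.x = (u - cx) * z / fx;   // metric X: left/right of the optical axis
    p.y = (v - cy) * z / fy;   // metric Y: above/below the optical axis
    p.z = z;                   // metric Z: distance along the optical axis
    return p;
}
```

A pixel at the principal point maps to (0, 0, z), and points farther from the image center spread out proportionally to their depth.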

Thanks David, that's exactly what I needed.

Dear David and Eldar, we are having some problems getting real-world coordinates. Do you have any sample or documentation for PXCProjection::ProjectImageToRealWorld? Thank you very much for your attention!

You can find more information regarding PXCProjection::ProjectImageToRealWorld at


I guess I have the same problem. I have a set of 3D spheres that I'd love to follow my fingertip positions. I used the positionImage and positionWorld fields, but they don't seem very suitable. I can work out the x and y from the image, but the Z is still inconsistent in the 3D world.


David and Giancarlo, thank you for the comments. We solved our problem using PXCMGesture.GeoNode. We've used the positionWorld property and now we can follow the nodes that we want. Thank you!

Hi all,

The documentation for ProjectImageToRealWorld is a little short. For example, I have face landmarks in color-image coordinates, but it seems that ProjectImageToRealWorld needs depth-image coordinates.

LandmarkData always contains 0 as its z value, so I want to rebuild the z from the depth image.

Unfortunately, PXCProjection::MapColorCoordinatesToDepth does not seem to be implemented, as it always returns -1.

How could I get real-world coordinates from LandmarkData?


So I have implemented a version of MapColorCoordinatesToDepth that searches over depth coordinates and keeps the one for which MapDepthToColorCoordinates returns the color coordinates closest to the target.

This is a little overkill but it works...
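The brute-force inversion described above can be sketched like this. Note that mapDepthToColor is a hypothetical stand-in for PXCProjection::MapDepthToColorCoordinates (here replaced by a toy 2x scaling so the sketch runs on its own); only the search strategy mirrors the workaround in the post.

```cpp
#include <limits>

struct Pixel { int x, y; };

// Toy forward mapping, standing in for the SDK's depth->color projection.
// (A real 320x240 depth image roughly covers a 640x480 color image.)
Pixel mapDepthToColor(int dx, int dy) {
    Pixel p = { dx * 2, dy * 2 };
    return p;
}

// Invert the mapping by exhaustive search: scan every depth pixel and
// keep the one whose projection lands closest to the target color pixel.
Pixel mapColorToDepthBruteForce(int cx, int cy, int depthW, int depthH) {
    Pixel best = { 0, 0 };
    float bestDist = std::numeric_limits<float>::max();
    for (int dy = 0; dy < depthH; ++dy) {
        for (int dx = 0; dx < depthW; ++dx) {
            Pixel c = mapDepthToColor(dx, dy);
            float ex = float(c.x - cx), ey = float(c.y - cy);
            float dist = ex * ex + ey * ey;   // squared pixel distance
            if (dist < bestDist) {
                bestDist = dist;
                best.x = dx;
                best.y = dy;
            }
        }
    }
    return best;
}
```

As the post says, this is overkill (a full scan per lookup); caching the forward mapping once per frame would avoid rescanning for every landmark.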


I have a question regarding how the ProjectImageToRealWorld function works. Does it multiply all coordinates by some predefined matrix? Is there a way to obtain this information? For the kind of algorithms I use, I need to access the raw depth data, do some preprocessing, and then transform it offline, where I don't have access to the PCSDK. Is this possible at all?
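Regarding the "predefined matrix" question above: under the usual pinhole model, the mapping is not one constant matrix multiply, because each pixel's result is also scaled by that pixel's own depth. In homogeneous form it is P = z * K^-1 * (u, v, 1)^T, where K is the 3x3 intrinsic matrix. So if the intrinsics (fx, fy, cx, cy) are saved alongside the raw depth data, the transform can be reproduced offline without the PCSDK. A sketch (the intrinsic values used are placeholders, and whether the SDK exposes the intrinsics directly is not confirmed here):

```cpp
#include <array>

using Mat3 = std::array<std::array<float, 3>, 3>;

// Inverse of the 3x3 pinhole intrinsic matrix K for given fx, fy, cx, cy.
Mat3 inverseK(float fx, float fy, float cx, float cy) {
    return {{{ 1.0f / fx, 0.0f,      -cx / fx },
             { 0.0f,      1.0f / fy, -cy / fy },
             { 0.0f,      0.0f,       1.0f   }}};
}

// Offline deprojection: P = z * K^-1 * (u, v, 1)^T.
std::array<float, 3> deproject(float u, float v, float z, const Mat3& Kinv) {
    std::array<float, 3> uv1 = { u, v, 1.0f };
    std::array<float, 3> p = { 0.0f, 0.0f, 0.0f };
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c)
            p[r] += Kinv[r][c] * uv1[c];   // ray direction K^-1 * (u, v, 1)
    for (int r = 0; r < 3; ++r)
        p[r] *= z;                         // depth scales the ray per pixel
    return p;
}
```

Expanded, this gives the familiar x = z * (u - cx) / fx and y = z * (v - cy) / fy, which is why a single fixed matrix (without the per-pixel depth factor) cannot capture it.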
