Multiple Depth Cameras - Interference/Risks

When using multiple Intel depth cameras, what can be done to avoid IR interference? Another topic I found on this forum indicates that the angle between the two cameras should be greater than or equal to 90 degrees.

When experimenting with two cameras, I noticed that if they are fairly close in orientation (~30 degrees apart), the returned depth maps flicker frequently, which I assume is the result of this infrared interference. However, I just want to confirm that increasing this angle to 90 degrees or above will not damage either camera, since in that case the depth sensor could be receiving IR directly from the other camera's IR source rather than reflected light. In that scenario, I imagine the intensity of the received light could exceed what the camera was designed for.

Thanks for your assistance!


By 90 degrees I think they meant perpendicular to one another, not facing one another.  However, note there is no way to "damage" the sensor with IR light - the pixel will simply saturate if it gets too much light, like when you point a 2D cellphone imager at the sun and it "blooms".

The idea of the perpendicular position is that each camera ideally sees only its own signal reflected back to it. If a camera does pick up another camera's direct signal or reflection, it will map it as depth or noise, depending on the intensity of the signal. In general, the camera was designed to face a single user working at a PC, so it was assumed there was only one camera to worry about. Obviously, once the camera went into the wild people tried lots of different things with it, so we are starting to get feedback like this.
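As a quick sanity check on the perpendicular-placement advice, the angle between two cameras' optical axes can be computed from their forward direction vectors with a dot product. This is just an illustrative sketch; the vectors below are made-up example orientations, not values read from any camera API:

```python
import math

def angle_between_deg(a, b):
    """Angle in degrees between two 3D direction vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos_theta = max(-1.0, min(1.0, dot / (na * nb)))
    return math.degrees(math.acos(cos_theta))

# Example: two cameras mounted roughly 30 degrees apart (the flickering case above)
cam_a = (0.0, 0.0, 1.0)  # facing straight ahead
cam_b = (math.sin(math.radians(30)), 0.0, math.cos(math.radians(30)))

theta = angle_between_deg(cam_a, cam_b)
print(f"separation: {theta:.1f} degrees")
if theta < 90.0:
    print("warning: overlapping IR fields likely; expect depth flicker")
```

With the example vectors this reports roughly a 30-degree separation and prints the warning; perpendicular mounts (90 degrees or more) pass the check.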

Back in the lab we got multiple cameras to work with overlapping fields by changing the duty cycle of the IR modulation for each camera; each camera then registers its own duty cycle and ignores its neighbors (a more radical change would be to swap the IR emitter for each camera to get the same result). So a workaround is "possible", but it is not a public release and is not planned for the off-the-shelf Senz3D today.


Would it be possible to have a software-controlled duty cycle via the API? We have an application that used eight Xtion Pro units with IR disable/enable, and we would like to replace these with the Creative units.
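For reference, the IR enable/disable approach mentioned above amounts to time-multiplexing: only one emitter is active at a time, so the cameras never see each other's projected pattern. A minimal sketch of that round-robin loop follows; the `Camera` class here is a stand-in, since the real emitter toggle and frame grab would go through the vendor SDK:

```python
import time

class Camera:
    """Stand-in for a depth camera handle; a real one wraps the vendor SDK."""
    def __init__(self, name):
        self.name = name
        self.emitter_on = False

    def set_emitter(self, on):
        self.emitter_on = on

    def grab_depth_frame(self):
        # A real implementation would return a depth map from the SDK here.
        assert self.emitter_on, "grabbing with the emitter off yields no depth"
        return f"frame-from-{self.name}"

def round_robin_capture(cameras, rounds=1, settle_s=0.0):
    """Capture one frame per camera per round, with only one IR emitter
    active at any moment to avoid cross-camera interference."""
    frames = []
    for _ in range(rounds):
        for cam in cameras:
            cam.set_emitter(True)
            time.sleep(settle_s)  # optionally let the projector settle
            frames.append(cam.grab_depth_frame())
            cam.set_emitter(False)
    return frames

cams = [Camera(f"cam{i}") for i in range(8)]
frames = round_robin_capture(cams)
print(frames[:2])
```

The obvious cost is frame rate: with eight cameras sharing the timeline, each one captures at roughly one-eighth of its native rate, which is why a per-camera modulation scheme would be preferable.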

Could we please have such a method? There are MANY applications where multiple cameras are needed, so I am sure that we will not be the only ones!

Thanks in advance for the consideration. 

Chris Aiken

Aiken Development LLC

Yes, what you describe is "possible", but it is down on a very long to-do list. For example, we are also working on the following:

- As several of you know, SoftKinetic has created a driver that lets the DS325/Senz3D see mid range (2-3 meters) and long range (up to 5 meters); it is going through alpha testing right now. This is being released on a case-by-case basis (you have to have a DS325, not a Senz3D, to get on the alpha list). Now that I think of it, this technique uses a different modulation, so pairing this driver with the original driver might let two cameras look at each other without interference (though one camera would see to 1 meter and the other farther), but I would need to check this before committing for sure. In any event, this wouldn't help your 8-camera use case....

- We have a driver that corrects much of the radial distortion in the DS325/Senz3D, giving a more uniform depth image against flat surfaces (this issue does not show up when the camera is used for gesture control, but now that people are using the camera for other use cases we realized we needed better calibration). This is also in beta test and should reach general availability later this summer.

- We are taking some of the filtering/smoothing algorithms and putting them directly into the driver. This code already exists in the Perceptual Computing stack, so this will help users get a better image when using the camera for non-gesture use cases.

- We are finishing up an Android driver.

So multi-camera support is on the "to-do" list but a ways down, and like any company we have limited resources to throw at projects.  If someone has a reason we should move something up the list, the best way to move it up is a solid financial incentive....
