
The article doesn't mention the main drawback of ToF depth sensing: multipath errors. These originate from light bouncing around the scene before returning to the detector, causing dents and distortions in the resulting depth maps near angled surfaces.

They are a big problem in built environments, which are full of 90-degree corners that act as retroreflectors for the signal. To my knowledge, none of the ToF sensor manufacturers (MS, Sony, PMD, Samsung, etc.) has solved this problem. If anyone has, please let me know, as the topic is of professional interest to me.



There has been some work on resolving multipath errors with indirect ToF. Chronoptics (I'm a cofounder) recently licensed our technology to Melexis for automotive use.

Here's a blog post I wrote about resolving multipath https://medium.com/chronoptics-time-of-flight/multipath-inte...

And a link to the announcement from Melexis https://www.melexis.com/en/news/2021/4mar2021-melexis-announ...


Could this be used with SAR as well?


Maybe; it would be very interesting to investigate.


There are practical mitigations for the problem, especially if you want to filter away points from these surfaces (and are OK with dropout regions in the depth image). Some of these sensors produce useful per-pixel confidence values which do a reasonable job identifying regions with multipath errors, and various types of spatial/temporal filters work so-so in handling small distortions. The K4A sensors are perhaps a bit overeager in their spatial filter, leading to slightly over-smoothed edges, though.
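As a rough sketch of the confidence-based filtering approach (the function name and threshold here are hypothetical, not any particular SDK's API):

```python
import numpy as np

def filter_low_confidence(depth, confidence, conf_threshold=0.5):
    """Invalidate depth pixels whose confidence falls below a threshold.

    depth and confidence are 2-D arrays of the same shape; suspect
    pixels become NaN, producing dropout regions rather than wrong
    distances in multipath-affected areas.
    """
    filtered = depth.astype(float).copy()
    filtered[confidence < conf_threshold] = np.nan
    return filtered

# Toy example: a 2x2 depth map where one pixel is suspect.
depth = np.array([[1.0, 1.2], [1.1, 3.7]])        # metres
confidence = np.array([[0.9, 0.8], [0.85, 0.2]])  # range 0..1
clean = filter_low_confidence(depth, confidence)
# The single low-confidence pixel becomes a NaN dropout.
```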

You can always try combining ToF sensors with other types, like stereo, and hope that the failure modes of the different types are mostly distinct.

The EpiScan3D and EpiToF cameras are probably the closest to "solving" reflective subjects, but they are basically one-off benchtop prototypes and nowhere near products.


The article is also a bit optimistic about outdoor use with direct sunlight exposure. The sensors I tested in the past just didn't work at all.


There are two issues with sunlight and iToF. First, sunlight photons saturate the pixel, and you get no useful measurement. Second, the dominant noise source in iToF is photon shot noise [1], so sunlight photons contribute heavily to noise. The mitigations are to increase your laser power, use better optical notch filters, decrease the sensor integration time, and do more image filtering. I'm a cofounder at Chronoptics and we've developed a sunlight-capable ToF camera, https://youtu.be/7vMI37S0w3Q

[1] https://en.wikipedia.org/wiki/Shot_noise
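To make the shot-noise point concrete, a back-of-the-envelope sketch (photon counts are made-up illustrative numbers): noise grows with the square root of all collected photons, including ambient ones, so sunlight dilutes SNR even before the pixel saturates.

```python
import math

def shot_noise_snr(signal_photons, ambient_photons):
    """SNR under photon shot noise: signal over sqrt(total photons).

    Shot noise has standard deviation sqrt(N) for N collected photons,
    and ambient (sunlight) photons add to N without adding signal.
    """
    total = signal_photons + ambient_photons
    return signal_photons / math.sqrt(total)

indoors = shot_noise_srn = shot_noise_snr(10_000, 1_000)       # little ambient light
outdoors = shot_noise_snr(10_000, 1_000_000)                   # sunlight dominates
# Same laser signal, but the outdoor SNR is roughly a tenth of
# the indoor value.
```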


You can absolutely make ToF sensors work outdoors in direct sunlight, but you need to operate at different frequencies that require more expensive emitters and detectors to avoid being saturated by light from the sun.


Which sensor was that, stefan_? Maybe I worked on it.


It only matters if you need physically accurate data, i.e. if your system can't process the multipath error and correct for it. A bat "sees" in multipath error and has no problem with it. I assume a machine vision system can learn to perform with multipath error and indirectly account for it: it sees an apple with multipath error and still knows it's an apple.

There are options to fix multipath and recover the underlying ground truth:

1) You can do a reverse raytrace and iteratively correct for the error. This is somewhat expensive, but there are tricks and shortcuts to accelerate it.

2) A hardware fix: measure the multipath component separately and subtract or correct for it. There are several ways to do this, and some patents on it that I've worked on. The same methods can also remove the background signal from ambient light.
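For option 2, an iToF measurement at one modulation frequency is a complex phasor, and multipath returns add linearly, so a separately measured multipath component can simply be subtracted. A minimal sketch under that linearity assumption (the hardware measurement of the bounce term is the hard part, and all numbers here are illustrative):

```python
import cmath
import math

C = 299_792_458.0  # speed of light, m/s

def phasor(distance_m, amplitude, freq_hz):
    """Complex phasor for a return at the given one-way distance."""
    phase = 4 * math.pi * freq_hz * distance_m / C
    return amplitude * cmath.exp(1j * phase)

def depth_from_phasor(p, freq_hz):
    """Recover distance from phase (within one ambiguity interval)."""
    phase = cmath.phase(p) % (2 * math.pi)
    return phase * C / (4 * math.pi * freq_hz)

f = 20e6  # 20 MHz modulation
direct = phasor(2.0, 1.0, f)   # true surface at 2.0 m
bounce = phasor(3.5, 0.4, f)   # weaker, longer multipath return
measured = direct + bounce     # what the pixel actually integrates

corrupted = depth_from_phasor(measured, f)           # biased long
corrected = depth_from_phasor(measured - bounce, f)  # back to ~2.0 m
```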


If you know that the surfaces in your data should be flat and the corners sharp, it may be possible to filter out the fly pixels pretty well in a post-processing step. Of course, if you can't make these assumptions, then the problem is ill-posed at post-processing time.

Presumably my cursory experience with this from half a decade ago is not news to you, given your professional interest in the topic, but maybe you can elaborate on why this is not a feasible solution in your case?
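The kind of fly-pixel filter I mean is roughly this (thresholds and window size are illustrative, and it assumes locally flat surfaces, which is exactly where it becomes ill-posed):

```python
import numpy as np

def remove_fly_pixels(depth, jump_threshold=0.1):
    """Drop 'fly pixels': isolated depths far from their neighbourhood.

    Compares each interior pixel to the median of its 3x3 window and
    invalidates it (NaN) when the difference exceeds jump_threshold
    (in the same units as depth). Assumes surfaces are locally flat,
    so genuinely spiky geometry would be filtered away too.
    """
    h, w = depth.shape
    out = depth.astype(float).copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = depth[y - 1:y + 2, x - 1:x + 2]
            if abs(depth[y, x] - np.median(window)) > jump_threshold:
                out[y, x] = np.nan
    return out

# A flat wall at 1 m with one fly pixel floating 0.5 m in front of it.
depth = np.full((5, 5), 1.0)
depth[2, 2] = 0.5
clean = remove_fly_pixels(depth)
```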


Would it help to use multiple sensors at different positions and use the common points to filter out the 2nd+ bounces?

I think this would work for, say, a mirror but what about something like brushed metal?

Are those other bounces scattered enough that multiple perspectives still produce an error?



