
These lenticular autostereoscopic displays have been around for a long time and never quite took off. There have even been 3D TVs (Philips) using this idea. I am not quite sure what is "new" here apart from yet again abusing the "holographic" term (hint: it has zero to do with holograms or holography).

The major issues with these are the limited viewing angles and the enormous bandwidth needed both to render the individual points of view and to actually transfer them to the screen. Heck, a lot of computer games have trouble generating stereoscopic (i.e. 2-image) content at the 60 or 90 fps required by VR headsets such as the Rift or Vive these days. And these guys want to push 45 distinct images at 60 fps?

Good luck with that, especially at that ridiculous price for such a tiny screen.
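For a rough sense of the scale of 45 views at 60 fps, a back-of-the-envelope sketch (the per-view resolution here is my own assumption, not the product's spec):

    # Back-of-the-envelope only; the panel resolution is a made-up assumption.
    views = 45                   # distinct viewpoints the display multiplexes
    width, height = 2560, 1600   # hypothetical per-view resolution
    fps = 60
    bytes_per_pixel = 3          # 24-bit RGB, uncompressed

    raw = views * width * height * fps * bytes_per_pixel
    print(f"{raw / 1e9:.1f} GB/s uncompressed")   # roughly 33 GB/s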

These guys offer a 4k, 50" screen: http://www.ultra-d.com/



This isn't quite the same as what you're talking about. The lenticular displays only vary in the x-dimension, but this display seems to work in both dimensions, which would probably create a noticeably better 3D effect.



Does it? The demo videos only move the camera side-to-side. I can't find anything detailing whether the image is holographic in the Y dimension.


Yes, the problem with light-field displays (and cameras) is that they need massive bandwidth.

But there are ways to mitigate it somewhat. It's possible to use a Gaussian or random distribution of light rays so that you end up with only about 10 rays per pixel on average, instead of the 45 here.

But yes, expect to need a massive increase in bandwidth for light-field holographic displays. This includes VR headsets, where you could finally have a display without the limited field of view of current headsets. A headset can also focus the light rays toward the range of each eye, which cuts down on bandwidth.
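A toy sketch of that subsampling idea (the numbers and names here are mine, purely for illustration):

    import random

    # Instead of emitting all 45 view directions from every pixel, keep a random
    # subset so the average is about 10 rays per pixel.
    NUM_VIEWS = 45
    KEEP_PROBABILITY = 10 / NUM_VIEWS

    def rays_for_pixel(rng):
        # Each direction is kept independently, so the expected count is ~10.
        return [v for v in range(NUM_VIEWS) if rng.random() < KEEP_PROBABILITY]

    rng = random.Random(0)
    print(len(rays_for_pixel(rng)))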


If you don't display the full raster, you're aliasing the signal and will see a lot of view-dependent artifacts (twinkling).

Source: I work in light fields.


Which is why I said you need to distribute the rays randomly, in a Gaussian distribution. Don't just arrange them in perfect rows and columns - that's the worst arrangement and the one that guarantees aliasing. You can shift each pixel (or subpixel) lens around slightly in a permanent pattern.

Anti-aliasing techniques are common elsewhere in 3D graphics and can be used just as well for light fields.
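Something like the classic jittered-grid pattern from rendering, sketched here with arbitrary parameters:

    import random

    # Jittered (stratified) pattern: one sample per grid cell, offset randomly
    # inside the cell, then frozen as a permanent pattern. This breaks up the
    # regular structure that produces visible aliasing, at the cost of noise.
    def jittered_pattern(nx, ny, rng):
        pattern = []
        for j in range(ny):
            for i in range(nx):
                pattern.append(((i + rng.random()) / nx,
                                (j + rng.random()) / ny))
        return pattern

    rng = random.Random(42)
    print(jittered_pattern(3, 3, rng)[:3])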


That assumes your pixel distribution has much higher potential addressable resolution than the signal you care about... in which case, why not just display the full raster?

Rendering for 2D and display are two different beasts -- but I'll own the fact that I'm not formally trained in this and there may be subtleties -- or even obvious signal processing facts -- that I'm getting wrong. (But if I'm wrong, I'd love to know how, for my own edification.)


> That assumes your pixel distribution has much higher potential addressable resolution than the signal you care about.

It doesn't need that at all. Light fields can be interpolated like anything else, just as a Bayer filter is for color on camera sensors, or 4:2:2 chroma subsampling on the signal side. And if you're doing 3D rendering, you can match rays exactly to the display as long as the renderer knows the display's ray distribution.

Interpolation is always going to reduce quality, but it's better than aliasing, so there's a trade-off analysis to be done. I don't know what the result would be, so this is all theoretical.
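As a naive illustration of what that could look like between two adjacent views (names are mine; nothing here is specific to any real display):

    import numpy as np

    # Naive sketch: reconstruct a missing view by blending its two angular
    # neighbours. Real light-field interpolation is much more involved; this is
    # just the simplest possible version of "interpolated like anything else".
    def interpolate_view(view_left, view_right, t=0.5):
        # t is the angular position of the missing view in [0, 1]
        return (1.0 - t) * view_left + t * view_right

    left = np.random.rand(4, 4, 3)    # stand-in images
    right = np.random.rand(4, 4, 3)
    middle = interpolate_view(left, right)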


With all due respect, interpolating light fields is far less trivial than you make it out to be. It's a 4D field, and naive interpolation leads to loss of detail, and often edge doubling (itself a form of aliasing).

Furthermore, if you're interpolating rays, you're necessarily not doing what you originally proposed, which is to only light up a (random or pseudorandom or evenly distributed) subset of the pixel display elements, presumably to save on rendering cycles.

Let me just say, more generally, that intuition trained on 2D doesn't apply directly to light fields.
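To make the 4D point concrete, here's a minimal sketch of the kind of naive lookup being criticized (toy data, not a real pipeline):

    import numpy as np
    from scipy.ndimage import map_coordinates

    # A light field is L(u, v, s, t), so looking up an arbitrary ray means
    # interpolating across four dimensions at once. order=1 gives multilinear
    # (here quadrilinear) interpolation - exactly the naive scheme that blurs
    # detail and doubles edges.
    def sample_ray(lf, u, v, s, t):
        return map_coordinates(lf, [[u], [v], [s], [t]], order=1)[0]

    lf = np.random.rand(8, 8, 16, 16)   # tiny stand-in light field
    print(sample_ray(lf, 3.5, 3.5, 7.2, 7.8))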


Tru dat. Interpolation will probably be too difficult in 4-D space.


Compression is needed and probably not that difficult, given that not much extra information is rendered compared to a normal display because of the limited viewing angle. With proper compression, probably only about twice the bandwidth would be needed.
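A toy sketch of the redundancy that makes this plausible (illustrative only; a real codec would use disparity-compensated prediction rather than raw differences):

    import numpy as np

    # Store one base view plus small residuals for the neighbouring views;
    # the residuals compress far better than the raw views themselves.
    def encode_views(views):
        base = views[0]
        residuals = [v - base for v in views[1:]]
        return base, residuals

    def decode_views(base, residuals):
        return [base] + [base + r for r in residuals]

    views = [np.random.rand(4, 4) + 0.01 * i for i in range(5)]   # stand-in views
    base, residuals = encode_views(views)
    assert np.allclose(decode_views(base, residuals), views)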


Figured it was something similar, especially given their very careful filming angles.


The videos are pretty accurate to the experience, having seen them in person a couple times. I don't think there's any cinematic trickery going on.


[flagged]


I feel like you might have landed on http://blog.lookingglassfactory.com. Their main website is http://lookingglassfactory.com



