These lenticular autostereoscopic displays have been around for a long time and never quite took off. There have even been 3D TVs (Philips) using this idea. I am not quite sure what is "new" here, apart from yet again abusing the "holographic" term (hint: it has nothing to do with holograms or holography).
The major issues with these are the limited viewing angles and the enormous bandwidth needed both to render the individual points of view and to actually transfer them to the screen. Heck, a lot of computer games have trouble generating stereoscopic (i.e. 2-image) content at the 60 or 90 fps required by VR headsets such as the Rift or Vive these days. And these guys want to push 45 distinct images at 60 fps?
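To put rough numbers on that, here is a back-of-the-envelope Python sketch; the view count and frame rate are from the comment above, while the per-view resolution and bit depth are my own assumptions, not anything from the article:

    # Illustrative estimate of raw, uncompressed bandwidth for a multi-view
    # display. View count and frame rate come from the discussion above; the
    # per-view resolution and bit depth are assumptions for illustration.
    views = 45                   # distinct images per frame
    fps = 60                     # frames per second
    width, height = 1280, 720    # assumed per-view resolution
    bits_per_pixel = 24          # assumed 8-bit RGB

    bits_per_second = views * fps * width * height * bits_per_pixel
    print(f"{bits_per_second / 1e9:.1f} Gbit/s uncompressed")  # ~59.7 Gbit/s

Under those assumptions that's roughly 20x a single uncompressed 1080p60 stream, before any compression.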
Good luck with that, especially for that ridiculous price for a tiny screen.
This isn't quite the same as what you're talking about. The lenticular displays only vary in the x-dimension, but this display seems to work in both dimensions, which would probably create a noticeably better 3D effect.
Yes, the problem with light-field displays (and cameras) is that they do need massive bandwidth.
But there are ways to mitigate it somewhat. It's possible to use a Gaussian or random distribution of light rays so that you end up with only an average of about 10 rays per pixel, instead of the 45 here.
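As a toy sketch of what "an average of about 10 rays per pixel" could mean, here is one way to subsample the view directions randomly; all numbers are illustrative, not a description of any real display:

    import numpy as np

    # Angular subsampling: for each screen pixel, keep a random subset of the
    # 45 view directions so the average is ~10 rays per pixel instead of 45.
    rng = np.random.default_rng(0)
    pixels, all_views, target_avg = 1000, 45, 10

    # Each (pixel, view) ray is kept independently with probability 10/45.
    keep = rng.random((pixels, all_views)) < target_avg / all_views
    print("average rays per pixel:", keep.sum(axis=1).mean())  # ~10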
But yes, expect to need a massive increase in bandwidth for light-field holographic displays. This includes VR headsets, where you could finally have a display without the limited field of view of current headsets. A VR headset can also focus the light rays towards the range of each eye, which cuts down on bandwidth.
Which is why I said you need to distribute the rays randomly, in a Gaussian distribution. Don't just arrange them in perfect rows and columns; that's the worst arrangement and guarantees aliasing. You can shift each pixel (or subpixel) lens around slightly in a permanent pattern.
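Something like this, as a sketch; the grid size and jitter amount are made-up numbers:

    import numpy as np

    # Jittered sampling: a regular grid of (sub)pixel lens centres, each
    # displaced by a small fixed Gaussian offset. The pattern is irregular
    # but permanent (fixed seed), trading structured aliasing for noise.
    rng = np.random.default_rng(42)        # fixed seed -> permanent pattern
    grid_w, grid_h = 64, 64
    jitter_sigma = 0.25                    # in units of one grid cell

    ys, xs = np.mgrid[0:grid_h, 0:grid_w].astype(float)
    xs += rng.normal(0.0, jitter_sigma, xs.shape)
    ys += rng.normal(0.0, jitter_sigma, ys.shape)
    lens_centres = np.stack([xs, ys], axis=-1)   # (64, 64, 2) jittered positions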
Anti-aliasing techniques are common elsewhere in 3-D graphics, and can be used just as well in light-fields.
That assumes your pixel distribution has much higher potential addressable resolution than the signal you care about... in which case, why not just display the full raster?
Rendering for 2D and display are two different beasts -- but I'll own the fact that I'm not formally trained in this and there may be subtleties -- or even obvious signal processing facts -- that I'm getting wrong. (But if I'm wrong, I'd love to know how, for my own edification.)
> That assumes your pixel distribution has much higher potential addressable resolution than the signal you care about.
It doesn't need that at all. Light fields can be interpolated like anything else, just like the Bayer filter for color on camera sensors or 4:2:2 chroma subsampling on the signal side. And if you're doing 3-D rendering, you can match rays exactly to the distribution on the display, provided the renderer knows that distribution.
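For concreteness, here's roughly what naive interpolation of a two-plane light field L[u, v, s, t] looks like, where (u, v) index the view and (s, t) the pixel. This is only a sketch with toy array sizes (and it only covers the interpolation half, not the exact-ray-matching case); doing this well on real light fields is harder, as the reply below points out:

    import numpy as np

    # Quadrilinear interpolation in a two-plane light field L[u, v, s, t]:
    # a missing ray is estimated from its 16 nearest stored samples.
    def sample_lightfield(L, u, v, s, t):
        coords = (u, v, s, t)
        lo = [int(np.floor(c)) for c in coords]
        frac = [c - l for c, l in zip(coords, lo)]
        value = 0.0
        for corner in range(16):                 # 2^4 corners of the 4-D cell
            idx, weight = [], 1.0
            for axis in range(4):
                bit = (corner >> axis) & 1
                idx.append(min(lo[axis] + bit, L.shape[axis] - 1))
                weight *= frac[axis] if bit else 1.0 - frac[axis]
            value += weight * L[tuple(idx)]
        return value

    L = np.random.default_rng(1).random((8, 8, 32, 32))   # toy 4-D light field
    print(sample_lightfield(L, 3.4, 2.7, 10.2, 15.8))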
Interpolation is always going to reduce quality, but it's better than aliasing, so there's going to be a trade-off analysis that needs to be done. I don't know what the results of that would be, so this is all theoretical.
With all due respect, interpolating light fields is far less trivial than you make it out to be. It's a 4D field, and naive interpolation leads to loss of detail, and often edge doubling (itself a form of aliasing).
Furthermore, if you're interpolating rays, you're necessarily not doing what you originally proposed, which is to only light up a (random or pseudorandom or evenly distributed) subset of the pixel display elements, presumably to save on rendering cycles.
Let me just say, more generally, that intuition trained on 2D doesn't apply directly to light fields.
Compression is needed and probably not difficult: given the limited viewing angle, not much extra information is rendered compared to a normal display. With proper compression you'd probably only need about twice the bandwidth.
> Good luck with that, especially for that ridiculous price for a tiny screen.
These guys offer a 4k, 50" screen: http://www.ultra-d.com/