Hacker News | ticklemyelmo's comments

Much of the criticism of the iPhone photos is the fisheye effect. This is exaggerated because you took the photos from different distances. If the iPhone photos were taken at the same distance, a cropped version of the iPhone photo would have identical perspective.


We are? There are games like this in any modern "arcade" like Dave & Buster's.

If you mean why aren't we building highly specialized hardware like this any more, I'd say it's because most of that complexity has moved into software running on general-purpose hardware, which is infinitely more flexible and maintainable.


Those stores also predominantly sell imported products. They are going to be affected disproportionately by the tariffs.


Better on every axis: security, performance, resource consumption, reliability, verification, documentation...

Code is a pure liability that you accept to get a useful service.


Exactly... which is also why lines of code written is one of the worst possible metrics of productivity, if not the worst. Easy to game, too.


I marked my transition to senior engineer when my net lines of code flipped to negative.

Not that I game that, obviously; it just occurred naturally over a ~4-month period.


> Better on every axis...

How 'bout job security?


Anyone can write no code.


But it takes a master to not write the right piece.


Image quality is extremely important for medical image analysis. A flickering low resolution display is the last thing you'd want a doctor looking at.


I rode in a Waymo for the first time last week. The highest praise I can give it is that it's boring. It drives like a careful human with zero surprises.


Yeah it’s hilarious (frightening) when Tesla cultists say that the surprise in FSD’s behavior makes it exciting.

That attitude is why people who share the roads are tired of the constant excuses from Tesla and its loyalists for this stuff.


"The breakdown is dramatic, as models also express strong overconfidence in their wrong solutions, while providing often non-sensical "reasoning"-like explanations akin to confabulations to justify and backup the validity of their clearly failed responses, making them sound plausible."

It's fascinating how much they anthropomorphize the systems and credit them with emotional, possibly deceitful behaviour, in a paper trying to explain how unintelligent they are.


> It fired my imagination. I was always bad at that game but while playing it I was a Starship Pilot!

"Greetings, Starfighter. You have been recruited by the Star League to defend the frontier against Xur and the Ko-Dan armada."

Yeah, he designed THAT cabinet too.


It's not better though. The STR template processor behaves the exact same way as all the other examples, and it's the one that all the inexperienced devs prone to this kind of injection attack will use.


The approach of defining template processors is definitely better. Moving from unsafe to safe is "just" switching STR to whatever secure processor the team writes.


They won't use it if the APIs for generating HTML/JSON/SQL don't take String (or deprecate the old methods that do). The various APIs can support only their own, safe processors, and if an API doesn't take a String then you can't pass it interpolated strings.
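
A rough sketch of what that could look like with the JEP 430 preview API (SafeSql, Query, and QUERY are made-up names here, not anything in the JDK):

    import java.util.List;

    class SafeSql {
        // The processor returns a domain type instead of a String, so an API
        // that only accepts Query can never be handed an interpolated String.
        record Query(String sql, List<Object> params) {}

        // Join the literal fragments with "?" placeholders and keep the
        // interpolated values separate, so they get bound, not concatenated.
        static final StringTemplate.Processor<Query, RuntimeException> QUERY =
                st -> new Query(String.join("?", st.fragments()), st.values());
    }

    // With --enable-preview on JDK 21/22:
    //   var q = SafeSql.QUERY."SELECT * FROM users WHERE name = \{userName}";
    //   // q.sql()    -> "SELECT * FROM users WHERE name = ?"
    //   // q.params() -> [userName]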


With less than an inch of separation, is the sense of depth even perceptible?


They're probably fusing both lenses with the Lidar and some other tricks to reliably compute a dense surface. That would explain their suggestion not to move the camera very much, as that would cause a large portion of the mesh to be rebuilt. A blogger exported what appears to be two side by side videos, so maybe the view really is narrow or reconstruction happens at playback. There might also be Lidar data in there that he didn't notice.

Apple bought C3 Technologies a decade ago, and they use this technique to fuse photos from low-flying charter planes to produce the 3D view in Apple Maps.

[ Paper: https://ui.adsabs.harvard.edu/abs/2008SPIE.6946E..0DI/abstra... ]

[ Coverage: https://9to5mac.com/2011/10/29/apple-acquired-mind-blowing-3... ]

[ Similar: https://web.stanford.edu/class/ee367/Winter2021/projects/rep... ]


Pure speculation: when combined with the LIDAR depth sensor, the two cameras probably don't need as much physical separation to accurately create a depth map. The bigger problem is the inpainting needed to generate hidden detail when the movie is viewed from angles that are different from the one it was actually filmed from.


My understanding is that very few consumer lidar sensors work well in daylight. It's hard to send out & detect significantly meaningful pulses of light, when there's sunlight all around.

I have an Intel L515, which is pretty remarkable in that you can sometimes get some depth finding outdoors. This is just a hobby item for me, I'm not an expert, but it launched as a fairly impressive long-range and capable $350 USB3 system, and it seems like the market doesn't have much comparable to it. I'd certainly expect phones to be significantly worse.


>My understanding is that very few consumer lidar sensors work well in daylight. It's hard to send out & detect significantly meaningful pulses of light, when there's sunlight all around.

Aren't many "self driving car" sensors lidar? This would imply they can work in daylight - perhaps they don't necessarily depend on light in the sunlight spectrum?

(Or perhaps you don't consider them consumer? Though those cars are consumer products, they're not made for military or industrial use)


Many cars use lidar, but they use much stronger, bigger, higher-power lasers on very expensive and precise rotator assemblies.

The L515 I mentioned was somewhat advanced, at least for its day, because it used MEMS to steer its light source. That gave it leading-class performance for its size, but it's still big and kinda hot-ish. Maybe we can keep scaling that kind of system performance to smaller sizes, but even this package was pretty cutting-edge, gave much better falloff than many competing systems, and was still largely an indoor sensor.


>The bigger problem is the inpainting needed to generate hidden detail when the movie is viewed from angles that are different from the one it was actually filmed from.

It's for spatial video, not for holographic video. When you see a 3D movie in a cinema, it's not like you can look at it from widely different angles and go peek from the side or behind the actors or whatever...


Given that iPhone cameras are ~2.5 cm apart, there needs to be some amount of in-painting when building the stereo image, so that it looks like it was taken with cameras that are ~6.5 cm apart.


I was wondering about the use of the lidar sensor. Notably, they do not say they are using it, but maybe they just wanted to keep it simple? Idk, it seems weird not to use lidar, but it also seems weird not to mention it if they are using it.


3D movies have existed for decades, without adjustable viewing angles.


But if you have eyes 50mm apart, and source material from cameras 15mm apart (plus other depth information), you'll need to in-paint a small amount where your eyes could see "around" something and the cameras can't.
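
A back-of-the-envelope sketch of how big that extra shift gets (pinhole model; the focal length and depths below are made up):

    class BaselineSketch {
        // Pinhole stereo: disparity (in pixels) = f * B / Z, with focal length f
        // in pixels and baseline B / depth Z in metres.
        static double disparityPx(double focalPx, double baselineM, double depthM) {
            return focalPx * baselineM / depthM;
        }

        public static void main(String[] args) {
            double f = 1500;          // assumed focal length, in pixels
            double captured = 0.015;  // 15 mm camera baseline
            double target   = 0.050;  // 50 mm eye separation

            for (double z : new double[] {0.5, 2.0, 10.0}) {
                double have = disparityPx(f, captured, z);
                double want = disparityPx(f, target, z);
                System.out.printf("Z=%4.1fm captured=%6.1fpx synthesized=%6.1fpx extra=%6.1fpx%n",
                        z, have, want, want - have);
            }
            // Near objects need far more extra shift than the background; the
            // difference between the two at a depth edge is the uncaptured strip
            // that has to be in-painted.
        }
    }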


Or you can make do with 15mm-apart worth of "around"?

It's still more "spatial"/3D than a regular (single lens) image.

Plus this has a wide lens and a "regular" lens (actually both wide iirc but one is ultrawide), so it's not like 2 equal lenses 50mm apart like in regular stereoscopic "3d" video.


> Or you can make do with 15mm-apart worth of "around"?

You need to shift close objects further apart between the left and right views than they are in the captured images. Then you need to fill the newly exposed areas with something.

> Plus this has a wide lens and a "regular" lens (actually both wide iirc but one is ultrawide), so it's not like 2 equal lenses 50mm apart like in regular stereoscopic "3d" video.

This doesn't affect anything.


I was responding to

> The bigger problem is the inpainting needed to generate hidden detail when the movie is viewed from angles that are different from the one it was actually filmed from.


Yes, so even to create a fixed view with viewpoints that are further apart than the real cameras are, you have to inpaint hidden detail.


One of those lenses is an Ultra-Wide though, with a _very_ different FoV than the other one.


It uses a crop from the center. Not sure if that crop has the same FOV as the other lens, though. I’d expect so?


I would expect some ML magic in the image processing pipeline that would make it pop out.


Early reviews indicate it is, as some reviewers have had access to spatial video taken from a phone, but I’m not sure if those were ideal conditions or just ad-hoc.


As the two cameras have very different focal lengths, you get a pronounced parallax effect that can be exploited in post.


Two focal lengths at the same physical distance to the subject have exactly the same perspective (i.e. if you crop them to the same area they will look the same). There is no extra information to be had from that.
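
A quick pinhole-model sketch of that (the numbers and units are made up; only the ratio matters):

    class PerspectiveSketch {
        // Pinhole projection: x_image = f * X / Z. Changing f scales every
        // projected point by the same factor, so relative positions (the
        // perspective) don't change; the long lens is just a crop/enlargement
        // of the wide one.
        static double project(double f, double x, double z) {
            return f * x / z;
        }

        public static void main(String[] args) {
            double[][] points = {{0.5, 2.0}, {0.5, 10.0}};  // (X, Z): a near and a far object
            for (double f : new double[] {28, 50}) {        // two focal lengths, same camera position
                double near = project(f, points[0][0], points[0][1]);
                double far  = project(f, points[1][0], points[1][1]);
                System.out.printf("f=%2.0f near=%6.3f far=%6.3f ratio=%4.2f%n",
                        f, near, far, near / far);
            }
            // Same ratio (5.00) at both focal lengths: no new perspective information.
        }
    }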


Depth information can be obtained from the differences in angular position/size of objects within the cameras' FOV. There's a reason a photo taken with a 28mm doesn't look the same as one taken with a 50 from a few steps back.


> a few steps back

Exactly. The steps back change perspective, not the lenses. That’s what I was trying to say above. In the iPhone both lenses are at the same distance to the subject.


A cropped 28mm is indistinguishable from a 50mm or any other longer focal length, relative photosite size notwithstanding.


But one of the lenses isn't a few steps back from the other.


One of the two cameras is the ultra-wide camera, so it gets some additional parallax and visual information beyond just the separation from the other camera.


That’s not how parallax works. The wider field of view of the ultra-wide camera will show some of the scene that the other camera doesn’t see, but over the overlapping parts of the scene the parallax is a strict function of the locations of the two lenses’ entrance pupils.

