
They're about as smart as a person who's kind of decent at every field. If you're a pro, it's pretty clear when it's BSing. But if you're not, the answers are often close enough.

And just like humans, they can be very confidently wrong. When any person tells us something, we assume there's some degree of imperfection in their statements. If a nurse at a hospital tells you the doctor's office is 3 doors down on the right, most people will still glance at the first and second doors to make sure they're not the right ones, then check the nameplate on the third door to verify it. If the doctor's name is Smith but the door says Stein, most people will pause and consider that maybe the nurse made a mistake. We might also consider that she's right, but the nameplate is wrong for whatever reason. So we verify that info by asking someone else, or by going in and asking the doctor themselves.

As a programmer, I'll ask other devs for guidance on topics. Some people can be absolute geniuses but still dispense completely wrong advice from time to time. Oftentimes they'll point me generally in the right direction, but I still need to use my own head to analyze whether it's correct and implement the final solution myself.

The way AI dispenses its advice is quite human. The big problem is it's harder to validate much of its info, and that's because we're using it alone in a room and not comparing it against anyone else's info.

> They're about as smart as a person who's kind of decent at every field. If you're a pro, it's pretty clear when it's BSing. But if you're not, the answers are often close enough.

No, they are not smart at all. Not even a little. They cannot reason about anything beyond whether their training data overwhelmingly agrees or disagrees with their output, nor can they learn and adapt. They are just text compression and rearrangement machines. Brilliant and extremely useful tooling, but if you use them enough it becomes painfully obvious.


Something about an LLM response has a major impact on some people. Last weekend I was in Ft. Lauderdale, FL with a friend who's pretty sharp (licensed architect, decades-long successful career, etc.) and went to the horse track. I've never been to a horse race and didn't understand the betting, so I took a snapshot of the race program, gave it to ChatGPT, and asked it to devise a low-risk set of bets using $100. It came back with what you'd expect: a detailed, very confident answer. My friend was completely taken with it and insisted on following it to the letter. After the race he turned his $100 into $28 and was dumbfounded. I told him "it can't tell the future, what were you expecting?". Something about getting the answer from a computer, or the level of detail, had him convinced it was a sure thing. I don't understand it, but LLMs have a profound effect on some people.

edit: i'm very thankful my friend didn't end up winning more than he bet. idk what he would have done if his feelings towards the LLM were confirmed by adding money to his pocket.


If anything, the main thing LLMs are showing is that humans need to be pushed to up their game. And that desire to be better, I think, will yield a greater supply of high-quality labour than what exists today. I've personally witnessed so many 'so-so' people within firms who don't bring anything new to the table and focus on rent-seeking expenditures (optics), who frankly deserve to be replaced by a machine.

E.g., I read all the time about the gains SWEs are getting. But nobody questions how good of a SWE they even are. What proportion of SWEs can be deemed high quality?
