
> It really makes me believe that the models do not really understand the topic, not even the basics, but just try to predict the text.

This is correct. There is no understanding; there aren't even concepts. It's just math, the same statistical manipulation of words we've been doing in computers for decades, only faster and faster. They're super useful in some areas, but they're not smart, and they don't think.
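To make "just predicting text" concrete, here's a toy sketch in Python. This is my own illustration, not how any real LLM is built: a bigram model that generates text purely from word co-occurrence counts. The corpus and all names are made up.

    import random
    from collections import Counter, defaultdict

    # Toy bigram "language model": predicts the next word purely from
    # how often words followed each other in the training text.
    corpus = "the cat sat on the mat and the cat ate the fish".split()

    followers = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        followers[prev][nxt] += 1

    def next_word(prev):
        # Sample in proportion to observed follower counts -- pure
        # statistics over tokens, no concepts anywhere.
        counts = followers[prev]
        return random.choices(list(counts), weights=list(counts.values()))[0]

    word, out = "the", ["the"]
    for _ in range(8):
        if word not in followers:  # dead end: word was never followed
            break
        word = next_word(word)
        out.append(word)
    print(" ".join(out))

An LLM swaps the count table for a learned neural network conditioned on a much longer context, but the generation loop is the same shape: score candidate next tokens, pick a likely one, append, repeat.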



I’ve never seen so much misinformation trotted out by the laity as I have with LLMs. It’s like I’m in a 19th-century forum with people earnestly arguing that cameras can steal your soul. These people haven’t a clue about the mechanism.


And it's really hard to dislodge those mistaken ideas. I had a conversation with a board member at my last company, and he was going on about AI agents and how awesome they were because one had booked a plane ticket from SF to LA. I tried to explain WHY it can do that, and that it's important to test edge cases rather than just the few things the demos show as working. So I asked him to have it book a flight from SF to Tampa on a morning flight in comfort+ on Delta. He did, and we both watched it load up delta.com; it started to search, then completely lost the plot and clicked random things for another 30 seconds. He brushed it off and said, "well, they'll get it working soon, these are just details." Yeah, because details famously are irrelevant.



