
This whole attitude against AI reminds me of my parents being upset that the internet changed the way they live. They refused to take part in the internet revolution, and now they're surprised that they don't know how to navigate the web. I think that a part of them is still waiting for computers in general to magically disappear, and everything return to the times of their youth.


Indeed — however it’s interesting that, unlike with the internet, computers, or smartphones, the older generation, like the younger, immediately found a use for GPT. This is reflected in the latest Mary Meeker report, where it’s apparent that the /organic/ growth of AI use is unparalleled in the history of technology [1]. In my experience with my own parents’ use, GPT is the first time the older generation has found an intuitive interface to digital computers.

I’m still stunned to wander into threads like this where all the same talking points of AI being “pushed” on people are parroted. Marcus et al can keep screaming into their echo chamber and it won’t change a thing.

[1] https://www.bondcap.com/report/pdf/Trends_Artificial_Intelli...


> I’m still stunned to wander into threads like this where all the same talking points of AI being “pushed” on people are parroted.

Where else would AI haters find an echo chamber that proves their point?


It's wild -- I've never seen a split in the Hacker News audience as persistent as this one. The skeptics read one set of AI articles, everyone else the others; a similar comment will be praised in one thread and down-voted to oblivion in another.


IMO the split is between people who understand the heuristic nature of AI and people who don't, and thus think of it as an all-knowing, all-solving oracle. Your elderly parents having nice conversations with ChatGPT is fine as long as it isn't making big life-changing decisions for them, which already happens today.

You have to know the tool's limits and use cases.


I can’t see that proposed division as anything but a straw man. You would be hard-pressed to find anyone who genuinely thinks of LLMs as an “all-knowing, all-solving oracle” and yet, even in specialist fields, their utility is certainly more than a mere “heuristic”, which of course isn’t to say they don’t have limits. See only Terence Tao’s reports on his ongoing experiments.

Do you genuinely think it’s worse that someone makes a decision, whether good or bad, after consulting with GPT versus making it in solitude? I spoke with a handyman the other day who, unprompted, told me he was building a side-business and found GPT a great aid — of course they might make some terrible decisions together, but it’s unimaginable to me that increasing agency isn’t a good thing. The interesting question at this stage isn’t just about “elder parents having nice conversations”, but about computers actually becoming useful for the general population through an intuitive natural-language interface. I think that’s a pretty sober assessment of where we’re at today, not hyperbole. Even as an experienced engineer and researcher myself, LLMs continue to transform how I interact with computers.


> Do you genuinely think it’s worse that someone makes a decision, whether good or bad, after consulting with GPT versus making it in solitude?

Depending on the decision yes. An LLM might confidently hallucinate incorrect information and misinform, which is worse than simply not knowing.


Yup. Exactly this. As soon as enough people get screwed by the ~80% accuracy rate, the whole facade will crumble. Unless AI companies manage to bring the accuracy up 20 points in the next year, by either limiting scope or finding new methods, it will crumble. That kind of accuracy gain isn't happening with LLMs alone (i.e. foundation models).


Charitably, I don’t understand what those like you mean by the “whole facade” and why you use these old machine learning metrics like “accuracy rate” to assess what’s going on. Facade implies that the unprecedented and still exponential organic uptake of GPT (again see the actual data I linked earlier from Mary Meeker) is just a hype-generated fad, rather than people finding it actually useful to whatever end. Indeed, the main issue with the “facade” argument is that it’s actually what dominates the media (Marcus et al) much more than any hyperbolic pro-AI “hype.”

This “80-20” framing, moreover, implies we’re just trying to asymptotically optimize a classification model or some information retrieval system… If you’ve worked with LLMs daily on hard problems (non-trivial programming and scholarly research, for example), the progress over even just the last year is phenomenal — and even with the presently existing models I find most problems arise from failures of context management and the integration of LLMs with IR systems.


Time will tell.


My team has measurably gotten our LLM feature to ~94% accuracy in widespread, reliable tests. That seems fairly solid, speaking as an SWE rather than a DS or ML engineer, though.
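For what it's worth, the kind of accuracy measurement described above can be sketched in a few lines: run each labeled test case through the feature and count matches. Here `run_feature` is a hypothetical stand-in for the actual LLM-backed call, with canned answers so the snippet is self-contained; the real harness would hit the model.

```python
# Minimal sketch of accuracy testing for an LLM feature.
# `run_feature` is a hypothetical placeholder; its canned answers
# (including one deliberate wrong one) are invented for illustration.

def run_feature(prompt: str) -> str:
    # Stand-in for the real LLM-backed feature under test.
    canned = {"2+2?": "4", "capital of France?": "Paris", "3*3?": "8"}
    return canned[prompt]

test_cases = [
    ("2+2?", "4"),
    ("capital of France?", "Paris"),
    ("3*3?", "9"),  # the stand-in gets this one wrong
]

# Accuracy = fraction of cases where the feature's answer matches.
passed = sum(run_feature(q) == expected for q, expected in test_cases)
accuracy = passed / len(test_cases)
print(f"accuracy: {accuracy:.0%}")  # prints "accuracy: 67%"
```

In practice the hard part is the comparison step: free-form LLM output often needs normalization or a semantic match rather than string equality.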


Yeah, I've had similar results. Even with GPT-o1, I find almost all errors at this point come from the web search functionality and the model taking X random source as an authority. It's interesting that I find my human intelligence in the process is most useful for hand-collecting the sources and data to analyze -- and, of course, for directing the process across multiple LLM queries.


I think there are two problems:

1. AI is a genuine threat to lots of white-collar jobs, and people instinctively deny this reality. See that very few articles here are "I found a nice use case for AI", most of them are "I found a use case where AI doesn't work (yet)". Does it sound like tech enthusiasts? Or rather people terrified of tech?

2. Current AI is advanced enough to have us ask deeper questions about consciousness and intelligence. Some answers might be very uncomfortable and threaten the social contract, hence the denial.


On the second point, it’s worth noting how many of the most vocal and well-positioned critics of LLMs (Marcus/Pinker in particular) represent the still academically dominant, but now known to be losing, side of the debate over connectionism. The 90s anthology Talking Nets is phenomenal for seeing how institutionally marginalized figures like Hinton were until very recently.

Off-topic, but I couldn’t find your contact info and just saw your now closed polyglot submission from last year. Look into technical sales/solution architecture roles at high growth US startups expanding into the EU. Often these companies hire one or two non-technical native speakers per EU country/region, but only have a handful of SAs from a hub office so language skills are of much more use. Given your interest in the topic, check out OpenAI and Anthropic in particular.


Thanks for the advice. Currently I have a €100k job where I sit and do nothing. I'm wondering if I should coast while it lasts, or find something more meaningful.


The timeless dilemma! I don't know how old you are, but just don't let the experience break your internal motivation as it can be hard to recover -- and remember however much 100k is and however much you save, it's still many years at that salary to genuine retirement. I don't know what equity packages are like these days at OpenAI/Anthropic, but especially if you're interested in the topic and have strong beliefs about how AI should play out in the world it's worth considering rolling the dice on a 2-4 year sprint. I imagine SA type positions at either of those companies are some of the most interesting roles, especially working in legacy industries/government [1], since you'd get to see firsthand how/where it's most effective (to say nothing of honing your language skills). Good luck regardless!

[1] https://openai.com/careers/solutions-architect-public-sector... for example - listed salary is 2x your current in the US, not sure what the salary is like in the EU.


I think of the two camps like this: one group sees a lot of value in llms. They post about how they use them, what their techniques and workflows look like, the vast array of new technologies springing up around them. And then there’s the other camp. Reading the article, scratching their heads, and not understanding what this could realistically do to benefit them. It’s unprecedented in intensity perhaps, but it’s not unlike the Rails and Haskell camps we had here about a dozen years ago.


The internet actually enabled us to do new things. AI is nothing of that sort. It just generates mediocre statistically-plausible text.


In the early days of the web, there wasn't much we could do with it other than making silly pages with blinking texts or under construction animated GIFs. You need to give it some time before judging a new technology.


We don't remember the same internet. For the first time in our lives we could communicate by email with people from all over the world. Anyone could have a page to show what they were doing with pictures and text. We had access to photos and videos of art, museum, cities, lifestyles that we could not get anywhere else. And as a non-English guy I got access to millions of lines of written text and audio to actually improve my English.

It was a whole new world that may have changed my life forever. ChatGPT is a shitty Google replacement in comparison, and it's a bad alternative due to being censored in its main instructions.


In the early web, there already were forums. There were chats. There were news websites. There were online stores. There were company websites with useful information. Many of these were there pretty much from the beginning. In the 90s, no one questioned the utility of the internet. Some people were just too lazy to learn how to use a computer or couldn't afford one.

LLMs in their current form have existed since what, 2021? That's 4 years already. They have hundreds of millions of active users. The only improvements we've seen so far were very much iterative ones — more of the same. Larger contexts, thinking tokens, multimodality, all that stuff. But the core concept is still the same, a very computationally expensive, very large neural network that predicts the next token of a text given a sequence of tokens. How much more time do we have to give this technology before we could judge it?
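The "predicts the next token given a sequence of tokens" loop mentioned above can be illustrated without any neural network at all. This toy sketch (corpus and names invented for illustration) substitutes a bigram frequency table for the model, but the autoregressive structure — predict, append, repeat — is the same one an LLM runs at generation time.

```python
# Toy illustration of autoregressive next-token prediction:
# repeatedly pick the most likely successor of the last token.
# A real LLM replaces the bigram table with a large neural network
# conditioned on the whole context, not just the last token.

from collections import Counter

corpus = "the cat sat on the mat the cat ate".split()

# Count bigram frequencies: how often each word follows each word.
bigrams: dict[str, Counter] = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, Counter())[nxt] += 1

def next_token(context: list[str]) -> str:
    """Greedily return the most frequent successor of the last token."""
    return bigrams[context[-1]].most_common(1)[0][0]

tokens = ["the"]
for _ in range(4):
    tokens.append(next_token(tokens))

print(" ".join(tokens))  # prints "the cat sat on the"
```

The "larger contexts" and "thinking tokens" improvements the comment lists all happen around this same loop; they change what the predictor sees, not the predict-append-repeat mechanism itself.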


Perhaps enough time for you to build a good understanding of what it is capable of, and how it is evolving over time?


I have a good enough understanding of what it is capable of, and I remain unimpressed.

See, AI systems, all of them, not just LLMs, are fundamentally bound by their training dataset. That's fine for data classification tasks, and AI does excel at that, I'm not denying it. But creative work like writing software or articles is unique. Don't know about you, but most of the things I do are something no one has ever done before, so they by definition could not have been included in the training dataset, and no AI could possibly assist me with any of this. If you do something that has been done so many times that even AI knows how to do it, what's even the point of your work?


The internet predates the Web; people were playing Muds and chatting on message boards before the first browser was made at CERN.


Of course, but does it mean that my argument is flawed? You're just shifting the discourse, without disproving anything. Do you claim that the web was useful for everyone on day one, or as useful as it is today for everyone?

I could just do the same as GP, and qualify MUDs and BBS as poor proxies for social interactions that are much more elaborate and vibrant in person.


As I pointed out in a different comment, the Internet at least was (and is) a promise of many wondrous things: video call your loved ones, talk in message boards, read an encyclopedia, download any book, watch any concert, find any scientific paper, etc etc; even though it has been for the last 15 years cannibalised by the cancerous mix of surveillance capitalism and algorithmic social media.

But LLMs are from the get-go a bad idea, a bullshit generating machine.


> [...] LLMs are from the get-go a bad idea, a bullshit generating machine.

Is that a matter of opinion, or a fact (in which case you should be able to back it up)?


For real? x) Of course it's my opinion; what are your own comments about "silly gifs" and the "useless early internet" if not opinions? Seriously...


That might be a lack of understanding on my part. I had the impression from your comment that you were implying that there was (and is) hope in internet development (i.e. many people hold a positive opinion about it), but that there cannot be any hope in LLMs (i.e. nobody can build a positive opinion of them, because presumably some hard fact prevents it).

As for what I said, I was just mimicking the comment of GP, which I'll quote here:

> The internet actually enabled us to do new things. AI is nothing of that sort. It just generates mediocre statistically-plausible text.


Delusional take.

I’m not even heavily invested in AI, just a casual user, and it has drastically cut the amount of bullshit that I have to deal with in the modern computing landscape.

Search, summarization, automation. All of this drastically improved with the most superior interface of them all - natural text.


Not OP, but how much of the modern computing landscape bullshit that it cut was introduced in the last 5-10 years?

I think if one were to graph the progress of technology, the trend line would look pretty linear — except for a massive dip around 2014-2022.

Google searches got better and better until they suddenly started getting worse and worse. Websites started getting better and better until they suddenly got worse. Same goes for content, connection, services, developer experience, prices, etc.

I struggle to see LLMs as a major revolution, or any sort of step function change, but very easily see them as a (temporary) (partial) reset to trendline.


Nah. It's just they are upselling us AI so aggressively it doesn't pass the sniff test anymore.


No, your parents spoke out of ignorance and resistance towards any sort of change, I'm speaking from years of experience of both trying to use the technology productively, as well as spending a significant portion of my life in the digital world that has been impacted by it. I remember being mesmerized by GPT-3 before ChatGPT was even a thing.

The only thing that has been revolutionized over the past few years is the amount of time I now waste looking at Cloudflare turnstile and dredging through the ocean of shit that has flooded the open web to find information that is actually reliable.

2 years ago I could still search for information (let's say plumbing-related), but we're now at a point where I'll end up on a bunch of professional and traditionally trustworthy sources, only to realize after a few seconds that it's just LLM-generated slop regurgitating the same incorrect information that was already provided to me by an LLM a few minutes prior. It sounds reasonable, it sounds authoritative, most people would accept it, but I know that it's wrong. Where do I go? Soon the answer is probably going to have to be "the library" again.

All the while less perceptive people like yourself apparently don't even seem to realize just how bad the quality of information you're consuming has become, so you cheer it on while labeling us stubborn, resistant to change, or even luddites.


Personally, I have three use cases for AI:

1. Image upscaling. I am decorating my house and AI allowed me to get huge prints from tiny shitty pictures. It's not perfect, but it works.

2. Conversational partner. It's a different question whether it's a good or a bad thing, but I can spend hours talking to Claude about things in general. He's expensive though.

3. Learning the basics of something. I'm trying to install LED strips and ChatGPT taught me the basics of how that's supposed to work. Also, ChatGPT suggested what plants might survive in my living room and how to take care of them (we'll see if that works, though).

And this is just my personal use case, I'm sure there are more. My point is, you're wrong.

> All the while less perceptive people like yourself apparently don't even seem to realize just how bad the quality of information you're consuming has become, so you cheer it on while labeling us stubborn, resistant to change, or even luddites.

Literally same shit my parents would say while I was cross-checking multiple websites for information and they were watching the only TV channel that our antenna would pick up.


> Conversational partner

This is the ai holy grail. When tech companies can get users to think of the ai as a friend ( -> best friend -> only friend -> lover ) and be loyal to it it will make the monetisation possibilities of the ad fuelled outrage engagement of the past 10 years look silly.

Scary that that is the endgame for “social” media.


People were already willing to do that with Eliza. When you combine LLMs with a bit of persistent storage, WOOF. It's gonna be extremely nasty.

Gaslight reality, coming right up, at scale. Only costs like ten degrees of global warming and the death of the world as we know it. But WOW, the opportunities for massed social control!


> [...] My point is, you're wrong.

Image upscaling is not an LLM technology, using current-gen LLMs as conversational partners is highly undesirable for many reasons, and learning the basics of things IS indeed useful, but it doesn't even begin to offset the productivity losses that LLMs have caused by decimating what was left of the signal-to-noise ratio on the internet.

You haven't even tried to address my chief concern about QUALITY of information at all. I'm perfectly aware that you can ask ChatGPT to do anything, you can ask it to plan your wedding, you can ask it do decorate your house, you can ask if two medications are safe to consume together, you can ask it for relationship advice, you can ask it if your dating profile looks appealing, you can ask it to help diagnose you with a medical conditions, you can ask it to analyze a spreadsheet.

It's going to come back with an answer for all of those, but if you're someone who cares about correctness, quality, and anything that's actually real, you'll have a sinking feeling in your gut doubting the answer you received. Does it actually understand anything about human relationships, or is it giving you relationship advice based on a million Reddit threads it was trained on? Does it actually understand anything about anything, or are you just getting the statistically likely answer based on terabytes of casual human conversation with all of their misunderstandings, myths, falsehoods, lies, and confident incompetence? Is it just telling me what I want to hear?

> Literally same shit my parents would say while I was cross-checking multiple websites for information and they were watching the only TV channel that our antenna would pick up.

Interesting analogy, because I am the one who's still trying to cross-check multiple websites of information while you blissfully watch your only available TV channel.


<< 1. Image upscaling. I am decorating my house and AI allowed me to get huge prints from tiny shitty pictures. It's not perfect, but it works.

I have a buddy who made me realize how awesome FSR4 is [1]. This is likely one of the best real-world uses so far. Granted, that is not an LLM, but it is great at that.

[1] https://overclock3d.net/news/software/what-you-need-to-know-...
[2] https://www.pcgamesn.com/amd/fsr-fidelity-fx-super-resolutio...


From my perspective, your argument is:

- AI gives me huge, mediocre prints of my own shitty pictures to fill up my house with

- AI means I don’t have to talk to other people

- AI means I can learn things online that previously I could have learned online (not sure what has changed here!)

- People who cross-check multiple websites for information have a limited perspective compared to relying on a couple of AI channels

Overall, doesn’t your evidence support the point that AI is reducing the quality of your information diet?

You paint a picture that looks exactly like the 21st century version of an elderly couple with just a few TV channels available: a few familiar channels of information, but better now because we can make sure they only show what we want them to show, little contact with other people.


The internet was at least (and is) a promise of many wondrous things: video call your loved ones, talk in message boards, read an encyclopedia, download any book, watch any concert, find any scientific paper, etc etc; even though it has been for the last 15 years cannibalised by the cancerous mix of surveillance capitalism and algorithmic social media.

LLMs are from the get-go a bad idea, a bullshit generating machine.


While the "move fast and break things" rushed embrace of anything AI reminds me of young wild children, who are blissfully unaware of any danger while their responsible parents try to keep them safe. It is lovely if children can believe in magic, but part of growing up involves facing reality and making responsible choices.


Right, the same “responsible parents” who don’t know what to press so their phone plays YouTube video or don’t know how that “juicy milfs in your area” banner got in their internet explorer.



