
Shallow take. LLMs are like food for thought -- the right use in the right amounts is empowering, but too much (or uncritical use) and you get fat and lazy, metaphorically speaking.

You wouldn't go around crusading against food because you're obese.

Another neat analogy is to children who are too dependent on their parents. Parents are great and definitely help a child learn and grow, but children who rely on their parents for everything, rather than trying to explore their limits, end up being weak humans.



> You wouldn't go around crusading against food because you're obese.

Many eateries I step into fill me with revulsion at the temples to sugary carbohydrates they've become.

> about 40.3% of US adults aged 20 and older were obese between 2021 and 2023

Pray your analogy to food does not hold, or else we're on track for 40% of Americans to acquire mental disabilities.


Oh, we for sure are, because much like America's social structure pushes people to obesity with overwork and constant stress, that same social structure will push people to use AI blindly to keep up with brutal quotas set by their employers.


I'm quite strongly anti-AI, but surely the sudden shift to pushing agentic AI means less direct interaction with the models required?

From what I see of the breathless hype, treating it like a member of the team is what they want, instead of it just being a conversational UX for contextual queries.


Shallow take.

Your analogies only work if you don't take into account that there are different degrees of utility/quality/usefulness of the product.

People absolutely crusade against dangerous food, or even just food that has no nutritional benefit.

The parent analogy also only holds up on your happy path.


You've just framed your own argument. In order to be intellectually consistent, you can't crusade against AI in general, but rather bad uses of AI, which (even as an AI supporter) is all I've asked anti-AI folks to do all along.


Computing and AI should be used to do what is humanly impossible, like calculating all the possible paths a hurricane might follow or sequencing the human genome. That would be healthy, cooked food (which wasn't possible until we harnessed fire). If it's just making you lazy and taking away the important part of the thought process, it's junk food. And yeah, LLMs unleashed on the broader population are totally going to make us lazier overall. They will act as "rocket fuel" for the ones who already want to learn and improve, and will tank all the rest, the "normal" people.


I'm aware of my own perspective; I don't generally crusade against whatever flavour of machine learning is currently being pushed.

I was just pointing out that arguing against crusading by using an argument (or analogies) that leaves out half of the salient context could be considered disingenuous.

The difference between:

"You're using it incorrectly"

vs.

"Of the ones that are fit for a particular purpose, they can work well if used correctly."

Perhaps I'm just nitpicking.



