
The fact is, the vast majority of people on HN have drunk the AI kool-aid and have no desire to be critical of it or avoid it.


> The thing that I appreciate most is that the company "walks the walk" in terms of distributing the benefits of AI. Cutting edge models aren't reserved for some enterprise-grade tier with an annual agreement. Anybody in the world can jump onto ChatGPT and get an answer, even if they aren't logged in.

I would argue that there are very few benefits of AI, if any at all. What it actually does is create a prisoner's dilemma situation where some use it to become more efficient only because it makes them faster and then others do the same to keep up. But I think everyone would be FAR better off without AI.

Keeping AI free for everyone is akin to keeping an addictive drug free for everyone so that it can be sold in larger quantities later.

One can argue that some technology is beneficial. A mosquito net made of plastic immediately improves one's comfort if out in the woods. But AI doesn't really offer any immediate TRUE improvement of life, only a bit more convenience in a world already saturated in it. It's past the point of diminishing returns for true life improvement, and I think everyone deep down inside knows that but is seduced by the nearly-magical quality of it, because we are instinctually driven to seek out advantages and new information.


"I would argue that there are very few benefits of AI, if any at all."

OK, if you're going to say things like this I'm going to insist you clarify which subset of "AI" you mean.

Presumably you're OK with the last few decades of machine learning algorithms for things like spam detection, search relevance etc.

I'll assume your problem is with the last few years of "generative AI" - a loose term for models that output text and images instead of purely being used for classification.

Are predictive text keyboards on a phone OK (tiny LLMs)? How about translation engines like Google Translate?

Vision LLMs to help with wildlife camera trap analysis? How about helping people with visual impairments navigate the world?

I suspect your problem isn't with "AI", it's with the way specific AI systems are being built and applied. I think we can have much more constructive conversations if we move beyond blanket labeling "AI" as the problem.


1. Here is the subset: any learning-based algorithm that is trained on a large data set and modifies or generates content.

2. I would argue that translation engines have their positives and negatives, but many of the effects are negative, because they lead to translators losing their jobs and to a general loss of the magical qualities of language learning.

3. Predictive text: I think people should not be presented with possible next words, and think of them on their own, because that means they will be more thoughtful in their writing and less automatic. Also, with a higher barrier to writing something, they will probably write less and what they do write will be of greater significance.

4. I am against all LLMs, including wildlife camera trap analysis. There is an overabundance of hiding behind research when we really already know the problem fairly well. It's a fringe piece of conservation research anyway.

5. Visual impairments: one can always appeal to helping the disabled and impaired, but I think the tradeoff is not worth the technological enslavement.

6. My problem is categorically with AI, not with how it is applied, PRECISELY BECAUSE AI cannot be applied in an ethical way, since human beings en masse will inevitably have a sufficient number of bad actors to make the net effect always negative. It's human nature.


I wish your parent comment didn't get downvoted, because this is an important conversation point.

"PRECISELY BECAUSE AI cannot be applied in an ethical way, since human beings en masse will inevitably have a sufficient number of bad actors"

I think this is vibes based on bad headlines and no actual numbers (and tbf, founders/CEOs talking outta their a**). In my real-life experience the advantages of specifically generative AI far outweigh the disadvantages, by a really large margin. I say this as someone academically trained on well-modeled dynamical systems (the opposite of machine learning). My team just lost. Badly.

Case in point: I work with language localization teams that have fully adopted LLM-based translation services (our DeepL.com bills are huge), but we've only hired more translators and are processing more translations faster. It's just... not working out like we were told in the headlines. Doomsday radiologist predictions [1], same thing.

[1]: https://www.nytimes.com/2025/05/14/technology/ai-jobs-radiol...


> I think this (esp the sufficient number of bad actors) is vibes based on bad headlines and no actual numbers. In my real-life experience the advantages of specifically generative AI far outweigh the disadvantages, by a really large margin.

We define bad actors in different ways. I also include people like tech workers and CEOs who build systems that take away large numbers of jobs. I already know people whose jobs have been eroded by AI.

In the real world, lots of people hate AI-generated content. The advantages you speak of accrue only to those technically minded enough to gain greater material advantage from it, and we don't need the rich getting richer. The world doesn't need a bunch of techies getting richer from AI at the expense of people like translators and graphic designers losing their jobs.

And while you may have hired more translators, that is only temporary. Other places have fired them, and you will too once the machine becomes good enough. There will be a small bump of positive effects in the short term but the long term will be primarily bad, and it already is for many.


I think we'll have to wait and see here, because all the layoffs can easily be attributed to leadership making crappy over-hiring decisions during COVID and now, unable to admit to that, giving hand-wavy answers like "I'm firing people because of AI" to drive different headline narratives (see: founders/CEOs talking outta their a**).

It may also be the narrative fed to actual employees: saying "You're losing your job because of AI" is an easy way to direct anger away from your bad business decisions. If a business is shrinking, it's shrinking; AI was inconsequential. If a business is growing, AI can only help. Whether it's growing or shrinking doesn't depend on AI; it depends on the market and leadership decision-making.

You and I both know none of this generative AI is good enough unsupervised (and realistically, it needs deep human edits). But it's still a massive productivity boost, and productivity gains have always been huge economic boosts to the middle class.

Do I wish this tech could also be applied to real middle-class shortages (housing, supply-chain etc.), sure. And I think it will come.


Thanks for this, it's a good answer. I think "generative AI" is the closest term we have to that subset you describe there.


Just to add one final point: I included modification as well as generation of content, since I also want to exclude technologies that simply improve upon existing content in some way that is very close to generative but may not be considered so. For example: audio improvement like echo removal and ML noise removal, which I have already shown to interpolate.

I think AI classification is probably okay, but of course with that, as with all technologies, we should be cautious about how we use it: it can also be used for facial recognition, which in turn can be used to create a stronger police state.


> I would argue that there are very few benefits of AI, if any at all. What it actually does is create a prisoner's dilemma situation where some use it to become more efficient only because it makes them faster and then others do the same to keep up. But I think everyone would be FAR better off without AI.

Personally, my life has significantly improved in meaningful ways with AI. Apart from the obvious work benefits (I'm shipping code ~10x faster than pre-AI), LLMs act as my personal nutritionist, trainer, therapist, research assistant, executive assistant (triaging email, doing SEO-related work, researching purchases, etc.), and a much better/faster way to search for and synthesize information than my old method of using Google.

The benefits I've gotten are much more than conveniences and the only argument I can find that anyone else is worse off because of these benefits is that I don't hire junior developers anymore (at max I was working with 3 for a contracting job). At the same time, though, all of them are also using LLMs in similar ways for similar benefits (and working on their own projects) so I'd argue they're net much better off.


A few programmers being better off does not make an entire society better off. In fact, I'd argue that you shipping code 10x faster just means in the long run that consumerism is being accelerated at a similar rate because that is what most code is used for, eventually.


I spent much of my career working on open source software that helped other engineers ship code 10x faster. Should I feel bad about the impact my work there had on accelerating consumerism?


I don't know if you should feel bad or not, but even I know that I have a role to play in consumerism that I wish I didn't.

That doesn't necessitate feeling bad because the reaction to feel good or bad about something is a side effect of the sort of religious "good and evil" mentality that probably came about due to Christianity or something. But *regardless*, one should at least understand that because our world has reached a sufficient critical mass of complexity, even the things we do that we think are benign or helpful can have negative side effects.

I never claim that we should feel bad about that, but we should understand it and attempt to mitigate it nonetheless. And, where no mitigation is possible, we should also advocate for a better societal structure that will eventually, in years or decades, result in fewer deleterious side effects.


The TV show The Good Place actually dug into this quite a bit. One of the key themes explored in the show was the idea that there is no ethical consumption under capitalism, because eventually the things you consume can be tied back to some grossly unethical situation somewhere in the world.


That theme was primarily explored through the idea that it's impossible to live a truly ethical life in the modern world due to unknowable externalities.

I don't think the takeaway was meant to really be about capitalism but more generally the complexity of the system. That's just me though.


i don't really understand this thought process. all technology has its advantages and drawbacks, and we are currently going through the hype and growing-pains process.

you could just as well argue the internet, phones, tv, cars, all adhere to the exact same prisoner's dilemma situation you talk about. you could just as well use AI to rubber duck or ease your mental load than treat it like some rat-race to efficiency.


True, but it is meaningful to ask whether the quantity (advantages minus drawbacks) decreases over time, which I believe it does.

And we should indeed apply the logic to other inventions: some are more worth using than others, whereas in today's society, we just use all of them due to the mechanisms of the prisoner's dilemma. The Amish, on the other hand, apply deliberation on whether to use certain technologies, which is a far better approach.


hiding from mosquitoes under your net is a negative. the point of going out to the woods is to be bitten by mosquitoes and you've ruined it.

it's impossible to get benefit from the woods if you've brought a bug net, and you should stay out rather than ruining the woods for everyone


Rather myopic and crude take, in my opinion. Because if I bring out a net, it doesn't change the woods for others. If I introduce AI into society, it does change society for others, even those who don't want to use the tool. You have really no conception of subtlety or logic.

If someone says driving at 200mph is unsafe, then your argument is like saying "driving at any speed is unsafe". Fact is, you need to consider the magnitude and speed of the technology's power and movement, which you seem incapable of doing.


[flagged]


Nobody decides, but that doesn't mean we shouldn't discuss and figure out if there is an optimal point.

Edit: And I think you might dislike automobiles if you were one of the people living right next to a tyre factory in Brazil, which outputs an extremely disgusting rubber smell on an almost daily basis. Especially if you bought your house before the factory was built, and you don't drive much.

But you probably live in North America and don't give a darn about that.


I think this is pretty much how many Amish communities function. As for me, I prefer making decisions on how to use technology in my own life on my own.


Of course that makes sense. But with SOME technologies, I would prefer not to use them yet still sort of have to, because some of them become REQUIRED. For example: phones. I would prefer not to have a telephone at all, as I hate them with a passion, but I still want a bank account. That's difficult because my bank requires 2FA, and it's very hard to get out of it.

So, while I agree in principle that it's nice to make decisions on one's own, I think it would also be nice to have the choice to avoid certain technologies that become difficult to avoid due to their entrenchment.


There's nothing wrong with being negative about AI. Even though one can take material advantage of AI, there is also dignity in not using it. I hate AI myself and I think people are foolish to use it so widely.


Why not just not use dyes? I'd be happy to eat weirdly-colored ice cream, whose color is just determined by the most essential ingredients.


I actually reflexively prefer mint ice cream that's slightly off white to dark green now and I think that's because some of the higher quality brands don't use dye so I associate it with them.


Personally, I have made many critical comments on AI. Quite a few have been downvoted. The vast majority of people here, in my opinion, are pro-technology and have a blind spot towards it and have trouble criticizing it deeply. They hold the instrumental view of technology that it is just a tool, that we should learn to use it properly, and that technology generally improves the human condition. And of course, a large proportion of people are blinded by the fact that they make a LOT of money with AI, so of course there is a strong bias towards it.

Yes, some people do hold mild negative view of specific companies or how some technology is used, but the criticism is usually exceptionally mild towards a general class of technologies.

There is very little room for true critical discussion here. I still occasionally comment because I think AI is a net detriment to humanity and to the world and I feel the need to speak about it, but it is absolutely true that there is a very strong pro-tech bias that precludes real discussion.

In my opinion, the vast majority of responses to critical AI content here have been knee-jerk reactions with very little thought put into them.


The trend of AI companions is another step towards humans putting less value on real human connections, which only further erodes real-life communities. Many people (more than before) will succumb to being too deeply immersed in virtual worlds, and it will only increase the pathology of society to a degree not before seen.

These AI companies are incredibly irresponsible and contemptible.


Yea. I don't even know what to do tbh.

I'm right between GenZ and Millennial. There is something crazy going on with GenZ, IMO. It is like pulling teeth to go out and do anything with GenZ friends. Maybe my millennial friends are more bored so it's easier to do stuff, but it just blows my mind. I hate to get all "phone bad," but it seems scrolling and doing absolutely nothing is the default setting for so many people. I feel like this stuff certainly won't make it any better.


I'm also between GenZ and Millennial. I don't have many (any?) GenZ friends, but I feel like I say "no" to social events the most out of my peers (and have for a while). I frankly don't know how to juggle it all: between maintaining important relationships (two partners, calling family regularly, keeping up with close friends), household stuff (cooking, cleaning, laundry, administrative overhead), exercise/my own hobbies (going to the gym once a week), I feel like I barely have time to do... Anything, let alone have downtime to myself.

I have (single) peers and friends who maintain wall-to-wall social calendars, so I've assumed for a while that the difference is just the amount of engagement multiple romantic entanglements takes, but maybe I'm missing something.

I'd love to "do nothing" much more than I can (read a book, work on a project, tidy my basement, learn a new skill...)


I think it's the opposite. You can't be hyper-social on a thousand hour-long, sushi-packed train rides. Urbanization suppresses real human connections. That creates demand for mock/fake environments that simulate "parking lot" environments where it would not be too inconsiderate to be social. All the successful examples of this parasocial content eventually grow communities around them.

(Not that I think this particular one goes anywhere, no way.)


That's a wild impression of what living in a city is like


Sorry, but you just made my point. Urban train rides are already an effect of advancing technology. Being packed like sushi simply wouldn't happen if we hadn't become addicted to advancing technology. That's a central feature of technological society, one that only emerged around the 19th century. AI is the next stage and the apex of this process, which alienates people from each other.


someone who has never lived in a walkable city..


Maybe it's not such a bad thing. People who go outside will be more intentionally looking to socialize and you'll feel less pressure to keep to yourself.


Actually, it is a bad thing. People who have a slight difficulty with socializing, probably because of too much screen time, will be sucked into virtual worlds with AI companions and not have the chance to learn to live a fulfilling life. Not sure what you mean about "pressure to keep to yourself," because I think a normal person shouldn't feel any pressure when going outside, whether to socialize or not.


>Not sure what you mean about "pressure to keep to yourself" because I think a normal person shouldn't feel any pressure when going outside, whether to socialize or not.

Actually that's exactly what I'm talking about. If you don't want people to feel pressure to socialize then you need to create a social norm of keeping to yourself. It would be nice for that to go away.


> If you don't want people to feel pressure to socialize then you need to create a social norm of keeping to yourself.

This is the norm for any "outside locale" which does not assume socialization. For example, taking a walk on neighborhood sidewalks, visiting to a park, going to a local library.

> It would be nice for that to go away.

Pressure such as this originates from within. So to help that pressure go away, one must first embrace that socialization is a choice.


Alternatively these AI companions are the operating system for the fleshlights of men who won't figure out and get rid of the ick they've got.

If you've got beliefs and you insist on mouthing off on things women find offensive, they won't touch you. It's not a mystery.


It's not an alternative. It's a very tiny effect affecting a small subpopulation. The issue of changing the entire social structure to further eradicate community is vastly larger and more important than a few weird guys, sorry to say.


It's not that complex. Women decided to say no to the ick. Men have decided the manosphere is more important to them than making an attempt. It's easier to complain than it is to change. Too bad so sad.


> the ick

An ever changing and contradictory anti-concept.


I wonder if we'll ever stop to ask ourselves if faster and faster output of software is actually a good thing for the world. Or will we just continue because it's just what we do nowadays in civilization to get ahead?


A lot of people are asking that question, and the answer is emphatically yes. All improvements to the human condition are rooted in technology, and software is technology. Who's to say the latest advancements aren't some tech-tree precursor to curing an ailment impacting millions; how could you argue against that? The genie is out of the bottle.


> how could you argue against that?

I would argue against it if the downside is even more technological enslavement for billions.

And while many improvements to the human condition are rooted in technology, many of the problems of humanity are rooted in it as well. There might very well be an optimal point that we've already passed.


ahead of what? ahead of generating ugly and mostly unusable masks over the same data. I'm in favor of AI, but it seems to me that no one has really stopped to ask what real problems people have and how to actually fix them


I think so but not for the reason that you think.

See, most closed-source software really just pisses me off for ideological reasons. I just like to tinker with things, and having the possibility to do so by being provided the source code really helps my mind feel happy, I guess.

So I "vibe coded" a game that I used to play, and some projects that I was curious about and just wanted to tinker with. Sure, the game and code have bugs.

Also, with the help of AI, I feel like I can tinker with things I don't know much about and get a decent distance ahead. You might think I am an AI advocate reading this comment, but quite the contrary; I personally think this is the only area where AI has helped quite substantially.

But at what cost? The job market has sunk into a large hole, and nobody's hiring junior devs because everybody feels better making AI deals than hiring junior devs.

My hunch is that senior devs are extremely in demand, are paid decently, and so will on average retire early too. Then there will be a huge gap between seniors and juniors: nobody's hiring junior engineers now, so who will become the senior engineers if nobody got hired in the first place? I really hope most companies realize that the AI game is quite a funny game. Most companies are too invested in it to see that open-source AI will catch up, that there is just no moat with AI, and that building with AI isn't as meaningfully significant as they think, as shown by recent studies.


> that senior devs are extremely in demand

Is this true? I am not seeing salaries rising, the demand seems to be met. But maybe I'm wrong.


Sorry, I guess I may have been incorrect in that regard. I actually just meant in comparison to juniors. I'm not sure about salaries rising either, but from what I've heard, seniors are taking on more and more responsibility since juniors aren't getting hired, so I assumed they were being compensated more. I'm pretty sure I heard that somewhere and just repeated it.

Also, maybe I felt this way because of the $100 million offers and the $30 billion acquisition by Zuckerberg, I guess.

I might ask AI (oh, the irony); here is the chat https://chatgpt.com/share/68756188-d374-8011-9f23-6860d6b1db... and here is one of the major sources of this, I suppose:

https://www.hackerrank.com/blog/senior-hiring-is-surging-wil...

And I would like to quote a part from the HackerRank post: "Taken in isolation, this might suggest a cautious but healthy rebound. But viewed through a 2025 lens, a deeper pattern emerges: teams are leaning hard into experience, and leaving early-career talent behind."


Same thing with LinkedIn for me specifically. I heard that it could be a great way to get a job. But then I tried it, was inundated with AI slop, and contacted by recruiters from companies that sounded dreadfully boring. And when it came to nearly 100% of the people I encountered there who were actually active, I'd rather freeze to death than work with them.


The common way to use LinkedIn is to create a profile, keep it updated, and then ignore the platform until you need to use the job search function.

The majority of people on LinkedIn aren't posting and interacting with things all day.

To be honest, most people I know consider being highly active (posting) on LinkedIn to be a potential warning sign for a hire, because it's associated with people who will spend more time hustling on LinkedIn for their next job than working for your company.


> The common way to use LinkedIn is to create a profile, keep it updated, and then ignore the platform until you need to use the job search function.

Personally, I've had better luck finding jobs in other ways (searching companies websites, using more old-school platforms like Indeed), compared to LinkedIn.


I check LinkedIn to see who is the next person to be laid off from my previous employer. They post a picture of their badge and write an overly positive thank-you to the company for the opportunity. They laid you off after 30 years in a group video call where you couldn't speak; don't thank them.


I haven't posted to (or even logged into) LinkedIn since I was 17 (~15 years ago), and although I've never been hired by one, even the Y Combinator startups that demand it don't seem to mind and still interview me.


linkedin is good at:

- publicly hosting a resume in an "industry standard" format that can easily be shared

- playing queens each day

linkedin is bad at:

- being a social media platform

- accurately capturing real-world relationships/networks/skill assessments

- everything else


Active LinkedIn posters give off strong vibes of that one person at work who seems to have their hands in everything but actually accomplishes nothing except tooting their own horn.

In a general sense I'd love to hear about what people do, and see where the people I've worked with work now. LinkedIn should be the place I can do that, right? And yet LinkedIn feels like such an artificial place that I want nothing to do with the most active posters, at least as far as working with them goes.


I don't know how it is lately, but 4-5 years ago when I used LinkedIn to land a job, a recruiter sent me to a JavaScript interview instead of Java. Not sure what went wrong on their side, but it was hilarious and a bit irritating.


Use it to keep in contact with peers from previous jobs. When they see you're looking they'll refer you internally, and that is 1000% more likely to land a job than submitting applications cold.


No thanks, I'd rather not use it at all. If people can't be bothered to keep in touch via email or something less disgusting than LinkedIn, I'm not interested in them anyway.


I saw one on LinkedIn that was so egregious a lie I just stopped and paused.

A guy claiming he vibe coded "30 startups" in a weekend (yes, 30, with some becoming instantly profitable/revenue-generating) then went on to shame actual developers as "cooked" and "over" if they were taking their time to build something.

Did he have any evidence or links to these 30 apps? Of course not. But that didn't stop him from lying and peacocking about it on LinkedIn.

This is what scares me the most about social media. We've lost all sense of judgment of what makes someone a professional or expert and have delegated that to likes, comments, and subscribes. Everything is just a "hype" game now, and real honesty and truth are met with "eh, got anything that can entertain me?"


> We've lost all sense of judgment of what makes someone a professional or expert [...]

Well you see, everybody is equal, especially in the age of AI which is the great cognitive equalizer (in the same way gunpowder and firearms equalized one's capacity for violence). There is no such thing as individual expertise now; expertise is soon to become the exclusive domain of AI.

So yes, in the future, the only thing that will matter is hype, marketing, and how good of an "ideas guy" you are. Surface appearances don't need to have anything real underneath - that can just be synthesized and backfilled as needed, on demand, by the AI.


> Well you see, everybody is equal, especially in the age of AI which is the great cognitive equalizer (in the same way gunpowder and firearms equalized one's capacity for violence). There is no such thing as individual expertise now; expertise is soon to become the exclusive domain of AI.

That is right, and it's a terrible shame, because society has been built upon the uniqueness of individuals. The loss of that will be a very big psychological hit to human beings and will make life seem much more meaningless. In reality, it's demeaning to be an orchestrator of AI; after the novelty wears off we will all just be pulling levers, having lost the uniqueness of our culture and spirit.


If not using AI means becoming irrelevant, I am happy to be irrelevant. Because that's a value of the modern technological system, and it seems to me that any trait negative for that system is actually a positive these days.


> The youth is not ready.

Nobody is ready, and ever will be. Like it or not, we thrive on the scarcity of information. But our instinct to collect it has overpowered that scarcity in a big way, and that will lead to a high degree of neurosis no matter who you are.


Yeah I think we often point to the youth because we often implicitly value them more than others, but I've seen seniors more addicted to Tiktok than any kid I've met. In some ways kids have more adaptive power than older generations when confronted with new technology.

