Hacker News | vouaobrasil's comments

Hyping up AGI is a good way for tech companies to distract people into thinking AI is actually not that big a deal, when it is. It may not be in terms of its pure reasoning or in the goal of reaching AGI, but it is very disruptive, and it's a guaranteed way to heavily reinforce the requirements of using big tech in daily life, without actually improving it.

Yes, it may not be AGI and AGI may not come any time soon, but by focusing on that question, people become distracted and don't have as much time to think about how parasitic big tech really is. If it's not a strategy used consciously, it's rather serendipitous for them that the question has come about.


> Hyping up AGI is a good way for tech companies to distract people into thinking AI is actually not that big a deal, when it is.

I'm not sure what you're trying to say. Most people don't know the difference between AI and AGI. It's all hype making people think it's a big deal.

I have family who can't help but constantly text about AI this and AI that: how dangerous it might be, or how it might revolutionize something or other.


Not to mention all the people on HN arguing we’re close to AGI because LLMs sound like humans and can “think”. “What’s the difference?” they ask, not in curiosity but after already making a strong claim. I assume it’s the same people that probably skipped every non engineering class in college because of those “useless” liberal arts requirements.


I’ve done engineering in college, but I’ve been dabbling in art since I was young, and philosophy of science is much more attractive to me than actual science. I agree with you that a lot of takes that AI is great, while internally consistent, are very reductive when it comes to technology usage by humans.


AI is only great when you narrowly define the problem in terms of efficient production of a narrowly-defined thing. And usually, production at that level of efficiency is a bad thing.


"Does AI boost productivity" should not even be the question. The real question should be, "how does AI affect the overall satisfaction and joy that one has in work?" Because in the long term, a lack of satisfaction can lead to faster burnout and dissatisfaction with life.


AI could make me more productive, I know that for a fact. But, I don't want to be more productive because the tasks that could be automated with AI are those I find enjoyable. Not always in an intellectual sense, but in a meditative sense. And if I automated those away, I think I would become less human.


Local hard drive, without version control. Works well enough for me.


How about stop investing in AI and stop using it? If you want to do something about it, stop being spineless and adopt a unilateral policy against AI.


So, someone still has to orchestrate AI, right? But that doesn't negate that a large majority of people will be replaced. Of course, there will always be one or two that won't. And what about in 15 years? Because the direction in which we are heading is rather inevitable unless AI is stopped.


Market displacement is nothing new. It happened in the 2000s (dotcom bubble), happened in the 2010s (cloud infra), happened again in 2020 (service workers being funneled into tech), and it's happening again now (AI is replacing those who fail to adapt to market changes in tech).

The one thing that has kept me viable as an employee over my 15 years in tech is that I literally don't want to do the same thing I did yesterday 1000 times. I want to do it as few times as possible before I automate the problem away, so I can move on to something new. There will always be something new. There will always be someone with a dream and no skills, for me to step in and help out.

I fail to see the problem.


> I want to do it as few times as possible before I automate the problem away, so I can move on to something new. There will always be something new. There will always be someone with a dream and no skills, for me to step in and help out.

> I fail to see the problem.

You fail to see the problem for YOU. Others may not have a job as flexible, but of course you were only thinking of yourself.


I’m not special and anyone can do what I’m doing. Civilization has been advancing technology since people stood upright. To stand still and not expect change is just ignorance. I can’t fix flawed people, I can only march forward.


I don't think it's the right path, and I think marching forward with innovation is destructive. People who can't adapt to AI aren't flawed, just as people who can't do math aren't flawed even though I can. The true ignorance is thinking that what you are doing does any good in the world.


I think a lot of what you’re pointing this thread towards boils down to philosophical beliefs. Objectively, throughout history there have been people resistant to technological advancement, and those people have more often than not ended up on the losing side of history.

I’ll throw you something I believe that we might agree on, though. I don’t think the Colossus data center Musk set up in Tennessee is good for anyone. Those generators he’s been running are abhorrent, and the guy needs a realignment of his neurons through some percussive maintenance, but alas that’s probably illegal because he’s too much of a chump to accept a boxing match.


Well, at some point we will need to restructure society and start looking at sustainability at a stable point rather than growth, because we are putting a lot of pressure on the natural environment with all this growth. We can do it now when we have some breathing room or later when we're forced by nature. I think a reduction in growth is a good thing.


If you've got enough wealth to put a down payment on even a $1M home, there's not really much sympathy here for this "madness", because you could just move to a cheaper area and quit whatever job requires being in the SF area.


Yes, good point. Even a healthy correction won't restore affordability. A nice start would be for those with this kind of wealth to stop overpaying just because they have some deep-seated desire to live in SF.


Like the idea but I'm not about to create a Tumblr account.


I make all my YouTube videos and for that matter, everything I do AI free. I hate AI.


Once your video is out in the wild, there’s as yet no reliable way to discern whether it was AI-generated or not. All content posted to public forums will have this problem.

Training future models without experiencing signal collapse will thus require either 1) paying for novel content to be generated (they will never do this as they aren’t even licensing the content they are currently training on), 2) using something like mTurk to identify AI content in data sets prior to training (probably won’t scale), or 3) going after private sources of data via automated infiltration of private forums such as Discord servers, WhatsApp groups, and eventually private conversations.


There is the web of trust. If you really trust a person to say that their stuff isn't AI, then that's probably the most reliable way of knowing. For example, I have a few friends and I know their stuff isn't AI edited because they hate it too. Of course, there is no 100% certainty but it's as certain as knowing that they're your friend at least.
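The friend-attestation idea above can be sketched in code. This is a simplified, hypothetical illustration, not anything the commenter proposes concretely: a real web of trust would use public-key signatures (e.g. PGP or ed25519), whereas this sketch uses an HMAC with a secret assumed to have been exchanged with the friend out of band, just to keep the example in the Python standard library.

```python
import hmac
import hashlib

def attest(post: bytes, shared_secret: bytes) -> str:
    """Friend's side: tag their post with a key only they (and you) hold."""
    return hmac.new(shared_secret, post, hashlib.sha256).hexdigest()

def verify(post: bytes, tag: str, shared_secret: bytes) -> bool:
    """Your side: check the tag before trusting the content's provenance."""
    expected = hmac.new(shared_secret, post, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels when comparing tags
    return hmac.compare_digest(expected, tag)

secret = b"exchanged-in-person"
post = b"I wrote this myself, no AI involved."
tag = attest(post, secret)

assert verify(post, tag, secret)             # friend's post checks out
assert not verify(b"tampered", tag, secret)  # altered content fails
```

Note that, as the comment says, this only establishes that the content came from someone you trust; it cannot prove the friend didn't use AI, so the scheme is only as reliable as the trust itself.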


But the question is about whether or not AI can continue to be trained on these datasets. How are scrapers going to quantify trust?

E: Never mind, I didn’t read the OP. I had assumed it was to do with identifying sources of uncontaminated content for the purposes of training models.

