
> considering the scale of the matter, i.e. human extinction.

There is literally no evidence that this is the scale of the matter. Has AI ever caused anything to go extinct? Where did this hypothesis (and that's all it is) come from? Terminator movies?

It's very frustrating watching experts and the literal founder of LessWrong reacting to pure make-believe. There is no discernible/convincing path from GPT4 -> human extinction. What am I missing here?



Nuclear bombs have also never caused anything to go extinct. That's no reason not to be cautious.

The path is pretty clear to me. An AI that can create an improved version of itself will cause an intelligence explosion; that follows almost by definition, though the process could plateau at some point due to physical limitations or the like. The situation then becomes: at some point, this AI will be smarter than us. And so, if it decides that we are in the way for one reason or another, it can get rid of us, and we would have about as much chance of stopping it as chimpanzees would have of stopping us if we decided to kill them off.
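To make the growth-versus-plateau point concrete, here's a toy sketch (the per-generation gain and the physical cap are made-up numbers for illustration, not claims about any real system):

  # Toy model: each generation builds a successor ~20% more capable.
  # With no cap, capability compounds exponentially; with a cap,
  # gains shrink as capability approaches the physical limit.
  def run(generations, cap=None):
      capability = 1.0
      history = [capability]
      for _ in range(generations):
          gain = 0.2 * capability
          if cap is not None:
              gain *= max(0.0, 1 - capability / cap)  # diminishing returns near the cap
          capability += gain
          history.append(capability)
      return history

  print(run(30)[-1])            # uncapped: roughly 237x the starting capability
  print(run(30, cap=50.0)[-1])  # capped: growth slows and levels off near the cap

Either way, the compounding phase is the worrying part, whether or not it eventually flattens out.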

We do not, I think, have such a thing yet, but it doesn't feel far off given the coding capabilities GPT4 already has.


So what would be the path for GPT5 or GPT6 to create an improved version of itself? It's not enough to generate working code; it has to come up with a better architecture or better training data.


The idea is that a model might already be smarter than us, or at the very least have a very different thought process from ours, and then do something like improving itself. The problem is that it's impossible for us to predict the exact path, because it would be thought up by an entity whose thinking we don't really understand and can't predict.


I understand the idea of a self-improving intelligence, but unless there's a path for it to do so, it's just a thought experiment. The other poster who replied to you has a better idea: civilization itself can be thought of as the intelligence that is improving itself. Instead of worrying about some emergent AGI inside of civilization, we can think of civilization itself as an ASI that already exists. Anything that emerges inside it will be eclipsed and kept in check by the existing superintelligence of the entire world.


I think "LLM builds better LLM" is drawing the border in the wrong place. Technical progress has been accelerating for centuries; it's pretty self-evident that technological civilization is improving upon itself.


But GPT4 can’t access or change its model weights… so crisis averted?


> Has AI ever caused anything to go extinct?

We know from human history that intelligence tends to cause extinctions.

AI just hasn't been around long enough, nor been intelligent enough yet.

Though, if you count corporations as artificial intelligences, as some suggest, then yes, AIs have in fact already contributed to extinctions.


… This is literally illogical reasoning. If we redefine AI to mean something it has never been defined as… unfortunately, logic has left the chat at that point.



