>Slower progress in microprocessors is a good point, but specialized hardware like TPUs could overcome some limitations. Also, slower does not mean stop. When we talk about decades into the future, even a 20% average annual improvement amounts to 3 orders of magnitude in 40 years.
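As a quick sanity check on that compounding claim, here's a minimal Python sketch; the 20% rate and 40-year horizon are just the figures from the quoted comment, not a forecast:

    # Compound a 20% average annual improvement over 40 years and
    # express the cumulative gain in orders of magnitude (base 10).
    import math

    annual_rate = 0.20   # 20% improvement per year (figure from the comment above)
    years = 40

    total_factor = (1 + annual_rate) ** years
    orders_of_magnitude = math.log10(total_factor)

    print(f"cumulative factor:   ~{total_factor:,.0f}x")      # ~1,470x
    print(f"orders of magnitude: ~{orders_of_magnitude:.1f}")  # ~3.2

1.2^40 comes out to roughly 1,470x, a bit over three orders of magnitude, so the quoted figure holds up.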
My understanding is that there are physical limits, and we will asymptotically approach those physical limits. So while progress will never stop, each year progress will be less. (You are right that we don't know where it will stop. We could have more breakthroughs ahead of us. Even without compute density breakthroughs, there are breakthroughs in compute type, like the TPU you mention. But there are physical limits, and it seems that we are approaching them.)
>Let me ask you (or anyone else) a question: What would be a minimum demonstrated capability of an AI that starts to worry you?
I lived through the end of the 20th century, and have studied a lot of 20th century history. I'm having a really difficult time imagining anything that could be worse than humanity at its worst.
Really, I'm far more concerned with the renewed stirrings of nationalism than anything else. We've got the technology to end humanity already; we've had it for about half a century now. The thing is, the last time nationalism was a real force, we didn't have that technology.
That is what scares me. If we combine early 20th century politics with late 20th century weapons, civilization will end. Humanity will end.
> But there are physical limits, and it seems that we are approaching them.)
Unless you believe in something supernatural providing extra computational capacity, we know that the physical limits allow the manufacture of a sub-20W computational device in about 1.5kg of matter with the capacity of a human brain.
It may be impossible to reach the same using silicon, but we know it is possible to get there somehow within those constraints. We know the processes involved are tremendously inefficient and full of redundancies, because the cost of failure is immensely high, and because the manufacturing environment is incredibly messy and error prone.
It may be reasonable to think we won't get there in the next couple of decades, but unless it's fundamentally impossible (because our brains are mere projections of something more complex "outside" our universe), it's a question of when, not if.
> I lived through the end of the 20th century, and have studied a lot of 20th century history. I'm having a really difficult time imagining anything that could be worse than humanity at its worst.
That misses the point of the question. It's not "how horrible can an AI get?" It's "at what point is it too late to stop them if they're going the wrong direction?"
The problem is that by the time you realise you were dealing with a human-level AGI, it may already be far too late - it may have cloned itself many times over, and found multiple optimisations, each making the next step up easier.
As for something worse, even the worst human regimes have fundamentally depended on humans for survival. Their own survival has been a strong motivation for trying to avoid the most extreme reactions. An AGI has potentially entirely different parameters - it doesn't have the same needs in order to survive. Put another way: imagine an AGI caring as little about our survival as we have cared about that of other species. Now imagine if humanity didn't itself need to breathe the air, didn't need clean water, and so on.
>Unless you believe in something supernatural providing extra computational capacity, we know that the physical limits allow the manufacture of a sub-20W computational device in about 1.5kg of matter with the capacity of a human brain.
(I point out that there is a lot of evidence that the size of a human's brain has little to do with that human's IQ. It's not at all clear that you could scale something that works on whatever principles the human brain works on by adding more neurons, the way you can scale a transistor-based computer by adding more transistors.)
If the self-improving AI stops when it reaches human intelligence, that's amazing, but it's not the singularity.
What I'm saying is that the 'hard takeoff' theory is wrong because making a thing that can make smarter copies of itself... isn't going to mean you get something infinitely smart. It means that you get something that bumps up against the limits of whatever technologies it can use or discover.
Yeah, maybe we'll come up with some new tech that makes slightly-better-than-human brains possible, and those brains will come up with a technology that makes god-brains possible... but there's no evidence at all that what we'd perceive as a god-level intelligence is even possible.
We have no idea how close to the theoretical limit the brain is.
>It may be impossible to reach the same using silicon, but we know it is possible to get there somehow within those constraints.
To get to a human-level intelligence, sure. I'm not saying that AGI is completely impossible, just that a hard takeoff to god-intelligence AGI before we wipe ourselves out or die from other causes is... unlikely.
Really, my most important point is that human-level AGI does not necessarily lead to that AI creating smarter copies of itself until we get to the singularity, /because physical limits exist/.
(I mean, I guess then you are placing your bets on where those physical limits are)