
Yeah, deal with novel situations better than humans. In a world where we are approaching the physical limits of integrated circuit feature size? I like my chances.

Personally, I find it amazing that humanity survived the Cold War. The fact we did not self-immolate goes against everything I believe about humanity and human nature. The fact that we made it through that gives me a lot of hope for the future.

But the fission bombs were expensive, centralized weapons. There were only a small number of men who could have pushed the button.

The problem I see now isn't general AI; the problem I think you should fear is that we have very powerful tools that are very cheap. Take the deepfakes thing: instead of making creepy porn, what if people used that technology to cause chaos? Made targeted ads/videos of leaders saying nutty things? I think we could adapt to deepfake-style attacks fairly quickly; someone would have to use them decisively while they're new. But things like that are being invented all the time.



> Yeah, deal with novel situations better than humans. In a world where we are approaching the physical limits of integrated circuit feature size? I like my chances.

Why do you think this is relevant?

Consider that a) the human brain does intelligence within a small package made out of structures that are large compared to what silicon fabs deal with, and b) we've already beaten the brain in straight compute density by many orders of magnitude. The answer doesn't seem to be "more compute"; it seems to lie in what we're doing with that computing power.

> The fact we did not self-immolate goes against everything I believe about humanity and human nature. The fact that we made it through that gives me a lot of hope for the future.

Back then, we learned something new about the nature of human societies. I don't believe the concept of MAD was known before nuclear weapons. Individual human nature didn't change, nor did the social kind; it's just that we already know a lot about the former, but we're still discovering the latter.


>b) we've already beaten the brain in straight compute density by many orders of magnitude.

citation needed.

Computers are better at arithmetic, sure, but computers can only do things that you can map to arithmetic. I'm given to understand that we don't understand how the brain works well enough to simulate even a very small one. Not because we lack the compute power, but because we don't know what to simulate.

I think there's a lot of evidence that brains are doing something powerful (that we don't yet fully understand) that computers are unable to do. It's quite possible that brains are doing something that can't be done in a practical way with transistors.

In short, I don't know of any solid evidence that the brain is only a Turing machine. I mean, brains can be used as a Turing machine, but a brain is terrible at that, and can't compete with even really primitive purpose-built Turing machines.

I mean, I'm not saying that it is theoretically impossible to map what a brain does onto sufficiently powerful silicon, or that we won't figure out, at some point in the future, whatever non-arithmetic primitives the brain uses... just that, as far as I can tell, we haven't yet, and that means it's likely that a brain is still more powerful in some ways than even our largest computers. (Obviously, even a small computer is more powerful when the problem maps cleanly to mathematical primitives.)

>Back then, we learned something new about the nature of human societies. I don't believe the concept of MAD was known before nuclear weapons. Individual human nature didn't change, nor did the social kind; it's just that we already know a lot about the former, but we're still discovering the latter.

What if we just got lucky?


Slower progress in microprocessors is a good point, but specialized hardware like TPUs could overcome some limitations. Also, slower does not mean stopped. When we talk about decades into the future, even a 20% average annual improvement amounts to three orders of magnitude in 40 years.
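
As a quick sanity check on that arithmetic (a throwaway Python snippet; the 20%/year rate is just the assumption above, not a forecast):

  # Compound a 20% average annual improvement over 40 years
  factor = 1.20 ** 40
  print(round(factor))  # ~1470, i.e. roughly three orders of magnitude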

Malicious humans are in general easier to deal with since we know most of them are subject to ingrained psychological tendencies we have learned much about over the course of history and they depend on other humans to pull off something as catastrophic as a world war.

A general AI would have certain immediate advantages over humans:

  - Immensely larger communication bandwidth
  - Broader knowledge than any single group of humans
  - Faster thinking speed (at least by enlisting other processors)
Let me ask you (or anyone else) a question: What would be a minimum demonstrated capability of an AI that starts to worry you?


>Slower progress in microprocessors is a good point, but specialized hardware like TPUs could overcome some limitations. Also, slower does not mean stopped. When we talk about decades into the future, even a 20% average annual improvement amounts to three orders of magnitude in 40 years.

My understanding is that there are physical limits, and we will asymptotically approach those physical limits. So while progress will never stop, each year progress will be smaller. (You are right that we don't know where it will stop; we could have more breakthroughs in us. Even without compute-density breakthroughs, there are breakthroughs in compute type, like the TPU you mention. But there are physical limits, and it seems that we are approaching them.)
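
Here's a toy model of what I mean by asymptotic approach (Python; the 1000x ceiling and the 20% gap-closing rate are made-up numbers, purely illustrative):

  # Toy model: each year closes 20% of the remaining gap to a hard limit,
  # so yearly gains shrink toward zero even though progress never fully stops.
  limit = 1000.0  # hypothetical hard physical ceiling (arbitrary units)
  perf = 1.0
  for year in range(1, 41):
      gain = 0.2 * (limit - perf)
      perf += gain
      if year % 10 == 0:
          print(year, round(perf, 1), round(gain, 2))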

>Let me ask you (or anyone else) a question: What would be a minimum demonstrated capability of an AI that starts to worry you?

I lived through the end of the 20th century, and have studied a lot of 20th century history. I'm having a really difficult time imagining anything that could be worse than humanity at its worst.

Really, I'm far more concerned with the renewed stirrings of nationalism than anything else. We've got the technology to end humanity already; we've had it for about half a century now. The thing is, the last time nationalism was a real force, we didn't have that technology.

That is what scares me. If we combine early 20th century politics with late 20th century weapons, civilization will end. Humanity will end.


> But there are physical limits, and it seems that we are approaching them.

Unless you believe in something supernatural providing extra computational capacity, we know that the physical limits allow the manufacture of a sub-20W computational device in about 1.5 kg of matter with the capacity of a human brain.

It may be impossible to reach the same using silicon, but we know it is possible to get there somehow within those constraints. We know the processes involved are tremendously inefficient and full of redundancies, because the cost of failure is immensely high, and because the manufacturing environment is incredibly messy and error prone.

It may be reasonable to think we won't get there in the next couple of decades, but unless it's fundamentally impossible (because our brains are mere projections of something more complex "outside" our universe), it's a question of when, not if.

> I lived through the end of the 20th century, and have studied a lot of 20th century history. I'm having a really difficult time imagining anything that could be worse than humanity at its worst.

That misses the point of the question. It's not "how horrible can an AI get?" It's "at what point is it too late to stop them if they're going the wrong direction?"

The problem is that by the time you realise you were dealing with a human-level AGI, it may already be far too late - it may have cloned itself many times over, and found multiple optimisations, each making the next step up easier.

As for something worse, even the worst human regimes have fundamentally depended on humans for survival. Their own survival has been a strong motivation for trying to avoid the most extreme reactions. An AGI has potentially entirely different parameters - it doesn't have the same needs in order to survive. Put another way: imagine an AGI caring as little about our survival as we care about that of other species. Now imagine if humanity didn't itself need to breathe the air, didn't need clean water, and so on.


>Unless you believe in something supernatural providing extra computational capacity, we know that the physical limits allow the manufacture of a sub-20W computational device in about 1.5 kg of matter with the capacity of a human brain.

(I point out that there is a lot of evidence that the size of a human's brain has little to do with that human's IQ. It's not at all clear that you could scale something that works on whatever principles the human brain works on by adding more neurons, the way you can scale a transistor-based computer by adding more transistors.)

If the self-improving AI stops when it reaches human intelligence, that's amazing, but it's not the singularity.

What I'm saying is that the 'hard takeoff' theory is wrong because making a thing that can make smarter copies of itself... isn't going to mean you get something infinitely smart. It means that you get something that bumps up against the limits of whatever technologies it can use or discover.

Yeah, maybe we'll come up with some new tech that makes slightly-better-than-human brains possible, and those brains will come up with a technology that makes god-brains possible... but there's no evidence at all that what we'd perceive as a god-level intelligence is even possible.

We have no idea how close to the theoretical limit the brain is.

>It may be impossible to reach the same using silicon, but we know it is possible to get there somehow within those constraints.

To get to a human-level intelligence, sure. I'm not saying that AGI is completely impossible, just that a hard takeoff to god-intelligence AGI before we wipe ourselves out or die from other causes is... unlikely.

Really, my most important point is that human-level AGI does not necessarily lead to that AI creating smarter copies of itself until we get to the singularity, /because physical limits exist/.

(I mean, I guess then you are placing your bets on where those physical limits are.)



