
Everyone agrees there is some magic smoke necessary for AGI that we haven't figured out yet

Do they? I don’t (with the possible exception that a definition of AGI itself seems to be this magic smoke)



(6) It is easier to move a problem around (for example, by moving the problem to a different part of the overall network architecture) than it is to solve it.

In other words, saying "it's the definition that we don't have, not the implementation" just pushes the magic smoke around.


I don’t understand what you are saying.

I’m saying that I don’t think human intelligence is doing anything more than pattern matching, logical inference and post-hoc justification.

Give me an example of something that falls outside that. There is no magic.


I don't think GP means "magic smoke" as in actual magic, some new physics. It's more like we only have a few potential building blocks (e.g. pattern matching), but we're not sure if we have all of them, and we definitely don't know how to connect them together to create sentience.

(And no, more GPUs running faster DNNs doesn't seem like an answer.)


> to create sentience

The goal is to create AI that can solve any problem a human can solve. Whether sentience is a necessary side effect to that is an open question.


I thought the idea was to create an AI that was way smarter than a human? I mean, if you just have human-level AI, that's not gonna cause the singularity unless there aren't any physical limits preventing it from scaling itself up, and scaling itself up a lot.


A group of people can do much more than a single person can, even with relatively low-bandwidth interconnections (speech, text, diagrams, gestures), coordination problems, and an economy that has to spend resources on members' incentives (not that it counts as waste in the majority opinion of said members, of course).

Even in the off-chance that you can't improve individual AIs beyond top humans, an AI community can be far more intelligent and efficient than humanity, because it's possible to remove the limitations I listed above. An AI society can function as a very efficient war economy. War economies show that even humans can be persuaded/indoctrinated/brought up/whatever to set aside their inessential needs and work toward a common goal. And AIs will necessarily be more malleable.

So I can't see the idea that AI will be safe by default as anything other than wishful thinking.


>So I can't see the idea that AI will be safe by default as anything other than wishful thinking.

I... don't think anyone is arguing that AI is safe in any way? I mean, the pattern matching tools we call AI now are already deployed in very dangerous weapons.

My point is not that AI is going to be safe and warm and fuzzy, or even that it won't be an existential threat. My point is that we're already facing several existential threats that don't require additional technological development to destroy society, and because the existing threats don't require postulating entirely new classes of compute machinery, we should probably focus on them first.

There are still enough nuclear weapons in the world for a global war to end society and kill most of us. A conflict like we saw in the first half of the 20th century, fought with modern weapons, would kill nearly all of us, and we are seeing a global resurgence of early 20th century political ideas. I think this is the biggest danger to the continuation of humanity right now.

We haven't killed ourselves yet... but the danger is still there. We still have apocalyptic weapons aimed and ready to go at a moment's notice. We still face the danger of those weapons becoming cheaper and easier to produce as we advance industrially.


> pattern matching, logical inference and post-hoc justification.

I think humans do way more than that. As the saying goes, "the easiest person to fool is yourself." Humans are able to lie to themselves to the point that they don't realize they are lying to themselves. They're able to create realities that no one else sees but themselves.


My models create their own reality all the time, and don’t realise they are lying to themselves.


There is a lot in 'logical inference' that we still don't really know how to describe well enough to tell computers how to do it. (I mean, I think it's super interesting how we figured out how to teach computers to do pattern matching... but I think logical inference is a different sort of thing that won't fall to the same tools)

But really, I think your idea that we're mostly 'post-hoc justification' is interesting. I mean, I've read some research to that effect, too; but it's... creepy and doesn't line up with anything reasonable about free will or really about long-term planning.



