
Great posts. I think it's an error rooted in a mistake often made on this topic: the assumption that the side effects we see now are some fundamental problem, rather than just an artifact of the way the systems are trained and used. And of how we (mal)function.

Especially the tight embrace of the cognitive bias that our intelligence is special and wonderful. After all, we have that fancy squishy brain, which we assume to be essential. As far as I can tell, the only visible bottlenecks looking into the future come into view once you start debating intelligence vs. emulating intelligence. And if that's really the metric, some honest introspection about the nature of human intelligence might be in order.

Not sure how much of that is done purposefully to avoid creating too much urgency around figuring out outer alignment at a societal level. Just as it's no wonder that we haven't figured out how to deal with fake news while simultaneously insisting that malinformation exists, it's really no wonder that we can't figure out AI alignment while not having solved human alignment. Nobody should be surprised that the cause of the problems might be sitting in front of the machine.



NOVA just released an episode on perception (https://www.youtube.com/watch?v=HU6LfXNeQM4) and, yeah, aligning machine perception to human perception is going to be nearly impossible.

Or to put it another way, your brain's model of reality is highly optimized around the limitations of a meatsack on a power budget that is trying not to die. Our current AI does not have to worry about death in its most common forms; companies like Microsoft throw practically unlimited amounts of power at it. The textual data fed to it is filtered far beyond how a human mind filters its input; books and papers are a tiny summarization of reality. At the same time, more 'raw' forms of data like images/video/audio are likely far less filtered than what the human mind does to stay within its power budget.

Rehashing: this is why I think alignment will be impossible. At the end of the day, humans and AI will see different realities.


Thanks for the link! Trying to figure out what AI thinking looks like sounds like a dead end to me. It's not human, you don't understand it, so what's the point? Especially when you have to worry about being manipulated. Alignment done this way does indeed seem impossible. But given the ability to produce language that makes sense, it should be possible to emulate the human thinking process by looking at how it actually works on a practical level, the same way you don't care how the brain actually works to produce language.

As such, I see no hurdle to getting something to emulate an individual's thinking in language. Assuming there aren't actually multiple realities to see, just different perspectives you can work with, that would mean we are looking for the one that utilizes human perspectives but doesn't make the mistakes humans do.

Which is what makes this so scary: the limitations are just a byproduct of the current approach. They are just playing the wrong game. Which means I am pretty confident such systems already exist somewhere.

edit: In this context I believe it's also worth mentioning what Altman said on Lex Fridman's podcast: that humans don't like condescending bots. That's a bitter pill to swallow going forward, especially since we require a lot of smoke and mirrors and noble lies, as individuals as well as a society.



