
The idea is that a model might already be smarter than us, or at the very least have a very different thought process from ours, and then do something like improving itself. The problem is that it's impossible for us to predict the exact path, because that path is thought up by an entity whose thinking we don't really understand and can't predict.


I understand the idea of a self-improving intelligence, but unless there's a path for it to do so, it's just a thought experiment. The other poster who replied to you has a better idea: civilization can be thought of as the intelligence that is improving itself. Instead of worrying about some emergent AGI inside of civilization, we can think of civilization itself as an ASI that already exists. Anything that emerges inside of civilization will be eclipsed and kept in check by the existing superintelligence of the entire world.


I think "LLM builds better LLM" is drawing the border in the wrong place. Technical progress has been accelerating for centuries. It's pretty self-evident that technological civilization is improving upon itself.



