
Step 1: Have humans commit all kinds of knowledge to the Web for 20 years

Step 2: Analyze it.

Step 3: Make a model that regurgitates it and comes up with new variations. Humans are surprised it's so "human-like" in its responses, and anthropomorphize it into believing it "understands" what it's writing, when in fact it is remixing bits and pieces of what a billion other humans wrote over the years.

And yet ... perhaps that is good enough for a lot of answers! Better than the semantic web.
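The "remixing" claim in step 3 can be illustrated with a toy that is far simpler than a real language model: a Markov chain that only ever recombines word transitions seen in its training text. (This is a deliberately crude sketch; the function names and corpus are made up, and actual LLMs learn far richer statistics than word bigrams.)

```python
import random
from collections import defaultdict

def build_chain(corpus_words):
    """Map each word to every word observed to follow it in the corpus."""
    chain = defaultdict(list)
    for prev, nxt in zip(corpus_words, corpus_words[1:]):
        chain[prev].append(nxt)
    return chain

def remix(chain, start, length=8, rng=random):
    """Walk the chain: the output is 'new', yet every transition came from the training data."""
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug".split()
chain = build_chain(corpus)
print(remix(chain, "the"))
```

Every sentence it emits is novel in arrangement but built entirely from observed fragments, which is the gist of the comment's claim scaled down to bigrams.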



Missing one step: introduce entropy.

A lot of human knowledge comes from "accidents" - e.g. the parable of Newton under the apple tree, or Fleming accidentally discovering penicillin. It's not inconceivable that some entropy + this massive network could actually come up with novel ideas.

Though it still has no way to perform physical experiments, so it's limited in that way.


So entropy is controlled chaos (evolution)?


There have been a few more steps, and lots of virtual ink on ML papers.


Can you summarize, with links, so amateurs can see a primer of what was done?

Especially interested in the “understanding” part. Like, how does it know how to answer a zero-shot query?



