Hacker News

GPT-3 is not trained to produce true statements. It is trained to output credible sentences of the kind found on the internet.
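A minimal sketch of the objective the parent comment is describing: language models like GPT-3 are trained to minimize next-token cross-entropy, which rewards assigning high probability to whatever continuation actually appears in the training text. Nothing in the loss distinguishes true continuations from merely plausible ones. (The function and values below are illustrative, not GPT-3's actual code.)

```python
import math

def next_token_loss(probs, target_index):
    """Cross-entropy loss for a single prediction: the model is rewarded
    for assigning high probability to the token that actually follows in
    the training text -- i.e. for plausibility, not truthfulness."""
    return -math.log(probs[target_index])

# Hypothetical model distribution over four candidate next tokens.
probs = [0.1, 0.6, 0.2, 0.1]

# A fluent, common continuation gets low loss...
loss_common = next_token_loss(probs, 1)
# ...while a rare one gets high loss, regardless of which is true.
loss_rare = next_token_loss(probs, 3)

print(loss_common < loss_rare)
```

Under this objective, a confident-sounding falsehood that resembles training text is scored exactly as well as a truth of equal corpus frequency, which is the sense in which the model "bullshits."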

It is a master bullshitter and will never say "I don't understand what you mean" if it can wriggle out of answering.

This is a great step toward real AI. It is a sort of raw intuition; it now needs to learn to filter that intuition through rationality and epistemological techniques.



It may be A something, but not I.


For as long as we make intelligent algorithms, people will keep moving the goalposts by changing the definition of intelligence.

We have an algorithm that can learn the rules of any deterministic game just by watching it being played, and then outperform humans at it. We have algorithms that outperform humans at image recognition and OCR. We have algorithms that draw images from textual descriptions. We have algorithms that decipher languages just by looking at a corpus of text.

But somehow, as soon as a computer can do something, it is no longer considered an intelligent task. Kasparov and Champollion used to be considered very smart for what they did, but now apparently that's just dumb applied statistics.



