Exactly. The problem with ChatGPT, GPT-3, etc. is that it can't transparently explain how it arrived at that 'incoherent bullshit'. It just spits out an answer and can't explain why, except to say:

"As a language model, I do not have the ability to provide mathematical proofs or solve mathematical problems. My primary function is to generate text based on the input that I receive, and I do not have access to external information or the ability to execute code."

So until it can transparently give a detailed explanation of how it reached a decision, and can offer a novel approach to unsolved computer science and mathematics problems it hasn't already been trained on, it is complete hype and mania.

Frankly speaking, this is not going to take over industries any time soon, at least as long as it keeps repeating answers and incoherent jargon without any explanation.



From what I understand, it never will.

Its design is too far from what we call "intelligence"; essentially it's just a fancy Markov process plus language rules.
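
For the curious, here's a minimal sketch of what a word-level Markov chain looks like (the function names are mine, and real GPT models differ in that attention conditions on the entire context rather than a fixed window), but it illustrates the "predict the next word from the previous ones" core:

    import random
    from collections import defaultdict

    def build_chain(text, order=1):
        """Map each word-tuple of length `order` to the words that follow it."""
        words = text.split()
        chain = defaultdict(list)
        for i in range(len(words) - order):
            key = tuple(words[i:i + order])
            chain[key].append(words[i + order])
        return chain

    def generate(chain, order=1, length=20):
        """Sample text by repeatedly picking a random observed successor."""
        out = list(random.choice(list(chain)))
        for _ in range(length):
            successors = chain.get(tuple(out[-order:]))
            if not successors:
                break
            out.append(random.choice(successors))
        return " ".join(out)

    corpus = "the model predicts the next word and the next word follows the last"
    print(generate(build_chain(corpus)))

The point being: everything it emits is a function of recently seen tokens and learned statistics, with no model of truth behind it.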


GPT-2 is 4 years old and word2vec 9 years old. I feel like you're underestimating the rate at which we're progressing.



