
There was a Twitter thread discussed here just a day or two ago where ChatGPT was giving serious, in-depth, plausible answers to questions about TypeScript.

Yet apparently those answers were quite wrong.

https://news.ycombinator.com/item?id=33817682

That trust factor is huge. I made this argument in the thread: when a human isn't certain of the right answer, they will (typically) provide an answer, an explanation of their reasoning, and caveats about where they could be wrong.

An AI that sounds confident 100% of the time but is occasionally very wrong is going to be a real problem.



Why can't you hook it up to canonical references? If you asked it to search those references, I feel certain it would be able to navigate to, parse, and summarise the answers with ease.
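
Something like this, roughly. A minimal sketch of the idea in TypeScript, since that's the language at hand: retrieve passages from the canonical docs, then constrain the model to answer only from them. To be clear, `searchDocs` and `askModel` here are made-up stand-ins (there's no real API behind them); it's the shape of the pattern that matters.

    // Sketch of "hook it up to canonical references"
    // (retrieval-augmented prompting).

    interface DocPassage {
      url: string;     // canonical source, e.g. a TypeScript handbook page
      excerpt: string; // the retrieved text
    }

    // Hypothetical: query a full-text index of the canonical docs.
    declare function searchDocs(query: string): Promise<DocPassage[]>;

    // Hypothetical: send a prompt to whatever LLM API you use.
    declare function askModel(prompt: string): Promise<string>;

    async function answerFromReferences(question: string): Promise<string> {
      const passages = await searchDocs(question);

      // Ground the model in retrieved text and ask it to cite or abstain,
      // rather than letting it answer purely from its training data.
      const prompt = [
        "Answer the question using ONLY the reference excerpts below.",
        "Cite the URL of each excerpt you rely on.",
        'If the excerpts do not contain the answer, say "I don\'t know."',
        "",
        ...passages.map((p, i) => `[${i + 1}] ${p.url}\n${p.excerpt}`),
        "",
        `Question: ${question}`,
      ].join("\n");

      return askModel(prompt);
    }

The abstain instruction is the important part: grounding only addresses the trust problem upthread if the model is told it may say "I don't know" when the retrieved excerpts don't cover the question.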



