
So probably another stupid question, but how do you know what it's spitting out is accurate?


One has to be aware of the possibility of hallucinations, of course. But I have not encountered any in these sorts of interactions with the current leading models. Questions like "what does 'embedding space' mean in the abstract of this paper?" yield answers that, in my experience, make sense in context and check out against other sources. I would be more cautious with smaller models, or when asking about obscure information without supporting context.

Also, most of my questions are not about specific facts but about higher-level concepts. For ML-related topics, at least, the responses hold up.



