Hacker News

> ... we don't actually understand how LLMs work on a high level.

Are you saying that the people who created ChatGPT don't understand how it works? Or that the rest of us don't?



Training a model doesn't mean you understand what the neurons actually do to influence the output. Nobody knows that; that's where the black-box analogy comes in. We know what goes into the box and what comes out. We don't know what the box is doing to the data.
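A minimal sketch of what "black box" means here: the toy two-layer network below is hypothetical (not ChatGPT's architecture), but it makes the point that every parameter is a plainly inspectable number, yet no individual weight explains *why* the output is what it is.

```python
import math
import random

random.seed(0)

def make_layer(n_in, n_out):
    # Every parameter is a visible, printable number -- nothing is hidden.
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

W1 = make_layer(3, 4)  # first layer: 3 inputs -> 4 hidden units
W2 = make_layer(4, 2)  # second layer: 4 hidden units -> 2 outputs

def forward(x):
    # We know exactly what goes in (x) and what comes out (the return value).
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return [sum(w * hi for w, hi in zip(row, h)) for row in W2]

x = [1.0, 0.5, -0.2]
y = forward(x)

n_params = len(W1) * len(W1[0]) + len(W2) * len(W2[0])
print(n_params)  # all 20 parameters can be listed and inspected
print(y)         # but no single weight "means" anything about this output
```

Scale this up to billions of weights and the situation is the same in kind: the arithmetic is fully transparent, but the role any given neuron plays in producing the output is not.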




