Hacker News

I don’t understand — so this is just about training an LLM on bad data and ending up with a bad LLM?

Just use a different model?

Don’t train it on bad data, and just start a new session if your RAG muffins went off the rails?

What am I missing here?



The idea of brain rot is that if you take a good brain and feed it bad data, it becomes bad. Obviously if you give a baby (a blank brain) bad data it will turn out bad — but this is about the rot: degradation of something that was already good.


Do you know the concept of brain rot? The gist here is that if you train on bad data (if you fuel your brain with bad information), it becomes bad.


I don’t understand why this is news or relevant information in October 2025, as opposed to October 2022.



