
Yeah, but the models are all based on explicit data. I'm saying humans have prior wiring that allows them to extract and keep context that LLMs do not have access to.




So the suggestion here is that RAG, tools, LLM memory, fine-tuning, context management, etc. are not enough to take advantage of all this context? Is there any evidence that these things aren't on a trajectory to be optimized enough to do the job?
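To be concrete about what "RAG / context management" means in this argument: it's basically "fetch whatever explicit data looks relevant and splice it into the prompt." A minimal toy sketch below (the keyword scoring, the sample documents, and call_llm are all placeholder assumptions, not any particular library's API):

  # Minimal RAG-style loop: retrieve relevant snippets, inject them
  # into the prompt, then ask the model. Pure-Python toy example.

  def score(query: str, doc: str) -> int:
      # Toy relevance: count shared words (a real system would use embeddings).
      q = set(query.lower().split())
      return sum(1 for w in doc.lower().split() if w in q)

  def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
      # Keep the k most relevant documents.
      return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

  def call_llm(prompt: str) -> str:
      # Hypothetical placeholder; swap in a real completion call here.
      return f"[model output for prompt of {len(prompt)} chars]"

  def answer(query: str, docs: list[str]) -> str:
      # The retrieved context is explicit text pasted into the prompt --
      # the model only ever sees what made it into this string.
      context = "\n".join(retrieve(query, docs))
      prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
      return call_llm(prompt)

  if __name__ == "__main__":
      docs = [
          "The deploy script lives in ops/deploy.sh and reads ENV from .env",
          "Our retry policy is exponential backoff with a 30s cap",
          "Lunch is at noon on Fridays",
      ]
      print(answer("what is the retry policy?", docs))

The open question upthread is whether stuffing retrieved text into the prompt like this, plus memory and fine-tuning, converges on the kind of implicit context humans carry, or whether something is structurally missing.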


