Hacker News

I'm currently involved in a project where we are getting the LLM to do exactly that. As someone who _does_ have a working theory of the software (I was involved in designing and writing it), my current assessment is that the LLM-generated docs are pure line noise at the moment and have basically no value in imparting knowledge.

Hopefully we can iterate and get the system producing useful documents automagically, but my worry is that it will not generalise across different systems, and as a result we will have invested a huge amount of effort into creating "AI"-generated docs for our system, effort that could have been better spent just having humans write the docs.



My experience with tools like deepwiki has been mixed, and that's precisely the problem: I tried it with libraries I was familiar with, and it was subtly wrong about some things.


We are not at the subtly wrong stage yet; currently we are at the stage of totally empty words devoid of real meaning.



