
It's done in a roundabout way, usually with some variation of "you had a bad experience because you are using the tool incorrectly, get better at prompting".


That's a response to 'I don't get good results with LLMs and therefore conclude that getting good results with them is not possible'. I have never seen anyone claim that they make no mistakes if you prompt them correctly.



