
So far:

- when I ask models to do things that I know how to do, and can tell them about the method but can't remember the details offhand, the answers check out when I verify them.

- when I attempt to specify things that I don't fully understand, the model produces rubbish 7 out of 10 times, and those episodes are irretrievable. About 30% of the time I get a hint of what I should do and can make some progress.



Probably down to a combination of LLMs on average having a harder time with tasks that humans typically find difficult, and prompting being easier the more you already know about the problem.



