
I've had some (anecdotal) success reframing how I think about my prompts and the context I give the LLM. Once I started thinking of it as reducing the probability space of the output through priming via context+prompting, my intuition for it built up. It also becomes a good way to inject the "theory" of the program in a reusable way.
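In code terms, the "reusable theory" idea might look something like this sketch. The names (THEORY, build_prompt) and the conventions in the theory block are purely illustrative, not any real project's or library's:

```python
# Keep the program's "theory" (invariants, conventions, domain vocabulary)
# as one reusable block, and prepend it to every task-specific prompt.
# The theory narrows the model's output distribution before the task is
# even stated, so each individual prompt needs less restating.

THEORY = """\
Project conventions:
- All money values are integer cents, never floats.
- Public functions return a Result object instead of raising.
- Names are snake_case; each module has a single purpose.
"""

def build_prompt(task: str, theory: str = THEORY) -> str:
    """Compose the reusable theory with a one-off task description."""
    return f"{theory}\nTask:\n{task}\n"

prompt = build_prompt("Add a refund function to the billing module.")
```

The resulting string is what you'd hand to whatever LLM interface you use; the point is just that the priming context lives in one place instead of being retyped per prompt.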

It still takes a lot of thought and effort up front to put that together, and I'm not quite sure where the break-even line between easier-to-do-it-myself and hand-off-to-the-LLM is.


