I found that posing questions along with possible causes/solutions/options/examples completely changes the output, even when none of the options is correct. In my experience, OpenAI models tend to be agreeable, bending rules or logic to respond positively to whatever you pose.