
You don't need to; you just always tell it it's wrong.


But if you always tell it it's wrong, it will sometimes come up with a worse answer on the second try. Which means you still need to know whether the first answer was correct, the second was, or neither.

I'm particularly reminded of a screenshot of someone gaslighting ChatGPT into repeatedly apologising and offering different suggestions for Neo's favourite pizza topping, even though it had correctly answered first time round that The Matrix never specifies his favourite pizza topping. But the same problem applies to non-ridiculous questions.


The idea isn't that you tell it to simply change what it wrote on the first try. The idea is that having a first draft to work with allows it to rewrite a better version.
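For what it's worth, here's a minimal sketch of that draft-then-revise loop. It assumes the OpenAI Python SDK purely for illustration; the model name, prompts, and the draft_and_revise helper are placeholders, not anything from this thread.

  # Minimal sketch of a draft-then-revise loop (assumes the OpenAI Python SDK).
  from openai import OpenAI

  client = OpenAI()

  def draft_and_revise(task: str, model: str = "gpt-4o-mini") -> str:
      # First pass: produce an initial draft.
      draft = client.chat.completions.create(
          model=model,
          messages=[{"role": "user", "content": task}],
      ).choices[0].message.content

      # Second pass: feed the draft back and ask for a critique and rewrite,
      # rather than just telling the model "you're wrong".
      revised = client.chat.completions.create(
          model=model,
          messages=[
              {"role": "user", "content": task},
              {"role": "assistant", "content": draft},
              {"role": "user", "content": "Review your draft above for errors or weak spots, then rewrite an improved version."},
          ],
      ).choices[0].message.content
      return revised

The point is that the second call gets the first draft as material to improve on, which is a different thing from blindly contradicting the model.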


This technique works fine if you're iterating on creative work, or if you have a specific thing you want it to fix.

It's not much use if ChatGPT gives you a diagnosis of your symptoms which may or may not be accurate.


Great example!

What if someone refers to some part of their body incorrectly? What if they're not able to think clearly for whatever reason and tell it something completely wrong?

An LLM is wholly inappropriate.

As for its "bedside manner"/how polite or friendly it is, that's meaningless if it isn't good at what it does. Some of the best docs/profs I've known have been very detached and seemingly unfriendly, but I'll be damned if they weren't great at what they do. Give me the stern and grumpy doc that knows what they're doing over the cocksure LLM that can't reason.


That's assuming it doesn't just enter a fail state and keep providing the same answer again and again, even after you explain what's wrong with the answer.



