OK, tell me what an example is, if not a description of something that lends -- or removes -- credence toward a position.
And I do want to know if you think GPT-4 is actually incapable of making mistakes, because that's what you seem to be implying (even though it's flatly ridiculous).
No one can deduce anything from it or attempt to reproduce the problem to see where it failed.
You basically wrote, “I asked for white and the model said ‘grape juice is the best!’”
What can anyone do with that? Nothing. It’s meaningless.
This is why open source maintainers dread issue reporters who write nothing more than “error happened, fix it,” then get upset when the issue is closed.
You're not an OpenAI developer, as far as I know, so it didn't occur to me that you wanted more than a basic description of the problem I had. If this were a bug report, of course I would have included more detail, but you're just some dude on the internet with a throwaway HN account. I'm not going to dig up a transcript just to convince you that ChatGPT is, in fact, fallible (you apparently believe otherwise).