Except there’s an entire thread of people saying it’s useful. No one is trusting it implicitly, but I work with a bunch of folks who are pretty good at what they do but aren’t infallible, and I do have to verify a lot of what they do and say. I don’t dislike them for it; they’re human. Why is it a perceived failure when a machine is largely accurate but sometimes hallucinates, while these folks I work with keep getting promoted and praised for their sometimes untrustworthy work?
Because we (or perhaps I) apply different standards to different situations - a bad car driver who causes accidents is accepted as a fact of life, whereas a computer-driven car is expected to be far safer and cause no fatalities.
Personally I find it useless to see a machine as a colleague when it is not better in any way than a colleague, in the same way I don't see a hammer as a very punchy workmate. If I want to have a conversation about something, I'll go talk to a human; when I interrogate a database, I expect it to be better than a random human.