Instead you get arbitrary bias without multiple sources you can check. If there is a shitty source that comes up in a normal web search, I can skip it and go to the ones I know are better. How do I do something similar with ChatGPT? Am I supposed to ask it multiple times to average the randomness that's applied to the conversation? Is the average even what I want? There's a ton of garbage information on the internet. If it outweighs the good information on a certain topic, isn't ChatGPT on average going to be giving me bad info?
If 100 people write about a topic, and 2 of them are wrong, and for some reason the search engine ranks one of those 2 higher, how will you know or spot the incorrect information or bias?
If the information in ChatGPT is averaged out, the chance of it being correct is high.
If you ask questions with bias in them, you can sway ChatGPT's results. To me that just means you're asking the wrong questions.
For devops, programming, configuration, daily tasks, generalised workflows, how-tos, etc., ChatGPT will more than likely give you accurate results that are useful, or pretty damn close.
Obviously, with its information being limited to 2021, it can be hard to get some solutions out of it. I needed to do something with named pipes in C# that I couldn't get working in .NET 6. The examples it kept giving me were for .NET Core 3.1 or .NET Framework. When I asked specifically about .NET 5 (since it doesn't know .NET 6 is released or that 7 exists), it apologised, said that code wouldn't work in 5, and gave me a working .NET 5 example, which identified an API change I wasn't aware of.
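For anyone who hasn't touched them: the commenter doesn't say which API change they hit, so this isn't that, just a generic sketch of what basic named-pipe usage looks like on modern .NET (the pipe name "demo-pipe" is made up for the example):

```csharp
using System;
using System.IO;
using System.IO.Pipes;
using System.Threading.Tasks;

class PipeDemo
{
    static async Task Main()
    {
        // Server and client ends of the same in-process pipe, for demo purposes.
        // Normally these would live in two different processes.
        using var server = new NamedPipeServerStream("demo-pipe", PipeDirection.InOut);
        using var client = new NamedPipeClientStream(".", "demo-pipe", PipeDirection.InOut);

        // Start waiting for a connection on the server, then connect the client.
        var serverWait = server.WaitForConnectionAsync();
        await client.ConnectAsync();
        await serverWait;

        // Client writes a line; server reads it back.
        using var writer = new StreamWriter(client) { AutoFlush = true };
        using var reader = new StreamReader(server);
        await writer.WriteLineAsync("hello");
        Console.WriteLine(await reader.ReadLineAsync());
    }
}
```

On Windows this maps to real named pipes; on Linux/macOS .NET emulates them over Unix domain sockets, which is part of why version-specific examples from older runtimes can behave differently.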