Good point about Wikipedia. In just a few years the image people had of Wikipedia changed from an untrustworthy joke (The Office had a joke about Michael Scott trusting Wikipedia) to one of the first places people went to get information.
Before that, there was a lot of concern about blogs when they were the hot new thing. Here’s a New York Times article[1] from that period discussing the blogging controversy:
> There are two obvious differences between bloggers and the traditional press: unlike bloggers, professional journalists have a) editors and b) the need to maintain a professional reputation so that sources will continue to talk to them. I've been a journalist for more than a decade, and on two occasions I asked acquaintances whether I could print information that they had told me in social situations. Both times, they made clear that if I published they would never speak to me again. Without a reputation for trustworthiness, neither friendship nor journalism can be sustained over time.
The concerns weren't incorrect - Wikipedia and blogs are often unreliable. But they overlooked how useful these things could be despite their problems.
As an aside, it's interesting to see comments here bring up the potential unreliability of ChatGPT - and then say that people should go read Wikipedia instead.

[1] https://www.nytimes.com/2004/12/19/magazine/your-blog-or-min...
> Good point about Wikipedia. In just a few years the image people had of Wikipedia changed from an untrustworthy joke (The Office had a joke about Michael Scott trusting Wikipedia) to one of the first places people went to get information.
> As an aside, it's interesting to see comments here bring up the potential unreliability of ChatGPT - and then say that people should go read Wikipedia instead.
On the other hand, Wikipedia's image changed largely due to the transparency of its editorial processes - both the edit history and the discussions behind editorial decisions are publicly available. Are there pages where it's important to take the information presented with a grain of salt? Sure, but I have FAR more metadata available to help me discern how reliable the article may be.
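To make the metadata point concrete: the revision history isn't just viewable in the browser, it's machine-readable through the standard MediaWiki API. Here's a quick Python sketch (assuming the requests library; the article title and revision count are just example parameters) that pulls the most recent edits of a page:

```python
import requests

# Standard public MediaWiki query API for English Wikipedia.
API_URL = "https://en.wikipedia.org/w/api.php"

params = {
    "action": "query",
    "prop": "revisions",
    "titles": "Abraham Lincoln",          # example article
    "rvprop": "timestamp|user|comment",   # who edited, when, and why
    "rvlimit": 10,                        # last 10 revisions
    "format": "json",
}

response = requests.get(API_URL, params=params, timeout=10)
response.raise_for_status()
pages = response.json()["query"]["pages"]

for page in pages.values():
    for rev in page.get("revisions", []):
        # Each revision records the editor, timestamp, and edit summary.
        print(f'{rev["timestamp"]}  {rev["user"]}: {rev.get("comment", "")}')
```

None of that exists for a ChatGPT answer - there's no equivalent of an edit summary or talk page to consult.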
And that's kinda the difference - having a way to figure out the source of the information. I was in high school when Wikipedia really took off, and although there were more than a few kids who got caught copying their essays from Wikipedia (and a few more who got chastised for the lesser sin of trying to cite it as a source), the vast majority of folks were perfectly capable of realizing that all you needed to do was scroll down to the list of citations and work from there.
With all that in mind, the contrast with ChatGPT should be pretty clear. You can ask it to explain a topic, and it'll probably do a decent job (or appear to), but the process by which it produced that output is extremely opaque. And (as has been noted in other threads) OpenAI hasn't been particularly forthcoming about how they influence its responses, which is pretty ironic considering their name.
> On the other hand, Wikipedia's image changed largely due to the transparency of its editorial processes - both the edit history and the discussions behind editorial decisions are publicly available.
That was still true early on, when people were making fun of the idea. As with many things, public perception changed once it got popular and the people who had been laughing at it realized they also had a use for it.
I also can't agree that the production of Wikipedia articles is transparent in practice. In fact, I would say that the perceived reliability of Wikipedia (as noted in many of these comments) makes it more likely to lead people astray.
Do Wikipedia articles often list sources? Sure. Do most people not bother checking the sources, and simply rely on a third party to accurately summarize them? Also true. Do the few who actually check the sources often find it impossible, because the source is simply an academic book that most people don't have available? True as well. Even if it were available to them, the source is often (though not always) given as the entire book, so tracking down the source for a particular claim means searching for a needle in a haystack (and that's for just one claim).
I say this as someone who's actually interested in the sources for the claims in Wikipedia articles and who has spent a lot of time trying to track them down, often to no avail (and when I do manage to track one down, it's not uncommon to find the summary on Wikipedia misleading).
For instance, it takes me all of 30 seconds to write:
> Lincoln was actually in favor of subdividing Texas into four different states, and had the support of Congress to do so. It probably would have happened if he had lived. Source: Harris, William C. (2011). Lincoln and the Border States: Preserving the Union. Lawrence, Kansas: University Press of Kansas.
The source exists, but I have never read the book. Almost no one who reads this comment has easy access to it, and even if they did, I doubt anyone would go through it to fact-check me (and by the time anyone did, this conversation would have been long over).
Does ChatGPT sometimes give inaccurate answers? Certainly. But at the moment its users seem more aware of the platform's potential issues than the users of Wikipedia, AskHistorians, The New York Times, etc. are.