> ChatGPT’s output might not be as good as the best writing, but most of what people read is mediocre content with questionable accuracy.
I agree with this, but I think we may be missing the point.
Why are people reading mediocre content with questionable accuracy?
The problem is not: how to quickly and cheaply create content which is marginally less mediocre and marginally more accurate than what we currently have.
> Why are people reading mediocre content with questionable accuracy?
For most people that consume content, accurate information is not a priority. Being entertained, belonging to a group, or satisfying some immediate need or want matter more for most people, most of the time.
It's doubtful that people who consume content will start prizing accuracy (most of the time). Partly, this is because accurate, precise, and/or qualified statements are often boring and hard to follow.
Consider also incentives that drive content creators:
1) the internet means many people can find a platform for many messages (democratization of speech is supposed to be a virtue of the internet)
2) the "freemium" advertising model means that most messages can be financially rewarded, regardless of accuracy
Engagement and financial rewards are much easier to get with lies, drama, and controversy than with accuracy, precision, and qualified statements. In my experience, most people cannot find something worth striving for in the modern world, and settle on whatever comes easiest.
> The problem is not: how to quickly and cheaply create content which is marginally less mediocre and marginally more accurate than what we currently have
I'll grant that this is not a problem you're trying to solve. However, this sounds like exactly the arms race that content creators are engaged in until it stops paying.
>> The problem is not: how to quickly and cheaply create content which is marginally less mediocre and marginally more accurate than what we currently have
> this sounds like exactly the arms race that content creators are engaged in until it stops paying
I don't think they're trying to achieve these things at all. Look at the broken incentives you raised in the first part of your response: that's all this is about.
Content creators mostly don't care about how mediocre their output is, and they mostly don't care about how accurate it is either, as long as the clicks keep coming.
We don't need (and can't expect) ChatGPT to "fix" this, the underlying problem is the broken incentives, not the actual quality - or lack of it - in the content.
> Why are people reading mediocre content with questionable accuracy?
Because it's literally everywhere you look, and the sheer volume makes it very hard to filter. Particularly if you're trying to learn and grow in a certain subject, it's difficult to filter out what you don't yet know is 'mediocre' or 'questionable', at least not until you've already consumed it.
A lot of it is for entertainment, even when it's infotainment: news reporting, Wikipedia, current-affairs podcasts, Tweets, Reddit, Hacker News comments, etc. The inaccuracies here aren't that important, because the vast majority of this content is consumed for dopamine hits, not for actually putting the information to use.
Some of it is out of necessity. I don't think it's controversial to say that many people use Stack Overflow, and though some of the answers there are very good, others aren't. If someone needs help with a specific problem they'll often post a question somewhere (Reddit, Stack Overflow, etc.), and the quality of the answer they get is a roll of the dice - as is whether they'll get an answer at all.
Some of it is because blog spam has made traditional searches much more difficult. Another comment here says there are better comment sections than Reddit and Hacker News, but if you look at any discussion about Google you'll find that a lot of people have taken to appending "reddit.com" to their searches, because otherwise they drown in a mountain of blog spam.