Perhaps everyone there is LARPing, but if you start typing stereotypical psychosis talk into ChatGPT, it won't be long before it starts agreeing that you're divine.
Pre-2023, it took real human effort to make shit up, there was much less incentive relative to that effort, and you could more easily guess what was fabricated by judging whether a human would bother making it up. These days it's literally anything, all the time, at zero effort. You're right that there has always been fake content, but now more than half the posts on /r/all are misleading, wrong, or just fake.
You're probably right. Early adopters generally prefer not to be bullshitted, just as Google in its early days optimized for relevance in search results rather than popularity.
As more people adopted Google, it became more popularity-oriented.
Personally, I pay more not to be bullshitted, but I know many people who prefer to be lied to, and I expect this to become part of the personalization in the future.
It kind of does matter whether it's real, because in my experience this is something OpenAI has thought about a lot and has added significant protections against exactly this class of issue.
Throwing out strawman hypotheticals is just going to confuse the public debate over what protections need to be prioritized.
Speaking anecdotally: people with mental illness using ChatGPT to validate their beliefs is absolutely a thing that happens. Even without a grossly sycophantic model, it can do substantial harm by amplifying delusional or fantastical material presented to it by the user.
I personally know someone who is going through psychosis right now, and ChatGPT is validating their delusions and suggesting they do illegal things, even after the rollback. See my comment history.