
Are you sure that was real? I thought it was a made-up example of the problems with the update.


There are several threads on Reddit. For example https://www.reddit.com/r/ChatGPT/comments/1kalae8/chatgpt_in...

Perhaps everyone there is LARPing, but if you start typing stereotypical psychosis talk into ChatGPT, it won't be long before it starts affirming your divinity.


Reddit is overwhelmingly fake content, a massive percentage of it. A post on Reddit these days is not actually evidence of anything real at all.


I take issue with the qualifier "these days". On day one, it was mostly fake accounts set up by the founders.

https://m.economictimes.com/magazines/panache/reddit-faked-i...


Pre-2023, it took real human effort to make shit up, there was much less incentive to put in that effort, and you could more easily guess what was made up by judging whether a human would bother. These days it's literally anything, all the time, zero effort. You're right that there has always been fake shit, but now more than half the posts on /r/all are misleading, wrong, or just fake.


It didn't matter to me whether it was real, because I believe there are edge cases where it could happen, and that warranted a shutdown and pullback.

The sycophant will be back, because they accidentally stumbled upon an engagement manager's dream machine.


You're probably right. Early adopters generally prefer not to be bullshitted, just like how Google in the early days optimized search results for relevance rather than popularity.

As more people adopted Google, it became more popularity-oriented.

Personally, I pay more not to be bullshitted, but I know many people who prefer to be lied to, and I expect this to become part of the personalization in the future.


It kind of does matter if it's real, because in my experience this is something OpenAI has thought about a lot, and added significant protections to address exactly this class of issue.

Throwing out strawman hypotheticals is just going to confuse the public debate over what protections need to be prioritized.


> Throwing out strawman hypotheticals is just going to confuse the public debate over what protections need to be prioritized.

Seems like asserting hypothetical "significant protections to address exactly this class of issue" does the same thing though?


Speaking anecdotally: people with mental illness using ChatGPT to validate their beliefs is absolutely a thing that happens. Even without a grossly sycophantic model, it can do substantial harm by amplifying the delusional or fantastical material presented to it by the user.


This seems to be common on conspiracy and meme-stock subreddits.

"I asked ChatGPT if <current_event> could be caused by <crackpot theory>." and it confirmed everything!


At >500M weekly active users, it doesn't actually matter whether this one example was real. There will be hundreds of cases like it that were never shared.


I personally know someone who is going through psychosis right now, and ChatGPT is validating their delusions and suggesting they do illegal things, even after the rollback. See my comment history.


Even if it was made up, it's still a serious issue.



