I have a question. Section 230 basically says companies are allowed to moderate without being defined as publishers and losing liability protection.
Could X just basically stop moderating altogether? One (of many?) conflicts here is that they legally have to moderate some things (CSAM), and there would be conflict in terms of moderating adult content. Basically, is the law consistent enough to adopt a hands-off strategy and still maintain liability protection? Or would you be forced to go the other direction?
X could do just about anything. It’s actually hard to know what the current state of liability is these days, now that platforms have integrated algorithmic decision-making regarding what to show you.
In Anderson v TikTok, the Third Circuit appeals court decided that since the little girl did not specifically search for the videos she watched, TikTok's algorithm made what amounted to an editorial decision to show them to her, and thus Section 230 did not give TikTok any protection. TikTok ultimately chose not to appeal to the Supreme Court, and thus this is the current state of the law in Pennsylvania, New Jersey, and Delaware. Other courts may decide differently.
The general idea is that whenever algorithms are deciding what you see, Section 230 is not in play - but the First Amendment might be. The Supreme Court hinted that this is how they view things, BTW. If this is how it is, then Section 230 is essentially a dead law already, and losing it only affects old-fashioned blogs and forums.
Curating HN to only be tech-specific topics is protected by 230. Literally, the history of 230 is that there were two lawsuits [1]:
1. A website was sued over "defamatory" content posted by a user, and the website won because it did no moderation (other than removing illegal material).
2. A website was sued over "defamatory" content posted by a user, and the website lost because it did moderate (curating the site to be "family friendly").
Politicians (and less importantly, the general public) like the idea of websites being able to be "family friendly".
So forums and blogs can still exist, but without 230, any moderation beyond what is strictly legally required leaves you legally liable for all content.
> So forums and blogs can still exist, but without 230, any moderation beyond what is strictly legally required leaves you legally liable for all content.
Which means the consequence of any mistake in sticking exactly to the bounds of legally mandated moderation is enormous liability: massive civil liability if you go slightly beyond the minimum, or, given the source of most of those minimums, catastrophic criminal liability if you fall below it. The only realistic approach at non-trivial scale is simply not to allow UGC, except at a level you are willing to edit as if it were first-party content you were going to be fully responsible for.
It's going to be fun watching HN, which is full of people who support this sort of thing (and even more extreme regulations to boot), deal with the ramifications of this forum's guidelines and moderation policies being de facto illegal.
It won't even be "turning into Reddit"; it's all going to turn into 4chan.
> The general idea is that whenever algorithms are deciding what you see, Section 230 is not in play
This isn't correct. The ruling was very narrow, with a key component being that a death was directly attributed to a trend recommended by the algorithm, one that TikTok was aware of and knew was dangerous. That part is key - from a Section 230 enforcement perspective, it's basically the equivalent of not acting to remove illegal content. Basically everything we've understood about how algorithms are liable since Section 230 was enacted remains intact.
I don't agree. The ruling used logical reasoning based on the 2024 NetChoice decision, in which the Supreme Court ruled that the actions of the moderating algorithms enjoyed First Amendment protection. The First Amendment protects you from liability for your own speech, while Section 230 protects you from liability for somebody else's speech. Ergo, if the platform was protected by the First Amendment, then the algorithm's output was the speech of the platform.
NetChoice had a bunch of concurring opinions, including one from ACB that essentially says they really aren't sure how they'd rule in a case directly challenging algorithmic recommendations. That's why I say it's not clear what the liability situation is, and it really is baffling why TikTok chose not to appeal.
> Could X just basically stop moderating altogether?
An algorithmic feed is one of the things that would make them a publisher without Section 230. So, they could, but they wouldn't be anything like X anymore.
> Basically, is the law consistent enough to adopt a hands-off strategy and still maintain liability protection?
No. That's why Section 230 was adopted: to address an existential legal threat to any site of non-trivial scale with user-generated content. Without Section 230, or a radical revision of lots of other law, the only practical option is for providers to do as much review and editing of, and accept the same liability for, UGC as they would for first-party content.
If you wanted to tighten things up without intentionally nuking UGC as a viable thing for internet businesses practically subject to US jurisdiction, you could revise 230 to explicitly not remove distributor liability (it doesn't actually say it does, and the courts' extension of it to do so was arguably erroneous). That would give sites an obligation to respond to actual knowledge of unlawful content, but not presume such knowledge from the act of presenting the content. But the "repeal 230" group isn't trying to solve problems.
Let's be realistic: CSAM is just a vehicle to kill 230 here. And I'll be the first to admit that CSAM is a problem, whether in this context or in "chat control".
I work at a company that provides some enterprise messaging, and a few years ago we were rather surprised to find that a bunch of people were using our service to share CSAM. I have had friends in other industries run into cases where their products (not even chat products) were (ab)used for the same purpose.
In my opinion, and with my limited knowledge as someone who is not a lawyer, moderating is not equal to publishing. Deleting illegal material is not publishing. At least that is how I will move forward on my little semi-private and private forums until my lawyers advise me otherwise.