It seems many people in tech believe that the ability of anybody to share largely unmoderated content on large social media platforms is essential for free and open online communication. At the same time, you could argue there is a desire/need to combat spam, disinformation campaigns, propaganda, hate speech, AI-generated content, etc.
Much of the discussion around content moderation focuses on its many challenges, sometimes suggesting it is an impossible problem to solve. A lot of discussion also suggests content moderation is inherently bad because "free speech". However, many other problems are technically and/or ethically challenging (e.g., self-driving cars), yet are not discussed the same way.
A perfect solution to content moderation likely does not exist, but I am curious which companies are working solely on providing content moderation as a service and what innovative solutions they are proposing. Also, what is the state of the art in content moderation research, assuming such a thing exists?
At the top-down platform level, focus on keeping out illegal material only. Take any further aspects of your value system and your theories about what is or isn't "good" for people out of it.
The current model is terribly contemptuous and anti-human. It assumes that someone or something has more legitimacy than "those people" to decide what EVERYBODY can/can't and should/shouldn't create and consume. Whether that someone is "a private company that can do what it wants, no further examination needed", or a government, "society", "the people (really, 'my people')", the majority, the elite, the experts, the intellectuals, the 'responsible adults in the room', it's all the same. Tyranny over the experience of every single person by a subset of people in a position to make it so.