Hacker News | x38iq84n's comments

Not being ruled by unelected bureaucrats is well worth it.


I thought we still had the House of Lords.


Level 3 and beyond are nonsense. Just give users the tools to moderate what they see. Don't want to see slurs? Here's a toggle for that. No need to censor those who say them. Memes fall under fair use; nothing needs to be taken down. The government of Malaysia can start its own Twitter. $TWTR needs to follow US laws, not the laws of Malaysia.
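Purely as an illustration of that kind of user-side control, a minimal TypeScript sketch (the Post and UserFilterSettings types, the visiblePosts function, and the slur list are all made up for this example, not anything Twitter actually exposes): the server leaves the content up, and each client decides what to hide.

    // Hypothetical client-side filter: nothing is deleted server-side;
    // the reader's own settings decide what gets rendered.
    interface Post { id: string; text: string; }

    interface UserFilterSettings {
      hideSlurs: boolean;     // the "toggle" mentioned above
      mutedWords: string[];   // a user-managed mute list
    }

    function visiblePosts(
      posts: Post[],
      settings: UserFilterSettings,
      slurList: string[],     // assumed to be maintained elsewhere
    ): Post[] {
      const blocked = [
        ...settings.mutedWords,
        ...(settings.hideSlurs ? slurList : []),
      ].map((w) => w.toLowerCase());

      return posts.filter(
        (p) => !blocked.some((w) => p.text.toLowerCase().includes(w)),
      );
    }

The filtering cost then sits with readers who opt in, rather than with a central moderation team deciding for everyone.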

Here is a radical content moderation policy: Block/take down things that are illegal by law and leave the rest.

Why is it so hard?


No open online community with such a radical content moderation policy can survive. It is too easy for a small number of people to flood a community with spam and abuse. It has happened countless times. And there is a lot of good writing on the phenomenon:

Clay Shirky, A Group Is Its Own Worst Enemy: https://www.gwern.net/docs/technology/2005-shirky-agroupisit...

LessWrong, Well-Kept Gardens Die by Pacifism: https://www.lesswrong.com/posts/tscc3e5eujrsEeFN4/well-kept-...


> Here is a radical content moderation policy: Block/take down things that are illegal by law and leave the rest.

That’s essentially what every social network catering to the far right claims as its shtick. They soon realise it doesn’t work: https://www.techdirt.com/2021/07/12/it-appears-that-jason-mi...


You know what doesn’t work?

Attempting to moderate all human communication.

Impossible by definition: it is produced as fast as the humans involved can produce it, so short of a Stasi-level police state, it can’t work.

So instead of trotting out the “far-right” boogeyman every time someone doesn’t give you the impossible, how about we figure out how to give the local police the tools to do their jobs?


I tried to reply, but I genuinely find your reply incomprehensible. You’re responding to points I didn’t make and throwing around suggestions that have nothing to do with what I said or with what is being discussed.


Hmm, ok.

"every social network catering to the far right", "shtick". I presume you mean gab.com, etc. -- the "far right" boogie man, no?

"they soon realize it doesn't work". I presume you refer to "block/take down..illegal..leave the rest", no?

And then an article about badly implemented moderation, which boils down to removing "things you don't like" and banning accounts.

So, am I incorrect to infer you don't want sites (somehow) banning just illegal stuff, and badly implemented moderation -- you want well-implemented moderation that "just works"?

Which I claim is impossible (from first principles, based on scale), but you still ... just want it!

If that's not a correct interpretation, then clarify, please.


> So, am I incorrect to infer you don't want sites (somehow) banning just illegal stuff, and badly implemented moderation -- you want well-implemented moderation that "just works"?

You’re not only incorrect, you’re astronomically wrong. I pointed out one approach that we know doesn’t work, specifically to answer someone who made that suggestion. I haven’t made any comment or judgement on other solutions.

> but you still ... just want it!

No, no I do not. You’re not just “inferring”, you’re downright constructing a straw man from things I never claimed. That’s the opposite of constructive discussion.

> "shtick"

A shtick is a defining characteristic. Twitter’s character limit is a shtick. Doesn’t mean it’s good or bad.

> the "far right" boogie man, no?

You keep using that expression. More caricatures. The social networks I had in mind cater to the far right and don’t hide it; that’s their growth plan. If you felt attacked, that’s on you.


1) You'd be surprised at how many things are illegal to say somewhere. Sure, Malaysia can make its own Twitter, and so can India, and the EU, and Russia, and China, and Japan, and soon you only have a customer base in the United States.

2) Okay. Do that. Make a platform that stops at 3). Get sued despite fair use (you have a case, but do you have Sony money to defend it in court? If you fail, you are now responsible for the legal precedent that memes aren't fair use). Allow people to say the n-word and see how quickly you'll get dropped by advertisers who think that having an ad next to a tweet calling for racial genocide is maybe not exactly good brand image.

3) Spam is a considerable part of the user experience. If every second tweet on your timeline is spam, users aren't going to block and move on; they're going to stop using Twitter. Your antispam is not flawless.

It's hard because "block things that are illegal by law and leave the rest" is unfathomably hard for a human to apply, let alone an automated system.


1/ Twitter is a US company. It needs to follow US laws and it can ignore laws of all countries that are not the USA. Why should a US company dance to the tune of foreign governments anyway? Not to mention that those requirements may be mutually exclusive or contradictory.

2/ Twitter, not being a publisher, is not responsible for user-generated content, just like Verizon is not responsible for what people say to each other on the phone. Advertisers should have some controls over what content their ads appear next to. Giving controls to users and advertisers is the key, not heavy moderation.

3/ Once again, controls. Twitter already has an option to show (in the feed) only tweets from the people you follow and their connections; that's a good start, and I see no spam at all.

"Block things that are illegal by law and leave the rest" is hilariously easy but must be accompanied by tools that allow users and advertisers to tailor their own experience.


> Twitter is a US company. It needs to follow US laws and it can ignore laws of all countries that are not the USA.

Twitter needs to follow the laws of the countries it wants to operate in. By your logic, Apple wouldn’t need to switch iPhones to USB-C¹ because it’s a European law and Apple is a US company.

¹ https://www.businessinsider.com/usb-c-iphone-coming-apple-no...


Apple has a presence in the EU (and elsewhere), which exposes it to the whims of local governments. Without local offices, employees, subsidiaries, or a product to sell, Twitter can ignore all other countries and their laws.


Just last year Twitter blocked over 500 accounts because the Indian government didn’t like the criticism: https://www.nytimes.com/2021/02/10/technology/india-twitter....


It has to do with big tech algorithms running unchecked and being prone to abuse.


This, and the worst part seems to be that there are no humans to appeal to, only automated replies.


The explanation is a load of BS. This "brief disappearance" has been going on for >12 hours now, and as far as anyone is aware, only one person is affected by it.


What a coincidence that such an alleged bug exists for this particular person at this particular time. A virtue-signaling Google employee blocking the photo in support of BLM is the simplest and most likely explanation.


It's not much of a coincidence that the bug manifests at a time of considerable internet activity around Winston Churchill, given that Google's system automatically chooses which photographs to display based on an effectively black-boxed algorithm. It does not seem far-fetched that the activity caused the algorithm to switch to an image that has some kind of error, for example. Your "simplest explanation" is a wild leap.


This is no bug; this is intentional. Google the names of the UK's PMs, from Johnson, May, and Cameron all the way back. Only one PM does not get their photo displayed on the search results page. Not because it failed to load or whatever; the page layout simply does not contain the photo in the place where it appears for all the others.


You don't know that. For all you know, some kind of knowledge graph tag propagated too far. There is zero evidence of it being intentional.

You really think they'd just erase the photos of UK PMs? Please. If there was some kind of conspiracy there, don't you think hiding the information too (vs linking directly to the pages that have it) would be a more effective strategy?

I know it's fun to jump on Google as the dictator erasing history here, but have you really thought this position through? Especially when one of Google's big businesses is education, and kids are going to write papers about WW2 and various British PMs?


Not erase, just add it to the already long list of suppressed and filtered content. Have you tried what I suggested? Have you googled the names of a few PMs? The search results page for Churchill lacks the code to display the photo in the bio box on the right side. The layout and code are different; it's not just a failure to load, a cache problem, or whatever.

The most likely explanation is that a Google employee added the photo to a block list in support of BLM.


A (cache or whatever) problem that leads to the property not being set would produce exactly that result (roughly as sketched below).

It also seems clear that there isn't one global list of these people and their portraits that Google uses to feed these boxes: when I compare the screenshot someone posted below with what I see, many of the leaders have different portraits.
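For what it's worth, a tiny TypeScript sketch of the "unset property" point, purely illustrative and not Google's actual code (KnowledgePanelData and renderPanel are made-up names): if the data record simply has no photo field, the template never emits the image markup at all, which looks exactly like a "different layout" rather than a broken or blocked image.

    // Hypothetical knowledge-panel renderer: a missing photoUrl means the
    // <img> element is never generated, not that an image fails to load.
    interface KnowledgePanelData { name: string; photoUrl?: string; }

    function renderPanel(data: KnowledgePanelData): string {
      const photo = data.photoUrl
        ? `<img class="kp-photo" src="${data.photoUrl}">` // emitted only when set
        : "";                                             // omitted entirely otherwise
      return `<div class="kp">${photo}<h2>${data.name}</h2></div>`;
    }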


It would be an incredible coincidence that it's just him, and right at this time. Also, judging from Google's support thread, he is receiving the same treatment as Weinstein: apparently Churchill was added, next to Weinstein, to a list of people for whom Google won't show a photo in the bio summary.


And somehow that "list of people Google won't show photos for" applies in some European countries and not in others? Because Google is showing pictures for both of them to me.


Interesting. I've never considered Perl's name to be its biggest problem. The readability is, and renaming the language does not address that.


It's to show off to your friends and colleagues; it has no practical purpose given the conditions and requirements for using the feature.


Your life circumstances may not present practical uses... But I can assure you they exist.


In what circumstances will you have 100%, unobstructed line of sight to:

1. Your car.

2. Your car's avenue of travel. [1]

[1] This is the important part. If you can't clearly see the entire path of travel, you could run a child over. Tesla obviously doesn't give a shit about that, because their collision detection doesn't work, and because the feature does not require this kind of line-of-sight to work.

The former is hard, but they want to be first to market, so they don't care that they aren't building it right; and the latter would make this feature near-useless, so naturally they don't ship this sort of fail-safe.


This is a critical part of the overall full self-driving equation. Any vendor working on this problem will be spending inordinate amounts of resources on it, as parking lots are one of the more challenging environments.

In certain locations (especially in winter) this will be a very nice feature. This is especially true for anyone with disabilities or extreme laziness. ;-)


Yes, it's very common to only allow corporate laptops provisioned with a standard image, certificates, etc. If you need to remote in, then you must have a corporate laptop.

From a security standpoint it is risky and amateurish to allow VPN access from an unknown device under someone else's management.
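As a rough illustration of how that certificate requirement can be enforced (a minimal sketch using Node's built-in tls module; the file paths and port are placeholders, and a real corporate VPN gateway does far more than this): the gateway only completes the handshake for devices presenting a client certificate issued by the corporate CA, which unmanaged personal machines won't have.

    // Hypothetical gateway: mutual TLS, trusting only the corporate CA.
    import * as tls from "node:tls";
    import * as fs from "node:fs";

    const server = tls.createServer(
      {
        key: fs.readFileSync("gateway-key.pem"),    // placeholder paths from
        cert: fs.readFileSync("gateway-cert.pem"),  // the corporate PKI
        ca: fs.readFileSync("corporate-ca.pem"),    // only this CA is trusted
        requestCert: true,          // ask the device for a client certificate
        rejectUnauthorized: true,   // drop devices without one from that CA
      },
      (socket) => {
        // Only provisioned, certificate-bearing devices reach this point.
        socket.write("managed device connected\n");
        socket.end();
      },
    );

    server.listen(8443);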


They weren't entirely unmanaged devices, as they had to fulfill additional criteria.


And you are right.

