The fact that some comments are defending Google is insane to me. My only guess is that they are all from USA and have a very "american" view of things.
Google has an ethical and moral responsibility to not allow its platform to be used for defamatory purposes.
While Google is an American company, it operates in many countries around the world and should be held accountable for its actions in each of these countries. The issue is not about one country telling another what to do but rather about a company being held accountable for its actions in a particular jurisdiction. The Canadian court system has the responsibility to enforce Canadian laws and protect the rights of its citizens, including the Montrealer in this case.
If you don't agree with that, don't offer your service in Canada.
You probably don't believe a library or book store should be sued if they sell a book with defamatory content, so why do you think Google should be here?
The connection between Google and any wrongdoing is even more attenuated here, since Google is mostly an indexer, with limited curation.
If the library was informed of the defamatory content, was asked to remove it, complied, then restored it, was asked to remove it again, complied, restored it again, then finally started refusing to remove it, I’d say suing them is reasonable (which is what happened here as far as I can tell - the exact sequence of events is a bit hard to reconstruct).
No, in the US libraries and bookstores are considered "distributors," so they have no obligation to actively review all of their content the way "publishers" do, but when given notice they are still obligated to remove any content which violates the law, including defamatory content.
You aren't going to like this, but it turns out the US doesn't reliably enforce anything in its constitution, and even contradicts it on a constant basis. Effectively, the US is governed by an aggregate of very wealthy corporations who get what they want by bartering with politicians.
You're along the right lines, but it's gnarlier than that. National politicians are more like WWE wrestlers or reality show contestants. They have some freedom of action, but not much, and they must stay within the lines of the script. Tellingly, none of them actually legislate anymore, with the exception of the occasional bit of grandstanding that isn't intended to ever get past being a bill. All the bills that do become laws are written by private lawyers working for known or unknown principals, who then hand them off to lobbyists, who then arrange for a politician to sign off on it.
We receive some clues as to who the real shot callers are, because certain agenda items reliably receive bipartisan support. So for example we can conclude that whoever really runs the USA is fully in favor of involvement in the Ukraine war. There's enough publicly available information to build a relatively complete profile of the rulers, but not, so far as I know, enough information to reliably identify any specific persons. Needless to say, efforts to publicly develop and share that profile are rigorously suppressed.
Doesn't matter if it's unconstitutional in the US. Different countries have different laws. If you can't abide by the laws of other countries, don't operate there.
In theory. But Google (and all other "platforms" like FB and Twitter) crossed the Rubicon of curation when they went along with suppression and curation at the behest of agencies around the world.
This "but they're mostly an indexer" argument has not been valid for a very long time. If you want them to be an indexer (and I believe they should) call your local elected representative and have them push legislation defining the "platforms" a public utility following whatever principles of free speech are relevant in your jurisdiction.
And stop pretending Google doesn't already suppress information
Let's establish here that this is reputation-destroying and possibly life-ending defamation. The man became suicidal, after all.
If I were to write a book with such a claim about you, completely made up, and you reported it to the bookstore, then yes, I very much expect the bookstore to remove the book. Nobody needs to sue anybody; this only happened because Google could not make up its mind.
Which means that even if you believe they have no responsibility, they're still in the wrong. You can't remove it 5 times and keep letting it resurface. Take a stance and stick with it.
'Ask'ing isn't at issue here. The question is whether the library should bear any liability if it declines to act as a fact checker for every book on its shelves.
If you got your preference, you'd mostly see controversial books disappear from the shelves. The business they bring in wouldn't be worth a lawsuit.
The first amendment explicitly protects the book store. It's a pretty clear cut case.
I still don't get why the plaintiff didn't simply go after the publisher or host of the offending website to get the content removed. That's the party who's liable for defamation.
Actually, why not? Would it put an unreasonable burden on them to not distribute libel?
I might let newsstands slide, as reviewing every newspaper every day might be too high an ask. But they certainly already curate the books they sell or lend, so why not expect them to also be aware of their contents?
A library or book store would probably get sued if they prominently featured a book called "avesteele is a horrible criminal" (full of unsubstantiated claims) and refused to remove it despite being asked nicely and eventually ordered to do so by a judge.
I think your characterization is a straw man. Nobody is arguing that Google shouldn't comply with court decisions. The question is what should happen when an individual comes to Google with a complaint, before they have sought any legal verdict. How is Google supposed to decide whether a claim is true or false? Even if it were their responsibility to determine what counts as defamation, how would we prevent people abusing that system to take down legitimate content they don't like? Do we want tech companies making decisions like this independently of the legal system?
My thought is that it's not up to Google or any company to be the judge and jury... if the statements are defamatory then the plaintiff should go to an elected judge and have their accuser brought to justice, and the judge can wield the power of the state to demand search engines and other publishers remove those results. I think you would agree that simply removing the search results in the short term doesn't bring the wrongdoers to justice at all. Perhaps this is an indictment of the slow movement of the justice system for cases like this.
Which is exactly what happened: the Montrealer did indeed ask Google to remove the defamatory search results, but they refused. So he took the matter to court and obtained a ruling from an elected judge and jury.
The court's decision to order Google to pay $500,000 in damages shows that there are consequences and you can't ignore them forever just because you're Google.
I'm also sure that the Montrealer tried to take down the original source directly, without succeeding.
I can’t find any indication this was deemed defamatory in a court of law until this ruling. Did I miss it?
My whole point is that Google has to remove anything the moment somebody asks, because if it is later determined to be defamatory by a court, they’ll be liable. That appears to be what happened here, unless I missed something.
That's a fair point. It's implied to be defamatory but not explicitly cited as legally defamatory. If that's so, then I completely agree with you.
I'm going to try to search around and see if there's more information about this. For example, did the man in question try to sue the website and get it removed? (I realize this isn't always possible.)
If Google allows defamatory content to be displayed on its platform, knowing that it is defamatory, it cannot continue to promote it and be exempt from any consequences.
While I am not suggesting that every reported link should be removed, when a judge makes a ruling, it must be taken seriously and given weight.
I agree with you, but my understanding in briefly reading this case is that the plaintiff (via their 'lifelong friend' Mr. T. U.) interacted with Google; not a judge with a ruling. But perhaps I'm missing some part of it.
The correspondence with Google begins at line [66]
I'm not American and I don't like this ruling. Google shouldn't be responsible for censoring the internet. If there is a site out there then Google should be allowed to link to it.
It’s not about whether Google should police the internet, but whether they should follow the laws of the countries they operate in. If Google failed to follow a DMCA takedown request on YouTube videos, you bet they would get into legal trouble in the US, so why should things be different when a different country is involved?
Google didn’t need to identify the link that needed to be removed; they were ordered to remove it and then restored it.
You may want Google to not be responsible for censoring the internet, and you might not like a Canadian court telling Google what to do on google.ca, but this is what Google has to say:
> For many issues, such as privacy or defamation, our legal obligations may vary country by country, as different jurisdictions have come to different conclusions about how to deal with these complex topics.
> Beyond removing content as required by law, we also have a set of policies that go beyond what’s legally required, mostly focused on highly personal content appearing on the open web.
Google isn’t able to censor the Internet. They could remove all links to anything Microsoft; MS would still be on the Internet.
If you truly believe that delisting someone is “censoring the Internet” then they’re also censoring me when they don’t list my homepage on queries for “very cool dudes.”
Take it a step further. If your DNS is seized, your site is still accessible, as it would still be on the internet. It's pretty similar to deleting a link to your site, in that only users who already have the IP could access it, much the same as users navigating directly to microsoft.com if Google removed all links in search.
Is DNS seizure censorship? I'm not equating the two, but I want to understand your view that deleting or restricting information is not censorship.
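To make that concrete, here's a minimal sketch (Python, standard library only; example.com is just a placeholder, not a real seized domain). Name resolution and the actual connection are separate steps, so a site whose DNS entry is seized, or whose links are delisted, is still reachable by anyone who already knows its address:

    import socket

    host = "example.com"                 # hypothetical domain, used as a placeholder
    ip = socket.gethostbyname(host)      # this is the step a DNS seizure breaks
    print(ip)

    # Connecting straight to the IP bypasses DNS entirely, so the server
    # itself is still "on the internet" even with no working name for it.
    with socket.create_connection((ip, 80), timeout=5) as conn:
        conn.sendall(b"HEAD / HTTP/1.1\r\nHost: " + host.encode() + b"\r\nConnection: close\r\n\r\n")
        print(conn.recv(200))

Whether making a site that much harder to find counts as censorship is the open question; mechanically, the content doesn't go anywhere.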
If it was the gov't directing a registrar to seize a domain name, that's pretty clearly a violation of the first amendment.
If a registrar seized a domain name, that wouldn't be a First Amendment violation per se, but it would probably be a breach of contract and should be illegal one way or another.
And then there's some grey areas with copyright that I think have been hammered out in courts… IANAL
Your argument seems to be that Google should not be held responsible for removing links from their platform, regardless of their content.
However, it is important to consider the potential harm caused by certain links, such as revenge pornography, which can be extremely damaging to individuals. Should Google be allowed to link to such content?
If you are against the ruling in this case, are you also opposed to DMCA takedowns?
- "If you are against the ruling in this case, are you also opposed to DMCA takedowns?"
Yes, and yes; and triply-yes if it's a DMCA theory that alleges that linking to a page containing allegedly infringing material also constitutes infringement.
> it is important to consider the potential harm caused by certain links, such as revenge pornography, which can be extremely damaging to individuals.
Information does not cause damage, people do. Surely the onus must be on the people who use the information to cause damage? For example, those in the article who chose to not do business with the Montrealer because of the invalid information they leveraged. They caused the damage claimed in court. Why is Google responsible for their poor judgement?
> are you also opposed to DMCA takedowns?
If you stand by takedowns, why prefer removal of a link over the source material? Surely once the actual content goes away so too will the link? If the content is still out there, it will still be found, if not by Google, by some other means.
It's important to recognize the role that platforms like Google can play in facilitating that harm.
What if the website containing harmful content is hosted on a Russian server that ignores DMCA takedowns and there is no way to remove it?
What if the majority of traffic to that website comes from Google, should Google not take any responsibility for promoting that content? Or should Google take proactive measures to prevent harm, even if they are not the source of the content?
> Or should Google take proactive measures to prevent harm
Meaning, should Google babysit people with occasional poor judgement? After all, if people always acted rationally with a clear head information would be completely innocuous. But, indeed, there will always be some crazies out there.
In a similar vein, does a hammer manufacturer have the responsibility to babysit the occasional person who will use a hammer to bludgeon another to death? I say no. There is no intent by the manufacturer to see bludgeoning carried out. If the user of a tool uses poor judgement, that's on them.
We don't go after the Ford Motor Company every time someone gets a speeding ticket while driving a Ford, so what is special about Google?
I'm not sure I follow. We are talking about the party who caused the harm. The harm isn't caused when a lie is posted on the internet. The harm isn't caused when the lie is repeated on the internet. The lie itself can cause no harm. It is just information, and information cannot harm.
Irrational people can, and do, reach for a lie and, out of poor judgement, create harm. But the poor judgement is the problem, not the lie, hammer, or car. The latter three do not act. If people were infallible the existence of the lie would mean nothing as it could not possibly lead to harm.
Of course, people are not infallible and harm will be created. Placing the onus on the party causing harm, not on those who made commonly used tools available to the party at fault, is recognized everywhere else. What is special about Google?
This is so patently false I don't think it's worth continuing this discussion.
There are so many very obvious ways that information, especially false information, can cause demonstrable, material harm that I cannot view this argument as anything other than ideological dogma with no basis in reality.
> There are so many very obvious ways that information, especially false information, can cause demonstrable, material harm
Like what? Let's pretend, for argument's sake, that this website containing inflammatory information was never found by another person. What harm would the information cause? The answer is that it wouldn't cause any harm. How could it?
Not even the court case tried to make this claim. It claimed the harm was caused by the poor judgment of people in the man's life.
Why? What's the incentive? It affects me in no way if you missed something, or you don't believe me, or whatever it is that prompted this request. I find enjoyment in writing down my own neural activity, but there is nothing exciting about copying/pasting someone else's.
> I can't see anything that looks remotely like what you say here.
What did you see? What harm do you think was caused? The article I read said that the man was harmed by having people disassociate with him. Not the lie disassociating with him, people disassociating with him. Those people exhibited poor judgment in their willingness to harm another person and, if the courts determine the harm is worthy of legal reprieve, why are the people making those poor decisions not the ones penalized for their actions? Why is their stupidity Google's responsibility?
"I didn't know not to harm this man. A computer I was using said it was okay! It must be the computer's fault." should not be a sufficient argument in a court of law. But here we are.
The lie can cause harm. Your logic is flawed. It's like saying that a captain telling a soldier to kill a child is harmless, because it's only information, and information is harmless. The cause of the cause is a cause.
Like what? I'm going to write a lie on a piece of paper and seal it in a safe which no person can access. Is it going to break free and kill us all? Or what harm should we expect from it?
Back to reality, it won't cause any harm. If someone with poor judgment found a way into the safe, read the lie, and then did something stupid, that could result in harm. But it would be the person doing something stupid that caused the harm.
Even the court case was clear that the harm caused was in people making poor decisions after reading the lie, not the lie itself. Why are the people who caused the harm claimed in the case not held responsible for their poor judgment? What is special about Google that it gets to take responsibility for unrelated people doing something stupid?
Fortunately in this case the harm caused by those people was limited, but if the harm was greater, like someone murdered the guy after reading the lie, would it be reasonable to charge Google with murder and absolve the murderer of responsibility?
Your example with the safe is obviously absurd. No one is suggesting that the mere existence of untrue information, in a vacuum, causes harm. Communicating it to people, presented as true information, is where the harm comes from.
If you tell someone there are no peanuts in their meal, and they have a peanut allergy, they will eat it because of your false assurance and be harmed.
If you tell someone the car dealership down the street always gives amazing deals and gives lifetime warranties for free, but they're selling lemons and fraudulent warranties, your endorsement of their lies can entice more people into getting swindled by them.
If you tell someone the car dealership down the street is selling lemons and fraudulent warranties, when in reality they give good deals and honor their warranties faithfully, you are driving away business from them, which harms them financially.
"I didn't harm the person with the peanut allergy; the peanuts did!" Bullshit. They ate it in this scenario specifically because they trusted your assurance that it was safe.
"I didn't harm the people who got swindled; the car dealership did!" You both harmed them. Your false statements gave them extra legitimacy. Plus, the lies of the car dealership itself caused harm here.
"I didn't harm the car dealership; the people who didn't go there did!" Bullshit. You gratuitously introduced false information into a system where it didn't otherwise exist, defaming the car dealership and causing it to lose business that would otherwise have supported it financially.
I think that should be sufficient, since you're making an absolute, categorical claim, meaning that any nontrivial counterexample refutes it.
> If you tell someone there are no peanuts in their meal, and they have a peanut allergy, they will eat it because of your false assurance and be harmed.
The harm here is in the act of serving peanuts to someone who is known to have an allergy, not the lie. You can say there are no peanuts and then briskly take back the food before consumption, replacing it with a peanut-free alternative. Nobody would be harmed in that scenario, even with the exact same lie told. The lie is not where the harm is found.
> If you tell someone the car dealership down the street always gives amazing deals and gives lifetime warranties for free, but they're selling lemons and fraudulent warranties, your endorsement of their lies can entice more people into getting swindled by them.
Slightly closer, but still misses the mark. You are only harming yourself by acting on the lie.
With respect to what we are actually talking about, there are four parties:
1. Someone who told a lie.
2. Someone who perpetuated a lie.
3. Someone who caused harm after encountering the lie perpetuated.
4. Someone who was harmed by the person causing harm.
If you tell me that the cars at the dealership down the street are free (all you have to do is ask for a test-drive and never come back!), and I tell someone else, and that someone else follows through, harm will ensue from the theft. But why am I, #2 on the list, who did nothing but repeat what I heard, the one going to court on theft charges?
Even if you want to say I am an accessory and should be punished for that, why do I have to take the entire brunt of it? Why do #1 and #3 get off scot free?
We used to say ignorance is no excuse, but it seems you are saying that ignorance is a perfectly valid excuse. We used to believe that one should know not to cause harm to others even when there is a lie trying to justify it. What happened?
The article indicates that the law was only concerned with removal of links in Canada anyway, so if total removal is impossible as you say, it can still be firewalled at the border. China has no problem removing undesirable content from outside of their jurisdiction. What's Canada's problem?
>Google shouldn't be responsible for censoring the internet.
Then maybe they shouldn't have started doing so to begin with? As the joke goes, 'we've settled that, now we're negotiating on price.' This is nothing more than an acknowledgement of what has already been happening.
False, they should be responsible and they already are.
Google does not link to pirated content. If and when they do, they take it down when it's reported. As they are legally required to do.
Google may remove personal information as part of the "right to be forgotten" EU rules. Death threats. Calls to violence. Revenge porn. As they are legally required to do.
Google will remove CP/underage harmful content as they are legally required to do.
Google may take down insults to the king, gambling sites, drug traffic sites, and all kinds of local legislation.
Besides strictly illegal content, Google also censors content that might be technically legal yet considered distasteful or widely recognized as harmful: porn, terrorist propaganda, gore, etc.
Google is not above the law, it can't do whatever it wants.
Should it censor your political speech? No. But that's not what this is about.
Should ISP's be required to screen all their clients?
Then what about datacenters?
Then what about utility providers to the datacenters?
Then what about roads or food service to people who work at datacenters that host websites that promote false content?
I don't think it's a US-centric philosophy that the evildoer should be punished, not everyone who breathes the same air.
(And for anyone who didn't read the full text of the court's ruling, the URL was changed, the article was re-written, and eventually the person's name was misspelled to keep trying to evade Google's sanctions. How does anyone fight that level of misdirection and evil?)
If an ISP was hosting child porn and someone asked them to stop and they didn't then they would be held responsible. It's not the fact that someone's using it for bad things, it's the not putting it right.
Hacker News skews very American, so that statement can be made about most topics. My suspicion is that most of the comments against Google are also from Americans.
> The fact that some comments are defending Google is insane to me. My only guess is that they are all from USA and have a very "american" view of things.
This is Hacker News; the readership has a lot of temporarily embarrassed monopolists raised on business models that assume that the regulations that apply to others, don't apply to them.
> My only guess is that they are all from USA and have a very "american" view of things.
I would guess it's narrower than that, and what you're seeing is mostly from SV. Even then, not everyone in SV is so willing to cut Google slack. But they can be the loudest on HN at times.
If we are just talking about the search engine, does Google offer services in Canada via local caching, or do Canadians have to retrieve Google's services from the US? It is probably a relevant question, at least legally. Ethically, I think there is no "fair" way to do page rankings that would both yield the results people want and avoid pages with false accusations.
Also, legally, if Google were to start filtering the information it provides in the US based on some criteria other than "we think this is most likely what you were searching for, and look, these people paid us to have you look at these first," then they would run afoul of US laws that would remove their common carrier status. At least that is my understanding.
I mean democracies tell each other what to do in one form or another all the time. The US is not at all shy about pushing other countries around. Sometimes that's good, sometimes that's bad.
So much international law is about synchronizing laws to allow for dealing with these kind of things within a framework where we agree on the common principles.
Sad to say for the US ranters, but defamation is a common principle and exists just as much within US law (albeit with varying implementations).
I don't understand how it's Google's responsibility to fact-check and police the links it merely indexes (to me, this is akin to suing a distributor), which is completely protected First Amendment speech.
Why didn't the plaintiff simply go after the publisher (the website) to get the false information removed (as well as sue for defamation)?
My theory is it’s because tech is breaking rules more and more often, and we all just love tech, so it’s easier to just victim-blame rather than address the actual issues.
I love tech. But you're implying that what tech means to me is basically just a large company. And in fact I don't love large tech companies. The Internet was much more enjoyable before a sales person got a hold of it. And while I do understand that a lot of really great engineering has come from within these companies it doesn't matter if they are abusing their users. Because Google loves it when their brand is associated with something positive. But Google should also be responsible for owning all of the negative their company and employees build.
Finally, I'm an American. I have a very dim view on the positive that Google, Facebook/Meta, Microsoft contribute to society. They're all for profit companies no longer being run with a strong engineering or customer focus. Their goal is to turn more profit. That's not the spirit of tech in my mind. I don't consider any of these companies strong technology contributors anymore. They employ a number of great engineers that still do amazing things. But the companies (the executives, the senior management, the board of directors, etc) are not in it to move the industry in a positive direction. They're in it to make more money. The cost of that misdirection is very high to all. These companies have impacted many lives in very negative ways even though many have never intentionally wanted to use, or be part of, their services. These companies need far more accountability. Hiding behind a company logo should not be something society continues to support.
Everyone secretly thinks they're a tech (or general) genius and will (eventually) be a tech tycoon. An extension of the "temporarily embarrassed millionaire thing".
So imagine this happening to these commenters. Somebody publishes that you're a pedo. Out of the blue. The relationship with your sons breaks down, nobody wants anything to do with you in community or business, and everything you worked for falls apart. For no reason.
Would said commenter now truly say: ah well, go Google! This trillion-dollar company should absolutely be able to spread this, versus the 5 seconds it takes to take it down.
> Google has an ethical and moral responsibility to not allow its platform to be used for defamatory purposes.
Fine. But do not assume it should be a legal responsibility.
Rephrase the statement to "it has the responsibility to forbid defamatory content" and now you will start wondering if it should have prohibited Fox News content on Dominion Voting or CNN content on Trump's hush money scandal.
In the article it says Google stopped removing the links because of a Supreme Court of Canada case on an "unrelated matter". It's not unrelated. It's Crookes v Newton, and it said hyperlinks aren't defamatory.
I'll save you some time: Defamation in Canada requires: "(1) that the impugned words were defamatory, (2) that the words referred to the plaintiff and (3) that they were published." And the court said linking is not publishing.
So Google stopped removing the links at his request.
> He also asked Google to remove links to the website, *as well as a short extract from the site*, on the search engine’s results page.
Emphasis mine. I'd argue laws and rulings can be contradictory in the first place, that's one reason there's judges. But here it's simpler than that: it's not just about linking, and the defamatory part was probably visible without having to follow the link.
Just speaking strictly on a moral level, there seems to be a qualitative difference between Google saying "Joe is a pedophile" versus "That guy Bob over there is saying that Joe is a pedophile". The first statement is either true or false. The second statement is inarguably factually true and casts no moral judgement on Joe.
You are missing the part where Google ranks search results. If you Google "Joe" and the first result is "Bob says Joe is a pedophile", Google is implicitly saying that is the single most important result about Joe.
Thus, I reject this argument unless Google is willing to argue its search result ranking is no better than random chance and that results on the first page are equally as relevant and useful as those on the last.
Isn't that kinda reductive? We absolutely do expect Google to be somewhat legitimate and correct, in a wide range of queries. In terms most generous to your meaning, we expect them, when googling stuff about someone, not to rank a random tweet alongside a NYT article, right?
> We absolutely do expect Google to be somewhat legitimate and correct, in a wide range of queries.
I don't have any such expectation. Why do you have such an expectation?
Google is not magic. It is impossible to divine objective reality from counting links.
> We expect them, when googling stuff about someone, not to rank a random tweet alongside a NYT article, right?
If a random everyday Joe had a New York Times article written about them, I'd expect that to rank at, or very near, the top.
However, most people not only don't have an NYT article written about them, most have almost nothing about them online other than some standard social media or random data-broker content. So it seems likely that some unique content about a random Joe should probably rank highly because there's no real competition. That doesn't imply it's true.
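To make the "counting links" point concrete, here's a toy sketch of a PageRank-style iteration (the page names and link graph are invented for illustration, and this is not Google's actual algorithm). The score each page gets comes entirely from who links to it, not from whether what it says is true:

    # Toy PageRank-style iteration over a made-up link graph. Rank comes
    # purely from link structure; nothing here checks whether a page is true.
    def pagerank(links, damping=0.85, iters=50):
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iters):
            new = {p: (1 - damping) / len(pages) for p in pages}
            for p, outs in links.items():
                targets = outs if outs else pages  # dangling pages spread evenly
                for q in targets:
                    new[q] += damping * rank[p] / len(targets)
            rank = new
        return rank

    links = {
        "forum_a": ["smear_post"],
        "forum_b": ["smear_post"],
        "blog": ["smear_post", "official_bio"],
        "smear_post": [],
        "official_bio": [],
    }
    print(sorted(pagerank(links).items(), key=lambda kv: -kv[1]))

In this made-up graph the smear post outranks the official bio simply because more pages link to it: rank is a popularity signal, not a truth signal.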
I guess you're right, I'm being too loose. I should say, I expect Google to at least give some consideration to the quality of the sources it considers when ranking results and giving me an answer. For queries about pop culture or news, or something it can access Wolfram for, who am I trusting more there? Google? Or the sources it pulls from? I agree it's not magic, but to wave off any responsibility for search results under the guise of "don't trust a Google search" seems reckless.
Say that somebody talks some trash about a random person online.
This isn't the New York Times where there may be some link authority, or may Allah forgive me for even uttering these haram words, Buzzfeed News.
Where should that random trash-talking rank? Doesn't it seem more relevant than a lot of the random links about random people from random data brokers online? Should random unsourced trash talk about some random person rank closer to the first page or closer to the last page? To me, it seems notable and should be close to the top.
And I think one improvement (that will never happen) is that Google should be telling people some polite version of "Literally everything you read on the Internet should be considered bullshit until verified".
Results on the last page may very well be as useful as results on the first, if not more so, especially considering that for a lot of search queries there are countless businesses and nefarious individuals gaming the SERPs with SEO schemes.
IMO, how Google presents data to users is largely worthless in 2023 unless you are utilizing special operators like quotation marks and minuses. Even then, it's really how the user is choosing to filter through that data.
This might be true before the plaintiff contacted Google, but once contacted by the plaintiff the situation might be more like:
I know that "Bob says Joe is a pedophile" and "Joe says that he is not a pedophile". It is factually true for you to continue to say that "Bob says that Joe is a pedophile", but if that's ALL that your saying, then morally you're lying by omission.
It is of course an interesting question of how to boil this situation down to the reality of how a search engine works and displays results, but the general point stands that your presented situation isn't really representative of the ruling.
Oh, also in this specific case the claim wasn't that Joe is a pedophile, it's that Joe was a convicted pedophile, which, if you want to poke at the morality of it, would push Google into an even more precarious position, since that's a harder fact. If Joe brings proof that he was never a convicted pedophile, then you repeating that "Bob says that Joe is a convicted pedophile" with no qualifiers is barely distinguishable from saying "Joe is a pedophile" from a moral standpoint.
Keeping the metaphor, it's responding to the question of "Do you know Joe?" with "Yes, and that guy Bob over there says Joe is a pedophile" whenever asked.
The problem is that people trust the internet too much, removing false information would make that worse. People will think ‘if that wasn’t true then they’d be forced to remove it, since it wasn’t removed it must be true’. Maybe the solution is more fake news not less of it.
Too much trust is much easier to fix than too little trust. Like how chaos engineering seeks to improve resiliency by introducing damaging noise. There is also the idea of 'anti-fragile' coined by Nassim Taleb.
I know that sounds logical ... but it doesn't work in practice and that's why the law doesn't work like that. We don't want a situation in which newspapers and other powerful organisations can avoid being sued for libel simply by prefixing every article with "$RANDOM_PERSON says". If libel law were just going to be a booby trap for ordinary people who don't have legal advisors like newspapers do then it would be better not to have any libel law.
But that’s not really the way information works. If you say that and don’t immediately follow up with saying “but Bob is an untrustworthy source of information and is often full of shit, don’t believe it”, then a person looking for information on whether or not Joe is a pedophile will take that as a ringing endorsement of Bob’s statement.
That's not how defamation works. You're saying that if the statement is out there, surely people will do a deep investigation into the correctness of it?
OP was saying that even the sentence "bob says that jim is x" is defamation because readers will take it as "a ringing endorsement of bob's statement", which is patently untrue unless you are a little kid.
It is not patently untrue. It wholly depends on the reader’s opinion of Bob. Also most people’s intelligence is not that much greater than that of a little kid. They are only more knowledgeable, but not necessarily more intelligent.
This is a rational take, and worth considering. FWIW, I agree with it.
I'm distressed further by other threads that begin to point this out being derailed with comments like "America should keep to its own borders" and unrelated factoids about oil wars. This is not just an American precedent.
On paper, yes. In reality however? People care a lot about what is written online.
"The man, who is now in his early 70s, told the court that he believes potential clients have backed out of deals because they saw the post, adding that his career, which had previously been marked by success, began to spiral.
Two friends testified they refused to use their influence to help him find jobs because they worried the post would make those efforts fruitless.
His personal relationships also suffered, including those with his two sons, the ruling said.
One son testified that his girlfriend’s parents declined to meet his father because of the defamatory internet posts. The son said that after he experienced high-profile success, people would tell him they searched his name on Google and asked him about the post involving his father."
Using the word "care" here is a particularly interesting word choice to me. The real problem is not that wrong statements exist online; it's that people care about or believe them. So how do we get people to care less?
I look at your post, and see a goal-oriented rather than a system-oriented approach. You see an injustice happening, and want to put a law in place to stamp it out. That's all well and good, but what about looking at the problem on a systemic level? What if the very acts you want to put in place to help people (strong anti-defamation protection) makes the problem you want to work against even worse?
My theory that I've been running with for years is that the existence of strong anti-defamation laws psychologically makes people think something like this whenever they see something crazy online: "This wild claim probably has to be true because it's online or the person who posted it would have their pants sued off otherwise." The stronger the anti-defamation laws that exist, the worse this psychological problem of believing nonsense online gets. IMO, the mere existence of strong anti-defamation protection makes masses of people shut their brain off and believe almost anything dumb online.
Personally speaking, I strongly believe we should consider the opposite, systemic approach. Let's conduct a thought-experiment. How would truth and defamation work in a society with non-existent or very weak anti-defamation laws? Would wild claims be made online daily? Of course. But who would believe any wild nonsense they saw online without verifying it if anti-defamation laws were very weak, or didn't exist?
If your dad told you someone is a pedo, and you have many more friends than your dad, and you yell out "My dad says so-and-so is a pedo," I still think some of the onus is on you to shut your mouth if you're told to do so.
I don't understand how the website in question can avoid a direct defamation suit. While I don't really sympathize with Google in this case because they're somewhat kingmakers as they craft their search algorithms, and therefore not impartial, surely he can find someone to target in an actual defamation case.
Either the author of the article or the owner of the website. If it's not possible to determine the owner of the website then the registrar needs to be raked over the coals. We all have to keep our details up to date every year, presumably for exactly this kind of reason (and yes, I know you can lie, but at that point you lose the domain).
> [56] As for the possibility of suing the author of the Defamatory Post for defamation, the Plaintiff was advised by a lawyer in Town B that he was time-barred because, under [State A] law, the action must be brought within one year of its appearance, regardless of when the victim of the defamation sees the publication.[2]
> [57] The content of [State A] law in this respect is uncontested by the parties.
As for the owner of the website, it's a bit more unclear. It does say that the plaintiff corresponded with the website operator.
> [62] The email correspondence shows that Mr. Magedson asked Mr. T. U. to provide documentation from a police authority showing that the Plaintiff was never the subject of the kind of charges alleged in the Defamatory Post, a Kafkaesque reverse-burden demand to prove one’s innocence.
> [63] If that information were to be provided, Mr. Magedson stated that he would be willing to insert a statement that RipOffReport investigated and concluded that the post is not true. He said that the Defamatory Post would not be removed but certain words would be redacted.
> [64] Mr. Magedson said he never takes down a report posted on his site. In his last email to Mr. T. U., he signed off ominously with the following sentence, “We will all be blogged – good or bad, right or wrong – WE WILL ALL BE BLOGGED” (reproduced as is).
> [65] Mr. T. U. testified that he abandoned the correspondence with Mr. Magedson since he lost hope of obtaining satisfactory relief.[4]
I think it's because the one-year limit from the time of appearance also applies.
Given that this has been going on since 2007, this doesn't seem like the kind of frivolous lawsuit just going after Google's money. In fact, it looks like the plaintiff made good-faith attempts (and partially successful ones) with Google on a number of occasions.
The man sought $6 million. It was reduced to $500,000 for "moral" damage by the court. Punitive damages were rejected because they believe Google was also acting in good faith.
I didn't make any claims about the plaintiff's state of mind. For the record I think the plaintiff acted reasonably and in good faith in seeking a remedy against Google based on what I know about the case.
The decision (in English) is available online [0]. Essentially the Court determined that provisions of Quebec's Civil Code relating to defamation were applicable to the case and not US Federal or State law.
The article doesn't do a great job of explaining the legal issues or what the current status is. Can anyone here do a better job? Can any Canadian request links be taken down?
The article doesn't make it clear what happened to the (apparently) two publications at the source of the defamatory statements, only that Google no longer includes those publications in its results in Quebec (specifically). Did the plaintiff find who was making the false statements and have them brought to justice?
It is trivial (at least from outside Canada) to find via Google the actual post using some other keywords (instead of the anonymized name of the plaintiff).
The post, though clearly defamatory, is as vague and generic as such posts can be.
Without entering the debate on the actions or lack of actions by google and/or the actual site publisher, what I find almost incredible is that it had such a heavy impact on the plaintiff (see the court paper points [407] onwards) and his reputation.
To only list a few, being removed from advisory board of an institute, being not recommended by old time friends for some jobs, the parents of his son's fiancee refusing to meet him, having promising business agreements canceled, influencing the reputation of his sons, the list is long.
All for a single anonymous post? (presumably from a disgruntled ex-worker)
Are people so stupid that they believe the contents of this (single) data point and, as a consequence, take the actions that have been described?
EDIT: corrected "removed from presidency of an institute" to "removed from advisory board of an institute"
Would you have appointed someone to lead an institute if you knew that there would be a PR fiasco the moment his name was announced? No, you would have told them, "I can't do this unless you sort out this other thing". Which the man has been trying to do for 15 years.
Sorry, I probably mixed up the description, seemingly he was removed from the advisory board of the institute, where he already was:
>He was removed from the advisory board of the prestigious Roosevelt Institute, which manages the Franklin D. Roosevelt presidential library.
And no, if anyone could be removed from the advisory board of a prestigious institute through a single (defamatory) post, all advisory board places would be vacant.
What would stop a pair of people from colluding such that one defames the other with good SEO, they extract money from Google, and split the difference?
> What would stop a pair of people from colluding such that one defames the other with good SEO, they extract money from Google, and split the difference?
First, suing Google is expensive, and you'd have to prove actual damages (loss of business, reputation, divorce) before a court of law. Cases such as these aren't a slam dunk. Why would anyone want to risk their reputation over that kind of unhinged and risky scheme? A man's reputation is everything.
As an American who enjoys America’s freedom of speech, I find it distressing that the largest companies, who play a real role in the distribution of that speech, are so very large and global that they are beholden to the law of other jurisdictions, who then afflict US residents with their extraterritorial censorship.
We need many viable not-Googles, instead of just one behemoth.
Falsely accusing somebody of a criminal act is defamation per se in the US.
I'm unsure if a US court would agree that search results would constitute defamation as well, but it certainly seems possible, especially if the search query was entirely neutral (i.e. just looking up this victim's name).
“Restoring a link to content Google had indexed which called someone a pedophile” is a lot different than the content itself and was just fined $500k. That’s a lot of liability for a link. That means any time someone asks for content to be taken down there’s a lot of risk in saying No.
The censors will win. The reputation management firms hired by billionaires will win. They will silence criticism, even true criticism. Even, especially, when they are in fact pedophiles, they will win a fight with Google, demand the Google AI take down all content that suggests they are anything but saints.
This world is coming. It may be here within a decade or two. HN used to be a place where people thought this was a bad thing.
While freedom of speech is protected in the United States, it is not an absolute right, and defamation laws are an example of a legal restriction on speech that is permissible under the First Amendment. Defamation laws are common amongst most democracies that enjoy some level of free speech.
This is an action that applies to Google, not the defamer, using Quebec law, where American law would not find liability. This fine included liability for posts that were visible in America, invisible in Quebec. The fact libel law exists in America generally is true but minimally informative and not really germane.
The Quebec court disagrees. Google is defaming the plaintiff, and doing so in a global manner, both in ranking that information as important and in refusing to remove it when requested.
I'm really glad that American companies can't just operate with impunity in other countries. Google and other US multinationals SHOULD be bound by the superset of laws for all of the countries they operate in. If they can't accept those laws, for example in the case of authoritarian states, then they shouldn't be operating in those markets at all.
And indeed as I have said: A reason this is a big problem is that we have like 1.5 competitors out there (for search: Google, Bing) and they’re all trying to sell ads into all the jurisdictions ever. As a result all of us who have nothing else to use find we are practically limited by the set-intersection of all the extraterritorially enforced liberties in the world. Our set of liberties is getting smaller.
I will be upset at that even if today it’s a marginally better reason than most.
Google has offices and employees in Canada, has .ca domains, sells goods to customers, etc. You don't think they should be subject to local laws? Is the reverse true? Should a Chinese company operating in the US be subject only to Chinese laws?
You mean for the deaths of over 7,000 Iraqi civilians by US armed forces because of the invasion he started on false pretenses? Or are you thinking more about the roughly 300,000 Iraqis who died in that conflict? Or maybe you're thinking about 100+ Iraqis who died while in US detention centers, several of which resulted in actual homicide charges? Or perhaps you are deeply concerned about the distinction between murder and torture authorized by GWB?
You missed the point. Were you over-caffeinated or something?
I was using an example of saying someone is a "murderer" as free speech. Many people have called GWB a murderer and they are allowed to - because of freedom of speech.
Why would calling someone a pedophile vs a murderer not be covered by the same freedom? Why am I engaging with you? Don't answer that, this is pointless.
“JK Rowling critic forced to publicly apologize for calling her a Nazi after lawsuit threat. Self-professed 'drag queen' and 'jazz hands enthusiast' apologized to JK Rowling 'for causing potential upset'”
Can’t wait for the rich to shop around for the best defamation venue and take down all the Google links with a $500k-ish threat each.
I was not asking whether calling someone a pedophile is freedom of speech under the current legislation of the US, but rather about the philosophical idea of freedom of speech.
As an American who used to enjoy the freedom of speech 20 years ago and whose freedom of speech has been greatly eroded by (among others) the big tech I strongly support this verdict.
If Google provides a neutral search it should not be responsible for what the found search results say. If those are illegal, courts should go after the author.
But should Google search suppress and promote specific viewpoints beyond clearly neutral pagerank-style search, their top links become promotions. And Google can be held liable for what those show. My 2c, IANAL.
Freedom applies to both parties. Your freedom to swing your fist ends where another's nose begins. Similarly, your freedom to baselessly disparage and slander has similar restrictions, because the other person has the freedom against such attacks.
Libel is very restricted in the United States as well. Indeed, the whole "section 230" argument that keeps getting revived deals specifically with this topic (paradoxically by the "freedom" party), so clearly your "freedom" isn't quite as absolute as you imagine it is.
Further, Google was ordered to restrict the content in Quebec, and the fine they are paying is a tiny fraction of the money they make in Canada, which is a country where they have offices and significant business. I see zero reason why you decided to make this American centric.
No, this is called looking beyond the single issue in front of us, to see the broader trend which confronts us.
Some day it may be fines for carrying links to content that says mean things about a noted UK author with controversial opinions, under the notoriously plaintiff-favorable UK libel law, often used to bully critics. Some day it will be fines for US-visible links to content critical of the Chinese Communist Party, and other threats to its business.
It was not ordered to pay because the links exists.
It was ordered to pay because they restored links it had previously removed.
"A Quebec Superior Court judge has ordered Google to pay $500,000 to a Montreal man who sued the company after it restored a link to an online post falsely accusing him of being a pedophile."
"Google removed a link to the post from the search results that appeared on its Canadian website. Google would remove links twice more at the man’s request — later that year and in 2011 — after the post resurfaced in its search results. oogle removed a link to the post from the search results that appeared on its Canadian website. Google would remove links twice more at the man’s request — later that year and in 2011 — after the post resurfaced in its search results."
Google restored links to the defamatory content but the content was unable to be removed from where it was hosted. Do you try to go after the person who posted the content? What if you cannot? It's easy to end up in a place where victims have no recourse because Section 230 protects the platforms and the original poster cannot be tracked down to answer for their libel.
I wonder how many of them would say the same thing if, instead of Google being the aggregator of third-party information, it was Equifax, Experian, or TransUnion, the false third-party information being published by the aggregator was that the person routinely failed to make bill and loan payments, and the consequence was that their credit score was so low they have to use cash for everything and prepay all their utilities.
Would it be acceptable for the credit reporting agencies to tell them it is not the agency's problem, and that if they want to fix it they need to get the third parties that reported the false information to the agencies to fix it?
TLDR; The Delhi High Court on Thursday restrained various YouTube channels from disseminating, publishing or sharing videos or any fake content relating to the child of a Bollywood celebrity couple. Google LLC was one of the parties that was issued a legal notice to comply with removing the misinformation.
Can you explain why? If his life has been negatively impacted by something false, and that impact has come at the expense of job opportunities, general quality of life, why should he not be reimbursed for damages?