Hacker News

It'll be interesting to see what they can cook up at home. Chat Control was pushed in large part by Denmark, and Minister of Justice Peter Hummelgaard is on record saying some pretty disturbing things regarding the right to privacy online.[1] Now for this, they don't need the entire EU to go along, and any laws already on the books might prove ineffective to protect against means that end up achieving similar goals to Chat Control.

Denmark's constitution does have a privacy paragraph, but it explicitly mentions telephone and telegraph, as well as letters.[2] Turns out online messaging doesn't count. It'd be a funny one to get to whatever court, because hopefully someone there will have a brain and use it, but it wouldn't be the first time someone didn't.

[1] https://boingboing.net/2025/09/15/danish-justice-minister-we...

[2] https://www.grundloven.dk/



Whether the internet is covered by § 72 seems undetermined; as far as I can tell, the Supreme Court hasn't ruled on it. But considering that it deemed fake SMS train tickets to be document fraud, even though the law text never explicitly mentions text messages, it seems clear that internet communication ought to be covered if challenged.

Regardless, this wouldn't run afoul of that. This is similar to restricting who can buy alcohol based purely on age; the identification process is just digital. MitID - the Danish digital identification infrastructure - allows a service to request specific details about a user, such as their age, or just a boolean value indicating whether they are old enough. Essentially: the service can ask "is this user 18 or older?" and the ID service can respond yes or no, without providing any other PII.

That's the theory at least; nothing about snooping private communication, but rather forcing the "bouncer" to actually check IDs.
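The yes/no flow described above can be sketched roughly as follows. This is a minimal illustration of the idea only: MitID's actual broker API works differently, and every class, method, and user name here is hypothetical.

```python
# Hypothetical sketch of a privacy-preserving age check.
# The real MitID integration differs; all names here are illustrative only.

from dataclasses import dataclass
from datetime import date


@dataclass
class IdentityRecord:
    """What the ID provider knows. This never leaves the provider."""
    name: str
    birthdate: date


class IdProvider:
    def __init__(self) -> None:
        self._records: dict[str, IdentityRecord] = {}

    def register(self, user_id: str, record: IdentityRecord) -> None:
        self._records[user_id] = record

    def is_at_least(self, user_id: str, years: int) -> bool:
        """Answer 'is this user N or older?' with a bare boolean.

        No name, no birthdate, no other PII is returned to the
        asking service - only yes or no."""
        b = self._records[user_id].birthdate
        today = date.today()
        age = today.year - b.year - ((today.month, today.day) < (b.month, b.day))
        return age >= years


# A service (the "bouncer") asking the question:
provider = IdProvider()
provider.register("user-1", IdentityRecord("Jens", date(2000, 1, 1)))
provider.register("user-2", IdentityRecord("Freja", date(2015, 6, 1)))

print(provider.is_at_least("user-1", 18))  # True
print(provider.is_at_least("user-2", 18))  # False
```

The design point is that the service only ever sees the boolean; the attributes it is derived from stay with the identity provider.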


>considering that it considered fake SMS train tickets to be document fraud, even though the law text never explicitly mentions text messages

That has nothing to do with the medium of the ticket and is all about knowingly presenting a fake ticket. The ticket is a document proving your payment for travel. They could be lumps of dirt and it would still be document fraud to present a fake hand of dirt.


Except the Supreme Court deemed the case to be of fundamental importance and granted relief (i.e. no costs to either party), since it was disputed whether a fake SMS train ticket counted as document fraud.


> Regardless, this wouldn't run afoul of that. This is similar to restricting who can buy alcohol based purely on age; the identification process is just digital. MitID - the Danish digital identification infrastructure - allows a service to request specific details about a user, such as their age, or just a boolean value indicating whether they are old enough. Essentially: the service can ask "is this user 18 or older?" and the ID service can respond yes or no, without providing any other PII.

> That's the theory at least; nothing about snooping private communication, but rather forcing the "bouncer" to actually check IDs.

Hopefully the theory will reflect the real world. The 'return bool' to 'isUser15+()' is probably the best we can hope for, and should prevent the obvious problems, but there can always be more shady dealings on the backend (as if there aren't enough of those already).


Given the track record of digitalization in Denmark, you can rest assured this will be implemented in the worst possible way.

This is Denmark: the country that read the EU legislation requesting the construction of a CA to avoid centralizing the system, then legally bent the EU rules and decided it was far better to create a centralized solution. I.e., the intent was a public-key cryptosystem with three parties, the state being the CA. But no, they decided to hold both the CA and the key in escrow. Oh, and then decided that the secret should be a PIN, such that law enforcement can break it in 10 milliseconds.

I think internet age verification is at least 10 years overdue. Better late than never. I just lament the fact that we are going to get a bad solution to the problem.


> Denmark's constitution does have a privacy paragraph, but it explicitly mentions telephone and telegraph

That's very much not how Danish law works. The specific paragraph says "hvor ingen lov hjemler en særegen undtagelse, alene ske efter en retskendelse", translated as "where no law grants a special exemption, only happen after a court order". That is, you can open people's private mail and enter their private residence, but you have to ask a judge first.


People continue to believe that the "Grundlov" works like the US constitution, and it's really nothing like that. If anything, it's more of a transfer of legislative power from the king to parliament. Most of its provisions just leave the details to be determined by parliament.

The censorship ban really is one of the few provisions that is pretty unambiguous; it's really just "No, never again". Not that this stops politicians, but that's a separate debate.


And yet they wanted to push a proposal where the government would have free access to all digital communication, no judge required. So if it happens through a telephone conversation, you need a judge, while with a digital message you wouldn't, since the government would have already collected that information through Chat Control.


I don't know where you get your information, but that was not in the chat control proposal I read.


Patrick Breyer has some good thoughts on this.[1]

The relevant points I believe to be:

> All citizens are placed under suspicion, without cause, of possibly having committed a crime. Text and photo filters monitor all messages, without exception. No judge is required to order such monitoring – contrary to the analog world, which guarantees the privacy of correspondence and the confidentiality of written communications.

And:

> The confidentiality of private electronic correspondence is being sacrificed. Users of messenger, chat and e-mail services risk having their private messages read and analyzed. Sensitive photos and text content could be forwarded to unknown entities worldwide and can fall into the wrong hands.

[1] https://www.patrick-breyer.de/en/posts/chat-control/


> All citizens are placed under suspicion

> No judge is required to order such monitoring

That sounds quite extreme, I just can't square that with what I can actually read in the proposal.

> the power to request the competent judicial authority of the Member State that designated it or another independent administrative authority of that Member State

It explicitly states otherwise. A judge (or other independent authority) has to be involved. It just sounds like baseless fear mongering (or worse, libertarianism) to me.


Didn't the proposal involve automated scanning of all instant messages? How isn't that the equivalent of having an automated system opening every letter and listening to every phone call looking for crimes?


Not from what I can tell. From what I can read, it only establishes a new authority, under the supervision and at the discretion of the Member State, that can, with judicial approval, mandate "the least intrusive in terms of the impact on the users' rights to private and family life" detection activities on platforms where "there is evidence [...] it is likely, [...] that the service is used, to an appreciable extent for the dissemination of known child sexual abuse material".

That all sounds extremely boring and political, but the essence is that it empowers a local authority to mandate scanning of messages on platforms that are likely to contain child pornography. That's not a blanket scan of all messages everywhere.


> platforms that are likely to contain child pornography

So every platform, everywhere? Facebook and Twitter/X still have problems keeping up with this, Matrix constantly has to block rooms from the public directory, Mastodon mods have plenty of horror stories. Any platform with UGC will face this issue, but it’s not a good reason to compromise E2EE or mandate intrusive scanning of private messages.

I would not be so opposed to mandated scans of public posts on large platforms, as image floods are still a somewhat common form of harassment (though not as common as it once was).


The proposal is about deploying automated scanning of every message and every image across all messaging providers and email clients. That is indisputable.

It therefore breaks E2EE, as it intercepts the messages on your device and sends them off to whatever third party they are planning to use, before those are encrypted and sent to the recipient.
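The "scan on the device, before encryption" mechanism being described is, in the proposals critics cite, hash-matching of outgoing content against a list of known material on the client itself. A minimal sketch of where that check would sit in the send pipeline, under loud assumptions: real deployments would use perceptual hashes (PhotoDNA-style), not SHA-256, and every function and name here is hypothetical.

```python
# Hypothetical sketch of client-side scanning inserted before E2E encryption.
# Real systems would use perceptual hashes, not SHA-256; this stand-in only
# illustrates *where* the check sits, not how real matching works.

import hashlib

# Hash list that would be distributed to every client by the scanning authority.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"known-illegal-payload").hexdigest(),
}


def scan_then_encrypt(plaintext: bytes, encrypt):
    """Run the mandated check on the *plaintext*, then encrypt.

    Returns the ciphertext, or None (where a real client would instead
    file a report) on a match. The critics' point: the check necessarily
    sees the plaintext, so end-to-end encryption no longer shields the
    content from the scanner."""
    digest = hashlib.sha256(plaintext).hexdigest()
    if digest in KNOWN_BAD_HASHES:
        return None  # message withheld and reported instead of sent
    return encrypt(plaintext)


# Toy "encryption" for the sketch: a fixed XOR, not real cryptography.
toy_encrypt = lambda pt: bytes(b ^ 0x42 for b in pt)

print(scan_then_encrypt(b"hello", toy_encrypt) is not None)      # True
print(scan_then_encrypt(b"known-illegal-payload", toy_encrypt))  # None
```

Note that the encryption itself is untouched; the interception happens one step earlier, which is why this is described as a bypass of E2EE rather than a break of the cipher.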

> It explicitly states otherwise. A judge (or other independent authority) has to be involved. It just sounds like baseless fear mongering (or worse, libertarianism) to me.

How can a judge be involved when we are talking about scanning hundreds of millions if not billions of messages each day? That does not make any sense.

I suggest you re-read the Chat Control proposal, because I believe you are mistaken if you think that a judge is involved in this process.


> That is indisputable.

I dispute that. The proposal explicitly states it has to be true that "it is likely, despite any mitigation measures that the provider may have taken or will take, that the service is used, to an appreciable extent for the dissemination of known child sexual abuse material;"

> How can a judge be involved

Because the proposal does not itself require any scanning. It requires Member states to construct an authority that can then mandate the scanning, in collaboration with a judge.

I suggest YOU read the proposal, at least once.


You must be trolling.

> it is likely, despite any mitigation measures that the provider may have taken or will take, that the service is used, to an appreciable extent for the dissemination of known child sexual abuse material

That is an extremely vague definition that basically encompasses all services available today, including messaging providers, email providers and so on. Anything can be used to send pictures these days. So anything can be targeted; ergo it is a complete breach of privacy.

> Because the proposal does not itself require any scanning. It requires Member states to construct an authority that can then mandate the scanning, in collaboration with a judge.

Your assertion makes no sense. The only way to know if a message contains something inappropriate is to scan it before it is encrypted. Therefore all messages have to be scanned to know whether something inappropriate is in them.

A judge, if necessary, would only be participating in this whole charade at the end of the process, not when the scanning happens.

This is taken verbatim from the proposal that you can find here: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A20...

> [...] By introducing an obligation for providers to detect, report, block and remove child sexual abuse material from their services [...]

It is an obligation to scan, not a choice based on someone's opinion, like a judge's; ergo no one is involved in the scanning process at all. There is no due process, and everyone is under surveillance.

> [...] The EU Centre should work closely with Europol. It will receive the reports from providers, check them to avoid reporting obvious false positives and forward them to Europol as well as to national law enforcement authorities.

Again, no judge involved here. The scanning is automated and happens for everyone. Reports will be forwarded automatically.

> [...] only take steps to identify any user in case potential online child sexual abuse is detected

To identify a user who may or may not have shared something inappropriate means that they know who the sender is, who the recipient was, what the message contained and when it happened. Therefore it is a complete bypass of E2EE.

This is the exact same thing that we are seeing now with the age requirements for social media. If you want to ban kids who are 16 and under, then you need to scan everyone's ID in order to know how old everyone is, so that you can stop them from using the service.

With scanning, it is exactly the same. If you want to prevent the dissemination of CSAM material on a platform, then you have to know what is in each and every message so that you can detect it and report it as described in my quotes above.

Therefore it means that everyone's messages will be scanned, either by the services themselves or by a third-party business in charge of scanning, cataloging and reporting their findings to the authorities. Either way, the scanning will happen.

I am not sure how you can argue that this is not the case. Hundreds of security researchers have spent the better part of the last 3 years warning against such a proposal; are you so sure of yourself that you think they are all wrong?


> This is taken verbatim from the proposal that you can find here

You're taking quotes from the preamble which are not legislation. If you scroll down a little you'll find the actual text of the proposal which reads:

> The Coordinating Authority of establishment shall have the power to request the competent judicial authority of the Member State that designated it or another independent administrative authority of that Member State to issue a detection order

You see, a judge, required for a detection order to be issued. That's how the judge will be involved BEFORE detection. The authority cannot demand detection without the judge approving it.

I really dislike your way of arguing. I thought it was important to correct your misconceptions, but I do not believe you are arguing in good faith.


Let me address your points here and to make it more explicit, let me use Meta/Facebook Messenger as an example.

> You see, a judge, required for a detection order to be issued. That's how the judge will be involved BEFORE detection. The authority cannot demand detection without the judge approving it.

Your interpretation of the judge's role is incorrect. The issue is not whether a judge is involved, but what that judge is authorizing.

You are describing a targeted warrant. This proposal creates a general mandate.

Here is the reality of the detection orders outlined by this proposal:

1. A judicial authority, based on a risk assessment, does not issue a warrant for a specific user, John Doe, who may be committing a crime.

2. Instead, it issues a detection order to Meta mandating that the service Messenger must be scanned for illegal content.

3. This order legally forces Meta to scan the data from all users on Messenger to find CSAM. It is a blanket mandate, not a targeted one.

This forces Facebook to implement a system to scan every single piece of data that goes through it, even if that means scanning messages before they are encrypted. Meta now has a mandate to scan everyone, all the time, forever.

Your flawed understanding is based on a traditional wiretap.

Traditional Warrant (Your View): Cops suspect Tony Soprano. They get a judge's approval for a single, time-limited wiretap on Tony's specific phone line in his house based on probable cause.

Detection Order: Cops suspect Tony “might” use his house for criminal activity. They get a judge to designate the entire house a "high-risk location." The judge then issues an order compelling the homebuilder to install 24/7 microphones in every room to record and scan all conversations from everyone (Tony, his family, his guests, his kids and so on) indefinitely.

That is the difference that I think you are not grasping here.

With E2EE, Meta cannot know if CSAM is being exchanged in a message unless it can see the plaintext.

To comply with this proposal, Meta will be forced to build a system that bypasses their own encryption. There is no other way.

This view is shared by security experts, privacy organizations, and legal experts.

You can read this opinion letter from a former ECJ judge who completely disagrees with your view here:

https://www.patrick-breyer.de/wp-content/uploads/2023/11/Vaj...

I am sorry if you think that I am arguing in bad faith. I am not.

While there is nothing I can do to make you like my arguing style, just know that I am simply trying to make you understand your misconceptions about this law.


The ombudsman will say some strong words and everything will continue as is.


> Denmark's constitution does have a privacy paragraph, but it explicitly mentions telephone and telegraph, as well as letters

And this is why laws should always include their justification.

The intent was clearly to protect people - to make sure the balance of power does not fall too much in the government's favor that it can silence dissent before it gets organized enough to remove the government (whether legally or illegally does not matter), even if that meant some crimes go unpunished.

These rules were created because most current democratic governments were created by people overthrowing previous dictatorships (whether a dictator calls himself king, president or general secretary does not matter) and they knew very well that even the government they create might need to be overthrown in the future.

Now the governments are intentionally sidestepping these rules because:

- Every organization's primary goal is its own continued existence.

- Every organization's secondary goal is the protection of its members.

- Any officially stated goals are tertiary.



