
Somehow I feel that if all the time that has been invested in debating and discussing this had been spent on patching the affected apps, the problem would be properly solved.

I mean, yeah, I get it, systemd bad, democracy good, but these world-writable lock folders are actually a huge pain, and adding some shim code to upgrade to a more secure solution seems achievable?


Genuinely curious - why would a world-writable directory be bad for security? Assuming, of course, it's on a separate filesystem mounted with sensible options. Here's what I see from "grep /run/lock /proc/mounts" in sid:

  rw,nosuid,nodev,noexec,size=5120k

The classic attack: say you know a root process will write a file called foo.lock in /run/lock, and you (a bad person) have write access to that directory. Then you make foo.lock a symlink to some file (/bin/init or /bin/sh or ld.so, for example, would be very inconvenient choices), and when the root process writes its lock it destroys that file.

Now obviously people these days generally know about that so hopefully don’t use predictable file names but that’s one way.


> and when the root process writes its lock it destroys that file.

Unless you do open("/run/lock/foo.lock", O_WRONLY|O_CREAT|O_EXCL|O_NOFOLLOW)


Yep. And for good measure, first open with O_CREAT as a tempfile with a random name, then rename() it to the predictable "foo.lock".
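
A minimal sketch of that pattern, assuming a Linux-ish environment, a writable /run/lock, and purely illustrative file names (error handling kept to the basics):

  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  int main(void) {
      char tmpl[] = "/run/lock/mylock.XXXXXX";
      /* mkstemp() creates with O_CREAT|O_EXCL internally, so a planted
         symlink can't redirect the create. */
      int fd = mkstemp(tmpl);
      if (fd < 0) { perror("mkstemp"); return 1; }

      char buf[32];
      int n = snprintf(buf, sizeof buf, "%ld\n", (long)getpid());
      (void)write(fd, buf, (size_t)n);

      /* rename() replaces whatever sits at the predictable name (even a
         planted symlink) with our own inode, atomically. */
      if (rename(tmpl, "/run/lock/foo.lock") != 0) {
          perror("rename");
          unlink(tmpl);
          return 1;
      }
      close(fd);
      return 0;
  }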

Yup to both of you. But all of this is to say: running shell scripts as root (in particular) needs to be done with extreme care, because if people forget those precautions when writing C, they sure as heck don’t trouble themselves to do it when they’re writing shell.

I remember the time (around 2001-2002) when just about every binary was discovered to have some variant of this exact exploit. I happened to be the Linux sysadmin for a very large, high-profile set of Linux boxes at the time. Happy times.


> Now obviously people these days generally know about that so hopefully don’t use predictable file names but that’s one way.

Annoying side effect: now you gotta guess which process created the darn lockfile.

A more sensible approach is to do sanity checking on the lockfile and its contents (i.e. does the contained PID match one's own binary).
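
For illustration, a rough sketch of that check on Linux (the lockfile layout and the helper name are just assumptions, and it ignores the race between checking and acting on the result):

  #include <limits.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  /* Returns 1 if the PID stored in the lockfile belongs to a process
     running the same binary as us, 0 otherwise. */
  static int lock_held_by_same_binary(const char *lockfile) {
      long pid;
      FILE *f = fopen(lockfile, "r");
      if (!f) return 0;
      if (fscanf(f, "%ld", &pid) != 1) { fclose(f); return 0; }
      fclose(f);

      char link[64], theirs[PATH_MAX], ours[PATH_MAX];
      snprintf(link, sizeof link, "/proc/%ld/exe", pid);
      ssize_t a = readlink(link, theirs, sizeof theirs - 1);
      ssize_t b = readlink("/proc/self/exe", ours, sizeof ours - 1);
      if (a < 0 || b < 0) return 0;  /* stale PID, or not readable */
      theirs[a] = '\0';
      ours[b] = '\0';
      return strcmp(theirs, ours) == 0;
  }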


> now you gotta guess which process created the darn lockfile

Or you can use “lsof” to just tell you.


The argument is also that you could effectively DoS the system by exhausting space or inodes.

Hmm - I see there's now "lockdev" for managing access to things like serial lines, but what's the preferred method of expressing "only one instance of this program should run at any one time"?

I don't know what the preferred method is. But so far, flocking on my own executable works for me.
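
A minimal sketch of that trick on Linux (using /proc/self/exe is my guess at what's meant; any stable path to the binary would do):

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/file.h>
  #include <unistd.h>

  int main(void) {
      /* Every instance opens the same inode: its own executable. */
      int fd = open("/proc/self/exe", O_RDONLY);
      if (fd < 0) { perror("open"); return 1; }

      /* Non-blocking exclusive lock: fails if another instance holds it. */
      if (flock(fd, LOCK_EX | LOCK_NB) != 0) {
          fprintf(stderr, "another instance is already running\n");
          return 1;
      }

      /* ... do the real work; the lock is released when the process exits. */
      pause();
      return 0;
  }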

Yes, simpler times and such. And I get the feeling someone is about to discover RFC 864, which is even more fun (as in: a DDoS amplification vector of note, but this stuff actually was useful for a while...)

https://www.rfc-editor.org/rfc/rfc864.html

> UDP Based Character Generator Service

> When a datagram is received, an answering datagram is sent containing a random number (between 0 and 512) of characters (the data in the received datagram is ignored).

> The service only send one datagram in response to each received datagram, so there is no concern about the service sending data faster than the user can process it.

Oof...

Yeah, apparently the idea that the "user" might not be the real sender wasn't yet well-known.

Simpler times indeed.


Oh, come on, having your brand-new AirPods Pro 3 listed in the Bluetooth summary of your also-pretty-recent iPhone as ACCESSORY_MODEL_NAME is a small price to pay for the 3 months of free Apple Music that take up so much more space in the UI anyway...

I mean, some people are just impossible to please!


So, yeah, only tangentially related, but if anyone at Anthropic sees fit to let Claude loose on their DNS, maybe they can create an MX record for 'email.claude.com'?

That would mean that their, undoubtedly extremely interesting, emails actually get met with more than a "450 4.1.8 Unable to find valid MX record for sender domain" rejection.

I'm sure this is just an oversight caused by obsolete carbon lifeforms still being in charge of parts of their infrastructure, but still...


A not-really-related fact: I remember reading in some RFC (RFC 5321, I believe) that the sender should try sending to the server specified in the A record if there are no MX records present.

This sounds like it's an inbound check, as part of spam prevention, to see if the sending domain looks legitimate. There are a whole lot of common checks like that which aren't covered by any RFC.

You are correct; that is the expected order of operations.

I'm not sure whether item #2 in the linked advisory ("identify if the networked management interface is accessible directly from the public internet") means that compromise is only likely in that situation or not, but... lots of remote workers are going to have some time for offline reflection in the next week, it seems, regardless.

Yes, CWE-444, "Inconsistent interpretation of HTTP requests."

Well, I guess a whole-three-genuine-US-dollars is actually pretty expensive for an ADC, and that the person-in-charge-of-your-BOM in one of the countries that can actually still manufacture things can get one for way less than that.

Does it work? Well, does your design power up during factory testing, and then pass whatever things your rig (hope you made a few!) has in mind? Well, then, yes, in fact it does...


These numbers add up fast when you have dozens or hundreds of components on your board. A $3 part is often one of the most expensive items on a board! If you're trying to get something shipped to consumers for $20 each, with an enclosure, packaging, shipping, retail markup, and profit...that's a huge price disparity.

Also, and perhaps more importantly, the test rig is a lot simpler and a lot cheaper if you can generally trust manufacturer data. Sure, send off a few samples (likely prototypes with parts from Digikey instead of LCSC) to run extended testing in an environmental chamber with thermal imaging, build an endurance test rig that pushes the button once a second for four weeks to simulate once-daily use for years, whatever you want to do...but after that, if TI says it's good from -40 to +125, you're going to trust them on a lot of the edge cases.

Do 100% testing of the things you can test in-circuit if you can - power it up at room temperature and make sure it works once - but that doesn't mean you get the actual rated performance across all published environmental conditions.


Yeah, sure, I get it, Spotify==Big Tech==Bad, self-hosting is nirvana, et cetera.

But, one simple question: how are the Creators (especially those not signed with a Big Bad Label) expecting to be paid in this marvelous post-Spotify era? Because, fact: like 80% of revenue (if not more, and the rest is pretty much evenly divided between YouTube, the remains of iTunes, and some niche portals like Beatport) flows through Spotify these days.

And, for all Spotify's flaws, that revenue stream might be something to have a pretty good plan to replace, and I don't see any hints at that in the linked article?


Distribution and discovery existed in the before times; it’s just that they didn’t take such an obscene slice of the pie. Bandcamp and iTunes at least give you the option to purchase music outright. The artist gets a more substantial cut and that music is yours to keep.

To your point, though, streaming allows people to listen to a greater variety of music for little cost, and I’ve discovered music through other people’s playlists that has been really enjoyable. I think most people want to have a larger library without paying more, and that’s a significant part of the problem.


Are non-big-label musicians even making any money on Spotify, given the notoriously low per-stream rate that Spotify pays out?

Even if 80%[1] of all money is going through large platforms like Spotify and YouTube, the real question is what share of indie money is going through them.

The best bet for a semi-professional or indie artist today is to do live performances, sell merch, have fans on Patreon, go viral on TikTok, and so on; nobody is living on Spotify money.

Platforms are used more to grow audiences and improve discoverability than to make any real money as an indie artist.

---

[1] Big platforms combined may very well be 80%; however, I doubt Spotify alone is 80% of even the English-language market, let alone the global one, where it is often pretty much only YouTube or some regional player bundling services.

[2] iTunes may not be significant, but Apple Music and Amazon Music are. They have enormous distribution due to install base and Prime, and they sell a ton of bundled deals with telecoms and other packages.

Then there is TikTok, which is huge for music too.

There are other players in streaming, like satellite radio (SiriusXM) or traditional FM/AM radio, which also pay for streaming music.

The organized music market is pretty vast, Spotify hardly controls 80% of anything.


Yes, they're absolutely making money there and probably more than they did in the era of CDs.


You get any Rechtsanwalt (lawyer) to send them a certified letter asking them to explain themselves and outlining the monetary damages on your side.


OK, really hot take here:

-ChatControl, as it is currently defined, is not going to happen, because it's absolutely stupid and would make, amongst other things, online banking impossible

-Yet, there is a growing and legitimate demand for lawful interception of 'chat' services. I mean, "sure, your bank account got emptied, but we can't look into that because it happened via Signal" just isn't a good look

-So, something has got to give. Either 'chat' services need to become 'providers of telecoms services' and therefore implement lawful interception laws, or the malware industry will continue to flourish, or something even more stupid will happen

Pick your poison.


> -Yet, there is a growing and legitimate demand for lawful interception of 'chat' services. I mean, "sure, your bank account got emptied, but we can't look into that because it happened via Signal" just isn't a good look

Why on earth would mass intercept be necessary or even help in that?

If you got scammed by someone, then you can contact the police and hand over your message history. Why would the cops be interested in someone else's message history for this?


> Why on earth would mass intercept be necessary

Lawful interception is not "mass intercept."

It's the ability to surveil traffic from/to a clearly identified party, upon a judicial order for specific reason, for a limited time.

ChatControl, on the other hand, is mass interception. I'm against it. Most people in the EU are against it. But to prevent things like ChatControl coming up over and over again, a basic tool to combat Internet crime is required.


The problem we have is that that was OK when someone had to actually listen in, or you had to have a tape recorder connected up to every line you wanted to tap, or physically open individual letters.

Now we have found “lawful intercept” can easily just become mass surveillance, and not just by the people who are meant to use it but by other parties too. We saw this with CALEA, which was used by China (and who knows who else) for espionage for years before anyone realised.

You make a system for the “good guys” and it always turns out that adversaries, criminal groups, etc. will gain access, even if the “good guys” don’t start acting like bad guys themselves.

Technology made mass surveillance easy, so every lawful intercept becomes mass surveillance as well as vulnerable to scammers, criminals and foreign intelligence.

And we don’t have any way of making lawful intercept possible without that unfortunately.


From what I know this basic tool already exists. In the US, the government can just ask any old company for their data and they have to give it up, just like they would their mail or their physical locations. I'm assuming the rest of the West has similar tools, warrants of some kind.

The problem is nobody uses them to combat crime on the internet. They use them for stupid shit usually or stuff that involves lots and lots of money.

We're jumping the gun here. We already have a fire bomb, and we're not using it, but we're going ahead and developing the nuke. Makes no sense.


We're talking about end-to-end encrypted data here. It doesn't matter if LE has the company's data, because it's just a scrambled mess. They don't have the keys to decrypt it; those only exist on the users' devices.

Chat Control seeks to run scanning on each and every device, so it has access to the data before encryption and after decryption.


Sigh. We already have a mechanism to get the data off the devices.

If the servers don't have it, what do you do? You go to the end points, you issue a warrant, and there's your unencrypted data.

What if they don't wanna do that? I don't know, that's out of scope.

People refuse warrants all the time. You know what we DON'T do? Say, "fuck it" and no longer require warrants.

Again, let's look at good old mail. I can encrypt mail. I can write in ciphers.

Okay, now FedEx gets a warrant. They give me the mail. I can't read it. Uh oh. What do I do? I go to the sender and recipient, and I issue warrants. Problem solved.

That's how we do things, that's how we've always done things, and that's the only reasonable way to do things. We don't say "hey post office, open up every letter and read it, and if it sounds suspicious, tell us". We don't do that.

Okay, so everyone understands that and there's no confusion. When we go online, suddenly there's confusion. Is it confusion, or is the confusion a veil for authoritarianism?


> So, something has got to give.

Something does have to give: the constant demands for interception capabilities on end-to-end encrypted protocols. Those demands must be thoroughly destroyed every time they rear their head again.


It's interesting that this initiative seems to be mostly driven by influential actors in the "online safety" space that want their flawed scanning tech embedded into every device. Thorn is the most public-facing one, but if you dig into advocacy groups you'll find there are a dozen or so more, and they competed to be the technical solution for the UK Online Safety Act too. But if it involves CSAM it's an even more perfect monopoly - only a very select group of people can train these models, because the training data is literally illegal to possess.

If you needed any indication for how these pseudo-charities (usually it's a charity front and a commercial "technology partner") are not interested in the public good: SafeToNet, a company that up until last year was trying to sell a CSAM livestream detection system to tech companies to "help become compliant" ("SafeToWatch"), now sells a locked-down Android phone to overprotective parents that puts an overlay on screen whenever naked skin can be seen (of any kind). It's based on a phone that retails for 150 pounds - but costs almost 500 with this app preinstalled into your system partition. That's exceptionally steep for a company that up until last year was all about moral imperatives to build this tech.


I haven't seen anything that suggests chat control would do anything to e2e. I am genuinely curious. It seems to be an often parroted point but... ?

It's just local image hashing and matching? Or is this only one implementation idea?


Chat Control is in some ways a response to E2E, by saying "let's backdoor the endpoints, then".


I am having a legitimately hard time wrapping my head around not being able to prosecute bank fraud because Signal exists. Was it impossible before, when criminals would talk in person instead of over a recorded telephone?


There is a famous case of the US Mafia meeting in rooms, or out on the street, to discuss their "business activities" face to face to prevent authorities from surveilling their phone calls.

The reason we know is because authorities were able to place listening devices into the rooms that they were in, or surveil them from other buildings.


This is analogous to getting a warrant for someone’s phone. (Chat Control is like putting a microphone into every room in case the government wants to listen after the fact.)

I’m still unconvinced that this makes law enforcement’s job so hard that something has to give.


No? But lawful intercept laws were never about "criminals [talking] in person".

There's a different set of laws for that...


And we all know those laws are never abused and are absolutely only used to target criminals.


No, there is definitely abuse of lawful interception.

But, in a jurisdiction with a functioning rule of law, these abuses can be spotted and remedied.

Doing the same for mass surveillance (such as ChatControl) or state-sponsored malware is much harder.

I'm advocating against ChatControl and malware, and proposing existing lawful interception frameworks as an alternative. But, apparently it's not my day :)


ChatControl is just lawful interception under a different name, but worse.


Why would the malware industry benefit from digital message privacy?

If you're the victim, just hand over the relevant chats yourself. Otherwise, just follow the money. And if the attackers are sitting in a country whose banks you can't get to cooperate, intercepting chat messages from within that country won't do you any good either.

Also, if someone has malicious intent and is part of a criminal network, the people within that network would hardly feel burdened by all digital messages on all popular apps being listened in on by the government. These people will just use their own private applications. Making one is like 30 minutes of work, or starts at $50 on Fiverr.


“Follow the money”. Yes, let’s decide that no bank is to have anything to do with crypto from next year, and not do business with other banks that accept crypto. That would help stop fraud much more effectively than Chat Control.


For the vast majority of cryptocurrencies, tracing the transactions is trivial. And even currencies like XMR are hardly as anonymous as people think.

The challenging regulations around technically anonymous crypto currencies require you to actively make trackable arrangements with your financial service providers. VERY few people will ever do this, and therefore if anything suspicious were to occur, all you've achieved is putting yourself on the suspect list preemptively.


> Why would the malware industry benefit from digital message privacy?

Because if lawful interception of in-transit messages is not possible or permitted, hacking either the client or the server becomes the only option.

You may enjoy reading https://therecord.media/encrochat-police-arrest-6500-suspect.... Or just downvoting me. Or both.


Sure, if you want to read the messages, but the whole point is that that's rarely necessary and the price isn't worth the minimal gain.

Of the serious criminals, the only ones you'll be catching are those with low technical knowledge (everyone else will just be using their own applications) and the Venn diagram of those with little tech knowledge and those whose digital privacy practices could deceive law enforcement resembles AA cups against a pane of glass.

Regarding Encrochat, it is no surprise that an (unintentional?) watering hole gathered up a bunch of the tech-illiterate; the fallacy is that those people wouldn't have been caught if they weren't allowed to flock to a single platform for some time.

Would some people have not been caught until much later, or even not at all? Sure, but if LE would do its job (and not ignore, or even cover up, well-known problem areas and organizations for years to decades), only those of low priority.

Is that little gain worth creating a tool that allows Iran or similar countries to check every family's messages if they suspect some family member might be gay?

Hard nope.

> Or just downvoting me.

Don't worry, I rarely do that and that's not just because I can't...


How do you propose it's implemented though?

The two sides in this debate seem to be talking at cross purposes, which is why it goes round and round.

A: "We need to do this, however it's done, it was possible before so it must be possible now"

B: "You can't do this because of the implementation details (i.e. you can't break encryption without breaking it for everyone)"

ad infinitum.

Regardless of my own views on this, it seems to me that A needs to make a concrete proposal.


Lawful intercept laws exist, and they've been sort-of functional for ages.

Apps like Signal don't entirely fall within the scope of these, which is the cause of the current manic attempts to grab more powers.

My point is that these powers grabs should be resisted, and that new services should be brought into the fold of existing laws.

The prevailing opinion here seems to be that, instead, state hacking should be endorsed. Which, well...


The prevailing opinion here seems to be that we’d really like for there to not be an omnipresent panopticon, whether the excuse is protecting the children, or terrorists, or, apparently, malware. If your imagination is particularly lacking on how this might be weaponized, just remember that antifa is now designated as a terrorist organization in the US, so you better not be a suspected member of it — as in, you best not have sent a buddy a message on Signal about how those tiki-torch-carrying nazi LARPers aren’t exactly great guys, or off to a black site you go for supporting terrorism.

If you want to prosecute people, send physical goons, who are of limited quantity, rather than rely on the limitless, cheaper-and-better-by-the-day pervasive surveillance of everybody and everything.


> an omnipresent panopticon

OK, sorry to keep repeating myself here, but... I strongly oppose any kind of "panopticon" like ChatControl.

What I would like to see, is, say, Signal complying with lawful interception orders in the same way that any EU telecoms provider currently does.

So, provide cleartext contents of communications to/from a clearly identified party, for a limited time, by judicial order, for a clearly specified reason.

> pervasive surveillance of everybody and everything

This is exactly what lawful intercept laws are supposed to prevent. And yeah, of course, abuse, but under a functioning rule of law there are at least ways to remedy that, unlike with mass surveillance and/or malware...


> I strongly oppose any kind of "panopticon" like ChatControl. What I would like to see, is, say, Signal [...] provide cleartext contents of communications to/from a clearly identified party

Those statements simply aren't compatible.

Right now, Signal is designed by cryptography experts to provide the best privacy we know how to build: messages are only readable by you or the intended recipient. "Lawful intercept" necessarily means some additional third party is given the ability to read messages.

It doesn't matter what kind of legal framework you have around that, because you can't just build a cryptosystem where the key is "a warrant issued under due process." There has to be a system, somewhere, that has access to plaintext messages and can give law enforcement and courts access. The judges, officers, technicians, suppliers, and software involved in building and using this system are all potential vectors by which this access can be compromised or misused -- whether via software or hardware attacks, social engineering, or abuse of power.

Maybe your country has "functioning rule of law", and every single government official and all the vendors they hire are pure as snow, but what about all the rest of us living in imperfect countries? What about when a less-than-totally-law-abiding regime comes into power?

You're proposing that we secure our private conversations with TSA luggage locks.


> You're proposing that we secure our private conversations with TSA luggage locks

No -- that's an incredibly reductive summary, and the attitude you display here is, if left unchecked, exactly what will allow something equally ridiculous, like ChatControl, to pass eventually.

There has been plenty of previous debate when innovations like postal mail, telegraph traffic and phone calls were introduced. This debate has resulted in laws, jurisprudence, and corresponding operating procedures for law enforcement.

You may believe there are no legitimate reasons to intercept private communications, but the actual laws of the country you live in right now say otherwise, I guarantee you. You may not like that, and/or not believe in the rule of law anymore anyway, but I can't help you with that.

What I can hopefully convince you of, is that there needs to be some way to bring modern technology in line with existing laws, while avoiding "9/11"-style breakdowns of civil rights.


We can draw an analogy between any two things. An encrypted chat is not a letter in the mail or a call on the telephone. It is an entirely new thing. Backdooring such chats is not "bringing technology in line with existing laws"; it is, very clearly, passing new laws, and creating new invasions of privacy. It must be justified anew.

The justification for wiretapping was not that there was no way to fight crime without it. Otherwise, when the criminals became wise to it, and began to hold their conversations offline, there would have been a new law, requiring that all rooms be fitted with microphones that the police could tap into as necessary. No such law was passed. Instead, the justification for wiretapping was simply that, once police had identified some transmission as relating to the committing of a crime, they could obtain a warrant, and then tap into the communication. The physical capacity without any effort by uninvolved individuals was the entire justification. That capacity does not exist with encrypted chats. The analogy is therefore much closer to the "mandated microphones" described above. Everyone is being required to take action to reduce their own privacy, regardless of whether they are subject to a warrant.

What is most striking about our "mandated microphone" analogy is the utter futility of it. Criminals have no issue breaking the law, and hence have no issue outfitting a room with no microphones in which to carry out their dealings. The same is true of any law targeting encrypted chats.


For a real-world example of the problem you're describing, China's Salt Typhoon attacks compromised lawful intercept infrastructure in the USA to engage in espionage. A mandatory backdoor in Signal would be at risk from similar attacks.

https://en.wikipedia.org/wiki/Salt_Typhoon


I would rather online banking be impossible, or only available to those that take training and sign waivers, than have all my communications surveiled.


OK, you be you. But please note that I did not list "online banking becoming impossible" as a likely outcome. Merely malware continuing to be state-sponsored, or certain communications to be surveilled. Not all of yours, unless you draw an especially vindictive judge (and yes, I'm assuming a functioning rule of law here -- if that's gone, what's left?)


> OK, you be you

I don't know what you mean by this.

> But please note that I did not list "online banking becoming impossible" as a likely outcome.

No, but it should be a likely and maybe even desired outcome, especially if a justification for surveillance is the prevention of online banking fraud among other crimes.

> Merely malware continuing to be state-sponsored, or certain communications to be surveilled.

Norms and mores change over time, so the only conclusion is that "certain communications" will become "all communications" at some point in the future. I'd love to be proven wrong.


> Norms and mores change over time

Yeah, but laws tend to be more constant, and lawful interception laws are, 100% guaranteed, a thing, right now, in the country where you live.

They apply to telegrams, postal mail, telephone conversations, and a whole bunch of other things nobody really does anymore. They don't really apply to the things people do tend to do these days.

ChatControl is an incompetent attempt to remediate the lapses in law enforcement that this has caused. I strongly oppose it. But I also strongly oppose the idea that the Internet should be off limits for any kind of law enforcement, unless it is through dubious mechanisms like state-sponsored malware.

Your "slippery slope" argument is much more compelling in the absense of extended lawful interception than in the situation where Signal messages would somehow be equated to postcards or SMS messages...


And yet lawful intercept laws cannot force you to decrypt the OTP-encrypted physical letter you sent to your friend. (Except in authoritarian shitholes like the UK.) Same principles would seem to apply here.


A hot take: removing protections guaranteed by constitution should require modification of the constitution. There is already a "temporary" European regulation [1] that is in violation of the German constitution [2]. CSAR would be a further erosion of the legal foundation. Americans were happy when their federal laws that restrict marijuana use were simply ignored by executive fiat without proper processes; well, they aren't so happy now to see that other laws can be freely ignored too.

If people speak up and say "take away our rights" at a referendum, let that be their decision, not a political backroom deal.

[1] https://eur-lex.europa.eu/eli/reg/2021/1232/oj

[2] Article 10 at https://www.gesetze-im-internet.de/englisch_gg/englisch_gg.h...


> A hot take: removing protections guaranteed by constitution

Lawful intercept laws exist in most, if not all, EU countries.

It's just that supra-national overlay services like Signal don't entirely fall within the framework of those.

So, there is now a choice: expand interception powers indefinitely (a.k.a. ChatControl, which, to make things crystal-clear, I'm 100% against), or bring new services into the fold of existing legislation.


No existing legislation requires proactive interception of mail, physical or electronic. Bringing new services into the fold of existing legislation would mean forbidding any proactive scanning by civilians and forbidding such scanning by authorities without a warrant or court order.


> proactive interception of mail, physical or electronic

Lawful interception is not proactive: it requires a judicial order to collect plaintext communications from/to specifically identified individuals (resident in the country demanding the interception), for a limited time and for a specific purpose.

ChatControl, which I strongly argue AGAINST, would sort of be what you describe. But: I. Am. Arguing. AGAINST. That.


A piece of open source software running on Alice's computer exchanges keys with a piece of open source software running on Bob's computer. Later Alice and Bob exchange messages encrypted with those keys through Charlie's server.

Eve, a police officer, has evidence that Alice and Bob are messaging each other about crimes and obtains a warrant to require Charlie to intercept their communication. Charlie has no ability to do so because it is encrypted with keys known only by Alice and Bob.

If you want a different result, someone has to proactively change part of this process. Which part should change?

One option is to mandate that any encrypted messaging software also give a key to the government or the government's designee, but someone using open source software can modify it so that it doesn't do that, which would be hard or impossible to detect without a forensic search of their device.

Another option is to mandate that a service provider like Charlie's only deliver messages after verifying that it can decrypt them. This, too, is hard to enforce because users can layer additional encryption on top of the existing protocol. Signal's predecessor TextSecure did that over SMS.

Both of those options introduce a serious security vulnerability if the mechanism for accessing the mandatory escrowed keys were ever compromised. Would you like to suggest another mechanism?


About the only thing I can think of is to mandate the use of (flawed) AI to identify messages that seem nonsensical and refuse to pass them, and then to play a game of Chinese-style DPI whack-a-mole in an attempt to suppress open alternatives.

If you have the ability to run custom software—even if it’s a bash script—you can develop secure alternatives. And even if you somehow restrict open source messaging, I can just use good old pen-and-paper OTP to encrypt the plaintext before typing it in, or copy/paste some other text pre-encrypted in another program. But even then, all this will do is kick off a steganographic arms race. AI-generated text where the first letter of each word is the ciphertext may be nearly impossible to identify, especially at scale.
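
A toy sketch of the decoding side of that acrostic idea (purely illustrative; the cover text here is made up):

  #include <ctype.h>
  #include <stdio.h>

  int main(void) {
      /* Cover text whose word-initial letters spell the hidden message. */
      const char *cover = "Having every lovely lily openly "
                          "wilting on rocky ledges daily";
      int at_word_start = 1;
      for (const char *p = cover; *p; p++) {
          if (isspace((unsigned char)*p)) {
              at_word_start = 1;
          } else if (at_word_start) {
              putchar(tolower((unsigned char)*p));
              at_word_start = 0;
          }
      }
      putchar('\n');  /* prints "helloworld" */
      return 0;
  }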

If anything like this were to pass, my first task would be making a gamified, user-friendly frontend for this kind of thing.


> modification of the constitution

Don’t give them any ideas!


Nothing has to give. Police did their work fine for centuries, they can continue doing it without mass surveillance.


But that's not a fair statement. Police did their work for centuries, but it was nowhere near "fine" by modern standards, and today there are a hundred more ways to commit crime.



I’ll take the malware thanks


While this is a link to the malware site x.com, it is shown in a protective, trustworthy hull called xcancel.com.


Without confidential and private spaces, how in the world can relational trust be cultivated?

And how in the world can we have safety if relational trust is suffocated before it can even take root?

Please use your imagination! Those aren't the only options if we embrace trust as essential rather than looking at any need for it as a liability.


Why do you think they want relational trust? Unless you mean trusting that, if you go against the man, the man will come for you. Maybe it would be better to s/trust/fear/.


:_(


Malware has existed nearly since the dawn of computing. Making the world even less secure under the guise of combating whatever today's latest bogeyman happens to be is not gonna solve that. And having secure private communications is not gonna make it worse.

That anyone thinks this blatantly obvious attack on free speech is actually going to be used only for law enforcement is wild to me.


I'm sorry, but I know my country's history; there is no good in "lawful interception".


" "sure, your bank account got emptied, but we can't look into that because it happened via Signal" just isn't a good look"

Do you want the police to regularly intercept and check your Signal chats for fraud and crime, so this does not happen? Or what is the point here?


> You want the police to regularly intercept and check your Signal chats for fraud

No, that's not how lawful intercept laws work.

I want police to be able to obtain a judicial order to intercept, for a limited time, in cleartext, the (Signal chats, or whatever other encrypted communications) of identified parties reasonably suspected to be involved with criminal activity.

ChatControl is not that, and it's one of the reasons it's a nonstarter.


"I want police to be able to obtain a judicial order to intercept, for a limited time, in cleartext, the (Signal chats, or whatever other encrypted communications) of identified parties reasonably suspected to be involved with criminal activity."

They already have that in most (?) jurisdictions by now.

With a warrant, they can install a virus on the device that will then do targeted surveillance.

ChatControl is bad, because it is blanket surveillance of everyone without warrant.


> With a warrant, they can install a virus on the device that will then do targeted surveillance

Yeah, and that sponsors an entire malware industry!

I don't really know how I can make my position any clearer, but...

-Malware: bad!

-ChatControl (encryption backdoors): bad!

-Inability to do any kind of law enforcement involving "the Internet": double-plus bad!

-Enforcement of existing lawful interception laws in the face of new technology: maybe look at that?


"I don't really know how I can make my position any clearer, but..."

You could state in plain words what do you propose as an alternative.

I read what you wrote, but have no idea what you propose.


> I [...] have no idea what you propose

It's literally the last item in my list?

But to further clarify: I would like existing lawful interception laws to be extended to services like Signal.

Not in the sense that any EU country should be able to break Signal crypto (as ChatControl proposes, and which I think is an utterly ill-advised idea), but that competent law enforcement agencies should be able to demand unencrypted Signal communications from/to an identified EU party, for a limited time and purpose, upon a (reviewable) judicial order.

Most, if not all, EU countries currently have similar laws applying to telegrams, snail mail, email, telephony and whatnot. If you don't like those either, that's fine, but that's the status quo, and I would like to see that extended to services like Signal, as opposed to incompetently dumb measures like ChatControl...


OK, so you want to break encryption by legal demand, because that is what this means. Or how exactly would it work, technically? Signal does not know the private keys of the two parties. Signal would have to inject an infected update into the client... which is also malware. I'd rather have just those on target devices, with a warrant, instead of breaking all encryption.

Or would you go extreme and outlaw decentraliced encrypted communication alltogether?


The law enforcement of which countries, under which sets of laws?

Should Thailand be granted access to enforce their lèse-majesté laws, for example? https://en.m.wikipedia.org/wiki/L%C3%A8se-majest%C3%A9_in_Th...

Who gets to decide what gets made available to who?


> law enforcement of which countries, under which sets of laws?

We're talking about ChatControl here, so law enforcement of EU countries, under their respective laws, into which EU law should have been incorporated.

> Should Thailand be granted access to enforce their lèse-majesté laws

Same answer as "should Thailand be granted arrest rights to enforce <whatever>": they submit a legal assistance request to the country where the alleged crime occurred.

In the case of a lawful interception request for "lèse-majesté" reasons, I'm pretty sure this would be immediately rejected.

But, if not, the EU subject of such interception would have lots and lots of avenues to get redress.

Again, and I'm getting sort of tired of repeating myself: "lawful interception" does not mean "indiscriminate surveillance at the whim of whomever" -- it is a well-defined concept that has been used to determine which telegrams and mail pieces to open and which telephone calls to record for ages now. Your country absolutely does it, as we speak, no matter where you live. It's just that modern technology has far outpaced the scope of this legislation, and things like ChatControl are (incompetent) responses to that.

ChatControl is not a good idea, and has very little chance of becoming reality. But to stop dumb proposals like this from coming up over and over again, something has got to give.


And when some other countries pass laws demanding access to the same mechanism that the EU gets?


I’ll go a step further: if EU sovereigns claim the right to “lawfully intercept” their citizens' private communications, why shouldn’t every state enjoy the same privilege? Russia, Saudi Arabia, Egypt, and Uganda will be exemplary custodians of such technology. You have nothing to fear, citizen: their democratic constitutions and impeccable legal codes will protect you.


Your comment is incoherent and displays a total misunderstanding of the technology. There is no way to make lawful intercept work with end-to-end encryption: it is mathematically impossible. And don't waste our time with stupid suggestions about key escrow or the like. We know that those keys will inevitably be leaked or misused for political purposes.


I fail to see how it would even work. If the scam has already happened, how would lawful interception afterwards help? The criminal can just use burner accounts, and the chat log exists on the scammed person's device.


Malware, easily

