Rust can create statically compiled binaries on Linux by using musl instead of glibc, but it’s not the default the way it is in Go, so it’s not quite as effortless. There are a lot of crates with native dependencies that need slight environment tweaks to work in a musl build. Go, on the other hand, goes to great lengths to avoid linking to any C code at all, to the point of shipping its own default TLS library and cryptographic primitives.
I thought the default for both Rust and Go was to statically compile everything except the dynamic link to libc? And that you could optionally statically include musl with a bit of extra work.
I've never had to do anything unusual with building, but I thought the "almost-statically-compiled" thing was one place where Go and Rust were almost identical.
Golang doesn't dynamically link to libc by default on Linux either - it calls the Linux kernel ABI directly (since that ABI is stable). The main upshot of this is that you don't have to worry about the classic glibc version bingo by default, while with Rust you have to go through some extra steps to avoid that.
You and your parent are talking about slightly different things. You are both correct in different ways.
Your parent is saying that, while what you say is true for Rust and Go code themselves, in practice, it is less true for Rust, because Rust code tends to call into C code more than Go code does, and Rust doesn't always statically link to that C code by default.
Mozilla was the primary steward of Rust for most of the time that the Servo project was active. So if you want to lay Servo’s failure at the feet of the Rust language, it’s pretty hard to cast Mozilla as the blameless victims of… whatever it is that Rust users as a whole did to make Servo fail.
That’s true for earlier iterations, but definitely not for an actual HDMI 2.1 signal. I think you can still connect to a DVI-D monitor and the source will automatically downgrade, but I haven’t tried it in a very long time.
How much money could PCI SIG possibly be making for the rightsholders with those fees? They’re not charged to members, they’re not per-seat (so each company only needs to pay once even if they have 100 engineers that need to read it), and they don’t include patent licenses for shipping actual hardware. Nobody’s business model is threatened even slightly by making the standards public.
And as we saw with AV1 vs H.265, the IP encumbrance of multiparty standards can create barriers that kill their adoption and the corresponding ability for rightsholders to make money off them. It looks like that family of encodings is going to die off, with basically zero interest from anybody in licensing H.266 when you’ll be able to build AV2 software and hardware for free.
For stuff like connectors, this gets worked around by using terminology like “compatible with HDMI” all the time. You are explicitly permitted by trademark law to reference your competitor’s products, including potential compatibility. I suspect the risk here is mostly contractual - AMD likely signed agreements with the HDMI Forum a long time ago that restrict their disclosure of details from the specification.
I'm shocked I had to scroll so far to find a real hard-stop blocker mentioned.
Valve has no reason to care about using the HDMI trademark. Consumers don't care if it says HDMI 2.1 or HDMI 2.1 Compatible.
The connector isn't trademarked and neither is compatibility.
The open-source nature of the driver isn't a blocker either, as Valve could just release a compiled binary instead of open-sourcing it.
The 'get sued for copying the leak' argument implies someone would actually fancy going toe to toe with Valve's legal team, which so far has rekt the EU, Activision, Riot Games, Microsoft, etc. in court.
Proving beyond doubt that Valve or their devs accessed the leaks would be hard, especially if Valve were clever from the get-go - and let's face it, they probably were. They're easily one of the leanest, most profitable, and savviest software companies around.
Never call a man happy until he is dead. Also I don’t think your argument generalizes well - there are plenty of private research investment bubbles that have popped and not reached their original peaks (e.g. VR).
Okay, but the only part that’s specific to AI (that the companies investing the money are capturing more value than they’re putting into it) is now false. Even the hyperscalers are not capturing nearly the value they’re investing, though they’re not using debt to finance it. OpenAI and Anthropic are of course blowing through cash like it’s going out of style, and if investor interest drops drastically they’ll likely need to look to get acquired.
Here is one sentence from the referenced prediction:
> I don't think there will be any more AI winters.
This isn't enough to qualify as a testable prediction, in the eyes of people who care about such things, because there is no good way to formulate resolution criteria for a claim that extends indefinitely into the future. See [1] for a great introduction.
Does GDPR (or similar) establish privacy rights to an employee’s use of a company-owned machine against snooping by their employer? Honest question, I hadn’t heard of that angle. Can employers not install EDR on company-owned machines for EU employees?
(IANAL) I don't think there is a simple answer to that, but my guess is it's workable given that the employer:
- has established a detailed policy about personal use of corporate devices
- makes a fair attempt to block work-unrelated services (Hotmail, Gmail, Netflix)
- ensures the security of the monitored data and deletes it after a reasonable period (such as 6–12 months)
- and uses it only to apply cybersecurity-related measures like virus detection, UNLESS there is a legitimate reason to target a particular employee (legal inquiry, misconduct, etc.)
It has to have a good purpose. Obviously there are a lot of words written about what constitutes a good purpose. Antivirus is probably one. Wanting to intimidate your employees is not. The same thing applies to security cameras.
Privacy laws are about the end-to-end process, not technical implementation. It's not "You can't MITM TLS" - it's more like "You can't spy on your employees". Blocking viruses is not spying on your employees. If you take the logs from the virus blocker and use them to spy on your employees, then you are spying on your employees. (Virus blockers aiming to be sold in the EU would do well not to keep unnecessary logs that could be used to spy on employees.)
What’s the definitive answer? From what I can tell that document is mostly about security risks and only mentions privacy compliance in a single paragraph (with no specific guidance). It definitely doesn’t say you can or can’t use one.
That's probably because there is no answer. Many laws apply to the total thing you are creating end-to-end.
Even the most basic law like "do not murder" is not "do not pull gun triggers" and a gun's technical reference manual would only be able to give you a vague statement like "Be aware of local laws before activating the device."
Legal privacy is not about whether you intercept TLS or not; it's about whether someone is spying on you, which is an end-to-end operation. Should someone be found to be spying on you, then you can go to court and they will decide who has to pay the price for that. And that decision can be based on things like whether some intermediary network has made poor security decisions.
This is why corporations do bullshit security by the way. When we on HN say "it's for liability reasons" this is what it means - it means when a court is looking at who caused a data breach, your company will have plausible deniability. "Your Honour, we use the latest security system from CrowdStrike" sounds better than "Your Honour, we run an unpatched Unix system from 1995 and don't connect it to the Internet" even though us engineers know the latter is probably more secure against today's most common attacks.
Okay, thanks for explaining the general concept of law to me, but this provides literally no information to figure out the conditions under which an employer using a TLS-intercepting proxy to snoop on the internet traffic of a work laptop violates GDPR. I never asked for a definitive answer, just, you know, an answer that is remotely relevant to the question.
I don’t really need to know, but a bunch of people seemed really confident they knew the answer and then provided no actual information except vague gesticulation about PII.
Are they using it to snoop on the traffic, or are they merely using it to block viruses? Lack of encryption is not a guarantee of snooping. I know in the USA it can be assumed that you can do whatever you want with unencrypted traffic, which guarantees that if your traffic is unencrypted, someone is snooping on it. In Europe, this might not fly outside of three-letter agencies (who you should still be scared of, but they are not your employer).
Your question was:
> So does nobody in Europe use an EDR or intercepting proxy since GDPR went into force?
Given that a regulator publishes a document with guidelines about DPI, I think that rules out it being impossible to implement legally. If that were the case, the document would simply say "it's not legal". It's true that it doesn't explicitly spell out all the conditions you'd have to meet, but that wasn't your question.
They can, but the list of "if..." and "it depends..." is much longer and more complicated, especially when it gets to how the obtained information may be used.
Yes.
GDPR covers all handling of PII that a company does. And it's sort of default-deny, meaning that a company is not allowed to handle (process and/or store) your data UNLESS it has a reason that makes it legal. This is where it becomes more blurry: figuring out whether the company has a valid reason. Some are simple, e.g. if required by law => valid reason.
GDPR does not care how the data got “in the hands of” the company; the same rules apply.
Another important thing is the principles of GDPR. They sort of underpin everything. One principle to consider here is data minimization. This basically means that IF you have a valid reason to handle an individual's PII, you must limit the data points you handle to exactly what you need and not more.
So - company proxy breaking TLS and logging everything? Well, the company obviously has a valid reason to handle some employee data. But if I use my work laptop to access private health records, then that is very much outside the scope of what my company is allowed to handle. And logging (storing) my health data without a valid reason is not GDPR compliant.
Could the company fire me for doing private stuff on a work laptop? Yes probably. Does it matter in terms of GDPR? Nope.
Edit: Also, “automatic” or “implicit” consent is not valid. So the company cannot say something like “if you access private info on your work PC then you automatically consent to $company handling your data”. All consent must be specific, explicit, and retractable.
What if your employer says “don’t access your health records on our machine”? If you put private health information in your Twitter bio, Twitter is not obligated to suddenly treat it as if they were collecting private health information. Otherwise every single user-provided field would be maximally radioactive under GDPR.
Many programmers tend to treat the legal system as if it was a computer program: if(form.is_public && form.contains(private_health_records)) move(form.owner, get_nearest_jail()); - but this is not how the legal system actually works. Not even in excessively-bureaucratic-and-wording-of-rules-based Germany.
Yeah, that’s my point. I don’t understand why the fact that you could access a bunch of personal data via your work laptop in express violation of the laptop owner’s wishes would mean that your company has the same responsibilities to protect it that your doctor’s office does. That’s definitely not how it works in general.
The legal default assumption seems to be that you can use your work laptop for personal things that don't interfere with your work. Because that's a normal thing people do.
I suspect they should say "this machine is not confidential" and have good reasons for that - you can't impose extra restrictions on your employees just because you want to.
The law (as executed) will weigh the normal interest in employee privacy, versus your legitimate interest in doing whatever you want to do on their computers. Antivirus is probably okay, even if it involves TLS interception. Having a human watch all the traffic is probably not, even if you didn't have to intercept TLS. Unless you work for the BND (German Mossad) maybe? They'd have a good reason to watch traffic like a hawk. It's all about balancing and the law is never as clear-cut as programmers want, so we might as well get used to it being this way.
If the employer says so and I do so anyway, then that's an employment issue. I still have to follow company rules. But the point is that the company needs to delete the collected data as soon as possible. They are still not allowed to store it.
I’ll give an example I’m more familiar with. In the US, HIPAA has a bunch of rules about how private health information can be handled by everyone in the supply chain, from doctors’ offices to medical record SaaS systems. But if I’m running a SaaS note-taking app and some doctor’s office puts PHI in there without an express contract with me saying they could, I’m not suddenly subject to enforcement. It all falls on them.
I’m trying to understand the GDPR equivalent of this, which seems to exist, since every text field in a database does not appear to require the full PII treatment in practice (and that would be kind of insane).
Drivers are interesting from a safety perspective, because on systems without an IOMMU, sending the wrong command to a device can potentially overwrite most of RAM. For example, if the safe wrappers let you write arbitrary data to a PCIe network card’s registers, you could retarget a receive queue to the middle of a kernel memory page.
> if the safe wrappers let you write arbitrary data to a PCIe network card’s registers
Functions like that can and should be marked unsafe in rust. The unsafe keyword in rust is used both to say “I want this block to have access to unsafe rust’s power” and to mark a function as being only callable from an unsafe context. This sounds like a perfect use for the latter.
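Roughly, something like this (a made-up illustration, not any real driver's API):

    /// # Safety
    ///
    /// `word_offset` must be a register offset documented for this device, and
    /// `value` must be something the device can safely accept there. An
    /// arbitrary write can e.g. point a DMA queue at arbitrary memory.
    pub unsafe fn write_register(bar0: *mut u32, word_offset: usize, value: u32) {
        // The caller has promised the offset is valid for the mapped BAR.
        unsafe { bar0.add(word_offset).write_volatile(value) }
    }

Callers then have to write `unsafe { write_register(...) }` and take responsibility for the hardware-level precondition themselves, which is exactly the second use of the keyword.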
> Functions like that can and should be marked unsafe in rust.
That's not how it works. You don't mark them unsafe unless it's actually required for some reason. And even then, you can limit that scope to a line or two in the majority of cases. You mark blocks unsafe if you have to access raw memory and there's no way around it.
It is how it's supposed to work. `unsafe` is intended to be used on functions where the caller must uphold some precondition(s) in order to not invoke UB, even if the keyword is not strictly required to get the code to compile.
The general rule of thumb is that safe code must not be able to invoke UB.
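For contrast, a wrapper only gets to be a safe fn if it validates enough that no safe caller can trigger UB - something like this sketch (illustrative only; per the IOMMU-less scenario upthread, even bounds-checking the offset may not be enough on some hardware):

    pub struct Nic {
        bar0: *mut u32,
        bar0_words: usize, // size of the mapped BAR0 region, in u32 words
    }

    impl Nic {
        // Exposed as safe only because the offset is checked against the
        // mapped region; whether a *valid* register write can still wreck
        // memory via DMA is a separate, hardware-level question.
        pub fn write_register(&mut self, word_offset: usize, value: u32) {
            assert!(word_offset < self.bar0_words, "register offset out of range");
            unsafe { self.bar0.add(word_offset).write_volatile(value) }
        }
    }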
Yes. I was objecting to the parent poster's "can and should be" which sounds like they think people just randomly choose where to use the unsafe decoration.
The situation seems reminiscent of "using File::open to modify /proc/self/mem". It's safe to work with files, except there's this file that lets you directly violate memory safety.
I can't say I got the same feeling. To me, the "Functions like that" lead-in indicated the opposite if anything since it implies some kind of reasoned consideration of what the function is doing.
All logic, all state management, all per-device state machines, all command parsing and translation, all data queues, etc. Look at the examples people posted in other comments.
Yep. It's kind of remarkable just how little unsafe code you often need in cases like this.
I ported a C skip list implementation to rust a few years ago. Skip lists are like linked lists, but instead of a single "next" pointer, each node contains an array of them. As you can imagine, the C code is packed full of fiddly pointer manipulation.
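The node layout looks roughly like this (an illustrative sketch, not the actual crate's code):

    use std::ptr::NonNull;

    struct Node<T> {
        item: T,
        // The "tower": one forward pointer per level this node is part of.
        // Level 0 links every node; higher levels skip over more of the list.
        nexts: Vec<Option<NonNull<Node<T>>>>,
    }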
The rust port certainly makes use of a fair bit of unsafe code. But the unsafe blocks still make up a surprisingly small minority of the code.
Porting this package was one of my first experiences working with rust, and it was a big "aha!" moment for me. Debugging the C implementation was a nightmare, because a lot of bugs caused obscure memory corruption problems. They're always a headache to track down. When I first ported the C code to rust, one of my tests segfaulted. At first I was confused - rust doesn't segfault! Then I realised it could only segfault from a bug in an unsafe block. There were only two unsafe functions it could be, and one obvious candidate. I had a read of the code, and spotted the error nearly immediately. The same bug would probably have taken me hours to fix in C because it could have been anywhere. But in rust I found and fixed the problem in a few minutes.
In normal user-mode rust, not running inside the kernel at all, you can open /dev/mem and write whatever you want to any process's memory (assuming you are root). This does not require "unsafe" at all.
Another thing you can do from rust without "unsafe": output some buggy source code that invokes UB in a language like C to a file, then shell out to the compiler to compile that file and run it.
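A sketch of the first point (assumes root, and most distro kernels block it via CONFIG_STRICT_DEVMEM, so treat it as a thought experiment):

    use std::fs::OpenOptions;
    use std::io::{Seek, SeekFrom, Write};

    fn main() -> std::io::Result<()> {
        // Plain safe file I/O, no `unsafe` keyword anywhere, yet on a
        // permissive kernel this scribbles over physical memory.
        let mut mem = OpenOptions::new().write(true).open("/dev/mem")?;
        mem.seek(SeekFrom::Start(0x10_0000))?; // an arbitrary physical address
        mem.write_all(&[0u8; 64])?;
        Ok(())
    }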
Sure, but those are non-central to what the program is doing. Writing to the wrong register offset and hosing main memory is a thing that happens when developing drivers (though usually it breaks obviously during testing).
Right, you're not wrong that this is a possible failure mode which Rust's guarantees don't prevent.
I'm just pointing out that "your program manipulates the external system in such a way that UB is caused" is outside the scope of Rust's guarantees, and kernel development doesn't fundamentally change that (even though it might make it easier to trigger). Rust's guarantees are only about what the Rust code does; anything else would be hard or impossible to guarantee and Rust doesn't try to do so.
If drama was going to drive Linux kernel developers away, it would have just been Torvalds working on it alone for 30 years. For better or worse, that community has selected for having very thorough antibodies to conflict.
I agree it doesn’t magically eliminate bugs, and I don’t think rearchitecting the existing Linux kernel would be a fruitful direction regardless. That said, OS services split out into apps with more limited access can still provide a meaningful security barrier in the event of a crash. As it stands, a full kernel-space RCE is game over for a Linux system.