Yeah, we know the market won't solve this. That's why people are talking about government standards.


I can't help but think that those lazy mathematicians might benefit from a congressional order to clean up that twin prime problem too.

If memory safety was "just the right regulations" easy, it would have already been solved. Every competent developer loves getting things right.

I can imagine a lot more "compliance" than success may be the result of any "progress" with that approach.

The basic problem is challenging, but what makes it hard-hard is the mountain of incidental complexity piled on top. Memory safety as a retrofit on languages, tools and code bases is a much bigger challenge than starting with something simple and memory safe, and then working back up to something with all the bells and whistles that mature tool ecosystems provide for squeezing out that last bit of efficiency. Programs get judged 100% on efficiency (how fast can you get this working? how fast does it run? how much is our energy/hardware/cloud bill?), and only 99% or so on safety.

If the world decided it could get by on a big drop in software/computer performance for a few years while we restarted with safer/simpler tools, change would be quick. But the economics would favor every defector so much that ... that approach is completely unrealistic.

It is going to get solved. The payoff is too high, and the pain is too great, for it not to. But not based on a concept of a plan or regulation.


> If memory safety was "just the right regulations" easy, it would have already been solved.

Memory safety is already a solved problem in regulated industries. It's not a hard problem as such. People just don't want to solve it and don't have any incentive to: companies aren't penalised for writing buggy software, and individual engineers are if anything rewarded for it.

> Every competent developer loves getting things right.

Unfortunately a lot of developers care more about being able to claim mastery of something hard. No-one gets cred for just writing the thing in Java and not worrying about memory issues, even though that's been a better technical choice for decades for the overwhelming majority of cases.


> Memory safety is already a solved problem in regulated industries. It's not a hard problem as such.

It's not hard, no, but it is expensive, because those regulations require a battery of tests run by a third party that you pay each time you want to recertify.

I've worked in two regulated industries; the recertification is the expensive part, not the memory errors.


> Memory safety is already a solved problem

Most famously in Rust. Even there it takes work.

The problem is a practical one of coding efficiency (and quality). You are right that there are no intractable memory problems even in the unsafest, least helpful languages.


Regulated industries have overwhelmingly boring and expensive software compared to others. They do things like banning recursion and dynamic arrays lol. Memory safety in every aspect possible just isn't worth it for most applications. And the degree of memory safety that is worth it is a lot less than Rust developers seem to think, and the degree of memory safety granted by Rust is less than they think as well.
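
For a flavour of what that looks like in practice, here is a minimal sketch in the spirit of such rules (the names and limits are invented): storage sized up front, a bounded loop instead of recursion, and no malloc anywhere.

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical sketch of "regulated industry" C: no malloc, no recursion.
       MAX_READINGS and the function names are invented for illustration. */
    #define MAX_READINGS 64u

    static int32_t readings[MAX_READINGS];   /* fixed-capacity, statically allocated */
    static uint32_t reading_count = 0u;

    /* Returns false instead of growing the array when the buffer is full. */
    bool push_reading(int32_t value)
    {
        if (reading_count >= MAX_READINGS) {
            return false;            /* caller must handle the "full" case */
        }
        readings[reading_count] = value;
        reading_count++;
        return true;
    }

    /* Sum with a bounded loop; an equivalent recursive version would be banned. */
    int64_t sum_readings(void)
    {
        int64_t total = 0;
        for (uint32_t i = 0u; i < reading_count; i++) {
            total += readings[i];
        }
        return total;
    }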


Memory safety isn't worth it as long as leaking all your users' data (and granting attackers control over their systems) doesn't cost much. As attacks get more sophisticated and software gets more important, the costs of memory unsafety go up.


What you've said is true but I still think the problem is overblown, and solutions at the hardware level are disregarded in favor of dubious and more costly software rewrite solutions. If something like CHERI was common then it would automatically find most security-related memory usage bugs, and thus lead to existing software getting fixed for all hardware.
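
As a rough illustration (invented code, not from any real project): on conventional hardware the off-by-one below quietly scribbles on whatever sits next to the buffer; on capability hardware in the CHERI mould, the pointer carries the allocation's bounds and the out-of-bounds store traps, so the latent corruption becomes an immediately visible failure.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical example: a classic off-by-one write.
       On conventional hardware this corrupts adjacent memory and usually
       goes unnoticed; on CHERI-style capability hardware the pointer's
       bounds cover only buf[0..7], so the store to buf[8] traps. */
    void copy_label(const char *src)
    {
        char buf[8];
        size_t n = strlen(src);        /* suppose src is "12345678" (8 chars)   */
        memcpy(buf, src, n);           /* fills buf completely                  */
        buf[n] = '\0';                 /* n == 8: one byte past the end of buf  */
        printf("%s\n", buf);
    }

    int main(void)
    {
        copy_label("12345678");
        return 0;
    }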


IME you can't reliably extract the intent from the C code, much less the binary, so you can't really fix these bugs without a human rewriting the source. The likes of CHERI might make exploitation harder, but it seems to me that ROP-style workarounds will always be possible, because fundamentally if the program is doing things that look like what it was meant to do then the hardware can never distinguish whether it's actually doing what it was meant to do or not. Even if you were able to come up with a system that ensured that standards-compliant C programs did not have memory bugs (which is already unlikely), that would still require a software rewrite approach in practice because all nontrivial C programs/libraries have latent undefined behaviour.
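
To make that concrete, a hedged sketch (all names invented) of a bug that no bounds check, hardware or software, can flag: every access lands inside a valid allocation, yet the code hands back the wrong session's data, because the intent ("only return the record belonging to the caller") is simply not visible in the memory accesses themselves.

    #include <stddef.h>

    /* Invented example: all accesses are in bounds, so capability hardware
       (or a software bounds check) sees nothing wrong, yet the function
       leaks another session's data because of a logic error. */
    #define MAX_SESSIONS 16

    struct session {
        int  user_id;
        char auth_token[32];
    };

    static struct session sessions[MAX_SESSIONS];

    /* BUG: the caller-supplied index is range-checked but never checked
       against the caller's own user_id, so any caller can read any slot.
       Every load here stays within sessions[], so no memory-safety
       mechanism, hardware or otherwise, has grounds to object. */
    const char *get_token(size_t idx)
    {
        if (idx >= MAX_SESSIONS) {
            return NULL;               /* memory-safe, still wrong */
        }
        return sessions[idx].auth_token;
    }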


> IME you can't reliably extract the intent from the C code, much less the binary, so you can't really fix these bugs without a human rewriting the source.

I am pretty sure that the parent is talking about hardware memory safety which doesn't require any "human rewriting the source".


> I am pretty sure that the parent is talking about hardware memory safety which doesn't require any "human rewriting the source".

It does though. The hardware might catch an error (or an "error") and halt the program, but you still need a human to fix it.


> but you still need a human to fix it.

The same thing can be said about a Rust vector OOB panic or any other bug in any safe language. Bugs happen which is why programmers are employed in the first place!


> The same thing can be said about a Rust vector OOB panic or any other bug in any safe language. Bugs happen which is why programmers are employed in the first place!

Sure, the point is you're going to need the programmer either way, so "hardware security lets us detect the problem without rewriting the code" isn't really a compelling advantage for that approach.


If a program halts, that is a narrow security issue that will not leak data. Humans need to fix bugs, but that is nothing new. A memory bug with such features would be hardly more significant than any other bug, and people would get better at fixing them over time because they would be easier to detect.


> If a program halts, that is a narrow security issue that will not leak data.

Maybe. It depends on what fallback the business that was using it has when that program doesn't run.

> Humans need to fix bugs, but that is nothing new. A memory bug with such features would be hardly more significant than any other bug

Perhaps. But it seems to me that the changes that you'd need to make to fix such a bug are much the same changes that you'd need to make to port the code to Rust or what have you, since ultimately in either case you have to prove that the memory access is correct. Indeed I'd argue that an approach that lets you find these bugs at compile time rather than run time has a distinct advantage.


>Perhaps. But it seems to me that the changes that you'd need to make to fix such a bug are much the same changes that you'd need to make to port the code to Rust or what have you, since ultimately in either case you have to prove that the memory access is correct.

No, you wouldn't need to prove that the memory access is correct if you relied on hardware features. Or I should say, that proof will be mostly done by compiler and library writers who implement the low level stuff like array allocations. The net lines of code changed would definitely be less than a complete rewrite, and would not require rediscovery of specifications that normally has to happen in the course of a rewrite.

>Indeed I'd argue that an approach that lets you find these bugs at compile time rather than run time has a distinct advantage.

It is an advantage, but it's not free: every compilation takes longer in a more restrictive language. And the benefit diminishes rapidly with the number of running instances of the program exercising those checks, which is incidentally one metric that correlates positively with how significant bugs actually are. You could think of the runtime checks as free unit tests, almost. The extra hardware does have a cost, but that cost is WAAAY lower than the cost of a wholesale rewrite.


> No, you wouldn't need to prove that the memory access is correct if you relied on hardware features. Or I should say, that proof will be mostly done by compiler and library writers who implement the low level stuff like array allocations. The net lines of code changed would definitely be less than a complete rewrite, and would not require rediscovery of specifications that normally has to happen in the course of a rewrite.

I don't see how the hardware features make this part any easier than a Rust-style borrow checker or avoid requiring the same rediscovery of specifications. Checking at runtime has some advantages (it means that if there are codepaths that are never actually run, you can skip getting those correct - although it's sometimes hard to tell the difference between a codepath that's never run and a codepath that's rarely run), but for every memory access that does happen, your compiler/runtime/hardware is answering the same question either way - "why is this memory access legitimate?" - and that's going to require the same amount of logic (and potentially involve arbitrarily complex aspects of the rest of the code) to answer in either setting.
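
A small invented example of "the same question either way": to touch buf[len - 1] safely, something has to establish that len is at least 1 and no larger than the buffer. A borrow checker or verifier asks the programmer to make that argument before the build; a bounds check or capability trap makes the running program carry it at every call. Either way, understanding why len is in range means reading the same surrounding code.

    #include <stddef.h>
    #include <stdbool.h>

    /* Invented example. The invariant "len >= 1 && len <= capacity of buf"
       has to come from somewhere in every approach:
         - compile time: the caller contract is proven before shipping
           (what a borrow checker / verifier pushes you to do), or
         - run time: the check below (or a hardware bounds trap) enforces it
           on every call, and someone still has to decide what happens when
           it fails.
       Either way, understanding why len is in range requires reading the
       same surrounding code. */
    bool read_last(const int *buf, size_t len, size_t capacity, int *out)
    {
        if (len == 0 || len > capacity) {
            return false;              /* the runtime form of the proof */
        }
        *out = buf[len - 1];
        return true;
    }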


The human might say, sorry my C program is not compatible with your hardware memory safety device. I won't/can't fix that.


That's possible but unlikely. I would be OK with requiring software bugs like that to be fixed, unless it can be explained away as impossible for some reason. We could almost certainly move toward requiring this kind of stuff to be fixed much more easily than we could do the commonly proposed "rewrite it in another language bro" path.


There's no such thing as hardware memory safety that leaves the semantics of the machine, as seen by the compiled C program, absolutely unchanged. There are going to be false positives.


> There are going to be false positives

Of course, but compare it with rewriting it to a completely different language.


There may be some cases where code would need to be adjusted or annotated to use CHERI well, but that has to be easier than translating to or interfacing with another language.
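
One concrete, well-documented class of adjustment is C code that round-trips a pointer through a plain fixed-width integer. On a capability machine the integer can't carry the capability's tag and bounds, so the reconstructed pointer is unusable; the usual fix is mechanical (use uintptr_t, which is capability-sized on CHERI targets, or just keep it a pointer). The snippet below is an invented illustration of the pattern, not code from any particular project.

    #include <stdint.h>

    /* Invented illustration of a common CHERI porting hazard: stashing a
       pointer in a fixed-width integer field. */
    struct handle {
        uint64_t opaque;      /* problem: too narrow to hold a capability; the
                                 tag/bounds are lost and dereferencing the
                                 reconstructed pointer faults on CHERI        */
        /* uintptr_t opaque;     portable fix: uintptr_t is capability-sized
                                 on CHERI targets, so the round-trip works   */
    };

    void stash(struct handle *h, void *p)
    {
        h->opaque = (uint64_t)(uintptr_t)p;    /* silently drops capability metadata */
    }

    void *unstash(struct handle *h)
    {
        return (void *)(uintptr_t)h->opaque;   /* invalid (untagged) capability on CHERI */
    }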


How many modern apps are running inside a browser, one way or another? The world’s already taken that big drop on performance.


When you can’t convince people it’s better, you need to force them to do it.


Did you forget a /s? It seems that if you can't convince a majority of programmers that your new language is good enough to learn, maybe it actually isn't as good as its proponents claim. It is likely the case that rewriting everything in a new language for marginally fewer bugs is a worse outcome than just dealing with the bugs.


I agree. I don’t think we need a government computer language force. Terry is a prophet.


Mind you, the government has tried this before with Ada. Not to knock Ada, but let's just say that the government would ruin everything and stifle the industry. Certainly, any new regulations about anything as broad as how memory is allowed to be managed are going to strangle the software industry.


Ada was an impressive language for its day. But even with Ada, government never mandated its use generally - only for certain government projects, and waivers were available with justification.

And as industry took the lead in language innovation, government removed its Ada requirement.


If this has to be forced, it probably isn't necessary or very beneficial. How much will it cost to conform to these "standards" versus not? Who stands to gain by making non-conformant software illegal? I think it is clearly far too expensive to rewrite all software and retrain all programmers to conform to arbitrary standards. Hardware solutions to improve memory safety already exist and may ultimately be the best way to achieve the goal.

It seems to me that Rust programmers, unhappy with the pace of adoption of Rust, seek to make other languages illegal because they do things differently from Rust.



