That’s not really a “but” to the comment, which was that you only need to find one bug and it’s game over. We’ve known for a long time that best practices aren’t enough to prevent memory corruption in large enough C++ code bases, so it’s likely a motivated attacker would eventually find something.
Sure - look, I'm not going to argue about C++ memory safety - I've spent both time and money trying to make sure we fund work on getting off of C++.
I just assume C++ code is unsafe, because it's really really hard to make it safe.
However, at the same time, the privilege escalation issues would have happened in any language - if you don't implement the check, you don't implement the check.
(and you could make it equally automatic in most languages)
What the missing access check protected was a stream of information that could defeat ASLR. If Zircon was written in a memory-safe language, that would have been the end of the issue. Logic bugs and missing access checks are still possible, but defeating them has fairly well definable consequences. Since Zircon isn’t written in a memory-safe language, the author was able to use that to fully compromise the kernel instead. I don’t mean that you can’t write bugs in memory-safe languages, but in the end the attacker still has to play by your rules. With a memory safety bug, attackers play by no one’s rules.
I admire your optimism, but the notion that missing access checks are somehow less dangerous in a memory safe language is nonsense on its face.
Yes, this particular one enabled a defeat of ASLR, but so what?
Missing access checks enable privilege escalation no matter what the language.
Your claim of "well-definable consequences" is equally true in C++ as anywhere else.
Whether you miss your access check in Rust, or C++, or Python, or whatever, the definable consequence is "privilege escalation".
Let's not pretend memory safe languages solve logic problems.
They help with memory safety - that's awesome but not a complete solution.
If we want better verification of access contracts, we'd need a language with contracts or some other verifiable mechanism.
Those exist, and I'd support their use in this sort of case.
No, logic bug consequences are fundamentally different from memory corruption bugs. A bug in an access check for one resource in the kernel means you get access to that specific resource. A missing bounds check in the kernel—anywhere in the kernel—means you get arbitrary code execution. Sure, _sometimes_ you can escalate from one resource to another with an access check issue, but that’s a _sometimes_. One memory corruption bug _anywhere_ defeats the protection of _all resources, all the time_. Not only that, but memory corruption bugs have well-understood exploit recipes that attackers can build once and reuse any time they find a new bug, whereas an attacker has to write a new, distinct exploit for most logic bugs.
There is no equivalence to make. Memory corruption bugs are unambiguously, unequivocally better exploitation primitives than logic bugs.
No one is claiming that memory-safe languages solve logic problems. The claim is that most memory corruption bugs are conveniently exploitable and any such exploit can reach for anything in the address space, while logic bugs often can’t. You might as well say that you’d rather promote chess pawns to knights instead of queens. Sure, that makes sense sometimes, but it’s a bad default.