If your definition of "works" includes out-of-bounds memory access, use-after-free, etc., then yes. If your definition does not include those, then it demonstrably does not.
Alternatively, maybe there's a spectrum of undesirable behaviors, some of which are preventable by choice of language and some of which aren't. Trying to reduce that complex set of tradeoffs to a simple binary of whether it "just works" only restates a conclusion someone has already reached, because you need to actually reason about those tradeoffs to make an informed decision about where to draw the line in the first place.
While a package with 10 million all-time downloads is nothing to sneeze at, it's had one memory corruption bug reported in its ~7 year life.
It's being compared to a C library that's held to extremely high standards, yet this year had two integer overflow CVEs and two other memory corruption CVEs.
SQLite is a lot more code, but it's also been around a lot longer.
The point is that matrix transpose should be trivial. But my main point really is that looking at CVEs is just nonsense. In both cases it's rather meaningless.
Furthermore, the issue at its core was an integer overflow, which is tricky in all languages and has e.g. popped up on HN recently in the context of "proven correct" code still having bugs (because the proof didn't use finite-precision integers).
It's also less tricky in Rust than in C: there are no implicit casts, debug builds check for integer overflow, and tests normally run against debug builds.
Projects do sometimes enable the checks even in release builds for security-sensitive code (1).
So if anything, the linked issue is a point in favor of using Rust over C, while acting as a reminder that no solution is perfect.
(1): It comes at a high performance cost, but sometimes, for some things, it's an acceptable cost. You can also change such settings per crate. E.g., at a company I worked at a few years ago, we built some sensitive and iffy but not hot parts with the checks always enabled, and some super-hot ML parts with optimizations always enabled, even for "debug/test" builds.
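To make that concrete, here's a minimal sketch (mine, not from the linked issue; the argument parsing is only there so the compiler can't const-fold the arithmetic away):

    fn main() {
        // Take the value from the command line so nothing is folded at compile time.
        let x: u8 = std::env::args()
            .nth(1)
            .and_then(|s| s.parse().ok())
            .unwrap_or(255);

        // No implicit casts: widening has to be spelled out.
        let _wide: u16 = u16::from(x);

        // Debug builds panic here with "attempt to add with overflow";
        // default release builds silently wrap to 0. To keep the check in
        // release builds, set `overflow-checks = true` under [profile.release]
        // in Cargo.toml (and it can be tuned per crate via profile
        // package overrides).
        let y = x + 1;
        println!("{y}");
    }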
Bounds checking for matrices is trivial. The point is that once you compete with C and need to do something slightly more complex, mistakes can also happen in Rust. Now, we can have a discussion about whether it's still safer (and I may even agree), but it defeats the "eliminates a whole class of issues" marketing, doesn't it?
And something as simple as a for loop iterating over an array with an off-by-one error can cause undefined behavior in C. Let's not pretend there's some universally agreed-upon hierarchy of which types of bugs are unconscionable and which are unfortunate, unavoidable facts of life, just because certain ones existed in the older language and others didn't.
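For illustration (my sketch, nobody's real code), the same off-by-one in Rust is a deterministic panic rather than undefined behavior:

    fn main() {
        let xs = [10, 20, 30];
        // Classic off-by-one: `..=` includes xs.len() as an index.
        for i in 0..=xs.len() {
            // In C the last iteration reads past the end of the array, which
            // is undefined behavior; here the bounds check turns it into a
            // deterministic panic: "index out of bounds".
            println!("{}", xs[i]);
        }
    }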
> It's that there are languages with additional features which make it easier to have a high confidence. If you can remove entire classes of bugs automatically, why not do so?
Which languages remove which classes of bugs entirely? This vagueness is killing me.
Safe Rust and Ada SPARK entirely remove classes of bugs like undefined behavior and memory safety issues. The latter will also statically eliminate things like overflow and type range errors.
These are subsets of their respective languages, but all safety critical development in C and C++ relies on even more constrained language subsets (e.g. MISRA or AV++) to achieve worse results.
C and C++ don't have such a subset. That seems pretty relevant, given they're the languages being compared and they're used for the majority of safety critical development.
The standards I mentioned use tricks to get around this. MISRA, for example, has the infamous rule 1.3 that says "just don't do bad things". Actually following that or verifying compliance are problems left completely to the user.
On the other hand, Safe Rust is the default. You have to go out of your way to wrap code in an unsafe block. That unsafe block doesn't change the rules of the language either, it just turns off some compiler checks.
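A tiny illustration of that default (my own sketch):

    fn main() {
        let v = vec![1u32, 2, 3];

        // Safe Rust: indexing is bounds-checked, so a bad index panics
        // instead of reading arbitrary memory.
        // let _ = v[10]; // would panic: "index out of bounds"

        // Opting out takes an explicit `unsafe` block. The rule (the index
        // must be in bounds) still applies; the compiler just stops checking
        // it for you, and you take over that obligation.
        let second = unsafe { *v.get_unchecked(1) };
        println!("{second}");
    }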
Unfortunately, no. "Memory-safe Rust" is a more general concept than "Safe Rust". "Safe Rust" is the generally understood term for the subset of Rust that is everything outside unsafe blocks. Here's an example where it's used in the language docs [0]. "Memory-safe Rust" also includes all the unsafe code that follows the language rules, which is ideally all of it.
I can see how this would be confusing and probably should have been clarified with emphasis in the original comment. Safety in the sense of "safety critical" isn't a property any programming language can have on its own, so I wouldn't have intended that regardless.
Memory safety doesn't really help that much with functional safety.
Sure, a segfault could potentially make some device fail to perform its safety-critical operation, but that is treated the same way a logic bug would be, so it's not really a concern in and of itself.
But then again, an unchecked .unwrap() would lead to the same failure mode, so a "safe" crash is just as bad as an "unsafe" one.
Memory safety (as defined by Rust) actually goes a very long way to help with functional safety, mostly because in order to have a memory safe language, you need a number of additional language features that generally aid with correctness.
For example, lifetimes are necessary for memory safety in Rust, but you can use lifetimes much more generally to express things like "while this object exists, this other object is inaccessible", or "this thing is strictly read-only under these particular conditions". That's very useful.
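For instance (a made-up sketch, not a real API), the borrow checker enforces exactly that kind of exclusivity:

    // While a `Guard` exists, the buffer it borrows is inaccessible;
    // the lifetime 'a ties the guard to that exclusive borrow.
    struct Guard<'a> {
        data: &'a mut Vec<u32>,
    }

    fn main() {
        let mut buffer = vec![1, 2, 3];
        let guard = Guard { data: &mut buffer };

        // buffer.push(4); // error[E0499]: cannot borrow `buffer` as mutable
        //                 // more than once at a time

        guard.data.push(4); // access goes through the guard while it lives
        println!("{:?}", buffer); // fine again after the guard's last use
    }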
But memory-unsafe code doesn't just segfault, it can corrupt your invariants and continue running, or open a door for an attacker to RCE on the machine. Memory safety is necessary (but not sufficient) to uphold what should be the simplest invariant of any code base, that program execution matches the source code in the first place.
C and C++ don't have such a subset defined as part of their standard. "Left to users" means left to additional tools, which do exist. Rust only has memory safety by default; that's a small part of the problem, and it's not clear to me that it helps with functional safety (although I agree that it helps elsewhere).
I'd be happy to explain at length why the existing tools and standards are insufficient if you want. It'd be easier to have that discussion over another medium than HN comment chain though.
If you think a strong and convenient type system helps with functional safety, then Rust helps with functional safety. This is also generally the experience in the industry.
I am not convinced a strong type system helps with functional safety, and I am not even deeply impressed by Rust's type system. The scientific literature doesn't even seem that clear on whether a strong type system substantially reduces software defects in general. I believe in proofs, though. I generally believe complexity is bad, and both C++ and Rust are too complex for my taste. I also think Rust has severe supply-chain issues.
If you stop learning the basics, you'll never know when the sycophantic AI happily lures you down a dark alley, because its way was the only way you ever learned. You'll forever be limited to a rehashing of the bland code slop that made up the majority of the training material. Like a carpenter who's limited to drilling Torx screws.
If that’s your goal in life, don’t let me bother you.
That's not entirely fair. It's relatively easy to learn the basics of regular expressions. But it's also relatively easy, with that knowledge, to write regular expressions that
- don't work the way you want them to (miss edge cases, etc.)
- fail catastrophically (e.g., catastrophic backtracking), which can lead to vulnerabilities
- are hard to read/maintain
I love regular expressions, but they're very easy to use poorly.
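To make "miss edge cases" concrete, here's a small sketch in Rust with the regex crate (assuming regex = "1" in Cargo.toml; the date pattern is an invented example). As an aside, this particular engine rules out catastrophic backtracking by construction, so only the edge-case failure mode is shown:

    use regex::Regex;

    fn main() {
        // Naive date matcher: fine on happy-path data...
        let naive = Regex::new(r"^\d{4}-\d{2}-\d{2}$").unwrap();
        assert!(naive.is_match("2024-01-15"));
        // ...but it happily accepts impossible dates:
        assert!(naive.is_match("2024-13-99"));

        // Constraining the ranges catches that edge case (month lengths and
        // leap years still aren't handled; the refinement never really ends):
        let stricter =
            Regex::new(r"^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$").unwrap();
        assert!(stricter.is_match("2024-01-15"));
        assert!(!stricter.is_match("2024-13-99"));
    }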
> If they don't work the way you want, you just keep refining it. This is easy if you actually test your regex in real data.
There can be edge cases in both your data and in the regular expression itself. It's not as easy as "write your code correctly and test it". Although that's true of programming in general, regular expressions tend to add an extra "layer" to it.
I don't know if you meant it to be that way, but your comment sounds a lot like "it's easy to program without bugs if you test your code". It's pretty much a given that that's not the case.
I didn’t get the “it’s easy to program without bugs” vibe at all, and OP even mentioned an edge case that took their parser down (BUG!)
Neither the human nor the AI will catch every edge case, especially if the data can be irregular. I think the point they were making is more along the lines of “when you do it yourself, you can understand and refine and fix it more easily.”
If an LLM had done my regular expressions early in my career, I'd only !maybe! have learned just what I saw and needed to know. I'm almost certain the first time I saw (?:…) I'd have given up and leaned into the AI heavily.
Not yet AFAIK, but unlike the "in-between" versions since C99, C23 actually has a number of really useful features, so I expect a couple of projects will start using them. The main problem, as always, is MSVC, but then maybe it's time to ditch MSVC support, since Microsoft also seems to have abandoned it (not just the C compiler; the C++ compiler isn't seeing many updates either, since everybody working on Visual Studio seems to have been reassigned to the AI salt mines).
(Not sure how much Apple/Google even cared about the C frontend before, but keeping the C frontend and stdlib up to date requires far less effort than C++ anyway.)
Apple used to care before Swift, because Objective-C, unlike C++, is a full superset of C.
Most of the work going into the LLVM ecosystem goes directly into LLVM tooling itself; clang was started by Apple, and Google later picked it up.
Nowadays they aren't as interested: given Swift, C++ on Apple platforms is mostly for MSL (a C++14 baseline) and driver frameworks (also a C++ subset), while Google went their own way after the ABI drama and mostly cares about what fits into their C++ style guide.
I know Intel is one of the companies that picked up some of the work, yet other compiler vendors that replaced their proprietary forks with clang don't seem that eager to contribute upstream, other than LLVM backends for their platforms.
From a marketing perspective perhaps, but it's still a supported LTS release of Ubuntu at heart and having two different version numbers would create ambiguity.
Things that should work on that particular Ubuntu LTS should work in Pop!_OS, and at least you don't have to cross-reference things.
Thankfully they keep important things more up to date with newer kernels/hardware support than the version numbers would suggest, but I think that it's a common point of confusion.
How is ssize_t any better? It's not part of standard C and is only guaranteed to be capable of holding values between -1 and SSIZE_MAX (minimum 32767, no relation to SIZE_MAX).
it's used for rewriting CLI utilities with more color by five or so people