
For me, there's a headline draw, which is the borrow checker. Really great.

But apart from that, Rust is basically a bag of sensible choices. Big and small stuff:

- Match needs to be exhaustive. When you add something to the enum you were matching, it chokes. This is good.

- Move by default. If you came from c++, I think this makes a lot of sense. If you have a new language, don't bring the baggage.

- Easy way to use libraries. For now it hasn't splintered into several ways to build yet, I think most people still use cargo. But cargo also seems to work nicely, and it means you don't spend a couple of days learning cmake.

- Better error handling. There's a few large firms that don't use exceptions in c++. New language with no legacy? Use the Ok/Error/Some/None thing.

- Immutable by default. It's better to have everything locked down and have to explicitly allow mutation than just have everything mutable. You pay every time you forget to write mut, but that's pretty minor.

- Testing is part of the code, doesn't seem tacked on like it does in c++.



> Match needs to be exhaustive.

When I see people mention C++ with MISRA rules, I just think: why do we need all these extra rules, often checked by a separate static analysis tool and enforced manually (which comes down to an audit/compliance requirement), when they make perfect sense and could be enforced by the compiler? Missing switch cases happen often when an enum is modified to include one extra entry and people don't update all the code that uses it. Making exhaustiveness mandatory at the compiler level is an obvious choice.
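
For comparison, a minimal Rust sketch of what that compiler-level enforcement looks like (the Direction enum is made up):

  enum Direction { North, South, East } // suppose this later gains a `West` variant

  fn sign(d: Direction) -> i32 {
      match d {
          Direction::North => 1,
          Direction::South => -1,
          Direction::East => 0,
          // add `West` to the enum and this match stops compiling:
          // error[E0004]: non-exhaustive patterns: `Direction::West` not covered
      }
  }

  fn main() {
      println!("{}", sign(Direction::North));
  }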


  -Wswitch
    Warn whenever a switch statement has an index of enumerated type and lacks a case for one or more of the named codes of that enumeration. (The presence of a default label prevents this warning.) case labels that do not correspond to enumerators also provoke warnings when this option is used, unless the enumeration is marked with the flag_enum attribute. This warning is enabled by -Wall.
<https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html#inde...>

The compiler can do that... And it's included in -Wall. It's not on by default but is effectively on in any codebase where anyone cares...

Please don't argue about "but I don't need to add a flag in Rust". It's not Rust; there are reasons the standards committee finds valid for why it is the way it is, and honestly you're welcome to implement your own compiler that turns it on by default, just like the Rust compiler, which has no standard because "the compiler is the standard".


MISRA won't be OK with that.

MISRA requires that you explicitly write the default reject. So -Wswitch doesn't get it done, even though I agree that if C had standardized this requirement (which it did not), that would get you what you need.

C also lacks Rust's non_exhaustive trait. If the person making a published Goose type says it's non-exhaustive then in their code nothing changes, all their code needs to account for all the values of type Goose as before - but everybody else using that type must accept that the author said it's non-exhaustive, so they cannot account for all values of this type except by writing a default handler.

So e.g. if I publish an AmericanPublicHoliday type when Rust 1.0 ships in 2015, and I mark it non-exhaustive since by definition new holidays may be added, you can't write code to just handle each of the holidays separately, you must have a default handler. When I add Juneteenth to the type, your code is fine, that's a holiday you must handle with your default handler, which you were obliged to write.

On the other hand IpAddr, the IP address type, is an ordinary exhaustive enum: if you handle both the V6 (Ipv6Addr) and V4 (Ipv4Addr) variants, you've got a complete handling of IpAddr.
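
A rough sketch of how that plays out in code, using the hypothetical holiday type from above:

  // In the publishing crate:
  #[non_exhaustive]
  pub enum AmericanPublicHoliday {
      NewYearsDay,
      IndependenceDay,
      Thanksgiving,
      // Juneteenth can be added later without breaking downstream code
  }

  // In a downstream crate, a wildcard arm is mandatory:
  fn greet(day: AmericanPublicHoliday) -> &'static str {
      match day {
          AmericanPublicHoliday::NewYearsDay => "Happy New Year",
          AmericanPublicHoliday::IndependenceDay => "Happy Fourth",
          AmericanPublicHoliday::Thanksgiving => "Happy Thanksgiving",
          _ => "Enjoy the day off", // required because the type is non_exhaustive
      }
  }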


"MISRA requires that you explicitly write the default reject."

You can always use -Wswitch-enum then.


Ugh too late to catch myself, non_exhaustive is an attribute, not a trait.


> Please don't argue about "but I don't need to add a flag in Rust"

Why not? It's a big issue. You say it's "on in any codebase where anyone cares", and I agree with that but in my experience most C++ developers don't care.

I regularly have to work with other people's C++ where they don't have -Wall -Werror. It's never an issue in Rust.

Also I don't buy that they couldn't fix this because it would be a breaking change. That's just an excuse for not bothering. They've made backwards-incompatible changes in the past, e.g. removing exception specifications, changing `auto`, changing the behaviour around operator==. They could gate it on the standard version, just like Rust uses editions.

Of course they won't, because the C++ standards committee is still very much "we don't need seatbelts, just drive well like me".


> I regularly have to work with other people's C++ where they don't have -Wall -Werror.

To be fair, -Werror is kind of terrible. The set of warnings is very sensitive to the compiler version, so as soon as people work on the project with more than one compiler or even more than one version of the same compiler, it just becomes really impractical.

An acceptable compromise can be that -Werror is enabled in CI, but it really shouldn't be the default at least in open-source projects.


A common trope, probably ignored or even unknown to uninformed C/C++ programmers, is that -Werror should be used for debug builds (as you use during development) and never for release builds (as otherwise it will most probably break compilation with future releases of the compiler).


> A common trope, probably ignored or even unknown to uninformed C/C++ programmers, is that -Werror (...)

Not even that. -Wall -Werror should be limited to local builds, and should never touch any build config that is invoked by any pipeline.


No you definitely want to enforce this in CI.


> No you definitely want to enforce this in CI.

No, not really. It makes absolutely no sense to block builds for irrelevant things like passing unused arguments to a function.


> irrelevant things like passing unused arguments to a function.

That's not irrelevant. I have seen many bugs detected by that warning.


-Werror= lets you decide which warnings are errors. No reason to enable it globally.


Yes that is the standard practice for open source projects (where it happens at all), but again that's another way in which C++ warnings are not even close to Rust errors.


> I regularly have to work with other people's C++ where they don't have -Wall -Werror.

I think you inadvertently showed why this sort of thing is simply bad practice and a notorious source of problems. With -Wall -Werror you can turn any optional nit remark into a blocked pipeline requiring urgent maintenance. I know it because I had to work long hours in a C++ project that suddenly failed to build because a moron upstream passed -Wall -Werror as transitive build flags. We're talking about production pipelines being blocked due to things like function arguments being declared but not used.

Sometimes I wonder if these discussions on the virtues of blindly leaning on the compiler are based on solid ground or are instead opinionated junior devs passing off their skinner box as some kind of operational excellence.


-Wall -Werror is a nice idea that university professors will tell you about, and it collides at first contact with the real world, where you are including third-party headers that then spit out 50 pages of incomprehensible GCC "overflow analysis" warnings.


You can use `-isystem` for that. It isn't particularly well supported by C++ build systems, but also your assertion that third-party headers don't compile with `-Wall -Werror` doesn't match my experience. Usually they're fine.

> GCC "overflow analysis" warnings

I think I've seen this with `fmt`, and it was a GCC compiler bug. Not much you can do about that.


>...honestly you're welcome to implement your own compiler that turns it on by default, just like the Rust compiler, which has no standard because "the compiler is the standard".

The C and C++ standards are quite minimal and whether or not an implementation is "compliant" or not is often a matter of opinion. And unlike other language standards (e.g. Java or Ada) there isn't even a basic conformance test suite for implementations to test against. Hence why Clang had to be explicitly designed for GCC compatibility, particularly for C++.

Merely having a "language standard" guarantees very little. For instance, automated theorem proving languages like Coq (Rocq now, I suppose)/Isabelle/Lean have no official language standard, but they are far more defined and rigorous than C or C++ ever could be. A formal standard is a useful broker for proprietary implementations, but it has dubious value for a language centered around an open source implementation.


> It's not on by default but is effectively on in any codebase where anyone cares...

Then why is this a MISRA rule by itself? Shouldn't it just be "every codebase must compile with -Wall or equivalent"?


I wouldn't be surprised if you could justify in a review that compiling with -Wall (or probably something more explicit) catches this and therefore you can disregard the rule.

Not all compilers have a -Wall equivalent. GCC, Clang and MSVC do, but RANDOM_EMBEDDED_CHIP's custom compiler might not, and that is a valid target for MISRA compliance.

I doubt every single thing that needs MISRA gets compiled with an industry-standard compiler; I wouldn't be surprised if GCC is the exception for most companies targeting MISRA compliance.


but I don't need to add a flag in Rust


MISRA's rules are a real mix in three interesting senses

Firstly, in terms of what the rules require. Some MISRA rules are machine checkable. Your compiler might implement them or, more likely, a MISRA auditing tool you bought does so. Some MISRA rules need human insight in practice. Is this OK, how about that? A good code review process should be able to catch these, if the reviewers are well trained. But a final group are very vague, almost aspirational, like the documentation requirements, at their best these come down to a good engineering lead, at their worst they're completely futile.

Secondly in terms of impact, studies have shown some MISRA rules seem to have a real benefit, codebases which follow these rules have lower defect rates. Some are neutral, some are net negative, code which followed these MISRA rules had more defects.

Thirdly in terms of what they do to the resulting software. Some MISRA rules are reasonable choices in C, you might see a good programmer do this without MISRA prompting just because they thought it was a good idea. Some MISRA rules prohibit absolute insanity. Stuff like initializing a variable in one switch clause, then using it in a different clause! Syntactically legal, and obviously a bad idea, nobody actually does that so why write a whole rule to prohibit it? But then a few MISRA rules require something no reasonable C programmer would ever write, and for a good reason, but it also just doesn't really matter. Mostly this is weird style nits, like if your high school English essay was marked by a NYT copy editor and got a D minus because you called it NASCAR not Nascar. You're weird NYT, you're allowed to be weird but that's not my fault and I shouldn't get penalized.


Because MISRA is also insane and has long since bled into a middle manager's dream of a style guide? It would make for a terrible language (one that ironically isn't much more "secure", "safe", or "reliable").


> Better error handling. There's a few large firms that don't use exceptions in c++. New language with no legacy? Use the Ok/Error/Some/None thing.

I think this is still very much a debatable point. There are disadvantages to exceptions, mostly around code size and performance. But they are still the only error handling mechanism that anyone has found that defaults to adding enough context to errors to actually be useful (except of course in C++, because C++ doesn't like having useful constructs).

Rust error handling tends towards not adding any kind of context whatsoever to errors - if you use the default error mechanisms and no extra libraries. That is, if you have a call stack three functions deep that uses `?` for error handling, at the top level you'll only get an error value, you'll have no idea where the value originated from, or any other information about the execution path. This can be disastrous for actually debugging hard to reproduce errors.
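
A minimal sketch of what I mean, with made-up functions:

  fn main() {
      // the caller sees only the bare ParseIntError value,
      // with no record of the path it took through middle() and inner()
      if let Err(e) = top() {
          eprintln!("{e}");
      }
  }

  fn top() -> Result<u32, std::num::ParseIntError> {
      middle()
  }

  fn middle() -> Result<u32, std::num::ParseIntError> {
      inner("forty-two")
  }

  fn inner(s: &str) -> Result<u32, std::num::ParseIntError> {
      let n: u32 = s.parse()?; // `?` returns the error value as-is; no location attached
      Ok(n + 1)
  }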


I feel like your last point is the exact issue with exceptions, not rust’s errors. Exceptions are like having “?” on every single line.


When an exception happens, you get a stack trace somewhere in your logs (unless you do something really weird). That doesn't always include all the information you'd like (for example, if the error happened in a loop, you don't get info about the loop variable).

In contrast, unless you manually add context to the error (or use a library that does something like this for you, overriding the default ? behavior), you won't get any information about where an error occurred at all.

Sure, with exceptions, you don't know statically where an exception might happen. But at runtime, you do get the exact information. So, if the error is hard to reproduce, you still have information about where exactly it occurred in those rare occasions where it happened.


> When an exception happens, you get a stack trace somewhere in your logs

OK, so, if I write the canonical modern C++ Hello World, execute it against an environment where the "standard output" doesn't exist, where does this stack trace get recorded? Maybe it depends on the compiler and standard library implementation somehow?

My impression is that in reality C++ just ignores the problem and carries on, so actually there was no stack trace, no logging, it just didn't work and too bad. Unsurprisingly people tasked with making things work prefer a language which doesn't do that.


How does any other language deal with POSIX standard I/O streams or the lack thereof? Definitely not a C++ or exceptions problem. Which language lets you compile a "Hello, World!" program and then execute it against a non-POSIX-compatible environment and get the correct output... somewhere?

If you're executing against a POSIX-compatible environment, then stdin, stdout, and stderr are expected to exist and be configured properly if you want them to work[1].

If you're executing against some other environment, like webassembly or an embedded system, then you'll already (hopefully) be using some logging and error handling approach that sends output to the correct place. Doesn't matter if you're using C, C++, .NET, Rust, Zig, etc.

For example, webassembly is an environment without stdio streams. It's your responsibility to make sure there is a proper way to record output, even if it's just a compatibility layer that goes to console.log.

[1]: https://pubs.opengroup.org/onlinepubs/9799919799/functions/s...


The other languages do not (as my parent claimed) write stack traces to a log somehow. I suspect that in reality they've omitted to explain that they're expected to write all the C++ code to make that stack trace and write it to a log, but once you add those steps you're back to parity with Rust: the Rust programmers can write a stack trace to a log too.

In the specific case of "Hello, World" it's more embarrassing. The Rust Hello World does indeed experience and report errors if there are any, the canonical C just ignores them, as does the C++.


> The Rust Hello World does indeed experience and report errors if there are any, the canonical C just ignores them, as does the C++.

Can you give an example for each of those?



Thank you for providing a reference. After reading the blog post on that page, I'm even less convinced that your point is useful.

I don't think it's a bug if, like in the C example, you don't handle the return value of the function you are calling. The strace shows that the function returned an error, but the code doesn't check it. Not a language flaw.

In fact, in most of the languages that "don't have the bug", the runtime is automagically capturing the issue and aborting the program. Like an exception. Rust just "doesn't have the bug" because the compiler forces you to handle the error. All the .NET languages do the same thing at runtime and force you to handle the I/O error... with an exception handler.

Unfortunately, your talking points just seem like more Rust fanaticism trying to discredit any other language. This happens in every single discussion about any language other than Rust, especially C/C++. I'm not going to engage any further.


I'll go into details about your particular question, but I first want to explain why it's missing the point. The difference in terms of logging between exceptions and Rust error handling (or Haskell, or Go, or C) is unrelated to how you print out the log information. It's related to the fact that the exception object itself collects and carries the stack trace information, which the runtime populates if and when an exception happens, whereas in Rust it's up to the programmer (or some library) to manually collect this information and either print it or add it to a custom error object, at every call site. The fact that uncaught exceptions get printed to stdout is the tiniest little bonus, and irrelevant for most programs: you shouldn't have uncaught exceptions in the first place. The important thing is that whenever you catch an exception, you know for sure that you'll have some useful diagnostic information about where exactly it occurred, regardless of who wrote the code between here and there.

Now on to your specific question.

First of all, I explicitly called out C++ exceptions as not having this useful property. C++ exceptions don't collect a stack trace, and the C++ runtime simply exits with an error code if an exception is thrown without a handler.

Now, moving to any other language with exceptions. What happens by default if executing in an environment without stdout will depend on details of the runtime of that language for that environment.

But let's assume that the runtime is not written to handle this gracefully. Here's the entirety of the code you need to add to your exception-based program to handle a lack of stdout and still get stack traces, in pseudo-code:

  int main() {
     try {
       return oldMain();
     } catch (Exception e) {
       with(File f = openFile("my-log.log")) {
         f.write("Unhandled exception:");
         e.printStackTrace(f);
       }
     }
  }
Where oldMain() is the main() you'd write for the same program if you did have stdout.


You seem to be arguing more for stack traces than for exceptions?

Rust can store backtraces in value objects as well [0]. It's opt-in (capturing a stack trace at the error value's creation may be expensive if that error is eventually handled), but with the anyhow crate you get a decent compromise: a stack trace is captured at the boundary of your program and libraries during the conversion, and then shown only if the error bubbles up to main().

And you get the bonus of storing both the stack trace, and relevant context where needed, e.g. to show values of parameters. Here's how that playground example above fails:

  Error: Second try
  
  Caused by:
      0: Parsing 'forty-two' as number
      1: invalid digit found in string
  
  Stack backtrace:
     0: anyhow::error::<impl core::convert::From<E> for anyhow::Error>::from
               at ./.cargo/registry/src/index.crates.io-6f17d22bba15001f/anyhow-1.0.94/src/backtrace.rs:27:14
     1: <core::result::Result<T,F> as core::ops::try_trait::FromResidual<core::result::Result<core::convert::Infallible,E>>>::from_residual
               at ./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/result.rs:2009:27
     2: playground::parse_number
               at ./src/main.rs:25:8
     3: playground::parse_and_increment
               at ./src/main.rs:18:18
     4: playground::main
               at ./src/main.rs:7:19
     ...
[0] https://play.rust-lang.org/?version=stable&mode=debug&editio...


Cool, I didn't know about RUST_BACKTRACE = 1. That addresses the core of my comment, yes. I will note that some runtimes (like Java or C#, I believe) don't compute the stack trace unless and until it is requested, which means that exceptions that are caught and handled without being logged shouldn't incur the performance cost - thus removing most of the reason you may want to have a way to disable this behavior.

I did know about anyhow, that was exactly the library I was mentioning. But that requires manually adding context at all places where the error is passed.


Exceptions thrown in both Java and .NET eagerly compute the stack trace and the text associated with it. They are also very expensive compared to the happy path. Historically, manually thrown exceptions in OpenJDK have been cheaper than .NET's (although .NET 9 makes them twice as cheap), while NPEs in OpenJDK are much more expensive than regular Java exceptions or .NET's NREs.

In Java, you can disable stack traces altogether, which massively reduces the cost (this is what e.g. Crafting Interpreters suggests; it's a good course, but the author is both wrong and actively misleading about the cost model of the implementations covered in parts 1 and 2 because of this), but few codebases do this.


> Easy way to use libraries

This is both a blessing and a curse. Seeing the rust docs require 561 crates makes it clear that rust/cargo is headed down the same path as node/npm

     Downloaded 561 crates (50.7 MB) in 5.21s (largest was `libsqlite3-sys` at 5.1 MB)


By "rust docs" you seem to mean "docs.rs, the website that hosts documentation for all crates in the Rust ecosystem", which is a little bit different than the impression you give.

It's a whole web service with crates.io webhooks to build and update documentation every time a crate gets updated; it tracks state in a database, stores data on S3, etc. Obviously if you just want to build some docs for one crate yourself you don't need any of that. The "rustdoc" command has a much smaller list of dependencies.


Cargo is 10 years old, and it's been working great. It has already proven that it's on a different path than npm.

* Rust has a strong type system, with good encapsulation and immutability by default, so the library interfaces are much less fragile than in JS. There's tooling for documenting APIs and checking SemVer compat.

* Rust takes stability more seriously than Node.js. Node makes SemVer-major releases regularly, and for a long time had awful churn from unstable C++ API.

* Cargo/crates-io has a good design, and a robust implementation. It had a chance to learn from npm's mistakes, and avoid them before they happened (e.g. it had a policy preventing left-pad from day one).

And the number of deps looks high, but it isn't what it seems. Rust projects tend to split themselves into many small packages, even when they are all part of the same project written by the same people.

Cargo makes all transitive dependencies very visible. In C you depend on pre-built dynamic libraries, so you just don't see what they depend on, and what their dependencies depend on.

For example, Rust's reqwest shows up as 150 transitive dependencies, but it has fewer supported protocols, fewer features, and less code overall than the single dependency that libcurl shows up as.


Almost all of the things that were wrong with NPM were self-inflicted: no namespacing of packages by default, allowing packages to be deleted/removed without approval, version ranges on install, a poor lock file implementation, and so on.

There's an argument to be made that there are too many packages from too many authors to trust everything. I don't find the argument to be too convincing, because we can play what-if games all day long, and if you don't want to use them, you get to write your own.


The issue is micro-packages. Instead of a few layers between the os and your code, you find yourself with a wide dependency tree, with so many projects that it’s impossible to audit.


The alternative, where everyone who uses a linked list has their own mostly-the-same but-just-different-enough list.c and list.h files that need separate auditing (if you care), isn't better.


If list.c is part of the project, it’s easier because you don’t have to hunt down every dependency’s repository. It’s much easier to audit and trust 5 projects/orgs, than 50 different entities.


When you work in Rust, in any IDE you can click through any type and see its implementation, even if it's within a dependency. No difference in auditing, except you also get the guarantee of `cargo vet`.


50 different dependencies covers a _lot_ more behaviour than a list.c. The point would be to audit a list package, and have audited it for all users, rather than all users needing to audit their own.


this is good actually


Add to this: trait system vs deep OOP.

Really nice macro system.

First class serde.

First class sync/send

Derives!


> First class serde.

What do you mean? `Serialize` and `Deserialize` are not part of std.


It's true, they're not part of the standard library. Nevertheless, it is conventional to provide implementations for things you reasonably expect your users might want to serialize and deserialize. Standard guidance includes telling you to name a feature flag (if you want one for this) serde and not something else so as to reduce extra work for your users.
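
For instance, the conventional shape is roughly this (a sketch; the Temperature type is made up):

  // Cargo.toml (sketch): serde = { version = "1", optional = true, features = ["derive"] }
  //                      [features] serde = ["dep:serde"]
  #[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
  pub struct Temperature {
      pub celsius: f64,
  }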

Because Rust's package ecosystem is more robust it's less anxious about the strict line between things everybody must have (in the standard library) and things most people want (maybe or maybe not in the standard library). In C++ there's a powerful urge to land everything you might need in the stdlib, so that it's available.

For example the FreeBSD base system includes C++. They're not keen on adding to their base system, so for example they seem disinclined to take Rust, but when each C++ ISO standard bolts in whatever new random nonsense well that's part of C++ so it's in the base system for free. Weird data structure a game dev wants? An entire linear algebra system from Fortran? Comprehensive SI unit systems? It's not up to the FreeBSD gatekeepers, a WG21 vote gets all of those huge requirements into FreeBSD anyway.


This was a conscious decision by Rust folks. Let the language and std libraries be small enough to target anything - and let well-established crates (most written/started by the rust folks) fill in functionality. The main language provided the baseline interfaces, in some cases (see async), but not the machinery (e.g., async runtimes).


FWIW C++ in FreeBSD is a little contentious. The overall system build time is dominated by Clang, with the rest of FreeBSD "a wart on the side." In base, the C++ compiler was pretty much only used for devd (something vaguely like Linux' udev), and devd is written in a pre-C++11 dialect -- no new features. Using more of it isn't exactly encouraged; it's not allowed in the kernel.

There are two significant barriers to Rust in FreeBSD base -- first, cultural: it's just a bunch of greybeards opposed to anything and everything new; and second, technical: Rust just doesn't (or didn't) have compiler backends for the same subset of platforms FreeBSD does (or did). (This situation is improving as FreeBSD finally drops official support for obsolete SPARC, 32-bit ARM, MIPS, and 32-bit PowerPC platforms, but obviously cultural barriers remain.)


"Applying Traits to the Smalltalk Collection Classes", 2003

https://rmod-files.lille.inria.fr/Team/Texts/Papers/Blac03a-...

Traits, as a CS concept, are part of the OOP paradigm.


Traits in Rust are more a variant of Haskell typeclasses than of Smalltalk traits.

The whole FP vs OOP distinction makes little sense these days, as it has mostly been shown that each concept from the one can neatly fit within the other and vice versa.
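
To make the comparison concrete, a small Rust sketch (with a made-up Wibble trait): like a Haskell typeclass instance, a brand-new trait can be implemented for a type that already exists, which classic OOP interfaces don't allow.

  trait Wibble {
      fn wib(&self) -> bool;
  }

  // i32 already exists in the standard library, yet it can be given an
  // implementation of this brand-new trait, much like `instance Wibble Int`.
  impl Wibble for i32 {
      fn wib(&self) -> bool {
          *self > 0
      }
  }

  fn main() {
      assert!(5_i32.wib());
  }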


Traits, as a CS concept, are part of the FP paradigm.

Reverse Uno!


Yes, and someone called Simon Peyton Jones happens to have a talk on how Haskell type classes and classical OOP interfaces interrelate.


Yes, and someone called Gabriella Gonzales happens to have a blog post on how objects are like comonads:

https://www.haskellforall.com/2013/02/you-could-have-invente...

And someone called Samuel the Bloggy Badger happens to have another blog post on how comonads are really more like neighborhoods:

https://gelisam.blogspot.com/2013/07/comonads-are-neighbourh...

...so it might all just be a scam!


The traits concept mentioned in your link looks very different from Rust traits. It describes something more akin to Java interfaces.


Java interfaces are based on Objective-C protocols.

The only big difference is how implementation is mapped into the trait specification.


And that's the problem isn't it? Rust traits are based on GHC type classes, not at all from either Java or Objective-C or Smalltalk.


Thankfully this fellow Simon Peyton Jones has a talk about how they map into OOP paradigm.

"Classes, Jim, But Not as We Know Them — Type Classes in Haskell: What, Why, and Whither"

https://www.microsoft.com/en-us/research/publication/classes...

"Adventure with Types in Haskell"

https://www.youtube.com/watch?v=6COvD8oynmI

https://www.youtube.com/watch?v=brE_dyedGm0

In the first lecture he discusses how Haskell relates to OOP with regard to subtyping and generic polymorphism, and how, although different on the surface, they share those CS concepts in their own ways.


No. Did you read the contents of the links you shared? The name of the slides in your first link is "Classes, Jim, but not as we know them". And let me quote from the slides in your first link:

From slide 40:

> So the links to intensional polymorphism are closer than the links to OOP.

From the first bullet of slide 43:

> No problem with multiple constraints

> f :: (Num a, Show a) => a -> ...

From the second bullet:

> Existing types can retroactively be made instances of new type classes (e.g. introduce new Wibble class, make existing types an instance of it):

> class Wibble a where

> wib :: a -> Bool

> instance Wibble Int where

> wib n = n+1

From slide 46:

> In Haskell you must anticipate the need to act on arguments of various type

> f :: Tree -> Int

> vs

> f’ :: Treelike a => a -> Int

> (in OO you can retroactively sub-class Tree)

From slide 50:

> In Java (ish):

> inc :: Numable -> Numable

> from any sub-type of Numable to any super-type of Numable

> In Haskell:

> inc :: Num a => a -> a

> Result has precisely same type as argument

I appreciate you sharing informative links even though they prove you wrong. I haven't seen this set of slides before but I find it a very good concise explanation of why Haskell classes are not traditional OOP classes or interfaces.


I didn't say they were exactly 100% the same thing, and from those videos, starting at 1:01:00, I recommend the section "Two approaches to polymorphism", including the overlapping set of features.


We are commenting on an article titled "Great things about Rust that aren't just performance" and it's clear to me that one of the great things being mentioned is how Rust approaches polymorphism that's different from the typical way in Java or Objective-C. So it is more important to highlight the differences rather than the similarities.

Think about it: if the Rust trait system were highly similar to Java interfaces, why would people rave about it?


There are shades of OOP, and while you're technically correct I think the meaning of my post is clear.


> Rust is basically a bag of sensible choices.

Mostly yes. In C/C++, the defaults are usually in the less safe direction for historical reasons.


It's not about less safe, the C++ defaults are usually just wrong. It's so well known that Phil Nash had to make clear whether he was giving the same talk about how all the defaults are wrong at CppCon or a different talk, otherwise who knows.

For some cases you can make an argument that the right default would have been safer. For mutability, for avoiding deductions, these are both sometimes footguns. But in other cases the right default isn't so much safer as just plain better: single-argument constructors should default to explicit, for example, and all the functions which qualify as constexpr might as well be constexpr by default; there's no benefit remaining for the contrary.

My favourite wrong default is the memory ordering. The default memory ordering in C++ is Sequentially Consistent. This default doesn't seem obviously wrong, what would have been better? Surely we don't want Relaxed? And we can't always mean Release, or Acquire, and in some cases the combination Acquire-Release means nothing, so that's bad too. Thus, how can Sequentially Consistent be the wrong default? Easy - having a default was wrong. All the options were a mistake, the moment the committee voted they'd already fucked up.


> Match needs to be exhaustive. When you add something to the enum you were matching, it chokes. This is good.

There’s a reason why ML and Haskell compilers generally have that as a warning by default and not an error: when you need a pipeline of small transformations of very similar languages, the easiest way to go is usually to declare one tree type that’s the union of all of them, then ignore the impossible cases at each stage. This takes the problem entirely out of the type system, true, but an ergonomic alternative for that hasn’t been invented, as far as I know. Well, aside from the micropass framework in Scheme, I guess, but that requires exactly the kind of rich macros that Rust goes out of its way to make ugly. (There have been other attempts in the Haskell world, like SYB, but I haven’t seen one that wouldn’t be awkward.)


> Move by default. If you came from c++, I think this makes a lot of sense. If you have a new language, don't bring the baggage.

Came from C++ and this is my least favorite part of the language ergonomics.


It actually doesn't come from C++, and what C++ has is worse; the history is interesting.

The move assignment semantic you see in Rust was also retrospectively termed "destructive" move because after the assignment A = B not only is the value from B now in A - that value is gone from B, B was in some sense "destroyed". If we write code which does A = B and then print(B) it won't compile! B is gone now.

Programmers actually really like that, it feels natural (with appropriate compiler support of course) and it doesn't have unexpected horrors to be uncovered.

In C++ they couldn't make that work (without destroying compatibility with existing C++ 98 code) so they invented their own C++ 11 "move" which is this more fundamental move plus making a new hollow object to go in B. This new hollow object allows the normal lifecycle of C++ 98 objects to happen as before - B goes out of scope, it gets destroyed.

So in C++ A = B; print(B) works - but it's not defined to do anything useful, you get some ready to clean up object, if B was a string maybe it's the empty string, if B was a remote file server then... maybe it's an "empty" remote file server? That's awkward.

It's worth understanding that the nicer Rust move isn't a novelty, or something people had no idea they wanted when C++ 11 was standardized, the "destructive" move already existed and was known to be a good idea - but C++ couldn't figure out a way to deliver it.
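
In Rust terms, a tiny sketch of the A = B; print(B) case:

  fn main() {
      let b = String::from("hello");
      let a = b;        // the String moves into `a`; `b` is, in effect, destroyed
      println!("{a}");  // fine
      println!("{b}");  // error[E0382]: borrow of moved value: `b`
  }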


I think the main motivation for adding move semantics to C++11 was performance, i.e. eliminating superfluous copies when passing a std::string temporary into a function.

std::move and std::forward are neat, though somewhat cumbersome compared to Rust. C++ scope and lifetime rules, plus the fact that std::move doesn't actually move, are real footguns.

There have been attempts to add destructive moves (Circle) but it's a long way from Rust's ergonomics.

I concur with the OP that move-by-default is where Rust shines.


Why? It makes you use smart pointers correctly from the start. Any big c++ codebase would do this anyway, except it isn't as error prone.


> Move by default. If you came from c++, I think this makes a lot of sense.

> Immutable by default.

In C++, these two fight each other. You can't (for the most part) move from something that's immutable.

How does Rust handle this? I assume it drops immutability upon the move, and that doesn't affect optimizations because the variable is unused thereafter?


In Rust, when you move out of a variable, that variable is now effectively out-of-scope; trying to access it will result in a compile error.

Mutability in Rust is an attribute of a location; not a value, so you can indeed move a value from an immutable location into a mutable one, thus "dropping immutability". (But you can only move out of a location that you have exclusive access to -- you can't move out of an & reference, for example -- so the effect is purely local.)
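
A small sketch of the "dropping immutability" part:

  fn main() {
      let v = vec![1, 2, 3]; // immutable binding
      let mut w = v;         // move into a mutable binding; `v` can no longer be used
      w.push(4);             // fine: mutability belongs to the new location
      println!("{:?}", w);
  }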


Yeah that sounds about like what I expected. Thanks!


Rust moves aren't quite the same as C++ moves; you can think of them more like a memcpy where the destructor (if there is one) doesn't get run on the original location. This means you can move an immutable object: the object itself doesn't have to do anything to be moved.


You can't refer to the old location, so there is no observable mutation. For example, you can't move out of something while a reference to it exists.


Not 100% sure but sounds like you want Pin<>?


> There's a few large firms that don't use exceptions in c++

Google: https://google.github.io/styleguide/cppguide.html#Exceptions


Just make sure you read the whole darn thing:

> Given that Google's existing code is not exception-tolerant, the costs of using exceptions are somewhat greater than the costs in a new project.

> ...Things would probably be different if we had to do it all over again from scratch.

It's quite ironic to cite the Google C++ Style Guide as somehow supporting the case against exceptions. It's saying the opposite: we would probably use exceptions, but it's too late now, and we can't.

Somehow people miss this...


I can't remember the last time I worked on a C++ code base at any company that used exceptions. This is for good reason; making some types of systems-y code exception-safe can be tricky to get right and comes with a performance cost. For many companies the juice is not worth the squeeze.


> This is for good reason; making some types of systems-y code exception-safe can be tricky to get right and comes with a performance cost. For many companies the juice is not worth the squeeze.

Those types of systems-y code can avoid exceptions if they want. Nobody said exceptions are a panacea. The alternative error models have their own performance and other problems, and those can manifest differently to other types of codebases.


Exceptions in C++ are a footgun. Even the top C++ gurus/leaders know this and are trying to find some new solution:

https://www.youtube.com/watch?v=os7cqJ5qlzo


Thanks for the 1-hour video. Could you link to the timestamp of the strongest argument(s) you see in the video that are relevant in the current discussion (i.e. the existing error models we're talking about in Rust and C++, rather than a hypothetical future one)?

Just from a quick glance: I see he's talking about things like stack overflows and std::bad_alloc. In a discussion like this, those two are probably the worst examples of exceptions. They're the most severe exceptions, the ones the fewest people care to actually catch, and the ones that error codes are possibly the worst at handling anyway. (Do you really want an error returned from push_back?) The most common stuff is I/O errors, permission errors, format errors, etc., which aren't well represented by resource exhaustion at all, much less memory exhaustion.

P.S. W.r.t. "the top C++ gurus/leaders" - Herb is certainly talented, but I should note that the folks who wrote Google's style guide are... not amateurs. They have been involved in the language development and standardization process too. And they're just as well aware of the benefits and footguns as anyone.


The general problem cited with exceptions is that they're un-obvious control flow. The impact it has is clearer in Rust, because of the higher bar it sets for safety/correctness.

As a specific example, and this is something that's been a problem in the std lib before. When you code something that needs to maintain an invariant, e.g. a length field for an unsafe operation, that invariant has to be upheld on every path out of your function.

In the absence of exceptions, you just need to make sure your length is correct on returns from your function.

With exceptions, exits from your function are now any function call that could raise an exception; this is way harder to deal with in the general case. You can add one exception handler to your function, but it needs to deal with fixing up your invariant wherever the exception occurred (e.g. if the fix-up operation that needs to happen is different based on where in your function the exception occurred).

To avoid that you can wrap every call that can cause an exception so you can do the specific cleanup that needs to happen at that point in the function... But at that point what's the benefit of exceptions?


> With exceptions, exits from your function are now any function call that could raise an exception; this is way harder to deal with in the general case. You can add one exception handler to your function [...] To avoid that you can wrap every call [...]

That's the wrong way to handle this though. The correct way (in most cases) is with RAII. See scope guards (std::experimental::scope_exit, absl::Cleanup, etc.) if you need helpers. Those are not "way harder" to deal with, and whether the control flow out of the function is obvious or not is completely irrelevant to them -- in fact, that's kind of their point.

In fact, they're better than both exception handling and error codes in at least one respect: they actually put the cleanup code next to the setup code, making it harder for them to go out of sync.


None of those are easier than not needing to do it at all though; if your function's exits are only where you specify, you can clean up just once, on those paths.


> None of those are easier than not needing to do it at all though; if your function's exits are only where you specify, you can clean up just once, on those paths.

Huh? I don't get it. This:

  stack.push_back(k);
  absl::Cleanup _ = [&] { assert(stack.back() == k); stack.pop_back(); };
  if (foo()) {
    printf("foo()\n");
    return 1;
  }
  if (bar()) {
    printf("bar()\n");
    return 2;
  }
  baz();
  return 3;
is easier, more readable, and more robust than:

  stack.push_back(k);
  if (foo()) {
    printf("foo()\n");
    assert(stack.back() == k);
    stack.pop_back();
    return 1;
  }
  if (bar()) {
    printf("bar()\n");
    assert(stack.back() == k);
    stack.pop_back();
    return 2;
  }
  baz();
  assert(stack.back() == k);
  stack.pop_back();
  return 3;
as well as:

  stack.push_back(k);
  auto pop_stack = [&] { assert(stack.back() == k); stack.pop_back(); };
  if (foo()) {
    printf("foo()\n");
    pop_stack();
    return 1;
  }
  if (bar()) {
    printf("bar()\n");
    pop_stack();
    return 2;
  }
  baz();
  pop_stack();
  return 3;
and unlike the others, it avoids repeating the same code three times.

(Ironically, I missed the manual cleanups before the final returns in the last two examples right as I posted this comment. Edited to fix now, but that itself should say something about which approach is actually more bug-prone...)


I can't parse this super well on mobile, but what invariant is this maintaining? I was imagining a function that manipulated a collection, and e.g. needed to decrement a length field to ensure the observable elements remained valid, then increment it, then do something else.

The gnarliest scenario I recall was a ring-buffer implementation that relied on a field always being within the valid length, and a single code path not performing a mod operation, which was only observably a problem after a specific sequence of reserving, popping, and pushing.

EDIT: oh, I think I see; is your code validating the invariant, or maintaining it?


> I can't parse this super well on mobile, but what invariant is this maintaining.

The stack length (and contents, too). It pushes, but ensures a pop occurs upon returning. So the stack looks the same before and after.

> I was imagining a function that manipulated a collection, and e.g. needed to decrement a length field to ensure the observable elements remained valid, then increment it, then do something else.

That is exactly what the code is doing.

> EDIT: oh, I think I see; is your code validating the invariant, or maintaining it?

Both. First it manipulates the stack (pushing onto it), then it does some stuff. Then before returning, it validates that the last element is still the one pushed, then pops that element, returning the stack to its original length & state.

> The gnarliest scenario I recall was a ring-buffer implementation that [...]

That sounds like the kind of thing scope guards would be good at.


Then I think the counter-example is where function calls that can't fail are interspersed. Those are the cases where with exceptions (outside checked exceptions) you have to assume they could fail, and in a language without exceptions you can rely on them not to fail, and skip adding any code to maintain the invariant between them.

E.g. in the case you provided, if pop & push couldn't fail, that would just be two calls in sequence.


I still don't follow, I'm sorry.

> E.g. in the case you provided, if pop & push couldn't fail, that would just be two calls in sequence.

I have no idea what you mean here. Everything in the comment would be exactly the same even if stack.push_back() was guaranteed to succeed (maybe due to a prior stack.reserve()). And those calls aren't occurring in sequence, one is occurring upon entrance and the other upon exit. Perhaps you're confused what absl::Cleanup does? Or I'm not sure what you mean.

I think you're going to have to give a code example if/when you have the chance, to illustrate what you mean.

But also, even if you find "a counterexample" where something else is better than exceptions, that just means you finally found a case where there's a different tool for a (different) job. Just like how me finding a counterexample where exceptions are better doesn't mean exceptions are always better. You simply can't extrapolate from that to exceptions being bad in general, which is kind of my whole point.


Apologies, I believe I meant if the foo/bar/baz calls couldn't fail. If there are no exceptions, you don't need the cleanup block, but in the presence of exceptions you have to assume they (and all calls) can fail.

The problem re. there being a counter-example to exceptions (as implemented in C++) is that they're not opt-in or opt-out where it makes sense. At least as I understand it, there's no way for foo/bar/baz to guarantee to you that they can't throw an exception, so that you can rely on it (e.g. in a way where, if this changes, you get a compiler error because something you were relying on has changed). noexcept just results in the process being terminated on exception, right?


> I meant if the foo/bar/baz calls couldn't fail. If there's no exceptions, you don't need the cleanup block

First, I think you're making an incorrect assumption -- the assumption that "if (foo())" means "if foo() failed". That's not what it means at all. They could just as well be infallible functions doing things like:

  if (tasks.empty()) {
    printf("Nothing to do\n");
    return 1;
  }
or

  if (items.size() == 1) {
    return items[0];
  }
Second, even ignoring that, you'd still need the cleanup block! The fact that it is next to the setup statement (i.e. locality of information) is extremely important from a code health and maintainability standpoint. Having setup & cleanup be a dozen lines away from each other makes it far too easy to miss or forget one of them when the other is modified. Putting them next to each other prevents them from going out of sync and diverging over time.

Finally, your foo() and bar() calls can actually begin to fail in the future when more functionality is added to them. Heck, they might call a user callback that performs arbitrary tasks. Then your code will break too. Whereas with the cleanup block, it'll continue to be correct.

What you're doing is simplifying code by making very strong and brittle -- not to mention unguaranteed in almost all cases -- assumptions on how it looks during initial authorship, and assuming the code will remain static throughout the lifetime of your own code. In that context, putting them together seems "unnecessary", yeah. But point-in-time programming is not software engineering. The situation is radically different when you factor in what can go wrong during updates and maintenance.


> Moreover, your foo() and bar() calls can actually begin to fail in the future when more functionality is added to them. Heck, they might call a user callback that performs arbitrary tasks. Then your code will break too. Whereas with the cleanup block, it'll continue to be correct.

In a language without exceptions, I'm also assuming that a function conveys whether it can fail via its prototype; in Rust, changing a function from "returns nothing" to "returns a Result" will result in a warning that you're not handling it.

> What you're doing is simplifying code by making very strong assumptions on how it looks during initial authorship, and assuming the code will remain static throughout the lifetime of your own code.

But this is where the burden of exceptions is most pronounced; if you code as if everything can fail, there's no "additional" burden, you're paying it all the time. The case you're missing is on the simpler side, where it's possible for something not to fail, and where, if that changes, your compiler tells you.

It can even become quite a great boon, because infallibility is transitive; if every operation you do can't fail, you can't fail.


No. I've mentioned this multiple times but I feel like you're still missing what I'm saying about maintainability. (You didn't even reply to it at all.)

To be very clear, I was explaining why, even if you somehow have a guarantee here that absolutely nothing ever fails, this code:

  stack.push_back(k);
  absl::Cleanup _ = [&] { assert(stack.back() == k); stack.pop_back(); };
  foo();
  bar();
  baz();
  return 3;
is still better than this code w.r.t. maintainability and robustness:

  stack.push_back(k);
  foo();
  bar();
  baz();
  assert(stack.back() == k);
  stack.pop_back();
  return 3;
The reason, as I explained above, is the following:

>> The fact that it is next to the setup statement (i.e. locality of information) is extremely important from a code health and maintainability standpoint. Having setup & cleanup be a dozen lines away from each other makes it far too easy to miss or forget one of them when the other is modified. Putting them next to each other prevents them from going out of sync and diverging over time.

Fallibility is absolutely irrelevant to this point. It's about not splitting the source of truth into two separate spots in the code. This technique kills multiple birds at once, and handling errors better in the aforementioned cases is merely one of its benefits, but you should be doing it regardless.

Do you see what I mean?


I do, but I'm still expecting things to be more complicated than that example.

For instance, this is the scenario I expect to be harder to manage with exceptions & cleanup:

  this.len += 1;
  foo();
  this.len += 1;
  bar();
  this.len += 1;
  baz();
  return ...;

Without infallibility, you need a separate cleanup scope for each call you make. With this, the change to the private variable is still next to the operation that changes it, you just don't need to manage another control flow at the same time.

EDIT: sorry, had the len's in the wrong spot before


> I do, but I'm still expecting things to be more complicated than that example.

They're not. I've done this all the time, in the vast majority of cases it's perfectly fine. It sounds like you might not have tried this in practice -- I would recommend giving it a shot before judging it, it's quite an improvement in quality of life once you're used to it.

But in any large codebase you're going to find occasional situations complicated enough to obviate whatever generic solution anyone made for you. In the worst case you'll legitimately need gotos or inline assembly. That's life, nobody says everything has a canned solution. You can't make sweeping arguments about entire coding patterns just because you can come up with the edge cases.

> Without infallibility, you need a separate cleanup scope for each call you make.

So your goal here is to restore the length, and you're assuming everything is infallible (as inadvisable as that often is)? The solution is still pretty darn simple:

  absl::Cleanup _ = [&, old_len = len] { len = old_len; };
  foo();
  this.len += 1;
  bar();
  this.len += 1;
  baz();
  this.len += 1;
  return ...;
No need for a separate cleanup for every increment.


We may have to agree to disagree. I'm trying to convey a function that would need a different cleanup to occur after each call if they were to fail, e.g. reducing the len by one (though that is the same here too).


> We may have to agree to disagree. I'm trying to convey a function that would need a different cleanup to occur after each call if they were to fail, e.g. reducing the len by one (though that is the same here too).

Your parenthetical is kind of my point though. It's rare to need mid-function cleanups that somehow contradict the earlier ones (because logically this often doesn't make sense), and when that is legitimately necessary, those are also fairly trivial to handle in most cases.

I'm happy to just agree to disagree and avoid providing more examples for this so we can lay the discussion to rest, so I'll leave with this: try all of these techniques -- not necessarily at work, but at least on other projects -- for a while and try to get familiar with their limitations (as well as how you'd have to work around them, if/when you encounter them) before you judge which ones are better or worse. Everything I can see mentioned here, I've tried in C++ for a while. This includes the static enforcement of error handling that you mentioned Rust has. (You can get it in C++ too, see [1].) Every technique has its limitations, and I know of some for this, but overall it's pretty decent and kills a lot of birds with one stone, making it worth the occasional cost in those rare scenarios. I can even think of other (stronger!) counterarguments I find more compelling against exceptions than the ones I see cited here, but even then I don't think they warrant avoiding exceptions entirely.

If there's one thing I've learned, it's that (a) sweeping generalizations are wrong regardless of the direction they're pointed at, as they often are (this statement itself being an exception), and (b) there's always room for improvement nevertheless, and I look forward to better techniques coming along that are superior to all the ones we've discussed.

[1] https://godbolt.org/z/c9KM6dj95


>Just from a quick glance: I see he's talking about things like stack overflows and std::bad_alloc.

There are specific scenarios that are a major issue, yes. But as the title of the video implies, the problem with exceptions runs far deeper. Imagine being a C++ library author who wants to support as many users as possible: you simply couldn't use exceptions even if you wanted to, even if most of your users are using exceptions. The end result is that projects that use exceptions have to deal with two different methods of error handling, i.e. they get the worst of both worlds (the binary footprint of exceptions, the overhead of constantly checking error codes, and the mental overhead of dealing with it all).

C++ exceptions are a genuinely useful language feature. But I wish the language and standard library weren't designed around exceptions. C++ has managed to displace C almost everywhere except embedded and/or kernel programming, and exceptions are a big reason it hasn't won there too.


> Imagine being a C++ library author who wants to support as many users as possible, you simply couldn't use exceptions even if you wanted to

I'm pretty sure that (much) less than 50% of the C++ code out there is "a C++ library that wants to support as many users as possible" -- I imagine most code is application code, not even C++ library code in the first place. It's perfectly fine to throw e.g. a "network connection was closed" or "failed to write to disk" exception and then catch it somewhere up the stack.

> The end result is that projects that use exceptions have to deal with two different methods of error handling. i.e. they get the worst of both worlds

No, that's not true. You might get a bit of marginal overhead to think about, but it's not the worst of both whatsoever. If you want to use exceptions and your library doesn't use them, all you gotta do is wrap the foo() call in CheckForErrors(foo()), and then handle it (if you want to handle it at all) at the top level of your call chain. It's not the worst of both worlds at all -- in fact it's literally less work than simply writing

  std::expected<Result, std::error_code> e = foo();
and on top of that you get to avoid the constant checking of error codes and modifying every intermediate caller, leaving their code much simpler and more readable.

And of course if you don't want to use exceptions but your library does use them, then of course you can do the reverse:

  std::expected<Result, std::error_code> e = CallAndCatchError(foo());
Nobody is claiming every error should be an exception. I'm just saying you're exaggerating and extrapolating the arguments too far. A sane project would have a mix of different error models, and that would very much still be the case if none of the problems you mentioned existed at all, because they're different tools solving different problems.


> Do you really want an error returned from push_back?

For most people, no, you definitely want it to just work or explode, which is indeed what happens in normal Rust, and which, not coincidentally, is also the actual effect when this exception happens in your typical C++ application, after it is done with all the unwinding and discovers there is no handler (or that the handler was never tested and doesn't actually cope).

But sometimes that is what you want, and Linus has been very clear it's what he wants in the kernel he created.

For such purposes Rust has Vec::try_reserve() and Vec::push_within_capacity(), which let us express the idea that we'd like more room and to know if that wasn't possible, and also that if there was no room left for the thing we pushed, we want back the thing we were trying to push, which otherwise we wouldn't have any more.

There is no analogous C++ API, std::vector just throws an exception and good luck to you AFAIK.
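
To make the shape of those APIs concrete, here is a rough sketch built on the stable try_reserve (push_within_capacity itself is, as far as I know, still nightly-only at the time of writing); try_push is just a made-up helper name:

    // Sketch: fallible push built on the stable Vec::try_reserve.
    // Unlike push_within_capacity (which never allocates), this one tries to
    // allocate, and hands the value back to the caller if that fails.
    fn try_push<T>(vec: &mut Vec<T>, value: T) -> Result<(), T> {
        match vec.try_reserve(1) {
            Ok(()) => {
                vec.push(value); // capacity is reserved, so this push cannot allocate
                Ok(())
            }
            Err(_) => Err(value), // allocation failed: give the value back, don't abort
        }
    }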


> For such purposes Rust has Vec::try_reserve() and Vec::push_within_capacity() [...] There is no analogous C++ API, std::vector just throws an exception and good luck to you AFAIK.

https://godbolt.org/z/6xE6jr3zr ?


I guess this is an attempt at Vec::push_within_capacity ? Your function takes a reference and then tries to copy the referenced object into the growable array. But of course nobody said this object can be copied - after all we want it back if it won't fit so perhaps it's unique or expensive to make.


> I guess this is an attempt at Vec::push_within_capacity?

Sure, yes. It's trivial to change to try_reserve if that's what you want. (There are other solutions for that as well, but they're more complicated and better for other situations.)

> Your function takes a reference and then tries to copy the referenced object into the growable array. But of course nobody said this object can be copied - after all we want it back if it won't fit so perhaps it's unique or expensive to make

Just extend it to allow moves then? It's pretty trivial. (Are you familiar with move semantics in C++?)


But how? I did attempt this before I replied, but of course before long I had inexplicable segfaults, and we're not in a thread about those problems with C++.

I can't see how to make that work, but I also can't say for sure it's impossible; all I can tell you is that I was genuinely trying, and all I got for my trouble was a segfault that I don't understand and couldn't fix.

Edited to add: In case it helps the signature we want is:

    pub fn push_within_capacity(&mut self, value: T) -> Result<(), T>
If you're not really a Rust person: this takes a value T, not a reference, not a magic ultra-hyper-reference, nor a pointer; it takes the value T itself, so the caller's value is gone now, which just isn't a thing in C++. It then returns either Ok(()), which signifies that this worked, or Err(T), thus giving back the T because we couldn't push it.
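
A tiny illustration of that ownership round trip (push_within_capacity is, as far as I know, still nightly-only, hence the feature attribute; the values are made up):

    #![feature(vec_push_within_capacity)] // nightly-only at the time of writing

    fn main() {
        let mut v: Vec<String> = Vec::with_capacity(1);

        // The String is moved in; on success we simply don't have it any more.
        v.push_within_capacity("first".to_string()).unwrap(); // fine: we reserved room for one

        // No spare capacity left: the second String is moved back out via Err.
        match v.push_within_capacity("second".to_string()) {
            Ok(()) => {}
            Err(s) => println!("no room, got {s:?} back"),
        }
    }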


I'm sorry I don't think I understand the problem you're trying to illustrate. I'm not sure why you're emphasizing value vs. reference, but even if that's what you want, this works just fine: https://godbolt.org/z/P8EGPYWW5


Well the good news is that now I realise the biggest problem in my previous attempt was that I forgot that C++ types which can't be copy constructed also, by default, can't be moved, so I'd actually made it impossible to use my example type. I still don't know why I had segfaults, but I don't care now.

I agree that your new code does roughly what you'd do in C++ if you wanted this, but you get to the same place as before -- if for example you try commenting out your allocation failure boolean, the code just blows up now.

There are lots of APIs like this which make sense in Rust but not in C++ because if you write them in Rust the programmer is going to handle edge cases properly, but in C++ the programmer just ignores the edge cases so why bother.


> I agree that your new code does roughly what you'd do in C++ if you wanted this, but you get to the same place as before -- if for example you try commenting out your allocation failure boolean, the code just blows up now. There are lots of APIs like this which make sense in Rust but not in C++ because if you write them in Rust the programmer is going to handle edge cases properly, but in C++ the programmer just ignores the edge cases so why bother.

Er... doesn't this blow up in Rust? https://godbolt.org/z/eaaq43voT

  pub fn main() {
    let mut vec = Vec::new();
    return vec.push_within_capacity(1).unwrap();
  }


Almost, it panics because we didn't handle the error case. Of course this won't pass review because we explicitly just said "I won't handle this" and the reviewer can see that - whereas the C++ programmer wordlessly allowed this. Subtle, isn't it.

"But I can write correct C++" is trivially true because it's a Turing Complete language, and at the same time entirely useless unless you're playing "Um, actually".


> Almost, it panics because we didn't handle the error case. Of course this won't pass review because we explicitly just said "I won't handle this" and the reviewer can see that - whereas the C++ programmer wordlessly allowed this. Subtle, isn't it. "But I can write correct C++" is trivially true because it's a Turing Complete language, and at the same time entirely useless unless you're playing "Um, actually".

I'm sorry, what? How in the world did you go from "exceptions are worse than error codes" to "that's why Linus doesn't like C++, he wants to write push_within_capacity() in C++ without exceptions and it's impossible" to "oh but your version doesn't move" to "oh I guess moving is possible too... but if you modified it to be buggy then it would crash" to "oh I see Rust would crash too... but it's OK because Rust programmers wouldn't actually let .unwrap() through code review"?? Aren't there .unwrap() calls in the standard library itself, never mind other libraries? So next we have "Oh I guess .unwrap() actually does get through code review... but it's OK because Rust programmers wouldn't write such bugs, unlike C++ programmers"?


I don't remember telling you "Exceptions are worse than error codes" as these both seem like bad ideas from people with either a PDP-11 or no imagination or both. Result isn't an error code. std::expected isn't an error code either.

Among the things Linus doesn't like about C++ are its quiet allocations and its hidden control flow, both of which are implicated here - I think those are both bad ideas too, but in this case I'm just the messenger, I didn't write an OS kernel (at least, not a real one people use) so I don't need a way to handle not being able to push items onto a growable array.

The problem isn't that "if you modified it to be buggy then it would crash" as you've described; the problem is that only your toy demo works. Once we modify unrelated things, like no longer setting that global to true, the demo blows up spectacularly (Undefined Behaviour), whereas of course the Rust just reported an error.

> Aren't there .unwrap() calls in the standard library itself

Unsurprisingly an operating system kernel does not use std, only core and some of alloc. So we're actually talking only about core† and alloc, not the rest of std. There are indeed a few places where core calls unwrap(), cases where we know that'll do what we meant, so if you wrote out what you meant by hand, Clippy (at least if we weren't in core) would say you should just write unwrap here instead.

† As a C++ person you can think of core as equivalent to the C++ standard library "freestanding" mode. This is more true in the very modern era because reformists got a lot of crucial improvements into this mode whereas for years it had felt abandoned. So if you mostly work with say C++ 17, think "freestanding" but actually properly maintained.

We can't write unwrap here because it's not what we meant, so that's why it shouldn't pass review.
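
For anyone unfamiliar with that split, a minimal sketch of what a no_std crate looks like (the function is just a made-up example):

    #![no_std]              // no std: nothing that needs an OS (files, threads, ...)
    extern crate alloc;     // opt back in to the allocator-dependent parts

    use alloc::vec::Vec;    // Vec, Box, String, ... live in alloc
    use core::cmp::max;     // core is the freestanding base layer: no allocation at all

    pub fn clamped_push(v: &mut Vec<u32>, x: u32, floor: u32) {
        v.push(max(x, floor));
    }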


> exceptions in C++ are a foot gun

How are they a foot gun? It's not like C++ is the only language with exceptions. So what is particularly dangerous about C++ exceptions?

> trying to find some new solution

C++23 already has std::expected (= result type).


> Testing is part of the code, doesn't seem tacked on like it does in c++.

Or most languages! Many could easily imitate it too. I'd love a pytest mode or similar framework for Python that looked for doc tests and had a 'ModTest' class or something.
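
For reference, this is roughly what "part of the code" looks like in Rust: both the doc example and the test module below get picked up by cargo test (the crate and function names are made up):

    /// Doubles a number.
    ///
    /// ```
    /// assert_eq!(mycrate::double(2), 4);
    /// ```
    pub fn double(x: i32) -> i32 {
        x * 2
    }

    #[cfg(test)]
    mod tests {
        #[test]
        fn doubles_odd_numbers_too() {
            assert_eq!(super::double(3), 6);
        }
    }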


So when are we going to get a proper application (not systems) programming language with all these nice things about Rust?


agree on all these though i ended up using golang for faster development



