> I can't parse this super well on mobile, but what invariant is this maintaining.
The stack length (and contents, too). It pushes, but ensures a pop occurs upon returning. So the stack looks the same before and after.
> I was imagining a function that manipulated a collection, and e.g. needed to decrement a length field to ensure the observable elements remained valid, then increment it, then do something else.
That is exactly what the code is doing.
> EDIT: oh, I think I see; is your code validating the invariant, or maintaining it?
Both. First it manipulates the stack (pushing onto it), then it does some stuff. Then before returning, it validates that the last element is still the one pushed, then pops that element, returning the stack to its original length & state.
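To make the shape of this concrete: below is a minimal sketch of the pattern being described, using a hand-rolled scope guard in place of absl::Cleanup (the struct name `Cleanup`, the function `with_pushed`, and the global `stack` are all illustrative, not from the original code).

```cpp
#include <cassert>
#include <vector>

// Hand-rolled stand-in for absl::Cleanup: runs a callback at scope exit.
template <class F>
struct Cleanup {
  F fn;
  ~Cleanup() { fn(); }
};
template <class F> Cleanup(F) -> Cleanup<F>;

std::vector<int> stack;

void with_pushed(int value) {
  stack.push_back(value);  // manipulate the invariant on entry...
  Cleanup pop{[&] {
    // ...and on every exit path, validate it, then restore the
    // original length & contents.
    assert(!stack.empty() && stack.back() == value);
    stack.pop_back();
  }};
  // ... arbitrary work here; early returns and throws are all covered ...
}
```

Whatever happens in the body, the guard's destructor runs on every exit path, so the stack looks the same before and after the call.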
> The gnarliest scenario I recall was a ring-buffer implementation that [...]
That sounds like the kind of thing scope guards would be good at.
Then I think the counter-example is where function calls that can't fail are interspersed. Those are the cases where with exceptions (outside checked exceptions) you have to assume they could fail, and in a language without exceptions you can rely on them not to fail, and skip adding any code to maintain the invariant between them.
E.g. in the case you provided, if pop & push couldn't fail, that would just be two calls in sequence.
> E.g. in the case you provided, if pop & push couldn't fail, that would just be two calls in sequence.
I have no idea what you mean here. Everything in the comment would be exactly the same even if stack.push_back() was guaranteed to succeed (maybe due to a prior stack.reserve()). And those calls aren't occurring in sequence, one is occurring upon entrance and the other upon exit. Perhaps you're confused about what absl::Cleanup does? Or I'm not sure what you mean.
I think you're going to have to give a code example if/when you have the chance, to illustrate what you mean.
But also, even if you find "a counterexample" where something else is better than exceptions, that just means you finally found a case where there's a different tool for a (different) job. Just like how me finding a counterexample where exceptions are better doesn't mean exceptions are always better. You simply can't extrapolate from that to exceptions being bad in general, is kind of my whole point.
Apologies, I believe I meant if the foo/bar/baz calls couldn't fail. If there's no exceptions, you don't need the cleanup block, but in the presence of exceptions you have to assume they (and all calls) can fail.
The problem re. there being a counter-example to exceptions (as implemented in C++) is that they're not opt-in or opt-out where it makes sense. At least as I understand it, there's no way for foo/bar/baz to guarantee to you that they can't throw an exception, such that you can rely on it (e.g. so that if this changes, you get a compiler error telling you that something you were relying on has changed). noexcept just results in the process being terminated if an exception is thrown, right?
> I meant if the foo/bar/baz calls couldn't fail. If there's no exceptions, you don't need the cleanup block
First, I think you're making an incorrect assumption -- the assumption that "if (foo())" means "if foo() failed". That's not what it means at all. They could just as well be infallible functions doing things like:
if (tasks.empty()) {
  printf("Nothing to do\n");
  return 1;
}
or
if (items.size() == 1) {
  return items[0];
}
Second, even ignoring that, you'd still need the cleanup block! The fact that it is next to the setup statement (i.e. locality of information) is extremely important from a code health and maintainability standpoint. Having setup & cleanup be a dozen lines away from each other makes it far too easy to miss or forget one of them when the other is modified. Putting them next to each other prevents them from going out of sync and diverging over time.
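As a sketch of that locality argument: in the hypothetical function below (the `Cleanup` helper stands in for absl::Cleanup, and `process`, `tasks`, and `depth` are invented names), every early return would otherwise need its own copy of the cleanup statement, a dozen lines from the setup.

```cpp
#include <cassert>
#include <cstdio>
#include <vector>

// Hypothetical stand-in for absl::Cleanup.
template <class F>
struct Cleanup {
  F fn;
  ~Cleanup() { fn(); }
};
template <class F> Cleanup(F) -> Cleanup<F>;

std::vector<int> tasks = {7};
int depth = 0;  // hypothetical counter the function must restore

int process() {
  ++depth;                          // setup...
  Cleanup guard{[] { --depth; }};   // ...and its cleanup, side by side

  // Each early return below would otherwise need its own `--depth;`,
  // and a future edit adding another return could easily forget it.
  if (tasks.empty()) {
    printf("Nothing to do\n");
    return 1;
  }
  if (tasks.size() == 1) {
    return tasks[0];
  }
  return 0;
}
```

The point is that the setup and its inverse can never drift apart, no matter how the body between them evolves.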
Finally, your foo() and bar() calls can actually begin to fail in the future when more functionality is added to them. Heck, they might call a user callback that performs arbitrary tasks. Then your code will break too. Whereas with the cleanup block, it'll continue to be correct.
What you're doing is simplifying code by making very strong and brittle -- not to mention unguaranteed in almost all cases -- assumptions on how it looks during initial authorship, and assuming the code will remain static throughout the lifetime of your own code. In that context, putting them together seems "unnecessary", yeah. But point-in-time programming is not software engineering. The situation is radically different when you factor in what can go wrong during updates and maintenance.
> Moreover, your foo() and bar() calls can actually begin to fail in the future when more functionality is added to them. Heck, they might call a user callback that performs arbitrary tasks. Then your code will break too. Whereas with the cleanup block, it'll continue to be correct.
In a language without exceptions, I'm also assuming that a function conveys whether it can fail via its prototype; in Rust, changing a function from "returns nothing" to "returns a Result" will result in a warning that you're not handling it.
> What you're doing is simplifying code by making very strong assumptions on how it looks during initial authorship, and assuming the code will remain static throughout the lifetime of your own code.
But this is where the burden of exceptions is most pronounced; if you code as if everything can fail, there's no "additional" burden, because you're paying it all the time. The case you're missing is on the simpler side, where it's possible for something not to fail, and where, if that changes, your compiler tells you.
It can even become quite a great boon, because infallibility is transitive; if every operation you do can't fail, you can't fail.
No. I've mentioned this multiple times but I feel like you're still missing what I'm saying about maintainability. (You didn't even reply to it at all.)
To be very clear, I was explaining why, even if you somehow have a guarantee here that absolutely nothing ever fails, this code is still better written with the cleanup block.
The reason, as I explained above, is the following:
>> The fact that it is next to the setup statement (i.e. locality of information) is extremely important from a code health and maintainability standpoint. Having setup & cleanup be a dozen lines away from each other makes it far too easy to miss or forget one of them when the other is modified. Putting them next to each other prevents them from going out of sync and diverging over time.
Fallibility is absolutely irrelevant to this point. It's about not splitting the source of truth into two separate spots in the code. This technique kills multiple birds at once, and handling errors better in the aforementioned cases is merely one of its benefits, but you should be doing it regardless.
Without infallibility, you need a separate cleanup scope for each call you make. With this, the change to the private variable is still next to the operation that changes it, you just don't need to manage another control flow at the same time.
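One way that multi-step case can look, sketched with a hand-rolled guard in place of absl::Cleanup (the `grow_twice` function, the `active` disarm flag, and the global `len` are all invented for illustration): each change to `len` sits directly next to the guard that undoes it, and a failure at any point rolls back exactly the steps completed so far.

```cpp
#include <cassert>
#include <stdexcept>

// Hypothetical stand-in for absl::Cleanup, with a disarm flag so the
// rollback can be cancelled once the whole operation succeeds.
template <class F>
struct Cleanup {
  F fn;
  bool active = true;
  ~Cleanup() { if (active) fn(); }
};
template <class F> Cleanup(F) -> Cleanup<F>;

int len = 0;

void grow_twice(bool fail_midway) {
  ++len;
  Cleanup undo1{[] { --len; }};  // undo sits next to the change it undoes

  if (fail_midway) throw std::runtime_error("boom");

  ++len;
  Cleanup undo2{[] { --len; }};

  // Success: keep both changes by disarming the guards.
  undo2.active = false;
  undo1.active = false;
}
```

If the throw fires between the two increments, only `undo1` runs during unwinding, so `len` is restored without any per-call-site control flow in the body.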
EDIT: sorry, had the len's in the wrong spot before
> I do, but I'm still expecting things to be more complicated than that example.
They're not. I've done this all the time, in the vast majority of cases it's perfectly fine. It sounds like you might not have tried this in practice -- I would recommend giving it a shot before judging it, it's quite an improvement in quality of life once you're used to it.
But in any large codebase you're going to find occasional situations complicated enough to defeat whatever generic solution anyone made for you. In the worst case you'll legitimately need gotos or inline assembly. That's life; nobody says everything has a canned solution. You can't make sweeping arguments about entire coding patterns just because you can come up with edge cases.
> Without infallibility, you need a separate cleanup scope for each call you make.
So your goal here is to restore the length, and you're assuming everything is infallible (as inadvisable as that often is)? The solution is still pretty darn simple:
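A sketch of what such a solution might look like (not the original code from the thread; the `Cleanup` helper stands in for absl::Cleanup, and `with_shrunken_len` and the global `len` are invented names): save the length once, and restore it unconditionally at scope exit.

```cpp
#include <cassert>

// Hypothetical stand-in for absl::Cleanup.
template <class F>
struct Cleanup {
  F fn;
  ~Cleanup() { fn(); }
};
template <class F> Cleanup(F) -> Cleanup<F>;

int len = 10;  // hypothetical length field on some collection

void with_shrunken_len() {
  const int saved = len;
  --len;  // temporarily hide the last element...
  Cleanup restore{[&] { len = saved; }};  // ...and restore on every exit
  // ... work with the shortened view ...
}
```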
We may have to agree to disagree. I'm trying to convey a function that would need a different cleanup to occur after each call if they were to fail, e.g. reducing the len by one (though that is the same here too).
> We may have to agree to disagree. I'm trying to convey a function that would need a different cleanup to occur after each call if they were to fail, e.g. reducing the len by one (though that is the same here too).
Your parenthetical is kind of my point though. It's rare to need mid-function cleanups that somehow contradict the earlier ones (because logically this often doesn't make sense), and when that is legitimately necessary, those are also fairly trivial to handle in most cases.
I'm happy to just agree to disagree and avoid providing more examples for this so we can lay the discussion to rest, so I'll leave with this: try all of these techniques -- not necessarily at work, but at least on other projects -- for a while and try to get familiar with their limitations (as well as how you'd have to work around them, if/when you encounter them) before you judge which ones are better or worse. Everything I can see mentioned here, I've tried in C++ for a while. This includes the static enforcement of error handling that you mentioned Rust has. (You can get it in C++ too, see [1].) Every technique has its limitations, and I know of some for this, but overall it's pretty decent and kills a lot of birds with one stone, making it worth the occasional cost in those rare scenarios. I can even think of other (stronger!) counterarguments I find more compelling against exceptions than the ones I see cited here, but even then I don't think they warrant avoiding exceptions entirely.
If there's one thing I've learned, it's that (a) sweeping generalizations are wrong regardless of the direction they're pointed at, as they often are (this statement itself being an exception), and (b) there's always room for improvement nevertheless, and I look forward to better techniques coming along that are superior to all the ones we've discussed.