Mental models around Ok-Wrapping (vorner.github.io)
83 points by ansible on April 10, 2020 | 48 comments


I should probably write a blog post about this or something.

My big criticism of Ok-Wrapping is that it hides complexity in a way that's at odds with the rest of the language. Rust's big benefit to me is that it surfaces complexity. It makes me deal with it right now, at the time I write code. Even though async munges the type of my function, it feels different because it's explicit and has to be done at the point that I conceive of the function.


Well, the other end of the spectrum of surfacing the complexity of errors would be idiomatic Golang code. But then half your code becomes "if err != nil" error handling.

In many scenarios error handling can be automated, because the essence of what you typically do when there is an error is to short-circuit the rest of the function and surface the error to the user. If you are doing it the same way each time, you might as well abstract it.

In some cases you may want to make things explicit and force the programmer to deal with them. But if it is just boilerplate, then it becomes noise, and an excessive amount of noise prevents the programmer from thinking at a higher level of abstraction.
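In Rust, the `?` operator is exactly that abstraction; a minimal sketch of the two styles side by side (function names are made up for illustration):

```rust
use std::num::ParseIntError;

// Manual short-circuiting, Go-style:
fn double_verbose(s: &str) -> Result<i32, ParseIntError> {
    let n = match s.parse::<i32>() {
        Ok(n) => n,
        Err(e) => return Err(e),
    };
    Ok(n * 2)
}

// `?` abstracts exactly that pattern into one character:
fn double(s: &str) -> Result<i32, ParseIntError> {
    let n: i32 = s.parse()?;
    Ok(n * 2)
}

fn main() {
    assert_eq!(double("21"), Ok(42));
    assert_eq!(double_verbose("21"), Ok(42));
    assert!(double("oops").is_err());
}
```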

Due to limits of human cognition / mental bandwidth (7 +/- 2 concepts at a time), abstraction compresses the number of concepts you need to concern yourself with, and that means you can focus more on the problem domain and less on boilerplate.

The key to this though is learning new concepts and abstractions. Each abstraction compresses multiple thoughts into 1, freeing up cognitive bandwidth. The more concepts and abstractions you know, the more efficient and higher level thoughts you can have.

But that requires time to learn and master. You might also have lots of junior programmers on a team, in which case they are not familiar with many higher level abstractions and they will not be able to work on a codebase, so you have to keep that in balance.


For cases where you're confident a Result won't be an error, it has a .unwrap() method that allows you to skip handling the error case and escalates it to a panic instead. With .expect() you can even pass a custom error-message string.
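Concretely, `.unwrap()` panics with a generic message, while the custom-message variant is `.expect()`; a quick sketch:

```rust
fn main() {
    // unwrap(): take the Ok value, panic on Err with a generic message.
    let n: i32 = "42".parse().unwrap();
    assert_eq!(n, 42);

    // expect(): same, but the panic carries your custom message.
    let m: i32 = "7".parse().expect("hard-coded literal should parse");
    assert_eq!(m, 7);
}
```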


> I should probably write a blog post about this or something.

Please do. I did. Shameless self-plug:

https://obdurodon.silvrback.com/error-handling

Maybe you'll find something of interest there, even if it's something to disagree with. That's fine; contrast improves visibility. Having written that just before the current surge of interest in the subject, I'm finding many of the discussions quite fascinating.


It's interesting, but I'm not sure that your proposal still qualifies as concise. Nor am I sure that ADT-based error handling (Option/Result/etc.) should be considered silently dropped. Option certainly sees some abuse, but on the whole, `unwrap()` is a red flag that draws my attention. And, in Rust, with pattern matching and `?` to propagate errors, I see errors handled more often than not.


Rust's expressiveness of complexity is mirrored by its complexity of expression.

I don't think that Ok-Wrapping hides complexity. I think it's a filled in pothole for sum type semantics in Rust. I don't think the best solution is sugar, but completely redoing the syntax. And that will likely never be possible in stable Rust.


Out of curiosity, what kind of syntax do you think would be better?


Anonymous sum types. Most enum variants I write are single-use, or have very few uses. We have anonymous product/record types with tuples (a function returns all-of) but not variants (a function returns one-of). Something like fn foo() -> (i32 | f32 | String). I'm not the first person to suggest this idea, and it's been done in other languages.

As well, having to match against Enum::Variant0, Enum::Variant1, etc., for all arms of a match statement, instead of inferring that it would be Variant0, Variant1, is tedious.
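For contrast, here's the single-use named enum that a hypothetical `(i32 | String)` syntax would replace (the IntOrText name and functions are invented for illustration):

```rust
// Today you must name the enum and every variant up front:
enum IntOrText {
    Int(i32),
    Text(String),
}

fn parse_flexible(s: &str) -> IntOrText {
    match s.parse::<i32>() {
        Ok(n) => IntOrText::Int(n),
        Err(_) => IntOrText::Text(s.to_string()),
    }
}

fn main() {
    // ...and spell the full path in every match arm:
    let description = match parse_flexible("42") {
        IntOrText::Int(n) => format!("number {}", n),
        IntOrText::Text(t) => format!("text {:?}", t),
    };
    assert_eq!(description, "number 42");
}
```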

These changes would be difficult to implement and I know some people prefer to have all of the explicitness in their code. But one of the most common complaints outside the community is that prototyping in Rust is slow. Not just because of the borrow checker, that's easy once you learn it. It's the over-emphasis on explicitness at the expense of ergonomics.


Excellent analysis of the debate. I already had an opinion on the subject, but this article helped illuminate to me why exactly I felt the way I did.

> If the function returns Err(NotAuthorized), I wouldn’t say it failed. I’d say it’s doing its job very well indeed.

I worked in Java for a couple of years. Java loves its throw/catch-style error handling. But the mindset around exceptions in that sort of language is that they're exceptional. They're bad, they're ugly, you really want to just sweep them under the rug if you can.

But you inevitably reach certain cases where your API throws an exception as a normal course of doing business - trying to open a file that doesn't exist, for example - and now exceptions become a core part of your business logic. Yet you can't really work with them like normal return values, you have to use this special, gross, control-flow-breaking syntax that's really only designed for putting out a fire, logging something, maybe cleaning up some locked resources.

> but that would be an irregularity in otherwise quite regular and well-behaving language

One of Rust's greatest strengths is that its extremely rich type system allows things that might receive special status in other languages to become "just types". I am in love with this trait (heh) of Rust. I didn't know how to articulate it before, but chipping away at that mental model would simply, above all, be deeply un-Rusty.


I don't think the monadic model is any different, it's a generalisation that applies to either perspective. Because we can actually have the same debate over async functions: some people like to think about a function that returns Future<f64> and it would seem like a type error to say an async function returns 42.0, other people see it as a function that returns 42.0 but has some extra "async-ness" (which is the same kind of thing as being a fallible function). And if you think about it that's, in a certain sense, the same disagreement (and we can and do have the same debate in the context of a generic monad).


Though both Result and Future types can be described as having a monadic shape, it only works if you fix the Error type. Even then, you've lost the value in the Result entirely. The interesting and useful thing about a Result is the part of it that breaks from the monadic mold, the error case.


Nonsense. Yes, Results can only be composed monadically if they have the same error type, but that's true for any other way of composing Results too - if you want to use Rust's ? operator then you need to make sure all the Results you're going to use it on share (or at least are convertible to) the same error type.

Of course for a specific monad to be useful you eventually have to do something specific with it, but again that's completely normal. The point of using the monad abstraction is not to forget that there is a difference between Result and Future, it's to be able to write the common part in a common way - just as, even though you might use a generic List implementation to store lists of integers and lists of strings, you will eventually want to do something with the contents of the list that is specific to either integers or strings.


How does the error case break the monadic model? All of the monad rules still apply, I think.


The error needs to be a monoid to be able to flatten the Result; if it is not, it will break - you will lose information. E.g. if you have:

      R<E1,R<E2,A>> -- flatten --> R<E1 + E2, A>
R stands for result, E1,E2 are errors.

You need to think up a reasonable implementation of +. This is of course easily solvable, you can just add them together. Keep a list of errors in your result instead of one error and you are done. Just keep the +, it is free :p


This is completely wrong. Result is either success or failure, so an R<E1, R<E2, A>> can never contain both E1 and E2 at the same time.

Validation types work the way you describe and require a monoidal "error" side, but famously do not form monads (though they are applicative functors).


Ah, true that. Long day, and I have been working with shapeless in Scala, where the type level and value level mix up :)


While compelling, flattening doesn't make something a monad, does it? Not snark, if that's a consequence of one of the three laws, I'm just unaware of it. I know what monads are but I'm not practised with them.

But of left identity, right identity, and associativity, what is violated?


So, I was a bit wrong with the monoid; I meant to add them at the type level by using a sum type. Sorry, I'm sometimes a bit confused after work.

No, flattening doesn't make something a monad. Return and flattening are the interesting operations of a monad, which together with the laws make the monad.

>But of left identity, right identity, and associativity, what is violated?

Well, there is nothing violated per se, but you cannot write a flatten operation for something like this:

     R<R<u32, E1>, E2>
However you could write it for this:

         R<R<u32, E1 + E2>, E1 + E2>
(That was what I should have written above btw) Because now the error types are the same.


I created a Result-style interface in TypeScript and made the decision to fix the Error to a single type for this very reason.

Result<T> instead of Result<T,E> so that it was easier to compose.

This made Kleisli composition easier since I don't have to worry about mismatched types on the Error.

You can see the code at https://github.com/brennancheung/wasmtalk/blob/master/src/fp...


I've found this kind of pattern much nicer in an application context. Rust sort of has this going on with anyhow[1] for applications and thiserror[2] for libraries (or just writing your own Error type).

[1]: https://crates.io/crates/anyhow

[2]: https://crates.io/crates/thiserror


Result doesn’t break from the monadic mold.


It does in that it has two type variables. In Haskell Either is not a monad - (Either a) is.


The number of variables doesn't matter; that they are the same type is the part that matters.

<T, E> can be thought of as a pair, tuple, cartesian product. Likewise you can expand it to as many values as you want.

If type C is just (T, E) then it is really just Result<C> at that point.

The first T can vary but as long as everything after that is fixed to the same type, you can have as many variables/types as you want and it will still be monadic.


You’re reducing two type variables down to one. The number of type variables matter. The kind of the type matters. Look at the definition of Monad for Either. You’ll see.


For any E, Result<_, E> forms a valid monad. So yes, technically Result is a family of monads rather than a single monad, just as technically the monad is not the type itself but the combination of the type and two functions defined on that type, but that's irrelevant pedantry most of the time.


>I have an admission to make. I’ve not yet seen an explanation of what a monad is that I could understand.

Thanks for the honesty. However, that lack of understanding is perhaps preventing you from seeing how monads subsume/generalise many common patterns.


> I’ve not yet seen an explanation of what a monad is that I could understand. Monads are just too abstract for me to grasp, let alone reasoning about.

I had a similar thought about the Option monad when I was learning Scala. My thought at the time was "why are you making it so damn hard for me to get access to the underlying value?". I kept trying to "unwrap" the value and continue using it.

Hang in there. An intuitive understanding of monads is necessary to understand how to work with monads like Result fluidly.

For learning monads the best way is to just read tons of different sources and tutorials on them. A lot of them won't make any sense. You might get a tiny hint each time and then it will just click.

Part of the reason monads are hard to understand is because they are typically taught only at the abstract level. You need to see concrete examples first, otherwise it's just a floating abstraction that you can't ground to existing knowledge.

I recommend finding some Javascript tutorials on monads because in the beginning I think a focus on types creates too much noise. Ignoring types will make the intuition stand out more.

You can also get most of the intuition behind the monad pattern by concentrating on just "functor composition".

Imagine:

fnReturningArray().map(doA).map(doB).map(doC)

Then,

fetchPromiseResult.then(doA).then(doB).then(doC)

What do they have in common?

Now you can create "combinators" to make it simpler:

pipe(doA, doB, doC)(fnReturningArray())

The "pipe" function is a generic utility that will work with any type/abstraction that follows the "functor" pattern.
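The same functor pattern, sketched in Rust, where `map` chains identically over an iterator and an Option:

```rust
fn main() {
    let do_a = |n: i32| n + 1;
    let do_b = |n: i32| n * 2;

    // Lists: map each element through the pipeline.
    let from_vec: Vec<i32> = vec![1, 2, 3].into_iter().map(do_a).map(do_b).collect();
    assert_eq!(from_vec, vec![4, 6, 8]);

    // Options: map the value if present, do nothing on None.
    assert_eq!(Some(1).map(do_a).map(do_b), Some(4));
    assert_eq!(None.map(do_a).map(do_b), None::<i32>);
}
```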

Once you get familiar with the functor / monad intuition you can use and even build your own higher level combinators.

This is a large motivation behind the Result / Either monad. It creates a common pattern that higher order combinators can reuse. In essence, from the constraint of following the functor / monad laws, you get tons of higher order abstractions for free.


The simplest way to gain some degree of understanding for me came through using C#'s linq, and then being told that "SelectMany" is basically the definition of a Monad.


Once you know it you see it everywhere. It's kind of amazing how common of a pattern it is without people even knowing it explicitly.


To me all you need to know about monads is that it’s a wrapper type that contains values and that implements a map function that allows you to unwrap the value, apply a function to it and wrap it again.

There’s more to it, but just understanding it that much takes you a long way.
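In Rust terms, `map` on Option covers that wrapper intuition, and the "more to it" is mostly `and_then`, which handles functions that themselves return a wrapped value; a sketch:

```rust
fn main() {
    // map: unwrap, apply, re-wrap.
    assert_eq!(Some(21).map(|n| n * 2), Some(42));

    // and_then: for functions that already return a wrapped value,
    // so you don't end up with Option<Option<i32>>.
    let half = |n: i32| if n % 2 == 0 { Some(n / 2) } else { None };
    assert_eq!(Some(42).and_then(half), Some(21));
    assert_eq!(Some(7).and_then(half), None);
}
```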


> An intuitive understanding of monads is necessary to understand how to work with monads like Result fluidly.

I think it helps but is hardly necessary.


Here's a minimalistic way to resolve this.

First let's observe that Option and Result types can be roughly generalized by lists: Option is just a list of length at most 1.

Next, make a language where each function returns a 'list of equation solutions'. This handles cases when there are no solutions as well as cases when there are multiple solutions:

  sqrt(-2) -> []
  sqrt(0)  -> [0]
  sqrt(4)  -> [2, -2]
  4/2 -> [2]
  1/0 -> []
  fibs(5) -> [0, 1, 2, 3, 5]
  idx_of(0, [1, 0, 0, 1]) -> [1, 2]
  webserver(reqs...) -> resps...
Instead of lists, streams of results can be produced lazily as the algorithm progresses to successively find solutions.
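That sketch can be made concrete in Rust with Vec as the "zero or more solutions" container and flat_map as the chaining operation (this multi-valued sqrt signature is illustrative, not a standard API):

```rust
// Modeling "zero or more solutions" as a Vec, per the sketch above:
fn sqrt(x: f64) -> Vec<f64> {
    if x < 0.0 {
        vec![] // no real solutions
    } else if x == 0.0 {
        vec![0.0] // one solution
    } else {
        let r = x.sqrt();
        vec![r, -r] // two solutions
    }
}

fn main() {
    assert_eq!(sqrt(-2.0), Vec::<f64>::new());
    assert_eq!(sqrt(0.0), vec![0.0]);
    assert_eq!(sqrt(4.0), vec![2.0, -2.0]);

    // Chaining multi-valued functions is flat_map - the list monad's bind:
    let all: Vec<f64> = sqrt(16.0).iter().flat_map(|&r| sqrt(r.abs())).collect();
    assert_eq!(all, vec![2.0, -2.0, 2.0, -2.0]);
}
```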

What always puzzled me was the disconnect between pure functional programming and theoretical computer science, both obsessed with functions, compared to real life computing where it is all about having a 'function' which consumes some stream of inputs and produces another stream of outputs. In many cases the streams are unbounded (servers) - which is contrary to terminating functions.

I wonder why we keep implementing languages with functions as primitives when we should have iterators, generators and stream transformers as primitives somehow instead.

Also IO monad in pure functional languages feels like just a hack around functions not being able to capture streaming nature of real world problems.


You were scooped in 1985... https://link.springer.com/chapter/10.1007%2F3-540-15975-4_33

Note that Philip Wadler was active in the Haskell committee. Early versions of Haskell did use a system like that. It was too error-prone. Monadic IO turned out to be theoretically equivalent, but without the easy ways to break it. See https://www.microsoft.com/en-us/research/wp-content/uploads/... for more details.


That's known in Haskell as the List monad. There's a function to transform Maybe (like Rust's Option) to lists. In Haskell, lists can be infinitely long.

Regarding the IO monad, Simon Peyton Jones has an article explaining why they went with that rather than streams. Sadly I don't remember what it was, but it had something to do with proper interleaving of input and output. (Btw, in a way, the IO monad is a stream of commands to the RTS.)


This demonstrates how to represent a failed computation (empty list), but doesn't show how to distinguish a failed computation and a successful one with no solution. Or how to represent errors. What does Result look like?


That will just make all of the logical mistakes you make in your program return an empty list.

We keep implementing languages this way because it solves our problems; not the problems of architectural astronauts.


It's because corecursion is harder to reason about than recursion, requiring a slightly different set of fundamental tools and vocabulary, which aren't taught, because it would have to be on top of the current set.


There is already precedent of somewhat magical syntax changing the return type, namely 'async' making the function return a Future<your stated return type>. While this approach adds one more non-obvious thing for a newcomer to learn, it shouldn't make it much more difficult to reason about your code once you know what to expect.


I might agree more if async was just doing the Future type wrapping. However, because it is doing much more, it feels fundamentally different (hence the keyword).

I probably haven't interacted enough with larger Rust projects, but I just don't see how "Ok(123)" could ultimately cause all that much friction when working with the language.


Oh, I'm totally fine with Ok(123). The current syntax with Result is explicit and flexible even if slightly verbose. It's especially nice that Result is an enum like any other.


I don't think async is a good comparison. Compiling async functions is inherently magical, since it involves the compiler mucking with your control flow graph. This makes it less of a surprise when magic shows up in your type signature. On the completely opposite hand, what makes Results cool is that they require no magic beyond enum/match. (Yes, it's nice that there's sugar like '?', but you don't need it to write code that uses Results correctly)


I think it all depends on how your function is used in the end:

- if all you're doing is using the value with an added "?" for error handling, like "a = sqrt(b)?;" then Ok-wrapping is perfect, I love it.

- if you need to "match(check_credentials(user, pass)) {Privileges...}" then simply put, don't Ok-wrap and declare your function as usual.

Ok-wrapping sounds really good but shouldn't be used everywhere, like you still use "match" instead of "if let" sometimes. It'd be sad if this idea was aborted just because it's not universal.


I took a look at fehler but the thing that kept me from forming an opinion is I have no idea what things look like at the call site - and couldn't find examples in the docs.

If the ONLY thing fehler does is Ok-wrapping on the annotated function, then I can see the value in it, but it doesn't seem like a big deal. Useful, but not worth being added to the language when the proc macro is so complete. (I also see the argument against it - I personally enjoy seeing Result<foo,bar> rather than the Ok-wrapped version, but I see this as a _preference_.)

If the caller is just a wrapper around a function doing Ok-wrapping, the caller must either also Ok-wrap or return Result<_,_>. It's at this point I feel like I'm missing something because he made an ergonomic argument about NOT needing to update things in many places, which seems to contradict this.


Ok-wrapping is factually a huge maintenance improvement: O(n) edits become O(1).

I wonder what the differences are with checked exceptions in Java, which are self-documenting in the signature and must be checked.

To me the ideal exception system would be normal exceptions as a basis, plus autogenerated exception types in the method signature (I wonder if IntelliJ IDEA can do that) and the ability to have checked exceptions on demand.

It's ridiculous that Java forces checked exceptions and that other languages do not allow them. Every language should at least allow them optionally; it would be a net improvement.


> I still haven’t figured how to map things like cached_results: HashMap<u32, Result<String, Error>> into the fallible function model. In this case, there’s no function that could be fallible.

Oh you can definitely make a generically cached version of a fallible function, by either:

- Re-throwing the same exception at future lookups, or a clone if preferred (requires storing the exception)

- Treating the exception case as an uncacheable value (always logically consistent with caches that allow items to disappear at any time)
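A minimal sketch of the first option in Rust (the Cache type and all names here are invented for illustration): cache the whole Result, Err included, and hand back clones on later lookups:

```rust
use std::collections::HashMap;

// Stores the Err case too and re-yields a clone of it on future
// lookups, so the fallible function runs at most once per key.
struct Cache {
    results: HashMap<u32, Result<String, String>>,
}

impl Cache {
    fn get_or_compute(
        &mut self,
        key: u32,
        compute: impl Fn(u32) -> Result<String, String>,
    ) -> Result<String, String> {
        self.results
            .entry(key)
            .or_insert_with(|| compute(key))
            .clone()
    }
}

fn main() {
    let mut cache = Cache { results: HashMap::new() };
    let fallible = |k: u32| if k == 0 { Err("zero".to_string()) } else { Ok(k.to_string()) };
    assert_eq!(cache.get_or_compute(3, fallible), Ok("3".to_string()));
    // The error is cached and returned again, not recomputed:
    assert_eq!(cache.get_or_compute(0, fallible), Err("zero".to_string()));
    assert_eq!(cache.get_or_compute(0, fallible), Err("zero".to_string()));
}
```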


> Therefore, to do such thing one has to mentally switch the models to something else and „convert“ the fallible function’s result into a „frozen“ representation.

This is where the concept of lifting comes in.

Take for example a simple function that squares a number.

function sq(n) { return n * n }

Rather than taking something out of a representation (category in math speak), you can instead "lift" your function to be able to work _inside_ the representation.

Here's an example:

const nums = [1, 2, 3]

Now you could write a "for" loop, extract the value from the "array representation" to just the "int" type expected by your "sq" function, call the function, and then push the resultant value into a new array, but there's an easier way.

const squaredNums = nums.map(sq)

Or what about dealing with async values?

fetchNumAsync().then(sq)

Or what about if the value is optional?

maybeNum.map(sq)

Instead of converting from one representation to another back and forth over and over, "lift" your function to be able to work within that representation.

"map" and "then" do the work of lifting a function that doesn't understand that representation, making it so a much more general function can still do meaningful work within it.

This may be a bit advanced, but there's also a special form of composition called "Kleisli composition" that lifts functions that straddle 2 different "representations".

Let's say you have some functions as follows:

processA(value: A) => Result<B,E>

processB(value: B) => Result<C,E>

processC(value: C) => Result<D,E>

How would you go from A -> B -> C -> D?

You could call the function and unwrap it each time and then pass it to the next function.

let b = processA(value)

if b.isErr() { /* abort early */ }

let c = processB(b.unwrap())

...

Or you could do something like:

pipe(processA, processB, processC)(value)

The pipe function in this context would be performing "Kleisli composition" and straddling the 2 different representations for you so you can remove that boilerplate.

In essence, for each function in the pipeline it would check if the returned value was an error, and if it was it would just return that error right away. Otherwise, it would unwrap the result and pass it to the next function.
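In Rust the same pipeline is usually spelled with and_then, which is exactly this kind of Kleisli composition for Result; a sketch with made-up stage functions standing in for processA/processB/processC:

```rust
// Each stage returns Result, and and_then short-circuits on the first Err:
fn parse(s: &str) -> Result<i32, String> {
    s.parse().map_err(|_| format!("not a number: {}", s))
}

fn halve(n: i32) -> Result<i32, String> {
    if n % 2 == 0 { Ok(n / 2) } else { Err(format!("{} is odd", n)) }
}

fn describe(n: i32) -> Result<String, String> {
    Ok(format!("got {}", n))
}

fn main() {
    let ok = parse("84").and_then(halve).and_then(describe);
    assert_eq!(ok, Ok("got 42".to_string()));

    // An Err anywhere flows straight through the rest of the chain:
    let err = parse("85").and_then(halve).and_then(describe);
    assert_eq!(err, Err("85 is odd".to_string()));
}
```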

If you understood that then you have the intuition behind a monad.

The only real difference between a "functor" and a "monad" is that the functor takes functions that return a value, and a monad takes functions that return a wrapped value.

Functors lift functions that work within the same "representation".

Monads lift functions that work across 2 different "representations".


Ok wrapping is the answer to not using NULL


I thought Nil was the answer to not using NULL ;)



