
One thing that needs to be emphasized with “durable execution” engines is that they don’t actually get you out of having to handle errors, rollbacks, etc. Take the canonical example everyone uses: you’re using a DE engine to restart a sales transaction, but the step that failed was “charging the customer” - did it fail before or after the charge went through? You failed while updating the inventory system - did the product get marked out of stock or not? All of these problems are tractable, but once you’ve solved them - once you’ve built sufficient atomicity into your system to handle the actual failure cases - the benefits of taking on the complexity of a DE system are substantially lower than the marketing pitch.


The key to a durable workflow is making each step idempotent. Then you don't have to worry about those things. You just run the failed step again. If it already worked the first time, it's a no-op.

For example, Stripe lets you include an idempotency key with your request. If you try to make the same charge again with the same key, Stripe won't create a duplicate; it just returns the result of the original request. A DE framework like DBOS will automatically generate the idempotency key for you.
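
Roughly what that looks like with the stripe Python client (the key and IDs below are placeholders; a DE framework would typically derive the idempotency key from the workflow and step IDs for you):

    import stripe

    stripe.api_key = "sk_test_..."  # placeholder

    # Retrying this exact call with the same idempotency_key won't create a
    # second charge; Stripe just replays the result of the original request.
    intent = stripe.PaymentIntent.create(
        amount=500,                            # $5.00, in cents
        currency="usd",
        customer="cus_example",                # hypothetical customer ID
        idempotency_key="order-1234-charge",   # stable per logical operation
    )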

But you're correct, if you can't make the operation idempotent, then you have to handle that yourself.


I kind of feel that using examples where idempotency keys are already implemented by a SaaS is sidestepping the issue - not because of NIH (relying on them is the right thing to do), but because it glosses over the complexity of implementing idempotency for the parts you have to build yourself. I’ll bet most people end up with code that’s idempotent except in some edge cases.


Temporal plus idempotency keys covers probably the majority of the failure-handling infrastructure normally needed for production systems.


Except to run Temporal at scale on-prem you’ll need 50x the infra you had before.


Indeed, that's one of the main selling points of DBOS: all the functionality of Temporal without any of the infrastructure.


Ah, I don't know if I would agree with that. Temporal does a lot of stuff; we just don't happen to need most of it, and it's really heavyweight on the database side (running in the low 500s of workflows/second on their own 'hello world'-style echo benchmark translates to ~100k database ops/second).

DBOS is tied to Postgres, right? That wouldn't scale anywhere near where we need either.

Sadly there aren't many shortcuts in this space, and pretending there are seems a bit hip at the moment. In the end, most everyone who can afford to solve such problems is gonna end up writing their own system for this.


> DBOS is tied to Postgres, right? That wouldn't scale anywhere near where we need either.

I would challenge that assumption. We have 50 years of experience scaling Postgres. It can scale pretty far, and then you can shard it for even more. Or you can use one of the newer Postgres-compatible databases that offer unlimited horizontal scaling.

> In the end, mostly everyone who can afford to solve such problems are gonna end up writing their own systems for this.

Hard disagree (granted, I'm the CEO of one of the companies selling a solution in this space). If done right - with a good DX and lightweight enough - ideally everyone will use DE by default, using one of the existing frameworks rather than rolling their own. Most likely one of the new-style frameworks like the one in this blog post and the one DBOS uses, which don't need an external coordinator and a black-box binary behind a shim.

DBOS uses in-process coordination with a pure-language library, which makes it far more performant on a lot less hardware. It's not an apples-to-apples comparison.


> they don’t actually get you out of having to handle errors

I wrote a durable system that recovers from all sorts of errors (mostly network faults) without much error-handling code. It just retries automatically, and importantly the happy path and the error path are exactly the same, so I don’t have to worry that my error path gets exercised far less than my happy path.

> but the part of that transaction that failed was “charging the customer” - did it fail before or after the charge went through?

In all cases, whether on the happy path or the error path, the first thing you do is compare the desired state (“a transaction charging the customer $5 exists”) with the actual state (“has the customer been charged $5?”), and that determines whether you (re)issue the transaction or just update your internal state.
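
A rough sketch of that shape (the api object and its methods are hypothetical; the point is that a fresh run and a retry execute the same code):

    def ensure_customer_charged(api, customer_id, order_id, amount_cents):
        """Converge on 'a charge for this order exists', whether this is the
        first attempt or a retry after a failure."""
        # Actual state: has this order already been charged?
        existing = api.find_charge(customer_id=customer_id, order_id=order_id)
        if existing is not None:
            return existing  # already done; the retry is a no-op

        # Desired state not yet met: (re)issue the charge, keyed by the order
        # so a duplicate attempt can't double-charge.
        return api.create_charge(
            customer_id=customer_id,
            amount_cents=amount_cents,
            idempotency_key=f"charge-{order_id}",
        )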

> once you’ve built sufficient atomicity into your system to handle the actual failure cases - the benefits of taking on the complexity of a DE system are substantially lower than the marketing pitch

I probably agree with this. The main value is probably not in the framework but rather in the larger architecture that it encourages—separating things out into idempotent functions that can be safely retried. I could maybe be persuaded otherwise, but most of my “durable execution” patterns seem to be more of a “controller pattern” (in the sense of a Kubernetes controller, running a reconciling control loop) and it just happens that any distributed, durable controller platform includes a durable execution subsystem.


In my one encounter with one of these systems it induced new code and tooling complexity, orders of magnitude performance overhead for most operations, and made dev and debug workflows much slower. All for... an occasional convenience far outweighed by the overall drag of using it. There are probably other environments where something like this makes sense but I can't figure out what they are.


> All for... an occasional convenience far outweighed by the overall drag of using it

If you have any long-running operation that could be interrupted mid-run by any network fluke (or the termination of the VM running your program, or your program being OOMed, or some issue with some third party service that your app talks to, etc), and you don’t want to restart the whole thing from scratch, you could benefit from these systems. The alternative is having engineers manually try to repair the state and restart execution in just the right place and that scales very badly.

I have an application that needs to stand up a bunch of cloud infrastructure (a “workspace” in which users can do research) on the press of a button, and I want to make sure that the right infrastructure exists even if some deployment attempt is interrupted or if the upstream definition of a workspace changes. Every month there are dozens of network flukes or 5XX errors from remote endpoints that would otherwise leave these workspaces in a broken state and in need of manual repair. Instead, the system heals itself whenever the fault clears and I basically never have to look at the system (I periodically check the error logs, however, to confirm that the system is actually recovering from faults—I worry that the system has caught fire and there’s actually some bug in the alerting system that is keeping things quiet).


The system I used didn't have any notion of repair, just retry-forever. What did you use for that? I've written service tree management tools that do that sort of thing on a single host but not any kind of distributed system.


Repair is just continuously retrying some reconciliation operation, where “reconciliation” means taking the desired state and the current state and diffing the two to figure out what actions need to be performed. In my case I needed to look up the definition of a “workspace” (from a database or similar) in terms of what infrastructure should exist, query the cloud provider APIs to figure out what infrastructure did exist, and then create any missing infrastructure, delete any infrastructure that ought not exist, and update any infrastructure whose state is not how it ought to be.
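
In code, one pass of that loop looks roughly like this (db, cloud, and their methods are stand-ins for whatever your definition store and provider SDK actually look like):

    def reconcile_workspace(workspace_id, db, cloud):
        """One reconciliation pass: diff desired infrastructure against what
        actually exists and converge. Safe to rerun after any interruption."""
        desired = {r.name: r for r in db.get_workspace_definition(workspace_id)}
        actual = {r.name: r for r in cloud.list_resources(workspace_id)}

        for name, spec in desired.items():
            if name not in actual:
                cloud.create_resource(spec)        # missing -> create
            elif actual[name] != spec:
                cloud.update_resource(spec)        # drifted -> update

        for name in actual.keys() - desired.keys():
            cloud.delete_resource(name)            # shouldn't exist -> delete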

> I've written service tree management tools that do that sort of thing on a single host but not any kind of distributed system.

That’s essentially what Kubernetes is—a distributed process manager (assuming process management is what you are describing by “service tree”).


I'm not sure which one you used, but ideally it's so lightweight that the benefits outweigh the slight cost of developing with them. Besides the recovery benefit, there are observability and debugging benefits too.


I don't want to start a debate about a specific vendor but the cost was very high. Leaky serialization of call arguments and results, then hairpinning messages across the internet and back to get to workers. 200ms overhead for a no-op call. There was some observability benefit but it didn't allow for debugger access and had its own special way of packaging code so net add of complexity there too. That's not getting into the induced complexity caused by adding a bunch of RPC boundaries to fit their execution model. All that and using the thing effectively still requires understanding their runtime model. I understand the motivation, but not the technical approach.


Regardless of the vendor, it sounds like you were using the old style model where there is a central coordinator and a shim library that talks to a black box binary.

The style presented in this blog post doesn't suffer from those downsides. It's all done with local databases and pure language libraries, and is completely transparent to the user.
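
The core trick, very roughly, is checkpointing each completed step's result in a table so a recovering process skips work that already happened. A toy sketch, not DBOS's actual schema or API, assuming a psycopg3-style connection:

    def run_step(conn, workflow_id, step_name, fn):
        """Run fn once per (workflow_id, step_name) in the common case; on
        recovery, return the checkpointed result instead of re-running it.
        A crash between fn() and the INSERT still re-runs fn, which is why
        the step itself must be idempotent."""
        row = conn.execute(
            "SELECT output FROM step_outputs WHERE workflow_id = %s AND step_name = %s",
            (workflow_id, step_name),
        ).fetchone()
        if row is not None:
            return row[0]   # step already completed before the crash

        result = fn()
        conn.execute(
            "INSERT INTO step_outputs (workflow_id, step_name, output) VALUES (%s, %s, %s)",
            (workflow_id, step_name, result),
        )
        conn.commit()
        return result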


Yeah, the system in the blog post retargeted at Postgres would be a step up from what I've used. I'm still skeptical of the underlying model of message replay for rehydration, because it makes reasoning about changes to the logic ("flows" in the post's terminology) really hard. You have to understand what the runtime is doing, how all the previous versions of the code worked, the implications for all the possible states of the cached step results, and how those logs will behave when replayed through the current flow code. I think in all worlds where transactions are necessary, a central coordinator is necessary, whether it's an RDBMS under a traditional app or something fancier under one of these durable execution things.
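
To make the concern concrete, replay-based rehydration boils down to something like this toy version (no particular vendor's API):

    # Step outputs logged before the crash.
    cached_results = {"reserve_flight": "FL-123"}

    def step(name, fn):
        """On replay, already-logged steps return their recorded result;
        only the first not-yet-logged step actually executes."""
        if name in cached_results:
            return cached_results[name]
        result = fn()
        cached_results[name] = result
        return result

    def flow():
        # If this flow changes between writing the log and replaying it
        # (steps reordered, renamed, or removed), the cached results no
        # longer line up with the code -- that's the versioning problem.
        flight = step("reserve_flight", lambda: "FL-456")
        hotel = step("reserve_hotel", lambda: "HO-789")
        return flight, hotel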

In the end I'm left wondering what the net benefit is over say an actor framework that more directly maps to the notion of long-lived state with occasional activity and is easier to test.

All that said some of the vendors have raised hundreds of millions of dollars so someone must believe in the idea.


Temporal


Yep, fully agreed - the main thing is to break apart the system so any retries don’t lead to issues like the ones you mentioned.

I do still think there is a sufficient amount of boilerplate to potentially justify an engine like this.


> One thing that needs to be emphasized with “durable execution” engines is they don’t actually get you out of having to handle errors, rollbacks, etc.

I think this is a gross misrepresentation of what durable execution is. DEs were never expected to magically eliminate the need to handle errors. What DEs do is provide a high-level abstraction over the same pattern that recurs in all workflow engines, and they give developers a simpler way to implement rollback and compensation steps when workflows fail.

If you are designing and implementing a transaction with a DE, you still need to design and implement a transaction. DEs simplify much of the logic, but you still need to design and implement a transaction. There is no silver bullet.
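
For example, the usual saga-style shape - you still write every compensation yourself; the engine's contribution is making sure the cleanup actually runs to completion (all function names here are illustrative stand-ins passed in as callables):

    def book_trip(reserve_flight, cancel_flight, reserve_hotel, cancel_hotel, charge_customer):
        """Saga-style transaction: each step registers a compensation that is
        run in reverse order if a later step fails."""
        compensations = []
        try:
            flight = reserve_flight()
            compensations.append(lambda: cancel_flight(flight))

            hotel = reserve_hotel()
            compensations.append(lambda: cancel_hotel(hotel))

            charge_customer()
        except Exception:
            # You still design and implement these compensations yourself;
            # a DE engine just guarantees this block survives crashes.
            for undo in reversed(compensations):
                undo()
            raise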

> Even the canonical examples everyone uses - so you’re using a DE engine to restart a sales transaction, but the part of that transaction that failed was “charging the customer” - did it fail before or after the charge went through? (...)

That's immaterial to the discussion on DEs. You, as a software engineer, still need to design and implement a transaction. DEs greatly simplify your job, but you still need to analyze failure modes and perform the necessary compensation steps.

> All of these problems are tractable, but once you’ve solved them - once you’ve built sufficient atomicity into your system to handle the actual failure cases - the benefits of taking on the complexity of a DE system are substantially lower than the marketing pitch.

I completely disagree, but you do you. Some durable execution engines greatly simplify tracking state and implementing activities and rollback logic. Some cloud providers even provide services that allow you to implement long-running workflows with function-as-a-service components that provide out-of-the-box support for manual approvals. If you feel you are better off rolling your own support, good for you. Meanwhile, everyone around you is delivering the same value with much less work.



