Generally when people argue in favor of a monolith over micro-services, it's not for completely or mostly isolated business functions (e.g. BI pipelines vs. CMS CRUD); it's for cases where responding to a single request or group of requests already implicates many services that must all work together to respond. In that case you're still smoked if any one of the services handling part of the request chokes, and with micro-services you're in fact multiplying your opportunities for failure.
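
A quick back-of-the-envelope sketch of that multiplication effect (the numbers are made up for illustration): if one request has to fan out across several services that must all succeed, their availabilities multiply.

  # Illustrative only: compound availability when one request needs n services,
  # each independently available with probability p (both values hypothetical).
  def compound_availability(p, n):
      return p ** n

  print(compound_availability(0.999, 1))   # single deployment: ~0.999
  print(compound_availability(0.999, 10))  # 10 cooperating services: ~0.990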

Monoliths should be stateless (if achievable) and have no concept of partial success in cases where you would like atomicity, unless everything is truly idempotent (easier said than achieved). If those criteria are met, then callers just need to retry in the event of failure, which can be set up basically for free in most frameworks.
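
For example, a minimal hand-rolled retry sketch (the backoff numbers are mine, and it assumes the call really is idempotent; most frameworks give you an equivalent knob):

  import random
  import time

  def call_with_retries(fn, attempts=3, base_delay=0.2):
      """Retry an idempotent call with exponential backoff and jitter.

      Only safe if fn is truly idempotent/atomic; otherwise a retry can
      repeat a partially applied operation.
      """
      for i in range(attempts):
          try:
              return fn()
          except Exception:
              if i == attempts - 1:
                  raise
              time.sleep(base_delay * (2 ** i) + random.uniform(0, 0.05))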

If you're pushing fatal recurring bugs into production, then that is a separate problem wider than the scope of a monolith vs. micro.



If you have a guaranteed way to avoid pushing fatal bugs to production it doesn’t MATTER what your architecture is. You’re in some fantasy land of rainbows and kittens where everything works fine first time every time.

For those of us in the real world who can’t afford perfection, the ability to isolate the impact of the inevitable bugs that do sneak through has some appeal.

As does the fact that exhaustively testing a microservice in a realistic timeframe is a much more tractable problem than exhaustively testing a monolith, which reduces the risk that such bugs will ship in the first place.

Bugs are less likely to ship. And when they do, they will have a more limited blast radius. And when they're detected, they can be mitigated more quickly.

Those all sound like great benefits to me.


> If you have a guaranteed way to avoid pushing fatal bugs to production it doesn’t MATTER what your architecture is.

Bugs aside, the architecture does matter, and it matters a lot.

Whether it is a single coarse-grained deployment (i.e. a monolith) or a fine-grained deployment (modular services or microservices), a solution has a number of technical interfaces. These technical interfaces broadly fall into low and high data volume (or transaction rate) categories. The high data volume interfaces might have a sustained high flow rate, or they might see spikes in processing load.

A coarse-grained architecture that deploys all of the technical interfaces into a single process address space has the disadvantage of being difficult or costly (usually both) to scale. It does not make sense to scale the whole thing out when only a subset of the interfaces requires extra processing capacity, especially when the demand is irregular but intense when it arrives. Most of the time, a sudden increase in data volume suffocates the low volume interfaces, because the high volume interfaces devour all of the CPU time allotted to the solution as a whole. Low data volume interfaces might have lower processing rates, yet they can still perform a critical business function, an interruption to which can cause cascading or catastrophic failures that severely impair the business mission.
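
To make the cost argument concrete (all figures below are invented for illustration): if one hot interface needs 4x capacity during a spike, a coarse-grained deployment has to replicate everything, while a fine-grained one replicates only the hot service.

  # Hypothetical per-instance costs of each interface if deployed separately.
  costs = {"hot_ingest": 4, "low_volume_billing": 1, "low_volume_admin": 1}

  spike_factor = 4  # the hot interface needs 4x capacity during a spike

  monolith_cost = sum(costs.values()) * spike_factor       # scale everything together
  fine_grained_cost = (costs["hot_ingest"] * spike_factor  # scale only the hot service
                       + costs["low_volume_billing"]
                       + costs["low_volume_admin"])

  print(monolith_cost, fine_grained_cost)  # 24 vs. 18 in this toy example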

Hardware (physical or virtual) resource utilisation is much more efficient (cost-wise as well) when the architecture is more fine-grained, and scaling becomes a configuration-time activity, which is even more true for stateless system designs. Auto-healing is a bonus (a service instance has died or been killed off and a new instance has spun up – no-one cares and no-one should care).
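
A toy sketch of what "scaling as configuration" plus auto-healing amounts to (service names and replica counts are hypothetical; in practice an orchestrator such as Kubernetes runs this loop for you):

  import multiprocessing
  import time

  # Desired replica counts are plain configuration data.
  DESIRED_REPLICAS = {"ingest": 3, "reports": 1}

  def serve(name):
      while True:          # stand-in for real request handling
          time.sleep(1)

  def reconcile(running):
      for name, want in DESIRED_REPLICAS.items():
          alive = [p for p in running.get(name, []) if p.is_alive()]  # drop dead instances
          while len(alive) < want:                                    # spin up replacements
              p = multiprocessing.Process(target=serve, args=(name,), daemon=True)
              p.start()
              alive.append(p)
          running[name] = alive

  if __name__ == "__main__":
      running = {}
      for _ in range(3):   # a real supervisor would loop forever
          reconcile(running)
          time.sleep(2)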



