
For us at my present and former workplaces, the decision to use microservices didn't depend solely on the number of developers. We needed something very scalable, something that different teams can work on without stepping on each other's toes, something that survives even if part of it fails temporarily, something that auto-heals.

We did it with developers in the tens and we didn't have many issues with this approach. In fact, at one of my workplaces we had far more issues with the monolithic app than with the microservice-based app we replaced it with.



How the hell do microservices 'auto-heal'? Do they autogenerate bug fixes and patch themselves?


Simple, if a service is not answering for a certain amount of time, a new Kubernetes pod is brought up and the old one is killed. :)
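
Concretely, all the "healing" amounts to is giving the probe something to hit. A rough sketch of such a health endpoint (the port, path, and timeouts are made up):

    package main

    import (
        "net/http"
        "time"
    )

    func main() {
        mux := http.NewServeMux()
        // Liveness endpoint; if this stops answering within the probe's
        // timeout, the kubelet kills the pod and starts a replacement.
        mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
            w.WriteHeader(http.StatusOK)
            w.Write([]byte("ok"))
        })
        srv := &http.Server{Addr: ":8080", Handler: mux, ReadTimeout: 5 * time.Second}
        srv.ListenAndServe()
    }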


That also works with monoliths ... Usually you would make your monolith stateless and distribute the incoming requests / events across many instances that can be spawned / killed depending on volume of requests and health status of instances.


When you kill a monolith you kill a random selection of inflight tasks from every part of your application.

So a rare bug in your mailing list signup workflow that hangs the process and causes it to be killed causes a random selection of inflight webpage requests, payment transactions, message handlers and business processes to fail. And if those failures aren’t all cleanly handled, your mailing list signup bug could propagate into a much wider issue.

Whereas if you have a ‘mailing list service’ that has its own processes that can be killed and respawned, that bug only takes out mailing list processing. Which is good, because the bug was probably made by the team who owns mailing list processing. And they can roll back their code and be on their way, with nobody else needing to know or care.


Generally, when people argue in favor of a monolith over microservices, it's not for completely/mostly isolated business functions (e.g. BI pipelines vs. CMS CRUD); it's more for when responding to a single request or group of requests already implicates many services that must all work together to respond. In that case, you're still smoked if any one of the services handling part of the request chokes; in fact, you're multiplying your opportunities for failure by using microservices.

Monoliths should be stateless (if achievable) and have no concept of partial success in cases where you want atomicity, unless everything is truly idempotent (easier said than achieved). If those criteria are met, callers just need to retry on failure, which can be set up basically for free in most frameworks.
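
A rough sketch of the retry-on-failure idea, assuming the call is idempotent (the attempt count and backoff are invented for illustration):

    package client

    import (
        "fmt"
        "net/http"
        "time"
    )

    // callWithRetry retries an idempotent GET with simple linear backoff.
    func callWithRetry(url string, attempts int) (*http.Response, error) {
        var lastErr error
        for i := 0; i < attempts; i++ {
            resp, err := http.Get(url)
            if err == nil && resp.StatusCode < 500 {
                return resp, nil
            }
            if err != nil {
                lastErr = err
            } else {
                resp.Body.Close()
                lastErr = fmt.Errorf("server error: %d", resp.StatusCode)
            }
            time.Sleep(time.Duration(i+1) * 200 * time.Millisecond)
        }
        return nil, lastErr
    }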

If you're pushing fatal recurring bugs into production, then that is a separate problem wider than the scope of a monolith vs. micro.


If you have a guaranteed way to avoid pushing fatal bugs to production it doesn’t MATTER what your architecture is. You’re in some fantasy land of rainbows and kittens where everything works fine first time every time.

For those of us in the real world who can’t afford perfection, the ability to isolate the impact of the inevitable bugs that do sneak through has some appeal.

As does the fact that exhaustively testing a microservice in a realistic timeframe is a much more tractable problem than exhaustively testing a monolith, which reduces the risk that such bugs will ship in the first place.

Bugs are less likely to ship. And when they do they will have a more limited blast radius. And when they’re detected they can be mitigated more quickly.

Those all sound like great benefits to me.


> If you have a guaranteed way to avoid pushing fatal bugs to production it doesn’t MATTER what your architecture is.

Bugs aside, the architecture does matter, and it matters a lot.

Whether it is a single coarse-grained deployment (i.e. a monolith) or a fine-grained deployment (modular services or microservices), a solution has a number of technical interfaces. Those interfaces broadly fall into low and high data volume (or transaction rate) categories. The high data volume interfaces might have a sustained high data flow rate, or they can have spikes in the processing load.

A coarse-grained architecture that deploys all of the technical interfaces into a single process address space has the disadvantage of being difficult or costly (usually both) to scale. It does not make sense to scale the whole thing out when only a subset of the interfaces requires extra processing capacity, especially when the demand is irregular but intense when it happens. Most of the time, a sudden data volume increase comes at the expense of the low volume interfaces, which get suffocated as the high volume interfaces devour all of the CPU time allotted to the solution as a whole. Low volume interfaces might have lower processing rates, yet they may still perform a critical business function, an interruption to which can cause cascading or catastrophic failures that severely impair the business mission.

The hardware (physical or virtual) resource utilisation is much more efficient (cost-wise as well) when the architecture is more fine-grained, and scaling becomes a configuration-time activity, which is even more true for stateless system designs. Auto-healing is a bonus (a service instance has died, got killed off and a new instance has spun up – no-one cares and no-one should care).


The original statement was "service is not answering for a certain amount of time". If the instance of your monolith is not responding you're probably already in a bad state and can reasonably kill it.


What are you monitoring your monolith for? For microservices you can monitor specific metrics related to the exact function, and perform health checks and scaling events accordingly.

For monoliths you can't be as specific. “Is the response a 500” doesn’t really cut it. “Average request latency” doesn’t cut it for scaling when some of your queries are simple reads and others are completely unrelated mass joins.
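
For example, a dedicated service can export a metric tied to its exact function, something an autoscaler or alert rule can key on directly. A rough sketch, assuming Prometheus and an invented mailing-signup queue:

    package main

    import (
        "net/http"

        "github.com/prometheus/client_golang/prometheus"
        "github.com/prometheus/client_golang/prometheus/promhttp"
    )

    // An invented, function-specific metric: the number of signups
    // waiting to be processed by this one service.
    var queueDepth = prometheus.NewGauge(prometheus.GaugeOpts{
        Name: "mailing_signup_queue_depth",
        Help: "Number of signups waiting to be processed.",
    })

    func main() {
        prometheus.MustRegister(queueDepth)
        queueDepth.Set(0) // a real worker loop would keep this up to date

        // Scrape endpoint; scaling rules or alerts key on the metric above.
        http.Handle("/metrics", promhttp.Handler())
        http.ListenAndServe(":9090", nil)
    }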


Sure, but "if the instance of your monolith is not responding" probably means the app is down. That's only going to be true for a small subset of the microservices.


In a past job, the benefit of microservices was that some of the operations performed by the system were far more CPU-intensive than others. Having them in their own service that could be scaled independently led to lower overall hardware requirements, and made it much easier to keep the latency of the other services sensible.


You can scale monoliths independently too. Depending on the language, that means paying some additional memory overhead for unused code, but practically it's small compared to the typical amount of RAM on a server these days.


This post reminds me of exactly the balance I've been toying with. One particular service I work with has ~6 main jobs that are all related in some way but still distinct from each other. That could have been designed as 6 microservices, but there are services that do other things as well - it's not all contained in one giant monolith, so it's somewhere in the middle.

The software is going to be deployed at different locations with different scaling concerns. In some places, it's fine to just run 1 instance where it does all 6 jobs continuously. At other places, I anticipate adding parameters or something so it can run multiple instances of a subset of the jobs, but not necessarily all the jobs on every instance.
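
Something like a flag that selects which jobs a given instance runs - a hypothetical sketch, with the job names invented:

    package main

    import (
        "flag"
        "strings"
    )

    func main() {
        // One binary, several jobs; the flag picks which jobs this instance runs.
        jobs := flag.String("jobs", "all", "comma-separated jobs to run, or 'all'")
        flag.Parse()

        enabled := map[string]bool{}
        for _, j := range strings.Split(*jobs, ",") {
            enabled[strings.TrimSpace(j)] = true
        }

        if enabled["all"] || enabled["ingest"] {
            go runIngest() // placeholder for one of the ~6 job loops
        }
        if enabled["all"] || enabled["billing"] {
            go runBilling()
        }
        select {} // block while the job goroutines run
    }

    func runIngest()  {}
    func runBilling() {}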


No, this is not always possible.

- consider the case where the task is CPU intensive but not so critical that it should be allowed to eat into the resources of other parts of the code

- consider the case where the task needs some data loaded for it to work. I don't think it is a good idea to have that data loaded into the monolith.


Please elaborate on that. Without spawning a new monolith, how do you scale it? Add more resources?


You do spawn a new monolith. You make one group the CPU-intensive one and route that traffic there. Same concept as a microservice, except that it comes with a bunch of dead code. But the dead code is not that expensive, resource-wise, these days.
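
The routing itself is usually just a load balancer rule; as a hand-rolled sketch (the pool names and the /reports prefix are made up):

    package main

    import (
        "net/http"
        "net/http/httputil"
        "net/url"
        "strings"
    )

    func main() {
        // Two pools of the same monolith binary; one gets the CPU-heavy traffic.
        cpuPool, _ := url.Parse("http://monolith-cpu-pool:8080")
        defaultPool, _ := url.Parse("http://monolith-default-pool:8080")

        cpuProxy := httputil.NewSingleHostReverseProxy(cpuPool)
        defaultProxy := httputil.NewSingleHostReverseProxy(defaultPool)

        http.ListenAndServe(":80", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if strings.HasPrefix(r.URL.Path, "/reports") {
                cpuProxy.ServeHTTP(w, r) // CPU-intensive endpoints
                return
            }
            defaultProxy.ServeHTTP(w, r) // everything else
        }))
    }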


I see, so basically we are applying scaling, but instead of scaling the bottleneck as its own part we scale everything.

I somewhat fail to see how that saves much effort; routing setup sounds like a hassle.

What we're using at my work is just a monorepo with all services in it, which works pretty well, and we're like 7 BE devs.


You don't have to write an API layer, and you get type checking, among some other benefits. Is it a ton of savings? No, but I'd describe it as a significant amount of effort saved and lower complexity.


To be fair, with libraries the API layer can be essentially zero code, and you still get type checking. That's how it worked at my previous gig.
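
Roughly like this: the "API layer" is an ordinary function call and the compiler does the type checking. (The mailing-list names are invented; in a real codebase Subscriber/Subscribe would live in their own imported package.)

    package main

    import "fmt"

    // Stand-ins for what would be an internal mailing-list library package.
    type Subscriber struct{ Email string }

    func Subscribe(s Subscriber) error {
        fmt.Println("subscribed:", s.Email)
        return nil
    }

    // The "API layer" is a plain function call: no HTTP client, no
    // serialization, and a wrong argument type fails at compile time.
    func SignupAfterPurchase(email string) error {
        return Subscribe(Subscriber{Email: email})
    }

    func main() {
        _ = SignupAfterPurchase("user@example.com")
    }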


You run another instance of it.


So you have to take care of routing etc.

I see how it works, and I completely agree that to start out, going from PoC to first business implementation, a monolith is the way to go (unless the goal from the start is 100 million concurrent users, I guess).

But after that initial phase, does it really matter if you use one or the other? You can overengineer both and make them a timesink, or you can keep both simple. I do agree on things like network latency adding up, but being able to completely isolate business logic seems like a nice gain. But I'm also not talking about the real micro level (e.g. auth login and registration being different services), but more macro (e.g. security is one service, printing another (PDF, CSV, Word, etc.), BI another one).


Interestingly, Meta went with a Django monolith for their new app, and their goal was definitely on the order of 100 million concurrent users.


Which is perfectly fine - 100 million concurrent users aren't the same for app X and app Y, as the business logic the backend runs isn't the same either.

Not saying it can't handle everything as well. Just saying the modularity of microservices makes it, in my view, easier to handle large, complex real-time systems.

Maybe that's also something that comes with experience - as a rather "newish" guy (professional SE, so one level above Jr), I find microservices make it easier to work on our project.


That's just for the interface layer though.


>So you have to take care of routing etc.

The routing of just load balancing is much simpler than the routing of execution jumping between many microservices.

>You can overengineer both and make them a timesink

I agree, but a microservice architecture starts you out at a higher complexity.

>but being able to completely isolate business logic seems like a nice gain

That can also be done by having that business logic live in its own library.


> The routing of just load balancing is much simpler than the routing of execution jumping between many microservices

Not necessarily at all, e.g. using gRPC it's all self-discovered.
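
To be clear, by that I mean client-side name resolution: e.g. a dns:/// target that grpc-go resolves and load-balances over itself. A sketch, with the service name and port made up:

    package main

    import (
        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )

    func main() {
        // grpc-go resolves dns:/// targets itself and can round-robin across
        // whatever addresses the name returns; name and port are invented.
        conn, err := grpc.NewClient(
            "dns:///mailing-service.internal:50051",
            grpc.WithTransportCredentials(insecure.NewCredentials()),
            grpc.WithDefaultServiceConfig(`{"loadBalancingConfig":[{"round_robin":{}}]}`),
        )
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        _ = conn // hand conn to a generated client stub
    }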

> I agree, but a microservice architecture starts you out at a higher complexity.

Definitely

> That can also be done by having that business logic live in its own library.

That's true, having it in its own library is certainly a possibility -> but then it's also not that far off from micro/macro services anyway, except you deploy it as one piece. And basically this is my argument: if you're having it all as libraries, and you all work in a monorepo anyway, the only real difference between micro and mono is the deployment, and that with micro you _could_ independently scale up whatever the current bottleneck is, which we've used plenty of times.


The same logic holds true for microservices. Statelessness is the key. But most microservices implementations end up being distributed monoliths.


Doing microservices from the start is fine if you know what to expect. Having worked with massive monoliths, there are cons that people don't consider longer-term, and the longer you dig yourself in, the harder it is to pull yourself out.

Honestly I think the realistic advice should be to go monolith if you or part of your team aren't experienced with microservices or if your app is simple / you'd be overengineering it otherwise.

If you're starting a SaaS company, can envision the moving pieces, and will be growing your team quickly, doing microservices properly from the beginning can have a lot of benefits.

Just feels like another one of those dogmas people mindlessly scream on the internet all day without doing the cost/benefit analysis for each particular case.


> something that different teams can work on without stepping on each other's toes

In other words, different teams don't want to talk to one another. That's not really a good reason for having 'micro' services.

You can also self-heal monoliths. In fact, it's much easier to do that with a monolith.


Talking and synchronizing dependencies, tools, and releases are different things.


Talking is a superset of

> synchronizing dependencies, tools, and releases



