Hacker News

You can scale monoliths independently too. Depending on the language, that means paying some additional memory overhead for unused code, but in practice it's small compared to the typical amount of RAM on a server these days.


This post reminds me of exactly the balance I've been toying with. One particular service I work with has ~6 main jobs, all related in some way but still distinct from each other. That could've been designed as 6 microservices, but there are services that do other things as well - it's not all contained in one giant monolith, so it's somewhere in the middle.

The software is going to be deployed at different locations with different scaling concerns. In some places, it's fine to just run 1 instance where it does all 6 jobs continuously. At other places, I anticipate adding parameters or something so it can run multiple instances of a subset of the jobs, but not necessarily all the jobs on every instance.
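A minimal sketch of that kind of parameterization (hypothetical job names and flag, not the actual service): each instance runs the same binary but can be told which subset of jobs to handle.

```python
import argparse

# Hypothetical registry of the service's distinct jobs.
JOBS = {
    "ingest": lambda: print("running ingest"),
    "index": lambda: print("running index"),
    "report": lambda: print("running report"),
}

def main(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--jobs",
        default=",".join(JOBS),  # default: one instance runs every job
        help="comma-separated subset of jobs this instance should run",
    )
    args = parser.parse_args(argv)
    selected = [name.strip() for name in args.jobs.split(",")]
    for name in selected:
        JOBS[name]()             # same code everywhere, different subset per instance
    return selected

if __name__ == "__main__":
    main()
```

So one deployment runs with no flags and does everything, while a scaled-out site runs several instances with `--jobs ingest` or `--jobs index,report`.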


No, this is not always possible.

- consider the case where the task is CPU intensive but not so critical that it should eat into the resources of other parts of the code

- consider the case where the task needs some data loaded for it to work. I don't think it is a good idea to have that data loaded into the monolith.


Please elaborate on that. Without spawning a new monolith, how do you scale it? Add more resources?


You do spawn a new monolith. You make one group the CPU intensive one and route that traffic there. Same concept as a microservice except that it comes with a bunch of dead code. But the dead code is not that resource expensive these days.
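A rough sketch of that split at the load balancer (hypothetical hostnames and paths, nginx-style config as one example): two pools run the same monolith binary, and only the traffic routed to them differs.

```nginx
# Two pools of the *same* monolith binary; only the traffic differs.
upstream general_pool {
    server app1.internal:8000;
    server app2.internal:8000;
}

upstream cpu_heavy_pool {
    server app3.internal:8000;   # scaled independently for the hot path
    server app4.internal:8000;
}

server {
    listen 80;

    # Route the CPU-intensive endpoint to its own group of instances.
    location /render {
        proxy_pass http://cpu_heavy_pool;
    }

    location / {
        proxy_pass http://general_pool;
    }
}
```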


I see, so basically we are still scaling, but instead of scaling the bottleneck as its own part we scale everything.

I somewhat fail to see how that saves much effort; routing setup sounds like a hassle.

What we're using at my work is just a mono repo with all services in it, which works pretty well, and we're like 7 BE devs


You don't have to write an API layer, and you get type checking among some other benefits. Is it a ton of savings? No, but I'd describe it as a significant saving of effort and lower complexity.


To be fair, with libraries the API layer can be essentially zero code, and still with type checking. That's how it worked when I was at my previous gig.
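A rough sketch of what that looks like (hypothetical names, not the actual codebase): the "API layer" is just a typed import, so the boundary is checked by mypy/the IDE rather than guarded by an HTTP client.

```python
from dataclasses import dataclass

# billing.py -- business logic shipped as a library, not a service.
@dataclass
class Invoice:
    subtotal_cents: int
    tax_rate: float

def total_cents(invoice: Invoice) -> int:
    """Pure business logic; callers get type checking for free."""
    return round(invoice.subtotal_cents * (1 + invoice.tax_rate))

# In the monolith this is a plain in-process call: no HTTP client,
# no serialization, and the types are checked at the boundary.
def checkout(subtotal_cents: int) -> int:
    return total_cents(Invoice(subtotal_cents, tax_rate=0.2))
```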


You run another instance of it.


So you have to take care of routing etc.

I see how it works, and I completely agree that to start out (going from PoC to first business implementation) a monolith is the way to go (unless the goal from the start is 100 million concurrent users, I guess).

But after that initial phase, does it really matter if you use one or the other? You can overengineer both and make them a timesink, or you can keep both simple. I do agree on things like network latency adding up, but being able to completely isolate business logic seems like a nice gain. But I'm also not talking about the real micro level (i.e. auth login and registration being different services), but more macro (i.e. security is one service, printing another (pdf, csv, word, etc.), BI another one).


Interestingly, Meta went with a Django monolith for their new app, and their goal was definitely on the order of 100 million concurrent users.


Which is perfectly fine - 100 million concurrent users aren't the same for app X and app Y, as the business logic the backend runs isn't the same either.

Not saying it can't handle everything as well. Just saying the modularity of microservices makes it, from my PoV, easier to handle large, complex real-time systems.

Maybe that's also something that comes with experience - as a rather "newish" guy (professional SE, so one level above Jr), it makes it easier to work on our project.


That's just for the interface layer though.


>So you have to take care of routing etc.

The routing of just load balancing is much simpler than the routing of execution jumping between many microservices.

>You can overengineer both and make them a timesink

I agree, but a microservice architecture starts you out at a higher complexity.

>but being able to completely isolate business logic seems like a nice gain

That can also be done by having that business logic live in its own library.


> The routing of just load balancing is much simpler than the routing of execution jumping between many microservices

Not necessarily at all, e.g. using gRPC it's all self-discovered.

> I agree, but a microservice architecture starts you out at a higher complexity.

Definitely

> That can also be done by having that business logic live in its own library.

That's true, having it in its own library is certainly a possibility -> but then it's also not that far off from micro/macroservices anyway, except you deploy it as one piece. And basically this is my argument: if you're having it all as libraries, and you all work in a mono repo anyway, the only real difference between micro and mono is the deployment, and that with micro you _could_ independently scale up whatever the current bottleneck is, which we've used plenty of times.



