That would take forever - far too long in our case. We have to be able to make changes to the clients of our code because the teams responsible for them are busy achieving their goals.

IMO versioning is something you do when you cannot do the work of upgrading the clients yourself - it's a non-optimal response, and you take it only when it's unavoidable.



> That would take forever - far too long in our case.

It doesn't. It just doesn't force everyone to rush both clients and services to prod without a fallback plan. That's also known as competent engineering.

> We have to be able to make changes to the clients of our code because the teams responsible for them are busy achieving their goals.

Yes. That's what you want, isn't it? Otherwise as a service maintainer you'd be blocked for no reason at all.

And let's not even touch the topic of rolling back changes.

There is no way around it. Once you start to think through any of the rationale behind this monorepo nonsense, you soon realize it's a huge mess: a pile of problems you create for yourself that would otherwise be easy to avoid.


I'm afraid I haven't really understood the argument.

To put the opposite view: Versioning could be done for individual functions in the code, for example, but we don't do that - we just update all the places that call them.

We usually start to do versioning where there are boundaries - such as with components that we bring in from external sources or with projects that are maintained by separate teams.

A version is usually a response to a situation where you cannot update the clients that use the API yourself.

So monorepos can be seen as a way of just saying "actually I can update a lot of the points-of-use of this API and I should instead of creating a new version and waiting for someone else to finally use it."
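
Roughly what that looks like, as a minimal TypeScript sketch (made-up names, everything visible in one repo):

    // shared/price.ts - instead of adding a formatPriceV2 and waiting for
    // callers to migrate, the signature changes here...
    export function formatPrice(cents: number, currency: string): string {
      return `${currency} ${(cents / 100).toFixed(2)}`;
    }

    // checkout/cart.ts - ...and this call site is updated in the same change
    import { formatPrice } from "../shared/price";
    export const cartTotalLabel = formatPrice(1999, "USD");

    // emails/receipt.ts - as is this one, so no second version ever exists
    import { formatPrice } from "../shared/price";
    export const receiptLine = `Total: ${formatPrice(2499, "EUR")}`;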


> To put the opposite view: Versioning could be done for individual functions in the code, for example, but we don't do that - we just update all the places that call them.

I think you're a bit confused. There are code repositories, and there are units of deployment. Those are not the same things.

Storing multiple projects in a single repository does not magically make them a single unit of deployment, even when you deploy them all with a single pipeline. When you have multiple projects, you always have multiple units of deployment.

In your example, you focused on a scenario that does not apply: individual functions that are part of the same unit of deployment. Your example breaks down when you pack your hypothetical function into separate modules that are deployed and consumed independently. Even if you do not explicitly attach a version ID to a package, your module implicitly has different releases, with different versions of your code delivered at different points in time. If one of these deliveries has a breaking change, then your code breaks. Explicitly specifying a version ID, such as adding a Git submodule pinned to a specific commit, is a technique to preserve compatibility.
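
To make that concrete, here is a hypothetical TypeScript sketch (made-up names) of one of those implicit "versions" breaking a consumer:

    // The shared module as consumers were originally built against it:
    //   export interface User { name: string }
    //
    // The same module after a later, independently shipped delivery:
    export interface User { fullName: string }

    export function parseUser(json: string): User {
      return JSON.parse(json) as User;
    }

    // A consumer written against the earlier delivery still reads `name`;
    // running against the newer delivery it just gets undefined - a breaking
    // change nobody ever labelled with a version number.
    function greet(user: User): string {
      return `Hello, ${(user as { name?: string }).name}`;
    }

    console.log(greet(parseUser('{"fullName":"Ada"}'))); // "Hello, undefined"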

Where it is very obvious your example fails is when you look at scenarios involving distributed applications with independent units of deployment. This means things like a SPA consuming a backend service, or even a set of producers and consumers. Even if they are all deployed by the same pipeline, you either have forced downtime or you will always have instances of different versions running in parallel.

The SPA+backend case is a rather obvious example: even if you deploy the backend and the frontend at the exact same time, as an atomic transaction that ensures both are available at the precise same tick, don't you still have users whose browsers are running instances of the old SPA? They will keep it open until they hit F5, won't they? If you released a breaking change to the backend, what do you think will happen to the users still on the old SPA? Things will break, won't they?
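
The only way to not break them is for the backend to keep tolerating the old clients for a while, which is exactly the compatibility work the monorepo was supposed to make unnecessary. A minimal sketch, assuming an Express-style backend and made-up field names:

    import express from "express";

    const app = express();
    app.use(express.json());

    // Suppose a new SPA release renamed `userId` to `customerId` in this
    // request. Browsers still running the old bundle keep sending `userId`,
    // so the handler has to accept both shapes until those sessions are gone.
    app.post("/api/orders", (req, res) => {
      const customerId: string | undefined = req.body.customerId ?? req.body.userId;
      if (!customerId) {
        res.status(400).json({ error: "missing customer id" });
        return;
      }
      res.status(201).json({ ok: true, customerId });
    });

    app.listen(3000);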

Atomic deployments do not exist, however. Looking at services, you have no way to ensure that new versions of multiple services go live at precisely the same tick. This means that even with a monorepo you will always have different versions of those services running in parallel. Monorepo proponents fool themselves into believing this is not a problem because they count on the problems showing up only briefly during deployments and on the system eventually reaching a steady state. In the meantime that means things like erratic responses, failed distributed transactions, perhaps a few corrupt records going into the db, etc. If everyone pretends these problems are normal then there is no problem to fix.
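
The producer/consumer case is the same story: during a rollout both schema versions are in flight at once, so the consumer has to cope with both instead of assuming everything was "deployed together". A sketch with made-up message shapes:

    type PaymentV1 = { version: 1; amountCents: number };
    type PaymentV2 = { version: 2; amount: { cents: number; currency: string } };
    type PaymentEvent = PaymentV1 | PaymentV2;

    function record(cents: number, currency: string): void {
      console.log(`recorded ${cents} ${currency}`);
    }

    // Old producers keep emitting v1 events until the last of them is
    // replaced, so the consumer dispatches on an explicit version field.
    function handlePayment(event: PaymentEvent): void {
      switch (event.version) {
        case 1:
          record(event.amountCents, "USD");
          break;
        case 2:
          record(event.amount.cents, event.amount.currency);
          break;
      }
    }

    handlePayment({ version: 1, amountCents: 500 });
    handlePayment({ version: 2, amount: { cents: 750, currency: "EUR" } });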

Except this negates each and every hypothetical advantage of a monorepo, and undermines any argument supporting it. You are not actually eliminating problems: you are only buying forced downtime, no matter how small, and taking that hit out of willful ignorance. As a tradeoff, you are also buying yourself operational problems and a lack of flexibility due to the misuse of revision control systems.

And all for what? Because someone heard Google uses monorepos?


> Storing multiple projects in a single repository does not magically make them a single unit of deployment, even when you deploy them all with a single pipeline. When you have multiple projects, you always have multiple units of deployment.

It makes it easier to turn them into the same unit of deployment. There's nothing you cannot do some other way of course.

You're right about atomic deployments being difficult and sometimes one can control that risk by the order in which you change things. In a monorepo it's slightly easier to record some kind of script, makefile, or dependency system that says "deploy this before that".
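
Something as small as this is easier to keep right next to the code when it all lives in one repo (made-up service names, with a placeholder standing in for the real deploy step):

    // deploy.ts - runs the deployments strictly in order
    const deployOrder = ["database-migrations", "backend-api", "frontend-spa"];

    async function deploy(service: string): Promise<void> {
      console.log(`deploying ${service}...`); // placeholder for the real deploy call
    }

    async function main(): Promise<void> {
      for (const service of deployOrder) {
        await deploy(service); // "deploy this before that"
      }
    }

    main().catch(console.error);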

With browsers - for sure your user-level API has to be stable and cannot change in a sudden, incompatible way. Where people have seen fit to have layers of APIs underneath it, though, one can still have a lot of change that's theoretically hidden from users but still touches lots of APIs.


> It makes it easier to turn them into the same unit of deployment.

They are not the same unit of deployment. That's an impossibility.

This critical mistake is at the core of this monorepo nonsense. It's a cargo cult, where people believe that storing code for multiple projects in the same source code revision control system somehow magically turns distributed systems into a monolith and solves deployment issues. It does not.

> You're right about atomic deployments being difficult and sometimes one can control that risk by the order in which you change things.

No. That is false. Atomicity in a distributed transaction is not achieved by shuffling operations around, especially operations you cannot control.



