
Is that why containers started? I seem to recall them taking off because of dependency hell, back in the weird time when easy virtualization wasn't readily available to everyone.

Trying to get the versions of software you needed to use all running on the same server was an exercise in fiddling.



I think there were multiple reasons why containers started to gain traction. If you ask 3 people why they started using containers, you're likely to get 4 answers.

For me, it was avoiding dependency conflicts and making it easier to deploy programs (not services) to different servers without needing to install their dependencies first.

I seem to remember a meetup in SF around 2013 where Docker (was it still dotCloud back then?) described easier deployment of services as a primary use case.

I'm sure for someone else, it was deployment/coordination of related services.


The big selling points for me were what you said about simplifying deployments, but also the fact that a container carries significantly less resource overhead than a full-blown virtual machine. Containers really only work if your code runs in user space and doesn't need anything super low level (e.g. a custom TCP network stack), but as long as you stay in user space it's amazing.


The main initial draw for me was that it let me run many separate things without a) having to manage separate dependency sets, and b) having to statically allocate large chunks of memory to virtual machines - containers share RAM, and on an 8GB machine at a couple of GB per VM you don't get far.


"making it easier to deploy" is a rather... clinical description for fixing the "but it works on my machine!" issue. We could go into detail on how it solved that, but imo it comes down to that.


There's a classic joke where it turns out the solution to "it works on my machine" was to ship my machine


my view of docker, as someone who thought it was a shallow wrapper on linux namespaces, is that it was a good fit for the average IT shop to solve deployment friction

no more handmade scripts (or worse, fully manual operations), just stupid-simple Dockerfiles that any employee can understand and that groups can organize around

docker-compose tying services into their own subnet was really a cool thing though


Still not the case today that anyone would understand them, at least in the part of the country where I live.


This matches my recollection. Easily repeatable development and test environments that would save developers headaches with reproduction. That then led logically to replacing Ansible etc. for the server side with the same methodology.

There were many use cases that rapidly emerged, but this eclipsed the rest.

Docker Hub then made it incredibly easy to find and distribute base images.

Google also made it “cool” by going big with it.


iirc full virtualization was expensive (VMware) and paravirtualization was pretty heavyweight and slow (Xen). I think Docker was like a user-friendlier cgroups and everyone loved it. I can't remember the name, but there was a "web hosting company in a box" software that relied heavily on LXC and probably was some inspiration for containerization too.

edit: came back in to add reference to LXC, it's been probably 2 decades since i've thought about that.


LXD?


Heh til LXD is not a typo. Thanks :)


On a personal level, that's why I started using them for self-hosting. At work, I think the simplicity of scaling from a pool of resources is a huge improvement over having to provision a new device. I'm currently on an on-prem team, and even moving to Kubernetes without going to the cloud would solve some of the more painful operational problems that page us or that we have to meet with our prod support team about.


Yes, totally agree that's a contributor too. I should expand that by namespaces I mean user, network, and mount table namespaces. The initial contents of those are something you would have to provide when creating the sandbox. Most of it is small enough to be shipped around in a JSON file, but the initial contents of a mount table require filesystem images to be useful.
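
For the curious, here's roughly what asking the kernel for those namespaces looks like from Go's syscall package - a minimal sketch, Linux-only, and emphatically not Docker's actual code; a real runtime would go on to populate the new mount namespace from a filesystem image, which is exactly the part that needs images shipped around:

    package main

    import (
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        cmd := exec.Command("/bin/sh")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        cmd.SysProcAttr = &syscall.SysProcAttr{
            // Ask the kernel for fresh hostname, PID, mount, and user namespaces.
            Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID |
                syscall.CLONE_NEWNS | syscall.CLONE_NEWUSER,
            // Map the invoking user to root inside the new user namespace.
            UidMappings: []syscall.SysProcIDMap{{ContainerID: 0, HostID: os.Getuid(), Size: 1}},
            GidMappings: []syscall.SysProcIDMap{{ContainerID: 0, HostID: os.Getgid(), Size: 1}},
        }
        // A real runtime would now pivot_root into a filesystem image and set up
        // the rest of the sandbox; this toy just inherits the host's view.
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }

On kernels that allow unprivileged user namespaces, the uid/gid mapping is what lets this run without root.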


There are two answers to “why x happened”.

You’re talking about the needs it solves, but I think others were talking about the developments that made it possible.

My understanding is that Docker brought features to the server and desktop (dependency management, similarity of dev machine and production, etc.) by building a usability layer on top of Linux's namespacing capabilities.

Docker couldn't have existed until those features were in place, and once they existed it was inevitable that they would be leveraged.


And what was the reason for the dependency hell?

Was it always so hard to build the software you needed on a single system?


Because our computers have global state all over the place, and people like it, as it simplifies a lot of things.

You could see that history repeat itself in Python - "pip install something" is way easier to do than messing with virtualenvs, and it even works pretty well as long as the number of packages is small, so it was the recommendation for a long time. Over time, as the number of Python apps on the same PC grew, and as libraries gained incompatible versions, people realized it's a much better idea to keep everything isolated in its own virtualenv, and now there are tools (like "uv" and "pipx") which make that trivial to do.

But there are no default "virtualenvs" for a regular OS. Containers come closest. nix tries hard, but it is fighting an uphill battle - it goes very much "against the grain" of *nix systems, so every build script of every app you use needs to be updated to work with it. Docker is just so much easier to use.

Golang has no dynamic code loading, so a lot of the time it can be used without containers. But there is still global state (/etc/pki, /etc/timezone, mime.types, /usr/share/, random Linux tools the app might call on, etc.), so some people still package it in Docker.
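
To illustrate that hidden global state, here is a small sketch (just an illustration - "Europe/Berlin" and example.com are placeholders) of a statically linked Go program that still reaches for /usr/share/zoneinfo and the system CA bundle at runtime:

    package main

    import (
        "fmt"
        "net/http"
        "time"
        // Uncommenting this embeds the tz database into the binary (Go 1.15+),
        // removing the /usr/share/zoneinfo dependency at the cost of binary size.
        // _ "time/tzdata"
    )

    func main() {
        // LoadLocation consults $ZONEINFO and /usr/share/zoneinfo at runtime
        // unless tz data has been embedded.
        loc, err := time.LoadLocation("Europe/Berlin")
        if err != nil {
            fmt.Println("no tzdata on this box:", err)
        } else {
            fmt.Println("local time in Berlin:", time.Now().In(loc))
        }

        // The TLS handshake loads root CAs from system paths such as
        // /etc/ssl/certs or /etc/pki/tls/certs.
        resp, err := http.Get("https://example.com")
        if err != nil {
            fmt.Println("https failed (likely no system CA bundle):", err)
        } else {
            resp.Body.Close()
            fmt.Println("https worked, system CA bundle found")
        }
    }

On a bare scratch image both calls fail until you copy in tzdata and CA certificates (or embed the tz database as in the commented-out import), which is one reason even "static" Go binaries often end up in a Docker image anyway.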


No. Back before dynamic objects, for instance, it was easier; of course, there were other challenges at the time.


So perhaps the Linux choice of dynamic linking by default is partly to blame for dependency hell, and thus for the rise of cloning entire systems to isolate a single program?

Ironically, one of the arguments for dynamic linking is memory efficiency and small executable size (the other is the ease of updating centrally, say if you needed to eliminate a security bug).


See, there's the thing: dynamic linking was originally done by Unixen in the '80s, way before Linux, as a way to cope with the original X11 on machines that had only 2-4MB of RAM.

X was (in)famous for memory use (see the chapter in 'The Unix-Haters Handbook'), and shared libs were the consensus on how to make the best of a difficult situation, see:

http://harmful.cat-v.org/software/dynamic-linking/


According to your link (great link BTW), Rob Pike said dynamic linking for X was a net negative on memory and speed and only had a tiny advantage in disk space.

My preference is to bring dependencies in at the source code level and compile them into the app - it stops the massive library-level dependency trees (A needs part of B, but because some other part of B needs C, our dependency tool brings in C, and then D, and so on).


This seems to have worked out well for the plan9 guys. It's just not a popular approach nowadays.



