Right, but those things are not unrelated. Back in the day, if you suggested to the average FOSS developer that maybe it should just be possible to download a zip of binaries, unzip it anywhere, and run it with no extra effort (like on Windows), they would say that that is actively bad.
You should be installing it from a distro package!!
What about security updates of dependencies??
And so on. Docker basically overrules these impractical ideas.
It’s still actively bad. And security updates for dependencies are easy to do when the dependency's developer is not bundling them with feature changes and actively breaking the API.
I was replying to a comment comparing the distribution of self-contained binaries to Linux package management. This is a much more straightforward question.
Containers are a related thing (as the GP comment says), but offer a different and varied set of tradeoffs.
Those tradeoffs also depend on what you are using containers for. Scaling by deploying large numbers of containers on a cloud provider? Applications with bundled dependencies on the same physical server? As a way of providing a uniform development environment?
> Those tradeoffs also depend on what you are using containers for. Scaling by deploying large numbers of containers on a cloud provider? Applications with bundled dependencies on the same physical server? As a way of providing a uniform development environment?
Those are all pretty much the same thing. I want to distribute programs and have them work reliably. Think about how they would work if Linux apps were portable as standard:
> Scaling by deploying large numbers of containers on a cloud provider?
You would just rsync your deployment and run it.
> Applications with bundled dependencies on the same physical server?
Just unzip each app in its own folder.
> As a way of providing a uniform development environment?
Just provide a zip with all the required development tools.
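To make that concrete, here is a minimal sketch of what each scenario would look like if apps were self-contained directories; all host names, paths, and archive names below are hypothetical:

```
# Scaling across machines: copy the directory and run it
rsync -a ./myapp/ deploy@web1:/opt/myapp/
ssh deploy@web1 '/opt/myapp/bin/myapp --config /opt/myapp/etc/app.conf'

# Several apps with bundled dependencies on one server: one folder each
unzip myapp-1.2.zip -d /srv/apps/myapp-1.2
unzip otherapp-3.0.zip -d /srv/apps/otherapp-3.0

# Uniform dev environment: unpack the toolchain and put it on PATH
mkdir -p ~/devtools && tar -xzf devtools.tar.gz -C ~/devtools
export PATH="$HOME/devtools/bin:$PATH"
```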
> Those are all pretty much the same thing. I want to distribute programs and have them work reliably.
Yes, they are very similar in some ways, but the tradeoffs (compared to using containers) would be very different.
> You would just rsync your deployment and run it.
If you are scaling horizontally and not using containers, you are probably already automating provisioning and maintenance of VMs, so you can just use the same tools to automate deployment. You would also be running one application per VM, so you do not need to worry about portability.
> Just unzip each app in its own folder.
What is stopping people from doing this? You can use an existing system like AppImage, or write a Windows-like installer (Komodo used to have one). The main barrier as far as I can see is that users do not like it.
> Just provide a zip with all the required development tools.
Versus a container, you still have to configure it, and isolation can be nice to have in a development environment.
Versus installing what you need with a package manager, it would be less hassle in some cases, but this is a problem largely solved by things like language package managers.
Most Linux apps do not bundle their dependencies, don't provide binary downloads, and aren't portable (they use absolute paths). Some dependencies are especially awkward like glibc and Python.
It is improving with programs written in Rust and Go, which tend to a) be statically linked and b) be more modern, so they are less likely to make the mistake of using absolute paths.
Incidentally this is also the reason Nix has to install everything globally in a single root-owned directory.
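For what it's worth, the absolute-path problem is largely a convention; a relocatable launcher is a few lines of shell. A sketch with made-up names (and it obviously doesn't fix glibc version mismatches):

```
#!/bin/sh
# Resolve everything relative to wherever this tree was unpacked,
# instead of hard-coding /usr/lib or /usr/share paths.
HERE="$(cd "$(dirname "$0")" && pwd)"
export LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
exec "$HERE/bin/myapp" "$@"
```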
> The main barrier as far as I can see is that users do not like it.
I don't think so. They've never been given the option.
> Most Linux apps do not bundle their dependencies, don't provide binary downloads, and aren't portable (they use absolute paths).
That is because the developers choose not to, and no one else chooses to do it for them. On the other hand, lots of people package applications (and libraries) for all the Linux distros out there.
> I don't think so. They've never been given the option.
The options exist. AppImage does exactly what you want. Snap and Flatpak are cross distro, have lots of apps, and are preinstalled by many major distros.
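The AppImage flow, for example, is exactly the "download a file and run it" experience (file name is hypothetical):

```
chmod +x ./SomeApp-x86_64.AppImage
./SomeApp-x86_64.AppImage
```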
Sure, it is not as edgy as Arch or something, but unless you have your own mirror, your stuff can be broken at any time.
To be fair, they are _usually_ pretty good about that; the last big breakage I've seen was that git "security" fix which basically broke git commands as root. There are also some problems with Ubuntu LTS kernel upgrades, but docker won't save you here; you need to use something like AMI images.
The irony is that the majority of docker images are built from the same packages and break all the same. But in your eyes, `apt install package` is bad but `RUN apt install package` inside a `Dockerfile` somehow makes it reproducible. I suspect you are confusing "having an artifact" with "reproducible builds" [1]. Having a docker image as an artifact is the same as having a tar/zip with your application and its dependencies, or a filesystem snapshot, or a VM image like AMI/OVM/VMDK. You can even have a deb file with all your dependencies vendored in.
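To make the comparison concrete: both of the following pull the same bits from the same mirrors, and neither is a reproducible build in the [1] sense; the image just freezes whatever the mirror served at build time (package name and tag are hypothetical):

```
# on the host
sudo apt-get install -y somepackage

# in a Dockerfile (passed on stdin here just to keep the sketch self-contained)
docker build -t somepackage-img -f- . <<'EOF'
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y somepackage
EOF
```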
If you are considering bare-metal servers with deb files, you compare them to bare-metal servers with docker containers. And in the latter case, you immediately get all the compatibility, reproducibility, ease of deployment, ease of testing, etc... and there is no need for a single YAML file.
If you need a reliable deployment without catching 500 errors from Docker Hub, then you need a local registry. If you need a secure system without accumulating tons of CVEs in your base images, then you need to rebuild your images regularly, so you need a build pipeline. To reliably automate image updates, you need an orchestrator or switch to podman with `podman auto-update` because Docker can't replace a container with a new image in place. To keep your service running, you again need an orchestrator because Docker somehow occasionally fails to start containers even with --restart=always. If you need dependencies between services, you need at least Docker Compose and YAML or a full orchestrator, or wrap each service in a systemd unit and switch all restart policies to systemd. And you need a log collection service because the default Docker driver sucks and blocks on log writes or drops messages otherwise. This is just the minimum for production use.
Yes, running server farms in production is complex, and docker won't magically solve _every one_ of your problems. But it's not like using deb files will solve them either - you need most of the same components either way.
> If you need a reliable deployment without catching 500 errors from Docker Hub, then you need a local registry.
Yes, and with debs you need a local apt repository.
> If you need a secure system without accumulating tons of CVEs in your base images, then you need to rebuild your images regularly, so you need a build pipeline.
Presumably you were building your deb with a build pipeline as well... so the only real change is that the pipeline now has to have a timer as well, not just run "on demand".
> To reliably automate image updates, you need an orchestrator or switch to podman with `podman auto-update` because Docker can't replace a container with a new image in place.
With debs you only have automatic updates, which is not sufficient for deployments. So either way, you need _some_ system to push out new versions and monitor the servers.
> To keep your service running, you again need an orchestrator because Docker somehow occasionally fails to start containers even with --restart=always. If you need dependencies between services, you need at least Docker Compose and YAML or a full orchestrator, or wrap each service in a systemd unit and switch all restart policies to systemd.
deb files have the same problems, but here dockerfiles have an actual advantage: if you run a supervisor _inside_ docker, then you can actually debug this locally on your machine!
No more "we use fancy systemd / ansible setups for prod, but on dev machines here are some junky shell scripts" - you can poke at things locally.
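i.e. you can pull up the exact image that runs in prod and poke it (names are hypothetical):

```
docker run -d --name myapp-dev myapp:latest
docker exec -it myapp-dev sh    # wander around the same filesystem prod has
docker logs -f myapp-dev
```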
> And you need a log collection service because the default Docker driver sucks and blocks on log writes or drops messages otherwise. This is just the minimum for production use.
What about deb files? I remember bad old pre-systemd days where each app had to do its own logs, as well as handle rotations - or log directly to a third-party collection server. If that's your cup of tea, you can totally do this in the docker world as well, no changes for you here!
With systemd's arrival, the logs actually got much better, so it's feasible to use systemd's logs. But here is great news: docker has a "journald" driver, so it can send its logs to systemd as well... So there is feature parity there too.
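Concretely, it's one flag, and the container's logs land in the host journal next to everything else (names are hypothetical):

```
docker run -d --log-driver=journald --name myapp myapp:latest
journalctl CONTAINER_NAME=myapp -f
```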
The key point is there are all sorts of so-called "best practices" and new microservice-y ways of doing things, but they are all optional. If you don't like them, you are totally free to use traditional methods with Docker! You still get to keep your automation, but you no longer have to worry about your entire infra breaking, with no easy revert button, because your upstream released a broken package.
> "Dockerfile is simple", they promised. Now look at the CNCF landscape.
> with debs you need a local apt repository
No, you don't need an apt repository. To install a deb file, you need to scp/curl the file and run `dpkg`.
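The whole "deployment" is a sketch like this (file name is hypothetical):

```
scp myapp_1.2.3-1_amd64.deb server:/tmp/
ssh server 'sudo dpkg -i /tmp/myapp_1.2.3-1_amd64.deb'
# or, if the package declares dependencies that still need to be pulled in:
ssh server 'sudo apt-get install -y /tmp/myapp_1.2.3-1_amd64.deb'
```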
> Presumably you were building your deb with a build pipeline as well
You don't need to rebuild the app package every time there is a new CVE in a dependency. Security updates for dependencies are applied automatically without any pipeline; you just enable `unattended-upgrades`, which is present out of the box.
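On Debian/Ubuntu that amounts to roughly:

```
sudo apt-get install -y unattended-upgrades    # typically preinstalled on Ubuntu
sudo dpkg-reconfigure -plow unattended-upgrades
# which writes /etc/apt/apt.conf.d/20auto-upgrades:
#   APT::Periodic::Update-Package-Lists "1";
#   APT::Periodic::Unattended-Upgrade "1";
```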
> With debs you only have automatic updates, which is not sufficient for deployments.
Again, you only need to run `dpkg` to update your app. The preinst/postinst scripts and systemd unit configuration included in a deb package should handle everything.
> deb files have the same problems
No, they don't. deb files intended to run as a service have systemd unit configuration included, and every major distro now runs systemd.
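For reference, this is roughly what installing such a package lays down and enables; doing the same thing by hand (unit name and paths are hypothetical) looks like:

```
cat <<'EOF' | sudo tee /lib/systemd/system/myapp.service
[Unit]
Description=My app
After=network-online.target

[Service]
ExecStart=/usr/bin/myapp
Restart=always

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now myapp
```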
> but here dockerfiles have an actual advantage: if you run a supervisor _inside_ docker, then you can actually debug this locally on your machine!
Running a supervisor inside a container is an anti-pattern. It just masks errors from the orchestrator or external supervisor, and it usually messes with the logs.
> No more "we use fancy systemd / ansible setups for prod, but on dev machines here are some junky shell scripts" - you can poke at things locally.
systemd/ansible are not fancy but basic beginner-level tools to manage small-scale infrastructure. That tendency to avoid appropriate but unfamiliar tools and retreat into more comfortable spaces reminds me of the old joke about a drunk guy searching for keys under a lamp post.
> What about deb files? I remember bad old pre-systemd days where each app had to do its own logs, as well as handle rotations - or log directly to a third-party collection server.
Everything was there out of the box - a syslog daemon, the syslog function in libc, preconfigured logrotate, and logrotate configs included in packages.
There are special people who write their own logs, bypassing syslog; they are still with us, and they still write logs into files inside containers.
There are already enough rants about journald, so I'll skip that.
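For example, a package's rotation policy is just a file dropped into /etc/logrotate.d/, which the stock daily logrotate run picks up; writing the equivalent by hand (names are hypothetical) looks like:

```
cat <<'EOF' | sudo tee /etc/logrotate.d/myapp
/var/log/myapp/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
EOF
```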
> but you no longer have to worry about your entire infra breaking, with no easy revert button, because your upstream released a broken package.
Normally, updates are applied in staging/canary environments and tested. If upstream breaks a package, you pin it to a working version, report the bug upstream or fix it locally, and live happily ever after.
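And the "revert button" is ordinary apt machinery (the version string is hypothetical):

```
# roll back to the known-good version and freeze it there
sudo apt-get install -y --allow-downgrades myapp=1.2.3-1
sudo apt-mark hold myapp    # or pin it via /etc/apt/preferences.d/
```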