I would argue that the traditional way to install applications (particularly servers) on UNIX wasn’t very compatible with the needs that arose in the 2000s.
The traditional way tends to assume that there will be only one version of something installed on a system. It also assumes that when installing a package you distribute binaries, config files, data files, libraries and whatnot across lots and lots of system directories. I grew up on traditional UNIX. I’ve spent 35+ years using perhaps 15-20 different flavors of UNIX, including some really, really obscure variants. For what I did up until around 2000, this was good enough. I liked learning about new variants. And more importantly: it was familiar to me.
It was around that time I started writing software for huge collections of servers sitting in data centers on a different continent. Out of necessity I had to make my software more robust and easier to manage. It had to coexist with lots of other stuff I had no control over.
It would have to be statically linked, everything I needed had to be in one place so you could easily install and uninstall. (Eventually in all-in-one JAR files when I started writing software in Java). And I couldn’t make too many assumptions about the environment my software was running in.
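In practice that meant a workflow along these lines (names and paths are made up, but the shape is the point):

```
# Statically link so the binary carries its own libraries
# (exact flags vary by platform and libc):
cc -static -o myserver main.c

# Or, later, one self-contained JAR with a declared entry point:
jar cfe myserver.jar com.example.Main -C classes .

# Either way the whole installation is one directory, so install
# and uninstall are trivial and leave no residue elsewhere:
mkdir -p /opt/myserver && cp myserver myserver.conf /opt/myserver/
rm -rf /opt/myserver
```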
UNIX could have done with a re-thinking of how you deal with software, but that never happened. I think an important reason for this is that when you ask people to re-imagine something, it becomes more complex. We just can’t help ourselves.
Look at how we reimagined managing services with systemd. Yes, now that it has matured a bit and people are getting used to it, it isn’t terrible. But it also isn’t good. No part of it is simple. No part of it is elegant. Even the command line tools are awkward. Even the naming of the command line tools fails the most basic litmus test (long prefixes that require too many keystrokes to tab-complete say a lot about how people think about usability - or don’t).
Again, systemd isn’t bad. But it certainly isn’t great.
As for blaming Python, well, blame the people who write software for _distribution_ in Python. Python isn’t a language that lends itself to writing software for distribution and the Python community isn’t the kind of community that will fix it.
Point out that it is problematic and you will be pointed to whatever mitigation is popular at the time (to quote Queen: “I've fallen in love for the first time. And this time I know it's for real”), and people will get upset with you, downvote you and call you names.
I’m too old to spend time on this so for me it is much easier to just ban Python from my projects. I’ve tried many times, I’ve been patient, and it always ends up biting me in the ass. Something more substantial has to happen before I’ll waste another minute on it.
> UNIX could have done with a re-thinking of how you deal with software, but that never happened.
I think it did, but the Unix world has a chronic case of "not invented here" syndrome, and a deep cultural reluctance to admit that other systems (OSes, languages, and more) do some things better.
NeXTstep fixed a big swath of these issues back in the mid-to-late 1980s. It threw out X and replaced it with Display PostScript. It threw out some of the traditional filesystem layout and replaced it with `.app` bundles: every app in its own directory hierarchy, along with all its dependencies. Isolation and dependency packaging in one.
(NeXT realised two things: this isolation is important, but it also has to be readable and user-friendly, so it replaced the traditional filesystem layout with something more readable. Fifteen years later, Nix learned the first lesson but forgot the second: it throws out the traditional FHS and replaces it with something less readable, which needs software to manage it. The NeXT way means you can install an app with a single `cp` command or one drag-and-drop operation.)
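To make the bundle idea concrete, here is roughly what its modern macOS descendant looks like (names invented for illustration; early NeXTstep bundles were organised a little differently):

```
# Everything the app needs lives under one directory tree:
Example.app/
  Contents/
    Info.plist           # metadata the OS uses to treat the tree as one app
    MacOS/Example        # the executable
    Frameworks/          # bundled dependencies (private copies)
    Resources/           # icons, UI definitions, localisations

# So "installing" really is one copy, and uninstalling is one delete:
cp -R Example.app /Applications/
rm -rf /Applications/Example.app
```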
Some of this filtered back upstream to Ritchie, Thompson and Pike, resulting in Plan 9: bin X and replace it with something simpler and filesystem-based; virtualise the filesystem, so every process effectively runs in a container with its own virtual view of it.
But it wasn't Unixy enough, so you couldn't move existing code to it. And it wasn't FOSS, and it arrived just as a just-barely-good-enough FOSS Unix for COTS hardware was emerging: Linux on x86.
(The BSDs treated x86 as a 2nd class citizen, with grudging limited support and the traditional infighting.)
I can’t remember NeXTStep all that well anymore, but the way applications are handled in Darwin is a partial departure from the traditional Unix way. Partial, because although you can mostly make applications live in their own directory, you still have shared, global directory structures where app developers can inflict chaos, sometimes necessitating third-party solutions for cleaning up after applications.
But people don’t use Darwin for servers to any significant degree. I should have been a bit more specific and narrowed it down to Linux and possibly some BSDs that are used for servers today.
I see the role of Docker as mostly a way to contain the “splatter” style of installing applications. Isolating the mess that is my application from the mess that is the system so I can both fire it up and then dispose of it again cleanly and without damaging my system. (As for isolation in the sense of “security”, not so much)
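You can see that containment at the command line: the app’s splatter stays inside the image, and disposal is one command per artifact (image and volume names here are hypothetical):

```
# All of the app's files, however messily laid out, live inside the image:
docker build -t myapp:1.0 .

# --rm discards the container's filesystem on exit; the host stays clean
# apart from the named volume holding the app's state:
docker run --rm -v myapp-data:/var/lib/myapp myapp:1.0

# Disposing of everything, cleanly:
docker rmi myapp:1.0
docker volume rm myapp-data
```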
> a way to contain the “splatter” style of installing applications
Darwin is one way of looking at it, true. I just referred to the first publicly released version. NeXTstep became Mac OS X Server became OS X became macOS, iOS, iPadOS, watchOS, tvOS, etc. Same code, many generations later.
So, yes, you're right, little presence on servers, but still, the problems aren't limited to servers.
On DOS, classic MacOS, on RISC OS, on DR GEM, on AmigaOS, on OS/2, and later on, on 16-bit Windows, the way that you install an app is that you make a directory, put the app and its dependencies in it, and maybe amend the system path to include that directory.
All single-user OSes, of course, so do what you want with %PATH% or its equivalent.
Unix was a multi-user OS for minicomputers, so the assumption was that the app would be shared. So, break it up into bits, and store those component files in the OS's existing filesystem hierarchy (FHS). Binaries in `/bin`, libraries in `/lib`, config in `/etc`, logs and state in `/var`, and so on -- and you can leave $PATH alone.
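You can still see that split in any modern package: ask the package manager where a package's files went and the scatter is right there (package name and output invented for illustration):

```
$ dpkg -L somepkg                  # or: rpm -ql somepkg
/usr/bin/somepkg                   # binary
/usr/lib/somepkg/libhelper.so      # private library
/etc/somepkg/somepkg.conf          # config
/var/log/somepkg/                  # logs and state
/usr/share/man/man1/somepkg.1.gz   # docs
```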
Made sense in 1970. By 1980 it was on big shared departmental computers. Still made sense. By 1990 it was on single-user workstations, but they cost as much as minicomputers, so why change?
The thing is, the industry evolved underneath. Unix ended up running on a hundred million times more single-user machines (and VMs and containers) than multiuser shared hosts.
The assumption that the machine would be shared turned out to be wrong. Shared machines are the exception, not the rule.
NeXT's key insight was to keep only the essential bits of the shared FHS layout, to embed all the dependencies in a folder tree for each app -- and then to provide OS mechanisms to recognise and manipulate those directory trees as individual entities.
Plan 9 virtualised the whole FHS. Clever, but hard to wrap one's head around. It's containers all the way down. No "real" FHS.
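For flavour, this is roughly how it looks from a Plan 9 shell, going from memory (paths and machine names invented): `bind` edits only the current process group's view of the tree, not a single shared filesystem:

```
# Splice directories into this namespace's /bin; nobody else sees it:
bind -a /386/bin /bin
bind -b $home/bin/rc /bin

# Graft another machine's filesystem into the local namespace:
import othermachine / /n/othermachine
```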
Docker virtualises it using containers. Also clever but in a cunning-engineer's-hacky-kludge kind of way, IMHO.
I think GoboLinux maybe made the smartest call. Do the NeXT thing, junk the existing hierarchy -- but make a new, more readable one, with the filesystem as the isolation mechanism, and apply it to the OS and its components as well. Then you have much less need for containers.
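From what I remember of GoboLinux, the layout is something like this (version numbers invented):

```
/Programs/Firefox/102.0/bin/firefox    # each program, each version, own tree
/Programs/Firefox/Current -> 102.0     # symlink selects the active version
/System/Index/bin/firefox -> /Programs/Firefox/Current/bin/firefox

# Legacy paths such as /usr/bin still exist, but only as symlinks into
# the readable hierarchy, so unmodified Unix software keeps working.
```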
I agree with you that the issue is packaging. And having developers try to package their own software is part of the problem, IMO. They will come up with the most complicated build system to handle all scenarios, and the end result will be brittle and unwieldy.
There are also the overly restrictive dependency lists, because each dependency in turn is happy to break its API every six months.