> Absolutely gross solution to a problem I've never faced in over a decade of professional development. .... None of this solves C's only REAL problem (in my opinion) which is the lack of dependency management.
I'd juxtapose these two sentences.
As I understand things, those who know Nix swear by it. You can declare a development environment which will provide the toolchain and the libraries you need to build your software.
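To illustrate what that declaration looks like, here's a minimal `shell.nix` sketch for a C project — the specific packages (`gcc`, `gnumake`, `zlib`) are my own illustrative picks, not something from this thread:

```nix
# Hypothetical shell.nix: declares a throwaway dev environment
# with a C toolchain and one example library.
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  buildInputs = [
    pkgs.gcc      # C compiler
    pkgs.gnumake  # build tool
    pkgs.zlib     # example library dependency
  ];
}
```

Running `nix-shell` in that directory drops you into a shell with those tools and libraries available, without installing anything system-wide.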
Some things do seem inelegant about Docker containers, though: building images with Dockerfiles feels fragile, and running containers adds high friction to accessing the build environment from the host machine.
Those downsides aside, AFAIU the VSCode devcontainers aim to provide that "wow, it just works" experience that the Nix people love, without the steep learning curve of Nix.
> None of this solves C's only REAL problem (in my opinion) which is the lack of dependency management.
I thought this too for a long time, but the more I'm exposed to languages with "proper" dependency management, the more I appreciate the C way of just copying external library sources into the project. (And I only consider libraries which make this easy — if they come with their own complex build system files they've already lost. Just give me a bunch of headers and source files, and maybe a readme with a list of configuration defines.)
> the more I'm exposed to languages with "proper" dependency management the more I appreciate the C way of just copying external library sources into the project
What's cumbersome about copying a couple of files and adding them to your build scripts? At least that way you have complete control over the entire update and build process, you can use the build system of your choice, and you know for certain that there are no 'surprises' lurking in your dependency tree. It also nudges you toward libraries with simple build requirements (which always pays off long term).
Updating third-party dependencies becomes much more cumbersome, though, as copy-pasting isn't really the most reliable way to update things. You lose all traceability with upstream, their code becomes much harder to distinguish from your own, and it also bloats your repositories considerably.
For me guix failed at simply being too slow to use, but that was years ago.
Nix, on the other hand — I loved the concept and idea, but it was just too much of a 'stop sign' followed by 'we don't do that kind of user activity here' to be usable. I tried for a few weeks to bend our wills together, but the system's will won and I walked away.
I suspect my experience with guix would have been remarkably similar if I hadn't been put off by the speed - I love scheme, but guile has always seemed the second slowest implementation ever, and I suspected that was the cause for it taking so long to do anything in guix.
Guix still isn't particularly blazing, but Guile is definitely not the bottleneck these days — it's actually a very fast Scheme implementation now. The problem with Guix's speed right now is that the binary substitute servers have slow networking, so your options end up being either compiling software from source or crawling through a slow download.
This is usually only a problem if you want to download something large or do a large update, though. If you're just downloading a small program, or you update your system frequently, it's quite reasonable to use.
As far as the "stop sign", I have never really run into that with Guix, and I did run into that with Nix. The fact that Scheme - unlike the Nix language - is not purely functional, I think encourages users to do what they want, even if it goes against the spirit of the functional package manager.
I'm not sure what you mean by "slow". Can you please clarify?
Please don't take this as "you're wrong" or to somehow invalidate your experience. In the two years of running Guix as my main system, the only time I've thought something was slow was in building Chromium from source (which took all night to compile). Everything else I never noticed. Certainly not downloads. Those were fine for me both in the US and in Europe. Your experience contrasts so sharply with mine, I wonder what the difference is?
I don't have exact numbers on hand, and it doesn't happen every time, but sometimes when I'm downloading binary substitutes they crawl along at 100-300 kb/s. I know my experience isn't unique, because I also see people complaining about it on the mailing lists.
Sometimes downloads are fine though. I think it's an issue of load on the servers that will cause downloads to be slow sometimes.