
Single file dependencies are amazing. I've never understood why it's so unpopular as a distribution model.

They can be a bit clunky in some languages (eg. C), but even then it's nothing compared to the dystopian level of nightmare fuel that is a lot of dependency systems (eg. Maven, Gradle, PIP). Free vendoring is a nice plus as well.

Lua does it right; happy to see some Fennel follow in that tradition.


The main reason you don't see it that often is because of the "what if some extremely common library that we depend on indirectly 63 times at a total of 11 different versions discovers that four of those versions have a major security vulnerability" problem.

For hobby projects, vulnerable dependencies are usually a minor annoyance that's often less annoying than dealing with more elaborate dependency systems.

For big professional projects, not being able to easily answer "are we currently running any libraries with major known vulnerabilities" makes this approach a non-starter.


Cheers!

You mean in terms of having one centralized source of truth? I find this exact same problem with dependency systems as well; every project has its own dependency tracking file, and unless there is a deliberate attempt at uniting these into one trackable unit, you get this exact mess either way.

If the problem is automation (limiting human factors), then I'd say that whatever issue exists with single file dependencies is a tooling issue more than anything else. There's nothing about having one file that makes this any harder. In fact I'd say the opposite is true (eg. running checksums).

The one thing that dependency systems have going for them is homogenized standardization. I'd rather go install x than deal with whatever Ninja-generating-CMake-files-generating-Makefiles-that-generate-files-that-produce-a-final-file carnival rides linger from the past. Perhaps it's because of those scars that I like single dependency files even in / especially in larger projects.


Clarification question: When you say "dependency systems," are you referring to PIP/NPM/go get/Cargo or just the C/C++ clusterfuck?

Because under the hood "go install" (and NPM/PIP/Cargo) makes sure that if you indirectly depend on a library, you only have one version* of that library installed, and it's pretty easy to track which version that is.

That's the key difference: With "go install" you only have one version of your indirect dependencies, with single-file libraries, you have one version for each thing that needs them and have no real way of tracking indirect dependencies.

I'm not going to try to defend the C/C++ clusterfuck. Switching to single-file libraries would improve that because any change would improve that.

* Sometimes one version of each major release


I think this comment shows the difference between web development and other industries.

This problem doesn't exist in languages that traditionally use single-file libraries, because projects rarely get to the point where they use 63 libraries, let alone have 63 indirect dependencies on a single library.

Also: popular single-file libraries in the tradition of stb won't even have dependencies on other libraries of the sort, only on the standard library.


If your dependency graph is simple enough that you don't have any indirect dependencies, then it really doesn't matter what you use for dependency management.

Modern dependency management tools are designed for more complicated cases.


I would argue that it's the opposite. It's "more complicated cases" that are designed to fit modern dependency management.


If the goal of your dependency system is to reduce the number of dependencies that library authors include, isn't encouraging people to make single-file libraries counterproductive?


I don't see why it would be.


How would that be different if you have a source file split up into multiple files?

Having a list of what version you're using of a single file library seems like an easy problem to solve. If nothing else you could put the version number in the file name and in a debug build print off the file name.
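Something like this, as a rough sketch (the library name and version macro are invented for illustration):

    /* baz_v1_4_2.h -- hypothetical single-file library, version in the file name */
    #define BAZ_VERSION "1.4.2"

    /* application code */
    #include <stdio.h>
    #include "baz_v1_4_2.h"

    int main(void) {
    #ifndef NDEBUG
        /* debug builds report which bundled version is actually compiled in */
        printf("using baz %s\n", BAZ_VERSION);
    #endif
        return 0;
    }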


I'm not comparing single-file vs multiple files, I'm comparing single-file vs NPM/PIP/go get/Cargo

Let's say you depend on foo, which depends on 10 other libraries including bar, all of which depend on a library called baz. Then one day someone discovers an exploit for baz.

With npm, you only have one version of baz installed, and can easily check if it's a vulnerable version.

With single-file libraries, baz was built into a single file. Then bar was built into a single file containing baz. Then foo was built into a single file containing bar and other libraries, all of which included baz.

Now your library contains baz 10 times, all of which might be at different versions, and none of which you can easily check. (You can check your version of foo easily enough, but baz is the library with the exploit)


Like the other person said, you're mixing up single file libraries with having no package manager or dependency management.

That being said, in C and C++ the single-file libraries typically have no dependencies, which is one of the major benefits.
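For anyone unfamiliar, the usual pattern (the macro name here is stb_image's; other stb-style headers follow the same convention) is that the header carries both the declarations and the implementation, and exactly one translation unit opts into the latter:

    /* deps.c -- exactly one file in the project defines the implementation */
    #define STB_IMAGE_IMPLEMENTATION
    #include "stb_image.h"

    /* every other file just includes the header for the declarations */
    #include "stb_image.h"

No build system integration, no transitive dependency graph to resolve.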

Dependencies are always something that a programmer should be completely clear about, introducing even one new dependency matters and needs to be thought through.

If someone is blasting in new dependencies that themselves have new dependencies, and some of these overlap or are circular, and nothing is being baked down to stand alone, it's going to end in heartbreak and disaster no matter what. That's basically the opposite of modularity: a web of dependencies that can't be untangled. This applies to libraries and it applies on a smaller scale to things like classes.


If the goal of your dependency system is to discourage people from adding dependencies, then isn't supporting single-file libraries counterproductive because it makes it easier to add new dependencies?


By this twisted logic, you think dependencies should be more problematic and have dependencies of their own, so that they are as painful as possible and people don't add them? That's your scenario now, and you still have lots of dependencies.

Any dependency needs to be considered; there is no way around that. There is no reason to make it more painful just to make a programmer's life more difficult, and single-file libraries, especially those that have no dependencies themselves, are the best-case scenario.


> I'm not comparing single-file vs multiple files, I'm comparing single-file vs NPM/PIP/go get/Cargo

You are talking about your own thing. Everyone else is talking about the number of files, not the distribution and update mechanism.

The packaging system could support single files and still be able to track versions and upgrades. The JVM ecosystem is effectively single-file for many deps, especially now that jars can contain jars.


The comment I replied to was "They can be a bit clunky in some languages (eg. C), but even then it's nothing compared to the dystopian level of nightmare fuel that is a lot of dependency systems (eg. Maven, Gradle, PIP). Free vendoring is a nice plus as well."

How is that not talking about PIP?


I actually find them to be extremely popular in C and C++, and also in some Lua communities.

They just have an extremely vocal opposition.

This is not too dissimilar to static builds and unity builds, which also "make your life easier" but people will write tirades about how dangerous they are.

I wonder if C++ modules (which I'm loving) will also get the same treatment, since they make a lot of clunky tooling obsolete.


I think SQLite's amalgamation is one of the reasons SQLite is so popular for embedding.


I like to combine the two and put a lot of single file libraries into one compilation unit. It compiles fast and puts lots of dependencies in one place that doesn't need to be recompiled often.
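Roughly (these particular stb headers are just examples), a single deps.c along the lines of:

    /* deps.c -- all single-file dependency implementations live here,
       so this object file almost never needs to be rebuilt */
    #define STB_IMAGE_IMPLEMENTATION
    #include "stb_image.h"

    #define STB_IMAGE_WRITE_IMPLEMENTATION
    #include "stb_image_write.h"

    #define STB_TRUETYPE_IMPLEMENTATION
    #include "stb_truetype.h"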


It takes a tiny bit of lateral thinking in C.

What if "build" hacked all the source into a single text file, instead of hacking it all into a single archive of object files?

Roughly, write static in front of most functions, don't reuse names between source files, cat them together in some viable order.

Now you can do whatever crazy codegen nonsense you want in the build and the users won't have to deal with it. The sqlite amalgamation build is surprising, using the result is trivial.
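As a toy sketch of what the generated file ends up looking like (file and function names invented):

    /* mylib_amalgamated.c -- produced by the build, shipped to users as one file */

    /* --- begin util.c --- */
    static int clamp(int x, int lo, int hi) {  /* static: internal linkage, invisible to users */
        return x < lo ? lo : (x > hi ? hi : x);
    }
    /* --- end util.c --- */

    /* --- begin mylib.c --- */
    int mylib_scale(int x) {                   /* only the public API stays non-static */
        return clamp(x, 0, 255) * 2;
    }
    /* --- end mylib.c --- */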


I suspect much of this is a historical result of C's compilation model. Since C compilers define a compilation unit as a file, there is no good way to do incremental compilation in C without splitting the project into multiple files. In an era of much slower computers, incremental compilation was necessary, so this was an understandable choice.

For me, today this split is almost always a mistake. Having everything in one file is superior the vast majority of the time. Search is easy and it is completely clear where everything is in the project. Most projects written by a single individual will be fewer than 10K lines, which many compilers can clean-compile in less than one second. And I have reached the stage of my programming journey where I would rather not ever work on sprawling hundred-thousand-plus-line projects written by dozens or hundreds of authors.

If the single file gets too unwieldy, splitting it in my opinion usually makes the problem worse. It is like cleaning up by sweeping everything under the rug. The fact that the file got so unwieldy was a symptom that the design got confused and the single file is no longer coherent. Splitting it rarely makes it more coherent.

To make things more concrete and simple: for me, the following structure is strictly better (in Lua, like the OP)

     foo.lua:
       local bar = function() print "bar" end
       return function()
         print "foo"
         bar()
       end
 
compared to

    bar.lua:
      return function() print "bar" end
    foo.lua:
      local bar = require "bar"
      return function()
        print "foo"
        bar()
      end
In the latter, I now both have to keep track of what and where bar is, and switch files to see its contents or rely on fancy editor tricks. With the former, I can use vim, and if I want to remind myself of the definition of bar, it is as easy as `?bar =`. I end up with the same code either way, but it is much easier to view in a single file, and I can take advantage of Lua's scoping rules to keep module details local to the module, even from other modules defined in the same file.

For me, this makes it much easier to focus and I am liberated from the problem of where submodules are located in the project. I can also recursively keep subdividing the problem into smaller and smaller subproblems that do not escape their context so that even though the file might grow large, the actual functions tend to be reasonably small.

That this is also the easiest way to distribute code is a nice bonus.


CL for performance and 'Lispiness'.

Scheme if you really enjoy recursion.

Clojure if you need the JVM and/or employment (this is me).

Racket if you drink the DSL kool aid.

Fennel (same author as Janet) for Lispy Lua.

Janet for Lispy C.


This is overly simplified to the point of being wrong.

Water spent on irrigating crops grown to raise livestock (eg. alfalfa, soy) accounts for some 70% or more of total water usage in regions where this type of farming is done.

In arid regions (MENA, Saudi Arabia, Iran, California etc), a lot of that water is aquifer water. Aquifers take centuries, sometimes millennia to fill up.

The consequences of emptying these are rivers drying up, native flora dying off, topsoil being eroded and so on. In some cases, Tehran and Mexico City being prime examples, the depletion is sufficient to cause structural changes in the ground leading to literal collapses of land.

Growing food an order of magnitude more efficiently means an order of magnitude less of these consequences.


Soy in particular is a huge water consumer; however, that is often the proposed alternative to meat consumption.

I'm not sure how you wouldn't agree with me that cows are a scapegoat based on this fact alone.


Because the numbers are an order of magnitude different? [0]

Water needed to produce 4 oz of soy beans: 64 gallons

Water needed to produce 4 oz of beef: 463 gallons

Your points about the carbon cycle are well-taken but you're ignoring the trophic pyramid for some reason. [1] Or at least I find it hard to believe you could know about one without knowing about the other.

[0] https://watercalculator.org/water-footprint-of-food-guide/

[1] https://en.wikipedia.org/wiki/Ecological_pyramid


Because you can eat the soy directly and remove the additional land loss, water use, soil compaction, acidification, storage cost, transport cost, cooling cost, butchering cost, shipping cost etc. that comes with introducing another link in the food chain.

Feed conversion ratio for beef is something like 1:6-10, and that's ignoring everything above.


And it's been successfully replicated in vastly different places like India and the Netherlands.


Where in India? I’d love to explore if it’s nearby where I live


Afforestt has worked all over the world, but based in India: https://www.afforestt.com/results

In Kerala, Crowd Foresting also does a lot: https://www.crowdforesting.org/


At what scale are you looking? We have used Miyawaki to create a microclimate around our house, perhaps 120m^2, in a tier-3 town in Andhra. I have heard someone say the minimum is around 10m^2, in order to have room for proper diversity.


My suspicion is that all types of work are like this: a universal issue where quality and forethought are at odds with quantity and good enough (where good enough trends towards worse over time).

Before SE I had a bunch of vastly different jobs and they all suffered from something akin to crab bucket mentality where doing a good job was something you got away with.

I've had jobs where doing the right thing was something you kept to yourself or suffered for.


I see error handling as the biggest culprit here.

When I use dynamically typed languages, it's not necessarily the lack of types that make me write code quicker, it's the fact that I can call anything from anywhere with a complete disregard for whether I should.

When I'm working in statically typed languages, esp. those with explicit error handling (Go, Rust, Haskell, Odin, Zig etc), I spend a lot of time thinking about edge cases and bugs because I am forced to, and I find myself thinking in terms of engineering rather than exploring.


The worst part of stories like this is how much potential there is in gaslighting you, the negative person, on just how professional and wonderful this solution is:

  * Information hiding by exposing a closed interface via the API
  * Isolated, scalable, fault tolerant service
  * Iterable, understandable and super agile
You should be a team player, isophrophlex, but it's ok, I didn't understand these things either at some point. Here, you can borrow my copy of Clean Code, I suggest you give it a read, I'm sure you'll find it helpful.


I suspect that this is the more common opinion, especially when the desired outcome is real world use.

Recursive descent is surprisingly ergonomic and clean if one gets the heuristics right. Personally I find it way easier than writing BNF and its derivatives as you quickly get into tricky edge cases, slow performance and opaque errors.
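A minimal sketch of the shape (grammar and names invented): one function per grammar rule, with precedence falling out of which function calls which:

    /* expr   := term   ('+' term)*
       term   := factor ('*' factor)*
       factor := NUMBER                 */
    #include <ctype.h>
    #include <stdlib.h>

    static const char *p;  /* cursor into the input */

    static void skip(void) { while (isspace((unsigned char)*p)) p++; }

    static long factor(void) { skip(); return strtol(p, (char **)&p, 10); }

    static long term(void) {
        long v = factor();
        skip();
        while (*p == '*') { p++; v *= factor(); skip(); }
        return v;
    }

    static long expr(void) {
        long v = term();
        while (*p == '+') { p++; v += term(); }
        return v;
    }

    /* parse("2 + 3 * 4") == 14 */
    long parse(const char *src) { p = src; return expr(); }

Error reporting is where the real work goes, but the structure stays this direct.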


Yeah, grammars seem easy at first, but they're full of arcane knowledge like writing your tokens with the right affinity and greediness, ensuring backtracking is performant, and making sure you'll get good error messages.

Same with parser combinators. Not until a bunch of trial and error do you build up the intuitions you need to use them in production, I think.

Despite two decades of using those, I've found it much simpler to write my own scanning or RD parser.


D was originally designed to require only one token of lookahead, but it turns out that to get the aesthetic I wanted, arbitrary lookahead was needed. That turned out to be very handy when adapting it to compile C code.

Much of the parser code, if you compare the code with the BNF grammar, is a 1:1 correspondence. Super easy to do.


Here's the GCC implementation of `std::function`:

https://github.com/gcc-mirror/gcc/blob/master/libstdc%2B%2B-...

I'm with OP here.


> Programmers with shallow understanding and lacking experience of Lisp (or straight bias against it) are missing out so much, it really is sad.

Most people I know who tried Lisp and didn't like it did so by writing Lisp without structural editing into a text file, starting a REPL, loading the entire file into the REPL, and calling the main function from the REPL.

It's difficult to explain just how absurdly slow and cumbersome that is, as it's the primary way developers work in other languages, to the point that what you're saying sounds like hyperbole.

For those that haven't experienced that difference:

Structural editing, when you have a homoiconic language that only has expressions, is like combining expert-level vim skills and vim macros with expert-level IDE snippets/refactor shortcuts, but requiring a fraction of the expertise, as the tooling is simply less complicated.

REPL-driven development is state-retaining hot reload as a first-class citizen, which makes regular compile cycles and the shenanigans you need to do to isolate your current code and make it run (eg. flags, rewriting main, writing tests) appear absurd in comparison.

By the time you add the features of Lisp systems on top (like condition restarts, runtime compilation, ASM inspection etc), most other languages feel like a big step back.

