The simplicity of single-file Golang deployments (amazingcto.com)
189 points by KingOfCoders on March 22, 2023 | 225 comments


The fact that it produces a single static binary is one of the nicest things about golang.

This used to be easy with C (on BSD & Linux) a long time ago, but then everything started to depend on various shared libs, which then depend on other libs, then things started to even dlopen libs behind your back so they didn't even show up in ldd, etc. Sigh.


> The fact that it produces a single static binary is one of the nicest things about golang.

Not only that, it can also cross-compile for different architectures and operating systems.


I made a cool program for Go projects that will compile your code for all the supported OS and ARCH combos. Please try it :) I use it for everything I make now.

Just do `release --name "mycoolprogram" --version "0.1.0"`

and it will output all of your labeled release binaries for every platform your code supports.

check it out! [0] You can see it at work here for this simple markdown blog generator I made, which sports 40 different platform combos [1]

[0] - https://github.com/donuts-are-good/release.sh

[1] - https://github.com/donuts-are-good/bearclaw/releases/latest


looks a heck of a lot simpler than goreleaser! I love goreleaser but it gets outrageously complicated.


Thanks for checking it out :) I'm not going to pretend I haven't been refreshing the page waiting for bugs and comments


We used that to make a simple "VPN diagnostics" app for our helpdesk (the app checked connectivity and config of the machine, then displayed a summary page). The only thing that needed to be written per-OS was how to call a browser to display the report.


interesting. could you share what specifically the report had?


So I always assumed this was the case for other compiled languages. Is this something special in Golang and/or recently created languages?


I think the main thing that's different about Go compared to more old-school languages is that the Go binary distribution includes the cross-compilers right out of the box. So anybody who can run "go build" can also cross-compile their application -- and, importantly, all of its library dependencies -- for any architecture that Go supports.
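
For example (a sketch; the program name is hypothetical), building the same module for two other platforms from a single host is just a matter of setting two environment variables:

    GOOS=linux GOARCH=arm64 go build -o myapp-linux-arm64 .
    GOOS=windows GOARCH=amd64 go build -o myapp-windows-amd64.exe .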

On the other hand, if you want to cross-compile a C program using GCC, you need to separately build and install a complete gcc+binutils toolchain for every individual arch that you want to target. And you have to handle the dependency management yourself, which may be tricky if your dependencies' build scripts weren't designed with cross-compilation in mind.


Golang is special because pure Go programs form a closed ecosystem with their own linker (they typically don't link to libc or other C libs), which makes static linking very easy. Other languages can certainly do static linking, but it tends to be a bit challenging to set up on different OSes, due to the need to have all required C libs available as static libs and then instruct the linker appropriately.


No. Some bytecode-compiled languages require at least an archive to be unzipped or the virtual machine preinstalled.

Java is an example -- you'll want your system java deployment to match the stuff stashed in your jar.

Erlang is compiled but usually releases bundle the VM up with the project, so you typically deliver a full directory with the VM packed in there.


On top of this, with java, the jvm is itself dynamically linked to libc most of the time.


Not dynamically linking to libc is pretty unusual. The libc-equivalent parts are in the Go runtime, statically linked.


From what I have seen in major OSS projects like systemd and PostgreSQL, nothing seems to support static linking, to the point where some contributors get annoyed when you ask for it.

Seems like the C/C++ ecosystem will stay dynamically linked, even with a lot of the industry shifting towards statically linked, fat binaries as disk space is pretty cheap.

I wonder how much simpler Linux packaging would be if everything was statically linked...


Packaging would be simpler, but you'd probably have to update the whole OS and all your installed apps when a security update for a common library is released.


Not wrong; at the same time, I'm becoming less convinced this is relevant. Seems many security updates have to do with code paths that are not used in a large number of applications. Similarly, many of them are fixes on features that even more apps didn't want/need, but they got when they updated to get the last round of security fixes. :(

The docker world is a neat example of this. The stories of ${absurdly high percent} of containers having vulnerabilities only get to claim that because of "unpatched libraries." If you reduce it to the number of containers with exploitable vulnerabilities, it is still non-zero, but not nearly as high.


The problem is having to wait for all those applications to push updates when a vulnerability is found in a common library


The solution to this is to make the build process as simple as possible. Then pushing updates has a much lower maintenance cost.

I can be confident I will be able to build >90% of go programs by cloning the repo and calling go build.


That is the thing, though. For most applications, those vulnerabilities in a common library are not relevant. That is what I was talking about in saying that it is in code that isn't used by them.

You are, of course, right that it has the possibility of happening. But by and large, it doesn't happen there any more often than it does in the dynamically linked cases. And, by and large, the dynamically linked cases are often a lot more complicated to deploy. (Granted, deployment for either should be doable nowadays.)

The dream, of course, is that you just patch the library and call it a day. The reality usually seems to be a circus of checking every application you have deployed to see if it is impacted anyway.


Funnily enough, I was looking at an issue in one of my personal projects last night asking me to statically link libstdc++ - it's something I was interested in doing because I spent way too long spinning my wheels on an issue caused by loading an outdated libstdc++.so.6. But I read that statically linking can cause issues if your code loads another library that dynamically links libstdc++... which I'm pretty sure my project does (it's a pass-through between an executable and another library). So I want to, but it sounds like if I do, it might work but be fragile at best.


Docker images are static linking for the modern era.


Simpler but small updates, like say openssl, become massive distro updates. There's a reason why everyone went with shared libs when they became stable.


A system with shared libraries also needs an update for the security fixes. There is no avoiding the update step. The only difference is the size of the update. If the update process is robust, the size of the update shouldn't matter, should it?


The matter isn't the binary size of the update. It is the size of the crowd of people and parties involved in the update: For a bunch of statically linked applications, you need to involve all the application people/vendors. For a bunch of dynamically linked applications depending on a shared library, you (ideally) just need to involve the one library's people/vendor.

And "involve" might mean: wait for a bunch of unresponsive external parties to produce a new binary each for a security issue they might not care about. Of course on very different timelines, etc.


It does matter. If there is a problem with openssl, just the openssl maintainers have to push an update and everything on your system is secure.

If everything is statically linked, you need to wait for every maintainer of every single program on your system to rebuild and push an update. You're basically guaranteed that there is always _something_ missing important patches


IME, using musl, compiling static binaries written in C is as easy as it was before the glibc changes, and as it always has been on NetBSD. I compile static binaries written in C every day on Linux. I never encountered any problems compiling static binaries on BSD.


That’s great if your only dependency is libc. It gets progressively harder the more dependencies you need, and it becomes downright untenable the moment you run into some dependency with a dlopen-based plugin architecture.


Many of the static binaries I use rely on dependencies other than libc. Most are small programs. Out of personal preference, I generally try to avoid large programs with numerous dependencies. Exception might be something like ffmpeg. Static binary for the author is 21M but package from repository is 84M so the static binary uses less space. ffmpeg has many dependencies besides libc.


Can you link something substantial, like Firefox, statically on NetBSD?


Compiling Firefox on NetBSD (dynamically) takes longer than compiling the kernel, maybe even longer than compiling an entire base system. It's been a long time since I tried it but it just took far too long so I lost interest. Granted, the computers I use have limited resources. Anyway, I gave up on graphical browsers many years ago. It seems I prefer "unsubstantial" programs.


Graphical web browsers from "tech" companies (and their partners, like Mozilla) are too large, too complex, too difficult for the user to control. IMHO.


Very true. For Go, though, if you need CGO it's hard to make a single executable; otherwise it is great.

I run a few Go apps, all are single executables, upgrading to new releases has never been easier.

if you have a network oriented application, nothing beats golang as far as release|maintenance is concerned.


In the malware reverse engineering scene, there are a lot of forks of the upstream "debug" go library, because it allows loading, parsing, compiling and executing libraries from disk (rather than in-kernel or in-userspace) independently of dlopen.

And there's also "purego" as an implementation that directly generates shellcode.

Maybe those will help you, too?

I am just mentioning these because for my use cases those approaches worked perfectly, CGO free.

[1] https://github.com/Binject/debug

[2] https://github.com/ebitengine/purego


very useful, thanks


> CGO_ENABLED=1 CC=musl-gcc go build --ldflags '-linkmode external -extldflags=-static'

Unless you can't use musl for some reason, but that's a glibc problem, not Go


Some people are writing alternatives via WebAssembly.

For instance wazero which I'm really excited about.

https://tetrate.io/blog/introducing-wazero-from-tetrate/?utm...


This is generally only true with CGO_ENABLED=0.

I've found many times that a go binary, even if it has no cgo or cgo dependencies, will randomly require glibc on the target system to execute if you don't explicitly disable cgo in your build.
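
So if your program has no cgo dependencies, the reliable recipe (output name hypothetical) is to disable it explicitly:

    CGO_ENABLED=0 go build -o myapp .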


You can still do that with musl, can’t you? Can make a static c binary with that


[flagged]


Please don't start random language fights.


That's no language fight. Rust doesn't want batteries included. That's highlighting a difference between the languages.


Directly addressing the topic of an upvoted article is starting random language fights?


Where does the word Rust appear in the article?


In the comments about every article including Go, generally.


I can't speak for Go-centric articles in general, but for this one in particular, the only one (as of now, after 14 hours of comments) bringing up Rust is this single person with (apparently) a weird chip on their shoulder.


The article is about Go, and while the article mentions other languages for comparison, Rust is not one of them. So traceroute66's comment comes across as starting a Rust vs. Go fight where not appropriate.


I think the comparison is totally apt and illustrates why Rust and Go get compared very often. They are both languages that compile to a binary, rather than running in a managed environment.


Doesn’t come across as starting a fight to me. Not even close.


Not even, they're just expressing an opinion.


If only they would decide what to do with the plugin package, not use SCM paths for packages, and decide to eventually support enumerations instead of the iota dance (Pascal style would be enough).

Maybe 10 more years.


    type Kind enum {
        Simple,
        Complex,
        Emacs,
    }

    const kindStrings [Kind]string = {
        "simple",
        "complex",
        "emacs",
    }

    func (k Kind) String() string {
        return kindStrings[k]
    }

    func t() {
        var a = Kind.Emacs - Kind.Simple  // a has type int and value 2
        var b = Kind.Simple + Kind.Emacs  // type error
        var c = Kind.Simple + 1           // type error
        var d = len(Kind)                 // d has type int and value 3

        for k := range Kind {
            fmt.Printf("%v\n", k) // prints what you expect it to do
        }

        // not sure about legality or runtime behaviour of the following
        var t = Kind.Emacs
        t++
        t = Kind(42)        
    }
Would be nice, not gonna lie.


How is this any different from Go's existing enumerations, aside from the enumeration also creating an implicit list structure which, while kind of neat, isn't really a property of enumerations?

The parent is likely lamenting that Go doesn't enforce compile-time value constraints on enumerated sets like some languages do, but many other languages don't either. Not even Typescript does. If Typescript doesn't find it important to have such value constraints, Go most certainly never will.


> Go's existing enumerations

Go doesn't have enumerations. So the first difference would be that my enums would actually exist.

> implicit list structure which, while kind of neat, isn't really a property of enumerations

It absolutely is, or people wouldn't have been regularly writing enums like

    typedef enum {
        KindSimple,
        KindComplex,
        KindEmacs,
        Kind_NUM_VALUES
    } Kind;
which they do about half the time.

> likely lamenting that Go doesn't enforce compile-time value constraints

Yes, that's my complaint as well. Which is why there's that "not sure about legality" part in my example: you want to be able to enumerate the enum (duh), but with last_value_of_enum++ being illegal, writing a for-loop with "<" is illegal too; that's why there is support for it in for-range.

With incrementing (used in loops almost exclusively) taken care of, the rest of arithmetic on enums is meaningless in general, except maybe subtraction (when you use enums as keys/indices for a fixed-sized array), which is why I allow it — but it produces an int, of course.

As for what should happen in Kind(42) example — perhaps it could work like type-assertions?

    k, err := Kind(42)
    if err != nil {
        return err
    }


> Go doesn't have enumerations. So the first difference would be that my enums would actually exist.

That's obviously not true. That's what the iota dance is for.

    type Kind int

    const (
        KindSimple Kind = iota
        KindComplex
        KindEmacs
        Kind_NUM_VALUES   
    )
Go doesn't have a literal enum keyword, if that's what you mean, but enumerations aren't defined by having a specific keyword or specific syntax. Enumerations are a more general concept, defined as a set of named constants. The above is functionally identical to your example in C, among a host of other languages.

> Yes, that's my complaint as well.

Fine, but that's not a property of enumerations. If one wants support for value constraints, surely one should ask for that and not for something the language has had since the beginning?


Mmm. The original comment asked for "Pascal-style enums". Those are different from C enums: they are incompatible with integers and other enums, they don't support arithmetic operations (you have to use succ()/pred() built-ins) and they contain precisely the named values (calling e.g. succ() one too many times is supposed to be a runtime-detected range error). Plus, enums being ordinal types, you could use them as array indices in type definitions, like "array[Kind] of real" (so effectively, enum range checks and array boundary checks implemented by the same mechanism).

So that's what I went with, because I actually liked Pascal-style enums but thought they could be somewhat improved; what you've read are my ideas (Go is surprisingly close to Oberon and both lack enums).


The original comment asked for enumerations. Despite him not realizing, Go has those. It has a relatively unique syntax for defining enums, sure, but enumerations are not defined by specific syntax. They are a higher level concept that transcends any specific syntax implementation.

The original comment also hinted at wanting other features that some other languages have, including Pascal, although not stating which features specifically. I expect he was referring to wanting some kind of constraint system, which is the most common feature I hear requested of Go. But that's beyond the scope of enumerations.


Regarding the “iota dance,” I wrote my 2 cents on what could be a slightly more robust approach a couple of days ago: https://preslav.me/2023/03/17/create-robust-enums-in-golang/

P.S. Generic type sets make it even better. I'll write an update to my post one of these days.
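
For readers who haven't seen the post: one common shape of the pattern (a sketch, not necessarily the post's exact approach) is an iota enum plus a sentinel for runtime range checks:

    type Kind int

    const (
        KindSimple Kind = iota
        KindComplex
        KindEmacs
        kindEnd // sentinel, not a real value
    )

    // Valid reports whether k is one of the declared Kind values.
    func (k Kind) Valid() bool {
        return k >= KindSimple && k < kindEnd
    }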


From another point of view, "usable" isn't a word I'd associate with Go, especially not when there is an "unlike Rust" in the phrase. I've not had any issues of the sort when building out Rust applications. At this point the only time I touch Go is if I need to modify something someone else has made -- which I avoid at all costs.


Maybe AI will make it more palatable but I just can't get into Rust.

I can probably easily understand borrowing. It's mostly an issue of controlling pointer aliasing wrt mutability, especially in a multithreaded context I guess.

But that gottdarn syntax... And goroutines are too nice for the type of code I use.

I'm too spoiled with Go I guess.


That attitude assumes that the go standard library is the global maximum for Getting Stuff Done, when its backwards compatibility guarantees ensure that it's difficult to innovate on.

I think having a small standard library is actually a good thing because it encourages exploration of the space of possibilities.

For example Go’s stdlib http.HandlerFunc sucks, people instead opt for leaving it behind entirely in favour of Gin or trying to work around it with bad patterns like interface smuggling.


Using a crate is the opposite of reinventing the wheel. Also, are we talking about go? The language without a set implementation?


Parent is saying those crates themselves are reinventing the wheel, not talking about using a crate


So by that logic Golang reinvented the wheel by creating a new programming language when java already existed


Yes. Is reinventing the wheel always a bad thing, though? Not if you ask me, not if the new wheel has novel characteristics and performs better in some aspects.


Rust's philosophy (unlike Python's, for example) is not to include tons of stuff in the standard library; this way you can choose the best implementation for your specific use case from one of the many crates (which are super easy to install, by the way). There is no "always the best" implementation for a given piece of functionality, nor a legacy-but-still-officially-supported-in-std implementation that nobody uses anymore but still needs to be maintained with its own namespace.

I don't see this as negative or "reinventing the wheel". Reinventing the wheel would be writing your own implementation, which doesn't happen if you can choose from many high-quality crates.


"a legacy-but-still-officially-supported-in-std implementation that nobody uses anymore but still needs to be maintained with its own namespace."

The cardinality of the set of "nobody uses anymore" is usually in tens of millions.


If something is used by tens of millions, be sure that it will be updated even if legacy. Just not officially by the people who maintain the language.


> legacy-but-still-officially-supported-in-std implementation

There is massive value in this.


Not officially supported doesn't mean not updated or not supported in general.


Ah yes, the #1 thing golang is known for: a comprehensive stdlib. Like including a max function for integers, right?


When I was stuck doing a web application in Java 15 years ago, I hated everything about it except for the deployment story, which boiled down to a single .war file being pushed to the server.

When we upgraded to Perl, I liked that system so we designed deployment around "PAR" files in a similar way, bundling all of the dependencies together with the application in the CI build process, and I wrote a tiny bit of infrastructure that essentially moved a symlink to make the new version live.

Google uses MPM and hermetic packaging of configuration with binaries: https://sre.google/sre-book/release-engineering/#packaging-8...

The way I see it, Docker is basically this same thing, generalized to be independent of the language/application/platform. As a practical matter, it still fundamentally has the "one file" nature.

I don't see what's special or better about compiling everything into a single binary, apart from fetishizing the executable format. In any system at scale, you still have to solve for the more important problems of managing the infrastructure. "I can deploy by scping the file from my workstation to the server" is kind of a late 90s throwback, but golang is a 70s throwback, so I guess it fits?


> I don't see what's special or better about compiling everything into a single binary, apart from fetishizing the executable format.

When you distribute your software to other people, it cuts the step of installing the correct interpreter... at the cost of requiring the correct computer architecture.

It is very likely a gain.


Exactly - I primarily write Java and fat-jars are great when developing apps for environments I control. But if I want to send an app to a friend, it's a few additional steps to make sure they have the correct version of Java, paths are set up correctly, etc. This isn't always trivial if they already have a different version of Java and want things to play nicely side by side.

Just bundle everything into a native executable, so many little annoyances just disappear. From what I understand Java does have facilities to bundle the runtime now but I haven't had the opportunity to really play with it yet.


You can use Graal Native Image https://www.graalvm.org/22.0/reference-manual/native-image/ to produce a single native executable. Example of a Java Micro-service framework that has first class support for this is Quarkus (https://quarkus.io/). See https://quarkus.io/guides/building-native-image

For a plain Java app:

"Use Maven to Build a Native Executable from a Java Application" https://www.graalvm.org/22.2/reference-manual/native-image/g...


> From what I understand Java does have facilities to bundle

jlink, jpackage, and if you need something complex you can use conveyor: https://hydraulic.software/index.html


Or even if you need something simple :-) Sending little apps to friends is easy now just like it once was, the difference is that the friends don't need to know what runtime you use. Also not only for Java: Electron, Flutter and anything else works too.

We have an internal version of Conveyor that can be used to push servers as well. It bundles the jvm, makes debs with auto-scanned dependencies, systemd integration is automatically set up, it's easy to run the server with the DynamicUser feature for sandboxing and it uploads/installs the packages for you via ssh. We use it for our own servers. There seems to be more interest these days in running servers without big cloud overheads, so maybe we should launch it? It's hard to know how much demand there is for this sort of thing, though it gets rid of the hassle of manually configuring systemd and copying files around.


> But if I want to send an app to a friend it's a few additional steps to make sure they have the correct version of Java, paths are setup correct, etc

My friends would do full stop and reverse at "install Java" step. It will just not fly with 99% of people. It's not 1999 anymore.


AOT compilers for Java have existed for 20 years, even if only in commercial JDKs at enterprise prices (PTC, Aonix, Aicas, Excelsior, JRockit, WebSphere RT).

The alternative would be jlink, with GraalVM / OpenJ9 as free-beer AOT.


GCJ was a thing for a while, but was never really production-ready.


> It is very likely a gain.

Distributing python has to be one of the worst experiences I've had in the field.


> When you distribute your software to other people, it cuts the step of installing the correct interpreter... at the cost of requiring the correct computer architecture

Not even, as it's trivial to cross-compile with Golang. Then you just offer 3-4 arch binaries, and they download the one that matches their platform.


Also if you had some catastrophe where you're no longer able to build overnight or if you have to replace 100% of your infrastructure, you're still able to operate because you have a single compiled binary to ship.

It eliminates whole classes of business risk. The more hosts in your fleet the more risk eliminated as well.


> it cuts the step of installing the correct interpreter... at the cost of requiring the correct computer architecture.

Obviously this depends on the product but I'd give anything to worry about interpreters over the correct computer architecture in the M1/M2 Intel Embedded world.


You could take it a step further and make the user download a small stub Actually Portable Executable (https://justine.lol/ape.html) which downloads the real binaries.


Not convinced, not if you built for e.g. Java 8. I think there's a decent chance there are more people running something other than x86_64 nowadays (and that number's only going up) than people who don't have a JVM installed.


> I don't see what's special or better about compiling everything into a single binary, apart from fetishizing the executable format.

Indeed. If you think of the docker image itself as an executable format like PE or ELF, this becomes clearer. Rather than targeting the OS API, which has completely the wrong set of security abstractions because it's built around "users", it defines a new API layer.

> "I can deploy by scping the file from my workstation to the server"

I kind of miss cgi-bin. If we're ever to get back to a place where random "power users" can knock up a quick server to meet some computing need they have, easy deployment has to be a big part of that. Can we make it as easy to deploy as to post on Instagram?


> Indeed. If you think of the docker image itself as an executable format like PE or ELF, this becomes clearer.

But I don’t, because a docker image will not run without docker. A standalone, executable file can be distributed and deployed all by itself. A docker image cannot.


A standalone executable doesn't remain a standalone executable for very long, though.

You need something to handle its lifecycle and restart it when it dies. You need something to handle logging. You need something to jail it and prevent it from owning your system when it has a bug. You need something to pass it database credentials. You need something to put a cpu/mem limit on it. Not to mention that most executables aren't standalone but depend on system libraries.

A lot of that can be handled by systemd these days. But now you have a single standalone executable, its mandatory companion config files, and all its dependencies. Docker was designed to create a platform where the only dependency is Docker itself, and it does that job reasonably well.


That used to be apache and mod_* (mod_php, mod_python, mod_ruby, ...)

I've always wondered if it is possible to extend this system. Mod_docker?


"By itself" where? You're going to toggle it in on the front panel?

If you want to run an executable you have to have some kind of service (in the broad sense) set up to receive it and run it. Same with a docker image. Those services are, if anything, more readily available and standardised for docker than they are for executables.


AWS Lambda is basically cgi-bin. Except, of course, they re-branded it as an exciting new technology, which I think was a clever move on their part.


Correct me if I'm wrong, but cgi-bin things executed from scratch each time, IIRC.

Whereas lambda will stay running for a period of receiving requests.


That's correct - and a system called "fastcgi" was built to give persistent execution and avoid startup time.


> I kind of miss cgi-bin. If we're ever to get back to a place where random "power users" can knock up a quick server to meet some computing need they have, easy deployment has to be a big part of that. Can we make it as easy to deploy as to post on Instagram?

I'd be happy with a future that takes some cues from literate programming, where if you want to deploy some application, the way you do it is to upload a copy of the software manual/specification. This should be sufficient to "teach" the server how the application should behave.

It's tempting to say "Ah, sounds like super advanced ChatGPT-ops or something approaching real AGI", but what I have in mind is something decidedly less magical. It's more akin to the sort of thing that Rob Pike brought up in his "The Design of the Go Assembler" talk ("you have a machine readable description[...] why not read it with a machine?").

<https://www.youtube.com/watch?v=KINIAgRpkDA>


> the way you do it is to upload a copy of the software manual/specification. This should be sufficient to "teach" the server how the application should behave.

i.e. a program.

I've actually worked with someone who had a system for compiling the "human readable" side of the h265 specification to executables. This he compared against the "reference implementation" provided in C. As a result he filed a large number of bugs against the standard for cases where the two differed in behavior.

Writing an unambiguous specification is hard work regardless of whether you do it in C or in English or something else, and it's what happens when the surprising cases arise that matters.


> i.e. a program

If you want to think of it that way (as a way to be dismissive), sure. But I don't know anyone who when asking if some program foo has a manual would accept foo.git as an acceptable answer wrt the spirit of the question.

There's also the not so small matter of packaging/distribution. It's the entire point of the linked post. Stuff like PDF or ebook formats are well understood to be self-contained, which is what makes them something that you can trivially hand-off to someone else with about the same ease as a real book (e.g. attaching it to an email). Software deployments tend to work differently. That should change.

> Writing an unambiguous specification is hard work regardless of whether you do it in C or in English or something else

Right. And programmers already have to contend with this. But when I float this idea around, people like to point this out, as if it's not merely hard, but impracticably hard—to the point of the suggestion being ludicrous. And yet (to repeat myself but not to belabor the point), it's not as if they get to escape this by following current practices; that's something that programmers _already_ have to contend with.


What does cgi-bin get you? What does it run on? If I already have a single-file web server, do I need to introduce extra cgi-bin technology also?


The CGI mechanism let the web server call a separate local binary passing parameters of the request as envars/stdin in a specified manner. The "cgi-bin" was just a server side directory where those CGI binaries lived.

If I understand the GP's point, they like the idea of dropping a singular binary in a directory on a server and then it's magically available as an endpoint off the cgi-bin/ path.
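
The Go standard library still supports that model directly via net/http/cgi, so the dropped-in binary can be as small as this sketch:

    package main

    import (
        "fmt"
        "net/http"
        "net/http/cgi"
    )

    func main() {
        // cgi.Serve reads the request from the environment variables and
        // stdin set by the web server per the CGI spec, and writes the
        // response to stdout. One process per request.
        cgi.Serve(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintf(w, "Hello from %s\n", r.URL.Path)
        }))
    }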


For anybody comfortable with compiling the single-file binary ... it doesn't gain you anything. For a class of "power-end-users" it provides a mechanism to build sandboxed apps on a multi-tenant system. I think the spiritual successors split, though, between PAAS/heroku-like systems and "low-code" platforms.


It gains you multi-tenancy on the serving end, making infrastructure very cheap.


Apache supported it, back in the day before nginx existed. It gives you a single-file web page. You can then have multiple pages on the same server run by different users under different userids.

It also operates on a model of one execution run per request. So CGIs that aren't currently being served consume no resources.


It still does support it. In fact I use it on our company website to handle the contact form submission. It invokes a Kotlin script which reads the form, handles the recaptcha call and sends an email. Old school but it works and doesn't require any resources except when running.


Well, system-wise a Go app is just a binary that only needs network access; it could be run directly from systemd and just have all permissions set there.

Docker is a bunch of file mounts and an app running in separate namespaces. So an extra daemon, extra layers of complexity. Of course if you're already deploying other docker apps it doesn't really matter, as you'd want to have that one binary in a docker container anyway, just to manage everything from the same place.
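
A minimal sketch of such a unit (paths and names hypothetical), including a couple of the sandboxing knobs systemd offers:

    [Unit]
    Description=My Go app
    After=network.target

    [Service]
    ExecStart=/opt/myapp/myapp
    Restart=always
    DynamicUser=yes
    NoNewPrivileges=yes

    [Install]
    WantedBy=multi-user.target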


There is also the deployment part, which is easier (at least for an amateur dev such as myself).

I have a CI/CD template, run all my web stuff via a dockerized caddy reverse proxy, and do not need to touch the configuration of the host (to create .service & co. files).

I find deploying to docker just simpler.


Have you ever read https://medium.com/@gtrevorjay/you-could-have-invented-conta... ?

You were halfway there :-)


Excellent roast!


From the article:

"Standing here it looks like Docker was invented to manage dependencies for Python, Javascript and Java. It looks strange from a platform that deploys as one single binary."

Let me say the quiet part out loud: Docker is covering up the fact that we don't write deployable software any more.

Go isn't perfect either. The author isn't dealing with assets (images anyone?).

I think there is plenty of room for innovation here, and we're overdue for some change.


> Go isn't perfect either. The author isn't dealing with assets (images anyone?).

From the article: "The Go web application had all files like configurations (no credentials), static css and html templates embedded with embedfs (and proxied through a CDN)."

See https://pkg.go.dev/embed


OP here,

How are you doing caching without a mod time?

https://github.com/golang/go/issues/44854

Are you renaming or hashing for cache clearing?


I'm not the author of the post, so I can't tell you what the author does.

What I do in my projects is that I tell varnish-cache to cache assets in "/static/..." forever. And I have a "curl -X PURGE <varnish_endpoint>" as part of the "ExecStartPre=" of my go binary.

https://varnish-cache.org/docs/trunk/users-guide/purging.htm...

https://www.freedesktop.org/software/systemd/man/systemd.ser...
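
In unit-file terms (paths and purge endpoint hypothetical), that's roughly:

    [Service]
    ExecStartPre=/usr/bin/curl -X PURGE https://varnish.internal/static/
    ExecStart=/opt/app/app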


Clever solution! Why not make it part of the startup of your binary?


Sorry to reply a day later. GP here, it's a simple separation of concerns. IMHO the binary should not be aware that it is cached or how it's deployed. I separate "business logic" from "devops", and I consider purging the cache to be "devops". This is why I let systemd do it.

Of course, at the end it's a question of preference. People might disagree.


> Go isn't perfect either. The author isn't dealing with assets (images anyone?).

Go supports embedding assets since 1.16, the author mentions embedfs in the post.


And people have been using go-bindata since before Go 1.0, so Go has supported embedding assets since forever.


Assets of any kind can be embedded in the executable and accessed via the embed.FS interface. This makes it trivial to bundle up all dependencies if desired.


Of course, Visual Basic 2.0 and Delphi 1.0 both had embeddable filesystems. Even updateable embedded filesystems (which worked because the exe would really be a zip file. Zip files are indexed from the end of the file, so you can prepend the actual executable code and it still works. Zip files are updateable ...)

I believe after a while you also had sqlite-inside-the-exe things.


Embedding your assets like this isn't always an improvement. For example, I work on a site with a Go server and static content pages, and I like that I can update one of the pages and see the change instantly without having to re-compile the entire server binary just to get the new files included.


Easy enough to have the app check the regular file system first, then fall back to the embedded fs. You could have the best of both worlds.
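
A sketch of that fallback (assuming an embedded static directory):

    package main

    import (
        "embed"
        "log"
        "net/http"
        "os"
    )

    //go:embed static
    var embedded embed.FS

    // assetFS prefers the live ./static directory when it exists (handy in
    // development) and falls back to the compiled-in copy otherwise.
    func assetFS() http.FileSystem {
        if _, err := os.Stat("static"); err == nil {
            return http.Dir(".")
        }
        return http.FS(embedded)
    }

    func main() {
        http.Handle("/static/", http.FileServer(assetFS()))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }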


> we don't write deployable software any more

What did it use to look like exactly, this "deployable" software? Going back to the birth of web 2.0 we had Perl, PHP, Java(?), and .Net Framework a few years later. These all required tons of pre-configured infrastructure on the servers to run.

> It looks strange from a platform that deploys as one single binary

It's just a tool with many uses. I CAN deploy my Asp.Net app as a self-contained(even single) file.. But the size of updates is smaller between images if I copy the app into an image that already has the .Net and Asp.Net library code in the base layers.


You can do that without ever touching the system libraries?


I'm unsatisfied with the current situation too, but it's a hard problem. You can either go full container (which means no BSD, and having to deal with Docker or Kubernetes and all the associated woes), or fallback to native packages which are a huge PITA to build, deploy and use.

I think that Nix and Guix have part of the solution: have a way to build fully independent packages that can be easily installed. But I'm not comfortable with the complexity of Nix, and Guix does not run on FreeBSD. And ultimately you still have to handle distribution and configuration of the base system you deploy on.

Innovation is possible, but there are a lot of expectations for any system dealing with building and deploying software. I feel that there are fundamental limitations inherited from the way UNIX OS work, and I wish we had lower level operating systems focused on executing services on multiple machines in a way similar to how mainframes work. One can dream.


I'm really growing tired of significant development effort going to dealing with deployment on the parts of our stack that aren't written in Go. On the Go side, deployment is: replace binary, restart app, done. The Python and Javascript code we maintain takes significant effort to deploy, and builds can be brittle due to dependency issues.

> I feel that there are fundamental limitations inherited from the way UNIX OS work

There are, but the way go does deployments plays to Unix's strengths.


IMO if you're doing containers and cloud deployment then there's no point bothering with the OS layer. It'd be better to just build unikernels and deploy those directly. Some of the stripped down base images are going in this direction, and MirageOS looks pretty impressive although I've not been able to use it for real yet.


Just curious, what’s the benefit of using FreeBSD versus Debian/Ubuntu?


Not the person you replied to, but it tends to be more stable (commands and interfaces change less) and have better backwards compatibility; also ZFS is better-integrated than on Linux and better than any of the other options on Linux. (Jails used to be another advantage, but these days linux containers can more or less do most of the same things)


Anymore? When were we writing "deployable" software in the past?


I'll bite!

We used to ship software, on discs, and we didn't have the Internet to update it.

There is plenty of software out there that works via your linux distro's package manager. Note that this isn't everything you can get from your package manager. Plenty of things that are available are broken or miserable to get working unless you get the container/vm version.


I remember those days well. It really meant you had to be careful with bugs and documentation. It is not clear to me that we are winning with daily or multiple-times-daily release schedules at this point.


Were people careful with bugs and documentation? I remember the Internet blowing up one day because every Windows install was sending every IP on the Internet a virus, and there was nothing anyone could do about it. (And yes, Unix also had similar worms, though they largely predate me!) Word used to crash and corrupt your entire novel. There was no online banking. I'm not sure the rose-colored glasses are a realistic take on changing software quality.

Today, the tools are available to move quickly and maintain quality. You probably do what was months of manual testing every time you save a file, and certainly every time you commit a set of changes. There are fuzz testers to find the craziest bugs that no human could even imagine. There are robots that read your PR and point out common errors. HN really likes to pan software quality, but "I don't like this feature" is not a bug per se, just a company you don't like. There are a lot of those, but there are more lines of code than ever, and a lot more stuff works than 30 years ago. I think we, as a field, are getting better.


I ran a software company that predated the web. Yes, we were incredibly careful. When you ran the risk of having to send out updates on disks, QC was a big thing.


You used to buy a disk, put it on your computer and run the thing there.


Isn't that just "running" the software? To me, "deployment" implies some repeated process that actually wasn't especially valuable to automate or be very careful about when software was released once a year (and hence the reason it wasn't).

Also, disks are a horrible way to deploy software. They have all the same problems of just distributing a random tarball: What operating system is it for? What version? Where do I copy the files? How do I get the OS to automatically start the service on startup? What version of make does it use? How about which libc and cc? You can say this stuff in the README (or printed docs) but isn't something more "deployable" when it's all machine-readable and can be reasoned about automatically? This is what package managers were invented for.


My father told me a story about when he was in college, he had to find a book in the library catalogue with the program he wanted, order it and check it out, and then type it up and test/debug it. After that all his colleagues and professors wanted to borrow it as well. This was in the 70s.


A better question is what is deployable software? How does it contrast from non-deployable so we can understand what we're even talking about. Software gets "deployed" all the time so in what way is it currently non-deployable versus some rose tinted view of yesteryear's software?


Java never needed Docker; in fact, Docker / Kubernetes is now re-inventing Java Application Servers with WASM containers.


XML ruined Java.


Says the guy with a nickname from a language that has XML support on its type system.


Not back in the good days of VB6 when I adopted the nickname.


> Docker is covering up the fact that we don't write deployable software any more.

We do. It's just software now is a container image, and Docker is a development tool like Make.


I feel like i'm taking crazy pills (at a low dose) when i read this stuff.

I deploy Java applications. In a runnable condition, they aren't a single file, but they aren't many - maybe a dozen jars plus some scripts. Our build process puts all that in a tarball. Deployment comprises copying the tarball to the server, then unpacking it [1].

That is one step more than deploying a single binary, but it's a trivial step, and both steps are done by a release script, so there is a single user-visible step.

The additional pain associated with deploying a tarball rather than a single binary is negligible. It simply is not worth worrying about [2].

But Go enjoyers make such a big deal of this single binary! What am i missing?

Now, this post does talk about Docker. If you use Docker to deploy, then yes, that is more of a headache. But Docker is not the only alternative to a single binary! You can just deploy a tarball!

[1] We do deploy the JDK separately. We have a script which takes a local path to a JDK tarball and a hostname, and installs the JDK in the right place on the target machine. This is a bit caveman, and it might be better to use something like Ansible, or make custom OS packages for specific JDKs, or even use something like asdf. But we don't need to deploy JDKs very often, so the script works for us.

[2] Although if you insist, it's pretty easy to make a self-expanding-and-running zip, so you could have a single file if you really want: https://github.com/vmware-archive/executable-dist-plugin


Your footnotes basically invalidate your argument. You aren't just deploying a tarball, you also have to deploy the java runtime and make sure it's compatible with your application.

I agree that Go fans make too much of the single binary feature, but it does seem easier than your deployment process.

Of course your process is easy for you because you built it to fit your needs. But if you imagine a new developer who has no experience deploying either Java or Go applications, and consider what's easier to deploy without any previous knowledge or automation, I think you might agree the Go deployment options are simpler.


> you also have to deploy the java runtime and make sure it's compatible with your application.

That’s not much of a problem in practice though. The JDK is just a tarball as well. You can even combine it with your application tarball into one! (With the drawback that now you have to create one combined tarball per target platform.)


There are also some advantages to the tarball-of-multiple-jars approach - some cloud platform Java buildpacks superbly optimize the deployment process by only sending the differential jars - sometimes just the differential classes - which makes deployment 2x-3x faster than Golang's single big-bang executable approach.

In our company, which leverages both Java microservices and Golang microservices, the Java app deployment is much faster!


> Deployment comprises copying the tarball to the server

You must not scale servers up and down very frequently then.

> both steps are done by a release script, so there is a single user-visible step... we do deploy the JDK separately

Wait, so it's not really a single user-visible step. You have one user-visible step to deploy the application server, and a different user-visible step to deploy the JDK.

Look, there's a reason why this way is old-fashioned. If you bought the server outright (i.e. running on-prem/colo), and so it represents a sunk cost, and the usage is all well within the ceiling of what that server is capable of providing, then sure, that's an eminently reasonable setup. If that server is humming along for several years, and electricity/data center costs are cheap, you're probably even saving money.

But in most cloud-first architectures, if you're not scaling down on low-usage times, you're wasting money. Scaling up and down is much, much simpler with immutable infrastructure patterns, and it's much simpler to just replace the entire image - whether that's a VM, or a container, or something else - rather than replacing just the application.


> You must not scale servers up and down very frequently then.

Indeed we don't. But i don't see why it would be a problem if we did. If you can run a script to copy a Go binary to a VM when you scale up, you can run a script to copy a tarball and unpack it. If you're scaling based on a prepared image, then you can prepare the image by unpacking a tarball, rather than copying in one file.

> Wait, so it's not really a single user-visible step. You have one user-visible step to deploy the application server, and a different user-visible step to deploy the JDK.

Oh come on! If that matters to you, change the app deployment script to run the JDK deployment script first. Bam, one step.

> Scaling up and down is much, much simpler with immutable infrastructure patterns

Sure, and as far as i can see, this is completely orthogonal to having a single-file deployment. You haven't made any case at all for why having single-file deployment is valuable here.


> If you're scaling based on a prepared image, then you can prepare the image by unpacking a tarball, rather than copying in one file... as far as i can see, this is completely orthogonal to having a single-file deployment. You haven't made any case at all for why having single-file deployment is valuable here.

If you have a prepared image / immutable infrastructure pattern, that image is the single-file pattern. Container images are tarballs. Go isn't actually all that special here, if you compare apples to apples in containerized deployments: either you bake the standard library into the binary (which Go does), or you bake the standard library into the tarball (which the JDK forces you to do). Either way it's a single file.


I'm taking the same crazy pills.

We did this deployment pattern with a jar in, like... the early 2000s? It's trivial (well, maybe an annoying couple of hours to configure, but it's configured once and then done) in maven to build a megajar and add every single thing you need into one large jar. All resources, dependencies, etc.

And then deployment is, indeed, an rsync.


Related to [1], I thought modern Java deployment style is to bundle the required modules of the JDK with your app, rather than any concept of a "deployed JDK".

As it is, the difficulty of deploying a JDK + your app is much more than a single static binary, Go-style.


It depends on your requirements and environment. With most things I am working on the commands to deploy both look exactly the same:

  $ rsync -a foo server:/opt/foo
  $ rsync -a bar server:/opt/bar
Can you guess which one is a static binary written in Go, and which one is a directory with a slimmed down JRE produced by jlink + an application jar?

For those using containers, there's no practical difference between the two.


We also install the JRE separately, but each app is a single executable (by Java) jar, which stands up a jetty instance when run. We also add a yml for configuration.

It's a much better packaging & deploy story than frontend code or python.


Java comes the closest of any of the languages to Go’s static binary.


.Net? You can cross-compile self-contained binary and just drop it on the server.


twic, you just made the op's point.

I too miss the days where you can just ssh/ftp a file, and boom, it was live. (this was usually a php file back then).

It is such a great feeling to know what's going on at every step. As complexity has increased in general, so has the deployment process. The java steps you described were the beginning of more complex deployments back then (1999-2001)

And, yes, I agree with the author in this case. Golang, makes it super simple to deploy a web service.


Without any prior knowledge, you can run my little go program on your computer.

Can the same be said of your program for other people?


If you are distributing a tool to desktops, and not via a package manager, then i agree that the single binary is a genuine advantage. There are ways to get similar results with Java, packaging code and a JVM into a single file, but they aren't as simple.

But the original post we're discussing, and my comment on it, was about deploying to servers.


I feel like I agree with the general ethos of the project. And I am also a fan of pragmatism; I think Fred Brooks referred to our trade as "toolsmiths" and I feel it is an apt word. Our work product exists solely to fill a need or to enable things that were not previously possible. I feel like I work hard not to be an idealist or to view well-written code as an end in itself.

But I must confess,

> Systemd also restarts the app daily to make sure it works properly long term

leaves me with a viscerally negative feeling. I feel like daemons should be able to run for years unless there is some kind of leak. Maybe I am wrong.


You can see the daily restart as a restartability test. Programs expecting to run forever may develop "bad habits"


They should, but you don’t know if they will, and, when they suddenly crash after two years, whether they will be able to restart. Restarting daily ensures that any problems will be caught early, and that the last known-good configuration is only a day ago and not two years ago.


Restarting daily also ensures you never find entire classes of lurking bugs involving memory leaks, in-memory caches, stuff like that.


On the other hand, they also never become relevant then.


Static binaries and automatic code formatting (no debates on code format whatsoever) are two incredible qualities of Go that should be copied to every new language but for whatever reason are mostly left out.

And I'm not talking about "making a static binary in $LANG is easy, just follow these 7 steps…" — trust me, it's nothing like Go then.


How does one handle zero-downtime deployments with single-file golang binaries? I tried this setup some time ago and couldn't cleanly achieve zero downtime when deploying a new version of my service. The reason was mainly port reuse: I couldn't have the old and the new version of my service running on the same port... so I started to hack something together and it became dirty pretty quickly. I'm talking about deploying a new version of the service on the same machine/server the old version was running on.


Some of this is solved by using e.g. systemd, depending on your needs.

> I couldn't have the old and the new version of my service running on the same port...

You can, actually! You just can’t open the port twice by default. So one or both of the processes needs to inherit the port from a parent process, get passed the port over a socket (Unix sockets can transmit file descriptors), or use SO_REUSEADDR.

There are some libraries that abstract this, and some of this is provided by tools like systemd.

Some of this is probably going to have to be done in your application—like, once your new version starts, the old version should stop accepting new connections and finish the requests it has already started.


> Some of this is probably going to have to be done in your application...

FWICT tableflip does exactly this: https://github.com/cloudflare/tableflip


So can I just start another process with SO_REUSEADDR and gracefully shutdown the old process?

The master/worker thing that nginx / gunicorn et al do is pretty neat, but relies on signals; so seems pretty messy and error prone to write yourself.


You may have to start both processes with SO_REUSEADDR, I don’t remember the exact semantics.

People have a healthy skepticism of signals from the C days, but if we’re talking about Go, you’d just call signal.Notify. Any way of signaling your app to shut down works, though.

https://pkg.go.dev/os/signal@go1.20.2#Notify
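
A sketch of the receiving side: serve until signaled, then drain in-flight requests (port and timeout arbitrary):

    package main

    import (
        "context"
        "net/http"
        "os"
        "os/signal"
        "syscall"
        "time"
    )

    func main() {
        srv := &http.Server{Addr: ":8080"}
        go srv.ListenAndServe()

        // Block until the new instance (or the init system) signals us.
        sig := make(chan os.Signal, 1)
        signal.Notify(sig, syscall.SIGTERM, os.Interrupt)
        <-sig

        // Stop accepting new connections and finish the in-flight ones.
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()
        srv.Shutdown(ctx)
    }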


Same way you do with any other app not specifically designed for it; you start 2 copies of it and put a loadbalancer in front. I did that via some systemd voodoo

But TECHNICALLY, to do it in one process without an external proxy, you'd need to figure out how to set SO_REUSEPORT on the listening socket, then start the second instance before stopping the first.

Haven't actually tried it but someone apparently did: https://iximiuz.com/en/posts/go-net-http-setsockopt-example/

You'd still have any ongoing connections cut unless you unbind the socket and then finish the existing connections, which would be pretty hard with the default http server.

I just put HAProxy instance on my VPS that does all of that, including only allowing traffic once app says "yes I am ok" in healthcheck. Then the app can have "shutting down" phase, where it reports "I am down" on healthcheck but still finished any active connections to the client.


This doesn’t sound Go-specific, if you use something like haproxy targeting multiple nodes you can take them down one by one to perform a rolling upgrade.


Socket activation via systemd[0] is an option, assuming you are fine with certain requests taking a longer time to complete (if they arrive while the service is being restarted). Otherwise using a proxy in front of your app is your best bet (which has other benefits too, as you can offload TLS and request logging/instrumentation).

[0]: https://github.com/bojanz/httpx#systemd-setup
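
On the Go side, picking up the systemd-passed socket can be as simple as this sketch (it skips the LISTEN_FDS/LISTEN_PID sanity checks that e.g. github.com/coreos/go-systemd's activation package does for you):

  package main

  import (
    "net"
    "net/http"
    "os"
  )

  func main() {
    // systemd passes the first .socket fd as fd 3 (SD_LISTEN_FDS_START).
    ln, err := net.FileListener(os.NewFile(3, "from systemd"))
    if err != nil {
      panic(err)
    }
    http.Serve(ln, nil)
  }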


If you really don't want to use different ports you can handle it with Docker. Since each container has its own IP, they can all expose the same port. Otherwise, for non-containerized deployments you'll have to resort to two different ports.

In either case, you will need a reverse proxy like Traefik/Nginx in front to smartly "balance" incoming requests to the two instances of the service.


I guess this is a problem inherent not just in a single-file go app, but in any deployment where the whole stack is contained within a single process.

The post says the process starts up quickly enough that it being temporarily unavailable isn't noticeable - but what if the process _doesn't come back_? It's also impossible to do blue/green deployments this way.

It's clearly not a solution suitable to large-scale deployments. The simplicity has its trade-offs.


If you want to do deployments with single-file apps, or other "whole stack in a single process" setups, there are other ways to get zero downtime.

One good option would be to spin up a second server/instance/container, run the binary on the new system, ensure it's good, and once comfortable swap the DNS entry over to the new system.


Can't help with how to implement this, but just to be sure: You should be able to use the same port in multiple instances if you bind those with SO_REUSEPORT. A quick search points to https://github.com/libp2p/go-reuseport for an implementation. Now you just need a mechanism to drain the old process.


Rough pseudocode to do this with the built-in http.Server, where startServer(...) would use the reuseport library to create the listener so multiple servers can listen within the same process:

  func reloadConfig(config) {
    newServer, err := startServer(config)
    if err != nil {
      // keep the old server running if the new one failed to start
      return
    }
    // gracefully shut down the previous server;
    // no new connections will go to it
    oldServer.Shutdown(...)
    oldServer = newServer
  }


You don't. You can sort of emulate it with services that are stateless, load-balanced, and L7-proxied.

If you want stateful zero-downtime deployments, use Elixir or Erlang, which have the ability to live-migrate data from one version of the code to the next.


You can always share ports.

But the one sure way to do zero-downtime deployments is to have more than one server.


I've always run services behind a proxy. Spin up a new server with the code (works for any type of deployment). Validate it's up. Switch the proxy from the old to new server.


This is where the simplicity of single-file golang deployments falls short.

Just make sure you’re not slowly recreating bad, homebrew versions of all of the nice things that Kubernetes does in an attempt to turn a simple deployment into a production ready deployment.


I feel docker is, in many cases, a hack for languages and runtimes that don’t support single-file statically linked binaries.

Often a single binary is a simpler and better option than a docker container.


I more view it as us recognizing that there's more to "a system" than a binary. Kubernetes is this concept taken to its conclusion (since it defines everything in code, literally everything). But docker is often a super convenient middle ground where it's not nearly as stupidly verbose to just get a simple thing running, but still checks a lot of the boxes.

I used to feel similarly with Java. "Why," I asked, "would you need this docker thing? Just build the shaded JAR and off you go."

And to be sure, there are some systems - especially the kind people seem to build in go (network-only APIs that never touch the fs and use few libraries) - that do not need much more than their binary to work. But what of systems that call other CLI utilities? What of systems that create data locally that you'd like to scoot around or back up?

Eventually nearly every system grows at least a few weird little things you need to do to set it up and make it comfy. Docker accommodates that.

I do think there's a big kernel of truth to your sentiment though - I loved rails as a framework but hated, just hated deploying it, especially if you wanted 2 sites to share a linux box. Maybe I was just bad at it but it was really easy to break BOTH sites. Docker has totally solved this problem. Same for python stuff.

I do think docker is also useful as a way to make deploying ~anything all look exactly the same. "Pull image, run container with these args". I actually think this is what I like the most about it - I wrote my own thing with the python docker SDK, basically a shitty puppet/ansible, except it's shitty in the exact way I want it to be. And this has been the best side effect - I pay very little in resource overhead and suddenly now all my software uses the exact same deployment system.


Often?

It's 100% scenario-dependent. Complex apps (like Gitea, for example) delivered as a single binary are basically the pinnacle of deployment.


gitea, caddy (which can update itself, even with the same modules included), restic (again, in-place updates), AdGuard Home (which embeds DHCP and DNS services), etc. I really like the stuff golang developers can put out.

I even asked someone to please produce a FreeBSD binary, and they added one line to their GitHub CI to make it available that same day.


This can also be done with Python and PyInstaller using the --onefile flag. It can bundle assets and C++ libs, as well as sign the executable (e.g. with signtool).

Worked well for me for a zero-install bridge app.

https://pyinstaller.org/en/stable/usage.html


> Systemd holding connections and restarting the new binary.

How does this work?

Or does it just mean it stops new connections while it's restarting?



Systemd effectively acts as a proxy. I don't know that it's actually a proxy, but it keeps accepting connections from what I've seen. I use it for zero-downtime single-binary deploys, and it's great.


No, it's not a proxy, and it's not accepting connections (unless you're using the inetd emulation, but that's rare and inefficient).

It's merely passing the listening socket as an already-open file descriptor to the spawned process.

The "keeps accepting" part is just the listening socket backlog.

Last I looked, systemd socket passing couldn't be used to do graceful shutdown, serving existing connections with the old version while having the new version receive new connections. Outside of that, it's very nice.


It's not always a static binary: some stdlib features (e.g. os/user lookups or the default DNS resolver in net) pull in cgo. In that case you need to set CGO_ENABLED=0 to force a static build.
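
i.e. something like this (binary name and target are just examples):

    CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o myservice .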

I have been doing single-binary full-website deploys for ~16 months in production. That includes all the html, css and js, embedded. It has been wonderful.


And with a little bit of code you can easily switch between "use embedded files" and "use local files" at app start, and get the convenience of not having to re-compile the app just to change some static files.
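
Something along these lines (a sketch; the ./static dir and the -dev flag are made-up names):

  package main

  import (
    "embed"
    "flag"
    "io/fs"
    "net/http"
    "os"
  )

  //go:embed static
  var embedded embed.FS

  func main() {
    dev := flag.Bool("dev", false, "serve ./static from disk instead of the embedded copy")
    flag.Parse()

    var static fs.FS
    if *dev {
      static = os.DirFS("static")
    } else {
      // strip the "static/" prefix so both variants look the same
      static, _ = fs.Sub(embedded, "static")
    }
    http.Handle("/", http.FileServer(http.FS(static)))
    http.ListenAndServe(":8080", nil)
  }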


Is this article from 2016? You can do all this with Java nowadays. I have observed a lot of folks on HN whose last knowledge about Java was from a decade plus ago pontificating about Java deficiencies that no longer exist today.

Use the GraalVM native build tools https://graalvm.github.io/native-build-tools/latest/index.ht....

"Use Maven to Build a Native Executable from a Java Application"

https://www.graalvm.org/22.2/reference-manual/native-image/g...


You, and any other non-Golang programmer, could visit the golang.org site, download the latest release, untar it, write a hello-world service, and run "go build". It will take about that many steps and about 5 minutes, and you'll have your single-file binary ready for deployment.
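
For reference, the entire "hello-world service" in question can be as small as:

  package main

  import (
    "fmt"
    "net/http"
  )

  func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
      fmt.Fprintln(w, "hello, world")
    })
    http.ListenAndServe(":8080", nil)
  }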

Can you compare doing the same thing with Java? How many more steps does it take, assuming that you don't already have a standard Java dev setup that's configured and ready to go on your machine? Now, let's let you put your thumb on the scale and assume that you do already have the standard setup, but you want to pursue what this post lays out and what you insist can be done with Java. How much more effort does it take just to go from "typical Java setup" to "setup that actually lets you do what this article describes"? If the answer is not zero but your position is still that there's nothing special here because "You can do all this with Java nowadays", then it's because you're not understanding what "here" and "this" actually are.


"How much more effort does it take just to go from "typical Java setup" to "setup that actually lets you do what this article describes"? "

5-6 minutes extra. 3 more steps. You are making a mountain out of a molehill. Sorry to burst your bias-bubble, but Java deployment is damn easy nowadays.

If you are leveraging a cloud buildpack, single-file native exe deployments like Go's are suboptimal since they are slower - there's no intelligence to perform differential updates the way you get with a Java fat-jar or tar-ball.


> 5-6 minutes extra. 3 more steps.

That's greater than zero. So you're already failing—and that's on top of the handicap already afforded to you for the initial setup.

(Even ignoring that, I'm suspicious of your numbers. Have you actually measured it? Do you have something to show that your off-the-cuff figures match what people will actually experience?)

> Sorry to burst your bias-bubble

Major irony—assuming bias on my part (where there is, in fact, none) without realizing that doing so broadcasts evidence of yours.


> Even ignoring that, I'm suspicious of your numbers. Have you actually measured it? Do you have something to show that your off-the-cuff figures match what people will actually experience?

What? This is the time required to download GraalVM and then configure your project. This is a ONE-TIME setup. Strictly speaking, if you omit a build tool like maven - you don't really need it for a native binary - then it is simply one additional install: get GraalVM native-image and compile your code to a binary using the CLI. Why would I even bother measuring this?

The fact that you are even asking for "measurements" - without providing corresponding "measurements" for the Golang setup, something the article never even bothered to mention - is ludicrous. Why would one even consider the one-time cost of installing one additional tool?

That way lies silliness - I should then consider Java superior because Go requires an additional command to install godoc, for example, while Javadoc comes with the base JDK.

> Major irony—assuming bias on my part (where there is, in fact, none) without realizing that doing so broadcasts evidence of yours.

There is nothing ironic in pointing out your double standards. I develop in both Go and Java. Both languages come with their advantages. The strict advantages Go has over Java are goroutines, a more feature-packed stdlib, and a reduced memory footprint at runtime. (The goroutine advantage has also gone away now with virtual threads in Java.)

But single-file deployment is NOT an advantage Go holds over Java - and hasn't been for several years now.


So have you measured it or not?

> The fact that you are even asking for "measurements" without providing corresponding "measurements" for the Golang setup

I'm responding to your claim. The onus is on you to substantiate it.

> Why would one even consider the one-time cost of installing one additional tool?

Aside from the low-hassle relative simplicity being the fundamental subject of the submitted article, there's no reason I suppose.

> There is nothing ironic in pointing out your double-standards.

There is no double-standard aside from the aforementioned handicap that benefits you, and you're moving the goalposts, besides. You specifically accused me of being in a "bias-bubble". There is no purer form of irony.


> I'm responding to your claim. The onus is on you to substantiate it.

Snort. I believe the original claim has been substantiated enough already - that Go holds no advantage over Java wrt single-file deployment, as you can achieve a "single binary" in Java too if you wish.

Measurement of a one-time setup cost was a demand made by you, not by me. You gave a description and I gave a description. Demanding a precise time measurement of a ballpark figure is where the "double standard" lies - you never provided any to "substantiate" yours, yet demand one from me ("So have you measured it or not?") for a CLI compiler install.

But, hey, in the spirit of goodness:

    time bash <(curl -sL https://get.graalvm.org/jdk)  
    8.62s user 3.90s system 29% cpu 42.876 total

    export JAVA_HOME="graalvm-ce-java17-22.3.1/Contents/Home"
    export PATH="$JAVA_HOME/bin:$PATH"

    javac HelloWorld.java && native-image HelloWorld
    <....compiler output removed, except for last line>
    Finished generating 'helloworld' in 16.8s.

    ./helloworld                                                                                                                                              
    Hello, World!

Huh, so it was actually FASTER than I thought for an end-to-end setup. You don't even need OpenJDK, since GraalVM already comes with it. (I mistakenly thought OpenJDK was a prerequisite.)

So, it's literally just: install the tool, set the path, and invoke the compilation commands. 1-2 min end to end.


Here's the quest:

> write a hello-world service, and run "go build" [...] you'll have your single-file binary ready for deployment

You seem to have written a much simpler hello-world toy program (one that literally just prints those words and exits), instead of a "hello-world service [...] ready for deployment". (Am I mistaken? 16 seconds—on what I'm assuming is not a modestly specced workstation—is a crazy-long compile time for a simple hello-world program, but it would be pretty crazy even for a deployable service.) What happens if you attempt to satisfy the actual criteria laid out?

> I believe the original claim has been substantiated enough already

Uh, no. It's not substantiated until someone substantiates it. To "substantiate" something is not synonymous with merely claiming that it is true.

> Demanding precise time measurement of a ballpark is where the "double standard"

It's not a double standard; I didn't even ask for "precise time measurement". I asked you to substantiate what you're saying. Seeking substantiation does not comprise a separate claim in and of itself (but nice try, I guess?).


"I love the promise of Graal native binaries. At the moment I'm unable to get it to work with my codebase." [2]

"Building is a resource hog. I set up a VM with 2 CPUs, 100GB of disk space, and 8GB of memory, and even a relatively small project took over 10 minutes to build." [3]

"the power of the Java ecosystem is libraries, but you cannot use 99% of them because they just use too much reflection and I am afraid they will never be prepared for Spring AOT" [3]

"DI does not work inside a native binary at runtime, you need some tool which does the whole DI at compile time (Spring Native and Quarkus do that)" [1]

"I do not even think we are in the alpha stage. For example, for 2 days I am fighting with a simple microservice using JPA, MySQL, some transactions and without success. Fixed at least 4 bugs, and now I gave up. I cannot imagine what problems can arise in mid-size projects." [3]

"GraalVM [not] supporting Swing and JavaFX "out of the box"." [4]

"Even assuming your app run as intended (which is already complicated enough, you will have to run their java agent to register all reflections), there is no telling how it will perform. For example record methods are currently implemented using reflection which completely obliterate performance: https://github.com/oracle/graal/issues/4348" [1]

This is my pet peeve. Presumably you're very familiar with the single binary Java deployment story you're describing. And yet, somehow everything you say is false. It's incredibly vexing to have to go out and verify such claims about subject matters that I'm not that familiar with because the supposed experts are basically lying through their teeth :/

[1]: https://www.reddit.com/r/java/comments/vc9s3u/what_is_your_e...

[2]: https://www.reddit.com/r/java/comments/e9m51x/whats_your_opi...

[3]: https://www.reddit.com/r/java/comments/10cv886/personal_expe...

[4]: https://www.reddit.com/r/java/comments/wodoy4/if_you_were_th...


.NET too; same problem with decade-old worldviews about .NET.

One thing that hasn't changed is that C# > Java ;P


If you're looking for a similar deployment experience but can't use Golang, we've been using Apptainer[0] (previously Singularity) for a couple of years at work. It's really nice to get the benefits of containers while retaining the simplicity of copying and running a single file. The only dependency is installing Apptainer, which is easy as well.

[0]: https://apptainer.org/


> Fast-forward and we were automatically deploying Scala applications from CI bundled in Docker in the startup of my wife.

> Last forward and I have deployed a Golang application to a cloud server.

Editor hello?


Just a note that you can do the same now with .net: https://learn.microsoft.com/en-us/dotnet/core/deploying/sing...


How is this news. Welcome to 10 years ago.


It became easier a few years ago, as the tools to embed files (the //go:embed directive) are now built into Go rather than external packages.


(swirls fancy wine)

pairs well with single file frontends.


A single binary is nice...

Perhaps second place is using the $ORIGIN rpath linker option to create a relocatable application.


We're bundling backend services from a TypeScript monorepo into single bundle files in production - it works very well. The main reason was simply enforcing the lockfile from the monorepo.


I'm often a fan of single source files, at the package level, including inline embedded API docs and unit tests.


That doesn't scale because it makes merging and file navigation nearly impossible, and the increased cognitive load is too much for anything real.

Encapsulate functionality and break it into logical containers.

That's what modules are.

That's what files are.

That's what functions are.

That's what software engineering is.


Sometimes it does scale.


I still put the single file in a docker container because docker isn't complex.


This makes zero sense.

I don't like cargo culting.


There are reasons. It's a reasonable security boundary. It integrates with other things that use Docker as the primary abstraction, and there's good odds I've got other docker things that aren't single binaries like databases and other tools. It puts it into a uniform control interface that works with other things as well. It doesn't cost much additional resources over simply running the binary directly because the real runtime cost of a docker container is the mini-OS they often bring up, not the target executable.

It isn't necessary, but it's not nonsense.


I deploy everything everywhere with docker, because then the only thing installed on the system is dockerd. I can deploy identically on different distributions; I don't need to know anything about the host or keep track of files on the host.

I can keep all of my build artifacts in a docker image repository with versions. I can deploy any version on any host without worrying about copying the version to the host.

Whether your deploy is 1000000 files or 1, this system has clear advantages to copying things to the base server OS and turning it into a snowflake.


I’m jealous of this coming from C#. They kind of have it but not really.


Interesting. Are you saying that for a big project you'd run something to merge the .go files into just one? Assuming there is just one package.


Probably referring to the fact that when you build a go executable there's just a single file to deploy (the executable).


Go has a facility for embedding build-time files within the resulting binary such that they can be read as if from a runtime file system, because the Go file-access routines know about this file-system type.


You can definitely pack a Java application into a single JAR file and skip the Docker. Java's xenophobia (its allergy to linking native libraries) is the real root of "write once, run everywhere", so often all you need is the Java runtime.


You still need a JRE, so it's not really a "single file" like in Go.


You can use jlink to bundle only the JRE modules and classes your application needs to run.


You still need a bespoke systemd configuration for TFA's Go deploys, so it's not really a "single file" deploy there either.

And like the sibling comment noted, once you're allowed to set up the machine to support easy deployment (e.g. JRE, Tomcat), a WAR becomes a single-file deploy.


Eh. From an ops perspective there isn't much difference between an executable file that Golang statically compiled all dependencies into and embedded a file system into, and a WAR archive that the Java compiler embedded a file system including dependencies into.

Both are self-contained single files you can give to a completely different organization and expect to run on the first attempt with no complications.

It's just that the latter needs Tomcat (not an issue, realistically) and has to be written as EnterpriseFactoryPatternFactorySingletonAbstractBaseFactorySingletonProvider that makes you feel dead inside just from looking at the documentation; while Golang (and similar newer languages) give you a lot more flexibility and better ergonomics on the developer side.


Try Guice or Spring. In either case the framework supplies you with a

   Factory, FactoryFactory, FactoryFactoryFactory, ...
that does the transitive closure so you can just get a "single" object injected into your app where you need it.


"lasciate ogne speranza, voi ch'intrate" (~ lose all hope, who enters) from Dante's Inferno is what comes to mind whenever someone mentions Spring.


Class loaders in Java are the JVM's dynamic linkers/loaders, wouldn't you say?



