Heh, I made a similar thing (mostly unreleased/undocumented at [0], original idea at [1]). It would take your Go code, build a binary wrapper for it for the target OS/arch, SSH to the target, SCP the file over, run the binary, connect stdin/stdout/stderr with the remote, allow the remotely running binary to request files from the home machine, and delete itself once complete. One thing I found is that SCP/SFTP is a really slow transfer protocol, so an alternative should be made available as an option (but leave SSH as the default).
I believe that we are getting to a point where config management might as well be in a programming language instead of a bunch of ad-hoc scripts in templates that defer to dynamic language scripts and become a mess.
>One thing I found is SCP/SFTP is a really slow transfer protocol
Just FYI, SFTP is an OK protocol hampered by really poor implementations. Daniel Stenberg (author of `curl`) has a really good write-up about why this is the case.[1] What it boils down to is that SFTP is chunk-oriented, not file-oriented, and most implementations wait for an acknowledgement of each chunk before fetching the next one. The end result is that you're artificially band-limited by your latency to the remote host. If an implementation (like libssh2) keeps many chunks in flight at once, SFTP can go much faster.
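The latency cap is easy to sketch with back-of-envelope numbers. A rough model (the 32 KB chunk size and 50 ms RTT are illustrative assumptions, ignoring TCP effects):

```go
package main

import "fmt"

func main() {
	// Rough model of SFTP throughput. Chunk size and RTT are
	// illustrative assumptions, ignoring TCP slow-start etc.
	chunkKB := 32.0 // typical SFTP read/write request size
	rttMS := 50.0   // round-trip time to the remote host

	// Serial client: one chunk per round trip, so throughput is
	// capped by latency regardless of link bandwidth.
	serialKBs := chunkKB / (rttMS / 1000.0)

	// Pipelined client: with N requests in flight, the cap rises
	// roughly N-fold (until the link itself saturates).
	inFlight := 64.0
	pipelinedKBs := serialKBs * inFlight

	fmt.Printf("serial:    %.0f KB/s\n", serialKBs)
	fmt.Printf("pipelined: %.0f KB/s (%.0f requests in flight)\n", pipelinedKBs, inFlight)
}
```

With those numbers a serial client tops out around 640 KB/s no matter how fat the pipe is, which is why a pipelining implementation matters far more than the protocol itself.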
Sadly a lot of SFTP clients don't seem to have gotten the memo.
> I believe that we are getting to a point where config management might as well be in a programming language instead of a bunch of ad-hoc scripts in templates that defer to dynamic language scripts and become a mess.
Judo seems like a great automation option for some simpler tasks where Ansible could be considered overkill. For me, that covers almost 100% of my remote automation needs. :) Nice!
Nope, just to deploy to or configure a remote machine. So you write some Go code as though it's running locally: it might run some apt commands, extract a tarball, set up a systemd unit, or whatever. The tool then builds that binary and runs it on the remote machine (lazily fetching resources, such as the tarball you might need, from the home computer), then it deletes itself. I've used it for multiple server tasks that require more of an upgrade approach than an ephemeral container approach. I welcome anyone to take the idea and run with it.
That's a good point. Ansible is robust, well-known, well-supported, and has support for doing anything and everything from creating users, to setting cron-jobs, MySQL users, and more.
This application only allows two things:
* Uploading a file, or series of files.
* Running a command, or series of commands.
In short it's very simple. But because of that it avoids some of the horrors of trying to be overly complex in the way that Ansible is: the cryptic failures, the horrible "language" for looping constructs, etc.
I could imagine adding support for cron, mysql, etc, but I think as soon as you allow real conditional actions to such utilities it becomes a mess to use.
Go binaries are self-contained, unlike Ansible, which is a Python application. There are no runtime dependencies, and no worries about which combination of Python version and module versions might be in use. This ensures uniform behavior among all users and minimizes installation and operational difficulties.
Coincidentally I just wrote a similar set of tools using bash scripts for doing deployments of a simple app. I opted for rsync mostly because I’m more familiar with it than scp. It basically runs some build commands, copies the files up, and runs some commands to restart Docker containers.
I always think it’s interesting to see things pop up on HN that are randomly relevant to things I’m thinking about or working on.
In the readme (first section) you use, as an example, the command:
"Run adduser bob 2>/dev/null"
Why do you add the "2>/dev/null" part? I see a lot of tutorials/guides/examples do that! I understand what it does (sends stderr to /dev/null) but I don't understand why I would want to do that.
No, I had not. Interesting idea. Presumably over the existing SSH connection, rather than using the rsync daemon?
Not a bad idea, but I guess I'd need to think about it. Mostly I pull the "large" files from GitHub or distribution sites; I tend to only upload simple config files and systemd service units, so the extra overhead of SCP isn't so significant.
Yes, rsync automatically communicates with the rsync at the other end over SSH. You could use rsync by default and then fall back to scp if rsync isn't usable.
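That fallback is small to sketch in Go. A minimal version (the host name and paths are placeholders; it only builds the command rather than running it):

```go
package main

import (
	"fmt"
	"os/exec"
)

// copyCmd picks the transfer command: rsync over SSH when available,
// otherwise plain scp.
func copyCmd(src, dst string) *exec.Cmd {
	if _, err := exec.LookPath("rsync"); err == nil {
		// -e ssh forces the SSH transport; -az = archive mode plus
		// compression.
		return exec.Command("rsync", "-az", "-e", "ssh", src, dst)
	}
	return exec.Command("scp", "-q", src, dst)
}

func main() {
	cmd := copyCmd("./app.conf", "host:/etc/app.conf")
	fmt.Println("would run:", cmd.Args)
}
```

Both tools accept the same `src host:dst` argument shape, which is what makes the drop-in fallback feasible.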
[0] https://github.com/cretz/systrument
[1] https://github.com/cretz/software-ideas/issues/1