I also use nginx with HTTPS + HTTP authentication in front of it, with a separate username/password combination for each server. This makes rest-server completely inaccessible to the rest of the internet and you don't have to trust it to be properly protected against being hammered by malicious traffic.
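For reference, a minimal sketch of that kind of nginx front end, assuming rest-server is listening on its default 127.0.0.1:8000; the hostname, certificate paths, and htpasswd file are placeholders:

    server {
        listen 443 ssl;
        server_name backup.example.com;

        ssl_certificate     /etc/letsencrypt/live/backup.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/backup.example.com/privkey.pem;

        # one htpasswd entry per machine that backs up here
        auth_basic           "restic";
        auth_basic_user_file /etc/nginx/htpasswd-restic;

        location / {
            # restic uploads can be large
            client_max_body_size 0;
            proxy_pass http://127.0.0.1:8000;
            proxy_set_header Host $host;
        }
    }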
Been using this setup for about five years; it has saved my bacon a few times, with no problems so far.
Recently released. Hybrid thinking/non-thinking models with really great performance and a plethora of sizes for every kind of hardware. The Qwen3-30B-A3B can even run on a CPU at acceptable speeds. Even the tiny 0.6B one is somewhat coherent, which is crazy.
I think it's more about getting used to f/F, t/T, A, I, and ; (semicolon), which can be quicker, especially with code. You can also add easymotion or similar plugins for a powered-up version of those.
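For anyone who hasn't internalized those motions, roughly what they do on a hypothetical line like result = compute(value); with the cursor at the start:

    f(   jump forward onto the next '('
    t;   jump to just before the next ';'
    ;    repeat the last f/t/F/T motion
    F=   jump backward onto the previous '='
    A    append at the end of the line
    I    insert at the first non-blank character of the line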
No dependencies to install; just is only the command runner I use to keep things organized (it's a replacement for make(1)). If it hurts to download just, you can build with SPM directly:
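Something along these lines, assuming you clone the repo first (the URL is a placeholder):

    git clone https://github.com/OWNER/REPO.git   # placeholder
    cd REPO
    swift build -c release
    # the resulting binary ends up under .build/release/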
> This is a bit tangent, but does anyone know how to block an app from ever deciding to become a file handler on macOS?
The information below does not "block an app" from taking over file handler associations. It may be beneficial on its own and/or provide a starting point for further exploration.
> Every time I install Chrome on my machine (for testing purposes), macOS decides that Chrome is going to be the default file handler for a bunch of file associations, including HTML, WebP, and so on… and I have to figure out which was which for all of the mappings (which is super frustrating).
The following command can display current file extension associations for a user account:
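    # the per-user LaunchServices plist; the exact path may vary by macOS version
    plutil -convert xml1 -o - \
      ~/Library/Preferences/com.apple.LaunchServices/com.apple.launchservices.secure.plist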
The associations should be in the XML output of the above. As for determining what changed: if Time Machine has been enabled, then changes to this plist can be identified over time.
Also, regarding a Chrome installation on OS X/macOS: it installs plists for unconditional background updates, which may not be a desirable feature.
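If you want to see what the updater dropped on the system, Google's background updater (Keystone) typically installs launchd plists in the usual locations; something like this should surface them:

    ls ~/Library/LaunchAgents /Library/LaunchAgents /Library/LaunchDaemons 2>/dev/null \
      | grep -i -e google -e keystone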
I love that it’s a command line tool — I’ll try it soon.
I use OpenIn for this [1] (it’s paid, but a one-time purchase at a very reasonable price). It works with URLs too, supports “browser profiles”, and lets you create logic using JavaScript (e.g., do X if the filename contains Y, or do Z if a modifier key is pressed).
It works really well and even has the ability to “fix” what external apps have changed. I plan to use this on new Macs to reconstruct my app associations and rules.
I do wish the rules were defined in plain text files — sometimes it’s hard to follow the logic through the UI and the way it handles things.
Another comment mentions Hammerspoon (which I used in the past — it was very nice). Maybe I can rebuild part of my current setup with it.
No place like ${HOME} https://dotfiles.gbraad.nl ;-). I went further and now generate images to easily spin up development environments, based on bootc VMs or containers.
Never stop tweaking. No computer can be called home until it runs your own set of aliases/commands.
Regular containers also happen to work great for testing dotfiles.
Many years ago I added an install script to https://github.com/nickjj/dotfiles to get set up in basically 1 command because I wanted a quick way to bootstrap my own system. I used the official Debian and Ubuntu images to test things.
Over the last few days I refactored things further to support Arch Linux, which has an official Docker image too.
This makes it possible to do full end-to-end tests in about 5 minutes. The container spins up in 1 second; the rest is the script running its course. Since it's just a container, you can also use volume mounts and leave the container running if you want to test things incrementally without wiping the environment (there's a sketch of that workflow below).
Additionally, it lets folks test it out in 1 command without modifying their system. Docker has enabled so many good things over the last 10+ years.
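A rough sketch of that incremental loop, assuming the dotfiles repo is the current directory and the install script is just called install (adjust names to whatever the repo actually uses):

    # throwaway Debian container with the repo mounted in
    docker run -it --name dotfiles-test -v "$PWD:/dotfiles" debian:stable bash

    # inside the container: run the install script, inspect, tweak, re-run
    cd /dotfiles && ./install

    # later, from the host: re-enter the still-running container
    docker exec -it dotfiles-test bash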
I think gemma-3-27b-it-qat-4bit is my new favorite local model - or at least it's right up there with Mistral Small 3.1 24B.
I've been trying it on an M2 with 64GB via both Ollama and MLX. It's very, very good, and it only uses ~22GB (via Ollama) or ~15GB (MLX), leaving plenty of memory for running other apps.
Last night I had it write me a complete plugin for my LLM tool like this:
llm install llm-mlx
llm mlx download-model mlx-community/gemma-3-27b-it-qat-4bit
llm -m mlx-community/gemma-3-27b-it-qat-4bit \
-f https://raw.githubusercontent.com/simonw/llm-hacker-news/refs/heads/main/llm_hacker_news.py \
-f https://raw.githubusercontent.com/simonw/tools/refs/heads/main/github-issue-to-markdown.html \
-s 'Write a new fragments plugin in Python that registers
issue:org/repo/123 which fetches that issue
number from the specified github repo and uses the same
markdown logic as the HTML page to turn that into a
fragment'
Unless you're good at actually maintaining your gpg keychain and need other people to access this, I really wouldn't bother with gpg. There are way better and simpler options.
direnv can support any cli secrets manager per project directory https://direnv.net/
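For instance, a project's .envrc can shell out to whatever secrets CLI you already use (pass here, purely as an illustration; the secret names are made up):

    # .envrc -- direnv evaluates this on cd, after `direnv allow`
    export DATABASE_URL="$(pass show myproject/database-url)"
    export API_TOKEN="$(pass show myproject/api-token)"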
I've dealt with enough "why did this break" situations with gpg secrets files used by capable teams that I'd never recommend that to anyone. And unless you really need the public key support (teams and deployment support), you're unlikely to gain anything better over a password manager.
Podman quadlet supports "Socket activation of containers" https://github.com/containers/podman/blob/main/docs/tutorial...
This allows you to run a network server with `Network=none` (--network=none). If the server were compromised, the intruder would not be able to use it as a spam bot, because the container has no network access of its own. There are other advantages too, such as the source IP address being preserved and better performance when running a container with rootless Podman + Pasta in a custom network.
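A rough sketch of what that looks like with a quadlet unit plus a matching systemd socket, loosely following the linked tutorial (image name, port, and unit names are placeholders):

    # ~/.config/containers/systemd/echo.container
    [Unit]
    Description=Socket-activated echo server

    [Container]
    Image=ghcr.io/example/echo-server:latest
    # no network namespace of its own; systemd hands in the listening socket
    Network=none

    [Install]
    WantedBy=default.target

    # ~/.config/systemd/user/echo.socket
    # same base name as the generated echo.service, so this socket activates it
    [Socket]
    ListenStream=8080

    [Install]
    WantedBy=sockets.target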
Unmentioned: there are serious security issues with cloning the memory of code that wasn't designed for it.
For example, an SSL library might have pre-calculated the random nonce for the next incoming SSL connection.
If you clone the VM containing a process using that library, both child VMs will now use the same nonce. Some crypto is 100% broken open if a nonce is reused.
It's important to refresh entropy immediately after clone. Still, there can be code that didn't assume it could be cloned (even though there's always been `fork`, of course). Because of this, we don't live clone across workspaces for unlisted/private sandboxes and limit the use case to dev envs where no secrets are stored.
An interesting tool for analyzing your personal kernel config file and pointing out areas for security improvement. It's more comprehensive than KSPP (https://kspp.github.io/) but sometimes goes a little too far, suggesting disabling kernel features you may actively use.
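If this is the kernel-hardening-checker project (formerly kconfig-hardened-check), a typical run against the running kernel's config looks roughly like:

    kernel-hardening-checker -c /boot/config-$(uname -r)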
I've used this in the past to force bash to print every command it runs (using the -x flag) in the Actions workflow. This can be very helpful for debugging.
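Roughly, by overriding the step's shell so the generated script runs under bash -x (the script paths here are placeholders):

    steps:
      - name: Run build script
        shell: bash -xeo pipefail {0}
        run: |
          ./scripts/build.sh
          ./scripts/test.sh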
I'm using Fedora CoreOS to run Nextcloud on a cheap old workstation. It took some work to get the configuration right, but I'm very impressed by how little maintenance I need to do (so far, none at all).
You can use `podman-compose --in-pod=1 systemd -a create-unit` and it will create a podman-compose@ service template. Then you can register compose.yml files under a $name with `podman-compose systemd -a register`; after that, you can manage those pods (based on their compose files) via podman-compose@$name.service. Works completely rootless too.
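A rough sketch of that flow, using only the commands above plus systemctl (the $name is whatever the compose file gets registered under):

    # one-time: create the podman-compose@ service template
    podman-compose --in-pod=1 systemd -a create-unit

    # per project, from the directory containing compose.yml: register it
    podman-compose systemd -a register

    # then manage the pod like any other unit (rootless via the user manager)
    systemctl --user enable --now podman-compose@$name.service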
2016 was the last year you could autosave documents locally; then came fucking OneDrive.
Grab an ISO from Massgrave, write it with Rufus, and two scripts later you're fully tweaked:
irm https://get.activated.win | iex
iwr -useb https://christitus.com/win | iex