
10LTSC + Office 2016 + open shell is the peak windows experience.

Office 2016 was the last version that could autosave documents locally; after that it's fucking OneDrive.

Grab an ISO from Massgrave, write it with Rufus, and two scripts later you're fully tweaked:

  irm https://get.activated.win | iex

  iwr -useb https://christitus.com/win | iex


Video about it here: https://developer.apple.com/videos/play/wwdc2025/346/

Looks like each container gets its own lightweight Linux VM.

Can take it for a spin by downloading the container tool from here: https://github.com/apple/container/releases (needs macOS 26)
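
For anyone curious what using it looks like, here's a rough sketch of the workflow, assuming the Docker-style verbs described in the quick start (verify the exact subcommands and flags with `container --help`):

  container system start       # start the background services (assumed from the quick start)
  container run -it alpine sh  # each container runs in its own lightweight VM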


For anyone looking to migrate off borg because of this, append-only is available in restic, but only with the rest-server backend:

https://github.com/restic/restic

https://github.com/restic/rest-server

which has to be started with --append-only. I use this systemd unit:

  [Unit]
  # pull in (not just order after) the network-online target
  Wants=network-online.target
  After=network-online.target

  [Install]
  WantedBy=multi-user.target

  [Service]
  ExecStart=/usr/local/bin/rest-server --path /mnt/backups --append-only --private-repos
  WorkingDirectory=/mnt/backups
  User=restic
  Restart=on-failure
  ProtectSystem=strict
  ReadWritePaths=/mnt/backups
I also run nginx with HTTPS + HTTP basic authentication in front of it, with a separate username/password combination for each server. That way rest-server itself is never exposed directly to the internet, and I don't have to trust it to hold up against being hammered by malicious traffic.
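
A minimal sketch of that front-end (server name, certificate paths, and the htpasswd file are placeholders; rest-server listens on port 8000 by default):

  server {
      listen 443 ssl;
      server_name backup.example.com;

      ssl_certificate     /etc/letsencrypt/live/backup.example.com/fullchain.pem;
      ssl_certificate_key /etc/letsencrypt/live/backup.example.com/privkey.pem;

      location / {
          auth_basic           "restic backups";
          auth_basic_user_file /etc/nginx/restic.htpasswd;
          proxy_pass           http://127.0.0.1:8000;
      }
  }

On the client side, restic then points at the proxied endpoint with its rest: backend, e.g. `restic -r rest:https://myhost:password@backup.example.com/myhost/ backup /etc /home` (hostnames and credentials are examples).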

Been using this for about five years, it saved my bacon a few times, no problems so far.


If you want to run LLMs locally then the localllama community is your friend: https://old.reddit.com/r/LocalLLaMA/

In general there's no "best" LLM model, all of them will have some strengths and weaknesses. There are a bunch of good picks; for example:

> DeepSeek-R1-0528-Qwen3-8B - https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B

Released today; probably the best reasoning model in 8B size.

> Qwen3 - https://huggingface.co/collections/Qwen/qwen3-67dd247413f0e2...

Recently released. Hybrid thinking/non-thinking models with really great performance and a plethora of sizes for every kind of hardware. The Qwen3-30B-A3B can even run on a CPU at acceptable speeds. Even the tiny 0.6B one is somewhat coherent, which is crazy.
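
For a quick local test, something like this works via Ollama (model tags assumed from Ollama's registry; pick a size that fits your hardware):

  ollama pull qwen3:0.6b
  ollama run qwen3:0.6b "Summarize what a Bloom filter does in two sentences."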


My favorite little-known trick is using strace fault injection to test how code handles failing syscalls, like:

  $ strace -e trace=clone -e fault=clone:error=EAGAIN

random link: https://medium.com/@manav503/using-strace-to-perform-fault-i...
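
A slightly fuller example, with a hypothetical ./my-server as the target, injecting the failure only into the third clone() call:

  strace -f -e trace=clone -e fault=clone:error=EAGAIN:when=3 ./my-server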

I think it's more about getting used to f/F, t/T, A, I, and ; (semicolon), which can be quicker, especially with code. You can also add easymotion or similar plugins for a powered-up version of those motions.
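
For example, in normal mode (standard Vim, no plugins):

  fx    " jump to the next 'x' on the current line
  ;     " repeat the last f/F/t/T in the same direction
  ,     " repeat it in the opposite direction
  ct)   " change text up to (but not including) the next ')'
  A     " append at the end of the line
  I     " insert at the first non-blank character of the line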

> Is there a way to fully snapshot a container state and its disk state

`docker commit` should help you there; it saves a copy of the container's current filesystem as a new image.
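
For example (container and image names are placeholders); note that this captures the filesystem layers but not running memory/process state, so it's a disk snapshot rather than a full freeze:

  docker commit mycontainer mycontainer:snapshot
  docker run -it mycontainer:snapshot sh

For process/memory state as well, there's the experimental CRIU-based `docker checkpoint` command, but support depends on the platform.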


For Linux, there's a kernel setting for that.

Just run

  sysctl -w net.ipv6.bindv6only=1
so IPv6 sockets will no longer accept IPv4-mapped addresses by default.

https://www.kernel.org/doc/Documentation/networking/ip-sysct...
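
To make it survive reboots, drop it into a sysctl config file (the filename below is just a convention):

  echo 'net.ipv6.bindv6only = 1' | sudo tee /etc/sysctl.d/90-bindv6only.conf
  sudo sysctl --system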


Cool! "Pi-hole-esque" is a nice descriptor.

Tangent: Bunny.net is my new favorite CDN / cloud service provider. They have scriptable DNS too.


For those interested in building something similar, I prompted a story book generator using v0 and Gemini’s image generation a few weeks ago:

Demo: https://v0-story-maker.vercel.app/

The chat: https://v0.dev/chat/ai-story-book-creator-zw7TrmkN2Eb


No dependencies for install; `just` is the command runner I use to keep things organized (it's a replacement for make(1)). If it hurts to download just, you can build with SPM:

  swift build -c release
  mv .build/PLATFORM/release/infat /usr/local/bin


> This is a bit tangent, but does anyone know how to block an app from ever deciding to become a file handler on macOS?

The information below does not "block an app" from taking over file handler associations. It may be beneficial on its own and/or provide a starting point for further exploration.

> Every time I install Chrome on my machine (for testing purposes), macOS decides that Chrome is going to be the default file handler for a bunch of file associations, including HTML, WebP, and so on… and I have to figure out which was which for all of the mappings (which is super frustrating).

The following command can display current file extension associations for a user account:

  plutil -convert xml1 \
    ~/Library/Preferences/com.apple.LaunchServices.plist \
    -o -
The XML output of the above should contain the associations. As for determining what changed, if Time Machine is enabled, earlier versions of this plist can be compared against the current one.
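
On more recent macOS versions, the per-user handler mappings (the LSHandlers array) typically live in a plist under the com.apple.LaunchServices directory instead, so if the file above is missing or empty it's worth checking:

  plutil -convert xml1 \
    ~/Library/Preferences/com.apple.LaunchServices/com.apple.launchservices.secure.plist \
    -o -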

Also, regarding a Chrome installation on OS X/macOS: it installs plists for unconditional background updates. This may not be a desirable feature.


I love that it’s a command line tool — I’ll try it soon.

I use OpenIn for this [1] (it’s paid, but a one-time purchase at a very reasonable price). It works with URLs too, supports “browser profiles”, and lets you create logic using JavaScript (e.g., do X if the filename contains Y, or do Z if a modifier key is pressed).

It works really well and even has the ability to “fix” what external apps have changed. I plan to use this on new Macs to reconstruct my app associations and rules.

I do wish the rules were defined in plain text files — sometimes it’s hard to follow the logic through the UI and the way it handles things.

Another comment mentions Hammerspoon (which I used in the past — it was very nice). Maybe I can rebuild part of my current setup with it.

[1] https://loshadki.app/openin4/


For an up-to-date reference on AI and machine learning for network engineers, check out this book by Javier Antich [1].

Please also check the review here [2]. For what it's worth, the book is listed in the "10 Books Every Network Engineer Should Read" [3].

[1] Machine Learning for Network and Cloud Engineers: Get ready for the next Era of Network Automation:

https://www.goodreads.com/book/show/101180344-machine-learni...

[2] MUST READ: Machine Learning for Network and Cloud Engineers:

https://blog.ipspace.net/2023/02/machine-learning-network-cl...

[3] 10 Books Every Network Engineer Should Read:

https://networkphil.com/2024/05/21/10-books-every-network-en...


No place like ${HOME}: https://dotfiles.gbraad.nl ;-). I went further and generate images to easily spin up development environments, based on bootc VMs or containers.

Never stop tweaking. No computer can be called home until it runs your own set of aliases/commands.


Regular containers also happen to work great for testing dotfiles.

Many years ago I added an install script to https://github.com/nickjj/dotfiles to get set up in basically 1 command because I wanted a quick way to bootstrap my own system. I used the official Debian and Ubuntu images to test things.

Over the last few days I refactored things further to support Arch Linux which has an official Docker image too.

This makes it possible to do full end-to-end tests in about 5 minutes. The container spins up in 1 second; the rest is the script running its course. Since it's just a container, you can also use volume mounts and leave the container running in case you want to incrementally test things without wiping the environment.
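
As a rough illustration of that workflow (the image, mount path, and script name here are placeholders; the repo's README has the real bootstrap command):

  docker run --rm -it -v "$PWD:/dotfiles" debian:stable \
    bash -c 'cd /dotfiles && ./install.sh'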

Additionally, it lets folks test it out in 1 command without modifying their system. Docker has enabled so many good things over the last 10+ years.


I think gemma-3-27b-it-qat-4bit is my new favorite local model - or at least it's right up there with Mistral Small 3.1 24B.

I've been trying it on an M2 with 64GB via both Ollama and MLX. It's very, very good, and it only uses ~22GB (via Ollama) or ~15GB (MLX), leaving plenty of memory for running other apps.

Some notes here: https://simonwillison.net/2025/Apr/19/gemma-3-qat-models/

Last night I had it write me a complete plugin for my LLM tool like this:

  llm install llm-mlx
  llm mlx download-model mlx-community/gemma-3-27b-it-qat-4bit

  llm -m mlx-community/gemma-3-27b-it-qat-4bit \
    -f https://raw.githubusercontent.com/simonw/llm-hacker-news/refs/heads/main/llm_hacker_news.py \
    -f https://raw.githubusercontent.com/simonw/tools/refs/heads/main/github-issue-to-markdown.html \
    -s 'Write a new fragments plugin in Python that registers
    issue:org/repo/123 which fetches that issue
        number from the specified github repo and uses the same
        markdown logic as the HTML page to turn that into a
        fragment'
It gave a solid response! https://gist.github.com/simonw/feccff6ce3254556b848c27333f52... - more notes here: https://simonwillison.net/2025/Apr/20/llm-fragments-github/

Another trick with GitHub URLs: you can append .patch or .diff to any PR or commit URL, and you'll get back a git-formatted patch or diff.

https://github.com/rust-lang/rust/pull/139966

https://github.com/rust-lang/rust/pull/139966.patch

https://github.com/rust-lang/rust/pull/139966.diff
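
Those render as plain text, so they pipe straight into git; for example:

  # apply the PR as commits (keeps authorship), or as a plain diff
  curl -fsSL https://github.com/rust-lang/rust/pull/139966.patch | git am
  curl -fsSL https://github.com/rust-lang/rust/pull/139966.diff | git apply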


Unless you're good at actually maintaining your gpg keychain and need other people to access this, I really wouldn't bother with gpg. There are way better and simpler options.

Age has a simpler interface and SSH key support https://github.com/FiloSottile/age

ejson2env has the environment variable integration and ejson has multiple backends https://github.com/Shopify/ejson2env

direnv can support any cli secrets manager per project directory https://direnv.net/

I've dealt with enough "why did this break" situations with gpg secrets files used by capable teams that I'd never recommend that to anyone. And unless you really need the public key support (teams and deployment support), you're unlikely to gain anything better over a password manager.
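
To illustrate the age + SSH key point above, a minimal round trip looks roughly like this (filenames are examples):

  # encrypt to your existing SSH public key, decrypt with the private key
  age -R ~/.ssh/id_ed25519.pub -o secrets.env.age secrets.env
  age -d -i ~/.ssh/id_ed25519 -o secrets.env secrets.env.age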


https://github.com/abshkbh/arrakis

macOS support is coming very soon :) It does work on Linux.


Podman quadlet supports "Socket activation of containers": https://github.com/containers/podman/blob/main/docs/tutorial... This allows you to run a network server with `Network=none` (--network=none). If the server were compromised, the intruder would not have the network privileges to use the compromised server as a spam bot. There are other advantages, such as preserving the source IP address and better performance when running a container with rootless Podman + Pasta in a custom network.
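
A rough sketch of the setup from that tutorial (image name and port are placeholders; the containerized server has to support systemd socket activation, i.e. accept the listening socket via LISTEN_FDS):

  # ~/.config/containers/systemd/myapp.container
  [Container]
  Image=localhost/my-socket-activated-app:latest
  Network=none

  [Install]
  WantedBy=default.target

  # ~/.config/systemd/user/myapp.socket
  [Socket]
  ListenStream=8080

  [Install]
  WantedBy=sockets.target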

Unmentioned: there are serious security issues with memory-cloning code that wasn't designed for it.

For example, an SSL library might have pre-calculated the random nonce for the next incoming SSL connection.

If you clone the VM containing a process using that library, both child VMs will now use the same nonce. Some crypto is 100% broken open if a nonce is reused.


Yes, that's right. The Firecracker team has written a fantastic doc about this as well: https://github.com/firecracker-microvm/firecracker/blob/main....

It's important to refresh entropy immediately after clone. Still, there can be code that didn't assume it could be cloned (even though there's always been `fork`, of course). Because of this, we don't live clone across workspaces for unlisted/private sandboxes and limit the use case to dev envs where no secrets are stored.


This is by the author of the very helpful kernel-hardening-checker: https://github.com/a13xp0p0v/kernel-hardening-checker

An interesting tool for analyzing your personal kernel config file and pointing out areas for security improvement. It's more comprehensive than KSPP (https://kspp.github.io/) but sometimes goes a little too far, suggesting disabling kernel features you may actively use.
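
Usage is roughly the following (flags per the project README; adjust the config path for your distro):

  kernel-hardening-checker -c /boot/config-$(uname -r)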

Definitely worth trying!


Try NetBird, which is an open-source alternative, and free yourself from worries xD https://github.com/netbirdio/netbird

I've used this in the past to force bash to print every command it runs (using the -x flag) in the Actions workflow. This can be very helpful for debugging.

https://github.com/jstrieb/just.sh/blob/2da1e2a3bfb51d583be0...
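
One way to get that for every step (a sketch, not taken from the linked workflow) is to override the default shell so bash runs with -x; alternatively, just put `set -x` at the top of a run block:

  defaults:
    run:
      shell: bash -xeo pipefail {0}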


uv --script

> Any program can be a GitHub Actions shell

systemd, echo "1" > /proc/sys/kernel/panic, echo > /bin/bash, etc.


I'm using Fedora CoreOS to run Nextcloud on a cheap old workstation. It took some work to get the configuration right, but I'm very impressed by how little maintenance I need to do (so far none at all).

If anyone is interested in doing the same, my configuration can be found here for inspiration: https://github.com/jeppester/coreos-nextcloud


You can use `podman-compose --in-pod=1 systemd -a create-unit` and it will create the podman-compose@ service. Then you can register compose.yml files under a $name with `podman-compose systemd -a register`, after which you can manage those compose-file-based pods via podman-compose@$name.service. Works completely rootless too.
