I upgraded my OpenBSD machines a few hours ago, and I'm still not entirely sure whether I notice any obvious TCP speed improvement. Then again, they're not really high-load computers. Maybe people with a higher throughput will be amazed.
FreeBSD is not really interested in being as portable as possible, I think. And it is somewhat larger, too, which makes supporting more platforms harder.
I mean, are we surprised? Linux has on the order of millions of times more users and funding (probably not developers, but who knows). So if a port is ever financially viable, I'd certainly expect Linux to move first. If anything, I'm impressed that OpenBSD and NetBSD keep up as well as they do.
I genuinely wonder why it is considered "huge". Does it really matter what percentage of desktop usage one of the several dozen desktop operating systems has?
Because computers are cool. It's easier to point to a general operating system than to all the cool software that runs on it. When people say Linux is cool, it's not just because of the kernel; it's everything from the culture to the software stack and ethos.
That depends on what counts as “a handful of languages” for you.
You can use llm for this fairly easily:
uv tool install llm
# Set up your model however you like. For instance:
llm install llm-ollama
ollama pull mistral-small3.2
llm --model mistral-small3.2 --system "Translate to English, no other output" --save english
alias english="llm --template english"
english "Bonjour"
english "Hola"
english "Γειά σου"
english "你好"
cat some_file.txt | english
That's just the base/stock instruct model for the general use case. There's got to be a finetune specialized in translation, right? Any recommendations for that?
Plus, mistral-small3.2 has too many parameters; not every device can run it fast. It probably isn't the exact translation model Chrome uses.
Setting aside general-purpose LLMs, there exist a handful of models geared towards translation across hundreds of language pairs: Meta's NLLB-200 [0] and M2M-100 [1] can be run using HuggingFace's transformers (plus numpy and sentencepiece), while Google's MADLAD-400 [2], in GGUF format [3], is also supported by llama.cpp.
You could also look into Argos Translate, or just use the same models as Firefox through kotki [4].
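For the transformers route, a minimal sketch of translating with NLLB-200 might look like this. The model id and the FLORES-200 language codes (`fra_Latn`, `eng_Latn`) are taken from the model card, not from this thread, and the distilled 600M checkpoint is just one convenient size:

```python
# Sketch: French-to-English translation with Meta's NLLB-200
# (distilled 600M checkpoint) via HuggingFace transformers.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "facebook/nllb-200-distilled-600M"
# src_lang tells the tokenizer which source-language tag to prepend.
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="fra_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Bonjour tout le monde", return_tensors="pt")
# NLLB selects the target language by forcing its tag as the first
# generated token.
out = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("eng_Latn"),
    max_new_tokens=40,
)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```

The same pattern works for M2M-100, modulo its own language codes and tokenizer class.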
Try translating a paragraph with 1B Gemma and compare it to DeepL :) It's still amazing that it can understand anything at all at that scale, but you can't really rely on it for much, tbh.
If you need to support several languages, you're going to need a whole zoo of models. Small ones just can't handle that many, and they especially aren't good enough for distribution; we only use them for understanding.
> Everything you create should be in git or similar.
Everything you create should be on a machine you control, preferably in a house different from the one where you created it. Version control is optional (and Git is probably overengineered for your one-man projects, but that's a different discussion).