That is exactly Linux Mint (https://linuxmint.com/). I encourage you to give it a try. It is what I have settled on after 25 years of using Linux and trying nearly every distro in existence.
I'm aware of Linux Mint, but I always dismissed it because of Cinnamon without giving it much thought. Looks like KDE can be shoehorned in without much trouble.
This is gold. I wish more people understood and used this. Pro equipment is more robust and more versatile. And if you know some people, you can get amazing used gear at bargain prices from your local musicians or sound engineers. A lot of the time, people I know (and I do too, though I'm not a pro sound engineer) will help you choose and set it up just because you showed interest in the things we are interested in. Beers are assumed.
One drawback is that pro audio gear may look ugly to non-pro people. I had my wife listen to the cheapest studio monitors I could find on Amazon, equalized with the same pink-noise method as in this video, and compare them to her Bose and Marshall speakers. She liked the sound better, but my speakers are "yuck ugly" :(
They might look ugly to pro people too; in a studio it's just function over form. At home, though, I'm inclined to agree that being pleasing to look at matters to me.
It may be the way I use it, but qwen3-coder (30b with ollama) is actually helping me with real-world tasks. It's a bit worse than the big models for the way I use it, but absolutely useful. I do use AI tools with very specific instructions, though: file paths, line numbers if I can, specific direction about what to do, my own tools, etc., so that may be why I don't see such a huge difference from the big models. Roughly what I mean by "very specific instructions" is sketched below.
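A minimal sketch of that kind of tightly scoped request against Ollama's local REST API. The file path, line numbers, function name, and model tag are just placeholders to show the shape of the prompt, not anything from a real project:

```python
import requests

# Tightly scoped ask: the model gets exact locations and constraints,
# not an open-ended "improve my code" request.
prompt = (
    "In src/parser.py, lines 42-60, the tokenize() function splits on "
    "whitespace only. Change it to also split on commas, keep the existing "
    "return type (list[str]), and do not touch any other function."
)

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen3-coder:30b",  # tag may differ on your install
        "prompt": prompt,
        "stream": False,
    },
    timeout=300,
)
print(resp.json()["response"])
```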
It has everything to do with the way you use it. And the biggest difference is how fast the model/service can process context. Everything is context. It's the difference between iterating on an LLM-boosted goal for an hour vs. five minutes. If your workflow involves chatting with an LLM and manually passing chunks, manually retrieving the response, manually inserting it, and manually testing...
You get the picture. Sure, even last year's local LLM will do well in capable hands in that scenario.
Now try pushing over 100,000 tokens in a single call, every call, in an automated process. I'm talking about the kind of workflow where you push over a million tokens in a few minutes, across several steps.
That's where the moat, no, the chasm, between local setups and a public API lies.
No one who does serious work "chats" with an LLM. They trigger workflows where "agents" chew on a complex problem for several minutes.
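For illustration only, here's a rough sketch of what that kind of automated, context-heavy workflow looks like. The repo path, step list, and model name are made up, and it assumes any OpenAI-compatible endpoint behind the official client; the point is that every step re-sends a huge context with no human in the loop:

```python
from pathlib import Path
from openai import OpenAI  # works against any OpenAI-compatible endpoint

client = OpenAI()  # assumes API key / base_url are already configured

# Dump the whole source tree into the context -- easily 100k+ tokens on a real repo.
context = "\n\n".join(
    f"# {p}\n{p.read_text()}" for p in Path("src").rglob("*.py")
)

# Each step feeds the previous step's output back in; nobody "chats".
steps = [
    "List every public function that lacks a docstring.",
    "For each function listed above, draft a docstring.",
    "Produce a unified diff applying those docstrings.",
]

notes = ""
for step in steps:
    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any large-context model
        messages=[
            {"role": "system", "content": "You are working on this repository:\n" + context},
            {"role": "user", "content": notes + "\n\n" + step},
        ],
    )
    notes = reply.choices[0].message.content

print(notes)
```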
You'll see good results. Kimi is basically a microdosing Sonnet lol. Very, very reliable tool calls, but because it's microdosing, you don't want to use it for implementing OAuth; use it for adding comments or strictly directed work (i.e., a series of text mutations).
This is not a coder.
It helps with typing instructions; coding is different. For example: look at my repository and tell me how to refactor it, write a new function, etc.
In my opinion, you should change the name.
Trees are barely a firm category of plant at all; "tree" basically just means a tall plant with a woody stem. Plants can gain and lose woody stems without too much trouble (relatively speaking, over evolutionary time). So any time a plant species that currently grows soft stems can benefit from being really tall, it has a good chance of evolving into a "tree".
As an aside: the blog post briefly talks about birds. It turns out that membrane wings are much easier to evolve than feathered wings. There have been lots of membrane-winged creatures (including "birds" with membrane wings in the Jurassic), but not nearly as many appearances of feathered wings.
Is there a model that can generate vocals for an existing song given lyrics and some direction? I can't sing my way out of a paper bag, but I can make everything else for a song, so it would be a good way to try a bunch of ideas and then involve an actual singer for any promising ones.