CachyOS uses this for its one percent of performance gains? Since it chases every performance gain, that's unsurprising. But now I wonder how my laptop from 2012 managed to run CachyOS; they seem to switch based on hardware, not during image download and boot.
The most concerning part about modern CI to me is how most of it is running on GitHub Actions, and how GitHub itself has been deprioritizing GitHub Actions maintenance and improvements over AI features.
> Thank you for your interest in this GitHub repo, however, right now we are not taking contributions.
> We continue to focus our resources on strategic areas that help our customers be successful while making developers' lives easier. While GitHub Actions remains a key part of this vision, we are allocating resources towards other areas of Actions and are not taking contributions to this repository at this time.
The last time the company I worked for was hosting code on GitHub, Actions did not exist yet, and for personal stuff copying some 3-liners was fine. I'd hardly call that "using".
"Github Actions might be over, so not worth engaging" was not on my bingo card.
Know what I love in a good build system? Nondeterminism! Who needs coffee when you can get your thrills from stochastic processes. Why settle for just non-repeatable builds when you can have non-repeatable build failures!
Would a smart AI accept such foolishness? I doubt it. It'll still use something deterministic under the hood - it'll just have a conversational abstraction layer for talking to the Product person writing up requirements.
We used to have to be able to communicate with other humans to build something. It seems to me that's what they're trying to take out of the loop by doing the things that humans do: talk to other humans and give them what they're asking for.
I too am not a fan of the dystopias we're ending up in.
Would it, or would it rewrite/refactor the logic every time? I'd expect the logic to remain as-is for months, but then change suddenly without warning when the AI is upgraded.
I personally find this pretty concerning: GitHub Actions already has a complex and opaque security model, and adding LLMs into the mix seems like a perfect way to keep up the recent streak of major compromises driven by vulnerable workflows and actions.
I would hope that this comes with major changes to GHA’s permissions system, but I’m not holding my breath for that.
I don't view it as a bug. It's a personality trait of the model that made "user steering" much easier, thus helping the model to handle a wider range of tasks.
I also think that there will be no "perfect" personality out there. There will always be folks who view some traits as annoying icks. So, some level of RL-based personality customization down the line will be a must.
This was my first thought and I suppose flock(1) could be used to recreate a lot of this. But it does come with some other quality-of-life improvements like being able to list all currently-used locks, having a lock holdable by N processes etc.
flock is indeed built-in: `flock -xn /tmp/mylock.lock -c "echo running locked command"` does mutex locking in bash. Your tool might offer better ergonomics or features beyond flock's capabilities?
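For reference, the flock(1) mutex pattern sketched out a bit more fully; the lock-file path here is arbitrary, chosen just for this example:

```shell
#!/bin/sh
# Sketch: flock(1) as a mutex around a critical section.
# /tmp/demo.lock is an arbitrary path; flock creates it if missing.
lockfile=/tmp/demo.lock

# -x: exclusive lock, -n: fail immediately instead of blocking
if flock -xn "$lockfile" -c 'echo "got the lock"'; then
  echo "critical section ran"
else
  echo "another process holds the lock"
fi
```

With `-n`, a second concurrent invocation exits nonzero right away instead of queueing, which is usually what you want for cron-style "don't run twice" guards.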
> For performance, Go using CGO is going to be closer to Python than Go.
This is not true. In a lot of libraries, unless there are asm implementations on the pure-Go side, the CGO implementation often outperforms. Zstd used to be one of the most notable examples.
> CGO is slow and often still painful to build cross platform.
This is true. I found that using Bazel to manage the entire build graph made CGO a lot easier to deal with. By adopting Bazel, you formalize the cost of operating a cc toolchain and sysroot up front instead of hiding it inside nested layers of environment variables and CI container images. Bazel also makes builds faster and cross-platform CGO easier.
> Go is no longer able to be built into a single static binary.
The "C" portion of CGO can be made static as well. It often results in a much bigger binary than a dynamically-linked one, though. In setups where you control the runtime environment (e.g. web servers), I don't see a clear benefit in shipping duplicate bytes inside a static binary. Even for consumer-facing use cases (e.g. devtools), a static binary can be too big and unwieldy versus just a set of pre-built binaries targeting specific platforms via GitHub Releases.
`ctx.Value` is an `any -> any` kv store that comes with no documentation or type checking for which keys and values should be available. It's quick and dirty, but in a large code base it can be quite tricky to check whether you are passing too many values down the chain, or too few, and to handle the failure cases.
What if you just use a custom struct with all the fields you may need to be defined inside? Then at least all the field types are properly defined and documented. You can also use multiple custom "context" structs in different call paths, or even compose them if there are overlapping fields.
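A minimal sketch of that idea; `RequestScope` and its fields are hypothetical names invented for this example:

```go
package main

import "fmt"

// RequestScope bundles everything a call path may need,
// instead of stashing loose values in ctx.Value. Every field
// is typed and visible in one place.
type RequestScope struct {
	UserID  int64
	TraceID string
}

func handle(rs RequestScope) {
	// The compiler catches a missing or mistyped field here,
	// unlike an any -> any lookup at runtime.
	fmt.Println(rs.UserID, rs.TraceID)
}

func main() {
	handle(RequestScope{UserID: 7, TraceID: "abc"})
}
```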
Because you should wrap that in a type-safe function. You should not call ctx.Value() directly; use your own accessor function instead. The context is just a transport mechanism.
> `ctx.Value` is an `any -> any` kv store that comes with no documentation or type checking for which keys and values should be available
The docs https://pkg.go.dev/context#Context suggest a way to make it type-safe (use an unexported key type and provide getter/setter). Seems fine to me.
> What if you just use a custom struct with all the fields you may need to be defined inside?
> `ctx.Value` is an `any -> any` kv store that comes with no documentation or type checking for which keys and values should be available.
On a similar note, this is also why I highly dislike struct tags. They're string magic that should be used sparingly, yet we've integrated them into data parsing, validation, type definitions and who knows what else just to avoid a bit of verbosity.
Most popular languages support annotations of one type or another, they let you do all that in a type safe way. It's Go that's decided to be different for difference sake, and produced a complete mess.
IMO Go is full of stuff like this where they do something different than most similar languages for questionable gains. `iota` instead of enums, implicit interfaces, full strings in imports (not talking about URLS here but them having string literal syntax), capitalization as visibility control come to mind immediately, and I'm sure there are others I'm forgetting. Not all of these are actively harmful, but for a language that touts "simplicity" as one of its core values, I've always found it odd how many different wheels Go felt the need to reinvent without any obvious benefit over the existing ones.
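For readers who haven't seen the iota pattern mentioned above, this is the conventional stand-in for an enum:

```go
package main

import "fmt"

// Go has no enum keyword; the idiom is a named type plus a
// const block where iota auto-increments from zero.
type Color int

const (
	Red   Color = iota // 0
	Green              // 1
	Blue               // 2
)

func main() {
	fmt.Println(Red, Green, Blue) // 0 1 2
}
```

Unlike enums in most languages, nothing stops you from doing `Color(42)` or mixing in arbitrary ints, which is part of the reinvented-wheel criticism.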
The second I tried writing Go to solve a non-trivial problem, the whole language collapsed in on itself. Footguns upon footguns, hand-waved away with "it's the Go way!". I just don't understand. "The Go way" feels more like a mantra that discourages critical thinking about programming language design.
It did not have to be this way; this is a shortcoming of Go itself. Generic interfaces make things a bit better, but Go's designers chose that dumb typing in the first place. The std lib is full of interface{} use itself.
context itself is an afterthought, because people were building thread-unsafe, leaky code on top of HTTP requests with no good way to easily scope variables that would scale concurrently.
I remember the web session lib back then, for instance: a hack.
ctx.Value is made for goroutine-scoped data; that's the whole point.
If it is an antipattern well, it is an antipattern designed by go designers themselves.