
Nice. This is one of the main reasons why I picked CachyOS recently. Now I can fall back to Ubuntu if CachyOS gets me stuck somewhere.


CachyOS uses this one percent of performance gains? Since it uses every performance gain it can, that's unsurprising. But now I wonder how my laptop from 2012 ran CachyOS; they seem to switch based on hardware, not during image download and boot.


Correct, it just sets the repository in pacman.conf to cachyos, -v3, or -v4 at install time based on a hardware probe.


The most concerning part about modern CI to me is how most of it is running on GitHub Actions, and how GitHub itself has been deprioritizing GitHub Actions maintenance and improvements over AI features.

Seriously, take a look at their pinned repo: https://github.com/actions/starter-workflows

> Thank you for your interest in this GitHub repo, however, right now we are not taking contributions.

> We continue to focus our resources on strategic areas that help our customers be successful while making developers' lives easier. While GitHub Actions remains a key part of this vision, we are allocating resources towards other areas of Actions and are not taking contributions to this repository at this time.


The last time the company I worked for was hosting code on GitHub, Actions did not exist yet, and for personal stuff copying some three-liners was fine. I'd hardly call that "using".

"Github Actions might be over, so not worth engaging" was not on my bingo card.


They are instead focusing on Agentic Workflows, which use natural language instead of YAML.

https://github.com/githubnext/gh-aw


Know what I love in a good build system? Nondeterminism! Who needs coffee when you can get your thrills from stochastic processes. Why settle for just non-repeatable builds when you can have non-repeatable build failures!


Would a smart AI accept such foolishness? I doubt it. It'll still use something deterministic under the hood - it'll just have a conversational abstraction layer for talking to the Product person writing up requirements.

We used to have to be able to communicate with other humans to build something. It seems to me that's what they're trying to take out of the loop by doing the things that humans do: talk to other humans and give them what they're asking for.

I too am not a fan of the dystopias we're ending up in.


Would it, or would it rewrite / refactor the logic every time? I'd expect the logic to remain as is for months, but then change suddenly without warning when the AI is upgraded.


“Just make it generate YAML and cache that until the prompt changes!”

Orrrrr… just keep that YAML as the sole configuration input in the first place. Use AI to write it if you wish, but then leave it alone.


What I'm hearing is we need to invent LLM-based compilers.


Time to launch LLMLLVM.


It's just translation, right? LLMs are pretty good at that..


I personally find this pretty concerning: GitHub Actions already has a complex and opaque security model, and adding LLMs into the mix seems like a perfect way to keep up the recent streak of major compromises driven by vulnerable workflows and actions.

I would hope that this comes with major changes to GHA’s permissions system, but I’m not holding my breath for that.


I don't view it as a bug. It's a personality trait of the model that made "user steering" much easier, thus helping the model to handle a wider range of tasks.

I also think that there will be no "perfect" personality out there. There will always be folks who view some traits as annoying icks. So, some level of RL-based personality customization down the line will be a must.



Nit: Probably https://man7.org/linux/man-pages/man1/flock.1.html (shell command, not the underlying libc function)


This was my first thought, and I suppose flock(1) could be used to recreate a lot of this. But it does come with some other quality-of-life improvements, like being able to list all currently-held locks, having a lock holdable by N processes, etc.


Because that's a syscall ;) https://man7.org/linux/man-pages/man1/flock.1.html is the command line manual.

I would say one good reason is that

  waitlock myapp &
  JOB_PID=$!
  # ... do exclusive work ...
  kill $JOB_PID
is a lot easier to use and remember than

  (; flock -n 9 || exit 1; # ... commands executed under lock ...; ) 9>/var/lock/mylockfile


Why

  (; flock -n 9
and not

  ( flock -n 9

?


It's a "for" loop.


Could you elaborate?


A for loop in a shell script may sometimes look like this:

`for ((i = 0 ; i < max ; i++ )); do echo "$i"; done`

Here this is essentially a "while" loop, meaning it will keep executing the commands as long as we don't reach `exit 1`.

  (; flock -n 9 || exit 1; # ... commands executed under lock ...; )


It doesn't seem to work?

  [~] 0 $ ( flock -n 9 || exit 1; echo in loop ; sleep 3 ; echo done working ; ) 9>~/tmp/mylock
  in loop
  done working
  [~] 0 $ (; flock -n 9 || exit 1; echo in loop ; sleep 3 ; echo done working ; ) 9>~/tmp/mylock
  -bash: syntax error near unexpected token `;'
  [~] 2 $

(This is bash)


flock can be used in a single line, for example for cronjobs:

  flock -s file && script

Pretty simple. (I forgot the argument; I think it's -s..


just pushed a change so now it's:

  waitlock myapp &
  # ... do stuff
  waitlock --done myapp


flock is indeed built-in: `flock -xn /tmp/mylock.lock -c "echo running locked command"` does mutex locking in bash. Your tool might offer better ergonomics or features beyond flock's capabilities?


More on Netflix's Remote Workstation setup for artists https://aws.amazon.com/solutions/case-studies/netflix-workst...


I think this is a common practice by now.

Here is a talk from Netflix about cloud workspace for their artists https://aws.amazon.com/solutions/case-studies/netflix-workst...


I’ve worked in VFX shops and it is indeed very common practice.


Hmm, it's weird that this submission and its comments are being shown to me as "hours ago" while they are all 2 days old.



Sometimes the moderators will effectively boost a post that they think is interesting so it gets more views.


> For performance, Go using CGO is going to be closer to Python than Go.

This is not true. In a lot of libraries, unless there are asm implementations on the pure Go side, the CGO implementation often outperforms. Zstd used to be one of the most notable examples.

> CGO is slow and often still painful to build cross platform.

This is true. I found that using Bazel to manage the entire build graph made CGO a lot easier to deal with. By adopting Bazel, you formalize the cost of operating a cc toolchain and sysroot up front instead of hiding it inside nested layers of environment variables and CI container images. Bazel also makes your build faster and cross-platform CGO easier.

> Go is no longer able to be built into a single static binary.

The "C" portion of CGO can be made static as well. It often results in a much bigger binary than a dynamically-linked one, though. In setups where you control the runtime environment (e.g. web servers), I don't see a clear benefit in shipping duplicate bytes inside a static binary. Even for consumer-facing use cases (e.g. devtools), a static binary can be too big and unwieldy versus a set of pre-built binaries targeting specific platforms via GitHub Releases.


Let me try to take the other side:

`ctx.Value` is an `any -> any` kv store that comes with no documentation or type checking of which keys and values should be available. It's quick and dirty, but in a large code base it can be quite tricky to check whether you are passing too many values down the chain, or too few, and to handle the failure cases.

What if you just use a custom struct with all the fields you may need defined inside? Then at least all the field types are properly defined and documented. You can also use multiple custom "context" structs in different call paths, or even compose them if there are overlapping fields.
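As a sketch of that alternative (the `RequestScope` struct and `handle` function here are hypothetical, not from any real codebase):

```go
package main

import "fmt"

// RequestScope is a hypothetical alternative to ctx.Value: every field
// a handler chain may need is declared and typed up front.
type RequestScope struct {
	UserID  int64
	TraceID string
}

func handle(s RequestScope) string {
	// The compiler, not a runtime type assertion, guarantees these fields exist.
	return fmt.Sprintf("user=%d trace=%s", s.UserID, s.TraceID)
}

func main() {
	fmt.Println(handle(RequestScope{UserID: 7, TraceID: "abc"}))
}
```

The trade-off is that every function in the call path now names the struct type explicitly, which is exactly the coupling the context approach avoids.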


Because you should wrap that in a type-safe function. You should not use ctx.Value() directly but go through your own function; the context is just a transport mechanism.


If it is just a transport mechanism, why use context at all and not a typed struct?


Because dozens of in between layers don't need to know the type, and should in fact work regardless of the specific type.

Context tells you enough: someone, somewhere may do magic with this if you pass it down the chain.

And in good Go tradition it's explicit about this: functions that don't take a context don't (generally) do that kind of magic.

If anything it mixes two concerns: cancelation and dynamic scoping.

But I'm not sure having two different parameters would be better.


> `ctx.Value` is an `any -> any` kv store that does not come with any documentation, type checking for which key and value should be available

The docs https://pkg.go.dev/context#Context suggest a way to make it type-safe (use an unexported key type and provide getter/setter). Seems fine to me.

> What if you just use a custom struct with all the fields you may need to be defined inside?

Can't seamlessly cross module boundaries.


> `ctx.Value` is an `any -> any` kv store that does not come with any documentation, type checking for which key and value should be available.

On a similar note, this is also why I highly dislike struct tags. They're string magic that should be used sparingly, yet we've integrated them into data parsing, validation, type definitions and who knows what else just to avoid a bit of verbosity.


Most popular languages support annotations of one type or another, they let you do all that in a type safe way. It's Go that's decided to be different for difference sake, and produced a complete mess.


IMO Go is full of stuff like this where they do something different than most similar languages for questionable gains. `iota` instead of enums, implicit interfaces, full strings in imports (not talking about URLs here but them having string-literal syntax), and capitalization as visibility control come to mind immediately, and I'm sure there are others I'm forgetting. Not all of these are actively harmful, but for a language that touts "simplicity" as one of its core values, I've always found it odd how many different wheels Go felt the need to reinvent without any obvious benefit over the existing ones.
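For readers unfamiliar with the first item in that list: Go has no enum keyword, so the conventional substitute is a named type plus an `iota`-numbered const block, roughly like this:

```go
package main

import "fmt"

// Go's stand-in for enums: a defined type plus iota, which
// auto-increments within a const block (0, 1, 2, ...).
type Color int

const (
	Red   Color = iota // 0
	Green              // 1
	Blue               // 2
)

func main() {
	fmt.Println(Red, Green, Blue)
}
```

Unlike enums in most languages, nothing stops you from writing `Color(99)` or mixing in arbitrary ints, which is part of the criticism above.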


the second i tried writing go to solve a non-trivial problem the whole language collapsed in on itself. footguns upon footguns hand-waved away with "it's the go way!". i just don't understand. the "the go way" feels more like a mantra that discourages critical thinking about programming language design.


> `ctx.Value` is an `any -> any`

It did not have to be this way; this is a shortcoming of Go itself. Generic interfaces make things a bit better, but the Go designers chose that dumb typing in the first place. The std lib itself is full of interface{} use.

context itself is an afterthought, because people were building thread-unsafe, leaky code on top of HTTP requests with no good way to easily scope variables that would scale concurrently.

I remember the web session lib back then, for instance: a hack.

ctx.Value is made for per-goroutine scoped data; that's the whole point.

If it is an antipattern, well, it is an antipattern designed by the Go designers themselves.


Doesn't Rekor run on top of Trillian?

