Hands up, the dozens of us pedants that have used a relaxed atomic add in situations like these. Updating the SP in the most paranoid way possible is the reason that sort of thing exists.
(You cannot express relaxed atomics in golang, but you could technically add support in the compiler for use in the runtime code)
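For contrast, the only ordering ordinary Go code can ask for is the sequentially consistent one in sync/atomic; a minimal sketch of what that looks like:

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    func main() {
        var counter int64
        // sync/atomic operations are sequentially consistent per the Go
        // memory model; there is no way to request a relaxed ordering here.
        atomic.AddInt64(&counter, 1)
        fmt.Println(atomic.LoadInt64(&counter))
    }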
Go users can only insert assembly wrapped in a function call. That might be safety related; I am not entirely sure.
(Well technically there is a way to inject assembly without the function call overhead. That's what https://pkg.go.dev/runtime/internal/atomic is doing. But you will need to modify the runtime and compiler toolchain for it.)
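For anyone curious, the pattern that package uses is a bodiless Go declaration backed by a Plan 9 style assembly file. A rough sketch, modeled loosely on that package's Xadd (hypothetical package name, amd64 only):

    // fastatomic/xadd.go
    package fastatomic

    // Xadd atomically adds delta to *ptr and returns the new value.
    // There is no Go body; the implementation lives in xadd_amd64.s.
    func Xadd(ptr *uint32, delta int32) uint32

    // fastatomic/xadd_amd64.s
    #include "textflag.h"

    // func Xadd(ptr *uint32, delta int32) uint32
    TEXT ·Xadd(SB), NOSPLIT, $0-20
        MOVQ  ptr+0(FP), BX    // argument offsets from FP must match the Go signature
        MOVL  delta+8(FP), AX
        MOVL  AX, CX
        LOCK
        XADDL AX, 0(BX)        // *ptr += delta, old value lands in AX
        ADDL  CX, AX           // AX = new value
        MOVL  AX, ret+16(FP)
        RET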
If you look at the docs, they expect the developer to provide specific information and use the registers in a specific way; otherwise Go will run into issues at runtime.
Whereas when you go through cgo, you get a marshaling layer, similar to how JNI and P/Invoke work, that takes care of those issues.
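To make that concrete, a minimal cgo sketch (the C helper is made up, just to show where the marshaling happens):

    package main

    /*
    #include <stdlib.h>
    #include <string.h>

    static size_t c_strlen(const char *s) { return strlen(s); }
    */
    import "C"

    import (
        "fmt"
        "unsafe"
    )

    func main() {
        // The C call goes through a generated shim that switches stacks and
        // converts Go values to C representations, much like JNI or P/Invoke;
        // no hand-written assembly involved.
        cs := C.CString("hello from Go") // copies the string into C-allocated memory
        defer C.free(unsafe.Pointer(cs))

        n := C.c_strlen(cs)
        fmt.Println("length:", int(n))
    }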
Language safety helps get you 80% of the way there, but you are still working in software on top of fundamentally unsafe hardware. Companies and agencies are pouring, and increasingly will pour, money into hardware that gives certain safety and security guarantees.
Totally agree. I have run into 'ideal' circumstances, around a 33% taken/untaken split, where you will still be hard pressed to make cmov perform better on real-life workloads. Pass in other data inputs that do predict better and your cmov becomes a liability.
It's pretty hard to make modern compilers reliably emit cmovs in my experience. I had to resort to inline asm.
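On the Go side of this thread there isn't even inline asm to reach for, so the equivalent trick is a tiny assembly func (same bodiless-declaration pattern as the Xadd sketch above). A branch-free min on amd64, names made up:

    // Go declaration elsewhere in the package: func min64(a, b int64) int64
    #include "textflag.h"

    TEXT ·min64(SB), NOSPLIT, $0-24
        MOVQ    a+0(FP), AX
        MOVQ    b+8(FP), BX
        CMPQ    BX, AX
        CMOVQLT BX, AX      // AX = b if b < a; no branch, nothing to mispredict
        MOVQ    AX, ret+16(FP)
        RET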
I wouldn't say they contain any model of the world. They're statistical predictive models, which have proven effective at certain tasks.
My take is that the demoware part is not inherent to the NN approach, but rather that the tasks it's unreasonably effective at make for very cool demos, where the audience readily fills in the blanks. Cool demos make it easier to get further resources, so demoware-prone techniques tend to pull more funding, at least for a while.
Well, the question I answered was "Why would anything that isn't Intel implode?", and an AI winter and another dotcom boom would do that to everyone not named Intel.
A basic block simulator like llvm-mca is unlikely to give useful information here, as memory access is going to play a significant part in the overall performance.
One of the key innovations behind the DNN/CNN models was Mechanical Turk. OpenAI used a similar system extensively to improve the early GPT models. I would not be surprised if the practice continues today; NN models need a lot of quality ground-truth training data.
Given the number of labs that are competing these days on "open weights" and "transparency" I'd be very interested to read details of how some of them are handling the human side of their model training.
I'm puzzled at how little information I've been able to find.
Beyond that, I think the reason you haven't heard more about it is that it happens in developing countries, so western media doesn't care much, and also because big AI companies work hard to distance themselves from it. They'll never be the ones directly employing these AI sweatshop workers; it's all contracted out.