There's a difference in terms of their motives, methods, and the surrounding context, but ultimately it's just actions and consequences and a messy collective decision-making process. The collective ruling body has thus far decided that Uber should be allowed to continue, and the conversations and laws continue to evolve around these things. Nobody is calling Travis a hero, but we've [collectively] agreed that there was some value in some of those decisions.
Let's look at that gulf. One's a poor black woman in the 1960s and the other's a rich white guy in the 2010s. It's easy to see which one we've been programmed to be supportive of. But picking someone based on the color of their skin and not the content of their character isn't what we're going for. So we have to be explicit in saying that the documented actions of this particular rich white guy are what people find offensive about him, rather than simply that he is one.
In terms of societal change though, they both had a bad law in front of them, and they both broke it. In Rosa Parks' case, the law got changed. In Travis Kalanick's case, new laws got passed specifically regulating his company. But the thing is, the taxi medallion laws haven't actually gone away. The result is that Uber has to do things in weird ways to satisfy the letter of the various laws that exist in each jurisdiction.
Travis Kalanick got rich off the backs of an army of drivers and a swath of passengers. Rosa Parks did not.
He did some pretty shitty stuff along the way, sure.
One thing about Rosa Parks is that she wasn't the first. She became the figurehead because she was the woman who wasn't going to fall to ad hominem attacks. We can name the logical fallacy, but unfortunately it works in the unregulated court of public opinion.
Neither was Travis, but they were both the ones that succeeded. She succeeded in changing minds and laws, and he succeeded in making a pile of money.
So there's absolutely a gulf between the two, and that gulf is that the laws about sitting in the back of the bus got struck down. The taxi laws did not. One happens to be a rich white guy and the other happens to be a not-exactly-well-off black woman, and the black woman actually managed to get the laws changed.
The one line you've pulled out is a quote from someone else and not the words of the authors of this paper, to be clear.
Here's the full context:
1 Overview
The culture of AI is imperialist and seeks to expand the kingdom of the machine. The AI community is well organized and well funded, and its culture fits its dreams: it has high priests, its greedy businessmen, its canny politicians. The U.S. Department of Defense is behind it all the way. And like the communists of old, AI scientists believe in their revolution; the old myths of tragic hubris don’t trouble them at all.
Tony Solomonides and Les Levidow (1985, pp. 13–14)
This paper sets out our expert position on artificial intelligence (AI) technologies permeating the higher education sector, demonstrating how this directly erodes our ability to function (see also our Open Letter, Guest, van Rooij, et al. 2025).
(the rest elided)
If you aren’t familiar with the term, you might want to consider reading about the “AI winter”. To me, it’s not surprising at all that the quote would come from 1985.
Athanasiou, Tom (1985). “Artificial intelligence: cleverly disguised politics”. In: Compulsive technology: computers as culture. Ed. by Tony Solomonides and Les Levidow. Free Association Books, pp. 13–35
I just had a conversation with free ChatGPT about when a sports game started. ChatGPT got it hilariously wrong: a time it couldn't possibly have been, given the other things I know about the game. I just didn't want to trawl through search results to find out, so I thought AI could be a nice shortcut. Mistake, I guess. Then I tried to tell it how and why it was wrong, and got further hilariously wrong attempts at a response from the AI. I couldn't help but give a few more pointless clarifying replies, even though I knew I would get nothing out of it and the AI would learn nothing. I seem to do this every month or so, then get frustrated with how useless it is, and then swear off it for another month.
Did you ask it to search the Internet as a part of your request? It is still extremely imperfect, but that typically helps it get basic details correct. At least for me at any rate.
Wow it's just as stupid and ahistorical in context! Every revolution's fighters believed in it despite the utter catastrophe that came about afterwards. The only reason to point to that instead of, idk, Belgium's capitalist turn leading to the crimes against humanity in the Congo, is to score points with credulous burgers.
The detail that rightsholders seem to be demanding a revenue share is interesting. That sounds administratively and technologically very complex, and probably also just plain expensive, to implement.
With some back-of-the-napkin math, I am pretty sure you're off by at least two orders of magnitude, conceivably four. I think 2 cents per video is an upper limit.
Generally speaking, API costs that the consumer sees are way higher than compute costs that the provider pays.
EDIT: Upper limit on pure on-going compute cost. If you factor in chip capital costs as another commentator on the other thread pointed out, you might add another order of magnitude.
I suspect amortized training costs are only a relatively small fraction of the amortized hardware costs (i.e. counting amortized hardware costs already accounts for the large fraction of the cost of training and pulling out training as a completely separate category double counts a lot of the cost).
It’s more of a ballpark, since exact numbers vary and OpenAI could be employing shenanigans to cut costs, but for comparison, Veo 3, which produces similar-quality 720p video, costs $0.40/second for the user, and Sora’s videos are 10 seconds each. Although Veo 3 could cost Google more or less than what is charged to the user.
I suspect OpenAI’s costs to be higher if anything since Google’s infra is more cost-efficient.
Workers getting paid a flat rate while owners are raking in the entire income generated by the work is how the rich get richer faster than any working person can.
This "but it's too hard to implement" excuse never made sense to me. So it's doable to make a system like this, to have smart people working on it, hire and poach other smart people, to have payments systems, tracking systems, personal data collection, request filtering and content awareness, all that jazz, but somehow all of that grinds to a halt the moment a question like this arises? and it's been a problem for years, yet some of the smartest people are just unable to approach it, let alone solve it? Does it not seem idiotic to see them serve 'most advanced' products over and over, and then pretend like this question is "too complex" for them to solve? Shouldn't they be smart enough to rise up to that level of "complexity" anyway?
Seems more like selective, intentional ignoring of the problem to me. It's just because if they start to pay up, everyone will want to get paid, and paying other people is something that companies like this systematically try to avoid as much as possible.
How much would it cost? I could stomach a pretty big tax increase if it meant no children in my home country would ever go to sleep starving again. That seems like a social good to me.
On Linux it uses an IFUNC resolved at load/dynamic-relocation time, so at runtime it's the same cost as any other (relocatable) function call. But they're "static" in that it's not a calculated address, so it's pretty easy for a superscalar CPU to follow.
So it does have some limitations like not being inlined, same as any other external function.
Since TEXTREL is basically gone these days (for good reasons!), IFUNC is the same as any other call that is relocatable to a target not in the same DSO. Which is either a GOT or PLT, either of which ends up being an indirect call (or branch if the compiler feels like it and the PLT isn’t involved). Which is what the person you’re replying to said :)
A relocatable call within the same DSO can be a PC-relative relocation, which is not a relocation at all when you load the DSO and ends up as a plain PC-relative branch or call.
Sure, but they're already paying that cost for every non-static function anyway. Any DSO, or executable that allows function interposition, already pays.
Ideally you should just multiversion the topmost exported symbol; everything below that should either be directly inlined or, since the architecture variant is known statically by the compiler, have variants and a direct call generated. I know at least GCC can do this variant generation for things like constant propagation over static function boundaries, so I /assume/ it can do the same for other optimization variants like this, but admittedly I haven't checked.
What about duplicating the entire executable essentially a few times, and jumping to the right version at the very beginning of execution?
You have bigger binaries, but the logistics are simplified compared to shipping multiple binaries and you should get the same speed as multiple binaries with fully inlined code.
Since they don't seem to be doing that, my question is: what's the caveat I'm missing? (Or are the bigger binaries enough of a caveat by themselves?)
Ideally you only need to duplicate until you hit the first not-inlined function call; at that point there’s nothing gained and it’s just a waste of binary size.
I would be curious to see whether it's a common opinion that DirectX was a bad thing for the games industry. It was preceded by a patchwork of messy graphics/audio/input APIs, many of them proprietary, and when it started to gain prominence, Linux gaming was mostly a mirage.
A lot of people still choose to build games on Direct3D 11 or even 9 for convenience, and now thanks to Proton games built that way run fine on Linux and Steam Deck. Plus technologies like shadercross and mojoshader mean that those HLSL shaders are fairly portable, though that comes at the cost of a pile of weird hacks.
One good thing is that one of the console vendors now supports Vulkan, so building your game around Vulkan gives you a head start on console and means your game will run on Windows, Linux and Mac (though the last one requires some effort via something like MoltenVK) - but this is a relatively new thing. It's great to see either way, since in the past the consoles all used bespoke graphics APIs (except Xbox, which used customized DirectX).
An OpenGL-based renderer would have historically been even more of an albatross when porting to consoles than DX, since (aside from some short-lived, semi-broken support on PS3) native high-performance OpenGL has never been a feature on anything other than Linux and Mac. In comparison, DirectX has been native on Xbox since the beginning, and that was a boon in the Xbox 360 era when it was the dominant console.
IMO, historically, picking a graphics API has always been about tradeoffs, and realities favored DirectX until at least the end of the Xbox 360 era, if not longer.
While the Switch supports Vulkan, if you really want to take advantage of the Switch hardware, NVN is the way to go, or you can use the Nintendo Vulkan extensions that are only available on the Switch.
Usually it is an opinion held by folks without a background in the industry.
Back in my "want to do games phase", and also during Demoscene days, going to Gamedev.net, Flipcode, IGDA forums, or attending GDCE, this was never something fellow coders complained about.
Rather how to do some cool stuff with specific hardware, or gameplay ideas, and mastering various systems was also seen as a skill.
Their company name is apparently SQLite Cloud, Inc., and they offer multiple products with SQLite in the name. I guess maybe the SQLite people just don't care about companies using the trademark?
My understanding is that material composition can make a CT scan take a really long time. It makes sense to me that scanning a battery would be pretty slow, given what they're made out of.
Maintaining a browser is already hard enough, it's a very tough sell to convince 3+ browser vendors to implement a new language with its own standard library and quirks in parallel without a really convincing argument. As of yet, nobody has come up with a convincing enough argument.
Part of why WebAssembly was successful is that it's a way of generating JavaScript-runtime IR instead of a completely new language + standard library - browsers can swap out their JavaScript frontend for a WASM one and reuse all the work they've done, reusing most of their native code generator, debugger, caches, etc. The primitives WASM's MVP exposes are mostly stuff browsers already knew how to do (though over time, it has accumulated new features that don't have a comparison point in JS).
And then WASM itself has basically no standard library, which means you don't have to implement a bunch of new library code to support it, just a relatively small set of JS APIs used to interact with it.
Every modern implementation I know of at least partially reuses the internals of the JS runtime, which enables things like cross-language inlining between WASM and JS.
Depending on the language, GC is either implemented in userspace using linear memory, or using the new GC extension to webassembly. The latter has some restrictions that mean not every language can use it and it's not a turnkey integration (you have to do a lot of work), but there are a bunch of implementations now that use wasm's native GC.
If you use wasm's native GC your objects are managed by the WASM runtime (in browsers, a JS runtime).
For things like goroutines you would emulate them using wasm primitives like exception handling, unless you're running in a host that provides syscalls you can use to do stuff like stack switching natively. (IIRC stack switching is proposed but not yet a part of any production WASM runtime - see https://webassembly.org/features/)
Based on what I read in a quick search, what Go does is generate each goroutine as a switch statement based on a state variable, so that you can 'resume' a goroutine by calling the switch with the appropriate state variable to resume its execution at the right point.