I think this is mostly historical baggage, unfortunately. In every codebase I've worked in there was a huge push to only use native ES6 functionality, like Sets, Maps, all the Iterable methods, etc., but there was still a large chunk of files written before these were standardized and widely supported, so you get mixes of Lodash and a bunch of other cursed shit.
Refactoring these isn't always trivial either, so fully getting rid of something like Lodash from an old project is a long journey.
This has improved recently. Packages like lodash were once popular but you can do most stuff with the standard library now. I think the only glaring exception is the lack of a deep equality function.
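For reference, here's a minimal deep-equality sketch in plain TypeScript; the function name and the edge cases it covers are illustrative choices on my part, not any standard API:

    // Minimal deep-equality sketch for primitives, plain objects, and arrays.
    // Deliberately omitted: Dates, Maps, Sets, prototypes, and cyclic references.
    function deepEqual(a: unknown, b: unknown): boolean {
      if (Object.is(a, b)) return true; // same reference, or same primitive (incl. NaN)
      if (typeof a !== "object" || typeof b !== "object" || a === null || b === null) {
        return false;
      }
      const objA = a as Record<string, unknown>;
      const objB = b as Record<string, unknown>;
      const keysA = Object.keys(objA);
      if (keysA.length !== Object.keys(objB).length) return false;
      // Recurse over each own enumerable property.
      return keysA.every(
        (key) => Object.hasOwn(objB, key) && deepEqual(objA[key], objB[key])
      );
    }

    deepEqual({ x: [1, { y: 2 }] }, { x: [1, { y: 2 }] }); // true
    deepEqual({ x: 1 }, { x: "1" });                       // false

Those omitted cases are exactly why people still reach for lodash.isEqual when they matter.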
As history has shown us numerous times, it doesn't even have to be the best to win.
It rarely is, really. See the most pervasive programming languages for that.
Fear of change, or even just isles of stability, to help recuperate and reorient yourself whilst navigating the stormy seas of life.
Myself, I'm quite open to new forms of entertainment, as well as those previously unknown to me. Even within my favorite genres, I'm more than happy to explore - but I'm still gonna rewatch at least one Star Trek show each year.
It doesn't matter that I've seen most of those shows 6-10 times each over the course of my life; it doesn't matter that I've watched some specific episodes 20+ times already. What matters to me is, each time I see those characters and those locations, it feels like coming home.
(And more so than actually coming home.)
People anchor to different things like this, not just TV shows. Sometimes it's a real place (or an event in that place - e.g. vacation), sometimes it's a club, sometimes it's a video game or an outdoor hobby.
Which is a shame, really, because if you want something simple, learning Service, Ingress, and Deployment is really not that hard and pays off for years.
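If it helps, here's roughly what those three objects look like wired together; every name, image, host, and port below is a placeholder of mine, not from any real setup:

    # Minimal sketch: Deployment runs the pods, Service gives them a stable
    # address, Ingress routes external HTTP traffic to the Service.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 2
      selector:
        matchLabels: { app: web }
      template:
        metadata:
          labels: { app: web }
        spec:
          containers:
            - name: web
              image: nginx:1.27   # placeholder image
              ports:
                - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      selector: { app: web }
      ports:
        - port: 80
          targetPort: 80
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web
    spec:
      rules:
        - host: example.com   # placeholder host
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: web
                    port: { number: 80 }

One `kubectl apply -f` on that file and you have a replicated, load-balanced, externally reachable service, which is most of what a simple app ever needs.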
Plenty of PaaS providers, like OVH, will run your cluster for cheap so you don't have to maintain it yourself.
It really is an imaginary issue with terrible solutions.
Deploying a free tool that doesn't solve an organization's problems isn't a valid choice. I'm tired of open source advocates hand-waving away the reasons people choose other software. For most organizations, software is not the big cost; labor is. It often makes sense to throw a million dollars at a piece of software to make people's jobs easier, because that can translate to tens of millions in labor.
That is stretching the subject beyond reason. Nobody here is saying that proprietary software as a general endeavor is an invalid business.
LibreOffice is close enough to Microsoft's offering that surely it makes sense across the many EU states to stop spending millions on it, and spend a few to close the gap, saving even more millions in the future.
Respectfully, I think it's a bit of a Dunning–Kruger effect for random internet commenters to presume they know what is "close enough" to meet the requirements for the many thousands of different day jobs that people have across the different governments of dozens of different countries.
Certainly the people buying software know best what their requirements are.
> Certainly the people buying software know best what their requirements are.
I doubt it. The people who are going to use the software are the ones who know what the requirements are. The people buying it should be asking the users, but rarely do.
For a large software deployment, you should be getting part of your requirements from discussions with users, but there will often be a lot of requirements from non-user stakeholders. For government deployments, even more so.
Have you ever actually worked in a large org or government IT department? :D
Commendable ideas, but they do not translate to reality. Even taking the OSS discussion out of the equation: understanding and integrating user requirements in development processes is a hard problem in general. It gets worse when we are talking about resource-constrained contexts (like government IT).
I didn't say it wasn't hard. Regardless, it is extremely routine for multiple stakeholder groups to be involved in software purchases, at least over my 20 years of experience.
Let's be real... Tons of governments employ people just to boost employment numbers. Government staff are almost always simply a cost; governments don't need to be profitable. They extract taxes and then spend them. And I think a lot of countries would prefer to spend more on salaries than on software licenses going to a different country...
Being the best European AI company is also a multi-billion-dollar business. It's not like China or the US respects the GDPR. A lot of companies will choose the best European company.
The pre-training plateau is real. Nearly all the improvements since then have been around fine tuning and reinforcement learning, which can only get you so far. Without continued scaling in the base models, the hope of AGI is dead. You cannot reach AGI without making the pre-training model itself a whole lot better, with more or better data, both of which are in short supply.
While I tend to agree, I wonder if synthetic data might reach a new high with concepts like Google's AlphaEvolve. It doesn't cover everything, but at least in verifiable domains, I could see it producing more valuable training data. It's a little unclear to me where AGI will come from (LLMs? EBMs, per LeCun? Something completely different?).
> with more or better data, both of which are in short supply
Hmmm. It's almost as if a company without a user data stream like OpenAI would be driven to release an end-user device for the sole purpose of capturing more training data...
Could it be that, at least for the lowest-hanging fruit, most of the amazing things one can hope to obtain from scraping the whole web and throwing it at training compute have already been achieved? Maybe AGI simply cannot be reached without some relevant additional probes sent into the wild to feed its learning loops?
LLMs haven't improved much. What's improved is the chat apps: switching between language models, vision, and image and video generation, plus being able to search the internet, has made them seem 100x more useful.
Run a single LLM without any tools... They're still pretty dumb.
Why would the debt matter when you have $60 billion in ad revenue and are generating $20 billion in op income? That's OpenAI 5-7 years from now, if they're able to maintain their position with consumers. Once they attach an ad product their margins will rapidly soar due to the comparatively low cost of the ad segment.
The technology is closer to a decade from seeing a plateau for the large general models. OpenAI's o3 is significantly beyond o1 (much less GPT-3.5, which was just Nov 2022). Claude 4 is significantly beyond Claude 3.5. They're not subtle improvements. And most likely there will be a splintering of specialization that will see huge leaps outside the large general models. The radical leap in coding capabilities over the past 12-18 months is just an early example of how that will work, and it will affect every segment of human endeavour.
> Once they attach an ad product their margins will rapidly soar due to the comparatively low cost of the ad segment.
They're burning through compute and capital. No amount of advertising could cover the cost of training or even running these models. The steep subscription prices we've started seeing are just a small glimpse of the money they're burning through.
They will NOT make a profit using the current methods unless the models become at least 10 times more efficient than they are now. At that point, Europe can adopt the innovation without much cost.
It's an arms race to see who can burn the most money the fastest, while selling the result for as little as possible. When they need to start making money, it will all come crashing down.
I think the upside is that we stop spending limited human time on mundane/easy things and focus on higher-value pursuits.
Because you can no longer be a cheap artist, because you can no longer help students with easy problems en masse, because family businesses no longer need a webmaster.
That's a step in the right direction, maybe even towards UBI.
On growth, I disagree that we've reached the plateau already. We won't fundamentally change things, but larger context windows, speed, compute, and cost? Obviously.
That in itself is a major evolution.
It looks like the hype is fading, maybe, but that's just like all things. LLMs aren't going anywhere, just like Rails got version 8 out and it's better than ever.
One package for lists, one for sorting, and down the rabbit hole you go.