I feel like there is a huge elephant in the room, and I'm sure I can't be the only one thinking about this. Yet in the web dev community, which I likely follow a bit too closely for my own good, it's mostly business as usual. People argue about all kinds of things as if AI had never happened. Sure, everyone is regularly confronted with the rapid advances made by Copilot and the like, but influencers on YouTube and X, as well as the very companies developing all these novel solutions, keep telling us: "there will always be a need for good (whatever that means) developers," "the rising tide lifts all boats," or "the ones who adapt will be the ones who survive." It seems obvious why each of them is making these claims. I'm just having an increasingly hard time believing any of it.
I think pretty much everyone who has followed the advances of AI in the past years can imagine a not-too-distant future (likely less than 10 years) where literally everything we are doing now will be obsolete. Even the best developers will not be able to compete with the effectively infinite compute that large companies throw at training ever more capable models. Artificial agents will likely be better at any given knowledge-worker task by the end of the decade, and why would they need instructions that go beyond a simple list of requirements? Sure, it's currently still a big advantage to have a deep understanding of how complex software systems work in order to leverage the full potential of the current generation of AI assistants, but I'm not at all convinced that this will last another decade. Powerful agents will be ubiquitous. Near-infinite compute outperforms anyone.
There might be societal backlash at some point, as more and more people become obsolete in their jobs, but democracies are already under a lot of pressure all around the world and have a tendency (by design) not to adapt quickly enough to technological change. It seems telling to me that even the more consolidated democracies of a decade ago failed spectacularly at regulating Big Tech. We live with the result today: accumulations of power that outmatch almost all democratically elected governments worldwide and dictate large parts of our daily lives. And that concentration of power will likely seem laughable compared to what we're headed toward.
And even if a sufficiently powerful societal movement could emerge in one or many of the world's democracies, I'm almost convinced there would be a sufficiently large counter-movement claiming (and maybe rightly so) that "we" need to be faster than China, because otherwise "they" will be the ones calling the shots for centuries to come. So "going into politics" seems like a waste of time from my perspective at this point.
I'm currently a freelance web developer who studied design and trained and worked for a couple of years as a chef. This is to say that I have adapted quite a bit over the years. However, I haven't founded a bunch of startups, worked at a FAANG company, or anything of the like. Heck, I don't even have a large following on X or any other social platform. I have two kids and no large savings that could assure me of being able to at least sustain myself for a couple of years during the AI-pocalypse. This means I cannot "invest" my way through the next years by buying stock or real estate or anything of the like.
I find myself in a strange position here: my diverse background and experiences have given me a perspective that lets me see the potential impact of AI more clearly than large parts of the society I'm living in. At the same time, I feel like there is absolutely nothing to be done for someone in my position.
How are you dealing with this?
You can't get a steam engine to dig a hole, though. It was used to pump water. Then to carry things. Then for locomotion. They were limited to rails. They did take jobs. Then other kinds of engines appeared and did things steam engines didn't do so well.
But turning an engine into a bulldozer isn't easy, and that created a lot of new jobs. Right now, LLMs are doing the equivalent of pumping water: heavy, boring, tedious things that humans don't really want to do. I hated doing data entry and cleaning, and LLMs are great at that.
One thing many creatives won't admit is that a lot of their work is generic and repetitive, simply requiring lots of attention to get good output. Screenwriters were among the first people I observed using LLMs, because they understood this. Someone hacked Copilot to start writing scripts. There are probably a lot of things it can do, perhaps in animation and logo design and the like. And there's a lot it can't do. But they're shocked because it's doing the "soulful" stuff that people thought made them human, when it's actually just the advantage of a huge training corpus of tropes.
Will art become obsolete? The first knowledge worker to lose his job to AI was Kasparov, and yet human chess is more popular than ever.