One option that didn't seem to be discussed in TFA is turning away from AI.
There's an implicit assumption in the article that the coding models are here to stay in development. It's possible that assumption is incorrect for multiple reasons.
Maybe (as some research indicates) the models are as good as they are going to get. They're always going to be a cross between a chipper stochastic parrot and that ego-inflated junior dev who refuses to admit a mistake. Maybe when the real (non-subsidized) economics present themselves, the benefit isn't there.
Perhaps the industry segments itself to a degree. There's a big difference in tolerance for errors between a cat fart app and a nuclear cooling system. I can see a role for certified, 100% AI-free development. Maybe vibe coders go in one direction, with lower-quality output but rapid time to market, while a segment of more highly skilled developers focuses on AI-free development.
I also think it's possible that over time the AI hyper-productivity stuff is revealed to be mostly a mirage. My personal experience and a few studies seem to indicate this. The purported productivity boost is a result of confirmation bias and ridiculous metrics (like LOC generated) that have little to do with actual value creation. When the mirage fades, companies realize they are stuck with heaps of AI slop and no technical talent able to deal with it. A bitter lesson indeed.
Since we're reading tea leaves, I think the most likely outcome is that the massive central models for code generation fade due to enormous costs and increased endpoint device capabilities. The past 50 years have shown us clearly that computing will always distribute, and centralized, mainframe-style compute gets pushed down to powerful local devices.
I think it settles at an improved intellisense running locally. The real value of the "better search engine" that LLMs offer today diminishes as hard economics drive up subscription fees and sponsors manipulate the content (the same thing that happened to Google search results).
For end users, I think the models get shoved into a box to do the things they're really good at, like providing a much more intuitive human-computer interface, while the structured data from that interface is handed off to a human developer to reason about; MCP will expand and become the glue.
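As a hedged sketch of that handoff (askModel is a made-up stand-in, not any real SDK call): the model's only job is turning fuzzy human input into structured data, and everything downstream is deterministic, human-written code.

    // The model turns fuzzy text into structured data; ordinary code a
    // developer can reason about handles the rest. `askModel` is
    // hypothetical; a canned response keeps the sketch runnable.
    async function askModel(prompt) {
      return '{"action": "open_report", "subject": "exam_scores"}';
    }

    async function handleUserRequest(text) {
      const req = JSON.parse(
        await askModel(`Extract {action, subject} as JSON from: "${text}"`)
      );
      switch (req.action) { // structured handoff to human-written logic
        case "open_report":
          return openReport(req.subject);
        default:
          throw new Error(`Unsupported action: ${req.action}`);
      }
    }

    function openReport(subject) {
      return `Opening report: ${subject}`;
    }

    handleUserRequest("show me how the class did on the exam").then(console.log);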
I think that over time market forces will balance AI and human-created content, with a premium placed on the latter. McDonald's vs. a five-star steakhouse.
Assuming AI is at all useful, it's likely to be used for safety-critical software development. Safety-critical processes aren't likely to care much about LLM involvement, much as they don't generally care about the competence of the people already doing the work.
>Maybe (as some research indicates) the models are as good as they are going to get. They're always going to be a cross between a chipper stochastic parrot and that ego-inflated junior dev who refuses to admit a mistake. Maybe when the real (non-subsidized) economics present themselves, the benefit isn't there.
I'd put my money on this. From my understanding of LLMs, they are basically mashing words together via Markov chains, with a little subject classification added via attention, a little short-term memory, and enough grammar to lay things out correctly. They don't understand anything they are saying, they are not learning facts and trying to build connections between them, and they are not learning from their conversations with people. They aren't even running the equivalent of a game loop where they could sit and think about things. I would expect something we're trying to call an AI to call you up sometimes and ask you questions. Trillions of dollars have gotten us this far; how much further can they actually take us?
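To make the "mashing words together" picture concrete, here's a toy sketch: pick the next word from counts of what followed it before, append, repeat. (Illustrative only; real LLMs condition on the whole context window via attention, but the generation loop has the same shape. The tiny corpus here is made up.)

    // Toy Markov-chain text generator: the next word is sampled from
    // the words that followed the current word in some training text.
    const counts = {
      the: { cat: 3, dog: 1 },
      cat: { sat: 2, ran: 1 },
      sat: { down: 1 },
    };

    function nextWord(word) {
      const followers = counts[word];
      if (!followers) return null; // dead end: nothing ever followed this word
      const total = Object.values(followers).reduce((a, b) => a + b, 0);
      let r = Math.random() * total;
      for (const [w, c] of Object.entries(followers)) {
        if ((r -= c) < 0) return w;
      }
      return null;
    }

    const text = ["the"];
    for (let w = text[0]; (w = nextWord(w)); ) text.push(w);
    console.log(text.join(" ")); // e.g. "the cat sat down"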
I want my actual AI personal assistant, one I have to somehow coerce into doing something for me, like an emo teen.
> So you live in a world where code history must only be maintained orally?
There are many companies and scenarios where this is completely legitimate.
For example, a startup that's iterating quickly with a small, skilled dev team. A bunch of documentation is a liability; it'll be stale before anyone ever reads it.
Just grabbing someone and collaborating with them on what they wrote is much more effective in that situation.
> For example, a startup that's iterating quickly with a small, skilled dev team. A bunch of documentation is a liability; it'll be stale before anyone ever reads it.
This is a huge advantage for AI, though: it doesn't complain about writing docs, and it will actively keep the docs in sync if you pipeline your requests into something like "I want to change the code to do X: update the design docs, then update the code." Human beings would just grumble a lot; an AI doesn't complain...it just does the work.
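A rough sketch of what that pipelining looks like in practice; runAgent is a hypothetical stand-in for whatever coding agent you drive, not any tool's real API:

    // Hypothetical "docs first, then code" pipeline. `runAgent` is a
    // made-up helper standing in for an agent invocation; here it just
    // logs the prompt so the sketch runs as-is.
    async function runAgent(prompt) {
      console.log(`[agent] ${prompt}`);
    }

    async function changeWithDocsInSync(change) {
      // Docs first, so the design record never lags the implementation.
      await runAgent(`Update docs/design.md to reflect: ${change}`);
      await runAgent("Update the code to match docs/design.md");
      await runAgent("Check that docs/design.md and the code agree; list any drift");
    }

    changeWithDocsInSync("retry failed webhook deliveries with backoff");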
> Just grabbing someone and collaborating with them on what they wrote is much more effective in that situation.
Again, it just sounds to me like you are arguing for why AIs are superior, not how they are inferior.
Documentation isn't there to have and admire, you write it for a purpose.
There are like eight bajillion systems out there that can generate low-level javadoc-ish docs. Those are trivial.
The other types of internal developer documentation are "how do I set this up", "why was this code written" and "why is this code the way it is" and usually those are much more efficiently conveyed person to person. At least until you get to be a big company.
For a small team, I would 100% agree those kinds of documentation are usually a liability. The problem is "I can't trust that the documentation is accurate or complete" and with AI, I still can't trust that it wrote accurate or complete documentation, or that anyone checked what it generated. So it's kind of worse than useless?
The LLM writes it with the purpose you gave it, to remember why it did things when it goes to change things later. The difference between humans and AI is that humans skip the document step because they think they can just remember everything, AI doesn’t have that luxury.
Just say the model uses the files to seed token state. Anthropomorphizing the thing is silly.
And no, you don't skip the documentation because you "think you can just remember everything". It's a tradeoff.
Documentation is not free to maintain (no, not even the AI version) and bad or inaccurate documentation is worse than none, because it wastes everyone's time.
You build a mental map of how the code is structured and where to find what you need, and you build a mental model of how the system works. Understanding, not memorization.
When prod goes down you really don't wanna be faffing about going "hey Alexa, what's a database index".
With apologies, and not GP, but this has been the same feedback I've personally seen on every single model release.
Whenever I discuss the problems that my peers and I have using these things, it's always something along the lines of "but model X.Y solves all that!", so I obediently try again, waste a huge amount of time, and come back to the conclusion that these things aren't great at generation, but they are fantastic at summarization and classification.
When I use them for those tasks, they have real value. For creation? Not so much.
I've stopped getting excited about the "but model X.Y!!" thing. Maybe they are improving? I just personally haven't seen it.
But according to the AI hypers, just like with every other tech hype that's died over the past 30 years, "I must just be doing it wrong".
A lot of people are consistently getting their low expectations disproven when it comes to progress in AI tooling. If you read back in my comment history, six months ago I was posting about how AI is overhyped BS. But I kept using it, and eventually new releases of models and tools solved most of the problems I had with them. If it hasn't happened for you yet, I expect it eventually will. Keep using the tools and models, follow their advancements, and I think you'll eventually get to the point where your needs are met.
Perfect was not the bar that was set. Elon can be the richest person in the world and a liar at the same time. It's about what kind of person lies about being one of the best gamers in the world when clearly they're not. This is of course not the only thing he has lied about, but it is possibly the pettiest. And possibly the stupidest, because the very people it was supposed to impress were going to find out near instantly, and now they despise him for it. Consider his foray into politics: it wasn't enough to sway the elections with a large sum of money, he also had to insert himself into the process. In addition to being the best gamer, he was trying to be the best politician; the result was a catastrophic failure. I'm still pretty convinced Adrian Dittmann is his sock puppet account and his attempt at being the best streamer as well. Done "anonymously" to make the case that he's not bootstrapping on his other successes, but not so anonymously as to be totally irrelevant.
I assume, then, that when you've had a massive positive influence on the world, employed hundreds of thousands, brought electric vehicles to the mainstream, built a rocket company and blanketed the entire planet in affordable, high-speed internet, etc... then you'll agree with the people on the internet who attack you because you claim to be a better video game player than you actually are.
In your hypothetical you are asking whether, if I were a liar, I would be OK with it. One would have to presume that I wouldn't be a liar if I weren't also OK with it. I am not a liar, and if I had somehow lied, I would not be OK with it and would hope others held me to a better standard.
No Lit Element or Lit or whatever it's branded now, no framework, just vanilla web components: lit-html in a render() method, class properties for reactivity, JSDoc for opt-in typing, using it where it makes sense but not junking up the code base where it's not needed...
No build step, no bundles, most things stay in light dom, so just normal CSS, no source maps, transpiling or wasted hours with framework version churn...
Such a wonderful and relaxing way to do modern web development.
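Something like this, as a minimal sketch of that stack (the CDN import is just one way to skip a build step; the counter element is a made-up example):

    // Vanilla custom element + lit-html in a render() method, light DOM,
    // a plain accessor for reactivity, JSDoc for opt-in typing.
    import { html, render as litRender } from "https://unpkg.com/lit-html?module";

    class ClickCounter extends HTMLElement {
      /** @type {number} */
      #count = 0;

      /** @param {number} value */
      set count(value) {
        this.#count = value;
        this.render(); // reactivity: re-render on property change
      }

      get count() {
        return this.#count;
      }

      connectedCallback() {
        this.render();
      }

      render() {
        // Light DOM, so normal page CSS applies -- no shadow-root ceremony.
        litRender(
          html`<button @click=${() => (this.count += 1)}>
            Clicked ${this.count} times
          </button>`,
          this
        );
      }
    }

    customElements.define("click-counter", ClickCounter);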
I love it. I've had a hard time convincing clients it's the best way to go, but any side project, recent or future, will always start with this frontend stack and nothing more until it's truly necessary.
This discussion made me happy; it's good to see more people enjoying the stack available in the browser. I think over time what devs enjoy using is what becomes mainstream; React was the same fresh breeze in the past.
> (We sure as hell aren’t there yet, but that’s a possibility)
What makes you think so?
Most of the stuff I've read, my personal experience with the models, and my understanding of how these things work all point to the same conclusion:
AI is great at summarization and classification, but totally unreliable with generation.
That basic unreliability seems fundamental to LLMs. I haven't seen much improvement in the big models, and a lot of the researchers I've read are theorizing that we're pretty close to maxing out what scaling training and inference will do.
This seems really vague. What does "totally unreliable" mean?
If you mean that a completely non-technical user can't vibe code a complex app and have it be performant, secure, defect-free, etc, then I agree with you. For now. Maybe for a long time, we'll see.
But right now, today, I'm a professional software engineer with two decades of experience and I use Cursor and Opus to reliably generate code that's on par with the quality of what I can write, at least 10x faster than I can write it. I use it to build new features, explore the codebase, refactor existing features, write documentation, help with server management and devops, debug tricky bugs, etc. It's not perfect, but it's better than most engineers I've worked with in my career. It's like pair programming with a savant who knows everything, some of which is a little out of date, who has intermediate level taste. With a tiny bit of steering, we're an incredibly productive duo.
I know the tech is here to stay, and the best parts of it are where it provides accessibility and tears down barriers to entry.
My work is to make sure that you don't need to reach for AI just because human typing speed is limited.
I love to think in terms of instruments versus assistants: an assistant is unpredictable but easy to use. It tries to guess what you want. An instrument is predictable but relatively harder to use. It has a skill curve and perhaps a skill cap. The purpose of an instrument is to directly amplify the expressive power of its user or player through predictable, delicately calibrated responses.
My experience has been much worse. Random functions with no purpose, awful architecture with no theory of mind, thousands of lines of comprehension debt, bugs that are bizarre and difficult to track down and reason about...
This coupled with the occasional time when it "gets it right".
Those moments make me feel like I saved time, but when I truly critically look at my productivity, I see a net decline overall, and I feel myself getting dumber and losing my ability to come up with creative solutions.
I have used Claude to write a lot of code. I am, however, already a programmer, one with ~25 years of experience. I've also led organizations of 2-200 people.
So while I don’t think the world I described exists today — one where non-programmers, with neither programming nor programmer-management experience, use these tools to build software — I don’t a priori disbelieve its possibility.
A teacher who needs to know which kids are struggling the most after a recent exam doesn't want to ask the AI 10 different ways, deal with hallucinations and frustrations, and send tech support a ticket only to receive a response that the MCP doesn't support that yet. They aren't going to be impressed.
They just want to see a menu of available reports, and if the one they want isn't there, move on to a different way of doing what they need.