AIs today basically fail because they've been trained to be aggressive editors of code. This makes the first steps feel amazing, gets you the most out of your first tokens, and helps win the evals focused on simple-to-moderate coding tasks.
Once they hit some threshold of project size, they overcommit, bite off too much, or don't recognize that they're missing some context. Agents help with this by letting them see their mistakes and try again, but eventually you hit some death loop.
I think someone around now-ish will realize that there should be two separate RLHF tunes--one for the initial prototype, and another for the hard engineering that follows. I doubt it's that hard to make a methodical, engineering-minded tune, but the emphasis has been on the flashy demos and the quick wins. Cursor and folks should be collecting this data as we speak, and I expect curmudgeonly agents to start appearing within a year.
Combine this with better feedback loops (e.g. MCP-accessible debuggers), the agent doing its own Stack Overflow/GitHub searches, and continued efficiency work driving token costs down by an order of magnitude every year or so, and agents will get very, very good, very fast.
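To make the "MCP-accessible debugger" idea concrete, here's a minimal sketch using the Python MCP SDK's FastMCP interface. The tool name and its run-and-report behavior are hypothetical, not an existing integration:

```python
# A minimal sketch of an MCP-exposed debugging tool, assuming the official
# Python MCP SDK (pip install "mcp[cli]"). The "run_script" tool is
# hypothetical -- a real debugger integration would also expose breakpoints,
# stepping, stack inspection, and so on.
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("debug-tools")

@mcp.tool()
def run_script(path: str, timeout: int = 30) -> str:
    """Run a Python script and return its stdout/stderr, tracebacks included."""
    try:
        result = subprocess.run(
            ["python", path],
            capture_output=True, text=True, timeout=timeout,
        )
        return result.stdout + result.stderr
    except subprocess.TimeoutExpired:
        return f"timed out after {timeout}s"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for an agent to call
```

Even a crude tool like this closes the loop: the agent edits, runs, reads the traceback, and edits again without a human pasting errors back and forth.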
In this atmosphere, humans will shortly exist only to fetch context for the agent that it can't fetch itself, either for security reasons or because no one's built the integration yet. And that will be short-lived, because the integrations will always get built.
So I guess there's a window for the "copilot" reality, but it feels very, very brief. I don't think agents will need humans for very long.
I agree--soon systems of agents will be trained on good engineering practices, not just locally good code. Already, when I work with current SOTA agents, I find myself basically pair programming with them: providing senior guidance and cutting off directions of development that will be dead ends, while they type all the actual code.
> It just needs to be able to deliver 80% of your output at 20% of your cost
What, though, is the actual cost? Are AI tools still loss leaders? What will happen if the AI bubble bursts and there is a severe shortage of software engineers? It is this uncertainty that people are having to deal with now.
Are you talking about incurring technical debt from AI-generated code that vastly outweighs the original low cost of using AI? I can't answer your question about how big one is compared to the other, but I have an idea that sidelines them both. I don't think it will matter: AI is so exceptionally good at generating just-good-enough spam, so exceptionally good at delivering a shitty minimum viable product, that it might warp the expectations and needs of consumers. The new shittiness becomes the new norm because it drowns out everything else around it with sheer volume. People around me prefer to generate their Dungeons and Dragons characters and cities with AI because it's good enough, even though it looks painfully bad and often doesn't completely fit their vision. Songs are being composed for small communities almost constantly at the moment because people don't want to bother going out of their way to find a real human composer.
It's easy, it's fast, and it gets the point across. Quality is only encouraged socially; people don't really care that much about it. Rather, people have 100 things they care about in their lives - an app for their groceries, a small game of their own idea to show to friends and play, a piece of music about that one time their group of friends got drunk and went into the mountains to fight a bear that turned out to be some old grandpa's cow. And only one or two of those are important enough to spend the effort finding a quality product.
For software - the places where hard, identifiable metrics matter (sensors, weapons, performance, networking, etc.) won't be replaced by AIs any time soon. But so many other types of product, imo, will be assimilated by the machine: all desktop apps for regular people, all websites for blogs, posting, and sharing, and probably most IoT-related things in your own home.
It is hilarious that the machines will first devour the industries that need feelings and ideas more than raw precision.
> Are you talking about incurring technical debt from AI-generated code
I think they're talking about the marginal and capital costs of running all those GPUs, as well as the capital costs of training foundation models. With GPT-(n+m) projected to require new nuclear power plants dedicated to GPU usage, there's a question of what the payback time will be and whether the marginal costs will exceed those of a human.
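One way to frame the payback question (every number below is an illustrative placeholder, not a projection):

```python
# Purely illustrative placeholders: when does the capital spent on training
# and datacenters pay back via cheaper-than-human inference?
capital = 10e9                   # hypothetical training + buildout cost ($)
human_hour = 100.0               # hypothetical fully loaded engineer cost ($/hr)
gpu_hour = 10.0                  # hypothetical marginal inference cost ($/hr)
hours_displaced_per_year = 50e6  # hypothetical engineer-hours displaced

savings_per_year = (human_hour - gpu_hour) * hours_displaced_per_year
print(f"payback: {capital / savings_per_year:.1f} years")  # ~2.2 years here
```

The whole bet hinges on the gap between the human-hour and GPU-hour costs; if marginal inference costs creep up toward human costs, the capital never pays back.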
What we get from the sun isn't energy as such. All the energy we receive has to be radiated away, or we'd be cooked. What we get is low-entropy energy that we dissipate to keep our local low-entropy systems going and growing.
What LLMs offer is second-hand low-entropy data. They feed off low-entropy human-generated data and give it back, but they inevitably raise the entropy of the total body of data. The more AI slop there is, the less useful work LLMs can extract from data.
The actual cost is fucking your customers over by cutting costs after being sold the ideology, not the tool (the tool is irrelevant). That hasn't changed, and this is just another hammer to make it worse. Customers and businesses will tire, and revenue will not make ends meet (hint: it doesn't now).
This hammer, however, goes stale very quickly unless you keep throwing megawatts and billions of dollars at it constantly. So when the bubble bursts, it will break all growth predictions instantly and cause a major collapse.
It’s going to be a meaty train wreck and a huge opportunity and I can’t wait. Reckon I’ll retire in 5 years.
This is a good question. The other question I haven't yet seen answered is how anyone will make money off an AGI, if such a thing is actually feasible in the near term. As it is, the business model for LLMs and generative models in general seems flimsy.
> My thesis is that AI will fragment the role of software engineering. It will become a role with a large pool of low-skilled coders who move forward with AI and a few specialists that will unblock those coders when stuck as well as address performance bottlenecks for production-scale.
This sounds like outsourcing on steroids. Joking aside, what software engineering becomes really depends on the growth of the industry. Many people thought most software engineering jobs would be outsourced to India and that software engineer as a profession would soon die in the US. It turned out that investment in software engineering far outpaced outsourcing, and as software engineers we were incredibly lucky to work in this field. The trend won't last forever, though. If it turns out that the growth areas of the world don't require much novel software engineering, then demand for this profession will dwindle and investment will diminish. As a result, our jobs will be outsourced or replaced by AI to a large degree, since AI is really good at slicing and dicing mature code for mature use cases.
I'm genuinely curious about the claim that headcount will shrink because AI makes engineers more efficient. I've seen it articulated here and elsewhere that a company will be able to employ fewer engineers because each will be more productive. What if companies instead kept the same number of engineers but massively out-produced what they used to? And I disagree with the analogy to typewriters replacing typists: typists had a roughly fixed amount of material that needed typing. Software is different - a company with a better or more feature-rich product can gain on its competitors.
Curious if anyone else thinks this. Maybe it's just optimism, but I've yet to be convinced that a company would want to merely maintain its productivity by trading engineers for AI if it had the same opportunity to grow its productivity with AI while maintaining headcount.
And to add on: isn't there some market dynamic we're avoiding with this example? If I'm an AI company and I really produced a principal-level engineer, why would I sell it for less than the labor market is willing to bear? Wouldn't I price it somewhat below the market, but not so dramatically below as to lose money?
You make a good point. The shrinking headcount is not necessarily tied to mass firings. It's more likely tied to 10M+ newly trained engineers entering the job market every year while only 50 positions open up.
With each recession, headcount will shrink at some companies and will not grow back to prior levels. Over time, the line trends downwards.
> It will become a role with a large pool of low-skilled coders who move forward with AI and a few specialists that will unblock those coders when stuck as well as address performance bottlenecks for production-scale.
You see this already in medicine. An anesthesiologist can oversee up to 6 concurrent cases, with NPs or CRNAs doing the actual work.
This only works for straightforward, medically uncomplicated cases. The more complicated cases (pregnancy, cancer, obesity, etc.) are still typically managed entirely by an MD/DO.
The results are controversial. Healthcare systems can save costs, but patient care is hit or miss.
I think about the reviewer problem. An AI can write 3,000 lines in less than a minute, but it might take me an hour to understand the architecture it's decided on.
There are a couple of possibilities here:
1. Agents become so powerful that a human can't conceivably keep up with them, and it becomes a drain on efficiency for any human to try. The only important things are whether the output fits the prompt's desired outcome and whether the creation is 'safe'. Safe can mean many things: will not crash, will not leak data, will not take over the world... Atlas Computing is one startup taking this view, ensuring an AI can only do 'safe' things as defined by some formal ontology/methods.
2. A human stays in the loop and tries to stay at least reasonably up to date on the code architecture. For this to work long term, the weak link is human understanding, which opens up interesting opportunities for AI-generated lessons, animations, and examples that get the human up to speed as fast as possible. If I see a very nice 3Blue1Brown-style animation generated by AI about how a piece of software functions, then I can probably start working with it more quickly than if I only had the code - at least if the animation links very closely with the code itself.
The article is right: the low-tier tech jobs will likely not exist in a few years. These jobs mostly involve gluing APIs together.
However, I think the assumed usefulness of humans can be slashed even further. Right now, LLMs interface with languages and systems that were abstracted to a human level of understanding. There's an empty spot for new languages and frameworks with thousands of primitive "patterns", each represented by a unique symbol, that an LLM could put together much more quickly than a human could.
LLMs have monstrously high associative horizons--meaning the way they segment information can use many more "boxes"/classifications/names, while humans top out at some fairly low value but are able to generate new categories on demand.
Instead of faffing about with a thousand examples to get some particular indentation right, it could be something akin to a spoken language but far more logical (or perhaps like a language with incredibly long compound words).
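A toy sketch of what that could look like (every symbol and expansion here is invented for illustration):

```python
# Toy illustration only: a token-dense symbolic layer where one symbol
# stands for a whole boilerplate pattern an LLM would otherwise emit
# verbatim. All symbols and expansions are made up.
PATTERNS = {
    "⊕": "def {name}({args}):\n    {body}",        # function definition
    "⟲": "for {var} in {iterable}:\n    {body}",   # iteration
    "⇒": "return {expr}",                           # return statement
}

def expand(symbol: str, **slots: str) -> str:
    """Expand a single dense symbol into its verbose human-readable pattern."""
    return PATTERNS[symbol].format(**slots)

print(expand("⟲", var="item", iterable="items", body="process(item)"))
# for item in items:
#     process(item)
```

The point is that the LLM would emit one symbol per pattern instead of dozens of tokens of boilerplate, with a deterministic expander handling the human-readable form.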
Removing all computer programmers would require a bottom-up unification of the hardware stack with software, and that's an almost impossible ask by today's standards. You'd need to start over and get rid of old systems in many areas.
This post is spot on. It will be incredibly lucrative to be one of the "ones who knows" in the relatively near future.
What's scary is what happens when those types cease to exist (due to retirement or age) and all you're left with is the semi-coders described here. There's a similar problem with outdated technologies that few-to-no developers understand anymore.
> It will be incredibly lucrative to be one of the "ones who knows" in the relatively near future.
I'm not so optimistic about this if AI prevails. Think about the chip industry: it's an incredibly challenging field where only the top few truly understand the art of chip design, yet even the top engineers don't necessarily have packages as "lucrative" as software engineers in the same percentile, let alone at the industry average.
In the end, it is supply and demand that determines our packages. AI can suppress demand to the point that the entire industry needs fewer senior engineers than it does now, and we will be paid less accordingly.
You're right. I think it will become less of an employee-employer relationship; those types will be hired guns on retainer (i.e., you don't need a lawyer until you do, but it helps to have one on retainer if you're a big entity).
> It just needs to be able to deliver 80% of your output at 20% of your cost
Yeah. The average cost of a senior engineer at an IPO'd company in the Bay Area is about $500K a year (salary, stock grants, and all the other company expenses). That's enough to buy 500 Cursor Business licenses for 2 years. It's a no-brainer that companies will be trying to figure out how to replace as many engineers as possible with AI licenses.
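Back-of-envelope check (assuming Cursor Business at roughly $40/user/month, which is the price those numbers imply):

```python
# Back-of-envelope: one fully loaded senior engineer vs. Cursor seats.
# The ~$40/user/month Business price is an assumption, not a quote.
engineer_per_year = 500_000           # fully loaded cost from the comment above
seat_per_year = 40 * 12               # $480 per seat per year
seats, years = 500, 2
print(seats * seat_per_year * years)  # 480000 -- about one engineer-year
```

So one engineer-year of fully loaded cost does roughly cover 500 seats for two years under that price assumption.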
> In the last century, typesetting used to be a big industry. It once required specialized skills and machinery. You could make great money being one. However, in the 1980s, desktop publishing software suddenly enabled anyone with a computer to design and print content. This democratized publishing led to a decline in traditional print jobs and a rise in graphic design and DIY publishing in its stead.
A big difference here is that typesetting can be done by individuals as an ongoing task accompanying writing, so software did indeed replace that profession. Software engineering, on the other hand, is what we do all day long; it's a stand-alone profession. A more relatable historical example would be automation in the chip industry: with CAD and automation, the chip industry requires far fewer engineers, even though chip design and manufacturing are still very challenging. But then, that probably has more to do with limited investment (and therefore limited demand for talent) than with how much automation there is in the field.