
> assign work to an LLM

This is just not happening anywhere around me. I don't know why it keeps getting repeated in every one of these discussions.

Every software engineer I know is using LLM tools, but every team around me is still hiring new developers. Zero firing is happening in any circle near me due to LLMs.

LLMs cannot do unsupervised work, period. They do not replace developers. They replace Stack Overflow and Google.


I can tell you where I'm seeing it change things for sure: the early stages. Based on what I'm seeing, if you wanted to work at a startup I advise or invest in, it might be more difficult than it was 5 years ago, because there is a slightly different calculus at the early stage.

At seed/pre-seed, your go-to-market and discovery processes are often either not working well yet, nonexistent, or decoupled from product and engineering. The goal, obviously, is to bring it all together over time into a complete system (a business). As long as I've been around early-stage startups there has been a tension between engineering and growth over budget division, and the dance of placing resources across them so that they come together well is quite difficult.

What I'm seeing now is: engineering could do with being a bit faster, but too much faster and they're going to be sitting around waiting for the business teams to get their shit together. Where before they would look at hiring a junior, now they'll just buy some AI tools, or invest more time in AI scaffolding, etc., allowing them to go a little bit faster, with the understanding that it's not as fast as hiring a junior engineer. I noticed this trend starting in the spring this year, and I've been watching to see whether the teams who did this then "graduate" out of it to hiring a junior. So far only one team has hired, and it seems they skipped junior and went straight to a more senior dev.

Around 80% of my work is easy, while the remaining 20% is very hard. At this stage the hard stuff is far outside the capability of LLMs, but the easy stuff is very much within their capabilities. I used to hire contractors to help with that 80%, but now I use LLMs instead. It's far cheaper, better quality, and zero hassle. That's 3 junior/mid-level jobs that are gone now. Since the hard stuff is combinatorially complex, I think by the time LLMs are good enough to do it, they'll probably be good enough to do just about everything, and we'll be living in an entirely different world.
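
To put the "far cheaper" claim in perspective, here is a back-of-the-envelope sketch in Python. Every figure is a hypothetical placeholder, not an actual contractor rate or real LLM pricing:

    # All figures are hypothetical placeholders for illustration,
    # not real contractor rates or actual LLM pricing.
    HOURS_PER_MONTH = 160

    contractor_rate = 50    # $/hr, hypothetical mid-level contractor
    contractors = 3
    llm_subscription = 200  # $/month, hypothetical tooling seat

    contractor_cost = contractor_rate * HOURS_PER_MONTH * contractors
    llm_cost = llm_subscription

    print(f"contractors: ${contractor_cost:,}/month")   # contractors: $24,000/month
    print(f"LLM tooling: ${llm_cost:,}/month")          # LLM tooling: $200/month
    print(f"ratio: {contractor_cost / llm_cost:.0f}x")  # ratio: 120x

Even if those placeholder numbers are off by an order of magnitude, the gap stays wide enough to explain the decision.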

Exactly this. I lead cloud consulting + app dev projects. Before, I would have staffed my projects with at least me leading them, doing the project management + stakeholder meetings and some of the work, and brought a couple of others in to do the grunt work. Now with gen AI, even just using ChatGPT and feeding it a lot of context (diagrams I put together, statements of work, etc.), I can do it all myself without having to go through the coordination effort of working with two other people.

On the other hand, when I was staffed to lead a project that did have another senior developer one level below me, I tried to split up the actual work, but once we started refining the project it became clear that splitting it would be a coordination nightmare: he could just use Claude Code, and it would make all of the modifications needed for a feature, from the front-end work to the backend APIs to the Terraform and the deployment scripts.

I would have actually slowed him down.


Today's high-end LLMs can do a lot of unsupervised work. Debug iteration is at least junior level. Audio and visual output verification is still very weak (e.g., verifying web page layout and component reactivity). Once a visual model is good enough to look at the screen pixels and understand them, it will instantly replace junior devs. Currently, as long as the output is pure text, all the new LLMs can iterate flawlessly and solve problems against it. A new backend from scratch is completely doable with vibe coding now, with some exceptions around race conditions and legacy code comprehension.

> Once the visual model is good enough to look at the screen pixels and understand, it will instantly replace junior devs

Curious if you gave Antigravity a try yet? It auto-launches a browser and you can watch it move the mouse and click around. It's able to review what it sees and iterate or report success according to your specs. It takes screen recordings and saves them as an artifact for you to verify.

I only tried some simple things with it so far but it worked well.


Right, and as a hiring manager, I'm more inclined to hire junior devs since they eventually learn the intricacies of the business, whereas LLMs are limited in that capacity.

I'd rather babysit a junior dev and give them some work to do until they can stand on their own than babysit an LLM indefinitely. That just sounds like more work for me.


You're mostly right, but very few teams are hiring in the grand scheme of things. The job market is not friendly for devs right now (not saying that's related to AI; it's just a bad market right now).

Completely agree. I use LLMs like I use Stack Overflow, except this time I get straight to the answer and no one closes my question and marks it as a duplicate, or stupid.

I don't want it integrated into my IDE; I'd rather just give it the information it needs to get me my result. But yeah, just another Google or Stack Overflow.


Well, your anecdote is clearly at odds with absolutely all of the macroeconomic data.

Actually, it's not. The job market changed about a year or so before ChatGPT.

The idea that there is a hiring spree of developers doesn't jibe with reality.

It's me. I'm the LM having work assigned to me that a junior dev used to get. I'm actually just a highly proficient BA who has almost always read code, followed and understood news about software development here and on /., but generally avoided writing code out of sheer laziness. In those moments of decision where I actually considered shifting to coding as my profession, it was always more convenient to find something easier and more lucrative.

But here I am now. After filling in for lazy architects above me for 20 years, while guiding developers to follow standards and build good habits and learning important lessons from talking to senior devs along the way, guess what: I can magically do it myself now. The LM is the junior developer that I used to painstakingly explain the design to, and it screws it up half as much as the braindead and uncaring junior dev used to. Maybe I'm not a typical case, but it shows a hint of where things might be going. This will only get easier as the tools become more capable and mature into something more reliable.


> It's me. I'm the LM

Okay.



They mean LLM

> This is just not happening anywhere around me.

Don't worry about where AI is today; worry about where it will be in 5-10 years. AI is brand-new, bleeding-edge technology right now, and adoption always takes time, especially when the integration with IDEs and such is even more bleeding-edge than the underlying AI systems themselves.

And speaking about the future, I wouldn't just worry about it replacing the programmer, I'd worry about it replacing the program. The future we are heading into might be one where the AI is your OS. If you need an app to do something, you can just make it up on the spot, a lot of classic programs will no longer need to exist.


> Don't worry about where AI is today, worry about where it will be in 5-10 years.

And where will it be in 5-10 years?

Because right now, the trajectory looks like "right about where it is today, with maybe some better integrations".

Yes, LLMs experienced a period of explosive growth over the past 5-8 years or so. But then they hit diminishing returns, and they hit them hard. Right now, it looks like a veritable plateau.

If we want the difference between now and 5-10 years from now to look anything like the difference between 5-10 years ago and now, we're going to need a new breakthrough. And those don't come on command.


Right about where it is today with better integrations?

One year is the difference between Sonnet 3.5 and Opus 4.5. We're not hitting diminishing returns yet (mostly because of exponential capex scaling, but still). We're already committed to ~3 years of the current trajectory, which means we can expect similar performance boosts year over year.

The key to keep in mind is that LLMs are a giant bag of capabilities, and just because we hit diminishing returns on one capability, that doesn't say much if anything about your ability to scale other capabilities.


You buried the lede with “exponential capex scaling”. How is this technology not like oil extraction?

The bulk of that capex is chips, and those chips are straight up depreciating assets.


The depreciation schedule is debatable (and that's currently a big issue!). We've been depreciating based on the availability of next-generation chips rather than on useful life, but I've seen 8-year-old research clusters with low replacement rates. If we stopped spending on infra now, the installed base would still give us an engine well into the next decade.
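
To see why the schedule matters so much, here is a minimal straight-line sketch in Python. The $10B capex figure and both schedules are illustrative assumptions, not any company's actual accounting:

    # Hypothetical: $10B of accelerator capex, straight-line depreciation.
    # Neither schedule reflects any company's actual accounting policy.
    CAPEX = 10_000_000_000

    def annual_straight_line(cost: float, useful_life_years: int) -> float:
        """Equal depreciation expense for each year of assumed useful life."""
        return cost / useful_life_years

    for life in (3, 8):  # "next-gen chip cadence" vs. "observed useful life"
        print(f"{life}-year schedule: ${annual_straight_line(CAPEX, life) / 1e9:.2f}B/year")
        # 3-year schedule: $3.33B/year
        # 8-year schedule: $1.25B/year

Stretching the assumed life from 3 years to 8 cuts the annual expense by more than half, which is exactly why the schedule is so hotly debated.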

> We're already committed to ~3 years of the current trajectory

How do you mean committed?


Better integrations won't do anything to fix the fact that these tools are, by their mathematical nature, unreliable, and always will be.

So are people.

But humans have vastly lower error rates than LLMs. And in a multi-step process, those error rates compound; when that happens, you end up with 50/50 odds or worse.
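
A quick sketch of that compounding, in Python (the per-step success rates are illustrative assumptions, not measured error rates for humans or LLMs):

    # Illustrative per-step success rates, not measured figures.
    def chain_success(per_step_success: float, steps: int) -> float:
        """Probability that every step in an unsupervised chain succeeds."""
        return per_step_success ** steps

    for p in (0.99, 0.95, 0.90):
        summary = ", ".join(f"{n} steps -> {chain_success(p, n):.0%}" for n in (5, 10, 20))
        print(f"per-step {p:.0%}: {summary}")

    # per-step 99%: 5 steps -> 95%, 10 steps -> 90%, 20 steps -> 82%
    # per-step 95%: 5 steps -> 77%, 10 steps -> 60%, 20 steps -> 36%
    # per-step 90%: 5 steps -> 59%, 10 steps -> 35%, 20 steps -> 12%

At 95% per step, an unsupervised chain drops below a coin flip by step 14; that's where the "50/50 or worse" comes from.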

And, more importantly, a given human can, and usually will, learn from their mistakes and do better in a reasonably consistent pattern.

And when humans do make mistakes, they're also in patterns that are fairly predictable and easy for other humans to understand, because we make mistakes due to a few different well-known categories of errors of thought and behavior.

LLMs, meanwhile, make mistakes simply because they happen to have randomly generated incorrect text that time. Or, to look at it another way, they get things right simply because they happen to have randomly generated correct text that time.

Individual humans can be highly reliable. Humans can consciously make tradeoffs between speed and reliability. Individual unreliable humans can become more reliable through time and effort.

None of these are true of LLMs.


It's a trope that people say this, and then someone points out that while the comment was being drafted, another model or product was released that took a substantial step up in problem-solving power.

I use LLMs all day every day. There is no plateau. Every generation of models has resulted in substantial gains in capability. The types of tasks (in both complexity and scope) that I can assign to an LLM with high confidence are frankly absurd, and I could not even have dreamed of this eight months ago.


