There's a huge difference between possible and likely.
Maybe I'm pessimistic, but there's a world of difference between a practice that encourages bugs and one that only lets them through when there is negligence. The accountability problem needs to be addressed before we compare this to self-driving cars outperforming humans. On an errors-per-line basis, I don't think LLMs are on par with humans yet.
Knowing your system components’ various error rates and compensating for them has always been the job. This includes both the software itself and the engineers working on it.
The only difference is that there is now a new high-throughput, high-error (at least for now) component editing the software.
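To make that concrete, here's a toy back-of-the-envelope sketch of what "compensating for a component's error rate" means for a review pipeline. All the rates are invented for illustration, and `residual_bug_rate` is a hypothetical helper, not anyone's published methodology:

```python
# Toy model: a generator (human or LLM) introduces bugs at some rate,
# and a review step catches a fraction of them. All numbers below are
# invented for illustration, not measurements.

def residual_bug_rate(bugs_per_kloc: float, review_catch_rate: float) -> float:
    """Bugs per 1000 lines that survive review."""
    return bugs_per_kloc * (1.0 - review_catch_rate)

human = residual_bug_rate(bugs_per_kloc=15, review_catch_rate=0.60)  # 6.0
llm = residual_bug_rate(bugs_per_kloc=40, review_catch_rate=0.60)    # 16.0

# Same review process, higher input error rate -> worse output.
# To bring the LLM pipeline down to the human baseline, review must
# compensate: solve 40 * (1 - c) = 6  ->  c = 0.85
needed = 1 - human / 40
print(f"human: {human:.1f}/kloc, llm: {llm:.1f}/kloc, "
      f"review catch rate needed to match: {needed:.2f}")
```

The specific numbers don't matter; the point is that "knowing error rates and compensating" is a quantitative claim about the whole pipeline, not just about the component doing the typing.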
Yeah, it'll be interesting to see whether blaming LLMs becomes as acceptable as "caused by a technical fault" as a way to deflect responsibility for what is ultimately a programmer's output.
Perhaps that's what led to a decline in accountability and quality.
The decline in accountability has been in progress for decades, so LLMs obviously can't have caused it.
They might of course accelerate it if used unwisely, but the solution to that is arguably to use them wisely, not to completely shun them because "think of the craft and the jobs".
And yes, in some contexts, using them wisely might well mean not using them at all. I'd just be surprised if that were a reasonable default position in many domains in 5-10 years.
We've been in that era for at least two decades now. We've only just now invented the steam engine.
> I wonder how long it takes until this comes all crashing down.
At least one such artifact of craft and beauty already literally crashed two airplanes. Bad engineering is possible with and without LLMs.