Protip: Any time you read "AI" in a news article, substitute the phrase "faster, more numerous, and confidently incorrect." I don't think we need "confidently incorrect" weather models. Who is asking for this?
These models actually outperform traditional methods on many fronts, often including accuracy. They are technically generative AI models, but they're definitely not LLMs.
The actual act of typing code into a text editor and building it could be the least interesting and least valuable part of software development. A developer who sees their job as "writing code" or a company leader who sees engineers' jobs as "writing code" is totally missing where the value is created.
Yes, there is artistry, craftsmanship, and "beautiful code" which shouldn't be overlooked. But I believe that beautiful code comes from solid ideas, and that ugly code comes from flawed ideas. So, as long as the (human-constructed) idea is good, the code (whether it is human-typed or AI-generated) should end up beautiful.
Where's the beautiful human-generated code? There's the IOCCC, but that's the only competition judged on the code itself, and it's not even a beauty pageant. There's some demoscene stuff, which is more of a golf thing. There are random one-offs, like not-Carmack's inverse square root, or Duff's device, but other than that, where are the good code beauty pageants?
> now that the Web had become the de-facto standard application platform.
I feel like we can continue to resist this, although I admit it's getting more and more futile every year. It's like trying to hold back the tide. I personally don't want the web to be an application platform. The web is for browsing web pages. I have an application platform on my computer already.
I see your point. But there is an objective need for some common ground to run applications on: something with zero install friction and proper sandbox isolation.
Because the alternative isn't "yes, we are providing Linux and macOS-arm64 binaries", the alternative is "here is your Win32 blob that is broken on Wine because screw you, that's why" or "here is a .jar with a horrible AWT frontend that is also broken unless you run it under an ancient JRE" - and that's on the user's side; on the developer's side it's even worse. I feel that the web becoming an application platform was a net negative for the web, but positive for every other platform (and for users and developers as well). Yes, it makes the web crappy, but we need some crappy platform where all the crap goes - and at least the browser contains the crap well.
Or we can accept it, build a good access control system in an app platform for once, and add the few parts that the web standards are still missing so it becomes a good platform.
And none of that requires that we give up on an entire facade focused on reading text.
But if Mozilla focuses on resisting, they can't do that, and honestly, nobody else out there will.
Why give most apps even one chance? For almost every app, I have zero interest in ever getting a notification from it. I see no reason to give them an opportunity to annoy me even once.
Honestly, because I won't remember to go into the settings page and disable it. When a notification comes in, there's a quick route to disable it forever; otherwise I have to go digging preemptively.
“The reason is that, in other fields [than software], people have to deal with the perversity of matter. [When] you are designing circuits or cars or chemicals, you have to face the fact that these physical substances will do what they do, not what they are supposed to do. We in software don't have that problem, and that makes it tremendously easier. We are designing a collection of idealized mathematical parts which have definitions. They do exactly what they are defined to do.
And so there are many problems we [programmers] don't have. For instance, if we put an ‘if’ statement inside of a ‘while’ statement, we don't have to worry about whether the ‘if’ statement can get enough power to run at the speed it's going to run. We don't have to worry about whether it will run at a speed that generates radio frequency interference and induces wrong values in some other parts of the data. We don't have to worry about whether it will loop at a speed that causes a resonance and eventually the ‘if’ statement will vibrate against the ‘while’ statement and one of them will crack. We don't have to worry that chemicals in the environment will get into the boundary between the if statement and the while statement and corrode them, and cause a bad connection. We don't have to worry that other chemicals will get on them and cause a short-circuit. We don't have to worry about whether the heat can be dissipated from this ‘if’ statement through the surrounding ‘while’ statement. We don't have to worry about whether the ‘while’ statement would cause so much voltage drop that the ‘if’ statement won't function correctly. When you look at the value of a variable you don't have to worry about whether you've referenced that variable so many times that you exceed the fan-out limit. You don't have to worry about how much capacitance there is in a certain variable and how much time it will take to store the value in it.
All these things are defined a certain way; the system is defined to function in a certain way, and it always does. The physical computer might malfunction, but that's not the program's fault. So, because of all these problems we don't have to deal with, our field is tremendously easier.”
Counterpoint: I have definitely taken them into consideration when designing my backup script. It's the reason I hash my files before transferring, after transferring, and at periodic intervals.
And if you're designing a Hardware Security Module, as another example, I hope that you've taken at least rowhammer into consideration.
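A minimal sketch of that hash-before, hash-after, hash-periodically idea for the backup script, assuming a plain source/copy file pair; FNV-1a is used here only to keep the example dependency-free, and a real script would use a cryptographic hash such as SHA-256:

```cpp
#include <cstdint>
#include <fstream>
#include <iostream>
#include <stdexcept>
#include <string>

// 64-bit FNV-1a over the raw bytes of a file.
std::uint64_t hash_file(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    if (!in) throw std::runtime_error("cannot open " + path);
    std::uint64_t h = 0xcbf29ce484222325ull;    // FNV offset basis
    char buf[4096];
    auto mix = [&](std::streamsize n) {
        for (std::streamsize i = 0; i < n; ++i) {
            h ^= static_cast<unsigned char>(buf[i]);
            h *= 0x100000001b3ull;              // FNV prime
        }
    };
    while (in.read(buf, sizeof buf)) mix(sizeof buf);
    mix(in.gcount());                           // final partial chunk
    return h;
}

int main(int argc, char** argv) {
    if (argc != 3) {
        std::cerr << "usage: verify <source> <copy>\n";
        return 2;
    }
    // Hash the file as it existed before the transfer (source) and the
    // transferred copy; rerun periodically against stored hashes to
    // catch silent corruption.
    bool ok = hash_file(argv[1]) == hash_file(argv[2]);
    std::cout << (ok ? "match\n" : "MISMATCH\n");
    return ok ? 0 : 1;
}
```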
He makes a valid distinction, in a very specific sense. As long as we understand a program correctly, then we understand its behavior completely [0]. The same cannot be said of spherical cows (which, btw, can be modeled by computers, which means programs inherit the problems of the model, in some sense, and all programs model something).
However, that "as long as" is doing quite a bit of work. In practice, we rarely have a perfect grasp of a real world program. In practice, there is divergence between what we think a program does and what it actually does, gaps in our knowledge, and so on. Naturally, this problem also afflicts mathematical approximations of physical systems.
[0] And even this is not entirely true. Think of a concurrent program. Race conditions can produce all sorts of weird results that are unpredictable. Perfect knowledge of the program will not tell you what the result will be.
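On that footnote, a minimal sketch of how a data race defeats "perfect knowledge of the program": two threads do an unsynchronized read-modify-write on a shared counter, so the final value depends on scheduling (and is formally undefined behavior in C++), not just on the program text:

```cpp
#include <iostream>
#include <thread>

int counter = 0;  // deliberately not std::atomic

void bump() {
    for (int i = 0; i < 1'000'000; ++i)
        ++counter;  // load, add, store: increments from the other thread can be lost
}

int main() {
    std::thread a(bump), b(bump);
    a.join();
    b.join();
    // "Should" print 2000000, but typically prints less, and a
    // different value on each run.
    std::cout << counter << '\n';
}
```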
The first one is the best one, but it only works well with an educated population. Unfortunately, many democratic countries have lost that key ingredient.
> a lot of things I consider errors based on the "middle-of-the-road" code style that it has picked up from all the code it has ingested. For instance, I use a policy of "if you don't create invalid data, you won't have to deal with invalid data"
Yea, this is something I've also noticed but it never frustrated me to the point where I wanted to write about it. Playing around with Claude, I noticed it has been trained to code very defensively. Null checks everywhere. Data validation everywhere (regardless of whether the input was created by the user, or under the tight control of the developer). "If" tests for things that will never happen. It's kind of a corporate "safe" style you train junior programmers to do in order to keep them from wrecking things too badly, but when you know what you're doing, it's just cruft.
For example, it loves to test all my C++ class member variables for null, even though there is no code path that creates an incomplete class instance, and I throw if construction fails. Yet it still happily whistles along, checking everything for null in every method, unless I correct it.
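A hypothetical illustration of the contrast (class and member names invented): the constructor enforces the invariant once and throws on failure, which makes the per-method null checks the model likes to add pure cruft:

```cpp
#include <memory>
#include <stdexcept>

struct Engine { int power() const { return 42; } };

class Widget {
    std::unique_ptr<Engine> engine_;
public:
    explicit Widget(std::unique_ptr<Engine> e) : engine_(std::move(e)) {
        if (!engine_) throw std::invalid_argument("Widget needs an Engine");
        // From here on, engine_ is never null; that's the class invariant.
    }

    // Invariant-based style: no check needed, a violation is a bug elsewhere.
    int power() const { return engine_->power(); }

    // The "defensive" style the model tends to generate:
    int power_defensive() const {
        if (!engine_) return 0;  // dead branch that hides bugs instead of surfacing them
        return engine_->power();
    }
};
```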
I remember that article. It's wild the extent to which "anti-fraud" has captured companies, destroyed their UX, and seemingly directs all their actions. And when you criticize it, they blame KYC/AML and cry and act as though they have no agency. A very small tail is wagging the dog!
Tail size is fraud budget (loss) and appetite (loss+mitigation costs). The math is straightforward to determine how much fraud you're willing to eat on an annual basis. They still have customers and revenue, right? So not terribly wild imho.
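For what it's worth, a back-of-the-envelope version of that budget/appetite math, with every number invented purely for illustration:

```cpp
#include <iostream>

int main() {
    double annual_revenue  = 50'000'000.0;  // hypothetical
    double fraud_rate      = 0.004;         // fraction of revenue lost to fraud
    double mitigation_cost = 250'000.0;     // anti-fraud tooling, reviews, staff

    double fraud_budget   = annual_revenue * fraud_rate;     // the loss you eat
    double fraud_appetite = fraud_budget + mitigation_cost;  // loss + mitigation costs

    std::cout << "fraud budget (loss):    " << fraud_budget   << '\n'
              << "fraud appetite (total): " << fraud_appetite << '\n';
    // If mitigation (plus the customers it drives away) costs more than
    // the fraud it prevents, the tail really is wagging the dog.
}
```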
I feel like this becomes kind of unacceptable as soon as you take on your first developer employee. 10K LOC changes from the CTO is fine when it's only the CTO working on the project.
Hell, for my hobby projects, I try to keep individual commits under 50-100 lines of code.
Templates and templating languages are still a thing. Source generators are a thing. Languages that support macros exist. Metaprogramming is always an option. Systems that write systems…
If these AIs are so smart, why the giant LOCs?
Sure, it’s cheaper today than yesterday to write out boilerplate, but programming is about eliminating boilerplate and using more powerful abstractions. It’s easy to save time doing lots of repetitive nonsense, but stopping the nonsense should be the point.
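A toy example of that "abstraction over boilerplate" point, with invented names: one function template replaces the pile of near-identical per-type helpers that an LLM is happy to churn out and keep in sync by hand:

```cpp
#include <algorithm>
#include <iostream>

// One definition instead of separate clamp-and-warn helpers for int,
// double, long, float, ...
template <typename T>
T clamped(T value, T lo, T hi) {
    T out = std::clamp(value, lo, hi);
    if (out != value)
        std::cerr << "clamped " << value << " to " << out << '\n';
    return out;
}

int main() {
    std::cout << clamped(17, 0, 10) << ' ' << clamped(0.3, 0.0, 1.0) << '\n';
}
```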
Chargeback has become the only way to get any justice out of companies anymore. It used to be the last resort--the point where you have tried everything and customer support won't budge. Now it's sometimes your only option because customer support doesn't even exist.
I swear, I've probably done a single chargeback from all of 1995-2015, yet I've done at least five from 2015-2025.