Highly recommend checking out Andreas' other work at https://ertdfgcvb.xyz/. Probably one of my favorite websites of all time; his body of work is what complete commitment to an idea looks like. A couple of cool hardware projects in there too.
Can you share an example of what you're talking about in PHP 8.5? On the linked web page, the only code pattern that looks remotely complicated to me is the following:
#[SkipDiscovery(static function (Container $container): bool {
    return ! $container->get(Application::class) instanceof ConsoleApplication;
})]
final class BlogPostEventHandlers
{ /* … */ }
Nice to see this passed around on Hacker News. I think the whole concept of lenses is super cool and useful, but it has suffered from the usual Haskellification problem of being presented in an unnecessarily convoluted way.
I think Accessors.jl has a quite nice and usable implementation of lenses. It's something I use a lot, even in code where I'm working with a lot of mutable data, because it's nice to localize and have exact control over what gets mutated and when (and I often find myself storing some pretty complex immutable data in 'simpler' mutable containers).
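For anyone who hasn't met the concept: this is not Accessors.jl (that's a Julia package, and none of the names below are its API), just a minimal Python sketch of what a lens is, a getter paired with a setter that returns an updated copy instead of mutating, with the two composing so you can focus deep inside immutable data.

```python
from dataclasses import dataclass, replace
from typing import Any, Callable


@dataclass(frozen=True)
class Lens:
    """A composable pair of getter and immutable setter."""
    get: Callable[[Any], Any]
    set: Callable[[Any, Any], Any]  # set(obj, value) -> new obj; old obj untouched

    def then(self, inner: "Lens") -> "Lens":
        # Compose: focus on a part, then on a part of that part.
        return Lens(
            get=lambda obj: inner.get(self.get(obj)),
            set=lambda obj, v: self.set(obj, inner.set(self.get(obj), v)),
        )


@dataclass(frozen=True)
class Point:
    x: float
    y: float


@dataclass(frozen=True)
class Particle:
    pos: Point
    mass: float


pos_lens = Lens(get=lambda p: p.pos, set=lambda p, v: replace(p, pos=v))
x_lens = Lens(get=lambda pt: pt.x, set=lambda pt, v: replace(pt, x=v))
pos_x = pos_lens.then(x_lens)  # focuses on particle.pos.x

p1 = Particle(Point(1.0, 2.0), mass=3.0)
p2 = pos_x.set(p1, 10.0)          # new Particle with pos.x updated
print(pos_x.get(p2), p1.pos.x)    # 10.0 1.0  (original untouched)
```

The nice part is exactly what the comment above describes: the "what gets changed" is pinned down by one small, composable value, and everything else stays immutable.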
Very cool. tl;dw: an inverted triple pendulum has 2^3 = 8 equilibria, since each arm of the pendulum can be either up or down (naturally, all but one of the equilibria are unstable), and this controller is able to make all 8*7 = 56 transitions between them.
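The counting is easy to sanity-check with a throwaway Python snippet (the up/down encoding is just for illustration):

```python
from itertools import permutations, product

# Each of the three links is either down (0) or up (1).
equilibria = list(product((0, 1), repeat=3))      # 2**3 = 8 configurations
transitions = list(permutations(equilibria, 2))   # ordered pairs: 8 * 7 = 56
print(len(equilibria), len(transitions))          # -> 8 56
```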
Control theory is one of those things that shouldn't possibly work, yet here we are.
Yep, and it doesn't bloat the context unnecessarily. The agent can call --help when it needs it. Just imagine a kubectl MCP with every command exposed as an individual tool; it doesn't make any sense whatsoever.
Here's some conceptual background on how and why HTTP/3 came to be (recollected from memory):
HTTP/1.0 was built primarily as a textual request-response protocol on top of TCP, which was a natural fit because it guarantees reliable byte-stream semantics. The usual pattern was to open a TCP connection, exchange a single request/response pair, and close it.
As websites grew more complex, a web page was no longer just one document but a collection of resources stitched together into a main document. Many of these resources came from the same source, so HTTP/1.1 came along with one main optimisation — the ability to reuse a connection for multiple resources using Keep Alive semantics.
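For illustration, here is a persistent HTTP/1.1 connection using only Python's standard library; the host and paths are placeholders, and the point is simply that all three requests ride on one TCP (and TLS) handshake:

```python
import http.client

# One TCP + TLS handshake, reused for several request/response pairs.
conn = http.client.HTTPSConnection("example.org")
for path in ("/", "/style.css", "/logo.png"):   # placeholder resources
    conn.request("GET", path)
    resp = conn.getresponse()
    body = resp.read()  # body must be drained before the connection can be reused
    print(path, resp.status, len(body))
conn.close()
```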
This was important because TCP connections and TLS (nee SSL) handshakes took many round trips to get established and up to optimal transmission speed. Latency is one thing that cannot be optimised by adding more hardware, because it's a function of physical distance and network topology.
HTTP/2 came along as a way to improve performance for dynamic applications that were relying more and more on continuous bi-directional data exchange and not just one-and-done resource downloads. Two of its biggest advancements were faster (fewer round-trips) TLS negotiation and the concept of multiple streams over the same TCP connection.
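A client-side illustration of stream multiplexing, assuming the third-party httpx library installed with its HTTP/2 extra (pip install "httpx[http2]") and a server that actually advertises HTTP/2; otherwise it quietly falls back to HTTP/1.1. The URLs are placeholders:

```python
import asyncio
import httpx


async def main() -> None:
    # One connection, several concurrent streams.
    async with httpx.AsyncClient(http2=True) as client:
        urls = [f"https://example.org/resource/{i}" for i in range(5)]
        responses = await asyncio.gather(*(client.get(u) for u in urls))
        for r in responses:
            print(r.http_version, r.status_code, r.url)


asyncio.run(main())
```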
HTTP/2 fixed pretty much everything that could be fixed with HTTP performance and semantics for contemporary connected applications but it was still a protocol that worked over TCP. TCP is really good when you have a generally stable physical network (think wired connections) but it performs really badly with frequent interruptions (think Wi-Fi with handoffs and mobile networks).
Besides the issues with connection re-establishment, there was also the challenge of "head-of-line blocking": since TCP has no awareness of the multiplexed HTTP/2 streams, a single dropped packet blocks everything, instead of blocking only the stream the packet belonged to. This renders HTTP/2 multiplexing a lot less effective.
In parallel with HTTP/2, work was also being done to optimise the network connection experience for devices on mobile and wireless networks. The outcome was QUIC — another L4 protocol over UDP (which itself is barebones enough to be nicknamed “the null protocol”). Unlike TCP, UDP just tosses data packets between endpoints without much consideration of their fate or the connection state.
QUIC's main innovation is to integrate encryption into the transport layer, elevate connection semantics into application space, and let connection state live at the endpoints rather than in the transport components. This allows a connection to retain its context as a device migrates between access points and cellular towers.
So HTTP/3? Well, one way to think about it is as HTTP/2 semantics over QUIC transport. You get excellent latency characteristics over frequently interrupted networks, and you get true stream multiplexing because QUIC only enforces delivery order within each stream, never across streams.
Is HTTP/3 the default option going forward? Maybe not until we get the level of support that TCP enjoys at the hardware level. For now, managing connection state in application software means that in controlled environments (like east-west communication within a data centre), HTTP/3 may not achieve throughput as good as HTTP/2's.
If I might also plug 'The Atlas of Fourier Transforms': if you're interested in building intuition about symmetry and phase in Fourier space, the book illustrates many structures.
How do you compute the fractional FT? My guess is by interpolating the DFT matrix (via matrix logarithm & exponential) -- is that right, or do you use some other method?
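Not the author's method (I don't know what they use), just a sketch of the matrix-power route you're guessing at, using SciPy. Note that the DFT matrix only has eigenvalues ±1 and ±i, each repeated, so a generic matrix power picks one particular branch; the "standard" fractional DFT pins the branch down with a Hermite-Gaussian-like eigenbasis instead.

```python
import numpy as np
from scipy.linalg import dft, expm, fractional_matrix_power, logm

N = 8
a = 0.5                                   # transform order (a = 1 is the plain DFT)
F = dft(N, scale="sqrtn")                 # unitary DFT matrix

F_a = fractional_matrix_power(F, a)       # one branch of F**a
F_a_log = expm(a * logm(F))               # same idea via matrix log/exp

x = np.random.randn(N)
# Applying the half-order transform twice should reproduce the full DFT
# (up to roundoff) for whichever branch was chosen.
print(np.abs(F_a @ F_a @ x - F @ x).max())
```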
I thought this might be about the saying I've heard a bunch recently, "Slow is smooth, and smooth is fast."
I've mostly heard it in the context of building and construction videos where they are approaching a new skill or technique and have to remind themselves to slow down.
Going slowly and being careful leads to fewer mistakes, which makes for a "smoother" process that ends up taking less time, whereas going too fast and making mistakes means work has to be redone and ultimately takes longer.
On rereading it, I see some parallels: When one is trying to go too fast, and is possibly becoming impatient with their progress, their mental queue fills up and processing suffers. If one accepts a slower pace, one's natural single-tasking capability will work better, and they will make better progress as a result.
And maybe it's just my selection bias working hard to confirm that he actually is saying what I want him to say!
While studying well-designed codebases is incredibly valuable, there's an important "tip of the iceberg" effect to consider: much of good software design lives in the "negative space" - what's deliberately not there.
The decisions to exclude complexity, avoid premature abstractions, or reject certain patterns are often just as valuable as the code you can see. But when you're studying a codebase, you're essentially seeing the final edit without the editor's notes - all the architectural reasoning that shaped those choices is invisible.
This is why I've started maintaining Architectural Decision Records (ADRs) in my projects. These document the "why" behind significant technical choices, including the alternatives we considered and rejected. They're like technical blog posts explaining the complex decisions that led to the clean, simple code you see.
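To make that concrete, here is a hypothetical example (every detail below is invented), roughly following Michael Nygard's well-known Title / Status / Context / Decision / Consequences template:

```
ADR 007: Use plain SQL instead of an ORM

Status: Accepted
Context: We have few queries, they are hand-tuned and performance-sensitive,
         and the team already knows SQL well. Alternatives considered:
         a full ORM, a query builder.
Decision: Write plain SQL behind a thin repository layer.
Consequences: Less magic and easier profiling, at the cost of some
         hand-written mapping boilerplate. Revisit if the schema churns.
```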
ADRs serve as pointers not just for future human maintainers, but also for AI tools when you're using them to help with coding. They provide readable context about architectural constraints and compromises - "we've agreed not to do X because of Y, so please adhere to Z instead." This makes AI assistance much more effective at respecting your design decisions rather than suggesting patterns you've deliberately avoided.
When studying codebases for design patterns, I'd recommend looking for projects that also maintain ADRs, design docs, or similar decision artifacts. The combination of clean code plus the architectural reasoning behind it - especially the restraint decisions - provides a much richer learning experience.
Some projects with good documentation of their design decisions include Rust's RFCs, Python's PEPs, or any project following the ADR pattern. Often the reasoning about what not to build is more instructive than the implementation itself.
Handling SIGINT (ctrl-c) with child processes is tricky, but a more pervasive problem for Rust CLI programs is handling SIGPIPE. For historical reasons the standard library's startup code sets SIGPIPE to be ignored before main() is called. That means when you pipe your Rust CLI program's output to something like head, instead of being killed by the signal sent when head closes the pipe, you get a write error, and your program usually prints an error message instead of dying silently.

You can match on the kind of write error and skip printing a message when it's a broken pipe, but a more subtle problem is that the shell reports the exit status of a program killed by a signal as 128 + the signal number, so 141 in the case of a broken pipe. You can emulate this behaviour by checking for a broken pipe and explicitly exiting with a 141 status code, but it's not possible to fully reproduce being killed by a signal.

There's been an issue open for years about making this configurable (the latest proposal is via a compiler flag).
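The comment above is about Rust, but CPython happens to ship the same default (SIGPIPE is set to be ignored at startup, so a closed pipe surfaces as BrokenPipeError), so purely as an illustration of the 128 + signal-number convention, here is the pattern sketched in Python rather than Rust:

```python
import os
import sys


def main() -> None:
    try:
        for i in range(1_000_000):
            print(i)                      # e.g. piped into `head -n 5`
        sys.stdout.flush()
    except BrokenPipeError:
        # Redirect stdout to /dev/null so the interpreter doesn't hit a
        # second BrokenPipeError while flushing buffers at shutdown.
        devnull = os.open(os.devnull, os.O_WRONLY)
        os.dup2(devnull, sys.stdout.fileno())
        # SIGPIPE is signal 13, so the conventional "killed by SIGPIPE"
        # status is 128 + 13 = 141.
        sys.exit(141)


if __name__ == "__main__":
    main()
```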
> docs that tells you basically that the code is self documented
Anytime someone tells me the code is self-documented I hear "there's no documentation."
The most common programmer's footgun:

    "I don't have time to document"
          |                  ^
          v                  |
    Spends lots of time trying to understand code
We constantly say we don't have time to document the code. So instead we spend all our time reading code and trying to figure out what it does, building only the minimal understanding needed to implement whatever thing we need to implement.
This, of course, is itself naive, because you can't know what the minimal necessary information is without knowing something about the whole codebase. Which is also why institutional knowledge is so important, and why it's also weird that we get pay raises by switching companies rather than through internal raises. That's like trying to fix the damage from the footgun with a footgun.
I've always had trouble getting multiple processes write access to the SQLite file. For example, if I have a Node.js backend working with that file and I try to access the file with a different tool (Adminer, for example), it fails ("file in use" or something like that). Should that work? I don't know if I'm doing something wrong, but this has been my experience with multiple projects.
Self plug: I made Jupyter notebooks for each chapter of this and the DFT and Physical Modeling books in this series, with Python animations/audio for some key concepts:
My most recent example of this is mentoring young, ambitious, but inexperienced interns.
Not only did they produce about the same amount of code in a day that they used to produce in a week (or two), several other things made my work harder than before:
- During review, they hadn't thought as deeply about their code so my comments seemed to often go over their heads. Instead of a discussion I'd get something like "good catch, I'll fix that" (also reminiscent of an LLM).
- The time spent on trivial issues went down a lot, to almost zero, but the remaining issues were much more subtle and time-consuming to find and describe.
- Many bugs were of a new kind (to me): the code would look like it does the right thing but actually not work at all, or just be much more broken than code with that level of "polish" would normally be. This breakdown of pattern-matching compared to "organic" code made the overhead much higher. Spending decades reviewing code and answering Stack Overflow questions often makes it possible to pinpoint not just a bug but how the author got there in the first place and how to help them avoid similar things in the future.
- A simple but bad (inefficient, wrong, illegal, ugly, ...) solution is a nice thing to discuss, but the LLM-assisted junior dev often cooks up something much more complex, which can be bad in many ways at once. The culture of slowly growing a PR from a little bit broken, thinking about design and other considerations, until it's high quality and ready for a final review doesn't work the same way.
- Instead of fixing the things in the original PR, I'd often get a completely different approach as the response to my first review. Again, often broken in new and subtle ways.
This led to a kind of effort inversion, where senior devs spent much more time on these PRs than the junior authors themselves. The junior dev would feel (I assume) much more productive and competent, but the response to their work would eventually lack most of the usual enthusiasm or encouragement from senior devs.
How do people deal with these issues? One thing that worked well for me initially was to always require a lot of (passing) tests, but eventually these tests would suffer from many of the same problems.
The corrections in his blog post show you the level of precision mathematicians are used to.
A regular interviewer would have left the inaccuracies as they are because it’s too tedious to go over all of them when you have a casual conversation.