There is little point in inventing new protocols, given how low the overhead of UDP is. That's just 8 bytes per packet, and it enables going through NAT. Why come up with a new transport layer protocol, when you can just use UDP framing?
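To make "UDP framing" concrete, here's a toy sketch in TypeScript: a made-up 3-byte application header (one-byte message type, two-byte payload length) laid on top of a datagram. The field sizes are arbitrary illustration choices, not any real protocol:

```typescript
// Toy framing on top of UDP: [1-byte msg type][2-byte payload length][payload].
// Purely illustrative; the header layout is invented, not from any spec.
function encodeFrame(msgType: number, payload: Uint8Array): Uint8Array {
  const frame = new Uint8Array(3 + payload.length);
  const view = new DataView(frame.buffer);
  view.setUint8(0, msgType);
  view.setUint16(1, payload.length); // big-endian by default
  frame.set(payload, 3);
  return frame;
}

function decodeFrame(frame: Uint8Array): { msgType: number; payload: Uint8Array } {
  const view = new DataView(frame.buffer, frame.byteOffset, frame.byteLength);
  const msgType = view.getUint8(0);
  const len = view.getUint16(1);
  return { msgType, payload: frame.subarray(3, 3 + len) };
}
```

Each datagram would carry one such frame; UDP's own 8-byte header already handles ports and length, and NAT traversal comes along for free.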
Agreed. Building a custom protocol seems “hard” to many of the same folks who fearlessly build on top of HTTP. The wild shenanigans I’ve seen with headers, query params and JSON make me laugh a little. Everything-as-text is _actually_ hard.
A part of the problem with UDP is the lack of good platforms and tooling. Examples as well. I’m trying to help with that, but it’s an uphill battle for sure.
I think the "problem" of sending data is a lot harder without some concept of payloads and signaling. HTTP just happens to be the way that people do that, but many messaging and RPC stacks like ZeroMQ/nng, gRPC, Avro, Thrift, etc. work just fine. Plenty of tech companies use those internally.
Some of this is hurt by the fact that V8, Node's runtime, has had first-class JSON parsing support built in, but no comparable support for binary protocol parsing. So writing JavaScript to parse binary protocols is a lot slower than parsing JSON.
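For illustration, here's roughly what the two paths look like in JS/TS. The record layout and field names are invented; the performance claim is the parent's (JSON.parse is a single optimized native call, while binary parsing goes through many small DataView calls), and this sketch doesn't measure it:

```typescript
// Parsing the same record from a fixed binary layout vs. from JSON.
// Invented layout for illustration: u32 id, f64 price, little-endian.
function parseBinary(buf: Uint8Array): { id: number; price: number } {
  const view = new DataView(buf.buffer, buf.byteOffset, buf.byteLength);
  return { id: view.getUint32(0, true), price: view.getFloat64(4, true) };
}

function parseJson(text: string): { id: number; price: number } {
  // One call into V8's native JSON parser does all the work.
  return JSON.parse(text);
}
```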
Sure, you can reimplement multiplexing on the application level, but it just makes more sense to do it on the transport level, so that people don't have to do it in JavaScript.
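To make the comparison concrete, here's a minimal sketch of what application-level multiplexing tends to look like; the 4-byte stream-ID prefix is an arbitrary choice for illustration, not QUIC's actual framing:

```typescript
// Minimal application-level multiplexing: tag each message with a stream ID
// and fan incoming messages out to per-stream queues. This is the kind of
// bookkeeping QUIC moves down into the transport layer.
class Demux {
  private streams = new Map<number, Uint8Array[]>();

  push(frame: Uint8Array): void {
    const view = new DataView(frame.buffer, frame.byteOffset, frame.byteLength);
    const streamId = view.getUint32(0); // invented 4-byte stream-ID prefix
    const queue = this.streams.get(streamId) ?? [];
    queue.push(frame.subarray(4));
    this.streams.set(streamId, queue);
  }

  read(streamId: number): Uint8Array[] {
    return this.streams.get(streamId) ?? [];
  }
}
```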
It only takes a few thousand lines (easily less than 10k even with zero dependencies and no standard library) to implement QUIC.
Kernel management of transport protocols has zero actual benefit for latency or throughput given proper network stack design. Neither does hardware offload except for crypto offload. Claimed differences are just due to poor network stack design and poor protocol implementation.
Not fully standards compliant since I skipped some irrelevant details like bidirectional streams when I can just make a pair of unidirectional streams, but handles all of the core connection setup and transport logic. It is not actually that complicated. And just to get ahead of it, performance is perfectly comparable.
FWIW, quic-go, a fully-featured implementation in Go used by the Caddy web server, is 36k lines in total (28k SLoC), excluding tests. Not quite 10k, but closer to that than to your figure.
This is not the case with Starlink (and presumably Starlink) satellites. The ground stations use directional phased arrays. They can do it, because they keep good track of where each satellite is at any given moment, and do trajectory adjustments as needed.
Yes, groundstations are virtually always highly directional, except for, like, radio hams sometimes. (Even hams usually use yagis.) Possibly you didn't notice this, but I'm talking about the antennas on the satellites, which are the ones that could suffer interference (since they're the ones receiving the uplink frequencies we're discussing), not the groundstation antennas.
You always have to keep track of where each satellite is at any given moment.
What do you mean by "Starlink (and presumably Starlink)"?
To add to this, we know what objects interfere with our satellite contacts. We keep their orbital positions (as best as possible) in mind when scheduling satellite operations to avoid communication failures (partial or total) caused by their interference.
This is often learned after the fact. A contact will fail or go badly and then you can examine what was around it at the time. Over a series of failures the offending satellite will be identified.
Yeah, if you don't know the name of the thing you're looking for, you can spend weeks looking for it. If you just search for something generic like "eigenvalue bound estimate", you'll find thousands of papers and hundreds of textbooks, and it will take a substantial amount of time to decide whether each is actually relevant to what you're looking for.
There is no reason to expect that the test results would be the same across all demographic groups, and in fact, everything we know about psychometrics (i.e. the science of mental testing) suggests that we should expect exactly the opposite. See e.g. "Intelligence: Knowns and unknowns", which described the consensus position of the American Psychological Association as of 1995:
> The cause of [test achievement] differential is not known; it is apparently not due to any simple form of bias in the content or administration of the tests themselves.
Not sure what your point is; the "test achievement" mentioned in the document refers to a totally different "test" than the ones we were talking about.
Also, on pure logic alone, I don't think the document shows what you think it shows. The document you provided (which is 30 years old, so from this one alone we should not assume it reflects today's consensus) explains that the difference is not understood, and that there is no _obvious_ answer, whether from biology, from group culture, or from bias in the tests. In other words: the difference is due to something _not obvious_, for example (but not limited to, of course, it's just an example) a _not obvious_ form of bias.
What you describe using many completely unnecessary mathematical terms is not only not found in “every real-world protocol”, but is in fact virtually absent from the overwhelming majority of actually used protocols, with the notable exception of the kind of protocol that gets a four-digit-numbered RFC document describing it. Believe it or not, in the software industry nobody defines a new “version number” with a “strictly defined algebra” when they want to add a new field to a communication protocol between two internal backend services.
> What you describe using many completely unnecessary mathematical terms
Unnecessary for you, surely.
> Believe it or not, in the software industry nobody defines a new “version number” with a “strictly defined algebra” when they want to add a new field to a communication protocol between two internal backend services.
Name a protocol that doesn't have a version number, or that lacks a defined algebra in the form of the spec clarifications accompanying each new version. The word "strictly" in "strictly defined algebra" refers to the fact that you cannot evolve a protocol without publishing the changed spec; that is, you're strictly obliged to publish a spec, even a loosely defined one with lots of omissions and zero-values. That's the inferior algebra you get with protobuf, but you're free to think it's unnecessary and doesn't exist.
Instead of just handwaving about whether it's necessary or not, why not point to any protocol that relies on that attribute, and we can then evaluate how important that protocol is?
Yeah. And for anyone curious about the actual content hidden under the jargon-kludge-FP-nerd parent comment, here's my attempt at deciphering it.
They seem to be saying that you have to publish code that can change a type from schema A to schema B... And back, whenever you make a schema B. This is the "algebra". The "and back" part makes it bijective. You do this at the level of your core primitive types so that it's reused everywhere. This is what they meant by "pervasive" and it ties into the whole symmetric groups thing.
Finally, it seems like when you're making a lossy change, where a bijection isn't possible, they want you to make it an incompatible one. I.e., if you replaced address with city, then you cannot decode the message in code that expects address.
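If I'm reading them right, a minimal sketch of that discipline might look like this. The type and field names here are invented for illustration; the enforced property is that a compatible change must round-trip exactly ("and back"), while a lossy change gets no `down` migration at all and is simply incompatible:

```typescript
// Every compatible schema change ships as a pair of migrations that
// round-trip: down(up(a)) must equal a for every old value a.
type V1 = { name: string };
type V2 = { name: string; nickname: string | null }; // added field, defaults to null

const up = (a: V1): V2 => ({ ...a, nickname: null });
const down = (b: V2): V1 => ({ name: b.name });

// A lossy change (e.g. replacing address with city) would publish no
// `down` at all: old code simply cannot decode the new messages.
```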
Granted, on paper it’s a cool feature. But I’ve never once seen an application that will actually preserve that property.
Chances are, the author literally used software that does it as he wrote these words. This feature is critical to how Chrome Sync works. You wouldn’t want to lose synced state if, on another device, you use an older browser version that doesn’t recognize the unknown fields and silently drops them. This is so important that at some point Chrome literally forked the protobuf library so that unknown fields are preserved even when using protobuf lite mode.
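To illustrate the property being discussed (a toy model only; this is not protobuf's actual wire format or Chrome's code, and the field names are invented):

```typescript
// A decoder built for schema v1 stashes fields it doesn't recognize, so
// re-encoding doesn't silently drop data written by a newer peer.
type Decoded = { known: Record<string, unknown>; unknown: Record<string, unknown> };

const KNOWN_FIELDS = new Set(["id", "title"]); // v1's schema, invented for illustration

function decode(wire: Record<string, unknown>): Decoded {
  const known: Record<string, unknown> = {};
  const unknown: Record<string, unknown> = {};
  for (const [k, v] of Object.entries(wire)) {
    (KNOWN_FIELDS.has(k) ? known : unknown)[k] = v;
  }
  return { known, unknown };
}

function reencode(d: Decoded): Record<string, unknown> {
  return { ...d.known, ...d.unknown }; // unknown fields survive the round trip
}
```

A decoder that instead threw `unknown` away would exhibit exactly the sync-data loss described above.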