One of the things that most excites me about QUIC & HTTP/3 is the potential for something really different & drastically better in this realm. Right out of the box QUIC runs over connectionless UDP, identifying the session by connection ID & cryptography rather than by the 4-tuple. I feel like that opens so much more possibility for a data center to move around who is serving a QUIC connection. I have a lot of research to do, but ideally a connection could get routed stream by stream, & individual servers could do some kind of Direct Server Return (DSR) for individual streams. But I'm probably pie in the sky with these overflowing hopes.
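To make "routing on connection identity instead of the 4-tuple" concrete, here's a minimal sketch in Go of a load balancer extracting the Destination Connection ID per RFC 9000 and hashing it to a backend. The fixed 8-byte short-header DCID length and pickBackend are illustrative assumptions, not any real load balancer's API:

    package main

    import (
        "errors"
        "fmt"
    )

    const shortHeaderDCIDLen = 8 // assumed: the LB hands out 8-byte DCIDs

    // dcid extracts the Destination Connection ID from a QUIC packet (RFC 9000).
    func dcid(pkt []byte) ([]byte, error) {
        if len(pkt) < 1 {
            return nil, errors.New("empty packet")
        }
        if pkt[0]&0x80 != 0 { // long header: flags(1) + version(4) + dcidLen(1) + dcid
            if len(pkt) < 6 {
                return nil, errors.New("truncated long header")
            }
            l := int(pkt[5])
            if len(pkt) < 6+l {
                return nil, errors.New("truncated DCID")
            }
            return pkt[6 : 6+l], nil
        }
        // short header: flags(1) + dcid; length must be known out of band
        if len(pkt) < 1+shortHeaderDCIDLen {
            return nil, errors.New("truncated short header")
        }
        return pkt[1 : 1+shortHeaderDCIDLen], nil
    }

    // pickBackend maps a DCID to a backend. A real deployment might instead
    // embed a backend ID inside the connection IDs it issues.
    func pickBackend(id []byte, backends []string) string {
        var h uint32
        for _, b := range id {
            h = h*31 + uint32(b)
        }
        return backends[h%uint32(len(backends))]
    }

    func main() {
        // fake Initial packet: long header, version 1, 8-byte DCID
        pkt := append([]byte{0xc0, 0, 0, 0, 1, 8}, []byte{1, 2, 3, 4, 5, 6, 7, 8}...)
        id, err := dcid(pkt)
        if err != nil {
            panic(err)
        }
        fmt.Println(pickBackend(id, []string{"srv-a", "srv-b"}))
    }

Because the DCID survives a client changing its IP or port, this lookup keeps routing the connection to the same server where the 4-tuple would not.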
Edit: oh here's a Meta talk on their QUIC CDN doing DSR[1].
The original "live migration of virtual machines"[2] paper blew me away & reset my expectations for computing & connectivity, way back in 2005. They live migrated a Quake 3 server. :)
Co-author of multiple QUIC libraries here: even though QUIC uses a connectionless protocol (UDP) and allows client IP addresses to change during the lifetime of a connection, a QUIC connection is actually extremely stateful - a lot more so than TCP. There's lots of state for each stream (including potentially very fragmented send and receive buffers) and for the overall connection. You could potentially serialize that, but it would be even more work than for TCP.
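To give a feel for how much state that is, here's a rough sketch in Go of what lives in a single stream and the connection around it. The types are invented for illustration, not any real library's, and a serialization scheme would have to capture all of this consistently:

    package quicstate

    // Gap-free delivery requires tracking every out-of-order fragment.
    type fragment struct {
        offset uint64
        data   []byte
    }

    type streamState struct {
        id            uint64
        recvFragments []fragment // possibly many disjoint ranges
        recvOffset    uint64     // next byte the application may read
        sendBuffered  []fragment // sent-but-unacknowledged data, also fragmented
        sendOffset    uint64
        flowLimit     uint64 // per-stream flow control credit
        finReceived   bool
    }

    type connState struct {
        streams       map[uint64]*streamState
        connFlowLimit uint64   // connection-level flow control
        largestAcked  uint64   // loss recovery state lives here too
        cwnd          uint64   // congestion control state
        tlsSecrets    [][]byte // packet protection keys per encryption level
    }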
Normal applications would typically just make sure the client can reconnect as fast as possible (QUIC can do it in 0 to 1 RTT), and then have suitable application-level semantics that limit any availability issues when a reconnect happens. E.g. for large downloads the client can resume using a ranged request; for persistent connections the server can tell the client with a GOAWAY that it might shut down, so the client can reconnect early and avoid the availability gap.
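For the download case the recovery is plain HTTP, roughly like this Go sketch (the URL and file path are placeholders; works over any transport, HTTP/3 included):

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "os"
    )

    // resumeDownload appends to a partial file, asking the server
    // only for the bytes we don't have yet via a Range header.
    func resumeDownload(url, path string) error {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0o644)
        if err != nil {
            return err
        }
        defer f.Close()

        info, err := f.Stat()
        if err != nil {
            return err
        }

        req, err := http.NewRequest(http.MethodGet, url, nil)
        if err != nil {
            return err
        }
        req.Header.Set("Range", fmt.Sprintf("bytes=%d-", info.Size()))

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusPartialContent {
            return fmt.Errorf("server ignored Range: %s", resp.Status)
        }
        _, err = io.Copy(f, resp.Body)
        return err
    }

    func main() {
        if err := resumeDownload("https://example.com/big.iso", "big.iso"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }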
My understanding is that for [1], their frontend proxy is still a single QUIC peer which holds all the state of the actual connection - otherwise they couldn't do connection-level flow control and overall congestion control. That layer just instructs another server about which packet payloads to send; it doesn't make the other layer handle QUIC transmission completely on its own.
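So the "DSR" part is likely limited to payload transmission. A purely hypothetical sketch of the kind of instruction the stateful proxy would hand a backend - every field name here is invented, not Meta's actual protocol:

    package dsr

    // sendInstruction is what a stateful proxy might hand to the backend
    // that holds the response body (e.g., a cache node doing DSR).
    type sendInstruction struct {
        connID       []byte // which QUIC connection to stamp on the packet
        packetNumber uint64 // chosen by the proxy: it runs loss recovery
        streamID     uint64
        streamOffset uint64 // chosen by the proxy: it runs flow control
        length       uint32 // how many body bytes to send, within cwnd
        clientAddr   string // DSR: backend sends straight to the client
    }

The key design point is that everything requiring connection-wide state (packet numbers, offsets, congestion budget) stays with the proxy; the backend only fills in bytes and transmits.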
Multihoming is one of the key features of most protocols invented after TCP (SCTP, QUIC, MPTCP), and for good reason: it is so useful in many scenarios.
But given where we ended up, maybe host addresses make more sense than interface addresses (ignoring the effect that would have on routing table aggregation).
I guess it all depends on your constraints. If you can use a message bus or high-level abstractions like Kafka, go for it. But in extremely low-latency or very constrained environments, where you need to switch billions of packets per second, it's hard to beat the simplicity and efficiency of ARP+IP.
[1] https://engineering.fb.com/2022/07/06/networking-traffic/wat...
[2] https://lass.cs.umass.edu/~shenoy/courses/spring15/readings/...