Yes, but exposed to everyone. This proposal settles on headers that get passed back and forth, which surely some servers would adopt: headers that establish "I've previously downloaded this thing" via a hash. It feels like the process could start exactly the same way, but end instead with the server sending the client some data it could use to make HTTP range requests. So this proposal feels like it could be extended fairly easily to support differential download.
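Concretely, such an extension might look something like this. To be clear, `Have-Version` and `Changed-Ranges` are purely hypothetical header names invented for illustration; only `Range` is a real HTTP header, and the hash value is a placeholder:

```http
GET /app.js HTTP/1.1
Have-Version: sha-256=:HASH_OF_CACHED_COPY:    (hypothetical header)

HTTP/1.1 200 OK
Changed-Ranges: bytes=1024-2047, 8192-9215     (hypothetical: which ranges differ)

GET /app.js HTTP/1.1
Range: bytes=1024-2047                         (standard HTTP range request)
```

The client would then splice the fetched ranges into its cached copy instead of re-downloading the whole resource.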
Interesting! Privacy concerns aside for the moment, I would guess that there is some "optimal" compression dictionary over some transferred data set, and that Google/Chrome of all people would be in a position to calculate it and distribute it along with Chrome.
Then Google could ship that dictionary to all (Chrome) browsers and update it once a month, and ship the same dictionary to anybody who wants to use it on the server side. Browsers can indicate which shared dictionaries they have preloaded, and if there's overlap with the dictionaries the server has, the server can just pick one and use it to compress the stream (which looks a lot like ciphersuite negotiation). Compression should be faster and use far less memory if the sender can assume the receiver already has the same dictionary, and if there's any problem, both sides can just fall back to on-the-fly compression with an inlined dictionary, like we've always done.
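Python's stdlib can demo the core mechanic: `zlib` supports preset dictionaries via the `zdict` parameter. A minimal sketch, with a made-up dictionary and payload standing in for whatever Google would actually compute:

```python
import zlib

# Hypothetical "shared dictionary": substrings that recur across the data set.
# In the real scheme this would be precomputed and shipped out of band.
shared_dict = b'{"status": "ok", "items": [{"id": , "name": "", "price": }]}' * 4

payload = b'{"status": "ok", "items": [{"id": 1, "name": "widget", "price": 9}]}'

# Sender compresses assuming the receiver already holds shared_dict.
comp = zlib.compressobj(zdict=shared_dict)
with_dict = comp.compress(payload) + comp.flush()

# Receiver decompresses with the same preset dictionary.
decomp = zlib.decompressobj(zdict=shared_dict)
assert decomp.decompress(with_dict) == payload

# For comparison: the same payload compressed with no shared dictionary.
without_dict = zlib.compress(payload)
```

On this toy payload the dictionary-aware stream comes out smaller, because the deflate encoder can back-reference into the shared dictionary instead of shipping the common substrings inline.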
There are almost certainly different optimal dictionaries for different locales / countries, each of which could be calculated and distributed in the same way. Even if the 'wrong' dictionary gets used for a given request, it's (probably) not catastrophic.
I guess it might be possible for an attacker to claim their client has only one dictionary and then request pages they know compress badly (or expensively) with that dictionary. Even then, server-side heuristics can detect a lower-than-expected compression ratio and, again, fall back to on-the-fly dictionary calculation.
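That fallback heuristic is easy to sketch. The threshold and function name below are made up, and `zlib`'s `zdict` stands in for whatever dictionary scheme would actually be negotiated:

```python
import zlib

# Made-up threshold: the dictionary-compressed output must be at most 90%
# of the plain-compressed size, or we don't bother with the dictionary.
RATIO_FLOOR = 0.9

def compress_response(payload: bytes, client_dict: bytes) -> tuple[str, bytes]:
    """Compress with the client's advertised dictionary, but fall back to
    plain compression if the ratio looks suspiciously bad (hypothetical)."""
    comp = zlib.compressobj(zdict=client_dict)
    with_dict = comp.compress(payload) + comp.flush()
    plain = zlib.compress(payload)
    if len(with_dict) <= RATIO_FLOOR * len(plain):
        return "dictionary", with_dict
    # Adversarial or simply mismatched dictionary: serve plain deflate.
    return "plain", plain
```

A payload the dictionary covers well takes the `"dictionary"` path; a payload the dictionary doesn't help with (the attacker's case) quietly takes the `"plain"` path, so the worst the attacker achieves is ordinary compression.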