We insert/extract about 2.5 bits more information from the 8 bit JPEGs, giving roughly 10.5 bits of precision. Some handwaving is necessary here: it comes down to the coefficient distributions, which concentrate very high probability around zero. Luckily, that is exactly the case for smooth, noiseless gradients, where banding would otherwise be visible.
From a purely theoretical viewpoint, 10+ bit encoding leads to slightly better results even when rendered with a traditional 8 bit decoder, because one source of error has been removed from the pipeline.
Has there been any outreach to get a new HDR decoder for the extra bits into any software?
I might be wrong, but it seems like Apple is the primary game in town for supporting HDR. How do you intend to persuade Apple to upgrade their JPG decoder to support Jpegli?
How does the data get encoded with 10.5 bits of precision yet still display correctly on an 8 bit decoder, while potentially displaying even more accurately on a 10 bit decoder?
Through non-standard API extensions you can provide a 16 bit data buffer to jpegli.
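For the curious, here is a minimal sketch of what that could look like. It assumes the `jpegli_set_input_format` extension and the `JPEGLI_TYPE_UINT16` / `JPEGLI_NATIVE_ENDIAN` constants from `lib/jpegli/encode.h` (those names, the include paths, and the 16-bit scanline cast are my reading of the headers, not official documentation); the rest just mirrors the familiar libjpeg flow:

```cpp
// Sketch only: handing a 16-bit interleaved RGB buffer to the jpegli encoder
// through its non-standard extension. Names marked "assumption" below are my
// guess at the lib/jpegli/encode.h API; everything else mirrors libjpeg usage.
#include <cstdint>
#include <cstdio>

#include "jpeglib.h"        // assumption: jpegli's libjpeg-compatible types
#include "jpegli/encode.h"  // assumption: path to jpegli's encoder extensions

void EncodeRgb16(const uint16_t* rgb16, int width, int height, FILE* out) {
  jpeg_compress_struct cinfo;
  jpeg_error_mgr jerr;
  cinfo.err = jpegli_std_error(&jerr);  // assumption: jpegli_* error handler
  jpegli_create_compress(&cinfo);
  jpegli_stdio_dest(&cinfo, out);

  cinfo.image_width = width;
  cinfo.image_height = height;
  cinfo.input_components = 3;
  cinfo.in_color_space = JCS_RGB;
  jpegli_set_defaults(&cinfo);
  jpegli_set_quality(&cinfo, 90, /*force_baseline=*/TRUE);

  // The non-standard part: declare that scanlines contain 16-bit samples.
  jpegli_set_input_format(&cinfo, JPEGLI_TYPE_UINT16, JPEGLI_NATIVE_ENDIAN);

  jpegli_start_compress(&cinfo, TRUE);
  while (cinfo.next_scanline < cinfo.image_height) {
    // JSAMPROW is nominally 8-bit; passing 16-bit rows through a cast is
    // exactly what the extension above is declaring.
    JSAMPROW row =
        (JSAMPROW)(rgb16 + (size_t)cinfo.next_scanline * width * 3);
    jpegli_write_scanlines(&cinfo, &row, 1);
  }
  jpegli_finish_compress(&cinfo);
  jpegli_destroy_compress(&cinfo);
}
```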
The data is carefully encoded in the DCT coefficients. They are 12 bits, so in some situations you can get even 12 bit precision. Quantization errors, however, add up, and the worst case is about 7 bits. Luckily that occurs only in the noisiest areas; in smooth slopes we can get around 10.5 bits.
8-bit JPEG actually uses 12-bit DCT coefficients. Traditional JPEG coders introduce a lot of error by rounding to 8 bits at several stages, while Jpegli keeps everything in floating point internally.
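To make the rounding point concrete, here is a toy illustration (my own numbers, not jpegli code). With the JPEG DCT normalization the DC coefficient of an 8x8 block is 8 times the block mean, so a float pipeline can keep intensity steps of 1/8 of a code value in smooth regions, while an 8-bit pipeline has already rounded before the DCT ever runs (assuming a DC quantization step of 1 and ignoring the level shift for simplicity):

```cpp
// Toy illustration, not jpegli code: why the DC coefficient of a smooth 8x8
// block preserves finer-than-8-bit intensity when the encoder works in float.
// Assumes a DC quantization step of 1; with the JPEG normalization the DC
// coefficient equals 8 * (block mean). Level shift is ignored for simplicity.
#include <cmath>
#include <cstdio>

int main() {
  const double true_mean = 100.4;  // "real" intensity of a flat block, 0..255 scale

  // 8-bit pipeline: samples are rounded to integers before the DCT.
  const long dc_from_8bit = std::lround(8.0 * std::lround(true_mean));  // 800

  // Float pipeline: rounding happens only once, at the quantized coefficient.
  const long dc_from_float = std::lround(8.0 * true_mean);              // 803

  std::printf("reconstructed mean, 8-bit path: %.4f\n", dc_from_8bit / 8.0);   // 100.0000
  std::printf("reconstructed mean, float path: %.4f\n", dc_from_float / 8.0);  // 100.3750
  return 0;
}
```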