The article mentions that the bit depth can be 16.
You may need more bits for HDR, and some additional bits for precision. For example, screen pixels are stored with a nonlinear (gamma) intensity curve, but image processing is best done in linear light, and decoding to linear stretches the dark codes over a much wider range of values.
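To make that concrete, here's a minimal sketch in C (assuming the standard sRGB transfer function) showing how far the darkest codes get pushed down once you decode to linear, which is why the linear representation needs the extra precision bits:

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Decode an 8-bit sRGB code to linear light in [0, 1] using the
   standard sRGB transfer function. Dark codes map to tiny linear
   values, so a linear pipeline needs more bits to preserve them. */
static double srgb_to_linear(uint8_t code)
{
    double s = code / 255.0;
    return (s <= 0.04045) ? s / 12.92 : pow((s + 0.055) / 1.055, 2.4);
}

int main(void)
{
    printf("sRGB   1/255 -> linear %.8f\n", srgb_to_linear(1));   /* ~0.0003035 */
    printf("sRGB 128/255 -> linear %.8f\n", srgb_to_linear(128)); /* ~0.2158605 */
    return 0;
}
```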
However, I wonder whether floating-point is necessary, or even the best choice compared to 32-bit fixed-point.
The floating-point format includes subnormal numbers very close to zero, which I'd think is far more precision near zero than image data needs.
Processing of subnormal numbers is also very slow on some processors, and that behavior can't always be turned off.
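As a rough sketch of what the fixed-point alternative could look like (assuming a hypothetical Q2.30 layout: 2 integer bits for HDR headroom, 30 fractional bits), precision is uniform across the range and there are no subnormals to trip a slow path:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical Q2.30 fixed-point pixel type: 2 integer bits of HDR
   headroom, 30 fractional bits of uniform precision. */
#define FRAC_BITS 30
typedef int32_t q2_30;

static q2_30  from_double(double x) { return (q2_30)(x * (1 << FRAC_BITS)); }
static double to_double(q2_30 x)    { return x / (double)(1 << FRAC_BITS); }

/* Multiply two Q2.30 values: widen to 64 bits to keep the full
   product, then shift back down to Q2.30. */
static q2_30 q_mul(q2_30 a, q2_30 b)
{
    return (q2_30)(((int64_t)a * b) >> FRAC_BITS);
}

int main(void)
{
    q2_30 pixel = from_double(0.25); /* linear-light sample   */
    q2_30 gain  = from_double(1.5);  /* exposure adjustment   */
    printf("0.25 * 1.5 = %f\n", to_double(q_mul(pixel, gain))); /* 0.375 */
    return 0;
}
```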