So they "ignore" bit depth by using 32 bits for each sample. This may be a good solution but it's not really magic. They just allocated many more bits than other codecs were willing to.

It also seems like a very CPU-centric design choice. If you implement a hardware en/decoder, you will see a stark difference in cost between one that works on 8/10-bit samples and one that works on 32-bit samples. Maybe this is motivated by the intended use cases for JPEG XL? Or maybe I've missed the point of what JPEG XL is?

Image decoding is fast enough that no one uses hardware decoders. The extra bits are very cheap on both CPU and GPU, and by using them internally you prevent intermediate calculations from accumulating rounding error, which gives a much cleaner size/quality trade-off. (Note that 10-bit output is still valuable on an 8-bit display, because it lets the display manager dither the image.)
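
To make the error-accumulation point concrete, here's a toy sketch (illustrative numbers only, not any codec's actual transforms): a chain of scale steps whose product is 1.0 should be lossless, but rounding back to 8 bits after every step drifts away from the input, while keeping float intermediates and quantizing once at the end recovers it exactly.

    import numpy as np

    # Hypothetical pipeline: three scale steps whose product is 1.0,
    # standing in for a codec's internal transforms.
    rng = np.random.default_rng(0)
    pixels = rng.integers(0, 256, size=100_000).astype(np.float64)
    steps = [0.6, 0.7, 1.0 / (0.6 * 0.7)]  # product is 1.0, so a lossless round trip

    # Variant A: round back to 8-bit integers after every intermediate step.
    a = pixels.copy()
    for s in steps:
        a = np.clip(np.round(a * s), 0, 255)

    # Variant B: keep float intermediates, quantize once at the end.
    b = pixels.copy()
    for s in steps:
        b *= s
    b = np.clip(np.round(b), 0, 255)

    print("mean abs error, 8-bit intermediates :", np.abs(a - pixels).mean())
    print("mean abs error, float intermediates :", np.abs(b - pixels).mean())

Variant A reports a nonzero mean error from the per-step rounding alone; variant B comes back exact.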

That is true! But AVIF is based on AV1. As a video codec, AV1 often does need to be implemented in dedicated hardware for cost and power efficiency reasons. I think the article is misleading in this regard: "This limitation comes from early digital video systems". No, it is very much a limitation for video systems in the current age too.
