
> Compression seems silly in the modern world. Virtually everything is already compressed.

IIRC my laptop's zpool has a 1.2x compression ratio; it's worth doing. At a previous job, we had over a petabyte of postgres on ZFS and saved real money with compression. Hilariously, on some servers we also improved performance because ZFS could decompress reads faster than the disk could read.



> we also improved performance because ZFS could decompress reads faster than the disk could read

This is my favorite side effect of compression in the right scenarios. I remember getting a huge speedup in a proprietary in-memory data structure by using LZO (or one of those fast algorithms): decompressing the compressed buffer outperformed a memcpy of the uncompressed one, and this was all already in memory, so no disk I/O involved! It also used less than a third of the memory.
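
Not the original code, obviously, but a rough sketch of the effect using the third-party python-lz4 package (pip install lz4) as a stand-in for LZO. The numbers and data here are made up for illustration, and whether decompression actually beats a plain copy depends heavily on how compressible the data is and on memory bandwidth; the intuition is that a buffer compressed to a third of the size means a third of the bytes moving over the memory bus:

    # Hypothetical benchmark: plain in-memory copy of an uncompressed buffer
    # vs. decompressing a compressed copy of the same data.
    import time
    import lz4.frame  # third-party package, stand-in for LZO

    # Highly repetitive data compresses well, which is when this effect shows up.
    data = b"2024-01-01T00:00:00Z GET /api/v1/items 200 12ms\n" * 2_000_000
    compressed = lz4.frame.compress(data)
    print(f"uncompressed: {len(data) / 1e6:.0f} MB, "
          f"compressed: {len(compressed) / 1e6:.0f} MB "
          f"({len(data) / len(compressed):.1f}x)")

    def best_of(fn, repeats=5):
        # Take the best of several runs to reduce timing noise.
        times = []
        for _ in range(repeats):
            start = time.perf_counter()
            fn()
            times.append(time.perf_counter() - start)
        return min(times)

    copy_time = best_of(lambda: bytearray(data))                  # memcpy-like copy
    decomp_time = best_of(lambda: lz4.frame.decompress(compressed))

    print(f"copy:       {len(data) / copy_time / 1e9:.2f} GB/s")
    print(f"decompress: {len(data) / decomp_time / 1e9:.2f} GB/s")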


The performance gain from compression (replacing I/O with compute) isn't ironic; it was treated as a feature of the various NAS appliances that Sun (and, after them, Oracle) developed around ZFS.


How do you get a PostgreSQL database to grow to one petabyte? The maximum table size is 32 TB o_O


Cumulative; dozens of machines with a combined database size over a PB even though each box only had like 20 TB.


Probably by using partitioning.
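
For context, a sketch of declarative range partitioning (Postgres 10+). The table names and the psycopg2 connection string here are made up for illustration; the point is that each partition is its own table on disk, so the ~32 TB per-table limit applies per partition rather than to the logical parent:

    # Sketch of declarative range partitioning via psycopg2 (hypothetical names).
    import psycopg2

    DDL = """
    CREATE TABLE IF NOT EXISTS events (
        id         bigint      NOT NULL,
        created_at timestamptz NOT NULL,
        payload    jsonb
    ) PARTITION BY RANGE (created_at);

    -- Each partition is a separate underlying table, so the per-table
    -- size limit applies to each partition, not to 'events' as a whole.
    CREATE TABLE IF NOT EXISTS events_2024 PARTITION OF events
        FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');
    CREATE TABLE IF NOT EXISTS events_2025 PARTITION OF events
        FOR VALUES FROM ('2025-01-01') TO ('2026-01-01');
    """

    with psycopg2.connect("dbname=example") as conn:
        with conn.cursor() as cur:
            cur.execute(DDL)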



