Not really. OrioleDB solves the vacuum problem by introducing an undo log. Neon gives you scale-out storage, which is in a way orthogonal to OrioleDB. With some work you could run OrioleDB AND Neon storage and get the benefits of both.
One of the services that can replace Fauna is the DocumentDB Postgres plugin (plus a proxy that is not open sourced yet, but will be shortly). It's available on Azure, but I can also see other Postgres providers starting to pick this up.
This is an exciting project. A few highlights:
- The query processor is DuckDB; as long as it translates the PG type system to the DuckDB type system well, it will be very fast.
- Data is stored on S3 in Parquet with Delta or Iceberg metadata. This is really cool: you don't need to push analytical data through the WAL; only metadata goes into the WAL. This means fast loading, at least in theory, and compatibility with the whole Delta/Iceberg ecosystem (see the sketch after this list).
- Once they build real-time ingest, you can just push time series into this system and you don't need a second system like ClickHouse.
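To make the Parquet-plus-DuckDB point concrete, here is a minimal sketch using plain DuckDB from Python; this is not the project's actual API, and the bucket, path, and columns are invented for illustration:

```python
# Hypothetical sketch: querying Parquet files on S3 directly with DuckDB.
# Bucket, path, and schema are made up; credentials come from the environment.
import duckdb

con = duckdb.connect()
con.execute("INSTALL httpfs;")  # S3 support
con.execute("LOAD httpfs;")

# DuckDB scans the Parquet files in place; none of this data
# ever flows through the Postgres WAL.
con.sql("""
    SELECT user_id, count(*) AS events
    FROM read_parquet('s3://my-bucket/events/*.parquet')
    GROUP BY user_id
    ORDER BY events DESC
    LIMIT 10
""").show()
```

That's the appeal of the design: the analytical engine reads the columnar files where they sit, so bulk loads never have to be serialized through the transactional write path.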
The word is that it's a serious effort on Microsoft's part. It's missing the MongoDB wire protocol, which they plan to open source as well; in the meantime it's possible to use FerretDB for that.
I think the biggest use cases are big data and dev platforms that need application compatibility but find wrapping Atlas less attractive for some reason.
I remember having a hard time solving this in my first year of college. This is really impressive.
So I have this problem here: given that for positive integers m and n, 0 ≤ x[m + n] ≤ x[m] + x[n], I need to prove that the limit of x[n]/n exists. Hmm, okay, let's see how to approach this.
…
Alternative approach: maybe I can show that a_n is convergent by showing that it's a Cauchy sequence.
But that might be complicated.
Alternatively, perhaps I can use the subadditivity to bound a_n.
Wait, let's consider the following:
Since x is subadditive, x[m + n] ≤ x[m] + x[n].
Let me set m = n, then x[2n] ≤ x[n] + x[n] = 2x[n].
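Pushing the m = n trick further gives the standard argument here (this is Fekete's subadditive lemma); a sketch of the remaining steps:

```latex
Iterating $x[m+n] \le x[m] + x[n]$ with $m = n$ gives, by induction on $k$,
\[
  x[kn] \le k\,x[n] \qquad \text{for all positive integers } k, n,
\]
so $x[kn]/(kn) \le x[n]/n$, which suggests the limit should be
$L = \inf_{n \ge 1} x[n]/n$ (finite, since $x[n] \ge 0$ gives $L \ge 0$).
To finish, fix $n$ and write any $m$ as $m = kn + r$ with $1 \le r \le n$; then
\[
  \frac{x[m]}{m} \le \frac{k\,x[n] + x[r]}{kn + r}
  \longrightarrow \frac{x[n]}{n} \quad \text{as } m \to \infty,
\]
because $x[r]$ takes only finitely many values. Hence
$\limsup_{m} x[m]/m \le x[n]/n$ for every $n$, so
$\limsup \le L \le \liminf$, and $\lim_{n \to \infty} x[n]/n = L$ exists.
```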
A number of features stood out to me in this release:
1. Chipping away more at vacuum. Fundamentally, Postgres doesn't have an undo log and therefore has to have vacuum. It's a trade-off: fast recovery versus, well, having to vacuum. The unfortunate part about vacuum is that it adds load to the system exactly when the system needs all its resources. I hope one day people will no longer need to know that vacuum exists; we are one step closer, but not there yet.
2. Performance gets better, not worse. Mark Callaghan blogs about MySQL and Postgres performance changes over time, and MySQL keeps regressing while Postgres keeps improving.
3. JSON. Postgres keeps improving quality of life for interop with JS and TS.
4. Logical replication is becoming a super robust way of moving data in and out. This is very useful when you move data from one instance to another, especially if the versions don't match. Recently we have been using it to move data at 1 Gb/s (a minimal setup is sketched after this list).
5. Optimizer. The better the optimizer, the less you think about the optimizer. According to the research community, SQL Server has the best optimizer. It's very encouraging that the PG optimizer gets better with every release.
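For point 4, a minimal sketch of the moving parts, driven from Python with psycopg; the hosts, database names, and publication/subscription names are placeholders:

```python
# Hypothetical sketch: logical replication between two Postgres instances.
# Hosts, DB names, and object names are made up for illustration.
import psycopg

# On the source: publish the tables to move (requires wal_level = logical).
with psycopg.connect("host=old-primary dbname=app", autocommit=True) as src:
    src.execute("CREATE PUBLICATION migrate_pub FOR ALL TABLES;")

# On the destination: subscribe. The initial sync copies existing rows, then
# changes stream continuously; this works across mismatched major versions.
# (CREATE SUBSCRIPTION cannot run inside a transaction, hence autocommit.)
with psycopg.connect("host=new-primary dbname=app", autocommit=True) as dst:
    dst.execute(
        "CREATE SUBSCRIPTION migrate_sub "
        "CONNECTION 'host=old-primary dbname=app' "
        "PUBLICATION migrate_pub;"
    )
```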
MySQL can be faster in certain circumstances (mostly range selects), but only if your schema and queries are designed to exploit InnoDB’s clustering index.
But even then, in some recent tests I did, Postgres was less than 0.1 msec slower. And if the schema and queries were not designed with InnoDB in mind, Postgres had little to no performance regression, whereas MySQL had a 100x slowdown.
I love MySQL for a variety of reasons, but it’s getting harder for me to continue to defend it.