


(Neon CEO)

Not really. OrioleDB solves the vacuum problem by introducing an undo log. Neon gives you scale-out storage, which is largely orthogonal to OrioleDB. With some work you could run OrioleDB AND Neon storage and get the benefits of both.


> OrioleDB solves the vacuum problem by introducing an undo log.

Way more than just this!

> With some work you could run OrioleDB AND Neon storage and get the benefits of both.

This would require significant design work, given that many of OrioleDB's benefits derive from its row-level WAL.


One of the services that can replace Fauna is the DocumentDB Postgres extension (plus a proxy that is not open sourced yet, but will be shortly). It's available on Azure, and I can see other Postgres providers picking this up as well.

https://github.com/microsoft/documentdb
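For a sense of the programming model, here's a hedged sketch of calling the extension from Python. The function names (documentdb_api.insert_one, documentdb_api.collection) follow the examples in that repo's docs, but treat the exact signatures as assumptions; connection details are placeholders:

    # Sketch: using the DocumentDB Postgres extension via psycopg2.
    # The documentdb_api.* names follow the repo's examples and may
    # differ between versions -- illustrative, not authoritative.
    import psycopg2

    conn = psycopg2.connect("host=localhost dbname=documentdb user=admin")
    conn.autocommit = True
    cur = conn.cursor()

    # Insert a JSON document into a collection.
    cur.execute(
        "SELECT documentdb_api.insert_one('documentdb', 'patients', %s);",
        ('{"patient_id": "P001", "name": "Alice"}',),
    )

    # Read the documents back out.
    cur.execute("SELECT document FROM documentdb_api.collection('documentdb', 'patients');")
    for (doc,) in cur.fetchall():
        print(doc)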


There is an open-source proxy for PostgreSQL with DocumentDB to implement the MongoDB interface: https://blog.ferretdb.io/ferretdb-v2-ga-open-source-mongodb-...


This is an exciting project. A few highlights:

- The query processor is DuckDB: as long as it translates the PG type system to the DuckDB type system well, it will be very fast (see the sketch below).
- Data is stored on S3 in Parquet with Delta or Iceberg metadata. This is really cool: you don't need to push analytical data through the WAL, only metadata goes into the WAL. This means fast loading, at least in theory, plus compatibility with the whole Delta/Iceberg ecosystem.
- Once they build real-time ingest, you can push time series straight into this system and you don't need a second system like ClickHouse.
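To make the DuckDB-over-Parquet point concrete, here's a minimal standalone sketch using plain DuckDB from Python (not the project above; the bucket path is a placeholder):

    # Plain DuckDB querying Parquet on S3 directly: the data never passes
    # through Postgres or its WAL, which is the core of the design above.
    import duckdb

    con = duckdb.connect()
    con.execute("INSTALL httpfs;")  # one-time: S3/HTTP filesystem support
    con.execute("LOAD httpfs;")

    # Aggregate straight off the Parquet files: no load step required.
    rows = con.execute("""
        SELECT date_trunc('day', ts) AS day, count(*) AS events
        FROM read_parquet('s3://my-bucket/events/*.parquet')
        GROUP BY day
        ORDER BY day
    """).fetchall()
    print(rows)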


Data is also stored in the local filesystem. How does that compare with vanilla PG and/or S3-based storage?


Word is it's a serious effort on Microsoft's part. It's missing the MongoDB wire protocol piece, which they plan to open source as well. In the meantime, it's possible to use FerretDB for that.

I think the biggest use cases are big-data workloads and dev platforms that need application compatibility but find wrapping Atlas less attractive for some reason.


(Neon CEO) What region are you in?


Hi! London.


Oh, that's coming very, very soon! London and São Paulo. Definitely in Q1.


I remember having a hard time solving this in my first year of college. This is really impressive.

So I have this problem here: given that for positive integers m and n, 0 ≤ x[m + n] ≤ x[m] + x[n], I need to prove that the limit of x[n]/n exists. Hmm, okay, let's see how to approach this.

One approach: maybe I can show that a_n = x[n]/n is convergent by showing that it's a Cauchy sequence.

But that might be complicated.

Alternatively, perhaps I can use the subadditivity to bound a_n.

Wait, let's consider the following:

Since x is subadditive, x[m + n] ≤ x[m] + x[n].

Let me set m = n, then x[2n] ≤ x[n] + x[n] = 2x[n].

Therefore, x[2n]/(2n) ≤ x[n]/n.

So, a_{2n} ≤ a_n.

Similarly, x[3n] ≤ x[2n] + x[n] ≤ 2x[n] + x[n] = 3x[n].

So, x[3n]/(3n) ≤ x[n]/n.

Similarly, x[4n]/(4n) ≤ x[2n]/(2n) ≤ x[n]/n.

It seems like a_{kn} ≤ a_n for any integer k ≥ 1.

This suggests that as n increases, a_n tends to decrease or stay the same.

But wait, in the example where x[n] = n, a_n = 1 for all n, so it's constant.

In the example where x[n] = sqrt(n), a_n = 1/sqrt(n), which decreases to 0.

So, in some cases, a_n is decreasing, and in some cases, it's constant.

But in general, a_n is bounded below by 0 and is decreasing along powers of 2, for example.

Wait, from x[2n]/(2n) ≤ x[n]/n, and x[4n]/(4n) ≤ x[2n]/(2n), and so on, so along the sequence n, 2n, 4n, 8n, etc., a_n is decreasing.

Maybe I can use this to show that a_n has a limit.

Let me consider n = 2^k, for integer k.

Let b_k = x[2^k]/2^k.

Then, from x[2^{k+1}] ≤ x[2^k] + x[2^k] = 2x[2^k], so b_{k+1} = x[2^{k+1}]/2^{k+1} ≤ (2x[2^k])/2^{k+1} = x[2^k]/2^k = b_k.

So, b_{k+1} ≤ b_k, meaning that b_k is a decreasing sequence.

Since b_k ≥ 0, it must converge to some limit, say L.

Now, I need to show that x[n]/n approaches a limit for all n, not just along powers of 2.

Wait, my first instinct is to squeeze x[n]/n between neighboring b_k's, but bounding x[n] by x[2^k] or x[2^{k-1}] would require x to be monotone, and subadditivity alone doesn't give me that. Let me try a cleaner route.

Since x[n] ≥ 0, the set {x[n]/n} is bounded below, so L = inf over n of x[n]/n exists, and x[n]/n ≥ L for every n. In particular, liminf x[n]/n ≥ L.

For the upper bound, fix any m ≥ 1 and write n = qm + r with 0 ≤ r < m (division algorithm). Applying subadditivity q times gives x[n] = x[qm + r] ≤ q·x[m] + x[r], where I set x[0] = 0 to cover the case r = 0.

Dividing by n, and using qm ≤ n:

x[n]/n ≤ (qm/n)·(x[m]/m) + x[r]/n ≤ x[m]/m + (max of x[0], ..., x[m-1])/n.

With m fixed and n approaching infinity, the last term goes to 0, so limsup x[n]/n ≤ x[m]/m.

This holds for every m, so limsup x[n]/n ≤ inf over m of x[m]/m = L.

Combining the two bounds, L ≤ liminf x[n]/n ≤ limsup x[n]/n ≤ L, so x[n]/n converges to L = inf x[n]/n.

And since b_k = x[2^k]/2^k is a subsequence of x[n]/n, the L I defined earlier as the limit of b_k is this same value.

Therefore, x[n]/n approaches L as n approaches infinity, and the limit exists.

In fact, if I recall correctly, this is exactly what Fekete's lemma states.
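For reference, the standard statement:

\textbf{Fekete's lemma.} If $x[m+n] \le x[m] + x[n]$ for all $m, n \ge 1$, then
\[
  \lim_{n \to \infty} \frac{x[n]}{n} = \inf_{n \ge 1} \frac{x[n]}{n}.
\]
In general the right-hand side may be $-\infty$; here $x[n] \ge 0$, so the limit is finite.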

So, in conclusion, the limit of x[n]/n exists.

*Final Answer*

\boxed{\lim_{n \to \infty} \frac{x[n]}{n} \text{ exists}}


Neon CEO...

This is possible from the cost perspective due to the architecture of Neon and the ability to scale to 0. Happy to answer questions!


(Neon CEO) We have lots of examples of this. Here is one with Retool: https://neon.tech/blog/how-retool-uses-retool-and-the-neon-a...


A number of features stood out to me in this release:

1. Chipping away more at vacuum. Fundamentally, Postgres doesn't have an undo log and therefore has to have vacuum. It's a trade-off: fast recovery vs., well... having to vacuum. The unfortunate part about vacuum is that it adds load to the system exactly when the system needs all its resources. I hope one day people will stop knowing that vacuum exists; we are one step closer, but not there yet.

2. Performance gets better, not worse. Mark Callaghan blogs about MySQL and Postgres performance changes over time: MySQL keeps regressing while Postgres keeps improving.

https://x.com/MarkCallaghanDB https://smalldatum.blogspot.com/

3. JSON. Postgres keeps improving quality of life for interop with JS and TS.

4. Logical replication is becoming a super robust way of moving data in and out. This is very useful when you move data from one instance to another, especially if the version numbers don't match. Recently we have been using it to move data at 1 Gb/s. (A minimal setup sketch follows after this list.)

5. Optimizer. The better the optimizer, the less you think about the optimizer. According to the research community, SQL Server has the best optimizer. It's very encouraging that the PG optimizer gets better with every release.
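Since point 4 comes up a lot: here's a minimal sketch of setting up logical replication between two instances from Python. Hostnames, credentials, and table names are placeholders; the SQL (CREATE PUBLICATION / CREATE SUBSCRIPTION) is standard PostgreSQL:

    # Minimal logical replication setup, driven via psycopg2.
    import psycopg2

    # Source instance: publish the tables to replicate.
    src = psycopg2.connect("host=source.example.com dbname=app user=admin")
    src.autocommit = True
    with src.cursor() as cur:
        cur.execute("CREATE PUBLICATION my_pub FOR TABLE orders, customers;")

    # Target instance: subscribe. The schema must already exist here, and
    # this works across Postgres major versions (handy for upgrades).
    dst = psycopg2.connect("host=target.example.com dbname=app user=admin")
    dst.autocommit = True  # CREATE SUBSCRIPTION can't run in a transaction block
    with dst.cursor() as cur:
        cur.execute("""
            CREATE SUBSCRIPTION my_sub
            CONNECTION 'host=source.example.com dbname=app user=replicator'
            PUBLICATION my_pub;
        """)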


MySQL can be faster in certain circumstances (mostly range selects), but only if your schema and queries are designed to exploit InnoDB's clustered index.
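To make "designed to exploit the clustered index" concrete, here's an illustrative sketch (table, columns, and connection details are made up). InnoDB stores rows physically ordered by primary key, so a composite key that matches your range selects turns them into sequential reads:

    # Illustrative schema shaped for InnoDB's clustered index.
    import mysql.connector

    conn = mysql.connector.connect(
        host="db.example.com", user="app", password="secret", database="app"
    )
    cur = conn.cursor()

    cur.execute("""
        CREATE TABLE events (
            user_id    BIGINT    NOT NULL,
            created_at TIMESTAMP NOT NULL,
            payload    JSON,
            PRIMARY KEY (user_id, created_at)  -- InnoDB clusters rows on this key
        ) ENGINE=InnoDB
    """)

    # One user's recent events are adjacent on disk, so this range select
    # reads a few contiguous pages instead of random-accessing rows.
    cur.execute(
        "SELECT created_at, payload FROM events "
        "WHERE user_id = %s AND created_at >= NOW() - INTERVAL 7 DAY",
        (42,),
    )
    rows = cur.fetchall()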

But even then, in some recent tests I did, Postgres was less than 0.1 msec slower. And if the schema and queries were not designed with InnoDB in mind, Postgres had little to no performance regression, whereas MySQL had a 100x slowdown.

I love MySQL for a variety of reasons, but it’s getting harder for me to continue to defend it.

