Hacker News

If your database is not Open Source, then your marketing lingo needs to be more open, or else you'll make the same mistake as FoundationDB (which looked like vaporware).

As a proprietary service, you are now competing against Cloud Spanner, which (while people love the underdog) means you're toast, because they have Eric Brewer to hand-wave away their marketing lingo.

On the flip side, you are competing against Cockroach, but they are Open Source, so that puts you between a rock and a hard place. From previous comments of mine, you may know I don't think Cockroach has much of a future either, because Globally Consistent databases aren't going to cater to the necessary P2P future of the web (5B+ new people coming online, 100B+ IoT devices, the graph-enabled social web, Machine Learning, etc.), which is what we, http://gun.js.org/ , cater to. We just successfully ran load tests on low-end hardware doing 1.7K table inserts/sec across a federated system, and we plan on getting this up to 10K inserts/sec on cheap (if not free) hardware.

Why are these systems going to fail to pick up the market? Because the best of the best, both in engineering and as an Open Source community, RethinkDB (which I praise highly) couldn't. At the end of the day, the few companies that need globally consistent transactions will trust (for better or for worse) Cloud Spanner, and the others who want to roll their own infrastructure will try Cockroach but ultimately switch to RethinkDB in the end.

So on that note, as others have noted, don't use your /fantastic/ marketing opportunities (top of HN) to make false claims about being "industry first"; it won't help you gather a developer community. Use this time to win developers over like Firebase did (which itself now has its community scared of when/if Google will shut it down; those developers are now flooding to RethinkDB and ours, despite Firebase being one of the best - high praise for them as well, like Rethink).



> Globally Consistent databases aren't going to cater to the necessary P2P future of the web

Well, that's an interesting assertion. Why do you think that?


Because even a "3ms latency" (which, with respect to "global", is a problem other people have already commented on) can absolutely kill performance for IoT data that may be emitting thousands of updates a second.

Those systems are largely highly localized, and so /strong eventual consistency/ is more important than globally consistent blocking operations.

Also, again, with 5B+ people coming online, Master-Slave systems (even distributed ones) already have a huge bottleneck in the present day. P2P (master-master) systems will scale better in these settings.


I was more curious about the "necessary P2P future of the web" part.

I think there's an assumption here that most of the responsibility for storing the source of truth will move out to things like IoT devices (i.e. fog computing).

And sure, there will probably be a need for that. But regarding the assumption that most web services will go away, I don't think there's sufficient evidence to bet on it happening anytime reasonably soon. Data centers and public clouds will probably still be around for the next decade or two.


Twitter is spending over $15M/month on server costs alone to support 333M active monthly users.

Now compare to Pokemon Go's huge explosion of 20M daily users from a while ago.

This problem is only going to get worse with another 5B+ people coming online into the 2020s.

In order to scale, using (what you call) "fog computing" will be absolutely necessary. Cloud services will still be used, of course, but they will be built as P2P systems to take advantage of the "fog".

Cloud infrastructure will always be around, but how apps are built will be a fundamentally different architecture. When S3 goes down, like it did the other week, we can't suffer worldwide downtime - that is unacceptable.

Rethink's unfortunate failure to capitalize on this market is a signal that Master-Slave databases (even the best of the best) will have a very small role with respect to the total amount of data flowing through the internet.

My thoughts here: https://hackernoon.com/the-implications-of-rethinkdb-and-par...

As well as my Changelog podcast interview: https://changelog.com/podcast/236


Why do you think people will leave CockroachDB for RethinkDB?

I ask this as a long-time user of RethinkDB.


While Cockroach has more emphasis on being globally consistent than Rethink (which has more emphasis on realtime), they are both distributed Master-Slave systems. So:

(1) RethinkDB got good reviews/patches on the Jepsen tests; the recent Cockroach review wasn't as successful (although I'm sure they'll get patches and performance up).

(2) The convenience of the realtime updates and developer community friendliness is going to win over (from a social perspective) the types of startups/teams that choose to roll their own not-locked-in all-open-source infrastructures that they deploy to clouds.

I'm pretty strongly opinionated on these things: I think Firebase and RethinkDB nailed it, and other contenders (in those spaces, whether a Master-Slave service or an open source one) have hard battles to fight.


1. CockroachDB is still in beta.

2. I've been using RethinkDB for years and I've never found a use for the realtime updates. I think the benefit of that is mostly limited to chat apps, realtime collaboration, etc.


It's all based on use case, I guess. I spent the early part of my programming career building ERP and Accounting type systems, where real time updates are not factored into any design.

However, of late, I have been working on collaborative-type apps, including IoT device programming, and real-time updating is not just a luxury; it is expected. Indeed, we are seeing things like SSE (Server-Sent Events) being incorporated into the latest browser specs to support this.

Granted, unless you are using frameworks like Meteor, there is still a lot of work to be done to ease the integration between back-end server push and browser real-time display. WebSockets are great, but require a lot of tedious management at scale.
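To make the SSE point concrete: part of why it needs less management than WebSockets is that the wire format is trivially simple - newline-delimited fields over a long-lived HTTP response. Here's a minimal sketch; `formatSSE` is a hypothetical helper name of mine, but the `id`/`event`/`data` field names come from the SSE spec itself:

```javascript
// Sketch: the text/event-stream framing that the browser's EventSource
// API consumes. A frame is a few "field: value" lines ended by a blank line.
function formatSSE(eventName, payload, id) {
  let frame = '';
  if (id !== undefined) frame += `id: ${id}\n`;       // lets clients resume via Last-Event-ID
  if (eventName) frame += `event: ${eventName}\n`;    // named event type
  frame += `data: ${JSON.stringify(payload)}\n\n`;    // blank line terminates the frame
  return frame;
}

// Server side (e.g. Node's built-in http module), you just keep the
// response open and write frames to it:
//   res.writeHead(200, { 'Content-Type': 'text/event-stream' });
//   res.write(formatSSE('update', { story: 'Top HN post' }, 1));

// Browser side, no socket lifecycle to manage - EventSource reconnects itself:
//   const es = new EventSource('/stream');
//   es.addEventListener('update', e => console.log(JSON.parse(e.data)));
```

That built-in auto-reconnect (plus `Last-Event-ID` replay) is exactly the "tedious management" you otherwise hand-roll on top of WebSockets.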

But the thing is - once you start down the path of realtime updated apps, possibilities open up, and you begin to wonder how you used to program without it. For me, it all started when I knocked together this [0] real time update of Hacker News as a weekend project using RethinkDB for push updates, and Vue.js as the front end...

[0] - https://tophn.info
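For anyone curious how the RethinkDB side of something like this works: a changefeed from `table.changes()` emits documents of the shape `{ old_val, new_val }` (`old_val === null` for inserts, `new_val === null` for deletes). A sketch of folding those deltas into a client-side cache - `applyChange` is a hypothetical helper of mine, not part of the driver:

```javascript
// Fold one RethinkDB changefeed delta into a cache keyed by primary key ("id").
// { old_val: null, new_val: doc }  -> insert
// { old_val: doc,  new_val: null } -> delete
// { old_val: doc,  new_val: doc }  -> update/replace
function applyChange(cache, change) {
  const next = { ...cache };
  if (change.new_val === null) {
    delete next[change.old_val.id];           // document was removed
  } else {
    next[change.new_val.id] = change.new_val; // insert or replace
  }
  return next;
}

// Wiring it up (assumes the official rethinkdb driver and a running server):
//   const r = require('rethinkdb');
//   const cursor = await r.table('stories').changes().run(conn);
//   cursor.each((err, change) => { state = applyChange(state, change); });
//   ...then push `state` to the browser (e.g. over SSE or a WebSocket).
```

Once the server holds a live view like this, pushing it to a Vue.js front end is the easy part.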


Interesting, would you mind sharing more of what you are doing? Batch processing, or something? The category of "realtime collaboration" seems to be the broad catch-all I'm thinking of (todo lists/Trello, chat apps/Gitter, social networks/Facebook, search apps/Google, productivity suites/gDocs, recording apps/YouTube, automation tools/IFTTT), plus the hype around drones, IoT, ML, etc.

That would exclude banking apps, reports, etc. Could you expand on your/other uses that don't benefit from live updates?


I use it mainly for storing user data, error data, login info, etc. I can't imagine how realtime could be useful for that.


Ahh, that makes sense. Logs/status and such. Thanks!


No probs. I suspect a lot of organisations use a database mainly for user data.



