
You're right in terms of the breadth of what's officially covered. But if you look at the features they both officially support, there are many examples where the GCP version is more reliable and usable than the AWS version. Even GKE is an example of this, despite the node pool creation outage we're discussing here; it's way better than EKS.

(Disclosure: I worked for Google, including GCP, for a few years ending in 2015. I don't work or speak for them now and have no inside info on this outage.)



I think you're going to have to back up a claim like this with some facts.

GKE is the exception, since it launched a couple of years before EKS. AWS clearly has way more services, and its features go way deeper than GCP's.

Just compare virtual machines and managed databases: AWS has roughly 2-3x more VM types (VMs with more than 4 TB of RAM, FPGAs, AMD EPYC, etc.), and its managed databases cover more than just MySQL and PostgreSQL. Dig into the details and you find capabilities you just can't get in GCP, like 16 read replicas, point-in-time recovery, backtrack, etc.
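(To make "point-in-time recovery" and read replicas concrete, here's a rough boto3 sketch of the RDS calls involved; the instance identifiers are made-up placeholders, not anything from a real account:)

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Spin up a new instance from an existing instance's continuous backups,
    # rewound to the latest restorable moment (identifiers are hypothetical).
    rds.restore_db_instance_to_point_in_time(
        SourceDBInstanceIdentifier="prod-mysql",
        TargetDBInstanceIdentifier="prod-mysql-restored",
        UseLatestRestorableTime=True,
    )

    # Add a read replica fed from the same source instance.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="prod-mysql-replica-1",
        SourceDBInstanceIdentifier="prod-mysql",
    )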

Disclaimer: I work for AWS but my opinions are my own.


Each platform has features the other doesn't, even though AWS has more overall.

Some of GCP's uniquely compelling features include live VM migration, which makes host reboots largely a non-event; the new life recently breathed into Google App Engine (both the flexible environment and the second-generation standard environment runtimes); the global load balancer with a single IP and no pre-warming; and Cloud Spanner.

In terms of breadth of feature coverage, I started my previous comment by agreeing that AWS is ahead, and I still stand by that. But if you randomly pick a feature they both offer at a level that purports to meet a given customer requirement, the GCP offering will frequently have advantages over the AWS equivalent.

Examples besides GKE: BigQuery is better regarded than Amazon Redshift, with less maintenance hassle. And EC2 instance, disk, and network performance is way more variable than GCE's, which generally delivers what it promises.

One bit of praise for AWS: when Amazon does document something, the doc is easier to find and understand, and it's less likely to be out of date in a way that no longer works. But GCP is more likely to have documented the thing in the first place, especially system-imposed limits.

To be clear, I want there to be three or four competitive and widely used cloud options. I just think GCP is now often the best of the major players in the cases where its scope meets customer needs.


Redshift is not a direct competitor to BigQuery; it's a relational data warehouse. BigQuery competes more directly with Athena, which is a managed version of Apache Presto, and my personal opinion is that Athena is way better than BigQuery because I can query data where it already sits in S3 (object storage), without having to load it into the warehouse first as you would with BigQuery.
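(For anyone who hasn't used Athena, the workflow is roughly this with boto3; the database, table, and bucket names below are made-up placeholders:)

    import time
    import boto3

    athena = boto3.client("athena", region_name="us-east-1")

    # Run SQL directly against files already sitting in S3, no load step;
    # "analytics", "web_logs", and the results bucket are hypothetical names.
    qid = athena.start_query_execution(
        QueryString="SELECT status, COUNT(*) AS hits FROM web_logs GROUP BY status",
        QueryExecutionContext={"Database": "analytics"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )["QueryExecutionId"]

    # Poll until the query finishes; results land in the output location.
    while True:
        status = athena.get_query_execution(QueryExecutionId=qid)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)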

Disk and network performance is extremely consistent on AWS as long as you use newer instance types and storage types. You can't reasonably compare the old EBS magnetic storage to the newer general purpose SSD and provisioned IOPS volume types, and likewise, newer instances get consistent, non-blocking 25 Gbps network performance.
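(As a concrete example of "provisioned IOPS": with an io1 volume you commit to an IOPS rate up front instead of taking whatever the hardware delivers. A rough boto3 sketch, with hypothetical size, zone, and IOPS values:)

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create a 100 GiB io1 volume with an explicit 5000 IOPS commitment
    # (all figures here are illustrative, not a recommendation).
    ec2.create_volume(
        AvailabilityZone="us-east-1a",
        Size=100,
        VolumeType="io1",
        Iops=5000,
    )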

I'm not so sure I would praise our documentation; it's one of the areas I wish we were better at. Some of the less-used services and features don't have excellent documentation, and in some cases you really have to figure things out on your own.

GCP is a pretty nice system overall, but most of the time when I see comparisons where GCP looks better, it's because the person making the comparison is comparing the AWS they remember from 5-6 years ago with the GCP of today, which would be like comparing the GAE of 2012 with the GAE of today.


The comments I made about Redshift vs. BigQuery and about disk/network performance reflect the current opinions of colleagues who use AWS extensively today (or did until recently), not 5-6 year old opinions. Even my own last use of AWS was maybe 2-3 years ago, when Redshift was AWS's closest competitor to BigQuery and when I saw the disk/network issues directly.

You're right that Athena seems like the current competitor to BigQuery. That's easy to overlook if you last made the comparison a couple of years ago (before Athena was introduced), and Redshift vs. BigQuery is still often the comparison people make. This is where Amazon's branding confuses the customer: so many similar but slightly different product niches, filled at different times by entirely different products with entirely unrelated names.

When adding features, GCP would usually fill an adjacent niche like "serverless Redshift" by adding a serverless mode to the existing product, or something like that, and the behavior would stay mostly the same. That's harder to overlook and less risky to try.

Meanwhile, when Athena was introduced, people who had compared Redshift and BigQuery and ruled out the former as too much hassle said, "Ah, GCP made Amazon introduce a serverless Redshift. But it's built on totally different technology. I wonder if it will be one of the good AWS products or one of the bad ones." (Yes, bad ones exist; Amazon WorkMail is under the AWS umbrella but basically ignored, to give one example.)

And then they went back to the rest of their day, since migrating to Athena (whether from Redshift or BigQuery) would not be worth the transition cost, and forgot about Athena entirely.

On the disk/network question: no, I didn't see performance problems with provisioned IOPS volume types, but that's beside the point. GCE's equivalent of EBS magnetic storage does deliver what it promises, at far less cost than the premium disk types, so there's no reason it isn't a fair comparison.

And for the "instance" part of my EC2 performance comment, I was referring to a noisy-neighbor problem where a newly created instance would sometimes have much worse CPU performance than promised, so deleting and recreating it was sometimes the fix. GCE does a much better job of delivering the promised CPU.

I'm glad AWS and GCP have lots of features, improve all the time, and copy each other when warranted. But I don't think the general thrust of my comparison has become invalid, even if my recent data skews toward GCP and my AWS data skews 2-3 years old. Only the specifics have changed (and the feature gap has narrowed with respect to the important features).


Presto is not an Apache project (although it is open source under the Apache License).



