
The irony, when you realize that Linus Torvalds created git.

There's an interview with Torvalds where he says his daughter told him that, in the computer lab at her college, he is better known for Git than for Linux.

Clip from the interview: https://www.youtube.com/shorts/0wLidyXzFk8


The missing 10% is event normalization and event bubbling (Preact bubbles through the real DOM; React bubbles through its vDOM). Choose whichever fits your needs, but 100% compatibility with third-party libs is why I stick with React over Preact.


Best option for a whale to manipulate the price again.

The same page now says 58 services - just 23 minutes after your post. Seems this is becoming a larger issue.

When I first visited the page it said like 23 services, now it says 65

74 now. This is an extreme way of finding out just how many AWS services there really are!

82 now.

Looks like AWS detonated six sticks of dynamite under a house of cards...


Up to 104 now, with 33 services reported as having issues that have been resolved.

Maybe unrelated, but yesterday I went to pick up my package from an Amazon Locker in Germany, and the display said "Service unavailable". I'll wait until later today before I go and try again.

I wonder why a package locker needs connectivity to give you a package. Since your package can't be withdrawn again from a different location, partitioning shouldn't be an issue.

Generally speaking, it's easier to have computation (logic, state, etc.) centralized. If the designers didn't prioritize scenarios where decentralization helped, then centralization would've been the better option.

Just a couple of days ago in this HN thread [0] there were quite a few users claiming Hetzner is not an option as its uptime isn't as good as AWS's, hence the higher AWS pricing is worth the investment. Oh, the irony.

[0]: https://news.ycombinator.com/item?id=45614922


As a data point, I've been running stuff at Hetzner for 10 years now, in two datacenters (physical servers). There were brief network outages when they replaced networking equipment, and exactly ONE outage for hardware replacement, scheduled weeks in advance, with a 4-hour window and around 1-2h duration.

It's just a single data point, but for me that's a pretty good record.

It's not because Hetzner is miraculously better at infrastructure, it's because physical servers are way simpler than the extremely complex software and networking systems that AWS provides.


> physical servers are way simpler than the extremely complex software and networking systems that AWS provides.

Or, rather, it's your fault when the complex software and networking systems you deployed on top of those physical servers go wrong (:


Yes. Which is why I try to keep my software from being overly complex, for example by not succumbing to the Kubernetes craze.

Well, the complexity comes not from Kubernetes per se but from the fact that the problem it wants to solve (a generalized solution for distributed computing) is very hard in itself.

Only if you actually have a system complex enough to require it. A lot of systems that use Kubernetes are not complex enough to require it, but use it anyway. In that case Kubernetes does indeed add unnecessary complexity.

Except that k8s doesn't solve the problem of generalized distributed computing at all. (For that you need distributed fault-tolerant state handling which k8s doesn't do.)

K8s solves only one problem - the problem of organizational structure scaling. For example, when your Ops team and your Dev team have different product deadlines and different budgets. At this point you will need the insanity of k8s.


I am so happy to read that someone views Kubernetes the same way I do. For many years I have been surrounded by people who "kubernetes all the things", and that is absolute madness to me.

Yes, I remember when Kubernetes hit the scene and it was only used by huge companies who needed to spin-up fleets of servers on demand. The idea of using it for small startup infra was absurd.

As another data point, I run a k8s cluster on Hetzner (mainly for my own experience, as I'd rather learn on my pet projects vs production), and haven't had any Hetzner related issues with it.

So Hetzner is OK for the overly complex as well, if you wish to do so.


I love my k8s. I've spent 5 minutes per month on it over the past 8 years and get very reliable infra.

Do you work on k8s professionally outside of the project you’re talking about?

5 mins seems unrealistic unless you’re spending time somewhere else to keep up to speed with version releases, upgrades, etc.


I think it sounds quite realistic especially if you’re using something like Talos Linux.

I’m not using k8s personally but the moment I moved from traditional infrastructure (chef server + VMs) to containers (Portainer) my level of effort went down by like 10x.


I would say even if not using Talos, Argo CD or Flux CD together with Renovate really helps to simplify the recurring maintenance.

You've spent less than 8 hours total on kubernetes?

I agree. Even when Kubernetes is used in large environments, it is still cumbersome, verbose and overly complex.

What are the alternatives?

Right, who needs scalability? Each app should have a hard limit of users and just stop accepting new users when the limit is reached.

Yeah scalability is great! Let’s burn through thousands of dollars an hour and give all our money to Amazon/Google/Microsoft

When those pink slips come in, we’ll just go somewhere else and do the same thing!


You know that “scale” existed long before K8s - or even Borg - was a thing, right? I mean, how do you think Google ran before creating them?

Yes, and mobile phones existed before smartphones; what's the point? So far, in terms of scalability, nothing beats k8s. And from OpenAI and Google we also see that it even works for high-performance use cases such as LLM training with huge numbers of nodes.

If the complex software you deployed and/or configured goes wrong on AWS it's also your fault.

On the other hand, I had the misfortune of a hardware failure on one of my Hetzner servers. They got a replacement hard drive in fairly quickly, but there was still complete data loss on that server, so I had to rebuild it from scratch.

This was extra painful because I wasn't using one of the OSes blessed by Hetzner, so it required a remote install. Remote installs require a system that can run their Java web plugin, with a connection stable and fast enough not to time out. The only way I have reliably gotten them to work is from an ancient Linux VM, also running at Hetzner, with the oldest Firefox version I could find that still supported Java in the browser.

My fault for trying to use what they provide in a way that is outside their intended use, and props to them for letting me do it anyway.


That can happen with any server, physical or virtual, at any time, and one should be prepared for it.

I learned a long time ago that servers should be an output of your declarative server management configuration, not something that is the source of any configuration state. In other words, you should have a system where you can recreate all your servers at any time.

In your case, I would indeed consider starting with one of the OS base installs that they provide. Much as I dislike the Linux distribution I'm using now, it is quite popular, so I can treat it as a common denominator that my ansible can start from.


They allow netbooting to a recovery OS from which the disks can be provisioned via an ssh session too, for custom setups. Likely there are cases that require the remote "keyboard", but I wanted to mention that.

Cloud marketing and career incentives seem to have instilled in the average dev that MTBF for hardware is in days rather than years.

MTBF?

Mean time between failures

Mean Time Between Failures.

Do you monitor your product closely enough to know that there weren't other brief outages? E.g. something on the scale of unscheduled server restarts, and minute-long network outages?

I personally do, through status monitors at larger cloud providers at 30-second resolution, and have never noticed downtime. They will sometimes drop ICMP though, even though the host is alive and kicking.
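A probe like that is easy to replicate; a minimal sketch in Python (stdlib only; the URL and interval are placeholders, not my actual setup):

    import time
    import urllib.request

    TARGET = "https://example.com/health"  # hypothetical endpoint
    INTERVAL = 30  # seconds, i.e. the 30-sec resolution mentioned above

    while True:
        start = time.time()
        try:
            with urllib.request.urlopen(TARGET, timeout=10) as resp:
                status = str(resp.status)
        except Exception as exc:
            status = f"DOWN ({exc})"
        # One timestamped line per probe; feed this into alerting of your choice.
        print(f"{time.strftime('%Y-%m-%dT%H:%M:%S')} {TARGET} -> {status}")
        time.sleep(max(0, INTERVAL - (time.time() - start)))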

Surprised they allow ICMP at all

why does this surprise you?

actually, why do people block ICMP? I remember in 1997-1998 there were some Cisco ICMP vulnerabilities and people started blocking ICMP then and mostly never stopped, and I never understood why. ICMP is so valuable for troubleshooting in certain situations.


Security through obscurity, mostly. I don't know who continues to push the advice to block ICMP without a valid technical reason, since at best, if you tilt your head and squint, you could almost maybe see a (very new) script kiddie being defeated by it.

I've rarely actually seen that advice anywhere, more so 20 years ago than now but people are still clearly getting it from circles I don't run in.


I don’t disagree. I am used to highly regulated industries where ping is blocked across the WAN

I do. Routers, switches, and power redundancy are solved problems in datacenter hardware. Network outages rarely occur because of these systems, and if any component goes down, there's usually an automatic failover. The only thing you might notice is TCP connections resetting and reconnecting, which typically lasts just a few seconds.

Of course. It's a production SaaS, after all. But I don't monitor with sub-minute resolution.

I do, for some time now, on a scale of around 20 hosts in their cloud offering. No restarts or network outages. I do see "migrations" from time to time (a VM migrating to different hardware, I presume), but without impact on metrics.

Having run bare-metal servers for a client + plenty of VMs pre-cloud, you'd be surprised how bloody obvious that sort of thing is when it happens.

All sorts of monitoring gets flipped.

And no, there generally aren't brief outages on normal servers unless you caused them yourself.

I did have someone accidentally shut down one of the servers once though.


To stick to the above point, this wasn't a minute-long outage. If you care about seconds- or minutes-long outages, you monitor. Running on AWS, Hetzner, OVH, or a Raspberry Pi in a shoe box makes no difference.

7 years, 20 servers, same here.

When AWS is down, everybody knows it. People don’t really question your hosting choice. It’s the IBM of cloud era.

Yes, but those days are numbered. For many years AWS was in a league of its own. Now they’ve fallen badly behind in a growing number of areas and are struggling to catch up.

There’s a ton of momentum associated with the prior dominance, but between the big misses on AI, a general slow pace of innovation on core services, and a steady stream of top leadership and engineers moving elsewhere they’re looking quite vulnerable.


Can you throw out an example or two, because in my experience, AWS is the 'it just works' of the cloud world. There's a service for everything and it works how you'd expect.

I'm not sure what feature they're really missing, but my favorite is the way they handle AWS Fargate. The other cloud providers have similar offerings but I find Fargate to have almost no limitations when compared to the others.


You’ve given a good description of IBM for most of the 80s through the 00s. For the first 20 years of that decline “nobody ever got fired for buying IBM” was still considered a truism. I wouldn’t be surprised if AWS pulls it off for as long as IBM did.

I think that the worst thing that can happen to an org is to have that kind of status ("nobody ever got fired for buying our stuff" / "we're the only game in town").

It means no longer being hungry. Then you start making mistakes. You stop innovating. And then you slowly lose whatever kind of edge you had, but you don't realize that you're losing it until it's gone


Unfortunately I think AWS is there now. When you talk to folks there they don’t have great answers to why their services are behind or not as innovative as other things out there. The answer is basically “you should choose AWS because we’re AWS.” It’s not good.

I couldn't agree more; there was clearly a big shift when Jassy became CEO of Amazon as a whole and Charlie Bell left (which is also interesting, because it's not like Azure is magically better now).

The improvements to core services at AWS haven't really happened at the same pace post-COVID as they did before, but that could also have something to do with the overall maturity of the ecosystem.

Although it's also largely the case that other cloud providers have also realized that it's hard for them to compete against the core competency of other companies, whereas they'd still be selling the infrastructure the above services are run on.


Looks like you're being downvoted for saying the quiet bit out loud. You're not wrong though.

Or because people don’t agree with “days are numbered”.

As much as I might not like AWS, I think they’ll remain #1 for the foreseeable future. Despite the reasons the guy listed.


Given recent earnings and depending on where things end up with AI it’s entirely plausible that by the end of the decade AWS is the #2 or #3 cloud provider.

AWS' core advantage is price. No one cares if they are "behind on AI" or "the VP left." At the end of the day they want a cheap provider. Amazon knows how to deliver good-enough quality at discount prices.

That story was true years ago but I don’t know that it rings true now. AWS is now often among the more expensive options, and with services that are struggling to compete on features and quality.

That is 100% true. You can't be fired for picking AWS... but I doubt it's the best choice for most people. Sad but true.

Schrödinger's user:

Simultaneously too confused to make their own UX choices, but smart enough to understand your backend infrastructure well enough to know why it doesn't work, and to excuse you for it.


The morning national TV news (BBC) was interrupted with this as breaking news, about how many services (specifically Snapchat, for some reason) are down because of problems with "Amazon's Web Services, reported on DownDetector".

I liked your point though!


Well, at that level of user they just know "the internet is acting up this morning"

I thought we didn't like when things were "too big to fail" (like the banks being bailed out because if we didn't the entire fabric of our economy would collapse; which emboldens them to take more risks and do it again).

A typical manager/customer understands just enough to ask their inferiors to make their f--- cloud platform work, why haven't you fixed it yet? I need it!

In technically sophisticated organizations, this disconnect simply floats to higher levels (e.g. CEO vs. CTO rather than middle manager vs. engineer).


You can't be fired, but you burn through your runway quicker. No matter which option you choose, there is some exothermic oxidative process involved.

AWS is smart enough to throw you a few mill credits to get you started.

MILL?!

I only got €100.000, bound to a year, then a 20% discount on spend in the next year.

(I say "only" because that certainly would be a sweeter pill, €100.000 in "free" credits is enough to make you get hooked, because you can really feel the free-ness in the moment).


Mille is thousand in Latin so they might have meant a few thousand dollars.

Every one of the big hyperscalers has a big outage from time to time.

Unless you lose a significant amount of money per minute of downtime, there is no incentive to go multicloud.

And multicloud has its own issues.

In the end, you live with the fact that your service might be down a day or two per year.


> In the end, you live with the fact that your service might be down a day or two per year.

This is hilarious. In the 90s we used to have services running on machines in cupboards that would go down because the cleaner unplugged them. Even then, a day or two per year would have been unacceptable.


When we looked at this, our conclusion was not multi-cloud but local resiliency with cloud augmentation. We still had our own small data center.

Usually, 2 founders creating a startup can't fire each other anyway so a bad decision can still be very bad for lots of people in this forum

On the other side of that coin, I am excited to be up and running while everyone else is down!

On one hand it allows you to shift the blame, but on the other hand it shows a disadvantage of hyper-centralization: if AWS is down, too many important services are down at the same time, which makes it worse. E.g. when AWS is down it's important to have communication/monitoring services UP so engineers can discuss and coordinate workarounds and have good visibility, but Atlassian was (is) significantly degraded today too.

Facebook had a comically bad outage a few years ago wherein the internal sign-in, messaging, and even server keycard access went down

https://en.wikipedia.org/wiki/2021_Facebook_outage#Impact

Somewhat related tip of the day: don't host your status page on a subdomain of your main site. Ideally, host it with a different provider entirely.


100%. When AWS was down, we'd say "AWS is down!", and our customers would get it. Saying "Hetzner is down!" raises all sorts of questions your customers aren't interested in.

I've run a production application on Hetzner for a client for almost a decade, and I don't think I have ever had to tell them "Hetzner is down", apart from planned maintenance windows.

A bold strategy to think they'll never have an outage though, right? Maybe even a little naive and a little arrogant...

No provider is better than two providers.

Hosting on second- or even third-tier providers allows you to overprovision and get much better redundancy, provided your solution is architected from the ground up in a vendor-agnostic way. Hetzner is dirt cheap, and there are countless cheap and reliable providers spread around the globe (around Europe, in my case) to host a fleet of stateless containers that never fail simultaneously.

Stateful services are much more difficult, but replication and failover is not rocket science. 30 minutes of downtime or 30 seconds of data loss rarely kill businesses. On the contrary, unrealistic RTOs and RPOs are, in my experience, more dangerous, either as increased complexity or as vendor lock-in.

Customers don't expect 100% availability and no one offers such SLAs. But for most businesses, 99.95% is perfectly acceptable, and it is not difficult to have less than 4h/year of downtime.


The point seems to be not that Hetzner will never have an outage, but rather that they have a track record of not having outages large enough for everyone to be affected.

Seems like the large cloud providers, including AWS, go down quite regularly in comparison, and at such a scale that everything breaks for everyone involved.


> The point seems to be not that Hetzner will never have an outage, but rather that they have a track record of not having outages large enough for everyone to be affected.

If I am affected, I want everyone to be affected, from a messaging perspective


Okay, that helps for the case when you are affected. But what about the case when you are not affected and everyone else is? Doesn't that seem like good PR?

Take the hit of being down once every 10 years compared to being up for the remaining 9 that others are down.


To back up this point, currently BBC News have it as their most significant story, with "live" reporting: https://www.bbc.co.uk/news/live/c5y8k7k6v1rt

This is alongside "live" reporting on the Israel/Gaza conflict as well as news about Epstein and the Louvre heist.

This is mainstream news.


I like how their headline starts with Snapchat and Roblox being affected.

Actually, I am keen to know how Roblox got impacted. Following the terrible Halloween outage in 2021, they posted two years ago about migrating to a cell-based architecture in https://corp.roblox.com/newsroom/2023/12/making-robloxs-infr...

Perhaps some parts of the migration haven't been completed, or there is still a central database in us-east-1.


The journalist found out about it from their tween.

They're the only apps English people are allowed to use, the rest of the internet is banned by Ofcom. /s

That depends on the service. Far from everyone is on their PC or smartphone all day, and even fewer care about this kind of news.

Which eventually leads to the headline "AWS down indefinitely, society collapses".

Amazon is up, what are they doing?

And yet they still all activate their on-call people (wait, why do we have them if we're on the cloud?) to do... nothing at all.

Most people don't even know AWS exists.

Non-techies don't. Here's how CNN answered "What is AWS?":

“Amazon Web Services (AWS) is Amazon’s internet based cloud service connecting businesses to people using their apps or online platforms.”

Uh.. yeah.


Kudos to the Globe/AP for getting it right:

> An Amazon Web Services outage is causing major disruptions around the world. The service provides remote computing services to many governments, universities and companies, including The Boston Globe.

> On DownDetector, a website that tracks online outages, users reported issues with Snapchat, Roblox, Fortnite online broker Robinhood, the McDonald’s app and many other services.


That's actually a fairly decent description for the non-tech crowd and I am going to adopt it, as my company is in the cloud native services space and I often have a problem explaining the technical and business model to my non-technical relatives and family - I get bogged down in trying to explain software defined hardware and similar concepts...

I asked ChatGPT for a succinct definition, and I thought it was pretty good:

“Amazon Web Services (AWS) is a cloud computing platform that provides on-demand access to computing power, storage, databases, and other IT resources over the internet, allowing businesses to scale and pay only for what they use.”


For us techies, yes, but to regular folks that is just as good as our usual technical gobbledy-gook - most people don't differentiate between a database and a hard drive.

You make a good point.

This part:

    > access to computing power, storage, databases, and other IT resources
could be simplified to: access to computer servers

Most people who know little about computers can still imagine a giant mainframe they saw in a movie with a bunch of blinking lights. Not so different, visually, from a modern data center.


Ah, yes, servers. I have seen those at Chili's and TGI Fridays!

It's the difference between connecting your home to the grid to get electricity vs having your own generator.

It's the same as having a computer room but in someone else's datacentre.


This one's great too, thanks.

You can argue about Hetzner's uptime, but you can't argue about Hetzner's pricing, which is hands down the best there is. I'd rather go with Hetzner and cobble together some failover than pay AWS extortion.

For the price of AWS you could run Hetzner plus a second provider for resiliency and still make large savings.

Your margin is my opportunity indeed.


I switched to netcup for an even cheaper private VPS for personal, noncritical hosting. I'd heard of netcup being less reliable, but so far 4+ months of uptime and no problems. Europe region.

Hetzner has the better web interface and supposedly better uptime, but I've had no problems with either. Web interface not necessary at all either when using only ssh and paying directly.


I am on Hetzner with a primary + backup server and on Netcup (Vienna) with a secondary. For DNS I am using ClouDNS.

I think I am more distributed than most of the AWS folks and it is still way cheaper.


I used netcup for 3 years straight for some self hosting and never noticed an outage. I was even tracking it with smokeping so if the box disappeared I would see it but all of the down time was mine when I rebooted for updates. I don't know how they do it but I found them rock solid.

I've been running my self-hosting stuff on Netcup for 5+ years and I don't remember any outages. There probably were some, but they were not significant enough for me to remember.

netcup is fine unless you have to deal with their support, which is nonexistent. Never had any uptime issues in the two years I've been using them, but friends had issues. Somewhat hit or miss I suppose.

Exactly. Hetzner is the equivalent of the original Raspberry Pi. It might not have all the fancy features, but it delivers, at a price that essentially unblocks you and allows you to do things you wouldn't be able to do otherwise.

They've been working pretty hard on those extra features. Their load balancing across locations is pretty decent for example.

> I'd rather go with Hetzner and cobble together some failover than pay AWS extortion.

Comments like this are so exaggerated that they risk moving the goodwill needle back to where it was before. Hetzner offers no service similar to DynamoDB, IAM or Lambda. If you are going to praise Hetzner as a valid alternative during a DynamoDB outage caused by DNS configuration, you would need to a) argue that Hetzner is a better option regarding DNS outages, and b) argue that Hetzner is a preferable option for those who use serverless offers.

I say this as a long-time Hetzner user. Hetzner is indeed cheaper, but don't pretend that Hetzner lets you click your way into a highly-available NoSQL data store. You need a non-trivial amount of your own work to develop, deploy, and maintain such a service.


> but don't pretend that Hetzner lets you click your way into a highly-available NoSQL data store.

The idea that you can click your way to a highly available, production-configured anything in AWS - especially involving DynamoDB, IAM and Lambda - is something I've only heard from people who've done AWS quickstarts but never run anything at scale in AWS.

Of course nobody else offers AWS products, but people use AWS for their solutions to compute problems and it can be easy to forget virtually all other providers offer solutions to all the same problems.


>The idea that you can click your way to a highly available, production-configured anything in AWS - especially involving DynamoDB, IAM and Lambda

With some services I'd agree with you, but DynamoDB and Lambda are easily two of their 'simplest' services to configure and understand, and two of the ones that scale the most easily. IAM roles can be decently complicated, but that's really up to the user. If it's just 'let the Lambda talk to the table', it's simple enough.

S3/SQS/Lambda/DynamoDB are the services I'd consider the 'barebones' of the cloud. If you don't have all of those, you're not a cloud provider, you're just another server vendor.
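To illustrate how small the "let the Lambda talk to the table" case really is, here is a minimal handler sketch in Python with boto3 (the table name, schema, and TABLE_NAME environment variable are made up; the function's IAM role just needs dynamodb:PutItem on the table):

    import os
    import uuid

    import boto3

    # Created once per container, reused across invocations.
    table = boto3.resource("dynamodb").Table(os.environ.get("TABLE_NAME", "events"))

    def handler(event, context):
        """Persist the incoming event and return its generated id."""
        item_id = str(uuid.uuid4())
        table.put_item(Item={"pk": item_id, "payload": str(event)})
        return {"statusCode": 200, "body": item_id}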


> Lambda are easily two of their 'simplest'

Not if you want to build something production-ready. Even a simple thing like static IP ingress for a Lambda is very complicated. The only AWS way to do it is Global Accelerator -> Application Load Balancer -> VPC Endpoint -> API Gateway -> Lambda!

There are so many limits on everything that it is very hard to run production workloads without painful time wasted re-architecting around them, and the support teams are close to useless for raising any limits.

Just in the last few months I have hit limits on CloudFormation stack size, ALB rules, API Gateway custom domains, Parameter Store size, and on and on.

That is not even touching on the laughably basic tooling both SAM and CDK provide for local development if you want to work with Lambda.

Sure, Firecracker is great, the cold starts are not bad, and there isn't anybody even close in the cloud. Azure Functions is unspeakably horrible, Cloud Run is just meh. Most open-source stacks are either super complex, like Knative, or find it quite hard to reach the same cold-start performance.

We are stuck with AWS Lambda with nothing better, yes, but oh so many times I have come close to just giving up and migrating to Knative despite the complexity and performance hit.


>Not if you want to build something production-ready.

>>Gives a specific edge case about static IPs and doing a serverless API backed by lambda.

The most naive solution you'd use on any non-cloud vendor - just have a proxy with a static IP that routes traffic wherever it needs to go - would also work on AWS.

So if you think AWS's solution sucks, why not just go with that? What you described doesn't even sound complicated when you think of the networking magic behind the scenes that will take place if you ever do scale to 1 million TPS.


> Production ready

Don't know what you think it should mean, but for me it means:

1. Declarative IaC in CF/Terraform

2. Fully automated recovery that can achieve RTO/RPO objectives

3. Be able to do blue/green, percentage-based, or other rollouts

Sure, I can write Ansible scripts, have custom EC2 images run HAProxy and multiple nginx load balancers in HA as you suggest, or host all that on EKS or a dozen other "easier" solutions.

At that point, why bother with Lambda? What is the point of being cloud native and serverless if you have to literally put a few VMs/pods in front to handle all traffic? Might as well host the app runtime there too.

> doesn't even sound complicated

Because you need a full-time resource who is an AWS architect, keeps up with release notes and documentation or training, and constantly works to scale your application - because every single component has a dozen quotas/limits and you will hit them - it is complicated.

If you spend a few million a year on AWS, then spending 300k on an engineer to just do AWS is perhaps feasible.

If you spend a few hundred thousand on AWS as part of a mix of workloads, it is not easy or simple.

The engineering of AWS, impressive as it may be, says nothing about the products being offered. There is a reason why Pulumi, SST, and AWS SAM itself exist.

Sadly SAM is so limited I had to rewrite everything in CDK in a couple of months. CDK is better, but I am finding that I have to monkey-patch around CDK's limits with SDK code now; while possible, the SDK code will not generate CloudFormation templates.


> Don't know what you think it should mean, but for me it means

I think your inexperience is showing, if that's what you mean by "production-ready". You're making a storm in a teacup over features that you onboard automatically if you go through an intro tutorial, and "production-ready" typically means way more than a basic run-of-the-mill CI/CD pipeline.

As is so often the case, the most vocal online criticism comes from those who have the least knowledge and experience of the topic they are railing against, and their complaints mainly boil down to criticizing their own inexperience and ignorance. There is plenty to criticize AWS for, such as cost and vendor lock-in, but being unable and unwilling to learn how to use basic services is not it.


> Even a simple thing like static IP ingress for a Lambda is very complicated.

Explain exactly what scenario you believe requires you to put a Lambda behind a static IP.

In the meantime, I recommend you learn how to invoke a Lambda, because a static IP is something that is extremely hard to justify.


Try telling that to customers who can only do outbound API calls to whitelisted IP addresses

When you are working with enterprise customers or integration partners - it doesn't even have to be regulated sectors like finance or healthcare - these are basic asks you cannot get away from.

People want to be able to whitelist your egress and ingress IPs or pin certificates. It is not up to me to judge the efficacy of these rules.

I don’t make the rules of the infosec world , I just follow them.


> Try telling that to customers who can only do outbound API calls to whitelisted IP addresses

Alright, if that's what you're going with, then you can just follow an AWS tutorial:

https://docs.aws.amazon.com/lambda/latest/dg/configuration-v...

Provision an Elastic IP to get your static IP address, set up a NAT gateway to handle the traffic, and plug the Lambda into the NAT gateway.

Do you think this qualifies as very complicated?
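For what it's worth, that tutorial collapses into a few lines of CDK. A sketch in Python, if I have the constructs right - the EIP allocation id and asset path are placeholders:

    from aws_cdk import Stack, aws_ec2 as ec2, aws_lambda as _lambda
    from constructs import Construct

    class StaticEgressStack(Stack):
        def __init__(self, scope: Construct, id: str, **kwargs) -> None:
            super().__init__(scope, id, **kwargs)

            # One NAT gateway pinned to a pre-allocated Elastic IP: all egress
            # from the private subnets leaves via that single static address.
            vpc = ec2.Vpc(
                self, "Vpc",
                max_azs=2,
                nat_gateways=1,
                nat_gateway_provider=ec2.NatProvider.gateway(
                    eip_allocation_ids=["eipalloc-0123456789abcdef0"],  # placeholder
                ),
            )

            # A Lambda placed in the private subnets routes outbound calls
            # through the NAT gateway, so partners can whitelist that EIP.
            _lambda.Function(
                self, "Fn",
                runtime=_lambda.Runtime.PYTHON_3_12,
                handler="index.handler",
                code=_lambda.Code.from_asset("src"),  # placeholder asset path
                vpc=vpc,
                vpc_subnets=ec2.SubnetSelection(
                    subnet_type=ec2.SubnetType.PRIVATE_WITH_EGRESS,
                ),
            )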


This architecture[1] requires setting up 2 NAT gateways (one in each AZ), a routing table, an Internet Gateway, 2 Elastic IPs, and also the VPC. Since, as before, we cannot use Function URLs for the Lambda, we will still need the API Gateway to make HTTP calls.

The only part we are swapping out is `GA -> ALB -> VPC` for `IG -> Router -> NAT -> VPC`.

Is it any simpler? Doesn't seem like it to me.

Going the NAT route also means you need intermediate networking skills to handle a routing table (albeit a simple one); half the developers of today have never used iptables or heard of chaining rules.

---

I am surprised at the amount of pushback on a simple point which should be painfully obvious.

AWS (Azure/GCP are no different) has become overly complex, with no first-class support for higher-order abstractions, and framework efforts like SAM or even CDK seem to have gotten little love at all in the last 4-5 years.

Just because they offer and sell all these components to be used independently doesn't mean they should not invest in and provide higher-order abstractions for people with neither the bandwidth nor the luxury to be a full-time "Cloud Architect".

There is a reason why today Vercel, Render, Railway and others are popular despite mostly sitting on top of AWS.

On Vercel the same feature would be[2] quite simple. They use the exact solution you suggest on top of the AWS NAT gateway, but the difference is that I don't have to know about or manage it; that's handled by the large professional engineering team with networking experience at Vercel.

There is no reason AWS could not have built Vercel-like features on top of their offerings, or could not do so now.

At some point, small to midsize developers will avoid direct AWS, either by choosing to set up Hetzner/OVH bare machines or, with a bit more budget, colo with Oxide[3], or, more likely, by just sticking to Vercel- and Railway-style platforms.

I don't know how that will impact AWS - we will all still use them - however, a ton of small customers paying close to rack rate is definitely much, much higher margin than what Vercel pays AWS for the same workload.

--

[1] https://docs.aws.amazon.com/prescriptive-guidance/latest/pat...

[2] https://vercel.com/docs/connectivity/static-ips

[3] Would be rare; obviously only if they have the skills and experience to do so.


> With some services I'd agree with you, but DynamoDB and Lambda are easily two of their 'simplest' services to configure and understand, and two of the ones that scale the most easily. IAM roles can be decently complicated, but that's really up to the user. If it's just 'let the Lambda talk to the table', it's simple enough.

We agree, but also, I feel like you're missing my point: "let the Lambda talk to the table" is what quickstarts produce. To make a Lambda talk to a table at scale in production, you'll want to set up your alerting and monitoring to notify you when you're getting close to your service limits.

If you're not hitting service limits/quotas, you're not running even close to running at scale.


> The idea that you can click your way to a highly available, production-configured anything in AWS - especially involving DynamoDB, IAM and Lambda - is something I've only heard from people who've done AWS quickstarts but never run anything at scale in AWS.

I'll bite. Explain exactly what work you think you need to do to get your pick of service running on Hetzner with fault tolerance equivalent to, say, a DynamoDB Global Table created with the defaults.
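For reference, "a Global Table with the defaults" is roughly this much work on the AWS side - a boto3 sketch, with the table name and regions chosen arbitrarily:

    import boto3

    ddb = boto3.client("dynamodb", region_name="us-east-1")

    # A single-region table first; streams are required for replication.
    ddb.create_table(
        TableName="orders",
        AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
        KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
        BillingMode="PAY_PER_REQUEST",
        StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_AND_OLD_IMAGES"},
    )
    ddb.get_waiter("table_exists").wait(TableName="orders")

    # One call adds a replica region; AWS handles the multi-master
    # replication and conflict resolution behind the scenes.
    ddb.update_table(
        TableName="orders",
        ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],
    )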


Are you Netflix? Because if not, there's a 99% probability you don't need any of those AWS services and just have a severe case of shiny-object syndrome in your organisation.

Plenty of heavy-traffic, high-redundancy applications exist without the need for AWS's (or any other cloud provider's) overpriced "bespoke" systems.


To be honest, I don't trust myself to run an HA PostgreSQL setup with correct backups without spending exorbitant effort investigating everything (weeks/months) - do you? I'm not even sure what effort it would take. I can't remember the last time I worked with an unmanaged DB in prod where I did not have a dedicated DBA/sysadmin. And I've been doing this for 15 years now. AFAIK Hetzner offers no managed database solution. I know they offer a load balancer, so there's that at least.

At some point in the scaling journey bare metal might be the right choice, but I get the feeling a lot of people here trivialize it.


If you're not Netflix, then just sudo yum install postgresql, pg_dump every day, and upload to S3. Has worked for me for 20 years at various companies, side projects, startups…
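The whole routine is a few lines; a sketch in Python, assuming pg_dump on PATH, configured boto3 credentials, and made-up database and bucket names:

    import subprocess
    from datetime import date

    import boto3

    DB = "appdb"                # hypothetical database name
    BUCKET = "my-db-backups"    # hypothetical S3 bucket

    # Dump in PostgreSQL's compressed custom format, then ship it off-box.
    dump_file = f"/tmp/{DB}-{date.today()}.dump"
    subprocess.run(["pg_dump", "--format=custom", "--file", dump_file, DB], check=True)
    boto3.client("s3").upload_file(dump_file, BUCKET, f"postgres/{DB}-{date.today()}.dump")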

> If you're not Netflix, then just sudo yum install postgresql, pg_dump every day, and upload to S3.

Database services such as DynamoDB support a few backup strategies out of the box, including continuous backups. You just need to flip a switch and never bother about it again.
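The "switch" here is literally one API call; a boto3 sketch, with a hypothetical table name:

    import boto3

    # Point-in-time recovery: restore to any second in the last 35 days,
    # with no backup scheduling or storage management on your side.
    boto3.client("dynamodb").update_continuous_backups(
        TableName="orders",
        PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
    )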

> Has worked for me for 20 years at various companies, side projects, startups …

That's perfectly fine. There are still developers who don't use version control at all. Some old habits die hard, even when the whole world has moved on.


What happens when the server goes down? How do you update it?

you stand up another db server and load the last good dump into it i suppose

If it requires weeks/months to sort out setting that up plus backups, then you need a new ops person, as that's insane.

If you're doing it yourself, learn Ansible, you'll do it once and be set forever.

You do not need "managed" database services. A managed database is no different from apt install postgresql followed by a scheduled backup.

It genuinely is trivial; people seem to have this impression there's some sort of unique special sauce going on at AWS when there really isn't.


That doesn’t give you high availability; it doesn’t give you monitoring and alerting; it doesn’t give you hardware failure detection and replacement; it doesn’t solve access control or networking…

Managed databases are a lot more than apt install postgresql.


> If you're doing it yourself, learn Ansible, you'll do it once and be set forever.

> You do not need "managed" database services. A managed database is no different from apt install postgresql followed by a scheduled backup.

Genuinely no disrespect, but these statements really make it seem like you have limited experience building an HA scalable system. And no, you don't need to be Netflix or Amazon to build software at scale, or require high availability.


Backups with wal-g and recurring pg_dump are indeed trivial. (Modulo an S3 outage lasting so long that your WAL files fill up the disk and you corrupt the entire database.)

It's the HA part, especially with a high-volume DB that's challenging.


But that's the thing - if I have an ops guy who can cover this, then sure, it makes sense - but who does at an early stage? As a semi-competent dev I can set up a Terraform infra and be relatively safe with RDS. I could maybe figure out how to do it on my own in some time - but I don't know what I don't know - and I don't want to spend a weekend debugging a production DB outage because I messed up the replication setup or something. Maybe I'm getting old, but I just don't have the energy to deal with that :)

From your comment, you don't even have the faintest idea of what the problem domain is. No wonder you think you know better.

> Are you Netflix? Because if not, there's a 99% probability you don't need any of those AWS services and just have a severe case of shiny-object syndrome in your organisation.

I think you don't even understand the issue you are commenting on. It's irrelevant whether you are Netflix or some guy playing with a tutorial. One of the key traits of serverless offerings is how they eliminate the need to manage and maintain a service, or even to worry about whether you have enough computational resources. You click a button to provision everything, you configure your clients to consume that service, and you are done.

If you stop to think about the amount of work you need to invest to even arrive at a point where you can point a client at a service, you'll see the value of serverless offerings.

Ironically, it's the likes of Netflix who can put together a case against using serverless offerings. They can afford to have their own teams managing their own platform services at the service levels they are willing to pay for. For everyone else, unless you are in the business of managing and tuning databases, or you are heavily motivated to save pocket change on a cloud provider bill, the decision process is neither that clear nor does it favour running your own services.


> Plenty of heavy-traffic, high-redundancy applications exist without the need for AWS's (or any other cloud provider's) overpriced "bespoke" systems.

And almost all of them need a database, a load balancer, maybe some sort of cache. AWS has got you covered.

Maybe some of them need some async periodic reporting tasks. Or to store massive files or datasets and do analysis on them. Or transcode video. Or transform images. Or run another type of database for a third party piece of software. Or run a queue for something. Or capture logs or metrics.

And on and on and on. AWS has got you covered.

This is Excel all over again. "Excel is too complex and has too many features, nobody needs more than 20% of Excel. It's just that everyone needs a different 20%".


You're right, AWS does have you covered. But that doesn't mean that's the only way of doing it. Load balancing is insanely easy to do yourself, databases even easier. Caching, ditto.

I think a few people who claim to be in devops could do with learning the basics of how things like Ansible can help them, as there are a fair few people who seem to be under the impression that AWS is the only and the best option, which, unless you're FAANG, is rarely the case.


You can spin up a redundant database setup with backups and monitoring and automatic failover in 10 minutes (the time it takes in AWS)? And maintain it? If you've done this a few times before and have it highly automated, sure. But let's not pretend it's "even easier" than "insanely easy".
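For comparison, the AWS version really is roughly one call - a boto3 sketch with placeholder identifiers and sizes:

    import boto3

    rds = boto3.client("rds", region_name="eu-central-1")

    # MultiAZ provisions a synchronous standby with automatic failover;
    # BackupRetentionPeriod enables daily snapshots and point-in-time restore.
    rds.create_db_instance(
        DBInstanceIdentifier="app-db",
        Engine="postgres",
        DBInstanceClass="db.t3.medium",
        AllocatedStorage=100,
        MultiAZ=True,
        MasterUsername="app",
        MasterUserPassword="change-me",  # placeholder; use Secrets Manager in practice
        BackupRetentionPeriod=7,
    )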

Load balancing is trivial unless you get into global multicast LBs, but AWS have you covered there too.


You could never run a site like hacker news on a single box somewhere with a backup box a couple of states away.

(/s, obviously)


And have the two fail at the same time because similarly old hardware with similarly old and used disks fails at roughly the same time :)

> You're right, AWS does have you covered. But that doesn't mean that's the only way of doing it. Load balancing is insanely easy to do yourself, databases even easier. Caching, ditto

I think you don't understand the scenario you are commenting on. I'll explain why.

It's irrelevant whether you believe you can imagine another way to do something, and that you believe it's "insanely easy" to do it yourself. What matters is that others can make that assessment themselves, and what you are failing to understand is that when they do, their conclusion is that the easiest way by far to deploy and maintain those services is AWS.

And it isn't even close.

You mention load balancing and caching. The likes of AWS allows you to setup a global deployment of those services with a couple of clicks. In AWS it's a basic configuration change. And if you don't want it, you just tear down everything with a couple of clicks as well.

Why do you think a third of all the internet runs on AWS? Do you think every single cloud engineer in the world is unable to exercise any form of critical thinking? Do you think there's a conspiracy out there to force AWS to rule the world?


If you need the absolutely stupid scale DynamoDB enables, what is the difference compared to running, for example, FoundationDB on your own on Hetzner?

You will in both cases need specialized people.


> Hetzner offers no service that is similar to DynamoDB, IAM or Lambda.

The key thing you should ask yourself: do you need DynamoDB or Lambda? Like "need need" or "my resume needs Lambda".


> The key thing you should ask yourself: do you need DynamoDB or Lambda? Like "need need" or "my resume needs Lambda".

If you read the message you're replying to, you will notice that I singled out IAM, Lambda, and DynamoDB because those services were affected by the outage.

If Hetzner is pushed as a better or even relevant alternative, you need to be able to explain exactly what you are hoping to say to Lambda/IAM/DynamoDB users to convince them that they would do better if they used Hetzner instead.

Making up conspiracy theories about CVs doesn't cut it. Either you know something about the topic and are actually able to support this idea, or you're an eternal-September admission whose only contribution is noise and memes.

What is it?


Well, Lambda scales down to 0 so I don't have to pay for the expensive EC2 instan... oh, wait!

> click your way into a HA NoSQL data store

Maybe not click, but Scylla’s install script [0] doesn’t seem overly complicated.

0: https://docs.scylladb.com/manual/stable/getting-started/inst...


TBH, in my last 3 years with Hetzner, I never saw downtime on my servers other than my own routine maintenance for OS updates. Location: Falkenstein.

And I have seen them delete my entire environment including my backups due to them not following their own procedures.

Sure, if you configure offsite backups you can guard against this stuff, but as with anything in life, you get what you pay for.


You really need your backup and failover procedures though; a friend bought a used server and the disk died fairly quickly, leaving him sour.

THE disk?

It's a server! What in the world is your friend doing running a single disk???

At a bare minimum they should have been running a mirror.


Younger guy with ambitions but little experience. I think my point was that used servers from Hetzner are still used, so if someone has been running disk-heavy jobs on them you might want to request new disks or multiple ones and not just pick the cheapest option at the auction.

(Interesting that an anecdote like the above got downvoted)


> (Interesting that an anectode like above got downvoted)

Experts almost universally judge newbies harshly, as if the newbies should already know all of the mistakes to avoid. Things like this are how you learn what mistakes to avoid.

"hindsight is 20/20" means nothing to a lot of people, unfortunately.


I do have an HA setup, and DB backups that run periodically to S3.

What is the Hetzner equivalent for those in Windows Server land? I looked around for some VPS/DS providers that specialize in Windows, and they all seem somewhat shady with websites that look like early 2000s e-commerce.

I work at a small / medium company with about ~20 dedicated servers and ~30 cloud servers at Hetzner. Outages have happened, but we were lucky that the few times it did happen, it was never a problem / actual downtime.

One thing to note is that there are some scheduled maintenances were we needed to react.


We've been running our services on Hetzner for 10 years, never experienced any significant outages.

That might be datacenter-dependent of course, since our root servers and cloud services are all hosted in Europe, but I really never understood why Hetzner is said to be less reliable.


Haha, yeah that's a nugget

> 99.99% uptime infra significantly cheaper than the cloud.

I guess that's another person that has never actually worked in the domain (SRE/admin) but still wants to talk with confidence on the topic.

Why do I say that? Because 99.99% is frickin easy

That's almost one full hour of complete downtime per year.

It only gets hard in the 99.9999+ range... and you rarely meet that range with cloud providers either, as requests still fail for some reason - a random 503 when a container is decommissioned, or similar.
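The arithmetic, for anyone who wants to check (plain Python):

    # Allowed downtime per year for a given availability target.
    minutes_per_year = 365.25 * 24 * 60

    for target in (0.999, 0.9995, 0.9999, 0.999999):
        budget = (1 - target) * minutes_per_year
        print(f"{target:.4%} -> {budget:8.1f} min/year ({budget / 60:.2f} h)")

    # 99.99% comes out to ~52.6 minutes/year -- "almost one full hour".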


My recommendation is to use AWS, but not the US-EAST-1 region. That way you get benefits of AWS without the instability.

AWS has internal dependencies on US-EAST-1.

Admittedly they're getting fewer and fewer, but they exist.

The same is also true of GCP, so as much as I prefer GCP from a technical standpoint: the truth is, just because you can't see it doesn't mean it goes away.


The only hard dependency I am still aware of is write operations to the R53 control plane. Failover records and DNS queries would not be impacted. So business workflows would run as if nothing happened.

(There may still be some core IAM dependencies in USE1, but I haven’t heard of any.)


We're currently witnessing the fact that what you're claiming is not as true as you imply.

We don't know that (yet) - it's possible that this is simply a demonstration of how many companies have a hard dependency on us-east-1 for whatever reason (which I can certainly believe).

We'll know when (if) some honest RCAs come out that pinpoint the issue.


I created a DNS record in route53 this morning with no issues

the Billing part of the console in eu-west-2 was down though, presumably because that uses us-east-1 dynamodb, but route53 doesn't.


I had a problem with an ACME cert Terraform module. It used R53 to add the DNS TXT record for the ACME challenge and then queried the change status from R53.

R53 seems to use Dynamo to keep track of syncing the DNS across the name servers, because while the record was there and resolving, the change set was stuck in PENDING.

After DynamoDB came back up, R53's API started working.
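The behaviour is visible directly in the API: change_resource_record_sets returns a change id whose status stays PENDING until the change has synced. A boto3 sketch of the poll that got stuck (zone id and record values are placeholders):

    import time

    import boto3

    r53 = boto3.client("route53")

    resp = r53.change_resource_record_sets(
        HostedZoneId="Z0123456789ABCDEF",  # placeholder zone
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "_acme-challenge.example.com.",
                "Type": "TXT",
                "TTL": 60,
                "ResourceRecords": [{"Value": '"acme-token"'}],
            },
        }]},
    )

    # The record may already resolve while the change set sits in PENDING;
    # during the outage, this is the loop that never terminated.
    change_id = resp["ChangeInfo"]["Id"]
    while r53.get_change(Id=change_id)["ChangeInfo"]["Status"] == "PENDING":
        time.sleep(5)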


We have nothing deployed in us-east-1, yet all of our CI was failing due to IAM errors this morning.

>Just a couple of days ago in this HN thread [0] there were quite a few users claiming Hetzner is not an option as its uptime isn't as good as AWS's, hence the higher AWS pricing is worth the investment. Oh, the irony.

That's not necessarily ironic. Seems like you are suffering from recency bias.


I’m more curious to understand how we ended up creating a single point of failure across the whole internet.

I don't have an opinion either way, but for now, this is just anecdotal evidence.

Looks fine for pointing out the irony.

In some ways yes. But in some ways this is like saying it's more likely to rain on your wedding day.

I'm not affiliated and won't be compensated in any way for saying this: Hetzner are the best business partners ever. Their service is rock solid, their pricing is fair, their support is kind and helpful.

Going forward I expect American companies to follow this European vibe; it's like the opposite of enshittification.


> the opposite of enshittification.

Why do you expect American companies to follow it then? >:)


How do you expect American....

Stop making things up. As someone who commented on the thread in favour of AWS, there is almost no mention of better uptime in any comment I could find.

I could find one or two downvoted or heavily criticized comments, but I can find more people claiming the opposite.


I don't know how often Hetzner has similar outages, but outages at the rack and server level, including network outages and device failures, do happen for individual customers. If you've never experienced this, it is probably just survivorship bias.

AWS/cloud has similar outages too, but with more redundancy and automatic failover/migrations that are transparent to customers. You don't have to worry about DDoS and many other admin burdens either.

YMMV; I'm just saying sometimes AWS makes sense, other times Hetzner does.


It can still be true that the uptime is better, or am I overlooking something?

Nah you're definitely correct.

Hetzner users are like the Linux users of the cloud.

Btw I use Hetzner

Love, after your laptop's wifi antenna, it's Linux users all the way down.

Been using OVH here with no complaints.

I got a downvote already for pointing this out :’)

Unfortunately, HN is full of company people; you can't say anything against Google, Meta, Amazon, or Microsoft without being downvoted to death.

It's less about company loyalty and more about protecting their investment into all the buzzwords from their resumes.

As long as the illusion that AWS/clouds are the only way to do things continues, their investment will keep being valuable and they will keep getting paid for (over?)engineering solutions based on such technologies.

The second that illusion breaks down, they become no better than any typical Linux sysadmin, or teenager ricing their Archlinux setup in their homelab.


Can't fully agree. People genuinely detest Microsoft on HN and all over the globe. My Microsoft-related rants are always upvoted to the skies.

> People genuinely detest Microsoft on HN and all over the globe

I would say tech workers rather than "people" as they are the ones needing to interact with it the most


I'm a tech worker, and have been paid by a multi-billion dollar company to be a tech worker since 2003.

Aside from Teams and Outlook Web, I really don't interact with Microsoft at all, haven't done since the days of XP. I'm sure there is integration on our corporate backends with things like active directory, but personally I don't have to deal with that.

Teams is fine for person-person instant messaging and video calls. I find it terrible for most other functions, but fortunately I don't have to use it for anything other than instant messaging and video calls. The linux version of teams still works.

I still hold out a healthy suspicion of them from their behaviour when I started in the industry. I find it amusing the Microsoft fanboys of the 2000s with their "only needs to work in IE6" and "Silverlight is the future" are still having to maintain obsolete machines to access their obsolete systems.

Meanwhile the stuff I wrote to be platform-agnostic 20 years ago is still in daily use, still delivering business benefit, with the only update being a change from "<object" to "<video" on one internal system when flash retired.


AWS and Cloudflare are HN darlings. Go so far as to even suggest a random personal blog doesn't need Cloudflare and get downvoted with inane comments as "but what about DDOS protection?!"

The truth is no one under the age of 35 is able to configure a webserver anymore, apparently. Especially now that static site generators are in vogue and you don't even need to worry about php-fpm.


Lol, realistically you only need to care about external DDoS protection, if you are at risk of AWS bankrupting your ass.

Isn't it just ads?

Finally IT managers will start understanding that the cloud is no different from Hetzner.

When things go wrong, you can point at a news article and say it's not just us that have been affected.

I tried that but Slack is broken and the message hasn't got through yet...

Well, we have a naming issue (Hetzner also has Hetzner Cloud; it seems people still equate "cloud" with the three biggest public cloud providers).

In any case, for this to happen, someone would have to collect reliable data (not all big cloud providers like to publish precise data; usually they downplay outages and use weasel words like "some customers... in some regions... might have experienced" just to avoid admitting they had an outage) and present stats comparing the availability of Hetzner Cloud vs the big three.


Thanks for that link. It seems with that introduction, they also lowered prices on the dedicated-core on their vservers - at least I was paying 15€/month and now they seem to offer it for 12€/month. I will try to see if shared performance is an option for the future.

> A lot of Apple hardware is impressive on paper, but I will never buy a Mac that can't run Linux.

They actually run Linux very well - have you ever tried Parallels or VMware Fusion? Parallels especially ships with good software drivers for 2D/3D/video acceleration, suspend, and integration into the host OS. If that is not your thing, the new native container solution in Tahoe can run containers from Docker Hub and co.

> I simply don't want to live in Apple's walled garden.

And what walled garden would that be on macOS? You can install what you want, and there is homebrew at your fingertips with all the open and non-open software you can ask for.


Last I looked... extensive telemetry and a sealed boot volume that makes it impractical to turn off even if theoretically possible. There are other problems of course.


You can disable SIP and even disable immutable kernel text, load arbitrary drivers, enable/disable any feature, remove any system daemon, use any restricted entitlements. The entire security model of macOS can be toggled off (csrutil from recoveryOS).


Aware of that. Way too big of a request just to make reasonable configuration changes, like shutting down daemons, etc.


No, it’s not that big a request. You literally have the capability. The average user does not need it.

What is hard about this?


Stopping/disabling a service should be a command, like it is on Windows or Linux. Not configured on a read-only volume bundled with other security guarantees.

It's pretty simple to keep these two things separate, like everywhere else in the present and history of the industry.
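To make the comparison concrete - the first two lines are the standard commands on Linux and Windows; the macOS line is a sketch with a hypothetical plist path, and for system daemons on the sealed volume it only works once SIP is off:

  systemctl stop example.service   # Linux
  sc stop ExampleService           # Windows
  sudo launchctl bootout system /System/Library/LaunchDaemons/com.example.d.plist  # macOS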


Just because Windows/Linux do things one way doesn't mean the rest of the industry has to follow it. ;P


Just out of curiosity, are these philosophical objections or do you have a practical use for disabling code signing and messing with your boot volume?


I have practical use for disabling telemetry and other misfeatures. (Maybe you meant to reply to your sibling comment?)


No, I meant to reply to you. I was curious about your practical use case for disabling code signing (which I think is what you refer to by telemetry) and messing with the boot volume.


Not what I am referring to. The goal is to disable misfeatures, not reduce security. Only Apple bundles the two.

He's a religious Linux believer who will make you call him a GNU/Linux believer - no point in arguing, there is no interest in the argument.


From what I checked, disabling SIP/AMFI/whatever it is now means I can't run iOS applications on macOS. The fact that there are restrictions on what I can run when doing that makes macOS more restrictive.

Also, what if I want to run eBPF on my laptop on bare metal, to escape the hypercall overhead from VMs or whatever? Ultimately, a VM is not the same as a native experience. I might want to take advantage of acceleration for peripherals that aren't available unless I'm bare metal.


That point is often brought up, but it's kind of invalid, because you can't run iOS on your Linux or Windows installation either. Saying you'll switch OSes because of that use case is a spite reaction, not one based on reason.

As in: "I can't run iOS on my macOS installation, so I am going to use a different OS where I can't run iOS either".


Well, it's less of a feature argument, and more of a "I philosophically don't support using an OS that prevents me from using parts of it, because I oppose losing control over the software my system runs."


Well it’s just one less plus in the macOS column.

I switched from Pixel to iPhone in large part because the Pixel removed the rear fingerprint reader, headphone jack, and a UI shortcut I used multiple times a day. It's not like the iPhone had those things, but now neither did the Pixel.


How does Asahi fare these days? For home use I am fine with my Fedora machine but as a former (Tiger-SL era) Mac user who's never used macOS, I am somewhat curious about this.


Remember Asahi works properly only on M1 and M2. More work is required to make it run well on later chips (it's not just a faster ARM chip - it's a new graphics card each time, motherboard chipset, every laptop peripheral changes from time to time, BIOS/UEFI, etc., and they all need reverse-engineered drivers to work).


Would it be possible to run a whole Linux OS on macOS, even if through virtualization?


... or UTM. I have run Windows and Linux on my M1 MB Pro with plenty of success.

Windows - because I needed it for a single application.

Linux - has been extremely useful as a complement to the small ARM SBCs that I run. eg: Compiling a kernel is much faster there than on (say) a Raspberry Pi. Also, USB device sharing makes working with vfat/ext4 filesystems on small memory cards a breeze.


And here I am, selling my MacBook M4 Pro to buy a MacBook Air and a dedicated gaming machine. I've tried gaming on the MacBook with Heroic, GPTK, Whisky, RPCS3 emu and some native. When a game runs, the performance is stunning for a laptop - but there are always glitches, bugs and annoyances that take the joy out of it. Needless to mention the lack of support for any sort of online multiplayer, due to the lack of anticheat support.

I wish Apple would take gaming more seriously and make GPTK a first class citizen such as Proton on Linux.


Off the top of my head, here is what that needs:

  1. Implementing PR_SET_SYSCALL_USER_DISPATCH
  2. Implementing ntsync
  3. Implementing OpenGL 4.6 support (currently only OpenGL 4.1 is supported)
  4. Implementing Vulkan 1.4 with various extensions used by DXVK and vkd3d-proton.
That said, there are alternatives to those things.

  1. Not implementing this would just break games like Jurassic World where DRM hard codes Windows syscalls. I do not believe that there are many of these, although I could be wrong.
  2. There is https://github.com/marzent/wine-msync, although implementing ntsync in the XNU kernel would be better.
  3. The latest OpenGL isn't that important these days now that Vulkan has been widely adopted, although having the latest version would be nice to have for parity. Not many things would suffer if it were omitted.
  4. They could add the things needed for MoltenVK to support Vulkan 1.4 with those extensions on top of Metal:
https://github.com/KhronosGroup/MoltenVK/issues/203

It is a shame that they do not work with Valve on these things. If they did, Proton would likely be supported for macOS from within Steam and the GPTK would benefit.


> lack of anticheat support.

I just redid my Windows machine to get TPM 2.0 and Secure Boot for Battlefield 6. I did use massgrave this time because I've definitely paid enough Microsoft taxes over the last decade. I thought I would hate this new stuff but it runs much better than the old CSM BIOS mode.

Anything not protected by kernel-level anticheats I play on my Steam Deck now. Proton is incredible. I am shocked that games like Elden Ring run this well on a Linux handheld.


It's funny considering what people are telling me about the rampant cheating in that game. It may settle out eventually, but these anticheat systems seem to not do much.


Good point. Many people (including me) switched to Apple Silicon with the hope (or promise?) of having just one computer for work and leisure, given the potential of the new architecture. That didn't happen, or only partially, which amounts to the same thing.

In my case, for software development, I'd be happy with an entry-level MacBook Air (now with a minimum of 16GB) for $999.


I can't sell my MacBook Pro because the speakers are so insanely good. Air can't compare. The speakers are worth the extra kilos.


I have never once used my laptop speakers. Not saying you're wrong, but it's crazy how different priorities for products can be.


I was shocked when I tried out the 2019 MBP speakers; they were almost as good as my (low-end) studio headphones. I was even more shocked by the M2 speakers, which are arguably better (although the frequency response isn't as flat, I think - there is definitely something a little artificial, but it sounds really good). I really could not imagine laptop speakers being even close to par with decent headphones. Perhaps they aren't on par with $400 headphones; I've never had any of those. But now, by preference, I listen on the laptop speakers. It's not a priority - I'm totally happy to go back to the headphones - more like an unexpected perk.


But why would you ever use the speakers?


I work alone - I can use the speakers at any volume without bothering anybody or wearing anything in my ears or on my head. It's wonderful.


Apple audio is some of the best in the consumer market. I've never found a laptop with better speakers, even ones that cost a lot more.

I agree—the difference between the different compatibility layers and native games is very steep at times. Death Stranding on my M2 Pro looks so good it’s hard to believe, but running GTA Online is so brittle and clunky… Even when games have native macOS builds, it’s rare to find them with Apple Silicon support (and even rarer with Metal support). There is a notable exception though: Arma 3 has experimental Apple Silicon support, though it comes with significant limitations. (Multiplayer, flying & mods) Although I don’t believe it’s in Apple’s interest, gaming on Linux might become an option in the future, even on Mac, but the lack of ARM builds is an even bigger problem there…

Since I mostly play MSFS 2024 these days, I currently use GeForce Now, which is fine, but cloud gaming still isn't quite there yet…


> Death Stranding on my M2 Pro looks so good it’s hard to believe,

Death Stranding is a great-looking game to be sure, but it's also kinda hard to get excited about a 5-year-old game achieving RTX 2060 performance on a $2000+ system. And that was apparently worthy of a keynote feature...


Many people blame the lack of OpenGL/Vulkan... but I really don't buy it. It doesn't pass the sniff test as an objection. PlayStation doesn't support OpenGL/Vulkan (they have their own proprietary APIs, GNM, GNMX, PSSL). Nintendo supports Vulkan but performance is so bad, almost everyone uses the proprietary API (NVN / NVN2). Xbox obviously doesn't accept OpenGL/Vulkan either, requiring DirectX. Understanding of Metal is widespread in mobile gaming, so it's weird AAA couldn't pull from that industry if they wished.


The primary reason is Apple's environment is too unstable for gaming's most common business model. Most games are developed, released, and then sold for years and years with little or no maintenance. Additionally, gamers expect the games they purchased to continue to work indefinitely. Apple regularly breaks backwards compatibility in a wide variety of ways (code signing requirements; breaking OS API changes; hardware architecture changes). That means software run on Apple OSes must be constantly maintained or else it will eventually stop working. Most games aren't developed like that.

No one who was forced to write a statement like [this](https://help.steampowered.com/en/faqs/view/5E0D-522A-4E62-B6...) is going to be enthusiastic about continuing to work with Apple.


Game developers make most of their money shortly after a game's release, so a 15-year-old game no longer working shouldn't make much difference in terms of revenue.

Anyway, the whole situation was quite bad. Many games were still 32-bit, even though macOS itself had been mainly 64-bit for almost 10 years or more. And Valve didn't help either; the Steam store is full of 64-bit games mislabeled as 32-bit. They could have written a simple script to check whether a game is actually 64-bit or not; instead they decided to do nothing and keep their chaos.

The best solution would have been a lightweight VM to run old 32-bit games; computers nowadays are powerful enough to do so.


I've heard this argument, but it also doesn't pass the sniff test in 2025.

1. When is the next transition on bits? Is Apple going to suddenly move to 128-bit? No.

2. When is the next transition on architecture? Is Apple going to suddenly move back to x86? No.

3. When is the next API transition? Is Apple suddenly going to add Vulkan or reinvigorate OpenGL? No. They've been clear it's Metal since 2014, 11 years ago. That's plenty of time for the industry to follow if they cared, and mobile gaming has adopted it without issue.

We might as well complain that the PlayStation 4 was completely incompatible with the PlayStation 3.


What happens when Apple switches to RISC-V, or deprecates versions of Metal in a backwards-incompatible way, or mandates some new code-signing technique?

The attitude in the Apple developer ecosystem is that Apple tells you to jump, and you ask how high.

You could complain that PlayStation 4 software is incompatible with PlayStation 3. But this is the PC gaming industry; there are higher standards for software compatibility here, which only a couple of companies can ignore.


Apple will never transition to RISC-V, especially when they cofounded ARM. They have 35 years of institutional knowledge in ARM. Their cores and techniques are licensed and patented with mixtures of their own IP and ARM-compatible IP. That is decades away, if ever. Even the assumption that RISC-V will eventually achieve parity with ARM performance is untested, as ISAs sometimes do fail at scale (Itanium, anyone? While unlikely to repeat, even a discovered 5% structural deficit would handicap adoption permanently.)

"This is the PC gaming industry"

Who said Apple needed to present themselves as a PC gaming alternative over a console alternative?


Consoles are dying and PCs are replacing them. Like the original commenter suggested, people want to run PC games. The market has decided that the benefits of compatibility outweigh the added complexity. On the PC you have access to a massive expanding back-catalog of old software, far more competition in the market, mods, and you're able to run whatever software you want alongside games (discord, teamspeak, game streaming, etc.).

Macs are personal computers, whether or not they come from some official IBM Personal Computer compatibility bloodline.


Steam Deck - 6 million

Sega Saturn - 9 million

Wii U - 13 million

PlayStation 5 - 80 million

Nintendo Switch - 150 million

Nintendo Switch 2 opening weekend - 4 million in 3 days

Sure.


And in the last 48 hours, Steam peaked at 39.5M users online, providing a highly pessimistic lower-bound on how many PC gamers there are.

https://store.steampowered.com/stats/stats/

If you consider time zones (not every PC gamer is online at the same time), the fact that it's not the weekend, and other factors, I'd estimate the PC gaming audience is at least 100M.

Unfortunately, there's no possible way to get an exact number. There are multiple gaming PC manufacturers, not to mention how many gaming PCs are going to be built by hand. I'm part of a PC gaming community, and nearly 90% of us have a PC built either by ourselves or by a friend/family member. https://pdxlan.net/lan-stats/


For comparison, the lifetime sales of the first Nintendo Switch would be considered a good year for iPhone sales -- six generations of phones sold >150MM units.

https://en.wikipedia.org/wiki/List_of_best-selling_mobile_ph...


I mean, I worked in this space, and I'm telling you why many of the people I worked with weren't interested in supporting Apple. I'm happy to hear your theories if you don't like mine, though.


I think the past bit people, but unlike the PS4 transition or gaming consoles in the past (which were rarely backwards compatible), there wasn't enough cultural momentum to plow through it... leaving "don't support Apple" as a bit of an institutional memory at this point, even though the odds of another transition seem almost nonexistent. What would it even be? 128-bit? Back to x86? Notarization++? Metal 4 incompatible with Metal 1?


Yeah, I buy that, so I think we are actually agreeing with each other. The very rough backwards support story Apple has had for the past decade, which I mentioned, has made people uninterested in supporting the platform, even if they're better about it now, as you claim (though I'm unconvinced about that personally, having worked on macOS software for more than a decade).

> What would it even be? 128 bit? Back to x86? Notarization++? Metal 4 incompatible with Metal 1?

Sure, I can think of lots of things. Every macOS update when I worked in this space broke something that we had to go fix. Code signature requirements change a bit in almost every release, not hard to imagine a 10-year-old game finally running afoul of some new requirement. I can easily see them removing old, unmaintained APIs. OpenGL is actively unmaintained and I would guess a massive attack vector, not hard to see that going away. Have you ever seen their controller force feedback APIs? Lol, they're so bad, it's a miracle they haven't removed those already.


> even though the odds of another transition seem almost nonexistent.

You see, the existence of that "almost" is already less confidence than developers have in every game console, as well as in Linux and Windows.


> I've heard this argument, but it also doesn't pass the sniff test in 2025.

I mean, it's at least partially true. I used to play BioShock Infinite on my MacBook in high school; there was a full port. Unfortunately it's 32-bit and doesn't run anymore, and there hasn't been a remaster yet.


PlayStation, Nintendo, and Xbox all have tens of millions of gamers each. Meanwhile macOS makes up ~2% of Steam users, which is probably a pretty good proxy for the number of macOS gamers.

Why would I do anything bespoke at all for such a tiny market? Much less an entirely unique GPU API?

Apple refusing to support OpenGL and Vulkan absolutely hurt their gaming market. It increased the porting costs for a market that was already tiny.


> Why would I do anything bespoke at all for such a tiny market?

Because there is a huge potential here to increase market share.


I don't buy it either, because Apple's GPTK works similarly to Proton - they have a DX12-to-Metal layer that works quite well, when it works. And their GPTK is based on Wine, just as Proton is. It is more about other annoyances like the lack of Steam support. There are patched versions of Steam circulating that run in GPTK (offline mode), but that is where everything gets finicky and brittle. It is mostly community effort, and I think gaming could be way better on Apple if they embraced the Proton approach that they started with GPTK.


Apple collects no money from Steam sales, so they don't see a reason to support it.

You don't buy Apple to use your computer the way you want to use it. You buy it to use it the way they tell you to. E.g. the "you're holding it wrong" fiasco.

In some ways this is good for general consumers (and even developers; with limited config comes less unpredictability)... However, this is generally bad for power users or "niche" users like Mac gamers.


> Apple collects no money from Steam sales, so they don't see a reason to support it.

That is true, but now they are in a position where their hardware is actually more affordable and powerful than its Windows/x86 counterpart - and Win 11 is a shitload of adware and an annoyance in itself, layered on top of an OS. They could massively expand their hardware sales into the gaming sector.

I'm eyeing a Framework Desktop with an AMD AI 395 APU for gaming (I am happy with just 1080p@60) and am looking at 2000€ to spend, because I want a small form factor. Don't quote me on the benchmarks, but a Mac Mini with an M4 Pro is probably cheaper and more powerful for gaming - IF it had proper software support.


Apple collects no money from Photoshop, Microsoft, or anything else that runs on the Mac besides the tiny minority of apps sold on the Mac App Store.

Not to mention many subscription services on iOS that don’t allow you to subscribe through the App Store.


Sometimes I just feel like buying the latest and greatest game - I have an M4 too - and the choices are usually quite abysmal. I agree.


My solution in that case is cloud gaming, such as GeForce Now (for compatible games) or Shadow (for a whole PC to do with as you please).


Thanks, will check it out!


On top of that, what is Apple's strategy on gaming? Advertise extra performance and features that you only get if you upgrade your whole device? This is unsustainable, to put it mildly. There are eGPU enclosures with TB5; developing something like that for the Mac would make more sense if they really cared about gaming.


Honestly, gaming consoles are so much cheaper and "no hassle." I never game on my Mac.


More expensive in the long run, as the games are more expensive and you need some kind of subscription to play online.


Yep, I use Moonlight / Sunshine / Apollo to stream from my gaming PC, so I still use my Mac setup but get nearly perfect Windows gaming with the PC elsewhere in the house.

This has been by far the best setup until Apple can take gaming seriously, which may never happen.


Going back to the Air's screen from your Pro will be a steep fall.


Not really, 95% of the time I use it in a dock with 2 external screens.


I'm gonna be looking for a 4080 in an SFF form factor, since my current gaming rig can't be upgraded to Win 11. Also I wouldn't mind a smaller desktop.

edit: for now I'll get that win 10 ESU


What about wine flavor from crossdressers?


Pretty sure you don’t mean crossdressers!

Codeweavers?


Little of column A, little of column B ;) This was a fun day in the office: https://www.codeweavers.com/blog/jwhite/2011/1/18/all-dresse...


Yeah I agree. If it weren't for gaming I would have already uninstalled Windows permanently. It's really unfortunate because it sticks out as the one product in my house that I truly despise but I can't get rid of, due to gaming.

I've been trying to get Unreal Engine to work on my MacBook, but Unity is an order of magnitude easier to run. So I'm also stuck doing game development on my PC. The Metal APIs exist and apparently they're quite good... it's a shame that more engines don't support it.


> I wish Apple would take gaming more seriously and make GPTK a first class citizen such as Proton on Linux.

Note that games with anticheat don't work on Linux with Proton either. Everything else does, though.


Several games with anticheat work. But it's up to the developers whether they check the box that allows it to work, which is why even though both Apex Legends and Squad use Easy Anticheat, Squad works and Apex does not.

Of course some anticheats aren't supported at all, like EA Javelin.


Apex Legends is an interesting case because EA/Respawn initially shipped with first-class support for the Steam Deck (going as far as to make changes to the game client so it would get a "Verified" badge from Valve) -- including "check[ing] the box that allows it to work". However, the observation was that the anti-cheat code on Linux wasn't as effective, so they eventually dropped support for it.

https://forums.ea.com/blog/apex-legends-game-info-hub-en/dev...


Many of them do, but it's a game of cat and mouse, so it's more hit and miss than I would like.


Curious about that too. In a modern web app I always set HttpOnly cookies to prevent them being exposed to anything JavaScript, and SameSite=Strict. Especially the latter should prevent CSRF.
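As a sketch of what that looks like (Express purely as an illustration; the handler and token are made up, but the flags are the same in any framework):

  import express from "express";

  const app = express();

  app.post("/login", (req, res) => {
    const sessionToken = "opaque-random-token"; // stand-in for a real session id
    res.cookie("session", sessionToken, {
      httpOnly: true,     // not readable from JavaScript, blunts XSS cookie theft
      secure: true,       // only ever sent over HTTPS
      sameSite: "strict", // never attached to cross-site requests
    });
    res.sendStatus(204);
  });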


Erratum: What I'm saying here only applies to cookies with the attribute SameSite=None, so it's irrelevant here; see the comments below.

(Former CTF hobbyist here) You might be mixing up XSS and CSRF protections. Cookie protections are useful against XSS vulnerabilities because they make it harder for attackers to get hold of user sessions (often mediated through cookies). They don't really help against CSRF attacks though. Say you visit attacker.com and it contains an auto-submitting form making a POST request to yourwebsite.com/delete-my-account. In that case, your cookies would be sent along, and if no CSRF protection is in place (origin checks, tokens, ...) your account might end up deleted. I know it doesn't answer the original question but I hope it's useful information nonetheless!


The SameSite cookie flag is effective against CSRF when you put it on your session cookie; it's one of its main use cases. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/... for more information.

SameSite=Lax (the default in Chrome for cookies that don't specify SameSite) will protect you against POST-based CSRF.

SameSite=Strict will also protect against GET-based CSRF (which shouldn't really exist, as GET is a safe method and shouldn't be allowed to trigger state changes, but in practice some applications do it). It does, however, also mean that users clicking a link to your page might not be logged in once they arrive, unless you implement other measures.

In practice, SameSite=Lax is appropriate and just works for most sites. A notable exception is POST-based SAML SSO flows, which might require a SameSite=None cookie just for the login flow.
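A sketch of that split, again in Express terms (the cookie names and the 5-minute lifetime are just assumptions for illustration):

  // Session cookie: Lax is enough for most sites.
  res.cookie("session", sessionToken, { httpOnly: true, secure: true, sameSite: "lax" });

  // SAML login flow: the IdP POSTs back cross-site, so this one cookie needs None.
  res.cookie("saml_request_id", requestId, {
    httpOnly: true,
    secure: true,          // SameSite=None is only accepted together with Secure
    sameSite: "none",      // survives the cross-site POST from the identity provider
    maxAge: 5 * 60 * 1000, // milliseconds; keep it short-lived
  });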


This page has some more information about the drawbacks/weaknesses of SameSite, worth a read: https://developer.mozilla.org/en-US/docs/Web/Security/Attack...

You usually need another method as well.


Yes, you're definitely right that there are edge cases and I was simplifying a bit. Notably, it's called SameSite, NOT SameOrigin. Depending on your application that might matter a lot.

In practice, SameSite=Lax is already very effective at preventing _most_ CSRF attacks. However, I 100% agree with you that adding a second defense mechanism (such as the Sec-Fetch headers, a custom "Protect-Me-From-Csrf: true" header, or, if you have a really sensitive use case, cryptographically secure CSRF tokens) is a very good idea.
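For the Sec-Fetch variant, a sketch of what such a middleware could look like (the exact policy is my assumption; Sec-Fetch-Site itself is a real header that modern browsers attach to requests):

  import express from "express";

  const app = express();

  // Reject state-changing requests that the browser itself labels cross-site.
  app.use((req, res, next) => {
    const site = req.get("Sec-Fetch-Site"); // "same-origin", "same-site", "cross-site" or "none"
    const stateChanging = !["GET", "HEAD", "OPTIONS"].includes(req.method);
    if (stateChanging && site === "cross-site") {
      return res.sendStatus(403);
    }
    next();
  });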


Thanks for correcting me - I see my web sec knowledge is getting rusty!

