
> What is old is new again.

Over the years I tried occasionally to look into cloud, but it never made sense. A lot of complexity and significantly higher cost, for very low performance and a promise of "scalability". You virtually never need scalability so fast that you don't have time to add another server - and at bare-metal costs, you're usually about a year ahead of the curve anyway.



A nimble enough company doesn't need it, but I've had 6 months of lead time to request one extra server in an in-house data center due to sheer organizational failure. The big selling point of the cloud really was that one didn't have to deal with the division lording over the data center, or have every login gated by a priesthood who knew less Unix than the programmers.

I've been in multiple cloud migrations, and it was always solving political problems that were completely self-inflicted. The decision was always reasonable if you looked just at the people in the org having to decide between the internal process and the cloud bill. But I have little doubt that if there was any goal alignment between the people managing the servers and those using them, most of those migrations would not have happened.


I've been in projects where they're 'on the cloud' to be 'scalable', but I had to estimate my CPU needs up front for a year to get that in the budget, and there wasn't any defined process for "hey, we're growing more than we assumed - we need a second server - or more space - or faster CPUs - etc". Everything that 'cloud' is supposed to allow for - but ... that's not budgeted for - we'll need to have days of meetings to determine where the money for this 'upgrade' is coming from. But our meetings are interrupted by notices from teams that "things are really slow/broken"...


About sums up my last job. Desperation leads to micromanaging the wrong indicators. The results are rage-inducing. I am glad I got let go by the micromanagers because if not I would have quit come New Year.


Yeah, clouds are such a huge improvement over what was basically an industry-standard practice of saying: oh, you want a server? Fill out this 20-page form and we'll get you your server in 6 to 12 months.

But we don't really need one-minute response times from the cloud. So something like Hetzner may be just all right: "we'll get it to you within an hour" is still light years ahead of where we used to be.

And if it sorts out the entire management and cost side, with performance that is bare metal or close to bare metal on the provider side, then that is all good.

And this doesn't even address the fact that yeah, AWS has a lot of hidden costs, but a lot of those managed data center outsourcing contracts where you were subjected to those lead times for new servers... really weren't much cheaper than AWS back in the day.


In my experience I can rescale Hetzner servers and they'll be ready in a minute or two.


Yes, sorry, I didn't mean to impugn Hetzner by saying they were an hour delay, just that there could be providers that are cheaper that didn't need to offer AWS-level scaling.

Like a company should be able to offer 1 day service, or heck 1 week with their internal datacenters. Just have a scheduled buffer of machines to power up and adapt the next week/month supply order based on requests.


The management overhead in requesting new cloud resources is now here. Multiple rounds of discussion and TPS reports to spin up new services that could be a one click deploy.

The bureaucracy will always find a way.


Worst is when one of those dysfunctional orgs that does the IT systems administration tries to create their own internal cloud offerings instead of using a cloud provider. It's often worse than hosted clouds or bare metal.

But I definitely agree, it's usually a self-inflicted problem and a big gamble to try to work around infrastructure teams. I've had similar issues with security teams when their out-of-the-box testing scripts show a fail, and they just don't comprehend that the test itself is invalid for the architecture of your system.


Running away from internal IT works until they inevitably catch up to the escapees. At $dayjob the time required to spin up a single cloud VM is now measured in years. I've seen projects take so long that the cloud vendor started sending deprecation notices halfway through for their tech stacks, but they forged ahead anyway because it's "too hard to steer that ship".

The current “runners” are heading towards SaaS platforms like Salesforce, which is like the cloud but with ten times worse lock in.


> At $dayjob the time required to spin up a single cloud VM is now measured in years.

We have a ServiceNow ticket that you can fill out that spins the server up on completion. Kind of an easy way to do it.


Then you end up with too-large servers all over the place with no rhyme or reason, burning through your opex budget.

Also, what network does the VM land in? With what firewall rules? What software will it be running? Exposed to the Internet? Updated regularly? Backed up? Scanned for malware or vulnerabilities? Etc…

Do you expect every Tom, Dick, and Harry to know the answers to these questions when they “just” want a server?

This is why IT teams invariably have to insert themselves into these processes, because the alternative is an expensive chaos that gets the org hacked by nation states.

The problem is that when interests aren’t forced to align — a failure of senior management — then the IT teams become an untenable overhead instead of a necessary and tolerable one.

The cloud is a technology often misapplied to solve a “people problem”, which is why it won’t ever work when misused in this way.


Not GP, but at my previous job we had something very similar. The form did offer options for a handful of variables (on-prem VMware vs EC2, vCPU, RAM, disk, OS/template, administrators, etc), but once submitted, the ticket went to the cloud/architecture team for review, who could adjust the inputted selections as well as configure things like networks, firewall rules, security groups, etc. Once approved, the automated workflow provisioned the server(s), firewall rules, security groups, etc and sent the details to the requestor.


Those are all checkboxes on the form

The first time you do it, you can do a consult with a cloud team member

And of course they get audited every quarter so usage is tracked


Complexity? I've never set up a highly available Postgres and Redis cluster on dedicated hardware, but I can not imagine it's easier than doing it in AWS which is only a few clicks and I don't have to worry about OS upgrades and patches. Or a highly available load balancer with infinite scale.


This is how the cloud companies keep you hooked. I am not against them of course, but the notion that no one can self-host in production because "it is too complex" is something that we have been fed over the last 10-15 years. Deploying a production DB on a dedicated server is not that hard. It is about the fact that people now think that unless they do cloud, they are amateurs. It is sad.


I agree that running servers onprem does not need to be hard in general, but I disagree when it comes to doing production databases.

I've done on-prem highly available MySQL for years, and getting the whole master/slave thing to go just right during server upgrades was really challenging. On AWS, upgrading MySQL server ("Aurora") is really just a few clicks. It can even do blue/green deployment for you, where you temporarily get the whole setup replicated and in sync so you can verify that everything went OK before switching over. Disaster recovery (regular backups off-site & the ability to restore quickly) is also hard to get right if you have to do it yourself.


If you are running k8s on-prem, the "easy" way is to use a mature operator that takes care of all of that.

https://github.com/percona/percona-xtradb-cluster-operator or https://github.com/mariadb-operator/mariadb-operator for MySQL/MariaDB, or CNPG for Postgres needs. They all work reasonably well and cover all the basics (HA, replication, backups, recovery, etc).
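
For a sense of scale, a minimal CNPG cluster is a single small manifest. This is just a sketch with made-up names, and it assumes the CloudNativePG operator is already installed in the cluster:

    # pg-main.yaml - apply with: kubectl apply -f pg-main.yaml
    apiVersion: postgresql.cnpg.io/v1
    kind: Cluster
    metadata:
      name: pg-main              # hypothetical cluster name
    spec:
      instances: 3               # one primary + two replicas, automatic failover
      storage:
        size: 20Gi
      # off-site backups need a bit more config (spec.backup.barmanObjectStore)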


It's really hard to do blue/green on prem with giant expensive database servers. Maybe if you're super big and you can amortize them over multiple teams, but most shops aren't and can't. The cloud is great.


Doing stuff on-prem or in a data centre _is_ hard though.

It's easy to look at a one-off deployment of a single server and remark on how much cheaper it is than RDS, and that's fine if that's all you need. But it completely skips past the reality of a real life resilient database server deployment: handling upgrades, disk failures, backups, hot standbys, encryption key management, keeping deployment scripts up to date, hardware support contracts and vendor management, the disaster recovery testing for the multi-site SAN fabric with fibre channel switches and redundant dedicated fibre, etc. Before the cloud, we actually had a staff member who was entirely dedicated to managing the database servers.

Plus as a bonus, not ever having to get up at 2AM and drive down to a data centre because there was a power failure due to a generator not kicking in, and it turns out the data centre hadn't adequately planned for the amount of remote hands techs they'd need in that scenario...

RDS is expensive on paper, but to get the same level of guarantees either yourself or through another provider always seems to end up costing about the same as RDS.


I have done all of this too; today I outsource the DB server and do everything else myself, including a local read replica and pg_dump backups as a hail mary.

Essentially all that pain of yesteryear was storage: it was a f**ing nightmare running HA network storage before the days of SSDs. It was slower than RAID, 5X more expensive than RAID, and generally involved an extreme amount of pain and/or expense (usually both). But these days you only actually need a SAN, or as we call it today, block storage, when you have data you care about - and again, you only have to care about backups when you have data you care about.

For absolutely all of us, the side effect of moving away from monolithic 'pets' is that we have made the app layer not require any long-term state itself. So today all you need is N x any random thing that might lose data or fail at any moment as your app servers, plus an external DB service (Neon, PlanetScale, RDS), and perhaps S3 for objects.
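
For reference, the pg_dump hail mary above can be as small as one cron line (host, user, database and paths are made up here; credentials would come from ~/.pgpass):

    # crontab entry: nightly logical dump as a last-resort restore path
    0 3 * * * pg_dump -Fc -h db.internal -U backup -f /backups/appdb-$(date +\%F).dump appdb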


Database is one of those places where it's justified, I think. Application containers do not need the same level of care hence are easy to run yourself.


I guess that is the kicker right? "same level of guarantees".


I'd much rather deploy cassandra, admittedly a complex but failure resistant database, on internal hardware than on AWS. So much less hassle with forced restarts of retired instances, noisy nonperformant networking and disk I/O, heavy neighbors, black box throttling, etc.

But with Postgres, even with HA, you can't do geographic/multi-DC distribution of data nearly as well as something like Cassandra.


> I've never set up a highly available Postgres and Redis cluster on dedicated hardware, but I can not imagine it's easier than doing it in AWS which is only a few clicks

It's "only a few clicks" after you have spent a signficant amount of time learning AWS.


As a self-hosting fan, I can't even fathom how hard it would be to even get started running a Postgres or Redis cluster on AWS.

Like, where do I go? Do I search for Postgres? If so, where? Does the IP of my cluster change? If so, how to make it static? Also, can non-AWS servers connect to it? No? Then how to open up the firewall and allow it? And what happens if it uses too many resources? Does it shut down by itself? What if I wanna fine-tune a config parameter? Do I SSH into it? Can I edit it in the UI?

Meanwhile, in all that time spent finding out, I could SSH into a server, code and run a simple bash script to download, compile, run. Then another script to replicate. And I can check the logs, change any config parameter, restart, etc. No black box to debug if shit hits the fan.
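
To be concrete, the "another script to replicate" part is roughly this for streaming replication - a sketch only, assuming Debian-style packages and paths, and made-up IPs and passwords:

    # on the primary
    apt-get install -y postgresql-16
    sudo -u postgres psql -c "CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'changeme'"
    # allow the replica to connect (also set listen_addresses in postgresql.conf)
    echo "host replication replicator 10.0.0.2/32 scram-sha-256" >> /etc/postgresql/16/main/pg_hba.conf
    systemctl reload postgresql

    # on the replica: clone the primary and start streaming (-R writes standby.signal)
    systemctl stop postgresql
    rm -rf /var/lib/postgresql/16/main
    sudo -u postgres pg_basebackup -h 10.0.0.1 -U replicator -D /var/lib/postgresql/16/main -R -P
    systemctl start postgresql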


Having lived in both worlds, there are services wherein, yeah, host it yourself. But having done DB on-prem/on-metal, dedicated hosting, and cloud, databases are the one thing I'm happy to overpay for.

The things you describe involve a small learning curve, different for each cloud environment, but then you never have to think about it again. You don't have to worry about downtime (if you set it up right) or running a bash script... literally nothing else has to be done.

Am I overpaying for Postgres compared to the alternatives? Hell yeah. Has it paid off? 100%, would never want to go back.


> Do I search for Postgres?

Yes. In your AWS console, right after logging in. And pretty much all of your other setup and config questions are answered by just filling out the web form right there. No SSHing to change the parameters; they are all available right there.

> And what happens if it uses too many resources?

It can't. You've chosen how many resources (CPU/memory/disk) to give it. Runaway cloud costs come from bill-by-usage stuff like Redshift, S3, Lambda, etc.

I'm a strong advocate for self (for some value of self) hosting over cloud, but you're making cloud out to be far more difficult than it is.


Actually... for Postgres specifically, it's less than 5 minutes to do so in AWS and you get replication, disaster recovery and basic monitoring all included.

I hated having to deal with PostgreSQL on bare metal.

To answer your questions, in case someone else asks these as well and wants answers:

> Does the IP of my cluster change? If so, how to make it static?

Use the DNS entry that AWS gives you as the "endpoint", done. I think you can pin a stable Elastic IP to RDS as well if you wish to expose your RDS DB to the Internet although I have really no idea why one would want that given potential security issues.

> Also, can non-AWS servers connect to it? No?

You can expose it to the Internet in the creation web UI. I think the default the assistant uses is to open it to 0.0.0.0/0, but the last time I did that was many years ago, so I hope that AWS asks you what you want these days.

> Then how to open up the firewall and allow it?

If the above does not, create a Security Group, assign the RDS server to that Security Group and create an Ingress rule that either only allows specific CIDRs or a blanket 0.0.0.0/0.

> And what happens if it uses too many resources? Does it shut down by itself?

It just gets dog slow if your I/O quota is exhausted, and it goes into an error state when the disk fills up. Expand your disk quota and the RDS database becomes accessible again.

> What if I wanna fine-tune a config parameter? Do I SSH into it? Can I edit it in the UI?

No SSH at all, not even for manually unfucking something; for that you need the assistance of AWS support - but in about six years I never had a database FUBAR itself.

As for config parameters, there's a UI for this called "parameter/option groups"; you can set almost all config parameters there, and you can use these groups as templates for other servers you need as well.
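
And if you'd rather script it than click through the console, the CLI equivalent is roughly this - a sketch, with made-up identifiers, CIDR and instance class:

    # security group that only allows Postgres from inside the VPC
    SG_ID=$(aws ec2 create-security-group --group-name rds-pg \
      --description "Postgres access" --vpc-id vpc-0123456789abcdef0 \
      --query GroupId --output text)
    aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
      --protocol tcp --port 5432 --cidr 10.0.0.0/16

    # the parameter group mentioned above, i.e. the postgresql.conf replacement
    aws rds create-db-parameter-group --db-parameter-group-name pg16-custom \
      --db-parameter-group-family postgres16 --description "tuned params"
    aws rds modify-db-parameter-group --db-parameter-group-name pg16-custom \
      --parameters "ParameterName=log_min_duration_statement,ParameterValue=500,ApplyMethod=immediate"

    # the instance itself: Multi-AZ, not exposed to the Internet, 7 days of backups
    aws rds create-db-instance --db-instance-identifier mydb \
      --engine postgres --db-instance-class db.t4g.medium --allocated-storage 50 \
      --master-username postgres --master-user-password 'changeme' \
      --vpc-security-group-ids "$SG_ID" --db-parameter-group-name pg16-custom \
      --multi-az --no-publicly-accessible --backup-retention-period 7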


This smells like "Dropbox is just rsync". No skin in the game; I think there are pros and cons to each, but a Postgres cluster can be as easy as a couple of clicks or an entry in a provisioning script. I don't believe you would be able to architect the same setup with a simple single-server SSH session and a simple bash script, unless you already wrote a bash script that magically provisions the cluster across various machines.


> As a self-hosting fan, I can't even fathom how hard it would be to even get started running a Postgres or Redis cluster on AWS. Like, where do I go? Do I search for Postgres? If so, where?

Anything you don't know how to do - or haven't even searched for - either sounds incredibly complex, or incredibly simple.


It is not as simple as you describe to set up HA multi-region Postgres.

If you don't care about HA, then sure everything becomes easy! Until you have a disaster to recover and realize that maybe you do care about HA. Or until you have an enterprise customer or compliance requirement that needs to understand your DR and continuity plans.

Yugabyte is the closest I've seen to achieving that simplicity with self-hosted multi-region and HA Postgres, and it is still quite a bit more involved than the steps you describe, and definitely more work than paying for their AWS service. (I just mention it instead of Aurora because there's no self-host process to compare directly there, as Aurora is proprietary.)


Did you try ChatGPT for step-by-step directions for an EC2-deployed database? It would be a great litmus test to see if it does proper security and lockdown in the process, and what options it suggests aside from the AWS-managed stuff.

It would be so useful to have an EC2/S3/etc compatible API that maps to a homelab. Again, something that Claude should allegedly be able to vibecode given the breadth of documentation, examples, and discussions on the AWS API.


Your comment seems much more in the vein of "I already learned how to do it this way, and I would have to learn something new to do it the other way."

Which is of course true, but it is true for all things. Provisioning a cluster in AWS takes a bit of research and learning, but so did learning how to set it up locally. I think most people who know how to do both will agree it is simpler to learn how to use the AWS version than learning how to self host it.


A fun one in the cloud is "when I upgrade to a new version of Postgres, how long is the downtime and what happens to my indexes?"


For AWS RDS, no big deal. Bare metal or Docker? Oh now THAT is a world of pain.

Seriously I despise PostgreSQL in particular in how fucking annoying it is to upgrade.
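
For anyone stuck doing it by hand, the bare-metal dance is usually pg_upgrade in link mode - a rough sketch, assuming Debian-style paths, a 15-to-16 jump, and that the new cluster has already been initdb'd:

    # stop both clusters first, then run as the postgres user
    pg_upgrade \
      --old-bindir /usr/lib/postgresql/15/bin \
      --new-bindir /usr/lib/postgresql/16/bin \
      --old-datadir /var/lib/postgresql/15/main \
      --new-datadir /var/lib/postgresql/16/main \
      --link    # hard-links data files instead of copying: fast, but no going back to the old cluster
    # afterwards: planner statistics are not carried over, so re-run ANALYZE, and reindex if
    # collation versions changed. On Debian/Ubuntu, where config lives in /etc/postgresql,
    # the pg_upgradecluster wrapper handles most of this for you.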


Yep. I know folks running their own clusters on AWS EC2 instead of RDS. They're still 3 or 4 versions back because upgrading Postgres is a PITA.


If you can self host postgres, you'll find "managing" RDS to be a walk in the park.


If you are talking about RDS and ElastiCache, it's definitely NOT a few clicks if you want it secure and production-ready, according to AWS itself in their docs and training.

And before someone says Lightsail: it is not meant for high availability/infinite scale.


> I've never set up a highly available Postgres and Redis cluster on dedicated hardware, but I can not imagine it's easier than doing it in AWS which is only a few clicks and I don't have to worry about OS upgrades and patches

Last I checked, Stack Overflow and all of the Stack Exchange sites are hosted on a single server. The people who actually need to handle more traffic than that are in the 0.1% category, so I question your implicit assumption that you actually need a Postgres and Redis cluster, or that this represents any kind of typical need.


SO was hosted on a single rack last I checked, not a single box. At the time they had an MS SQL cluster.

Also, databases can easily see a ton of internal traffic. Think internal logistics/operations/analytics. Even a medium size company can have a huge amount of data, such as tracking every item purchased and sold for a retail chain.


They use multiple servers for redundancy, but they are using only 5-10% capacity per [1], so they say they could run on a single server given these numbers. Seems like they've since moved to the cloud though [2].

[1] https://www.datacenterdynamics.com/en/news/stack-overflow-st...

[2] https://stackoverflow.blog/2025/08/28/moving-the-public-stac...


If you don’t find AWS complicated you really haven’t used AWS.


If you were personally paying the bill, you'd probably choose the self host on cost alone. Deploying a DB with HA and offsite backups is not hard at all.


I have done many postgres deploys on bare metal. The IOPS and storage space saved (zfs compression because psql is meh) is huge. I regularly used hosted dbs but largely for toy DBs in GBs not TBs.

Anyway, it is not hard, and controlling upgrades saves so much time. Having a client's DB force-upgraded when there is no budget for it sucks.

Anyway, I encourage you to learn/try it when you have the opportunity.


> I've never set up a highly available Postgres and Redis cluster on dedicated hardware, but I can not imagine it's easier than doing it in AWS which is only a few clicks

I haven't ever set up AWS Postgres and Redis, and I know it's more than a few clicks. There is simply basic information that you need to link between services; it does not matter if it's cloud or hardware, you still need to do the same steps, be it from the CLI or a web interface.

And frankly, these days with LLMs, there's no excuse anymore. You can literally ask an LLM to do the steps and explain them to you, and you're off to the races.

> I don't have to worry about OS upgrades and patches

Single command and reboot...
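
On a Debian-ish box, for example, that looks roughly like:

    apt-get update && apt-get -y dist-upgrade && reboot    # or let unattended-upgrades handle routine patches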

> Or a highly available load balancer with infinite scale.

Unless you're Google, overrated...

You literally rent a load balancer from places like Hetzner for 10 bucks, and if you're old-fashioned, you can even do DNS balancing.

Or you simply rent a server with 10x the performance of what Amazon gives you (for the same price or less), and you do not need a load balancer. I mean, for 200 bucks you rent a 48-core/96-thread server at Hetzner... Who needs a load balancer again? You will do millions of requests on a single machine.


For anything "serious", you'll want a load balancer for high availability, even if there's no performance need. What happens when your large server needs an OS upgrade or the power supply melts down?


Well you can have managed resources on premises.

It costs people and automation.


People are usually the biggest cost in any organisation. If you can run all your systems without the sysadmins & netadmins required to keep it all upright (especially at expensive times like weekends or run up to Black Friday/Xmas), you can save yourself a lot more than the extra it'll cost to get a cloud provider to do it all for you.


Every large organization that is all in on cloud I have worked at has several teams doing cloud work exclusively (CICD, Devops, SRE, etc), but every individual team is spending significant amounts of their time doing cloud development on top of that work.


This. There's a lot of talk of 'oh you will spend so much time managing your own hardware' when I've found in practice it's much less time than wrangling the cloud infrastructure. (Especially since the alternatives are usually still a hosting provider that mean you don't have to physically touch the hardware at all, though frankly that's often also an overblown amount of time. The building/internet/cooling is what costs money but there's already a wide array of co-location companies set up to provide exactly that)


I think you are very right, and to be specific: IAM roles, connecting security groups, terraform plan/apply cycles, running Atlantis through GitHub - all that takes tremendous amounts of time and requires understanding a very large set of technologies on top of the basic networking/security/Postgres knowledge.


As for the cost to run data centers for a large company that is past the co-location phase, I am not sure where those calculations come out. But yeah, in my experience, running even a fairly large number of bare-metal *nix servers in colocation facilities is really not that time consuming.


I can’t believe this cloud propaganda remains so pervasive. You’re just paying DevOps and “cloud architects” instead.


Exactly. It's sad that we have been brainwashed by the cloud propaganda for long enough now. Everyone and their mother thinks that to set up anything in production, you need cloud, otherwise it is amateurish. Sad.


Exactly, for the narrowly defined condition of running k8s on DigitalOcean with a managed control plane compared to Hetzner bare metal:

AWS and DigitalOcean = $559.36 monthly vs Hetzner = $132.96. The cost of an engineer to set up and maintain a bare-metal k8s cluster is going to far exceed the roughly $400 monthly savings.

If you run things yourself and can invest sweat equity, this makes some sense. But for any company with a payroll this does not math out.


That argument is compelling only at a first glance IMO. If you take a look at it another way then:

1. The self-hosting sweat and nerves are spent only once, 80% of them anyway (you still have to maintain every now and then e.g. upgrade).

2. The cloud setup will require babysitting as well and as such the argument that you only pay someone salary when self-hosting does not hold water.

Ultimately it's a tradeoff between (a) the short- or long-term thinking of leadership, (b) in-house expertise and (c) how much money are you willing to throw at the problem for the promised shorter timelines -- and that one is assuming you'll find high-quality cloud hosting engineers which, believe me, is far from a given.


Wouldn't you want someone watching over cloud infra at those times too? So maybe slightly less, but still need some people being ready.


Yeah I always just kinda laugh at these comparisons, because it's usually coming from tech people who don't appreciate how much more valuable people's time is than raw opex. It's like saying, you know it's really dumb that we spend $4000 on Macbooks for everyone, we could just make everyone use Linux desktops and save a ton of money.


> It's like saying, you know it's really dumb that we spend $4000 on Macbooks for everyone, we could just make everyone use Linux desktops and save a ton of money.

Ohh idk if this is the best comparison, due to just how much nuance bubbles up.

If you have to manage those devices, Windows with Active Directory and especially Group Policy works well. If you just have to use the devices, then it depends on what you do - for some dev work, Linux distros are the best, hands down. Oftentimes Windows will have the largest ecosystem and the widest software support (while also being a bit of a mess). In all of the time I've had my MacBook I really haven't found what it excels at, aside from great build quality and battery life. It feels like one of those Linux distros that do things differently just for the sake of it: even the keyboard layout, and the mouse acceleration feels the most sluggish (Linux distros feel the best, Windows is okay) even if the trackpad is fine, plus stuff like needing DiscreteScroll and Rectangle and some other tools to make generic hardware feel okay (or even make multi-display work); maybe creative software is great there.

It’s the kind of comparison that derails itself in the mind of your average nerd.

But I get the point, the correct tool for the job and all that.


Sorry for off-topic but IMO MacBooks started losing value hard since the release of macOS Tahoe.

They were super fast; now parts of them are sluggish.

As much as people hate to hear it, Apple is finished. They peaked and have nowhere to go. The AI bubble is not going to last more than another 1-3 years, and Apple's inability to make a stable OS upgrade that doesn't ruin their machines' performance puts them in a corner.

Combine this with the fact that MS announced end of support for Windows 10, and both these corporations ironically start to make a strong case for Linux.

Is the Linux desktop quite there? Maybe not fully, but it's IMO pushing beyond 80%, and people who don't like Windows and macOS anymore are starting to weigh their options.


If "cloud" took zero time, then sure.

It actually takes a lot of time.


"It's actually really easy to set up Postgres with high availability and multi-region backups and pump logs to a central log source (which is also self-hosted)" is more or less equivalent to "it's actually really easy to set up Linux and use it as a desktop"

In fact I'd wager a lot more people have used Linux than set up a proper redundant SQL database


Honestly, I don't see a big difference between learning the arcane non-standard, non-portable incantations needed to configure and use various forks of standard utilities running on the $CLOUD_PROVIDER, and learning to configure and run the actual service that is portable and completely standard.

Okay, I lied. The latter seems much more useful and sane.


What is this?!

You are self-managing expensive dedicated hardware in form of MacBooks, instead of renting Azure Windows VM's?!

Shame!


Don't be silly - the MacBook Pros are just used to RDP to the Azure Windows VMs ;)


That's how they can get away with such seemingly high prices.


What is more likely to fail? The hardware managed by Hetzner or your product?

I'm not saying that you won't experience hardware failures, I am just saying that you also need to remember that if you want your product to keep working over the weekend then you must have someone ready to fix it over the weekend.


Cloud providers and even cloudflare go down regularly. Relax.


Sure - but when AWS goes down, Amazon fixes it, even on the weekends. If you self-host, you need to pay a person to be on call to fix it.


Not only that. When your self-hosted setup goes down, your customers complain that you are down. When AWS goes down, your customers complain that the internet is down.


AWS doesn't have to pay people (LOTS OF PEOPLE) to keep things running over the weekends?

And they aren't...just passing those costs on to their customers?


They are of course, but it's amortized over many users. If you're a small company, it's hard to hire one-tenth of an SRE.


Not every business needs that kind of uptime.

How often is GitHub down? We are all just fine without it for a while.


I mean, yes, but also I get "3 nines" uptime by running a website on a box connected to my ISP at my house. (It would easily be 4 or 5 nines if I also had a stable power grid...)

There are a lot, a lot of websites where downtime just... doesn't matter. Yes, it adds up eventually, but if you go to Twitter and it's down again, you just come back later.


"3 nines" is around 8 hours of downtime a year. If you can get that without a UPS or generator, you already have a stable power grid.


Except you now have your developers chasing their own tails figuring out how to fit the square peg in the round hole without bankrupting the company. The cloud didn't save time, it just replaced the wheels for the hamsters.


Right, because cloud providers take care of it all. /s Cloud engineers are more expensive than traditional sysadmins.


I'm a designer with enough front-end knowledge to lead front-end dev when needed.

To someone like me, especially on solo projects, using infra that effectively isolates me from the concerns (and risks) of lower-level devops absolutely makes sense. But I welcome the choice because of my level of competence.

The trap is scaling an org by using that same shortcut until you're bound to it by built-up complexity or a persistent lack of skill/concern in the team. Then you're never really equipped to reevaluate the decision.


The benefit of cloud has always been that it allows the company to trade capex for opex. From an engineering perspective, it trades scalability for complexity, but this is a secondary effect compared to the former tradeoff.


"trade capex for opex"

This has nothing to do with cloud. Businesses have forever turned IT expenses from capex to opex. We called this "operating leases".


I’ve heard this a lot, but… doesn’t Hetzner do the same?


Hetzner is also a cloud. You avoid buying hardware, you rent it instead. You can rent either VMs or dedicated servers, but in both cases you own nothing.


If everything is properly done, it should be next to trivial to add a server. When I was working on that, we had a written procedure; when followed strictly, it would take less than an hour.


If you’re just running some CRUD web service, then you could certainly find significantly cheaper hosting in a data center or similar, but also if that’s the case your hosting bill is probably a very small cost either way (relative to other business expenses).

> You virtually never need scalability so fast that you don't have time to add another server

What do you mean by “time to add another server?” Are you thinking about a minute or two to spin up some on-demand server using an API? Or are you talking about multiple business days to physically procure and install another server?

The former is fine, but I don’t know of any provider that gives me bare metal machines with beefy GPUs in a matter of minutes for low cost.


Weeks. I'm talking about multiple business weeks to spin up a new server. Sure, in a pinch I can do it in a weekend, but adding up all the stakeholders, talking it over, and doing things right, it takes weeks. That's a normal timespan for a significant chunk of extra power - a modern-day server from Hetzner comes with over 1 TB of RAM and around 100 cores. This is also where all the reserve capacity comes from - you actually do have this kind of time to prepare.

Sure, there are scenarios where you need capacity faster and it's not your fault. Can't think of any offhand, but I imagine there are. It's perfectly fine for them to use cloud.


It’s kinda good if your requirements might quadruple or disappear tonight or tomorrow, but you should always have a plan to port to reserved / purchased capacity.



