
Where "10x less money" is still less than the cost of a single developer. If this needs any extra work then it's just been busy work and change for the sake of it.

I'm all for monoliths and hosted solutions, but let's not pretend that saving $1k / month is going to make or break most businesses.

What OP appears to have done is take on risk to save costs.

Another thing is, even when the cloud goes down, the story is "Global outages caused by Azure/AWS/GCloud/etc going down", and people are generally understanding.

If you have an outage of your machine, the story is "<Your Company> services down".



I’m a business owner. It’s not just about the cost. It’s also about the improvement in control, visibility and efficiency. This has long term compounding effects, such that well-architected in-house applications may be literally millions of times more efficient than the cloud. The cloud has an incentive to charge you for every megabyte that goes in and out of a wire, whereas if you own that wire, the megabyte is basically free. This enables you to do things that aren’t possible otherwise.


> If you have an outage of your machine, the story is "<Your Company> services down".

That is silly for two reasons:

1. Cloud doesn't save you from outages

2. Outages happen, and are usually very rare, whatever your host

That said, I have fewer outages on my machines than AWS has on theirs. I run a service for a client, on one server, that has had 100% uptime during workdays for 7 years.


> That said, I have fewer outages on my machines than AWS has on theirs.

It gets even better (or worse, depending on your viewpoint) if you factor in services you're dependent on. In my career I've probably had more outages due to external partners running their stuff on AWS than I ever did because our own servers were down.

That's not to say that AWS is bad, but it takes a very skilled administrator to do AWS correctly, and it costs a lot of money to get the required redundancy. The whole "I can't vacuum because US-EAST-1 is down" happens because someone didn't want to pay what it costs to do redundancy in AWS.


> someone didn't want to pay what it costs to do redundancy in AWS

Seems a little pointless to pay a premium on everything else then.


Losing power or connectivity because the grid or the internet goes down is probably the main reason for outages, which is why small companies don't want to self-host.


What? UPS boards are cheap. LiFePO4 batteries are cheap. Metered ATS PDUs are cheap.

I've had far less downtime (read: zero) from my custom power redundancy solutions on my homelab than from power issues with my colo provider.

Small companies would be far better off with local power redundancy than some gargantuan complex industrial Liebert solution with a 4 hour SLA.


True story ^^^^^

Add to this that you can have outages due to operational complexity, which tends to happen in complex cloud-native setups.


> If you have an outage of your machine, the story is "<Your Company> services down".

Yes. Have you attributed a cost to these stories?

Downtime is rare no matter what kind of infra you have. When you get downtime, fix it and the business will continue. In my experience, people way overestimate how bad an outage is for many services. People aren't going to your competitor because of a single outage. 98% of your customers probably won't even notice it. If you fix it and communicate honestly with your customers, the amount of money you lose is minimal.


He's an indie developer from the looks of it. $1k is a decent sum of money and maybe the effort is worth it.


Well, it depends. There are companies and individuals with tight budgets. And if you set something like that up yourself, in my experience it needs very little maintenance, if done right. My servers have been running without interruptions or interventions for months now. YMMV.


Until a disaster happens. If you’re running a commercial site that sells products and it goes down… that translates to money lost.

If you were paying 15k a year to put a portfolio site online, then I can see where it can be too much.


It's not like cloud providers don't have outages either. And for disasters you should always have backups and a migration plan, even if you are in the cloud, no? I'm not preaching against cloud, just saying that there are some cases where going bare metal is a better option instead of using cloud by default "because everyone does it".


I'm not arguing for or against cloud, but there are more costs to running bare metal than it would seem.

I'm a sysadmin and have run schools on bare metal and in the cloud. There are things like drive failures, hardware failures, connectivity issues, networking, and user access, all of which are much easier to deal with in the cloud. Much easier to spin things up or down if you no longer need them.

If you're running networked storage, then that needs to have both a fail-over node and backup solution. You probably need to run extra hardware, cabling, etc. Cloud trivializes this.

If you're saving 15k in terms of hard cost, but spending 55k/yr for a sysadmin on-prem to maintain, then you're not saving money at all.

A disaster can be as simple as a failed disk, or overheated server because you're not a sysadmin and you put the server in a cabinet (I've run into this, dealing with faculty). Dead computer, no lessons for the week - lost time for the school and the professor in question.

Every place is different; you have to do a cost analysis and it's not as simple as "I saved 10x!"


Just to be clear, I was not talking about on-prem bare metal, but about hosted bare metal from providers like Hetzner or OVH, like the guy in the article has. Speaking of drive failures, cables and other hardware failures: the provider takes care of that and replaces failed parts.


This is the issue in cloud vs. metal discussions online.

All the time I think I'm reading about apples and then realize people were talking about bananas.


> Much easier to spin up or down if you no longer need them.

It takes 5 minutes to set up a Docker Swarm Mode cluster. It takes maybe 15 minutes for k3s or microk8s. After that, auto-scaling is dead simple, and no MORE complex than some shitty vendor locked-in cloud solution.
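For reference, a minimal Swarm setup really is just a couple of commands (a sketch only; Docker is assumed to be installed on each node, and the IP is a placeholder):

    # on the first node, make it a manager
    docker swarm init --advertise-addr 192.168.1.10
    # this prints a "docker swarm join --token ..." command; run it on each worker
    # then run a replicated service across the cluster
    docker service create --name web --replicas 3 -p 80:80 nginx

Scaling from there is a one-liner: docker service scale web=10.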

> that needs to have both a fail-over node and backup solution

ZFS pool replication, Ceph, GlusterFS, etc. Lots of options here. These are long-solved problems.
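As a concrete sketch of the ZFS option (the dataset and host names here are made up): snapshot the dataset, stream it to a standby box over ssh, and later runs send only the delta.

    # initial full replication to the standby machine
    zfs snapshot tank/data@snap1
    zfs send tank/data@snap1 | ssh standby zfs recv -F backup/data
    # later: send only the changes between snapshots
    zfs snapshot tank/data@snap2
    zfs send -i tank/data@snap1 tank/data@snap2 | ssh standby zfs recv backup/data

Ceph and GlusterFS solve the same failover need at the cluster level instead of per dataset.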

> A disaster can be as simple as a failed disk

Right, which is why you design your on-prem cluster with N+2 redundancy in the first place, and with a locked cabinet with spare parts. Cattle not pets, and all that. Do you think your EBS storage never fails? You'd need to do exactly the same thing in the cloud, anyway.

> but spending 55k/yr for a sysadmin on-prem to maintain

First, if you're paying only 55k for sysadmins, you should be planning to fail anyway. Competence is compensated quite a bit north of there.

Second, assuming the context is small business, you're going to have role crossover anyway, it's inevitable - chances are that your developer(s) is(are) administering this. Not every business is Facebook.


> If you’re running a commercial site that sells products and it goes down… that translates to money lost.

True! It's also true that if you control your own servers, then you can fix the problem yourself instead of waiting until some provider somewhere gets around to it.


>Until a disaster happens.

Why not go to the cloud when the disaster happens, while you fix it, then shut the cloud down once the disaster is fixed?


> Where "10x less money" is still less than the cost of a single developer.

Okay? Did you factor in no longer needing the "cloud administrator", or whatever it is called now?

When your cloud goes down because a payment got blocked, or your account got suspended for suspicious activity (pressing F5 in the browser) or something similar, do you still get understanding?


I think I agree. I host my own stuff on a home server, and I love that, but I'd never consider for a moment that my employer or clients should do anything similar.

If you've got some spare time and the skills, alright. This can be a great way to save money. If you're especially confident in your ability to make this work and your margins are very thin, this might even be a wise move. You really can save a lot.

But what if you get slammed with traffic? Can you scale in any direction? Do you have a means of balancing load with your $1000 server? How will you ensure it's secure? What if it catches fire (figuratively or literally)?

The cloud does some useful stuff. You shouldn't always pay with your limbs for that, but in some cases, that stability, redundancy, scalability, and flexibility is worth every penny.


Getting slammed with traffic on a cloud system is the stuff of nightmares. Suddenly you wake up with a huge bill, and the tools to manage and restrict cost are miserable in cloud systems, because that is something the providers don't find useful. It's extremely cheap to get a system with a high-speed link that, for most use cases, you'll never come close to saturating. In the unlikely event you ever do, something is wrong, and your service going slow or failing is better than trying to keep up and getting a huge bill out of the blue. Maybe not for every situation, but for most.

Most of the truly skilled sysadmins I know recommend burst-to-the-cloud rather than pure cloud.


I only self-host stuff I can live without if it goes down for a week or two.

So no email, no important documents etc. live on my self-hosted services.

If the motherboard on my Unraid server shorted out right now, I'd be slightly inconvenienced, but I wouldn't lose sleep over it. Everything actually important is (also) on a service I pay monthly for.


Totally agree. Of course there are savings in monthly costs and perhaps a lot of learning along the way (the first time), but two factors are being missed here: first, the cost of your time spent on this, and second, what I've been calling Day-2 costs: backups, disaster recovery, scaling, and what to do when a hard drive fails (they do, believe me!).


> backups, disaster recovery ...

Do you mean that people using AWS/Azure/GCP/etc. don't need to take care of these?


Sometimes I wonder if people underestimate the effort and hours that go into getting many of these functionalities to work on whatever cloud platform they're on, and then into testing them or fixing issues when something unexpected happens...


No, they do, but the amount of work required is much less most of the time. You can always define the problem in such a way that there is no difference between a cloud and non-cloud solution, but after 12 years in the business of deploying to cloud and non-cloud, the types of issues I've seen are almost always easier to get around in the cloud.


It's really not. It takes about the same amount of time to add an rsync job to my crontab as it does to click through the backup scheduling options in the Linode control panel.
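To make that concrete, the self-hosted version is roughly one crontab entry (paths and hostname below are placeholders):

    # nightly at 02:30: push an incremental mirror of /srv/data to an offsite box
    30 2 * * *  rsync -az --delete /srv/data/ backup@offsite.example.com:/backups/data/

Swap in restic or borg if you want encrypted, deduplicated snapshots instead of a plain mirror.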

A lot of people here have fallen for the cloud marketing efforts. Nobody is saving any time.


Good point. Adding the cost of a team of professional cloud engineers, compared to a few underpaid sysadmins hosting stuff on-prem, the cost savings would be not 10x but 20x or 30x :)


Now all it needs is an outage on a Sunday evening and your savings for the next year will be eaten up by the people having to fix it at overtime rates on a weekend...

Yes, it's cheaper when everything is going well, it always is. That's not why we pay through the nose for AWS. We pay because Someone Else will be running around the colo facility swapping hardware and debugging networking issues if there's a problem.

Hiring a 24/7/365 redundant rotation of people doing that is a LOT more than $1000/month...


Exactly. If you work for $100 an hour, which would be on the cheap side for skilled devops people, and you put 100 hours into moving a lot of servers off the cloud, that's going to cost $10K. That would be a small project - a bit over two weeks for a single person. Then add support, regular maintenance, etc., and the costs keep adding up. And to get 24x7 uptime, you need skilled people on standby all the time, all at $100/hour/person. That's a lot of cost that you need to factor into your calculations. Mostly it doesn't make sense for small companies to be doing this.

With many companies, the realistic cost of moving off cloud in labor would be higher than years of cumulative bills for cloud hosting. Even if you don't value your own time, you might want to consider doing something more valuable with it than devops.



