
PLEASE DON'T DOWNVOTE ME TO HELL. THIS IS A DISCLAIMER: I AM JUST SHARING WHAT I'VE READ, I AM NOT CLAIMING IT AS FACT.

...ahem...

When I was researching this a few years ago, I read some really long, in-depth, scathing posts about OpenStack. One of them explicitly called it a childish set of glued-together Python scripts that fall apart very quickly when you get off the happy path.

OTOH, opinions on Proxmox were very measured.

Yeah, I think what makes solutions like Proxmox better is that there's no reason to try and copy Amazon's public cloud in your own cloud.

I find that the main paradigms are:

1. Run something in a VM

2. Run something in a container (Docker Compose, Portainer, or something similar)

3. Run a Kubernetes cluster.

Then if you need something that Amazon offers, you don't implement it the way OpenStack does; you just run that specific service on top of options #1-3 (a sketch follows below).
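
For example, instead of standing up OpenStack's Swift for object storage, you'd just run an S3-compatible server under option #2. A minimal sketch using MinIO; the credentials, ports, and volume path here are illustrative:

    # docker-compose.yml: a self-hosted S3-compatible store (option #2)
    services:
      minio:
        image: minio/minio
        command: server /data --console-address ":9001"
        environment:
          MINIO_ROOT_USER: admin          # illustrative credentials only
          MINIO_ROOT_PASSWORD: change-me
        ports:
          - "9000:9000"   # S3 API
          - "9001:9001"   # web console
        volumes:
          - ./minio-data:/data

Bring it up with `docker compose up -d` and point any S3 client at port 9000.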


I think the utility really comes from getting an accessible control plane over your company's data centers/server rack.

A Kubernetes cluster doesn't really solve the storage-plane issue, or provide a unified dashboard for users to interact with it easily.

Something like Harvester is pretty close, IMO, to being a Kubernetes-based alternative to Proxmox/OpenStack.
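
Harvester builds on KubeVirt, so VMs become ordinary Kubernetes objects you can manage like anything else in the cluster. A rough sketch of such a manifest; the name and disk image are placeholders:

    # KubeVirt VirtualMachine: a VM managed as a k8s resource
    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: demo-vm                  # placeholder name
    spec:
      running: true
      template:
        spec:
          domain:
            devices:
              disks:
                - name: rootdisk
                  disk:
                    bus: virtio
            resources:
              requests:
                memory: 2Gi
          volumes:
            - name: rootdisk
              containerDisk:
                image: quay.io/kubevirt/cirros-container-disk-demo  # demo image

Apply it with kubectl and the cluster schedules the VM much like a pod.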


> When I was researching this a few years ago, I read some really long, in-depth, scathing posts about OpenStack. One of them explicitly called it a childish set of glued-together Python scripts that fall apart very quickly when you get off the happy path.

And according to every ex-Amazonian I've met, the core of AWS is a bunch of Perl scripts glued together.


It doesn't matter when there's an entire Amazon staff keeping it running.

I think you know as well as I do that it very much does matter. Even if you have an army of engineers around to fix things when they break, things still break.

I think the point is that for Amazon it's their own code, and they pay full-time staff to be familiar with the codebase, make improvements, and fix bugs. OpenStack is a product: the people deploying it are expected to be knowledgeable about it as users / "system integrators", but not as developers. So when the abstraction leaks (and for OpenStack the pipe has all but burst), it becomes a mess. Operators aren't expected to be digging around in the internals, and they have 5 other projects to work on.

That explains a lot

The reason there were so many commercial distributions of OpenStack was that setting it up reliably end to end was nearly impossible for most mere mortals.

Companies like Metacloud or Mirantis made a ton of money with little more than OpenStack installers and a good out-of-the-box default config, plus some solid monitoring and management tooling.
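
The gap those companies filled is visible in what even the community installers involve. A heavily abridged sketch with Kolla-Ansible (the inventory file and prep steps like editing globals.yml are omitted):

    # deploying OpenStack via kolla-ansible, heavily abridged
    pip install kolla-ansible
    kolla-genpwd                                   # generate service passwords
    kolla-ansible -i ./multinode bootstrap-servers # prep the target hosts
    kolla-ansible -i ./multinode prechecks         # catch config problems early
    kolla-ansible -i ./multinode deploy            # roll out all services

And that's the easy path; the hard part is everything those commands assume about networking, storage, and the inventory.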


This matches my personal experience having worked with OpenStack.

> One of them explicitly called it a childish set of glued-together Python scripts that fall apart very quickly when you get off the happy path.

A 'childish set of scripts' that manages (as of 2020) a few hundred thousand cores, 7,700 hypervisors, and 54,000 VMs at CERN:

* https://superuser.openinfra.org/articles/cern-openstack-upda...

The Proxmox folks themselves know (as of 2023) of Proxmox clusters as large as 51 nodes:

* https://forum.proxmox.com/threads/the-maximum-number-of-node...

So what scale do you need?


CERN is the biggest scientific facility in the world, with a huge IT group and their own IXP. Most places are not like that.

Heck, I work at a much smaller particle accelerator (https://ifmif.org) and have met the CERN guys, and they were the first to say that for our needs, OpenStack is absolutely overkill.


> Heck, I work at a much smaller particle accelerator (https://ifmif.org) and have met the CERN guys, and they were the first to say that for our needs, OpenStack is absolutely overkill.

I currently work in AI/ML HPC, and we use Proxmox for our non-compute infrastructure (LDAP, SMTP, SSH jump boxes). I used to work in cancer-research HPC, where we used OpenStack across several dozen hypervisors to run a lot of infra/service instances/VMs.

I think there are two things that determine which system should be looked at first: scale and (multi-)tenancy. Beyond one (maybe two) dozen hypervisors, I could really see scaling/management issues with Proxmox; I personally wouldn't want to do it (though I'm sure many have). Next, if you have a number of internal groups that need allocated/limited resource assignments, then OpenStack tenants are a good way to do this (especially if there are chargebacks, or just general tracking/accounting).
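
For what it's worth, the tenancy piece is mostly just per-project quotas in OpenStack. A small sketch with the standard openstack CLI; the project name and limits are made up:

    # create a tenant and cap what it can consume (illustrative numbers)
    openstack project create genomics
    openstack quota set --instances 20 --cores 100 --ram 204800 genomics
    openstack quota show genomics    # confirm the limits

Usage against those quotas is then easy to pull for chargeback/accounting.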


vast vast (vaaast) majority of businesses are in that 1-100 node range.

> vast vast (vaaast) majority of businesses are in that 1-100 node range.

Yes, but even the Proxmox folks themselves say the most they've seen is 51:

* https://forum.proxmox.com/threads/the-maximum-number-of-node...

I'm happily running some Proxmox now, and wouldn't want to go past a dozen hypervisors or so. At least not in one cluster: that's partially what PDM 1.0 is probably about.
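
For context on why cluster size matters: every Proxmox node joins a single corosync quorum, formed and managed with pvecm, and that membership layer is the practical scaling limit. Roughly (the cluster name and address are placeholders):

    # on the first node: create the cluster
    pvecm create mycluster
    # on each additional node: join the existing cluster
    pvecm add <ip-of-first-node>
    # check membership and quorum
    pvecm status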

I have run OpenStack with many dozens of hypervisors (plus dedicated, non-hyperconverged Ceph servers) though.
