Fantastic post-mortem that I regularly share with others. A great cautionary tale on the importance of backup testing procedures (i.e. why "set and forget" backups are not sufficient).
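The core of a backup test is simply doing a real restore and verifying the result. A minimal sketch (the paths and the `diff` check are illustrative, not anything from the post):

```shell
#!/bin/sh
# Restore-test sketch: back up a directory, restore it elsewhere,
# and verify the restored copy matches the original.
set -eu

src=$(mktemp -d)      # stand-in for the real data
restore=$(mktemp -d)  # where the backup gets restored

echo "important data" > "$src/data.txt"

# Take the backup.
tar -czf /tmp/backup.tar.gz -C "$src" .

# The part "set and forget" skips: actually restore and verify.
tar -xzf /tmp/backup.tar.gz -C "$restore"
diff -r "$src" "$restore" && echo "restore verified"
```

For a database the same idea applies: restore the dump into a scratch instance and run sanity queries against it, rather than trusting that the dump file exists.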
Funnily enough, I used to think that too, but overall, having the people who actually build the software run a hosted version ends up being better value IMO.
Our self-hosted GitLab instance needed a lot of hand-holding over the years, including occasional memory upgrades, and involved much more work than initially anticipated. Also keep in mind that you need to spend time keeping the thing up to date to get the latest features, and quite a few good ones have been added over the years.
When our server is down, I know why. Out of Memory, Storage is full. Service is down. We can do something about it.
If a hosted service is down, there is nothing to do but wait.
> Out of Memory, Storage is full. Service is down.
Is that all you have seen? I have seen many self-hosted services fail in our company, and the answer is almost never any of these simple things, but complex ones: data loss, data corruption, random restarts, network partitions, configuration sync issues, etc. That is why companies pay other companies for critical things like GitLab even when self-hosting is an option.
Issues don't necessarily mean downtime, though, and GitHub has many more features. For example, an issue noted here was stale commits of up to 7 minutes in 9% of new pull requests... I'm willing to bet most platforms wouldn't even acknowledge such issues.