For what it is worth, I am handling about 130k views and registering ~1k paying users per day with a single t2.large instance running node, redis and nginx behind a free tier Cloudflare proxy, a db.t2.small running postgres 14, plus CloudFront and S3 for hosting static assets.
Everything is recorded in the database, and a few pg_cron jobs aggregate the data for analytics and reporting every few minutes.
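For anyone unfamiliar with pg_cron, those aggregation jobs are just scheduled SQL statements registered inside Postgres. A minimal sketch of what one might look like — the table and column names here (`page_views`, `page_view_stats`, `viewed_at`) are made up for illustration, not the parent's actual schema:

```sql
-- Register a rollup job with pg_cron (requires the pg_cron extension).
-- Runs every 5 minutes using standard cron syntax.
SELECT cron.schedule(
  'rollup-page-views',   -- job name
  '*/5 * * * *',         -- every 5 minutes
  $$
    -- Fold recent raw events into an hourly stats table.
    INSERT INTO page_view_stats (bucket, views)
    SELECT date_trunc('hour', viewed_at), count(*)
    FROM page_views
    WHERE viewed_at >= now() - interval '5 minutes'
    GROUP BY 1
    ON CONFLICT (bucket) DO UPDATE
      SET views = page_view_stats.views + EXCLUDED.views;
  $$
);
```

The nice part of this approach is that the scheduling lives in the database itself, so there's no separate cron box or worker process to keep alive.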
As someone who isn't a programmer (mechanical engineer) but has some programming ability, the idea of designing something like the article author did and sharing it with the world intrigues me.
How much does a setup like you described cost per month? (Or per x/number of users, not sure how pricing works in this realm)
The article describes a pretty simple design, and it's how most people did it before cloud platforms. The article is pretty much saying "you don't have to go AWS/Azure". Although back in the day we didn't really have managed DB instances — you'd often just run the DB on the same server as the app.
The parent, however, is paying a lot more for a t2.large.
For that kind of money you could almost get a dedicated machine that would be 10x more powerful than a t2.large, though with the hassle of maintenance (which isn't a lot of hassle in reality).
The advantage of AWS is not price. AWS is quite a bit more expensive for the same performance. The advantage is that you can scale up and down on demand, plus the built-in maintenance and deployment pipelines.
So you can save money if you have bursty traffic, or save dev time because it's super easy to deploy (well, once you learn how).
Cloud platforms can also have weird limits and gotchas you can accidentally hit, like your DB suddenly slowing down because a temporarily heavy CPU query has burned through your CPU credits — the cheaper burstable instances don't actually give you much sustained CPU.