Deployment and infrastructure for a bootstrapped webapp with 150k monthly visits (casparwre.de)
252 points by wolfspaw on Sept 26, 2022 | 161 comments


Great write-up. Note that the author is serving only 70 requests per second at peak, with 15 thousand registered users and 3 thousand in monthly revenue. This just shows that you don't always have to plan to scale to thousands (or millions) of requests per second.

This blue-green deployment workflow reminds me of a similar setup used by the creator of SongRender[1], which I found out about via the Running in Production podcast[2]. One thing to be aware of with smaller VPS providers like Linode, DO, and Vultr is that they charge per hour rather than per second, so if you boot up a new VM every time you deploy, you're charged for a full hour each time.

[1] https://jake.nyc/words/bluegreen-deploys-and-immutable-infra...

[2] https://runninginproduction.com/podcast/83-songrender-lets-y...


I think we can safely put docker images (not k8s) in the "boring technology" category now. You don't need k8s or anything really. I like docker-compose because it restarts containers. Doesn't need to be fancy.


I needed something that would restart containers automatically when I pushed to a branch, so I wrote a few lines of code to do it:

https://gitlab.com/stavros/harbormaster

As far as PaaSes go, it's probably the simplest, and works really well.


> It also cleanly stores data for all apps in a single data/ directory, so you always have one directory that holds all the state, which you can easily back up and restore.

Ha! I've been storing my data in /data for ages. The only difference is that at first I bothered with tinkering with datadir paths in configuration files; nowadays I just symlink /var/lib/whatsitsname to /data/whatsitsname. I'm lazy and boring.


Funny, I did something similar but much simpler: a cron job that ran every 5 minutes doing a git pull on the production branch. Since it was a PHP stack, the new version was live instantly, with no rebuild or restart of any service needed.

It's basically a one-liner in bash. KISS
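
Roughly (paths illustrative):

    # crontab entry: pull the production branch every 5 minutes
    */5 * * * * cd /var/www/app && git pull --ff-only origin production >> /var/log/deploy.log 2>&1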


That's good when you only want updates, but I wanted to be able to launch new services on the server by just pushing a new YAML file to git. Harbormaster does it wonderfully, and it lets me see exactly what's running on the server and how with one look at the YAML.

I can also back everything up by just copying one directory, so that's great too.
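
For flavor, the config is a single YAML file listing the apps to run -- roughly this shape (from memory, so treat the field names as illustrative and check the repo for the real schema):

    # harbormaster.yml -- one file describing everything on the server
    apps:
      myapp:
        url: https://gitlab.com/someuser/myapp.git
        branch: main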


This to me is boring and simple technology. Well done.


Thank you! I really like it, it's been working great for a while now, with no hiccups.


Disagree. The over-engineering rot starts with Docker. Solo devs do not need Docker. It's not a stability issue, it's an appropriateness issue.


If you produce a static binary and can copy it to the server, sure, Docker might be overkill. But how do you reproduce an ensemble of services, at exact versions, in the prod environment? Even just having a stable dev environment, by using Docker images with fixed versions for the database and other auxiliary services, is very valuable.
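
Even something this small already buys you a reproducible ensemble (image tags illustrative):

    # docker-compose.yml: pin exact versions so dev and prod agree
    services:
      db:
        image: postgres:14.5    # a fixed tag, never :latest
        volumes:
          - ./data/postgres:/var/lib/postgresql/data
      cache:
        image: redis:7.0.4
      app:
        build: .
        restart: unless-stopped
        depends_on: [db, cache]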


You can use something like Nix or Guix for example.

Bonus: it produces OCI images also.


While this is definitely a valid option I'm not sure that introducing Nix/Guix simplifies the stack compared to using Docker. Depends on the setup and experience, I guess.


I love Docker as a solo dev:

- builds and dependency management, all in one

- works “the same” on my laptop and server

- there’s an easy way to reset services when they break


How do you `docker push` from your laptop to the server with consumer broadband? I've heard docker times out after about 50-75%. Great fun.


This might be a "person in a country with a pretty fast internet connection" opinion, but I don't think I've ever had that issue. It might also depend on container size; pushing >1GB containers is probably more error-prone than pushing small(ish) ones.


Sounds like you need a better internet connection.


Why use Docker on a managed container service when you can just pay more to manage your own VMs, VPC, reverse proxy and blue/green deployment infrastructure?


Agree at a gut level, but look how productive people are. PS: check out podman.


Agreed.

Containers... just work? Actually, Docker just works. It is rock solid, and I'm delighted I've learned so much of it.

The only thing I'm missing in Docker is a bigger focus on on-the-fly image editing. I love these container development environments; I'd love to run a command inside the container and have it magically appear in the Dockerfile if it executed successfully, so I don't have to manually copy things over when I want reproducibility later.


Docker is incredibly stable, and since it turns 10 next year, OP will finally be able to migrate away from having a build step on production servers! ;-)


> I like docker-compose because it restarts containers.

Still sounds like over-engineering to me. It's trivial to make systemd supervise a service and automatically restart it on failure.
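
The whole thing is one unit file (service name and paths illustrative):

    # /etc/systemd/system/myapp.service
    [Unit]
    Description=My web app
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/myapp
    Restart=on-failure
    RestartSec=5

    [Install]
    WantedBy=multi-user.target

Then systemctl enable --now myapp, and systemd supervises it from there.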


Honestly they're both easy technologies to learn and master.


You can say the same for CI/CD. The author talks about integration tests; setting up a barebones "build this and deploy it" pipeline is fairly trivial and skips having to do it manually every time.


It's interesting that some engineers look at Docker and CI/CD and say "that's too complicated", while others look at a system without them and say "it needs to be simplified".


Inexperience, or a difference in experiences. I wouldn't necessarily advocate containers, as my experience has mostly been "this takes far longer than the gain" (given the OP's scenario).

Meanwhile, my experience is that a minimal CI/CD pipeline takes half an hour once you know how to deploy manually, and I lose far more by having to SSH in and run scripts myself than by telling a machine to do it for me (not just time, but mental satisfaction too).

Though I wouldn't call either fancy this year; they're just past the stage where the benefits drop sharply for individuals.


Containers are more than just Kubernetes


I definitely do not believe that Docker images are boring; they are full of constant issues and stuff you have to keep track of, on top of the fact that they are not composable in many ways.

Rootfs is the boring technology: its semantics are clear and most kernels support it.

Better: use systemd, which has become truly boring and restarts your services automatically; and if you don't like systemd, plenty of alternatives such as s6 exist.


For a few of my small, API-mostly apps, I'm partial to just embedding the statics into the binary and having one blob to run and deploy; go:embed makes it particularly easy.

A container for it is just a few files in etc plus the binary blob itself. Same with container-less deployment.
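
A minimal sketch of the pattern (directory name illustrative):

    package main

    import (
        "embed"
        "io/fs"
        "log"
        "net/http"
    )

    //go:embed static
    var staticFiles embed.FS

    func main() {
        // Strip the "static/" prefix so files serve from the site root.
        sub, err := fs.Sub(staticFiles, "static")
        if err != nil {
            log.Fatal(err)
        }
        http.Handle("/", http.FileServer(http.FS(sub)))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }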


When you're a 1 man show, you need to both save your own time, and compute costs.

To save your time, use the simplest thing you know how to use. Whatever you can set up easily and gets you serving a hello world on the internet.

To save compute, just don't do stupid things. In particular, you should consider what happens if this project isn't a runaway success. One day, you might want to leave it running in 'maintenance mode' with just a few thousand users. For that, you'd prefer to be running on a $10/month VPS than a $2000/month cloud setup which is going to require constant migrations etc.

Things like automatic failover and load balancing are seldom required for one-man shows: the failure rate of the cloud provider's hardware will be much lower than your own failure rate from screwing up or being unavailable for some reason.


I'm secretly a fan of the boring-tech statement. Sometimes all the new containerization and microservice paradigms just feel like an excuse to overengineer everything. Running solo and starting from scratch (no code, no infra, no cloud subscriptions) means you'll have to simplify and reduce moving parts.


Microservices are difficult to justify with a small team (or a single-person team, in this case). If your monolith has obvious modules that can be carved out, you _need_ to scale them independently, and for some reason you can't just deploy multiple copies of the monolith, then it may be worth doing microservices.

At large companies, the calculations are different. Even when monoliths would be the best fit, they may still carve out microservices. How else would they be able to ship their org chart? :)


I still believe the LAMP stack is the future for most non-realtime applications.

With PHP you just FTP-copy a file to the server and it works instantly: no builds, no compilation, no restarts. Compared to Node.js, the server feels a lot more robust by default, and an uncaught exception doesn't bring down the whole server until it's restarted.

The biggest plus: you can host a lot of applications on the same stack, and by default ALL parts of the stack (Apache, MySQL, PHP) handle concurrency and load distribution well. (With Node.js, for example, each application is mostly single-threaded by default; you have to manually spawn multiple Node.js instances/workers for it to properly use all the server's resources.)


Yes! Boring tech is working tech; never choose an untested technology if there are any real stakes in the work.


KISS but unironically


Just a small comment: blue/green usually implies some sort of load balancing. Here OP is just flipping a switch that changes a hostname and swaps the blue/green roles between staging and production.

Nothing wrong with that, though, and part of its genius is how simple it is.


You can still do that in this kind of setup: just put a load balancer like HAProxy on both nodes, configured to send traffic to both but prefer "the one that's local" by default. Then you can use the LB to move traffic as fast or as smoothly as you like, and it also masks failures like "app on the current node dies", since traffic near-instantly switches to the other node, at a small latency cost.

This also gives you a bit more resilience: if the app on a node dies (out of memory/disk/bug/whatever), the proxy just sends the traffic to the other node without having to wait for DNS to switch.
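
The relevant config is tiny -- mark the peer as a backup so it only gets traffic when the local app fails its health check (addresses illustrative):

    # haproxy.cfg fragment: prefer the local app, fail over to the peer
    backend app
        option httpchk GET /health
        server local 127.0.0.1:8000 check
        server peer 10.0.0.2:8000 check backup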


Well, the balancing here is done at the DNS level; a literal load balancer isn't required.


Blue/green usually involves moving traffic from blue <-> green gradually. You usually do this with a load balancer. That is all.


Usually, yes. But then some[one|thing] has to do the monitoring, gradually ramp up traffic, roll back, etc.

It's perfectly OK to flip the entire traffic over at once if you're a one-man shop and your customers will tolerate it.


I've never heard that gradually moving traffic was a requirement for B-G deployment (though it is helpful), just that you "flip a switch" between one running server and another.


Looks like all of you in this thread missed this line:

> I’ve already mentioned my 2 application servers. But the magic thing that makes it all possible is a floating IP address from DigitalOcean.

To the user nothing changes, including the IP.
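
And the "switch" itself is a single CLI call, something like (IP and droplet ID illustrative):

    # reassign the DigitalOcean floating IP to the other droplet
    doctl compute floating-ip-action assign 203.0.113.10 123456789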


For what it's worth, I'm handling about 130k views and registering ~1k paying users per day with a t2.large instance running node, redis and nginx behind a free-tier Cloudflare proxy, a db.t2.small running postgres 14, plus CloudFront and S3 for hosting static assets.

Everything is recorded in the database and a few pgcron jobs aggregate the data for analytics and reporting purposes every few minutes.


... Could you translate please?

As someone who isn't a programmer (mechanical engineer) but has some programming ability, the idea of designing something like the article author did and sharing it with the world intrigues me.

How much does a setup like you described cost per month? (Or per x/number of users, not sure how pricing works in this realm)


The article describes a pretty simple design, which is how most people used to do it before cloud platforms; it's pretty much saying "you don't have to go AWS/Azure". Although back in the day we didn't really have managed DB instances, you'd often just run the DB on the same server as the app.

The parent, however, is paying a lot more for a t2.large.

For that kind of money you could almost get a dedicated machine that would be 10x more powerful than a t2.large, but with the hassle of maintenance (though it's not a lot of hassle in reality).

The advantage of AWS is not price; AWS is quite a bit more expensive for the same performance. It's either that you can scale up and down on demand, or the built-in maintenance and deployment pipelines.

So you can save money if you have bursty traffic, or save dev time because it's super easy to deploy (well, once you learn how).

Cloud platforms can also have weird limits and gotchas you can accidentally hit, like your DB suddenly slowing down because a temporary CPU-heavy query is running, since they don't actually give you very much CPU on the cheaper tiers.


How many CPUs does your t2.large have, and how are you clustering Node across them?


Four. Using pm2. But it's overkill; this server is running other applications next to my node server.


I love this simple setup. Big fan. I also do everything simply. Add more workers? Just enable another systemd service; no containers. Let the host take care of internal networking. The biggest cost is probably the managed DB, but if you're netting $$ and want some convenience, why not?


How is adding another systemd service easier than starting another container in this instance?


It's simpler, not easier. Both are easy. But one is a lot less complicated in terms of the extra layers it introduces.


Well, for one, you don't need docker in the first place


I just prefer using whatever is already present instead of forcing another layer.


Using two servers, one for production and the other for staging/failover, then switching upon release is a neat technique.

Been using it for our API backends for about ten years.


I'm doing about 1M database writes per day. DB is sqlite3, server is a Hetzner instance + extra storage that costs about $4 / month total.

Computers are fast.


Fascinating.

I have the exact same number of visits and run the site on a boring PHP/MySQL setup from a cheap but very reliable €200/year shared host. Deployment is via git over ssh as well.


Well done! I work on antique brass clocks.


Just because it's an old method doesn't mean it's a bad method.


Simplicity always wins over complexity. Complexity is a liability, but it does make you feel smart.


> simplicity always wins over complexity

S/he wrote, on a hand-shaped microprocessor/wireless transmitter connected to a billion servers via 7 layers of network protocol. Ultimately her taps on the micro-capacitor-infused glass surface resulted in light pulses through cross-ocean cables, carrying her (encrypted) words to hundreds of people all over the world.


Is this sarcasm?


Not gonna lie, I was triggered by the lack of CI/CD and the shared database between staging and prod. But those concerns were very satisfactorily addressed. I'd miss some form of CI/CD if it were a team, but I suppose that for a single-person show, running tests locally is enough.

I do miss any mention of infrastructure as code. If shit goes tits-up with the infrastructure and everything was set up by clicking around in control panels, ad-hoc commands, and maybe a couple of scripts, your recovery time will be orders of magnitude bigger than it has to be.


Maybe I'm missing something here, but what are the advantages of having two identical servers with a floating IP that switches between them, instead of just running two instances of the app on the same server and switching between them by editing the nginx proxy config?
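
For concreteness, I mean something roughly like this (ports illustrative):

    # nginx: two app instances on one box; switch by swapping the
    # comments and running nginx -s reload
    upstream app {
        server 127.0.0.1:8001;    # blue (live)
        # server 127.0.0.1:8002;  # green (staging)
    }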


So you can do maintenance/kernel upgrades etc. on the server that's not in production at the moment.


Less work. You'd need a different context for environment variables, which adds another level of complexity. Likewise, if a dependency needs updating (SSL, etc.), you can do it safely on the idle machine without worrying about production traffic.


Eh, it's pretty trivial to do. The reason for two servers is high availability, not that it's easier in any meaningful way.

I run something similar on my personal site, and it's literally just two identical configs with a different port/directory, plus some haproxy config.


Oh my, all the opinions again. Software is not, believe it or not, a true-or-false game. TIMTOWTDI, folks. This guy rocks a solid process that he's comfortable with and that works. I for one applaud him for it.


Something not mentioned that I'd like to know: what is the running cost?


He gives his server specs and his cloud provider; he most likely pays a bit less than $100 per server per month: https://www.digitalocean.com/products/droplets

Which is absolutely insane for a website that serves 200 visitors per hour.


8c/16g seems absolutely bonkers for a web/app server. I could understand if there were also a data & caching layer shoved onto the same box, but that's a LOT of memory for a Python app. That's more in line with "mystery meat ASP.NET app built by a couple of guys in Slovenia 12 years ago".


> mystery meat ASP.NET app built by a couple of guys in Slovenia 12 years ago

??


FastComments is on track to serve 1 billion page loads next year, and we spend less than $500/mo for US and EU deployments combined (plus about 15M uniques a month, and each user gets a live websocket connection).


Just to be clear, are you saying the costs are HIGH or LOW? That cost seems high given his relatively low traffic, but I don't know what sort of processing is going on in the background.


High. Although the assumption here is that there are no spikes… which might or might not be true, but for a web-app, 200 visits per hour is not much.


70 requests per second at peak-time


Probably that's why he already has white hair.


"Around 500 USD. I also use Sentry.io, Papertrail, Twillio and a few other tools that cost money."

From: https://www.reddit.com/r/Python/comments/xobba8/comment/iq05...


This seems to me a hefty bill for the traffic received. I run devops for a site with 1.5-2MM unique users a month and bursty traffic, and our monthly bill is just a couple hundred higher than his. And we are overprovisioned.


The author says the product is hosted on two VPS servers, a managed database, and a floating IP; I doubt that'll be a significant part of the costs.

If that bill includes (business) licenses for software and subscriptions, it's pretty reasonable. I'm sure it can be done cheaper and I'm sure the author has looked into this as well. Maybe it's not worth the effort, or maybe there's some other blocker preventing migration to cheaper alternatives.

Maybe $2,500 of monthly profit is enough for the author not to bother messing with the code base? Maybe the fact that the reported revenue has doubled [1] over two months has something to do with it?

[1]: https://casparwre.de/blog/python-to-code-a-saas/ vs https://casparwre.de/blog/webapp-python-deployment/


According to his bio (https://twitter.com/wrede) he makes about 1.5k a month, so the infra cost is hopefully not much more than that. For sure the labor is the biggest part of this operation.


If it were mine, I would most certainly opt for Ansible or something similar; the overhead of logging into a machine and doing everything by hand is more complicated and error-prone than a playbook would be (at least for me, as I keep forgetting all the steps ^^).

But who are we to judge? Impressive to earn 2.5k every month with it, kudos.


This is pretty close to my current (hypothetical) plan for how I'd stand up a small full-stack app as a solo dev. The only thing I hadn't thought about was blue/green deployment, which sounds great. Glad to see a real-world case study showing that the overall strategy works well.


> The trick is to separate the deployment of schema changes from application upgrades. So first apply a database refactoring to change the schema to support both the new and old version of the application

How do you deploy a schema that serves both old and new versions? Anyone got any resources on this?


You only do additive changes: only add columns etc., never remove them... until all your old application code is no longer running.


And you have to weaken your schema to allow NULLs for the duration that old and new code are both running (and watch out for possible foreign key constraint failures).
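
So a column rename, for example, becomes a multi-step dance (table/column names illustrative):

    -- Step 1, before the new app version ships: additive and nullable
    ALTER TABLE users ADD COLUMN display_name text;

    -- Step 2, once the new code is writing the column: backfill
    UPDATE users SET display_name = username WHERE display_name IS NULL;

    -- Step 3, only after no old code is running: tighten and drop
    ALTER TABLE users ALTER COLUMN display_name SET NOT NULL;
    ALTER TABLE users DROP COLUMN username;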


Thanks. I thought it would be something... smarter. Not sure I like that, seems to come with a lot of caveats.


"smarter" means more code and that's rarely more maintainable.

It's what you have to do if you want less downtime. Normally it shouldn't span much more than 2-3 application versions, and it's rare that the annoying kind (say, writing the same data to an "old" and a "new" column) happens.


> until all your old application code is no longer running.

Or until it's been disabled by feature flags.


The application needs to be developed with this constraint in mind. I’ve spent quite a lot of time over the years doing blue/green deployment advocacy and awareness for developers in some orgs. It does sometimes substantially complicate the code to build both backward and forward compatibility, but it is so nice when done well.


Love this: “absolutely No Kubernetes”.


The FAQ says an upgraded board (~$10) lasts forever... surely it won't be here in 2150 CE? I'm always curious about the word "forever" in legal agreements: when do we truly expect a "forever" product to go out of service?


I came here to see more links to one-man-show websites that are beating 150k/m.


It's a lot more work than, say, Next.js and Vercel, but I suppose it costs a lot less to maintain in cash (trading cash for time).


I'm sure everyone's mileage will vary based on experience / comfort with the particular stack here... but in my experience, shipping a monolith (Python, Ruby, whatever) to a VPS is vastly simpler than a comparable Next/Vercel setup, especially for a CRUD app like the one OP is talking about.

Dealing with the details of things like SSR vs. client-side, Apollo caching (because with Next you are _probably_ going to go with GQL, and so you probably use Apollo), and monitoring/observability in the Next/Vercel world is hard. You don't have these problems with a simple monolith, and you can sprinkle in your front-end magic with whatever lean JS approach you want, as you need it.

And to be fair, Vercel has some logging and Datadog integration, but it is very limited compared to what you can get, and configure as needed, with real agent-based monitoring.

I increasingly feel like the "benefits" of server-side rendering, in all its flavors, for the big JS frameworks are just a boondoggle, not worth the complexity and cost for _most_ apps, even ones where the costs of an SPA are worth it.


- SSR vs client side: NextJs does this automatically. https://nextjs.org/docs/basic-features/pages

- GQL: the only case where I'd use GQL is the one where he would likely want it too: building native mobile apps. Otherwise, I'd use tRPC and have full-stack type safety. This is included in my 5-minute estimate because I'd use create-t3-app, which has this configured out of the box: https://trpc.io/ https://github.com/t3-oss/create-t3-app

- monitoring/observability: not quite sure what you mean, but it's only a few clicks in Vercel to set up a log drain or metric drain and configure alerts and dashboards in your tool of choice. And if I wanted to roll the whole thing myself... that's what he did: rolled it all custom by hand. The difference is that I have the option of not reinventing the wheel.

At the end of the day, I could get into composing a Docker image configured just like his prod servers if I needed that access, and there's likely already a pre-made Docker image with 99% of what I need, including metrics/monitoring. At that point I could launch my service on any number of hosts that allow direct pushing of built Docker images, if my build process goes there.

To each their own, but I think my solution requires far less time to set up and maintain, though it costs more, as I said. Manually engineering each system yourself might be fun for you, but it's not for me.


> SSR vs client side: NextJs does this automatically. https://nextjs.org/docs/basic-features/pages

Uh-huh. Do some simple Stack Overflow searches for how to configure a real Next/Vercel app with error handling, passing through cookie/session state, or wiring up third-party services or components. Or check out the Next.js issue search for "window undefined": https://github.com/vercel/next.js/search?q=window+undefined&... -> 1667 issues right now. Even if you discount 3/4 of those as duplicates, developer error, etc., you have a ton of issues where figuring out server vs. client state is confusing for many folks.

It sounds like it has worked great for you out of the box, which is great! But that isn't the case for many other developers.

This is not my definition of "automatic".

> I'd use tRPC

This looks pretty compelling if you are all-in on TS everywhere, including your server/data-access layer. Which doesn't apply to the OP's post, since it's a traditional CRUD app with Python in front of a good ole' PSQL DB.

> monitoring/observability: not quite sure what you mean, but it's only a few clicks in Vercel to set up a log drain or metric drain

We are using Vercel routing its logs to Datadog right now. It works fine but is extremely limited. Want to wire up full-stack transactions with the rest of your app, or modify _what_ it logs? Good luck.

Also, trying to use a real APM (Datadog, Scout, open source, whatever) is a complete non-starter. You need a persistent process running somewhere to take the data, buffer it, and transport it _somewhere_, and so far you can't do that with Vercel's custom server support. You _can_ do this with just Next.js (no Vercel), but it requires some frightening hacks I would not be comfortable adding to apps I maintain: https://jake.tl/notes/2021-04-04-nextjs-preload-hack

I get that Vercel provides a very quick setup and a smooth dev experience, especially if you are in their sweet spot of something like a static site or blog engine. It just seems like more trouble than it's worth for most CRUD apps, especially anything that isn't 100% in the JavaScript/TypeScript world.


I don't think the presence of Stack Overflow questions is a bad thing; quite the opposite, it demonstrates that there is a community, and you'll likely find an answer to every problem you have.

Compare that to a completely bespoke web app you stitched together from various things you've liked over the years... I bet there are very few Stack Overflow questions that cover his stack. To me, that's a very bad thing.

The fact that you can quickly list shortcomings in Next.js but not in his bespoke stack demonstrates how much bigger the community is and how many more people are working on those problems. I prefer known problems to unknown problems, personally.

> This is not my definition of "automatic".

And a completely bespoke app is? Come on. Obviously a framework backed by thousands of apps, tens of thousands of developers and contributors, and multiple corporations is going to be more automatic than "this bespoke thing I made".

> It just seems like more trouble than it's worth for most CRUD apps, especially anything that isn't 100% in the JavaScript/TypeScript world.

I just can't disagree more. I do agree that as a web app grows in complexity you can outgrow Vercel -- but I'd MUCH rather be in the position of spending a few minutes pointing my repo at one of the dozen other services that scale better with complex web apps, or composing a simple Docker setup and pushing that.

The alternative of having some bespoke app on a VPS... you're looking at a total rewrite to scale, IMO. That bespoke app has a hard ceiling. That, or you just keep stitching new stuff onto the side of your frankenapp. Meanwhile Netflix, Hulu, Twitch, Uber, Nike, McDonald's, Apple and many others are launching massive-scale web apps on Next.js, demonstrating that the technology is built for interactive full-stack React web apps. The idea of using Next.js and launching a full React app for a static site sounds very over-engineered; just do something on GitHub Pages or Jekyll.

I would go the exact opposite way: playing around with a bespoke stack on a VPS is great for a static site or a very simple app, but the second you have any complexity at all, any real business logic, any real work being done... your bespoke app will turn into a nightmare, while Next.js is proven around the industry to do just fine!


I definitely agree that there's less magic going on with a traditional-style deployment, but I think you're strawmanning Next a bit with this comment. Not everyone uses GraphQL, and a vanilla Next API route is similar in complexity to a vanilla Node HTTP server.

For a basic CRUD app, especially for a solo dev, the monitoring story on Vercel is sufficient and it’s hard to say that their single repo push to deploy setup is more complex than the traditional alternatives.


It does not look like a lot of work at all? A few days of initial setup and that's it.


I can go from a blank folder to deploying a git hosted next.js app on vercel in about 5 minutes, so comparatively, it is to me.


Quite weird to compare the infrastructure of a static frontend to a backend with a DB and stuff?


Quite weird to rebut the assessment with an objectively wrong statement. Next.js isn't a static frontend: you can statically generate pages with or without a DB, and server-side render dynamically. A DB is literally 3 clicks and a secret string away (starting for free) with PlanetScale or Railway.

I've set up Django + Postgres on a VPS and Next on Vercel with a hosted DB. Next is leagues simpler and faster to spin up.

Here's Django Postgres on Digital Ocean: https://www.digitalocean.com/community/tutorials/how-to-set-...

Next on Vercel is:

1. npx create-next-app

2. Click button for DB spinup

3. Copy secret string to .env

4. git push


Comparing DigitalOcean to Vercel is apples to oranges. One is an app platform; the other isn't.


My comment is on a thread debating the speed-to-deliver and complexity of this traditional VPS Python deployment vs. other options. For many people on Hacker News trying to deliver a product fast, that's very relevant.


Except that the time it takes to set up a project is negligible compared to actually developing it. So if it takes you 5 minutes and me 5 hours to have a new project up and running, that doesn't really matter in the end.


I've done a 1-month MVP on each of these stacks; it's not just the initial deployment. React is better integrated in Next, and you can get more done, faster and more simply. DRF has poor async support and isn't ergonomic practice in 2022; Flask requires lots else to build and maintain, with weaker integration/community. And if you're arguing that HTML templates are just as good, well, the Fortune 500 and every HN startup have left them behind for good reason.


In one instance you're talking about "HNers making a quick product", and in the other you're talking about Fortune 500s. Two vastly different things with different needs.

I'm also not arguing about the specific tech; it's irrelevant. If I know tech X, choosing tech Y because it's hipper or "faster to set up" helps nothing if it's slower because it's unknown to me.


> Quite weird to compare the infrastructure of a static frontend to a backend with a DB and stuff?

I suppose you don't know what you're talking about, but Next.js is a backend and frontend supporting a wide variety of data sources. I personally like using Prisma and Postgres with my Next.js deployments.

I'm literally rolling out a full-stack, database-connected React app in minutes, including a fully functional API that can be accessed by a variety of clients.


But then you suddenly have complexity again. In your various comments here you're talking about ten different tools and concepts being woven together. It's only fast for you because you've done it before.

It's not inherently faster than OP's approach.

Like, I can spin up a k8s cluster faster than I can even understand which parts of Next.js I should use. That doesn't mean k8s is the best choice for all problems.


What are you talking about? That's not complexity; those are out-of-the-box features that come with a single script run (create-next-app or create-t3-app). Vercel will even launch it for you without a script, just a button click. Five minutes to launch with my preferred stack, though.

And if we're really racing, Railway.app can spin up deployments AND databases in under 60 seconds. If I tried a speed run, I could get the time down.

If OP can hand-configure a Python app running on nginx in manually managed DO droplets in 5 minutes... good on them! The greatest devops of our era.

Also -- I guarantee that a newbie developer can launch a Next.js app faster than hand-rolling some DO droplet with Python and other technologies. I guarantee I could have a bootcamp newbie with a working app on day 1 (since it's literally a button click, if we want), while I doubt you could get that newbie anywhere with his stack at all.


Why so hostile in your comments? It's not a competition about being the fastest; if anything, that only shows immaturity on your part. Time spent setting up a project is negligible compared to the other things surrounding it.

You're missing my point anyway. Next.js isn't a silver bullet; nothing is. And how long it takes to set up a project is a function of familiarity with the stack.

Edit: thanks for deleting all your personal attacks on me. But also remember, I'm not the one claiming to know everything here (as you wrote in your now-deleted post); you're the one who critiqued OP.


Please don't do this sort of tit-for-tat flamewar (or any flamewar) on HN. It's tedious, nasty, and not what this site is for.

We've had to ask you this kind of thing before. If you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd appreciate it.


I honestly don't believe I did that in this instance; I felt I argued on technical merits. If you can find their deleted post (not their flagged posts, but the deleted one), you'll see my responses were quite level-headed in comparison. After I read that one, asking why they were so hostile feels justified (and quite mild in comparison) on my end.


I'm not saying your comments were symmetrical, but (a) you kept the flamewar going, which the site guidelines explicitly ask commenters not to do here; and (b) once people start arguing about what they are or are not saying, or feel compelled to include swipes like "Why so hostile in your comments?", "If anything, it only shows immaturity on your part", "You're anyways missing my point", it's clear that the thread has degenerated into the tit-for-tat state.


[flagged]


Please don't do this sort of tit-for-tat flamewar (or any flamewar) on HN. It's tedious, nasty, and not what this site is for.

We've had to ask you this kind of thing before. If you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd appreciate it.


Why stop there? Why not ditch Next.js altogether and teach your bootcamp students low-code?


Why stop there? Why not ditch computers and teach my bootcamp students to farm?

Because Next.js is an industry-standard technology that lets small teams roll out complex, great apps in record time; it's used by dozens of major corporations and hundreds of start-ups, it's a wonderful addition to any resume, and it's a proven way to get your next job.



150k/m - I thought it meant kilometers per hour. It's obviously thousands per month.


Love the simplicity and clear description, thanks for sharing!


Do you have 15k paying users, or are most of them free?


It’s so simple, I love it.


Just use fly.io or Heroku.


Hallelujah - a bit of sense regarding the appropriate use of Docker and Kubernetes.


What is the app?


He does say, but:

https://keepthescore.co/


Why not simply run the app locally with the production database?

(either a copy of production or directly connecting to production instance)


You lose the convenience of being able to immediately switch back to the previous version, as well as testing on identical hardware and software.


If you're running it locally, there's no need to switch back to the previous version.

It's true about the hardware, but if you use a container, it can run on the exact same software.


I'm a strong advocate of boring technology too, but I'm also very much in favor of keeping things off my dev machine. In this case they have to run ssh and git, and run a script to switch endpoints.

My current boring system for a simple Rails app (https://getbirdfeeder.com) is that I push to GitLab, and the CI builds a Docker image and does the ssh & docker-compose up -d dance. That way I can deploy or roll back from anywhere, even without my computer, as long as I can log into GitLab (and maybe even fix things with their web editor). That seems a lot "more boring" to me, and having the deploy procedure codified in a .gitlab-ci.yml acts as documentation too.
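
For reference, the whole pipeline is a handful of lines (registry and host names illustrative):

    # .gitlab-ci.yml -- build an image, then the ssh & compose dance
    deploy:
      stage: deploy
      only:
        - master
      script:
        - docker build -t registry.example.com/app:$CI_COMMIT_SHORT_SHA .
        - docker push registry.example.com/app:$CI_COMMIT_SHORT_SHA
        - ssh deploy@myserver "cd /srv/app && docker-compose pull && docker-compose up -d"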


My rule is: SSHing to a machine and running manual commands on any non-toy system is an antipattern. Every time you SSH in to do something, it means you are missing automation and/or observability.

In small teams it's bad because it's a waste of time. In large teams it's bad because it creates communication overhead (it's also a waste of time, but that can often be absorbed if things don't get too crazy). For a single-person team it's bad because now they have to depend on their memory and shell history.


> For a single-person team it's bad because now they have to depend on their memory and shell history.

There is this thing called shell scripts.


The scripts can SSH to whatever.

If someone is connecting via SSH and then running scripts, that's an issue.


Yeah, it's really straightforward to set up GitHub/GitLab/Circle/etc. to run your various deploy/ssh commands for you; then you don't actually need access to the box.

Really nice being able to open up GitHub and manually kick off a deploy -- or have it deploy automatically.


This needs to be on a plaque somewhere. Reproducible doesn't just mean build systems!




I don't see why the author is so proud of avoiding tooling that would make their build and deploy process simpler. Even something like DigitalOcean's own buildkit-based "apps" would be an upgrade here. Deploying your app using ssh and git is not magic or simple; it's just a refusal to learn how actual software delivery is done.

Even totally dodging docker/k8s/nomad/dagger or anything that's even remotely complicated, platforms like AWS/DO/Fly.io/Render/Railway/etc. obsolete this "simple" approach with nothing but a config file.

I also theorize that the author is likely wasting a boatload of money serving ~0 requests on the staging machine almost all the time, since he's literally switching a floating IP rather than using two distinctly specced machines for production and staging.


> just a refusal to learn how actual software delivery is done.

One day you will be promoted to 'senior engineer' and revise this statement.

Software exists to solve problems. Adding more complexity has to serve a purpose. Just because you read somewhere that Netflix does their deployment in some way, doesn't mean that it's the right way for your environment.

The only thing I disagree with in their approach is that they do "ssh <host> git pull". They should just "git push <host>" from their CI machine (or laptop, since they have no CI/CD). There's no reason to give the actual server pull access to the code, and git is distributed.

They could surely turn off the non-active server, but they might be using it for failover. In that case serving zero requests is totally fine.
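
The receiving end of "git push <host>" is just a bare repo on the server with a short hook (paths and service name illustrative):

    #!/bin/sh
    # hooks/post-receive in /srv/app.git
    GIT_WORK_TREE=/srv/app git checkout -f master
    systemctl restart myapp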


Way past senior engineer. Not sure how git/ssh acrobatics is less complex in your mind than "fly deploy" or clicking a button in Render/Vercel/etc. Premade GitHub Actions / GitLab pipelines exist for all of these things, and the author would be paying less money for a more robust deployment process. DIY with tools that aren't meant for the job isn't better just because you already use git/ssh.

The author is running gunicorn/flask/nginx; this is trivial for a buildpack and would take 5-10 minutes tops to set up as a robust build/deploy flow with any recent PaaS.


I disagree; avoiding CI/CD tools is the right call until this grows enough that you can't keep writing small scripts. I think all the CI/CD tools are garbage, in that by being generic they do a poor job of addressing your specific needs.

I don't know who needs to hear this, but you don't always need to start a project by reaching for the most complex tool possible. Build up to it and then you won't need convincing to use it.


Not every CI/CD tool is general-purpose. I operate a small SaaS that runs on Laravel. I have an incredibly simple CI/CD pipeline with static analysis and tests in GitHub Actions (one small workflow file), plus Laravel Forge + DigitalOcean for infrastructure management and automated deployments from master. It took 5 minutes to set up and works perfectly.


Alternatively, learn how CI/CD works and have a comprehensive tool in your toolbox basically forever


I'm not saying never learn CI/CD tooling; I'm saying don't use it until you need it. In an enterprise setting, yes, it's probably a good idea to use such tools from day 1, because you don't ever want to be solely responsible for committing and shipping a bug. In other contexts, like the author's one-man website, the cost is not worth it.


You can't really say that without knowing the author's skills and how much they lose in opportunity costs to deploy manually.

If the author has already deployed 100-200 times and knows enough to set up a basic build, it would be close to break-even with a fairly wide margin.


Are there any CI/CD tools that are likely to stand the test of time in the same way the basic unix ones have?

I'm not against making things easier. But I feel like it's so easy to over-engineer and rely on proprietary crapware.

Is there a conservative path forward here?


Well, our Jenkins jobs still work a decade later. Jenkins itself requires some care and feeding though, mostly upgrades, especially if you use some plugins.

Then again, a Jenkins job can be as simple as "download a repo and run some code in it" without anything fancy, so even if SHTF you can migrate away relatively painlessly.


No, they are all horrible monstrosities. The hosted ones (GitHub Actions) are still monstrosities; it's just that other people are taming the monsters.

The best approach I've found is to keep the CI/CD tools as simple task runners. They should only have to figure out when to run things, inject the necessary environment (and secrets), figure out dependencies (if applicable), and run them. Whatever they run, you should be able to run the same scripts from your own laptop or similar.
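
Concretely: keep the actual logic in a script that both the CI system and your laptop invoke the same way (script names illustrative):

    #!/bin/sh
    # ci/run.sh -- the CI system only ever calls this; you can run it locally too
    set -e
    ./scripts/test.sh
    ./scripts/deploy.sh "$1"   # e.g. ./ci/run.sh production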



Jenkins/Travis are getting to that point, not that they are "cutting edge" or anything. I'd personally rely on projects like Woodpecker/Drone/Dagger so that there's no real reliance on anything closed-source or proprietary.


I've actually used OP's system in much higher-traffic environments and was happy to read that I'm not alone. Sometimes all you need are simple tools, and if git pull/ssh covers your bases, what's the big deal?

"How actual software delivery is done" is relative to your systems, needs, and scale. You probably haven't been in environments where build/deploy tools are over-engineered to death, to the point where adjusting one bit brings the entire delivery pipeline to a halt. If you were able to set those up without such issues, kudos to you.


I think what happens is that some engineers invest so much of their professional identity in their Docker/K8s/Jenkins/AWS lifestyle that anything less is seen as dirty or inferior. The irony is that many of them are too young to remember the rationale for these tools when they were released, namely the management of massive fleets of servers by huge corporations. You see the same thing with the inappropriate use of React on the most minimally dynamic websites.


I normally wouldn't comment with a short "I agree", but this is music to my ears.


> how actual software delivery is done

the sheer arrogance and ignorance of this comment is remarkable


You're asking for net-zero benefit at the cost of adding hundreds of thousands of lines of code to your deployment path.

That really smells of a desperate developer who needs to feel relevant just because they use the latest thing.

Like, how the fuck would it be "simpler" when the deploy is literally just running a single script?


I could fit an entire Drone CI/fly deployment setup in a single HN comment; no idea what you've been scarred by.


Calling include on someone else's code is not programming, nor ops, no matter what lies you tell yourself.

And we've used CI for 12 years, and yes, our node definitions can also fit in your utterly idiotic metric.

Want to tweet "our app is so small you can just npm get it" next?


No, you cannot fit their whole tree of dependencies, their semantics, and their failure modes in a single HN comment.

People use basic tools because they have much more understandable semantics; not everyone needs a gigaton pile of Go code to deploy something.


What?

Manual git pushes ensure the highest level of security and error avoidance.

I routinely find myself having to do one-off post-deployment things that would be a nightmare to try to script into the CI flow.



Say what? I can't tell if this is satire or not.


I had a similar thought. With a little more automation, that staging server could be shut down, or even completely destroyed, until the next testing/deployment window. The only reason I'd keep multiple servers always on is to load-balance within the same "color".


git push live is pretty simple, I think. You're right that ssh and git aren't magic, and that's exactly why the author is using them. I agree that they don't scale to a team environment, but for a one-man shop I don't see why it's not OK.


AWS CodeDeploy is magic for stuff like this. I got so many things for free when I began using it for deployments; it's kind of amazing...



