
Right, and it does it in a way where everything doesn't fall apart when your stack is more involved than a single web server.

For example, in the author's script, wanting to run a background worker ramps up the complexity by a lot, but with Kubernetes it would just mean adding one more Deployment and you're done.
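A minimal sketch of what that one extra Deployment could look like (the image name and worker command are hypothetical, just to show it reusing the web image with a different entrypoint):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: worker
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: worker
      template:
        metadata:
          labels:
            app: worker
        spec:
          containers:
            - name: worker
              image: registry.example.com/myapp:latest  # same image as the web Deployment
              command: ["python", "worker.py"]          # different command, no Service/port needed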

For most of my own stuff I just run one copy on one server and configure nginx to queue up requests that would otherwise fail with a 502, then release them in the order they were received once the back-end is available again. This way you don't have hard down time: while your app restarts during a deploy, the user just sees a busy mouse cursor for the few seconds your app takes to boot. No load balancer needed. Lua scripts and nginx are a powerful combo.
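A rough sketch of that idea using OpenResty's Lua support (this is not the commenter's actual config; the backend address, timeouts, and retry budget are placeholders): each request is parked in a non-blocking wait until the backend accepts TCP connections again, which approximates releasing requests in arrival order instead of returning 502s.

    location / {
        access_by_lua_block {
            -- probe the backend; hold the request while the app reboots
            local sock = ngx.socket.tcp()
            sock:settimeout(200)              -- ms per connection attempt
            for _ = 1, 50 do                  -- bounded retry budget
                local ok = sock:connect("127.0.0.1", 8000)
                if ok then
                    sock:close()
                    return                    -- backend is up, fall through to proxy_pass
                end
                ngx.sleep(0.2)                -- non-blocking wait, doesn't tie up the worker
            end
            ngx.exit(ngx.HTTP_SERVICE_UNAVAILABLE)
        }
        proxy_pass http://127.0.0.1:8000;
    }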




Hi! Author here. It's true that my method can get complex very fast depending on what you want to do. I also wrote this to explain some concepts about git / docker.

If I were running more than one web service, I would maybe take the reverse proxy out of docker-compose and manage it separately. Each service would be its own git remote.
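A sketch of what one-git-remote-per-service can look like, with a bare repo and a post-receive hook per service (paths, branch, and service names here are hypothetical, not the author's actual setup):

    #!/bin/sh
    # hooks/post-receive in the bare repo for service-a
    GIT_WORK_TREE=/srv/service-a git checkout -f main   # check out the pushed code
    cd /srv/service-a || exit 1
    docker compose up -d --build                        # rebuild and restart only this service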

If I had even more stuff, I would probably run the containers separately and explicitly create docker networks. But yes, at that point I may have reached the stage where a standard k3s deployment is easier. This is just a method I found useful for my use case, and I believe that for just one or two web services on a VPS it's easier to set up than k8s/k3s.
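For reference, explicitly creating a network and attaching containers to it could look like this (container and image names are illustrative):

    # one shared network; containers on it reach each other by name via Docker's DNS
    docker network create web
    docker run -d --name app1 --network web myapp1:latest
    docker run -d --name proxy --network web -p 80:80 nginx:alpine
    # inside "proxy", app1 is now reachable as http://app1:<its port>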

At the same time, I would argue that running a single monolithic service on a single (powerful) VPS is more than enough in more cases than people believe.

I would also like to learn more about your approach!


> At the same time, I would argue that running a single monolithic service on a single (powerful) VPS is more than enough in more cases than people believe.

This is very true, and I'm in the same camp as you in that regard, but a lot of popular web framework stacks include both a web server and a background worker to process tasks outside the request / response cycle. The background worker isn't exposed over an HTTP port; it's a process that uses the same code base / Dockerfile as your web server but runs a different command. It needs to be up during deployments and also get updated to the new version during your deploy.

Even with a monolithic app, a lot of apps using Flask, Django, Rails, Laravel and others will at least end up having web + worker, and then you have the usual postgres / mysql + redis too.
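In docker-compose terms, that common shape could look something like this sketch (the commands are illustrative stand-ins for whatever the framework uses):

    services:
      web:
        build: .
        command: gunicorn app:app --bind 0.0.0.0:8000
        ports:
          - "8000:8000"
      worker:
        build: .                        # same image / code base as web
        command: celery -A app worker   # different command, no exposed port
      db:
        image: postgres:16
      redis:
        image: redis:7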

Then there are also certain frameworks like Rails where you may want to run a separate websocket service that uses your same code base but runs in a dedicated process to handle broadcasting websocket events. This one also needs to be proxied, since it has to be reachable over the internet to work.
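On the proxy side that's the standard nginx websocket upgrade config; something along these lines (the upstream address is a placeholder, and /cable is the Rails Action Cable convention):

    location /cable {
        proxy_pass http://127.0.0.1:28080;        # the dedicated websocket process
        proxy_http_version 1.1;                   # required for the Upgrade handshake
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }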

I'd be curious to see what your shell script and overall strategy look like with the above set of requirements, because IMO the above (minus the Rails websocket server) applies to a huge array of apps out there that use the web + worker + db + redis (+ maybe websocket server) combo.


I'd love to read more about your approach.



