Typical web server reliability / maintenance hassle

Hi all – I don’t have much experience running production Linux or BSD web servers, for static sites or otherwise. Netlify and Aerobatic look nice because they seem to take away a lot of the hassle of running my own server.

But how much of a hassle is it really? I’ve installed and played with Alpine, Arch, Fedora 28 Server, and FreeBSD 11, so I’m most familiar with those, plus Ubuntu 18.04 on WSL. Are server crashes going to be extremely rare with most distros at this point, running nginx dishing out a static site? Cloudflare or KeyCDN will be serving the images and fonts most of the time, and probably cached HTML pages too, so it seems like my own VPS would have a light load.

For those of you running your own servers (VPS, bare metal, or other), what kind of reliability do you typically see on your static servers? Is it something you can forget about for months at a time, with nothing bad happening? I also wonder whether regular folks have to deal with DDoS attacks. If this depends on the cloud provider: I use Vultr right now, but not for production yet. I’ve got one core and 1 GB of RAM.

In my opinion, unless your purpose is to learn nginx, Apache, or Caddy, there’s real risk in taking on the administration of any of those systems, since they need ongoing care to do right. I’d rather focus that energy on other things. As stable as many Linux variants are, it’s probably not enough to simply set and forget them; you’d need to develop some SOPs to look after them every week or month or so.

Recently, for one site, I switched from WebFaction (shared hosting, where they manage nginx) to AWS S3 with the CloudFront CDN, plus CircleCI for the CI side. Hosting on S3 is by no means hard, but there were a few things to learn compared to using straight nginx; you take different approaches to things like permissions, HTTP auth, and so on. Then I added CloudFront and CircleCI, so I have plenty of new stuff to learn, without also having to manage the server itself.
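To give a rough idea of the moving parts, a manual deploy boils down to something like this (the bucket name and distribution ID are placeholders, and the AWS CLI is assumed to be configured):

```sh
hugo                                           # build the site into ./public
aws s3 sync public/ s3://example.com --delete  # upload, pruning stale files
aws cloudfront create-invalidation \
    --distribution-id E1234567890ABC \
    --paths "/*"                               # expire cached copies at the edge
```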

My two cents.


My Hugo pages are usually hosted on GitHub Pages and I use Travis CI to run the builds. Recently GitHub also enabled HTTPS on their service, so you don’t even have to use CloudFlare or something similar for that.

I’m quite happy with that solution, because it’s almost maintenance-free and I don’t have to pay for anything more than the domain. In one case I needed to host backend services for a SPA; for that project I have a machine at DigitalOcean, but I’m still using GitHub Pages for the website. For the backend I created a subdomain that routes to the server.

In my opinion this is where a static website generator like Hugo really shines.


Oh, that’s good news!

I’ve been hosting my latest Hugo site on an S3 bucket and deploying from the command line for about two years. I have had Pingdom checking the site monthly and only saw downtime when Amazon went down for a few hours during the CloudFlare CloudBleed incident. So I’d say it’s super-reliable.

If you decide to serve your static site from an S3 bucket and enable CloudFlare, TLS cert renewal is automatic. You can serve for as little as $2-3 a month depending on traffic, so you’ll pretty much be able to forget about it until your credit card expires.

I’ve documented the entire set-up procedure, as well as other benefits, in case you’d like to take a look. Then you can spend your time writing about the things you’re tinkering with instead of tinkering with the things you want to write about.

If you’re looking to go indie, get yourself a Raspberry Pi and host inside a container using resinOS.


Nice tutorial @anon94969202. I’d only seen your hack-cabin stuff.

Netlify is fantastic - especially as it natively supports Hugo.

Well, I’m maintaining a couple of VPSs, and until I managed to set up auto-updates via Webmin, it was a pain. Not, perhaps, a great deal of work, but a regular drag: you have to remember to go check each one, install the latest fixes (especially security ones), check that nobody has hacked it, make sure the settings are right, …

As Rick says, unless you really want a server to mess about with, don’t bother. Over the last few years I’ve moved my email to a low-cost third-party service, and I’m moving my PHP/WordPress sites to Hugo/Netlify. If I need to play with Linux, I can now use a cheap Raspberry Pi. My VPSs will most likely be decommissioned as soon as I’ve finished migrating the sites.

Well, reliability on a Linux VPS should be excellent. I typically get a few minutes of downtime a year.

BUT, you can’t forget about it, because you have to keep on top of security fixes.

I’ve not experienced any DDoS in all the years I’ve been running servers, but then none of my sites and services are at all high profile. What I did see regularly when getting going were constant attacks on my SSH service. That was easily fixed by moving it off port 22 onto something less obvious. I use Cloudflare to mitigate attacks on my websites.
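For anyone wanting to do the same, it’s a one-line change to sshd_config; a rough sketch (2222 is just an example port, and the paths assume a typical Linux layout):

```sh
# Move sshd off port 22; pick any unused high port you like.
sudo sed -i 's/^#\?Port 22$/Port 2222/' /etc/ssh/sshd_config
sudo systemctl restart sshd   # the unit may be called "ssh" on Debian/Ubuntu
ssh -p 2222 user@example.com  # connect on the new port from now on
```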


I host my main site, built by Hugo, on shared hosting.

I professionally host enterprise WordPress sites. As in I am the service provider and maintain the warez.

I think having a VPS is fun, and something anyone should have if they want. But I got into hosting because I don’t recommend folks run a server unless they want to.

If you use a CDN to serve your assets, you might consider serving the entire site; this works if you serve it over a subdomain (even www).

Edit: wanted to add something. I think folks should want to run a server; it really helps with quality-of-life issues. That said, an Ubuntu VPS with auto-updates turned on, serving a static web site, is pretty stable. You still need a backup strategy, though git may cover that for your site. Otherwise, practically any hosting will work with your site. :slight_smile:
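By auto-updates I mean something along the lines of unattended security updates; a minimal sketch of the stock Ubuntu way to turn them on:

```sh
# Enable automatic security updates on Ubuntu (stock packages).
sudo apt-get install -y unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades
cat /etc/apt/apt.conf.d/20auto-upgrades   # confirm the settings it wrote
```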


@RickCogley and @anon94969202, thanks for your answers. By the way, do you have any idea what the latency is like for S3 buckets? I’ve not read any benchmarks on it, but I’ve always been a bit concerned that S3 buckets would be slow. Maybe it’s just that “bucket” doesn’t sound like a real web server to me. I thought it was mostly for storage. And I don’t know what an S3 bucket actually is, in terms of the whole stack involved. What’s the OS? What’s it running on top of? Is it like an AMI running on Xen?

@anon94969202, you mean have a Pi server physically in my home, running resinOS? Wouldn’t that suck? Well, I’d be most worried about my ISP dropping the hammer on me for running a web server. I think pretty much all ISPs prohibit that, even Google Fiber.

That reminds me of something. I never see bandwidth advertised or reported by VPS providers. I mean the actual NIC and line bandwidth, like 10 GbE or SFP+, etc. Instead they use the word “bandwidth” to describe the total amount of data downloaded from the server per month, like 1 TB. I wish I knew more about how the network bandwidth relates to web server latency, rps, etc.

@RickCogley, what do you actually do with CircleCI? I’m having trouble visualizing what it does in this context.

@openscript, thanks for the answer. Do you have comments on your GitHub Pages site? How could comments be handled? Disqus?

@JoeWeb, lol, no idea what S3 actually is. It’s like voodoo magic!
I have been running it behind AWS CloudFront and have no complaints about performance. Direct, I’m not sure. There are a couple other people in here running on S3.

CircleCI triggers after I push to master in GitHub. It provisions a Docker container (IIRC) that does the needful and pushes to S3 using https://github.com/bep/s3deploy, which uses ETags to send deltas instead of the whole site every time.
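The s3deploy step itself boils down to a one-liner along these lines (not my actual script; the bucket and region are placeholders, and AWS credentials are assumed to be in the environment):

```sh
# Upload only the files whose ETags changed since the last deploy.
s3deploy -source=public/ -bucket=example.com -region=us-east-1
```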

This is the script it uses:

CircleCI adds a test by default using htmlproofer, but I’m doing that on my laptop separately; it slows the process down and uses build minutes.
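Running it locally is just a matter of pointing the html-proofer gem at the build output; roughly (assuming a working Ruby install):

```sh
gem install html-proofer   # one-time install of the checker
hugo                       # build the site into ./public
htmlproofer ./public       # check links, images, and HTML in the output
```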

I added some lines to the default script: 26 to 29 to install s3deploy, and 49 to 58 to do all the stuff I was doing locally in a zsh function.


About 240 milliseconds is the time it takes for the server to respond with the first byte of the response when using S3 (with the cheaper reduced-redundancy storage option) behind CloudFront.

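If you want to measure that yourself, curl can report time to first byte directly (example.com below stands in for whatever URL you’re testing):

```sh
# Print the time from request start to the first byte of the response.
curl -o /dev/null -s -w "TTFB: %{time_starttransfer}s\n" https://example.com/
```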

If you host on your own server, most of your requests can be cached via a CDN. And if your ISP tries to tell you how to use the bandwidth you’ve purchased from them, I would question your rights as a consumer.


Some do, but in some enlightened countries, ISPs actually have to compete :wink: Of course, you may get less upstream bandwidth than downstream, and domestic connections do tend to drop out from time to time.

If you look in the right place, many VPS providers will provide a test link so that you can get an idea of their performance.


Look no further; it includes a link to the script used to produce the metrics, on GitHub:

If you want to stress test, though, Load Impact has you covered with k6:

I did a stress test on the Singapore Vultr $5 VPS node with 10,000 simultaneous connections to a WordPress site backed by a Redis Object Cache (no CDN). If you’d like those metrics just let me know.
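For reference, a minimal k6 run looks roughly like this (example.com is a placeholder, and the numbers are just illustrative, far below the 10,000 above):

```sh
# Write a trivial k6 scenario, then run it with 100 virtual users for 30s.
cat > test.js <<'EOF'
import http from 'k6/http';
export default function () {
  http.get('https://example.com/');
}
EOF
k6 run --vus 100 --duration 30s test.js
```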

Sorry, had to rush to a meeting before; I was going to mention that https://lowendbox.com/ often has links to performance tests as well.
