
AWS Agony
Did you notice that Both.org was up and running fine yesterday morning, despite the global side effects of the Amazon Web Services (AWS) outage that started about 3 AM EDT? Yes, we here at Both.org were up and running without a hitch, and there’s an excellent reason for that.
Both.org runs on its own server. One. Out of my house. On a 7-year-old Linux computer — the one on the left in the lead image — with the very modest specs you can see in Figure 1.
#######################################################################
# MOTD for Mon Oct 20 03:11:26 AM EDT 2025
# HOST NAME: yorktown.both.org
# Machine Type: physical machine.
# Host architecture: X86_64
#----------------------------------------------------------------------
# System Serial No.: Default string
# Motherboard Mfr: Gigabyte Technology Co., Ltd.
# Motherboard Model: Z370 HD3-CF
# Motherboard Serial: Default string
# BIOS Release Date: 03/01/2018
#----------------------------------------------------------------------
# CPU Model: Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz
# CPU Data: 1 Six Core package with 12 CPUs
# CPU Architecture: x86_64
# HyperThreading: Yes
# Max CPU MHz: 4600.0000
# Current CPU MHz: 4299.973
# Min CPU MHz: 800.0000
#----------------------------------------------------------------------
# RAM: 31.212 GB
# SWAP: 7.999 GB
#----------------------------------------------------------------------
# Linux Distribution: Fedora 42 (Adams) X86_64
# Kernel Version: 6.16.12-200.fc42.x86_64
#----------------------------------------------------------------------
Figure 1: The specs of the Both.org server are quite modest.
Last month Both.org served 67,674 visitors and 143,910 page views using the Apache HTTP server. When I checked the server’s performance with SAR for the period October 1 through 20, I found that the busiest 10-minute interval still showed 83.94% idle time, and that was during backups. Put another way, the maximum CPU usage for that entire 20-day period was 16.06%, during one ten-minute interval out of 2,280. To be fair, there were 4 other intervals during that time with less than 90% but more than 84% idle. The rest were all at 90% or better idle time, with most around 97%.
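The kind of check described above can be sketched with sar and awk. The sample data below is invented for illustration, since the real numbers live in the server’s own sysstat logs; on a live system you would feed the pipeline from `sar -u` instead.

```shell
# Hypothetical sample of "sar -u" output; the last field of each line is %idle.
sample_sar_output='12:00:01 AM  all   1.03  0.00  0.61  0.10  0.00  98.26
12:10:01 AM  all   2.11  0.00  0.84  0.12  0.00  96.93
03:10:01 AM  all  12.40  0.00  3.44  0.22  0.00  83.94'

# Find the busiest (least idle) interval by tracking the minimum %idle value.
# On a real Fedora host you would read each day with "sar -u -f /var/log/sa/saDD".
echo "$sample_sar_output" |
awk '{ if (min == "" || $NF + 0 < min) min = $NF + 0 }
     END { printf "Busiest interval: %.2f%% idle (%.2f%% CPU used)\n", min, 100 - min }'
```

The same one-liner, pointed at a month of daily sar files, is all it takes to confirm how little of the machine a site this size actually uses.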
I also host my own SMTP and IMAP email server. And yes, it runs on the same host as the Both.org web site, as do all my other services: NTP, DHCP, DNS, and more. All of that activity is included in the statistics cited in the previous paragraph.
That is all just to show you that self-hosting is feasible for anyone who wants to do it.
But back to the main point I’m trying to make. Being completely self-hosted, Both.org seldom suffers from the problems of the mega-providers and their concentrated resources. I am mostly free from dependencies on external organizations. The primary dependency is the root DNS servers, because the Both.org server provides name services for my network, both for internal hosts and as a caching name server for external ones. The second is my Gigabit fiber-optic Internet connection. But even if one of those failed and prevented me from accessing the outside world, I could still work and play while totally isolated, because everything I need is here.
I keep all my files here on my own primary workstation. All my backups are created and stored on two HDDs, one of which is permanently mounted on my workstation, and the other is an external USB HDD that I take to my safe deposit box on a regular schedule, returning the previous one home.
I depend on very little outside my network for most of what I do. The biggest issue, if my external access were blocked, is that you wouldn’t be able to reach my web site. Meanwhile, I could keep working and have more posts ready for you when connectivity returned.
There are two reasons why I self-host and have set up all these servers. First and foremost, when I was starting out with Linux, I wanted to learn as much as I could. So I set up my own network and, starting with DHCP, NTP, and DNS, eventually added HTTP for a web server and SMTP for email. I learned a lot that way.
The second reason is that I like to be in control. I’ve learned over the years that I want as much control as possible over my network and the computers on it. That all started because the ISPs I used for connectivity in the 1990s had constant problems with their crap DNS services. They’d go out for hours, or serve wrong data when their DNS databases failed to update.
Don’t get me started on things like Google fonts.
Most of the organizations that rely on external services for major components are probably re-evaluating those decisions now. As well they should.