Getting my blog online
This week I decided to write my own blogging framework (to host this blog!).
The setup is very simple; I describe the process in more detail here. But while the code and the setup are simple, getting it online wasn't; allow me to explain...
Once I SCP'd the tarball over to my VPS, I created a system user using sudo adduser --system --group --home /path/to/system/user/home <blog_user>. Easy enough.
Afterwards, a little bit of sudo -u <blog_user> mkdir -p /path/to/blog/app to give my app a place to live, followed by which python3 to make sure Python is installed on the VPS.
Given that I'm using uv to manage dependencies, I needed to install it as well. Luckily, Astral makes that pretty easy: curl -LsSf https://astral.sh/uv/install.sh | sh.
After I run this, a quick which uv shows it's installed properly... cool!
Installing dependencies
Now, when I run /path/to/blog $ sudo -u <blog_user> /path/to/uv sync to install the project dependencies, I get sudo: unable to execute /path/to/uv: Permission denied... ¯\_(ツ)_/¯ Classic: I installed uv as <other_user>, so <blog_user> doesn't have permission to run it due to its path.
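One quick way to see exactly which directory is blocking another user (assuming util-linux's namei is available, as it is on most Debian-family boxes) is to walk the path and print each component's permissions. I'm pointing it at /bin/sh purely to show the output shape; on the server you'd aim it at the uv binary's full path:

```shell
# Walk every component of a path and print mode/owner/group for each;
# a directory missing the execute (search) bit for other users is the
# one that triggers "Permission denied" for them.
namei -l /bin/sh
```

Each output line covers one path component, so the offending directory stands out immediately.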
Turns out, uv's install script supports a special environment variable, UV_UNMANAGED_INSTALL, to override the install path. Lucky me!
I run curl -LsSf https://astral.sh/uv/install.sh | env UV_UNMANAGED_INSTALL="/path/to/system/user/home" sh. Then I run /path/to/blog $ sudo -u <blog_user> /path/to/system/user/home/uv sync and now it works: the dependencies are installed!
Service configuration
After this, I configure a service to be managed via systemctl. I create a file at /etc/systemd/system/<my_service>.service:
[Unit]
Description=FastAPI Blog
After=network.target
[Service]
User=<blog_user>
Group=www-data
WorkingDirectory=/path/to/blog/app
Environment="PORT=<PORT>"
ExecStart=/path/to/venv/bin/uvicorn main:app --proxy-headers --host 127.0.0.1 --port ${PORT} --workers 2
Restart=always
RestartSec=5
TimeoutStopSec=30
[Install]
WantedBy=multi-user.target
I tell systemd to reload its config files via sudo systemctl daemon-reload and then I enable my new service using sudo systemctl enable --now <my_service>.
Once I've done this, I check the status of the service using sudo systemctl status <my_service> and I can see that it's loaded & active:
Loaded: loaded (/etc/systemd/system/<my_service>.service; disabled; preset: enabled)
Active: active (running) since Thu 2025-10-09 03:04:41 UTC; 2 days ago
Main PID: 2979420 (uvicorn)
Tasks: 8 (limit: 2260)
Things are shaping up. At this point, I run a quick curl -I http://localhost:<PORT> just to make sure I can hit the app... I get a 200. Great!
DNS configuration
Next up, I added a new subdomain to freeland.dev via my DNS provider. This was simple: I ran a quick ifconfig to get the server's IPv4 address, copied it, and created a new A record pointing the desired subdomain (in this case, newb.freeland.dev) at that address.
From my laptop, I try to ping newb.freeland.dev and I see that the hostname resolves (hooray!), so I'm all set (or so I think).
Caddy configuration
Here's where the real fun begins. I know that if I want to expose the application to the internet over HTTPS, I need to set up a reverse proxy that terminates TLS. I decided to use Caddy in this case (instead of nginx). So I install Caddy on the server:
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /path/to/user/keyrings/keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update && sudo apt install -y caddy
I configure the Caddyfile at /etc/caddy/Caddyfile:
newb.freeland.dev {
    encode zstd gzip
    handle_path /static/* {
        root * /path/to/blog/app
        file_server
    }
    reverse_proxy localhost:<PORT>
}
Then I reload the Caddy service: sudo systemctl reload caddy. In addition, I need to update the firewall on the server to allow TCP traffic over the standard HTTP/S ports (80 and 443), which I do using ufw.
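For completeness, the ufw step amounts to two allow rules (the port numbers are the standard ones; adjust if your setup differs):

```shell
# Open the standard HTTP and HTTPS ports, then confirm the rules took.
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw status
```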
At this point, I think I'm all set and that my site will be live. Oh how naive I was :)
I tried to curl the root endpoint on the site via the domain name from my local machine curl -I https://newb.freeland.dev but I get
> curl: (60) SSL certificate problem: unable to get local issuer certificate
> More details here: https://curl.se/docs/sslcerts.html
Hmmm, strange. I check sudo journalctl -u caddy and I see that Caddy is unable to acquire the cert via ACME (it's getting 403s and failing the challenge). I added a static respond /.well-known/acme-challenge/test 200 entry to the Caddyfile to hijack the default and confirm I can reach this path: if I curl -I http://newb.freeland.dev/.well-known/acme-challenge/test from my local machine and get a 200 response, then I know I can reach Caddy.
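Concretely, the hijack is a single respond directive dropped inside the newb.freeland.dev site block (temporary; it gets removed once the test is done):

```caddyfile
# Answer this one path directly from Caddy with an empty 200, bypassing
# the app entirely; a 200 here proves requests reach Caddy at all.
respond /.well-known/acme-challenge/test 200
```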
So I run curl -I http://newb.freeland.dev/.well-known/acme-challenge/test and I get a 200, right? Nope. I get a 404 XD.
After some time, I check my VPS provider to see if maybe I'm not allowing TCP traffic on 80/443. Sure enough, I'm not! I open up the firewall to allow traffic on these ports. Now I'm sure it's going to work... not!
I try again, and I still get a 404. WTH? I'm so confused now.
- I've opened the firewall on the server, opened it at the VPS provider.
- I have the Caddy service running.
- nslookup shows the correct IP address, ping resolves to the correct IP
- I can reach the application from the server (i.e. curl -I http://localhost:<PORT> returns a 200).
For whatever reason, I cannot curl https://newb.freeland.dev/ and get a 200 for the life of me. I'm losing my mind!
I use ngrep to monitor the network traffic to see if my requests are making it to the server from my laptop, i.e.
sudo ngrep -d any -W byline '^(GET|HEAD) ' tcp and port 80
Oddly enough, I see the requests hitting the system
T <my laptop's IP address> -> <my server's IP address>:80 [AP] #4
HEAD / HTTP/1.1.
Host: newb.freeland.dev.
User-Agent: curl/8.7.1.
Accept: */*.
.
So, I stop Caddy, and I curl again. I still get a 404. Huh... so my requests aren't even reaching Caddy; something else is serving that 404. That's strange.
I guess it's time to check iptables
sudo iptables -t nat -S | egrep -i 'PREROUTING|dport 80|REDIRECT|DNAT' || true
I see tons of entries for KUBE-*... that's odd, because I had disabled k3s using sudo systemctl disable k3s before I started (old me really wanted to run my own k3s cluster). After some research, I learned that disabling the k3s service doesn't remove the iptables rules, so the requests from my laptop were hitting the kubernetes NAT rules and getting filtered out. Sneaky SOB.
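To make that filter concrete: grep -E is just the modern spelling of egrep, and the pattern keeps only rules that could be rewriting inbound port 80 traffic. Here it is run over two sample lines in iptables-save syntax (the rules are illustrative stand-ins for kube-proxy's chains, not my actual dump):

```shell
# Feed two sample NAT rules through the same filter; only the line
# touching PREROUTING (or dport 80 / REDIRECT / DNAT) survives.
cat <<'EOF' | grep -Ei 'PREROUTING|dport 80|REDIRECT|DNAT'
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A KUBE-POSTROUTING -m mark --mark 0x4000/0x4000 -j MASQUERADE
EOF
```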
So, I ran
sudo /usr/local/bin/k3s-killall.sh 2>/dev/null || true
sudo /usr/local/bin/k3s-uninstall.sh 2>/dev/null || true
After this, I re-ran curl -I https://newb.freeland.dev and I got a 200!!! What a painful process, but also a very valuable lesson: DON'T USE KUBERNETES (jk; the real lesson was that despite configuring the firewall and setting up the reverse proxy, one also needs to check the iptables chains to understand why traffic isn't reaching the web server).
Thanks for coming to my rant!