The Joy of Tinkering: Why I Built My Own Cloud

Creation has always been one of the core pillars of my life. For as long as I can remember, I've loved building and tinkering with new things. My pseudo-ADHD mind struggles to stick with any one track; just look at my resume or my interests and you'll see me scattered across a dozen different fields. But over the years, I've recognized a pattern: no matter what skill or technology I pick up, I end up using it to create something of my own. Music compositions, game prototypes, data science experiments—the medium changes, but the impulse remains.

So when I decided to dive into cloud infrastructure, it was inevitable that I'd eventually ask: could I build my own? Not just deploy to AWS or spin up some Docker containers, but actually construct a personal cloud from the ground up? This is the story of why I said yes, what I learned, and why you might want to try it too.

Why Tinkering Matters

Before I tell you what I built, I want to explore the idea of tinkering and why it's been so valuable to me. Tinkering, in a nutshell, is exploration without guardrails. That might sound chaotic, but it's precisely this freedom that forces you to learn from failures in ways no structured tutorial can replicate. You learn through breaking things, troubleshooting, and piecing solutions together organically.

Throughout this project, I've spent countless hours debugging problems I created myself—and that's exactly where the learning happens. Like the time I spent an hour figuring out why Tailscale suddenly refused to expose my Docker containers over HTTPS, when it had worked perfectly two minutes earlier. (Spoiler: it had only worked earlier because I'd unknowingly exposed the right routes in the first place.) No course teaches you that specific problem, but debugging it taught me more about Docker networking than any documentation could.

Starting Small: The Hardware

With that philosophy in mind, I started with the most tangible part: the hardware.

I wanted to keep costs reasonable while still having enough power to run multiple services. After some research, I settled on a Raspberry Pi 5 (8GB model) as my foundation. Sure, I could have gone with a used enterprise server off eBay, but there's something appealing about the Pi's low power consumption and small footprint. This wasn't going to be a rack-mounted beast in my closet—just a small computer tucked behind my monitor.

I paired it with:

  • A 1TB M.2 NVMe SSD for storage

  • An Argon ONE case with active cooling and NVMe support

    The case choice turned out to be crucial. I learned early on that transcoding video in Plex pushes the Pi hard, and thermal throttling was killing performance. The active cooling solved that problem, though I'll admit the fan noise occasionally reminds my partner that yes, I am indeed running a tiny server on our desk.
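
If you go the Pi route, it's worth checking whether heat is actually your bottleneck before buying a new case. On Raspberry Pi OS, the stock `vcgencmd` utility reports the SoC temperature and whether the firmware has throttled the chip:

```bash
# Check the SoC temperature and the firmware's throttling flags.
vcgencmd measure_temp    # e.g. temp=52.0'C
vcgencmd get_throttled   # throttled=0x0 means no throttling since boot
```

Anything other than `0x0` means the chip has been throttled or under-volted at some point since boot.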

Building the Foundation: Containers and Networking

With hardware sorted, I needed to figure out how to actually run multiple services on this thing. Enter Docker.

Docker turned out to be perfect for this use case. Instead of installing each service directly on the Pi's operating system and dealing with conflicting dependencies, I could isolate everything in containers. A single `docker-compose.yml` file defines my entire stack, and bringing everything up is just one command. When something breaks, I can nuke a container and redeploy without affecting anything else. This felt like having superpowers.
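
To make that concrete, here's a minimal sketch of what such a stack looks like. The services, images, ports, and paths below are illustrative placeholders, not my exact configuration:

```bash
# Write a minimal docker-compose.yml (illustrative, not my exact stack),
# then bring everything up with a single command.
cat > docker-compose.yml <<'EOF'
services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8081:80/tcp"   # Pi-hole admin UI
    restart: unless-stopped

  plex:
    image: plexinc/pms-docker:latest
    network_mode: host   # simplest option for local discovery
    volumes:
      - /srv/media:/media
    restart: unless-stopped
EOF

docker compose up -d   # the whole stack, one command
```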

But then came the networking challenge. I had services running locally—I could access Pi-hole at `http://192.168.1.100/admin` from my laptop—but what about remotely? And more importantly, how could I get proper HTTPS with valid certificates instead of browser security warnings?

I stumbled onto Tailscale almost by accident while searching for SSL solutions, and it turned into one of those "where have you been all my life?" discoveries. Tailscale creates a secure mesh network (they call it a "tailnet") between all your devices. Think of it as a VPN where every device can talk to every other device, no matter where you are physically. The killer feature? Using your devices' MagicDNS names, Tailscale will provision valid HTTPS certificates for services on your tailnet. That red padlock turned green, and I felt like I'd unlocked a hidden level.

No port forwarding through my router. No exposing services to the public internet. No manually managing SSL certificates. Just secure, encrypted access to my services from anywhere.
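
For the curious, the workflow looks roughly like this. The `tailscale serve` syntax has changed across releases and the hostname below is a placeholder, so treat this as a sketch rather than gospel:

```bash
# Enable HTTPS certificates for your tailnet in the admin console first,
# then fetch a certificate for this machine's tailnet name (placeholder shown).
sudo tailscale cert pi.your-tailnet.ts.net

# Proxy HTTPS on the machine's tailnet hostname to a local service port
# (recent versions; older releases used a different serve syntax).
sudo tailscale serve --bg 8081
```

After that, `https://pi.your-tailnet.ts.net` works from any device on the tailnet, valid certificate included.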

What I'm Actually Running

So what's actually running on this little machine? A few services that have genuinely changed how I use technology at home.

Plex Media Server has become our household's Netflix replacement. I've been using Plex for years, so migrating my library was straightforward. There's something deeply satisfying about curating our own media collection, free from algorithmic recommendations and the frustration of shows disappearing from streaming platforms. Plus, watching a Raspberry Pi successfully transcode 4K video feels like witnessing a minor miracle.

Pi-hole might be my favorite discovery. It blocks ads and trackers at the DNS level—before they even load on your device. I was genuinely shocked when I first saw the statistics: 30-40% of my DNS queries were to advertising and tracking domains. After enabling Pi-hole across my network, browsing the web felt like stepping into an alternate universe. No pop-ups. No autoplay video ads. Just content. My partner noticed the difference immediately and now asks if "the ad blocker thing" is working whenever a banner sneaks through on their phone.
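
You can watch it work with an ordinary `dig` query pointed at the Pi. By default, Pi-hole answers blocked domains with `0.0.0.0`; the IP and domains below are examples, so substitute your own Pi's address and a domain from your blocklist:

```bash
# Query Pi-hole directly: a typical ad domain vs. a normal one.
dig @192.168.1.100 doubleclick.net +short   # 0.0.0.0 if blocked
dig @192.168.1.100 example.com +short       # a real address
```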

n8n is a workflow automation tool that's become quietly essential. I've automated everything from backing up notes to GitHub, to scraping RSS feeds and sending me daily digests. It's like having a personal assistant who never sleeps and doesn't judge my weird organizational systems.

I'm also experimenting with Obsidian for self-hosted note-taking. The idea of having my entire knowledge base version-controlled and in my own hands appeals to the data ownership side of this project. Early days still, but I'm intrigued by the possibilities.

Keeping an Eye on Things

Running multiple services on limited hardware means you need visibility into what's actually happening under the hood.

Portainer gives me a web UI for managing Docker containers. Instead of memorizing docker commands or SSHing into the Pi to check logs, I can see everything through a clean dashboard. When a container refuses to start, the logs viewer has saved me hours of troubleshooting. It's the difference between flying blind and having instruments.
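
Portainer is itself just another container. Something close to the standard invocation from their docs gets it running; this is Portainer CE on its default HTTPS port, with the Docker socket mounted so it can manage the other containers:

```bash
# Run Portainer CE with access to the Docker socket.
docker volume create portainer_data
docker run -d --name portainer \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  --restart unless-stopped \
  portainer/portainer-ce:latest
```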

NetData is the heavyweight monitoring tool I probably didn't need but absolutely love. It's resource-intensive for what it does, but watching real-time telemetry graphs of CPU usage, memory, disk I/O, and network traffic satisfies something deep in my engineer brain. It helped me identify that my n8n workflows were spiking CPU usage at 3 AM (oops, bad cron scheduling), and it showed me exactly when Plex transcoding pushes the system to its limits. Also, if you ever want to feel like a Hollywood hacker, opening NetData's dashboard with dozens of time-series graphs scrolling in real-time is perfect for that aesthetic.
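
NetData ships as a container too, though it needs read-only views into the host to chart real system metrics. A minimal sketch, loosely following their Docker docs (the exact mounts and capabilities may differ by version):

```bash
# Run NetData with read-only host mounts so it can see system-wide metrics;
# the dashboard lands on port 19999.
docker run -d --name netdata \
  -p 19999:19999 \
  --cap-add SYS_PTRACE \
  --security-opt apparmor=unconfined \
  -v /etc/os-release:/host/etc/os-release:ro \
  -v /proc:/host/proc:ro \
  -v /sys:/host/sys:ro \
  --restart unless-stopped \
  netdata/netdata
```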

The Learning Curve (Or: How I Broke Everything)

This all sounds smooth in retrospect, but the journey was anything but. Tinkering means breaking things, and I broke things spectacularly.

The Partition Disaster

Early on, I decided I wanted to separate my Plex media library from the system data. Different partitions for different purposes—seemed logical, right? My instincts screamed at me that this was a bad idea. The system literally warned me multiple times. I ignored both.

I created partitions on my live, working drive without a backup. Rookie mistake number one. But worse, I miscalculated the space allocation and gave the system partition way too little room. Within days, I couldn't install updates or deploy new containers because the system partition was full. My beautiful, carefully configured cloud was effectively bricked.

The fix? Reformat the entire SSD and start over. Hours of configuration, gone. I learned two lessons that day: always create backups before messing with partitions, and trust your instincts when they're screaming "THIS IS A BAD IDEA."

The DNS Blackout

The other memorable disaster came from my misunderstanding of how DNS and static IPs work together. I'd configured my router to use Pi-hole as the network's DNS server—great for blocking ads network-wide. What I didn't realize was that I needed to give the Pi a static IP *before* pointing all my devices at it.

When the Pi rebooted and DHCP assigned it a new IP address, every device on my network was suddenly asking a nonexistent DNS server for addresses. No internet. No streaming. No Zoom calls.

My partner's work meeting dropped mid-presentation. I heard "What happened to the internet?" from the other room and felt my stomach drop. Three hours of frantic troubleshooting later, I finally understood DHCP reservations, static IPs, and why you don't make your entire household's internet dependent on a device with a dynamic IP address.
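
The durable fix was giving the Pi an address that can't change out from under the network. A DHCP reservation on the router is one way; on Raspberry Pi OS releases that still use `dhcpcd`, a short stanza in `/etc/dhcpcd.conf` does it too (the addresses below are illustrative):

```bash
# Pin the Pi's address so the whole network can rely on it for DNS.
sudo tee -a /etc/dhcpcd.conf >/dev/null <<'EOF'
interface eth0
static ip_address=192.168.1.100/24
static routers=192.168.1.1
static domain_name_servers=127.0.0.1
EOF
sudo systemctl restart dhcpcd
```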

But here's the thing: those failures taught me more than any tutorial could. I now understand system design, redundancy, and the importance of patience while troubleshooting. More than that, I've demystified technologies that power massive cloud architectures like AWS and GCP. When you've manually configured DNS, wrestled with networking, and debugged your own infrastructure, "the cloud" stops being magic and becomes just someone else's computer that you're renting.

Why Not Just Use AWS?

The obvious question: why not just use AWS, Google Cloud, or any of the dozens of managed services that do all this better, faster, and more reliably?

Fair question. My goal was never to compete with professional cloud providers. I'm not delusional enough to think my Raspberry Pi can match AWS's infrastructure. Instead, this project was about understanding how these platforms work under the hood. When you've manually configured networking, debugged container orchestration, and dealt with resource constraints, you understand cloud architecture in a way that clicking "deploy" in a web console can't teach you.

There's also the data ownership angle. I've recently fallen down the rabbit hole of privacy and data ownership, and Pi-hole was my gateway drug. Seeing how much of my network traffic was just surveillance made me want to reclaim control. Looking forward, I want to migrate more of my data off platforms like Google Photos and Apple iCloud onto infrastructure I control. It's not about paranoia—it's about ownership.

And honestly? The convenience of "click to deploy" has always paled against the satisfaction of "I built that from scratch." This project is rooted in curiosity and independence, not an outright rejection of modern tools. I'll still use AWS for work projects. But for my personal infrastructure, I'll take the tinkering and the learning over convenience any day.

What's Next

So where does this go from here? I'm eyeing a few next steps:

  • Jellyfin as a Plex replacement. The fully open-source approach appeals to me, and I want to see how it compares.

  • Automated backups to an offsite location. Yes, I learned that lesson the hard way with the partition disaster.

  • Home automation integration. The Pi has room for more services, and controlling lights through my own infrastructure sounds like the next logical rabbit hole.

But more than any specific service, I'm excited to keep breaking things, learning from failures, and building. That's the real point of this whole exercise.

If you've ever been curious about building your own infrastructure, I'd say go for it. Start small—a Raspberry Pi, a few Docker containers, and the willingness to break things and learn from it. You'll make mistakes. Your partner might get annoyed when you take down the internet. But you'll come out the other side understanding technology in a way that no amount of tutorials or courses can provide.

The joy of tinkering isn't in the destination. It's in the messy, frustrating, exhilarating process of building something yourself.
