Madhav Khanna

The Joy of Tinkering: Why I Built My Own Cloud

Creation has always been one of the core pillars of my life. For as long as I can remember, I've loved building and tinkering with new things. My pseudo-ADHD mind struggles to stick with any one track; just look at my resume or my interests and you'll see me scattered across a dozen different fields. But over the years, I've recognized a pattern: no matter what skill or technology I pick up, I end up using it to create something of my own. Music compositions, game prototypes, data science experiments—the medium changes, but the impulse remains.

So when I decided to dive into cloud infrastructure, it was inevitable that I'd eventually ask: could I build my own? Not just deploy to AWS or spin up some Docker containers, but actually construct a personal cloud from the ground up? This is the story of why I said yes, what I learned, and why you might want to try it too.

Why Tinkering Matters

Before I tell you what I built, I want to explore the idea of tinkering and why it's been so valuable to me. Tinkering, in a nutshell, is exploration without guardrails. That might sound chaotic, but it's precisely this freedom that forces you to learn from failures in ways no structured tutorial can replicate. You learn through breaking things, troubleshooting, and piecing solutions together organically.

Throughout this project, I've spent countless hours debugging problems I created myself—and that's exactly where the learning happens. Like the time I spent an hour figuring out why Tailscale suddenly refused to expose my Docker containers over HTTPS, when it had worked perfectly two minutes earlier. (Spoiler: it had only "magically" worked because I happened to have the right routes exposed.) No course teaches you that specific problem, but debugging it taught me more about Docker networking than any documentation could.

Starting Small: The Hardware

With that philosophy in mind, I started with the most tangible part: the hardware.

I wanted to keep costs reasonable while still having enough power to run multiple services. After some research, I settled on a Raspberry Pi 5 (8GB model) as my foundation. Sure, I could have gone with a used enterprise server off eBay, but there's something appealing about the Pi's low power consumption and small footprint. This wasn't going to be a rack-mounted beast in my closet—just a small computer tucked behind my monitor.

I paired it with:

  • A 1TB M.2 NVMe SSD for storage

  • An Argon ONE case with active cooling and NVMe support

    The case choice turned out to be crucial. I learned early on that transcoding video in Plex pushes the Pi hard, and thermal throttling was killing performance. The active cooling solved that problem, though I'll admit the fan noise occasionally reminds my partner that yes, I am indeed running a tiny server on our desk.

Building the Foundation: Containers and Networking

With hardware sorted, I needed to figure out how to actually run multiple services on this thing. Enter Docker.

Docker turned out to be perfect for this use case. Instead of installing each service directly on the Pi's operating system and dealing with conflicting dependencies, I could isolate everything in containers. A single `docker-compose.yml` file defines my entire stack, and bringing everything up is just one command. When something breaks, I can nuke a container and redeploy without affecting anything else. This felt like having superpowers.
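
To give a sense of what that looks like, here's a minimal sketch of a compose file with two of the services I'll talk about below. It's illustrative rather than a copy of my actual file: the image names are the official ones, but the ports, volumes, paths, and timezone are placeholders you'd adjust for your own setup.

```yaml
# docker-compose.yml -- a minimal sketch, not my real file.
services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "80:80/tcp"             # admin UI at http://<pi-ip>/admin
    environment:
      TZ: "Europe/London"       # placeholder timezone
    volumes:
      - pihole_data:/etc/pihole
    restart: unless-stopped

  plex:
    image: lscr.io/linuxserver/plex:latest
    network_mode: host          # simplest option for local discovery
    environment:
      TZ: "Europe/London"
      PUID: "1000"              # run as the pi user
      PGID: "1000"
    volumes:
      - plex_config:/config
      - /srv/media:/media       # placeholder path to the media library
    restart: unless-stopped

volumes:
  pihole_data:
  plex_config:
```

From there, `docker compose up -d` brings the whole stack up, and `docker compose logs -f pihole` follows a single service's logs when something misbehaves.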

But then came the networking challenge. I had services running locally—I could access Pi-hole at `http://192.168.1.100/admin` from my laptop—but what about remotely? And more importantly, how could I get proper HTTPS with valid certificates instead of browser security warnings?

I stumbled onto Tailscale almost by accident while searching for SSL solutions, and it turned into one of those "where have you been all my life?" discoveries. Tailscale creates a secure mesh network (they call it a "tailnet") between all your devices. Think of it as a VPN where every device can talk to every other device, no matter where you are physically. The killer feature? Their MagicDNS automatically provisions HTTPS certificates for services on your tailnet. That red padlock turned green, and I felt like I'd unlocked a hidden level.

No port forwarding through my router. No exposing services to the public internet. No manually managing SSL certificates. Just secure, encrypted access to my services from anywhere.
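
The sketch below shows roughly what that flow looks like on the Pi. Treat it as a hedged outline: the `cert` and `serve` subcommands exist, but their flags have changed across Tailscale releases, the hostname is a made-up example, and MagicDNS plus HTTPS certificates have to be enabled in the tailnet's admin console first.

```sh
# Rough outline of the Tailscale setup on the Pi (flags vary by version;
# check `tailscale serve --help` for the syntax your release expects).
sudo tailscale up                              # join the Pi to the tailnet
tailscale status                               # confirm the laptop and the Pi can see each other

# Fetch a valid HTTPS certificate for the Pi's tailnet name
# (hostname below is an example, not my real tailnet):
sudo tailscale cert pi.example-tailnet.ts.net

# Or let Tailscale proxy a local service over HTTPS directly,
# e.g. whatever is listening on localhost:8080:
sudo tailscale serve --bg 8080
```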

What I'm Actually Running

So what's actually running on this little machine? A few services that have genuinely changed how I use technology at home.

Plex Media Server has become our household's Netflix replacement. I've been using Plex for years, so migrating my library was straightforward. There's something deeply satisfying about curating our own media collection, free from algorithmic recommendations and the frustration of shows disappearing from streaming platforms. Plus, watching a Raspberry Pi successfully transcode 4K video feels like witnessing a minor miracle.

Pi-hole might be my favorite discovery. It blocks ads and trackers at the DNS level—before they even load on your device. I was genuinely shocked when I first saw the statistics: 30-40% of my DNS queries were to advertising and tracking domains. After enabling Pi-hole across my network, browsing the web felt like stepping into an alternate universe. No pop-ups. No autoplay video ads. Just content. My partner noticed the difference immediately and now asks if "the ad blocker thing" is working whenever a banner sneaks through on their phone.
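
If you want to watch the blocking happen, a quick check from another machine on the LAN looks something like this. The Pi-hole's address is the one from earlier in this post, and the ad domain is just an example that shows up on common blocklists—swap in anything from yours.

```sh
# Ask the Pi-hole directly for a domain that's on a blocklist...
dig +short ad.doubleclick.net @192.168.1.100
# ...recent Pi-hole versions answer blocked queries with 0.0.0.0,
# while a normal domain still resolves as usual:
dig +short example.com @192.168.1.100
```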

n8n is a workflow automation tool that's become quietly essential. I've automated everything from backing up notes to GitHub, to scraping RSS feeds and sending me daily digests. It's like having a personal assistant who never sleeps and doesn't judge my weird organizational systems.

I'm also experimenting with **Obsidian** for self-hosted note-taking. The idea of having my entire knowledge base version-controlled and under my control appeals to the data ownership side of this project. Early days still, but I'm intrigued by the possibilities.

Keeping an Eye on Things

Running multiple services on limited hardware means you need visibility into what's actually happening under the hood.

Portainer gives me a web UI for managing Docker containers. Instead of memorizing docker commands or SSHing into the Pi to check logs, I can see everything through a clean dashboard. When a container refuses to start, the logs viewer has saved me hours of troubleshooting. It's the difference between flying blind and having instruments.
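
For reference, this is roughly how Portainer is usually started; the image and port are the official defaults, while the volume name is just an example. The important part is the Docker socket mount, which is what lets Portainer see and manage every other container on the host.

```sh
# -p 9443:9443 ............ Portainer's web UI over HTTPS
# -v docker.sock .......... gives Portainer access to the Docker API,
#                           which is how it sees the other containers
docker run -d \
  --name portainer \
  --restart unless-stopped \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```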

NetData is the heavyweight monitoring tool I probably didn't need but absolutely love. It's resource-intensive for what it does, but watching real-time telemetry graphs of CPU usage, memory, disk I/O, and network traffic satisfies something deep in my engineer brain. It helped me identify that my n8n workflows were spiking CPU usage at 3 AM (oops, bad cron scheduling), and it showed me exactly when Plex transcoding pushes the system to its limits. Also, if you ever want to feel like a Hollywood hacker, opening NetData's dashboard with dozens of time-series graphs scrolling in real-time is perfect for that aesthetic.

The Learning Curve (Or: How I Broke Everything)

This all sounds smooth in retrospect, but the journey was anything but. Tinkering means breaking things, and I broke things spectacularly.

The Partition Disaster

Early on, I decided I wanted to separate my Plex media library from the system data. Different partitions for different purposes—seemed logical, right? My instincts screamed at me that this was a bad idea. The system literally warned me multiple times. I ignored both.

I created partitions on my live, working drive without a backup. Rookie mistake number one. But worse, I miscalculated the space allocation and gave the system partition way too little room. Within days, I couldn't install updates or deploy new containers because the system partition was full. My beautiful, carefully configured cloud was effectively bricked.

The fix? Reformat the entire SSD and start over. Hours of configuration, gone. I learned two lessons that day: always create backups before messing with partitions, and trust your instincts when they're screaming "THIS IS A BAD IDEA."
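
The first lesson, expressed as commands: image the whole drive somewhere safe before touching the partition table. This is a sketch with placeholder device and destination paths (check yours with `lsblk`), and ideally you'd run it with the Pi booted from another disk so the image is consistent.

```sh
# Identify the drive first -- an NVMe SSD typically shows up as /dev/nvme0n1,
# but confirm with lsblk before copying anything.
lsblk

# Clone the entire drive to an image file on external storage
# (destination path is a placeholder):
sudo dd if=/dev/nvme0n1 of=/mnt/usb-backup/pi-ssd.img bs=4M status=progress conv=fsync
```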

The DNS Blackout

The other memorable disaster came from my misunderstanding of how DNS and static IPs work together. I'd configured my router to use Pi-hole as the network's DNS server—great for blocking ads network-wide. What I didn't realize was that I needed to give the Pi a static IP *before* pointing all my devices at it.

When the Pi rebooted and DHCP assigned it a new IP address, every device on my network was suddenly asking a nonexistent DNS server for addresses. No internet. No streaming. No Zoom calls.

My partner's work meeting dropped mid-presentation. I heard "What happened to the internet?" from the other room and felt my stomach drop. Three hours of frantic troubleshooting later, I finally understood DHCP reservations, static IPs, and why you don't make your entire household's internet dependent on a device with a dynamic IP address.

But here's the thing: those failures taught me more than any tutorial could. I now understand system design, redundancy, and the importance of patience while troubleshooting. More than that, I've demystified technologies that power massive cloud architectures like AWS and GCP. When you've manually configured DNS, wrestled with networking, and debugged your own infrastructure, "the cloud" stops being magic and becomes just someone else's computer that you're renting.

Why Not Just Use AWS?

The obvious question: why not just use AWS, Google Cloud, or any of the dozens of managed services that do all this better, faster, and more reliably?

Fair question. My goal was never to compete with professional cloud providers. I'm not delusional enough to think my Raspberry Pi can match AWS's infrastructure. Instead, this project was about understanding how these platforms work under the hood. When you've manually configured networking, debugged container orchestration, and dealt with resource constraints, you understand cloud architecture in a way that clicking "deploy" in a web console can't teach you.

There's also the data ownership angle. I've recently fallen down the rabbit hole of privacy and data ownership, and Pi-hole was my gateway drug. Seeing how much of my network traffic was just surveillance made me want to reclaim control. Looking forward, I want to migrate more of my data off platforms like Google Photos and Apple iCloud onto infrastructure I control. It's not about paranoia—it's about ownership.

And honestly? The convenience of "click to deploy" has always paled against the satisfaction of "I built that from scratch." This project is rooted in curiosity and independence, not an outright rejection of modern tools. I'll still use AWS for work projects. But for my personal infrastructure, I'll take the tinkering and the learning over convenience any day.

What's Next

So where does this go from here? I'm eyeing a few next steps:

  • Jellyfin as a Plex replacement. The fully open-source approach appeals to me, and I want to see how it compares.

  • Automated backups to an offsite location. Yes, I learned that lesson the hard way with the partition disaster.

  • Home automation integration. The Pi has room for more services, and controlling lights through my own infrastructure sounds like the next logical rabbit hole.

But more than any specific service, I'm excited to keep breaking things, learning from failures, and building. That's the real point of this whole exercise.

If you've ever been curious about building your own infrastructure, I'd say go for it. Start small—a Raspberry Pi, a few Docker containers, and the willingness to break things and learn from it. You'll make mistakes. Your partner might get annoyed when you take down the internet. But you'll come out the other side understanding technology in a way that no amount of tutorials or courses can provide.

The joy of tinkering isn't in the destination. It's in the messy, frustrating, exhilarating process of building something yourself.


Future of AI in Gaming


A disclaimer before everything: this is not an academic article that will give you a rigorous account of how AI is used in the games and storytelling industries, but it might present a new way of thinking about what the future of AI could be in them.


For as long as I can remember, I've been fascinated by the world of video games, and I've pursued that interest to its core. A video game, in its purest form, is an art that is the culmination of many other arts: programming, graphics, and design, visual artistry, and even the subtlest psychological nudges designers use to make you feel more connected to the lore.

Where is AI in the Video Game Industry Currently?

I believe AI in video games hasn't really progressed in the last 15 years. Whenever we want a benchmark for game AI, we still go back to F.E.A.R., released in 2005. There have been minor advancements since, but the next revolution has yet to arrive, largely because we have always equated AI in video games with how NPCs react to our movements, continually trying to model NPCs that behave more and more like humans.

Don't get me wrong: some games have shown that the role of AI can be expanded well beyond the industry norm. Façade, also released in 2005, broke some of the industry's most stubborn stereotypes about what 'AI' in a game can mean.

Even though a playthrough lasts only about 20 minutes, the way it interacts with the player sets it apart from everything else. It combines natural language processing with a custom animation system that moves the actors according to a drama manager controlling the flow of the whole game. You decide how your character responds to each scenario by typing your responses straight from the keyboard.

A still from Event[0] as the player interacts with the game interface using their keyboard.

Another game that follows in its footsteps is Event[0], which has a similar interaction system: it takes the player's keyboard input, runs it through natural language processing, and returns a relevant response so the player can progress. Groundbreaking as this is, it still fits the earlier observation that the industry mostly uses AI to model NPC behavior as closely to human behavior as possible.

The Next Big Thing?

For me, the next big advancement will come from using AI to elevate the player experience. A video game isn't restricted to a player interacting with an NPC (some, like chess and other board-game adaptations, might be); most games coming out now have the player interacting with the game's simulation as a whole. We could employ AI techniques that sense how the player is responding to that simulation and then enhance the experience by adjusting various factors.

One of my favorite examples of a very basic version of this idea is Marvel's Spider-Man, released in 2018. The game is a masterpiece in itself, but a single detail made me appreciate it even more: while you swing, the background score plays, and as your swinging speed increases the intensity of the score rises with it, loosening the tension again as you slow down. It made the simple mechanic of swinging so much more immersive and thrilling.

A still from Marvel's Spider-Man.

I believe using AI to heighten these experiences would be a real step forward. Obviously, how much the AI modifies the simulation will vary by genre. These AI managers can be thought of as drama managers that control elements like lighting and mood to suit each player's personal preferences. This can be remarkably effective at giving each player a unique experience while keeping the progression of the story linear.

For example, consider a game like The Last of Us. An AI manager could sense where the player currently is in the game world; if the player has been scavenging an area for resources and lingering in one location, the manager could spawn a few infected nearby, nudging the player forward through the story while keeping the experience unique to them.

An AI drama manager like this can be tailored to each specific game, bringing varied gameplay while maintaining the integrity of the story and giving every player an experience that is unique yet comparable. I believe this might be the best way forward as AI continues to develop and integrate with the video game industry.
