Building My Personal AI Infrastructure with MCP Servers

Introduction

Over the past few weeks, I’ve been building out my own personal AI infrastructure. Not for machine learning training or serving models at scale, but something more practical: a set of AI agents that can help me manage my day-to-day tasks. The core of this setup is the Model Context Protocol (MCP) - Anthropic’s open standard for connecting AI assistants to data sources and tools.

What started as a simple experiment with Claude Desktop has evolved into a full-fledged Kubernetes cluster running multiple MCP servers, GitOps workflows, and automated DNS management. It’s been a journey of learning what works, what doesn’t, and where the sweet spot is between complexity and utility. Here’s how I built it and what I learned along the way.

The Foundation: K3s on a Single Node

I’m running everything on a single-node K3s cluster. Yes, Kubernetes for a personal project might seem like overkill, but hear me out. K3s is lightweight, it gives me a consistent deployment environment, and most importantly, it lets me manage everything through GitOps using Argo CD.

My setup is straightforward:

  • Domain: chanwoo.pro with Cloudflare Tunnel for secure external access
  • GitOps Repository: A private GitHub repository for infrastructure-as-code
  • Key Services: Argo CD at argo.chanwoo.pro, Grafana at compute.chanwoo.pro, Harbor registry at registry.chanwoo.pro

The beauty of this architecture is that I can define all my infrastructure as code. Every MCP server, every service, every DNS record is declared in YAML manifests in my GitOps repo. When I push changes, Argo CD automatically syncs them to the cluster. No manual kubectl commands, no “it works on my machine” problems.
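
To make that concrete, here’s a minimal sketch of what one of those Argo CD Application manifests could look like. The repository URL, path, and namespace below are placeholders, not my actual values:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cloudflare-dns-mcp          # placeholder name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<user>/<gitops-repo>.git   # placeholder repo
    targetRevision: main
    path: apps/cloudflare-dns-mcp                          # placeholder path
  destination:
    server: https://kubernetes.default.svc
    namespace: mcp-servers                                 # placeholder namespace
  syncPolicy:
    automated:
      prune: true      # remove resources that disappear from Git
      selfHeal: true   # undo manual drift back to the Git state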

I chose K3s over Docker Compose or bare containers because I wanted to practice the Kubernetes patterns I’ll need for future projects. While I don’t have immediate plans for GPU nodes, this architecture could scale to support a small team of users and over 100 agents. Plus, having real Kubernetes means I can use the same tools and patterns I use professionally, which keeps my skills sharp.

Building the Cloudflare DNS MCP Server

The first custom MCP server I built was for Cloudflare DNS management. This might sound niche, but it’s been incredibly useful. I can now ask Claude to create DNS records, update tunnels, or check my domain configuration just by typing a request in natural language.

Here’s what the MCP server configuration looks like in my ~/.claude.json:

{
  "mcpServers": {
    "cloudflare-dns": {
      "url": "http://localhost:<port>/sse",
      "transport": "sse"
    }
  }
}

The server runs inside my K3s cluster and is exposed via a NodePort. I upgraded the MCP SDK from version 0.6.0 to 1.24.0 specifically to get the StreamableHTTPTransport feature, which allows the server to communicate over HTTP with Server-Sent Events instead of requiring stdio pipes.

The implementation was surprisingly straightforward. The MCP server wraps the Cloudflare API and exposes tools like create_dns_record, list_dns_records, and delete_dns_record. When Claude needs to manage DNS, it calls these tools through the MCP protocol, and my server translates those calls into Cloudflare API requests.
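
To give a feel for the shape of it, here’s a hedged sketch using the Python MCP SDK’s FastMCP helper and the standard Cloudflare v4 API (the same idea works in the other SDKs). The environment variable names and tool signatures are illustrative, not the exact ones in my server:

import os

import httpx
from mcp.server.fastmcp import FastMCP

# Hypothetical sketch; my real server is organized differently.
CF_API = "https://api.cloudflare.com/client/v4"
ZONE_ID = os.environ["CLOUDFLARE_ZONE_ID"]  # injected from the Sealed Secret
HEADERS = {"Authorization": f"Bearer {os.environ['CLOUDFLARE_API_TOKEN']}"}

mcp = FastMCP("cloudflare-dns")

@mcp.tool()
def create_dns_record(name: str, content: str, record_type: str = "A", proxied: bool = False) -> dict:
    """Create a DNS record (e.g. an A record for a new subdomain)."""
    resp = httpx.post(
        f"{CF_API}/zones/{ZONE_ID}/dns_records",
        headers=HEADERS,
        json={"type": record_type, "name": name, "content": content, "proxied": proxied},
    )
    resp.raise_for_status()
    return resp.json()["result"]

@mcp.tool()
def list_dns_records() -> list[dict]:
    """List every DNS record in the zone."""
    resp = httpx.get(f"{CF_API}/zones/{ZONE_ID}/dns_records", headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["result"]

if __name__ == "__main__":
    # Serve over HTTP/SSE so the pod can be reached through the NodePort.
    mcp.run(transport="sse")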

Building this taught me a lot about the MCP specification. The protocol is really well designed - it’s JSON-RPC 2.0 based, which means it’s language-agnostic and easy to implement. The hardest part wasn’t the protocol itself, but handling the containerization and getting the authentication right. I store my Cloudflare API credentials as a Sealed Secret in the cluster, which Argo CD deploys automatically.
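
On the wire, a tool invocation is just a JSON-RPC request. Here’s an illustrative example of what Claude might send to call one of the DNS tools (the arguments are made up, the IP is a documentation address, and the parameter names follow the sketch above rather than my real server):

{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "create_dns_record",
    "arguments": {
      "name": "grafana.chanwoo.pro",
      "content": "203.0.113.10",
      "record_type": "A"
    }
  }
}

The server replies with a matching JSON-RPC result carrying the tool’s output, and Claude folds that back into the conversation.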

One gotcha I ran into: the MCP SDK upgrade required some changes to how I structured the transport layer. The older stdio-based approach was simpler for local development, but the HTTP transport is much more flexible for containerized deployments. The tradeoff was worth it.
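
One way to keep both options open, again as an illustrative sketch under the same assumptions as above: default to stdio for local runs and pick the HTTP transport with an environment variable set in the container spec.

import os
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("cloudflare-dns")
# ... tool definitions as in the sketch above ...

if __name__ == "__main__":
    # MCP_TRANSPORT is a made-up env var set in the Deployment manifest;
    # it defaults to stdio so the same file still works for local testing.
    mcp.run(transport=os.environ.get("MCP_TRANSPORT", "stdio"))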

My Daily Workflow with MCP Servers

So what does this actually look like in practice? My current MCP server lineup includes:

  • Cloudflare DNS: Manages my chanwoo.pro domain records
  • Notion: Task management, career planning, and tech blog wikis
  • Playwright: Web scraping and browser automation
  • Context7: Vector search across my documentation

Here’s a typical workflow: I’ll be working on an infrastructure project and need to document it in my wiki, create a task to track the work, and set up a new subdomain for the service. Instead of switching between multiple browser tabs and tools, I just tell Claude what I need:

“Create a wiki entry in Notion about setting up monitoring, add a task to track the implementation, and add a DNS record for grafana.chanwoo.pro pointing to the cluster ingress.”

Claude coordinates across the MCP servers - creating the wiki entry and task through the Notion MCP, and the DNS record through the Cloudflare MCP. Everything happens in one conversation, and I can verify the results or ask for modifications without leaving my terminal.

The Notion MCP has been particularly useful for PhD application planning. I have a “Career Team” workspace in Notion with tasks, wiki entries, and handoff notes. Claude can read from and write to these databases, helping me track application deadlines, research potential advisors, and organize my thoughts on research interests.

Lessons Learned (and Things I Ditched)

Not everything worked out as planned. I spent a few days experimenting with agent frameworks like AgentOS and AgentScope, thinking they’d provide a nice orchestration layer on top of MCP. They didn’t. Both frameworks had a critical issue: they consumed 100% CPU due to asyncio busy loops, and their MCP connection implementations were flaky at best.

After debugging for longer than I’d like to admit, I decided to rip them out entirely. I deleted the namespaces, removed the DNS records, updated my documentation to mark them as DISCONTINUED, and simplified back to using Claude Code directly with MCP servers. Sometimes the simpler approach is the right one.

Other lessons:

  • Start small: I began with one MCP server (Cloudflare DNS) and only added more once I had a solid pattern
  • GitOps everything: Having all configuration in Git has saved me countless times when experiments went wrong
  • Documentation matters: I keep detailed notes in Notion about each service. Future me will thank present me
  • Security first: Sealed Secrets for credentials, Cloudflare Tunnel for external access, no exposed API keys

The infrastructure still has room to grow. I’m CPU-only right now but might add GPU support later, and the single node could be expanded to a highly available setup if needed. For now, it does what I need it to do, and I can evolve it as my needs change.


If you’re interested in building something similar, the MCP specification is open and well-documented. Start with the official MCP documentation, pick one use case that would actually save you time, and build from there. You don’t need Kubernetes - a simple Docker Compose setup or even local stdio-based servers work great. The key is finding the tools that actually make your life easier, not just adding complexity for its own sake.