Projects

A mix of homelab infrastructure, personal projects, and professional work. Each one taught me something I couldn't have learned from a tutorial.

Pi-hole

Live

Raspberry Pi 4 · Pi-hole · PiOS Lite · DNS Filtering

Network-wide DNS filtering running on Moat, a Raspberry Pi 4. The first node I stood up — I wanted to use the Pi for something, and a DNS server is about the most universally useful thing you can run on one. PiOS Lite keeps it headless and light.

Blocks ads and tracking at the DNS level for every device on the network. It was also the project that made me realize how much was phoning home on a typical home network, which made security feel a lot more concrete.

Blu3m Lab Portfolio Site

Live

Flask · NGINX · Rocky Linux · SELinux · systemd

This site, running on Truss — a Rocky Linux server in my homelab. Flask app behind an NGINX reverse proxy, HTTPS, SELinux-tuned, managed with systemd.

The interesting part isn't the Flask code. It's getting all the moving pieces — NGINX, SELinux policy, systemd service units, reverse proxy config — to work together correctly. Every broken config taught me a neat new lesson.
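To give a sense of the shape of that stack — with a placeholder domain, port, and cert paths, not the actual Truss config — the reverse proxy side looks roughly like this:

```nginx
# /etc/nginx/conf.d/portfolio.conf -- hypothetical reverse proxy config
server {
    listen 443 ssl;
    server_name example.com;                              # placeholder domain

    ssl_certificate     /etc/pki/tls/certs/example.com.crt;   # placeholder paths
    ssl_certificate_key /etc/pki/tls/private/example.com.key;

    location / {
        proxy_pass http://127.0.0.1:8000;                 # Flask app bound to localhost
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

On an SELinux-enforcing host like Rocky Linux, NGINX also needs `setsebool -P httpd_can_network_connect 1` before it's allowed to open that upstream connection — exactly the kind of piece that breaks silently until you check the audit log.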

Gaming Servers

Live

Pop!_OS · Minecraft · Valheim · Kodiak

Kodiak — a repurposed Alienware laptop — runs dedicated game servers for friends and family. Minecraft, Valheim, whatever someone gets an itch for. I chose Pop!_OS for its NVIDIA support and its ability to swap cleanly between a desktop environment and a headless server depending on what the machine needs to be doing.

It's not the most technically complex thing in the lab, but there's a real joy in watching people you care about use and enjoy something you built. Running a persistent game server also teaches you things about uptime and user expectations that are hard to get from infrastructure work alone.

Home Network

Live

OPNsense · Physical Segmentation · Tailscale · Managed Switch

OPNsense firewall on a repurposed HP desktop — Hodor — physically segmenting the network through its four LAN port interfaces. Tailscale handles remote access. There's a small managed switch in the mix too, though it's currently acting as an unmanaged one — bought for the PoE and future expandability more than anything I needed right then. Tailscale I picked almost on a coin flip; it and WireGuard are similar enough that I decided not to agonize over it and went with the name I liked better.

Hodor started as a pile of parts: a batch of old HP and Dell desktops, a drive that needed swapping before it'd even boot, and a PCIe NIC that turned out to be incompatible with the motherboard — so I sourced a different one with four RJ45 ports. Then there was a BIOS-to-UEFI migration to work through before OPNsense would run at all. It was a lot of fun learning to repurpose hardware and mod a machine to fit a completely different role.

Getting hands-on with OPNsense while also managing Fortigate at work has been genuinely interesting — firewall concepts are universal but the implementation details very much aren't. Every time I had to figure out how to do something in OPNsense that I already knew how to do in Fortigate, I came out of it understanding both better.

Backup & File Sync

Live

Restic · Syncthing · Ansible · Rocky Linux

Span — a repurposed desktop — runs Restic backups from Truss, Syncthing file sync across lab nodes, and hosts the Ansible control node. The impetus was unglamorous: Truss started making sounds like a plane trying to take off at 2am. A reboot fixed it, but that was enough motivation to make sure the lab was recoverable.

Restic handles the portfolio site backups. Syncthing keeps files in sync across machines without a central cloud dependency. Ansible is a work in progress — I've been learning it slowly and honestly find it harder to get right than I expected. The control node is ready; the playbooks are a project in themselves.
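One way to wire Restic into systemd is a oneshot service driven by a timer — this is a sketch with hypothetical paths and repository location, not the real Span units:

```ini
# /etc/systemd/system/restic-backup.service -- hypothetical unit
[Unit]
Description=Restic backup of the portfolio site

[Service]
Type=oneshot
Environment=RESTIC_REPOSITORY=/srv/backups/truss
Environment=RESTIC_PASSWORD_FILE=/etc/restic/password
ExecStart=/usr/bin/restic backup /srv/portfolio
ExecStartPost=/usr/bin/restic forget --keep-daily 7 --keep-weekly 4 --prune

# /etc/systemd/system/restic-backup.timer -- enable with: systemctl enable --now restic-backup.timer
[Unit]
Description=Nightly restic backup

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

`Persistent=true` means a missed run fires on next boot, which matters for machines that aren't on 24/7.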

CMMC Level 1

Certified

Compliance · Security Controls · CUI Handling · iParametrics

Led the CMMC Level 1 certification effort at iParametrics, a government contractor handling CUI. Owning the technical side meant mapping controls to actual system configurations, not just producing documentation.

The work that matters in compliance isn't the paperwork — it's making sure the controls are real. That meant auditing access controls, hardening endpoints, documenting what we actually do, and being honest about gaps before the assessors found them.

M365 Automation

Work

PowerShell · Power Automate · Microsoft 365 · iParametrics

PowerShell and Power Automate tooling for IT operations at iParametrics — user provisioning, license management, conditional access policy enforcement, and reporting.

Manual IT processes create inconsistency. Automating the repeatable work means things happen the same way every time, and frees up capacity for work that actually requires thinking.

802.1x / FreeRADIUS

In Progress

802.1x · FreeRADIUS · Entra ID · Network Access Control · iParametrics

Implementing 802.1x network access control for the office network at iParametrics, targeting the Workstation and Networking VLANs. Pre-shared keys aren't good enough once you care about who's actually on the network.

The Workstation VLAN is getting certificate-based auth via FreeRADIUS backed by Entra ID — authentication tied to actual user identities. The Networking VLAN is a different problem; the devices on it are infrastructure rather than user endpoints, so I'm considering going fully MAC address-based there instead. Still working through the right approach for each.
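For the MAC-based side, the common FreeRADIUS pattern is to register the switch as a RADIUS client and list device MACs as users — the IP, secret, and MAC below are placeholders, and the exact MAC formatting (delimiters, case) depends on what the switch sends:

```
# /etc/raddb/clients.conf -- the switch as a RADIUS client (hypothetical values)
client office-switch {
    ipaddr = 10.0.20.2
    secret = changeme
}

# /etc/raddb/users -- MAC-auth entry for an infrastructure device;
# by convention the MAC serves as both username and password
aabbccddeeff Cleartext-Password := "aabbccddeeff"
```

MAC auth is weaker than certificates — MACs can be spoofed — but for switches and APs that can't hold a cert, it's a defensible middle ground over an open port.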

Autobattler

In Progress

Go · Multiplayer · Game Development

A multiplayer autobattler game, built in Go. I've played Hearthstone Battlegrounds, Storybook Brawl, and others for years, and I think the genre fails in a consistent way: what players call scam — units that can swing the outcome with very little way to play around it — ends up being the dominant form of counterplay. Positioning exists in these games, but it's shallow.

My theory: add rows to the battlefield, design units where positioning actually matters, and let players see their opponent's board layout live. That creates real outplay potential beyond just having a numerically superior board, the right keywords, or a piece of scam that trumps both.

I'm approaching it infrastructure-first — game state management, network protocol, and Go's concurrency model before the game loop. Go's goroutines and channels are a good fit for the real-time state sync a multiplayer game needs. I expect containers to come into this meaningfully once there's something real to deploy.