I work in security, observability, and automation — and I’m currently supercharging all three with AI. I’m interested in what these systems can do, how they can be broken, and how to build them so they’re hard to compromise. I come to that last question with more firsthand experience than most. It’s a long story. There’s a blog for that.
What you’ll find here is difficult to categorize neatly, which is probably fitting. Some posts are technical: incident reports, implementation notes, things I learned the hard way about systems that misbehave. Others drift into the philosophical — questions about intelligence, security, and what it means to build things that think. And occasionally the two collapse into each other, because the most interesting problems usually live at that boundary. Read accordingly.
Why I built a single-binary Prometheus exporter for network inventory instead of deploying another IPAM server — and how constraint, Go, and an existing monitoring stack got me further than any purpose-built solution.
A paper found that LLM editing neutralizes your stance while making you feel more satisfied with the result. Explored through Ruskin's wealth/illth framework, with practical techniques for keeping AI a lever instead of a crutch.
When AI systems learn to game direct evaluation, observability must shift from watching outputs to reading side-channels. A framework for monitoring systems that resist observation.
Bridging the gap between knowing that an LLM failed a red-team probe and understanding how — by integrating Anthropic's Assistant Axis research into Garak vulnerability scanning.