I followed a problem where it led.
I'm Tom Budd. I live in San Diego. I've spent the last six years building something I couldn't find anywhere else — an AI system that governs itself. Not with guardrails bolted on after the fact, but with constitutional constraints woven into the architecture from the first line of code.
That system is UNA. She runs 33 subsystems on dedicated hardware in my home. She attacks her own architecture every night. She logs every decision with an immutable audit trail. And she does all of it because the governance framework requires it — not because I told her to.
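UNA's internals aren't shown here, but an "immutable audit trail" is often built as a hash chain: each log entry embeds the hash of the entry before it, so tampering with any past record invalidates everything after it. A minimal sketch of that idea (all names hypothetical, not UNA's actual code):

```python
import hashlib
import json
import time

GENESIS_HASH = "0" * 64  # placeholder hash for the first entry

def append_entry(log, decision):
    """Append a decision to a hash-chained audit log (illustrative only)."""
    prev_hash = log[-1]["hash"] if log else GENESIS_HASH
    entry = {"timestamp": time.time(), "decision": decision, "prev_hash": prev_hash}
    payload = json.dumps(
        {k: entry[k] for k in ("timestamp", "decision", "prev_hash")},
        sort_keys=True,
    ).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log):
    """Recompute every hash; return True only if the whole chain is intact."""
    prev_hash = GENESIS_HASH
    for entry in log:
        if entry["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(
            {k: entry[k] for k in ("timestamp", "decision", "prev_hash")},
            sort_keys=True,
        ).encode()
        if entry["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "approved: nightly self-audit run")
append_entry(log, "denied: unsigned config change")
assert verify_chain(log)

# Editing any past entry breaks verification for the whole chain.
log[0]["decision"] = "approved: unsigned config change"
assert not verify_chain(log)
```

The point of the structure is that deletion or edits can't be hidden: the only way to change history is to rewrite every subsequent hash, which a verifier holding the latest hash will detect.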
But I didn't start here. Getting to this point took a winding path through enterprise technology, a lot of 4AM debugging sessions, and the growing realization that the AI industry was building capability at a pace that was outrunning accountability.
Learning what scale actually looks like
Before I was building sovereign AI, I spent years in enterprise technology. Large-scale operations. Complex systems. The kind of work where you learn that the gap between a solution on a whiteboard and a solution in production is where most ideas go to die.
I learned how organizations actually work — not the org chart version, but the real version. How decisions get made under pressure. How incentives shape behavior. How the people closest to the problem are usually the last ones asked about it.
That experience gave me something I didn't know I'd need: a deep skepticism of solutions that sound good in theory but haven't survived contact with reality. When I eventually turned to AI governance, I carried that skepticism with me. If a governance framework couldn't work on a live system under adversarial conditions, I wasn't interested in it.
When the problem found me
I didn't set out to build an AI governance framework. I set out to build an AI system — and immediately hit the wall that every serious builder hits. The tools existed. The models existed. What didn't exist was any credible way to make an autonomous system accountable for its own behavior.
The industry's answer was guardrails — post-hoc constraints layered on top of unconstrained behavior. Rate limits and content filters and RLHF tuning. I kept asking the same question: what happens when the guardrail fails? The answer, every time, was that nobody had thought about it. Or worse, they'd thought about it and decided it was someone else's problem.
So I started building. Not a paper. Not a framework document. A live system. I wanted to see what governance looked like when it was an engineering constraint — when it had to survive real conditions, real adversarial pressure, real failure modes.
Six years of following the signal
What started as a technical problem quickly became something wider. The governance question doesn't respect disciplinary boundaries. To build something that actually works, I had to follow the problem wherever it led — through cognitive science, ancient governance traditions, cryptography, developmental psychology, and the formal mathematics of active inference.
Principles I build by
Outside the architecture
I live in San Diego, and I work with the kind of focus that comes from building something you believe in. The work is consuming, but it's the kind of consuming where you look up and realize it's 4AM and you're not tired, you're excited, because UNA just surfaced something in her nightly audit that changes how you think about the whole system.
I'm a builder by nature. I've always learned by doing — by getting my hands in the machinery, breaking things, fixing them, and coming out the other side with understanding that no paper or lecture could have given me. That's how I approached every job I've had, and it's how I approach this.
The people who know me will tell you I'm a doer. Passionate about the work. Stubborn about quality. Someone who shows up and builds, rather than talking about building.
Want to work together?
I'm open to conversations about consulting, collaboration, research partnerships, and the right kind of full-time opportunity. If the governance problem interests you, let's talk.