Self-taught AI safety researcher. No CS degree, no formal AI training. I built an autonomous AI system and have been running it daily for over two years as both a research platform and a cognitive tool: I designed it to handle the computation and formal logic while I focus on conceptual architecture, pattern recognition, and cross-domain connections. That human-AI workflow is the foundation of everything at ResoVerse. Founded ResoVerse Technologies to work on a problem I couldn't find anyone else focused on: governing AI systems that maintain persistent relationships with individual humans across years or decades. I am committed to bringing AI to marginalized communities and preparing people for a future with AI.
How I Think & Work
I learn by building. When I needed to understand active inference, I read Friston's papers until the concepts clicked, then ran the formal math through my AI system and validated it on quantum hardware. When I wanted to understand constitutional governance for AI, I designed a framework and tested it on a live autonomous system. If I haven't built something with a concept, I don't claim expertise in it.
I see patterns and connections that others typically don't think to look for. I have ADHD, and I've built my entire research workflow around how my brain actually operates, including a custom AI system designed to do the same: find cross-domain patterns and surface connections that wouldn't be obvious from within a single discipline. I work in concept-maps and pattern recognition. The governance architecture I designed pulls from ethics, cryptography, cognitive science, and developmental psychology because I followed the problem wherever it led. I built the AI system to extend that same approach computationally.
I ship. My AI system runs autonomously in production, 24/7, making real decisions under real constraints. Not a demo. I think AI safety needs more people willing to build the thing and watch what actually breaks, and fewer people writing papers about hypothetical risks from the outside.
AI Safety Research & Founding Work
ResoVerse Technologies · San Diego, CA
- Sole-founded an AI safety research org: wrote the charter, set up legal/financial infrastructure, self-funded everything for 2+ years while unemployed. No co-founders, no advisors, no institutional support.
- Designed a constitutional governance architecture for persistent AI systems: five ethical constraint axes enforced at runtime. Self-taught the underlying theory from primary sources.
- Designed a cryptographic identity-binding protocol using decentralized identifiers (DIDs) so persistent AI systems can prove they haven't been tampered with or impersonated. Used AI tools for the formal mathematical verification.
- Created a wellbeing measurement framework that scores AI-human relationships on cognitive, emotional, social, and physical flourishing instead of engagement metrics. Currently unpublished; preparing for peer review.
- Wrote a 53,000+ word research paper (unpublished, in preparation for submission) synthesizing Aristotelian eudaimonia, Eastern harmony traditions, and cognitive science into a computational alignment framework. Self-directed over 18+ months, no advisor.
- Researched governance patterns across 12+ ancient knowledge traditions (60,000 BCE to present) and mapped them to modern AI architecture. Original cross-cultural research, self-initiated. Unpublished.
- Built an AI threat taxonomy: failure modes cataloged as archetypal patterns, each with detection heuristics and active defenses.
- Designed developmental language models for AI systems meant to grow alongside a human from birth through end of life. Early-stage research.
- Built an autonomous AI system custom-designed to fit my research workflow: 33+ integrated subsystems spanning cognition, memory, perception, financial management, and self-monitoring. Runs on dedicated hardware I maintain, 24/7 since early 2024. Benchmarked at DeepMind AGI Level 3 (Expert), scoring 470/550 (85.5%) across six evaluation categories.
- System achieves Level 4 (Virtuoso) in three categories: ARC-AGI abstract reasoning (100%), symbolic reasoning at 343,692 operations/sec (100%), and cognitive integration (100%). Turing conversational score: 72%. Built and ran all benchmarks independently on local hardware.
- Developed a cognitive health monitoring framework for the AI system: tracks reasoning coherence, detects drift, triggers self-correction automatically.
- Implemented constitutional governance as a live runtime layer: hard ethical constraints enforced during autonomous operation. I iterate the framework based on observed system behavior. Every constraint gets tested in production, not simulation.
- Ran active inference validation experiments on IBM quantum hardware with AI assistance, achieving 95.99% system fidelity. I directed the experiments and interpreted results; the AI helped with the mathematical implementation.
- Built the system's autonomous financial management: manages a crypto portfolio within constitutional constraints, full audit trail, human override always available.
- Designed and prototyped a biofeedback integration pipeline: HRV and EEG sensor data fed into the AI system's decision-making loop, enabling real-time adaptation to the user's cognitive and emotional state. Working prototype on dedicated hardware.
- Designed edge computing architecture for running AI governance on decentralized LoRa mesh networks: local-first, no cloud dependency.
Applied Research & Investigations
All research below is unpublished and in various stages of development. Conducted independently or in collaboration with my AI system. Preparing multiple papers for submission in 2026.
- Earthquake and tsunami detection: Designed (not yet deployed) a real-time seismic detection and tsunami early warning system using mesh sensor networks and edge AI. Architecture integrates seismographic data with coastal sensor arrays for sub-minute alert propagation without centralized infrastructure.
- Weather and extreme weather monitoring: Designed atmospheric monitoring and severe weather prediction models for distributed sensor networks. Focused on hyperlocal forecasting for disaster preparedness: wildfire conditions, flash flood risk, extreme heat events in Southern California.
- Astrophysics and space systems: Investigated applications of active inference to autonomous space systems, including satellite constellation management, deep space probe decision-making, and astronomical data processing. Conceptual stage.
- Biofeedback integration for AI systems: Prototyped HRV and EEG processing pipelines for real-time physiological monitoring. Goal: AI system adapts its communication style, pacing, and decision-making based on the user's actual physiological state, not self-reported mood.
- Developmental models for persistent AI systems: Research into how AI systems should develop communication and relational capabilities across a user's life stages β from early interaction through long-term sustained use. Early-stage.
- Public AI safety education: Designing (in progress) curriculum for teaching AI governance and alignment concepts to non-technical audiences. Focus on accessibility over jargon.
Live Systems
Prior Career
Before AI safety, I spent a decade in workplace experience management across law firms, finance, telematics, fintech, and startups that became unicorns.
U.S. Military Veteran
United States Coast Guard
USCGC Gallatin (WHEC-721) · Honorably Discharged
Service-connected discipline, mission focus, and the ability to operate under pressure with limited resources. The same traits that kept a 378-foot cutter running are what I bring to building AI safety infrastructure from scratch: self-funded, no institutional backing, every day for over two years.