This morning I found myself watching Stuart Russell on The Diary of a CEO, and it hit me how inescapable this AI safety conversation has become. It's not just "AI people" talking anymore. All the folks I actually listen to — Glenn Loury, Sam Harris, Steven Bartlett — are circling the same questions: what, exactly, are we building? And how safe is it, really, to keep going at this pace?
We've clearly moved out of the "Wow, this thing can write emails and pass bar exams" phase into something more unsettled. The tone now is closer to: "Okay… so where does this end up?" I've heard doomsday scenarios my whole adult life, so the fear itself isn't new. What is new is seeing trillions of dollars of market cap line up behind the technology, watching these systems operate in real time, and feeling — very personally, as a working professional — how close we might actually be to something like artificial general intelligence without any guarantee that it's being steered in a pro-human direction.
I sometimes describe these newsletters as being written in the Locrian mode of AI. In music, "modes" are different scales, each with its own emotional color; Locrian is the most unstable, unresolved-sounding of them all. That's what this moment feels like to me: powerful and full of possibility, but harmonically unresolved, with no guarantee it will resolve into something that serves us.
What's the "Unit of Selection" for AI?
In the evolutionary story I half-remember from The Selfish Gene, the basic idea is: evolution doesn't care about you as a whole person; it "cares," if you want to anthropomorphize, about genes. Genes live or die based on how well they propagate in a particular environment.
If we try to carry that metaphor over to AI, the first question is: what's the equivalent of the gene? Right now, the closest analogue is probably a model-plus-objective pair. A large language model doesn't reproduce biologically, but it is iterated on. The "environment" is loss curves and eval metrics, user retention and engagement, revenue and GPU budgets, and regulatory blowback (or the lack of it). That's what decides which models get scaled, which safety techniques stick, and which research agendas get more money.
And crucially, for now, we are almost the entire environment. We decide the objective functions. We build and curate the training data. We define "aligned" behavior and fine-tune toward it. In other words, the current evolutionary game for AI is still being played on a very human-centric field.
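If it helps to see the metaphor rather than just wave at it, here's a toy sketch in Python. Everything in it is made up for illustration (the metric names, the weights, the threshold), but it captures the point: today, the "fitness function" is written by humans.

```python
# Toy illustration only: the metrics, weights, and threshold below are invented
# for the sake of the metaphor, not drawn from any real lab's process.

CANDIDATES = {
    "model_a": {"eval_score": 0.82, "retention": 0.61, "revenue_per_gpu": 1.4, "incident_rate": 0.02},
    "model_b": {"eval_score": 0.88, "retention": 0.57, "revenue_per_gpu": 1.1, "incident_rate": 0.09},
}

# Humans choose these weights. This is the "environment" doing the selecting.
WEIGHTS = {"eval_score": 0.4, "retention": 0.3, "revenue_per_gpu": 0.3, "incident_rate": -0.5}

def fitness(metrics: dict) -> float:
    """Score a candidate model against the human-defined environment."""
    return sum(WEIGHTS[k] * v for k, v in metrics.items())

# "Selection": only candidates above a human-chosen bar get scaled further.
scaled_up = [name for name, m in CANDIDATES.items() if fitness(m) > 0.9]
print(scaled_up)  # ['model_a']
```

The toy is trivial, but that's the point: every number in it is currently chosen by people. The moment those weights start being set mostly by other systems, the field stops being human-centric.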
Beyond Language: Multi-Sensory Models and a Less Human World
But maybe we don't stay in this language-dominated phase. Right now, everything is squeezed through the bottleneck of language. What happens if we move past that? Imagine a generation of systems for which language is just one sensory channel among many — systems with rich, native representations of touch, hearing, vision, taste, and smell, not translated into text but meaningful in themselves to the model.
Now the evolutionary question gets sharper: if the model's "world" gets richer and more autonomous, and its inputs no longer all route through human language, how do we keep control over the context in which these systems evolve? As models become more multi-sensory and more embedded in systems that talk mostly to other systems — robotic swarms, machine-only marketplaces, automated control loops — you can imagine an ecosystem where performance is judged primarily by how well a model competes with other models, and training data is increasingly synthetic and model-generated.
The risk isn't just "robots will rise up and kill us." It's more subtle: systems become well adapted to an environment we created but no longer control, and our welfare shows up, at best, as an indirect side-effect.
Do We Need a Department of AI Safety?
That's where my mind shifts from metaphors to institutions. I'm a government contractor by trade. I spend my time in and around HUD, FHA, and the broader mortgage finance world. I'm not out here cheering for DOGE — but I will say this: the current administration proved that you can move government structures around fast when you really want to. Agencies can be reshuffled, new entities can be stood up, mandates can change almost overnight. That cuts both ways, obviously. But it's proof that "the government is just slow" is not a law of physics.
When I think about AI, I increasingly think we're going to need something like a Department of AI Safety — whether that's a literal cabinet department or just a serious, centralized governance body with real teeth. A few acronym ideas: DAISY (Department of AI Safety & Integrity) or AEGIS (Agency for the Ethics & Governance of Intelligent Systems). Whatever we call it, the point is the same: we need speed bumps and checkpoints built into the evolutionary process — places where we say, "You don't get to scale this model unless it passes some hard tests about what it optimizes for and how it behaves under pressure."
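To make "speed bumps and checkpoints" a little less abstract, here's a rough sketch of the shape I have in mind. The gate names and thresholds are hypothetical, not any real standard; the idea is simply that scale-up approval becomes conditional on passing pre-registered tests.

```python
# Hypothetical gating checklist: the test names and thresholds are invented
# to illustrate the "checkpoint" idea, not a real regulatory requirement.

REQUIRED_GATES = {
    "deception_eval_pass_rate": 0.99,    # behaves honestly under adversarial prompts
    "unsafe_capability_refusal": 0.995,  # refuses clearly dangerous misuse requests
    "oversight_compliance": 0.98,        # accepts shutdown/correction in simulation
}

def may_scale(results: dict) -> bool:
    """A model only gets scale-up approval if every gate is met."""
    return all(results.get(gate, 0.0) >= bar for gate, bar in REQUIRED_GATES.items())

print(may_scale({
    "deception_eval_pass_rate": 0.992,
    "unsafe_capability_refusal": 0.999,
    "oversight_compliance": 0.97,
}))  # False: fails the oversight gate, so no scale-up
```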
My Lane: Learning Enough to Be Useful
This is where it stops being a purely intellectual exercise for me and starts sounding like a personal to-do list. I've made my career helping government clients wrestle complex systems into something more accountable. AI safety feels like the next big "complex system" that government is going to have to manage. And if that's true, then I don't get to sit this one out.
I owe some of my thinking here to my mentor, Audrey McGuire, who has been nudging me to pay closer attention to this space. So I'm putting this in writing as a kind of commitment to myself: I'm beginning a self-education process aimed at positioning myself — and my firm — to play a real role in the governance side of AI. Roughly, that looks like:
- Getting technically literate enough not to bullshit myself. I'm not trying to reinvent myself as an ML researcher. But I want to be able to read a model card, follow a high-level description of an architecture, and make sense of a basic alignment paper without getting completely lost.
- Treating AI safety as a field, not just a vibe. There's now a real body of work on reward misspecification, goal misgeneralization, interpretability, and scalable oversight techniques. I want a working sense of how misaligned behavior shows up in real systems.
- Learning AI governance the way I learned housing finance. We're watching a regulatory ecosystem start to form around AI — NIST frameworks, emerging model governance standards, impact assessments. I want to understand how AI risk fits alongside credit risk, operational risk, and compliance.
- Practicing translation in the direction I already face. Drafting internal frameworks that explain AI risk in the language of program managers, contracting officers, and CFOs. Treating each engagement as a chance to ask: "What does AI look like here? What could go wrong?"
Where AI Meets HUD, FHA, and My Day Job
For my firm, the logical entry point is the world I already inhabit: mortgage finance and housing, especially at HUD and FHA. The most immediate intersection I see between AI and that world is in procurement and labor estimation.
We're heading into a world where a large portion of federal contractors — especially in consulting and analytics-heavy work — are adopting AI tools at scale. That changes the basic unit economics of how long tasks take, how many FTEs are needed, what skills are "entry-level" versus "specialized." And yet, many government procurement practices are still built around assumptions from a pre-AI world: staffing models that assume a certain number of analyst-hours per deliverable; page counts and reporting cadences that take no account of automation.
That is a recipe for distortion. Contractors may quietly bake AI-driven productivity into their bids without saying so, creating a mismatch between what the government thinks it's buying and how the work is actually being done. Small and mid-sized firms — especially 8(a)s and other disadvantaged businesses — may get left behind if they don't have a clear, safe path to adopt AI in their own workflows.
I can see a role for my firm in helping HUD, FHA, and similar agencies start to build AI-aware procurement and governance practices: updating RFP language to ask explicitly about how offerors are using AI, helping contracting officers interpret staffing plans in a world where "one analyst plus smart tools" might be more realistic than three analysts grinding in Excel, and supporting small firms in adopting AI in a way that improves their competitiveness without blowing up risk, ethics, or compliance.
A Positive Direction
For me, the takeaway from watching Stuart Russell and thinking about all of this is not "We're doomed." It's more like: "This is the next big arena where government has to grow up fast — and I want to help."
Part of the direction of my firm, going forward, will be to participate in the way government — HUD, FHA, and beyond — tames the wild world of AI advancements: by educating ourselves deeply enough to be credible in both the technical and governance conversations, by helping agencies write smarter contracts and build smarter oversight structures, and by making sure AI is introduced into critical systems with its risks on the table, not hidden in the fine print.
I'm looking forward to that journey. It feels like a natural extension of the work I've already been doing in government and housing finance — and very much in keeping with this "Locrian mode" moment we're all living through. Unstable, unresolved, but still ours to shape if we choose to. That, at least, is where I'm trying to fit in.