"AI won't replace humans, but people who can use it will." I remember back in 2019, when I first heard about Andrew Yang. The first thing that came to mind as I watched him deliver his rapid stream of predictions about the future of work, Universal Basic Income, and human-centered capitalism was: this guy feels the way Ron Paul did. Both made you feel like they were saying things they were not supposed to be saying. The draw was how true it all sounded.
I think of both men as figureheads — embodiments of ideologies forged not by mass movements but by narrow, fervent coalitions of thinkers. For Paul, that coalition stretched back to the Austrian School of Economics, to Ludwig von Mises and Murray Rothbard. For Yang, the lineage was newer but no less earnest: thinkers like Erik Brynjolfsson and Andrew McAfee, whose Race Against the Machine warned that automation was outpacing adaptation.
The Untestable vs. the Unfolding
The challenge in assessing Ron Paul and his ideas is that they were, by definition, too epistemically deviant to ever be meaningfully tested within the machinery of government. His worldview stood so far outside the bounds of political plausibility that it was impossible to implement, let alone measure. His hypotheses were too radical to falsify, too unorthodox to prove.
By contrast, the opportunity presented by Andrew Yang lies in the fact that his diagnosis is already proving itself. The automation anxiety he voiced in 2019 — once dismissed as alarmist hand-wringing — now reads less like science fiction and more like headline reporting. Generative AI has begun doing precisely what Yang warned: displacing the cognitive middle class, compressing creative and analytical work into algorithms, and eroding the wage floor faster than policy can adjust.
The future rarely announces itself with trumpet blasts — it arrives as a thousand small subtractions. A task automated here, a position consolidated there, until one morning we wake to find the economy humming perfectly — and us no longer necessary to its tune.
The Disruption as Lived Experience
I didn't have to look far to see what that future might look like. When I was at Guidehouse, I worked alongside a group of deeply capable consultants — people who had spent years supporting the successful administration of key government mortgage and housing programs: HUD's asset sales, Ginnie Mae's capital markets, Treasury's housing initiatives. Then, seemingly all at once, everything changed. The firm made significant reductions, and our group was disbanded.
At the time, it felt impossible to imagine how the work we had been doing could continue without that institutional memory and specialized expertise. And yet, a year later, those same programs appear to be running just fine. Deliverables still go out. Clients still receive what they need. The vacuum we expected never fully materialized. That realization was sobering — and revealing.
What the Numbers Show
Across the professional-services landscape, the numbers tell a consistent story. Studies estimate that between 27 and 45 percent of consulting tasks are now automatable with current technology. Nearly three-quarters of consulting clients report that they expect their consulting partners to integrate AI directly into service delivery, and more than two-thirds say they would consider switching firms if those capabilities weren't in place. In other words, the market itself has begun rewarding automation — sometimes faster than firms can re-skill their people.
The consulting model — long dependent on human bandwidth and incremental labor — is being quietly rewritten. Where teams of analysts once pulled data, built models, and produced reports, now a smaller group of professionals, armed with advanced tools, can do the same in a fraction of the time. The disruption doesn't feel like collapse; it feels like continuity without us.
What Remains: The Interpretive, the Relational, the Strategic
Still, I don't see only loss in that transformation. I see a narrowing — a redefining — of what it means to add value. If automation has taken over the routine and the procedural, what's left is the interpretive, the relational, the strategic. The part that involves judgment, empathy, and narrative — the skills that connect numbers to meaning. Machines can model policy effects; they cannot explain them to Congress, or help a client see how a compliance change will ripple through communities.
That's precisely where House Strategies Group is heading. We've watched the same forces Yang warned about reshape our industry, and we're building our response not with resistance but with adaptation. We're embracing an AI-enabled consulting model — one that pairs automation with expertise, technology with trust. It's an approach that will make our pricing more competitive, our products more compelling, and our work less prone to error while remaining deeply human in purpose.
The Direction for HSG
We're investing in tools that accelerate the administrative so our people can focus on the analytical — in systems that surface insight instead of burying it. We're doing this because the consulting firm of the future isn't the one with the most people; it's the one that knows how to make machines work for people.
That's the direction for HSG: to meet this era not as a casualty of automation, but as a proof of concept for how small, mission-driven firms can evolve faster than the bureaucracies they serve. In a thinking economy being rewritten by algorithms, we intend to stay what the best consultants have always been — translators between complexity and consequence, helping our clients not just keep up with the machines, but lead them.
Full disclosure: I used ChatGPT to refine these thoughts — though I'll claim the poetic lines, the thousand small subtractions among them, as my own. Depending on how you read that, it either proves our point about augmentation or makes me a proud, willing participant in the slow, graceful demise of the thinking economy — and the birth of whatever comes next.