Why Agentic AI Needs a Human Heart and a Historical Compass
- Chris McNulty
Recently, I joined Philippe Trounev at Docsie.io for a wide-ranging conversation on his podcast, “So What About AI Agents”. We dove into the promise and peril of agentic AI, and how businesses can navigate this new frontier with clarity, strategy, and heart.
It was a chance to reflect on what we’ve learned from past waves of innovation. Since that conversation, I’ve been thinking more about how agentic AI fits into the broader arc of technological history.

Because if there’s one thing we know for sure, it’s this: Technology is never neutral.
The Double-Edged Sword of Innovation
Every major technological breakthrough brings both opportunity and risk. Consider a few examples:
19th Century: The telegraph, dubbed the “Victorian Internet” in Tom Standage’s 1998 book, accelerated communication, sparked romance, and created new commercial opportunities. It required its own codes and security measures. It facilitated wire fraud. It also rendered entire businesses obsolete, like the Pony Express.
20th Century: The Internet gives schoolchildren remote access to Shakespeare’s works—and gives gambling addicts 24/7 access to online betting.
21st Century: Social media connects us to faraway family and long-lost friends. It also fuels radicalization, online division, and the bullying of middle school children.
Agentic AI is no different. It can amplify human judgment or drown it out. It can streamline operations or create chaos. The outcome depends on how we design, deploy, and govern it.
What Agentic AI Is (and Isn’t)
Agentic AI refers to systems that can reason, act, and adapt autonomously. These aren’t just chatbots with better grammar. They’re designed to take initiative, make decisions, and execute tasks—ideally without constant human oversight.
But let’s be honest: most of what’s being marketed as “agentic” today is anything but. We’re seeing a flood of “Hello World” agents—low-value automations that spin endlessly, consuming compute and budget without delivering meaningful outcomes.
Multiply that by 10,000 agents and you’ve got a $10M operating bill with no ROI. That’s not just a technical problem—it’s a strategic one.
Strategy First, Always
Back at Microsoft, I met with IT leaders weekly. No matter what we showed — features, roadmaps, demos — they always asked:
“Do we already own this?”
“Which of these should I turn on first?”
And my answer was always: “I can’t tell you until I understand your business.”
Are you entering new markets? Cutting costs? Managing risk? Trying to acquire? Your technology roadmap, including AI, must be rooted in your business strategy. Otherwise, you’re just chasing hype.
Governance Isn’t a Bottleneck—It’s a Compass
Governance isn’t about slowing down. It’s about scaling the right things.
Ask yourself:
What business challenge are we solving?
Is the process well understood?
Does it lend itself to autonomy?
If the answer is no, don’t build an agent. Not yet.
And if you do build one, monitor it. Measure it. Maintain it. Otherwise, you’re just adding noise.
Building Antibodies to AI Slop
We’re not just building systems—we’re building immune systems. “AI slop” is everywhere: generic content, bots praising bots, agents sending useless updates. It’s pollution. And it’s growing.
Just like our bodies fight viruses, our organizations need behavioral and systemic antibodies to reject low-quality AI. That means:
Human-in-the-loop design: Agents should amplify judgment, not replace it.
Signal prioritization: Active human decisions should outweigh machine popularity metrics.
Governance frameworks: Every agent must be accountable to a business process, not just a prompt.
Quality isn’t about volume. It’s about validation.
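As a purely illustrative sketch of signal prioritization (every field name and weight here is my own assumption, not a prescribed framework), the idea is simply that a handful of deliberate human decisions should outrank any volume of machine activity:

```python
# Illustrative sketch: score content so that explicit human decisions
# (approvals, manual flags) outweigh machine popularity metrics.
# All field names and weights are hypothetical.

def quality_score(item: dict) -> float:
    human_signal = (
        10.0 * item.get("human_approvals", 0)   # a deliberate sign-off
        + 5.0 * item.get("manual_flags", 0)     # e.g. a manually set retention flag
    )
    machine_signal = (
        0.01 * item.get("agent_views", 0)       # bots reading bots
        + 0.05 * item.get("auto_shares", 0)     # automated reposts
    )
    return human_signal + machine_signal

signed_contract = {"human_approvals": 2, "agent_views": 3}
viral_draft = {"human_approvals": 0, "agent_views": 400, "auto_shares": 20}

# Two deliberate human approvals outweigh hundreds of machine touches.
assert quality_score(signed_contract) > quality_score(viral_draft)
```

The exact numbers don’t matter; the design choice does: human validation sits on a different scale than machine popularity, so no amount of automated noise can drown out a genuine sign-off.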

Interoperability: The Next Frontier
Remember the early days of enterprise networking? Every system tried to talk to every other system. It was a mess. (Remember WINS servers?)
We’re seeing the same with agent-to-agent (A2A) communication. Protocols like MCP are emerging, but seamless interoperability is still a work in progress.
There are maybe 500 critical line-of-business systems, but potentially millions of agents. Discovery, trust, and orchestration will be far more complex for agents than for line-of-business systems.
And we haven’t even touched security.
Experiments and Ethics
I ran a LinkedIn lead-gen experiment using an agent built on Claude. It sourced 100 contacts—half mine, half its own—and started making connections for me. It did okay at first, but by message three it often went off the rails. I had to step in. (And I’m not running it now.)
I was transparent. I told people: “This is an experiment. Here’s what I said. Here’s what the AI did. Where do you think I stepped in?” Most guessed right.
That’s the human signal. And it matters more than machine signal.
The Rise of AI Slop and the Flight to Quality
We’re entering a phase I call the “flight to quality.” In generative AI, people are rejecting slop—generic, soulless, machine-created content. The same will happen in agentic AI.
Businesses won’t win by deploying more agents. They’ll win by deploying better ones.
That means:
Prioritizing human-in-the-loop design
Valuing human signals over machine signals
Building agents that amplify—not replace—human judgment
What Comes Next
Every company will need an agent—just like every company needed an API. But not every person will build their own.
We’ll see the rise of digital twins—agents sold by trusted providers that represent individuals or departments.
But we must proceed with caution. Accountability, security, and governance must come first. Otherwise, we’ll end up with trillions of tokens spent on agents no one understands or controls.
Final Thought: Human Signals Must Lead
In AI systems, human validation must outweigh machine popularity. A final signed contract, reviewed by two people, should carry more weight than a draft reviewed by hundreds. In content management, a retention flag—especially one added manually—should carry great weight.
Agentic AI won’t take over the world. But people and businesses who know how to use agents—strategically, ethically, and intelligently—will.