AI Isn’t the Hard Part. Human Systems Are.
What actually breaks when organisations move faster with AI
A couple of weeks ago, my daughter and I made a collage together at the kitchen table.
She was exploring AI. Not the hype or the fear, but how humans and machines shape each other.
For the machine layer, we dismantled old phones and a keyboard. Wires, circuits, keys.
For the human layer, she chose materials with memory: paper, fabric, photographs.
As she worked, the boundary softened.
The machine elements began to feel organic.
The human materials formed patterns that looked almost computational.
I see the same thing every day in my work.
Not at the kitchen table, but inside organisations trying to move faster with AI than their human systems allow.
AI isn’t the hard part.
Human systems are.
Once you stop looking at AI in theory and start watching how humans and systems actually interact under pressure, certain patterns become impossible to ignore.
Across banks, platforms, regulators, boards, and scale-ups, I keep seeing the same six capabilities determine whether teams adapt or quietly stall.
1. Leadership alignment and direction
Most teams believe they’re aligned. Until they’re under pressure.
I’ve sat in rooms with the most senior, highly paid executives in an organisation, where everyone nodded, then walked out with different interpretations of what mattered most.
You know alignment is slipping when you ask three people what success looks like this quarter and get three different answers.
What changed things wasn’t more communication. Cascading decks made it worse.
What worked was writing direction on a single page and using it everywhere for a full quarter. We named what we would not prioritise. And we assigned one owner per outcome. No shared accountability.
Clarity followed behaviour, not messaging.
2. Decision systems
Most delays aren’t caused by missing data. They’re caused by hesitation and diffusion.
I’ve watched capable teams stall for weeks because no one was quite sure who had the pen.
You know it’s broken when decisions keep circulating, changing language slightly, and never quite landing.
More forums didn’t help. They just created better discussion.
What worked was documenting who decides, who inputs, and who executes in plain English. We agreed which decisions had to land within 24 hours. And we stopped reopening decisions unless new information genuinely changed the risk.
Momentum returned quickly.
3. Governance and risk controls
Good governance doesn’t slow teams down. It protects them when speed increases.
Weak governance often looks efficient right up until the moment it isn’t.
You know it’s fragile when people can’t articulate the red lines without checking a policy.
SOPs didn’t help under pressure. No one read them.
What worked was stating the red lines out loud, repeatedly. Separating sandbox work from production. And designing a simple escalation path so exceptions didn’t trigger panic.
Risk conversations got shorter and sharper.
4. AI-in-the-loop workflows
AI doesn’t fix broken workflows. It exposes them.
I’ve seen AI introduced with real promise, only to surface confusion about who owns judgement when outputs are wrong or incomplete.
You know it’s unclear when no one can say who owns the outcome once AI is involved.
We started with tools. That was a mistake.
What worked was mapping one real workflow end to end. Marking where AI could assist and where human judgement had to sit. And explicitly agreeing who owned the outcome when AI got it wrong.
That agreement mattered more than the model.
5. Cross-functional coordination
Coordination isn’t about meetings. It’s about coherence.
Some of the biggest risks I’ve seen didn’t come from bad intent. They came from teams optimising locally and damaging the system globally.
You know coherence is breaking when work looks “done” in one team and creates problems for the next.
Detailed reporting hid the real issues.
What worked was a short, weekly cross-functional check-in with a fixed agenda. Progress was shown visually. And when priorities conflicted, leaders resolved it on the spot instead of letting teams absorb the tension.
Friction dropped almost immediately.
6. Intelligence behaviours
This is the quiet one.
No tool compensates for overload, fuzzy judgement, or fractured attention.
I see teams drowning in activity while clarity steadily declines.
You know it’s a problem when everything feels urgent and very little is actually finished.
Adding tools made the noise worse.
What worked was capping priorities at three per person. Cutting meetings aggressively. And protecting one block of uninterrupted thinking time each week, defended by leaders, not individuals.
Clarity improved without adding anything new.
Technology is no longer scarce.
Judgement is.
The teams that upgrade these human systems move faster and safer. The teams that don’t will struggle, not because AI failed them, but because their systems never caught up with the world they’re now operating in. That’s the work now.
To the edge and beyond. See you out there!
Kate

