The Hidden Reason Most AI Transformations Stall

There’s a question we hear in almost every conversation we have with leaders right now: How do we get our people to actually use AI?

It’s a reasonable question. And it’s the wrong one.

Because in most of the organizations we work with, the people are already moving. They’re figuring things out on their own, finding workarounds, building fluency in ways that their job descriptions don’t require and their performance reviews don’t recognize. The bottleneck isn’t capability or even willingness. It’s what the organization has been built, over years, to reward.

That gap between what people can now do and what their organizations are designed to support is where most AI transformation efforts quietly stall.


The Data Reframes the Problem

Microsoft’s 2026 Work Trend Index puts numbers to something we’ve long observed in the field. Across 20,000 workers in 10 countries, researchers found that organizational factors (culture, manager behavior, and how talent is developed and evaluated) account for more than twice the reported AI impact of individual mindset and behavior combined: 67% versus 32%. The strongest single predictor wasn’t someone’s attitude toward AI, or how much training they’d received, or even how often they used it. It was the culture of the organization around them.

That finding shouldn’t surprise us, but it should stop us.

Because most organizations are still treating AI adoption as an individual performance problem. They’re tracking who’s using which tools, how often, and for what. They’re building training curricula and setting usage targets, doing the things that make sense if you believe the constraint is individual capability.

[Chart: Organizational factors (67%) vs. individual mindset and behavior (32%) vs. demographics in driving reported AI impact. Source: Microsoft 2026 Work Trend Index, n = 20,000]

Edgar Schein, the MIT organizational psychologist who spent decades studying how cultures actually work, argued that what an organization says it values and what it actually rewards are often two different things, and people always respond to the latter. The espoused values live in strategy decks and town halls. The real values live in what gets recognized, what gets promoted, and what gets quietly tolerated when results are under pressure.

When only 13% of workers say their organization rewards reinvention of work with AI, even when it doesn’t immediately produce results, that tells you everything about where the real values are. Not 13% who tried and failed. 13% who feel the system is actually on their side when they take the risk.

A question worth sitting with: When did your organization last visibly recognize someone for how they approached a challenge, not just whether it succeeded?


What the Organizations Pulling Ahead Actually Do Differently

The organizations pulling ahead aren’t distinguished by having more sophisticated AI tools or more technically advanced employees. They’re distinguished by the conditions they’ve built.

Workers in these environments are significantly more likely to say their manager:

  • openly uses AI,

  • sets quality standards for AI work,

  • creates space for experimentation, and

  • encourages more ambitious redesign of work.

They are also twice as likely to say reinvention is rewarded regardless of outcome.

Regardless of the outcome.

That’s the detail most performance management systems are not designed to support. We are extremely good at recognizing results. We are much less practiced at recognizing the quality of the attempt, the willingness to rethink, to try, to share what was learned, even when the experiment didn’t land. Peter Senge argued in The Fifth Discipline that the only sustainable competitive advantage is an organization’s ability to learn faster than its competitors.

But learning, by definition, requires tolerance for the intermediate state where you haven’t figured it out yet. If that state is penalized, people stop learning. They optimize for looking capable rather than becoming capable.


The Structural Question Leaders Need to Ask

So the real question for leaders isn’t how to get more people using AI. It’s harder than that:

What would have to be true about how we evaluate, recognize, and reward work for people to feel genuinely safe redesigning it?

That question tends to surface things organizations don’t love to look at:

  • Performance metrics designed for a different model of work.

  • Manager incentives that reward team output over team development.

  • A promotion culture that elevates people who execute well on known paths, not people who find better paths.

None of these are failures of intent. They’re features of systems built for a world that no longer fully exists.

You can’t change what a culture actually values by talking about it differently. You change it by changing what gets measured, what gets recognized, and what happens to people who take risks and fail. Those are structural decisions, not communication decisions.

The organizations that figure this out first won’t just move faster on AI. They’ll build something more durable, the institutional capacity to keep learning as the tools, the workflows, and the nature of work itself keep changing. That’s the prize. And it has very little to do with which AI platform you’re deploying.

It starts with being honest about what you’re actually rewarding right now, and whether that’s the same as what you say you want.


A question worth sitting with: If someone on your team spent a quarter redesigning how their work gets done, and it didn’t produce measurable output gains, how would your performance system treat that?
