Your AI Problem Isn’t the AI. It’s That People Stopped Trusting Each Other.

I’ve been paying close attention…

To something that keeps showing up in conversations I have with leaders and decision makers, especially in healthcare. They share that they picked the right AI tools, got executive support, ran training sessions – but nothing really changed. Worse, things got more tense.

For a long time, I thought this was just a change management challenge – maybe there were blind spots that weren’t addressed. But the more I look at it, the more I think it’s something deeper, something that starts before the tech is even part of the conversation.

What the numbers are telling us

Data shows that:

92% of companies plan to increase their AI investment over the next three years (McKinsey, 3,613 employees and 238 executives surveyed). But only 1% report reaching anything close to “AI maturity.”

56% of CEOs say they’ve seen zero financial payoff from AI (PwC, 4,454 CEOs across 95 countries). And 72% of CIOs report breaking even or losing money on their AI investments (Gartner, 506 CIOs surveyed).

These numbers are big and come directly from companies that are actively investing in this technology. The question becomes, “What’s going wrong?”

The instinct is to blame the tools, or the training, or the data quality. But there’s a growing body of research showing that the problem is something else entirely.

It’s not a technology problem

Amy Edmondson, the Harvard professor who basically defined the concept of psychological safety, just published a piece in Harvard Business Review with 3M’s Jayshree Seth that I found to be extremely important for us all (honestly, regardless if you’re doing the same type of work I am doing or not).

They argue that when AI enters a team, it changes much more than just workflows – it changes how people relate to each other. They describe something they call “trust ambiguity,” this idea that team members start second-guessing both the AI and also (more importantly) each other.

“Who made this decision: the AI or my colleague? Can I question this recommendation without looking like I don’t trust my team? If I raise a concern about an AI output, will people think I’m being ‘difficult’ or ‘resistant to change?’”

This is a dynamic I’ve seen before, especially in healthcare settings. Clinicians who have spent years (decades) building their expertise find themselves in a position where an AI tool is challenging their judgment. And the response isn’t always to push back on the AI. Instead, it’s to go quiet – to stop raising concerns and just go along with it.

Edmondson and Seth call it the “human-AI oversight paradox”: the more people use AI, the less confident they become in their own ability to question it. Even when they have every reason to.

The bigger picture

And here’s where the numbers get really interesting, because they paint this broader picture:

Only about 25% of employees regularly experiment with AI at work, even as the tools are being rolled out everywhere (Mercer, 4,500+ US employees). About 40% of workers in healthcare specifically haven’t used AI in any context (work or personal).

Meanwhile, 56% of workers say they’ve received no recent AI training despite their organizations accelerating adoption (ManpowerGroup, 13,918 workers across 19 countries). Employee willingness to support organizational change has collapsed from 74% in 2016 to 43% today (Gartner).

There’s a trend here: these people aren’t against AI. They’re people who don’t feel safe enough to engage with it. That’s an incredibly important difference.

In healthcare, this shows up in ways that have direct implications for patient care. Research from Qualifacts (2,000 US adults) found that only 10% of Americans would trust an AI-generated recommendation without a clinician involved. 77% say patients should always be informed when AI is part of their care.

All of this is exactly what you would expect from any professional – not resistance, but a completely rational response from people who understand what’s at stake.

What leaders can do about it

What do organizations actually do? Edmondson and Seth outline several principles, and I want to build on those with what I’ve been seeing in my own work:

1. Treat AI adoption as a learning initiative (rather than a tech rollout). This changes everything, from how you communicate about it to how you measure success. If the expectation is that people should “just use the tool,” you’re creating an environment where questions feel like pushback. If the expectation is that everyone is figuring this out together (which, in my opinion, is the reality), you create space for honest conversation.

2. Reward the people who raise concerns (along with those who adopt quickly). Edmondson talks about creating “intelligent failure protocols” – structured ways for teams to learn from what goes wrong with AI. In healthcare, this is especially important. For example, a nurse who flags that an AI triage recommendation doesn’t match their clinical experience is doing exactly what patient safety requires. That kind of voice needs to be celebrated.

3. Build time for AI After-Action Reviews. One of the most practical recommendations from the HBR piece: after major AI-assisted decisions or projects, bring the team together to review what happened. This helps with understanding how the human-AI dynamic actually worked. Where did people defer to AI when they shouldn’t have? Where did they override it, and why? This kind of reflective practice is something healthcare organizations already know how to do (think morbidity and mortality conferences, or incident reviews). The “muscle” is there – you just have to apply it to AI (and, in a way, reframe it).

4. Model vulnerability as a leader. If executives and senior clinicians aren’t willing to say “I’m still figuring this out too” or “this AI output doesn’t look right to me,” no one else will either (this needs to be stressed!). I’ve seen it: the organizations where AI adoption is actually working are the ones where leadership treats uncertainty as normal – at the end of the day, everyone is figuring it out.

The real work

The gap between AI investment and AI results surfaces, in my opinion, a trust problem. And it starts way before anyone logs into a new platform.

The organizations that will get the most out of AI are the ones where people feel safe enough to question, experiment, fail, and learn together. That’s messy and uncomfortable – and that’s fine.

That’s exactly what adoption looks like – just people being given the permission and the psychological safety to figure it out (together).

If you’re in healthcare or in any organization navigating this, and you’re seeing the tension but struggling to name it, I think Edmondson and Seth’s framework is a really strong place to start.
