The AI sophistication gap
90% of employees at one firm used AI regularly. Only 5% used it well. Adoption is not the metric that matters.
The number everyone celebrates is the wrong one
Joint research by KPMG and the University of Texas at Austin, published in Harvard Business Review, studied 2,500 employees and analysed 1.4 million AI prompts over eight months. The adoption numbers were impressive. The sophistication metrics were sobering.
90% used AI regularly. 5% used it with real sophistication.
Most firms track logins, licence utilisation, and monthly active users. These metrics confirm that people have opened the tool. They say nothing about whether the tool is creating value.
The gap between 90% and 5% is where the productivity gains are hiding. That gap does not close with another licence or another prompt template. It closes when people change how they work.
Four behaviours that separate the 5% from the 90%
The researchers identified four behavioural patterns that distinguished sophisticated AI users from everyone else. None of them are about prompt engineering. All of them are about how people think about the tool.
1. They go deeper
Sophisticated users do not accept the first answer. They push past it with follow-up questions, refinements, and challenges. Their interactions are longer and more iterative. Where most people treat AI as a search engine, the top 5% treat it as a working partner they can push back on.
2. They shape how the AI thinks
Rather than issuing instructions and hoping for the best, sophisticated users set context. They define roles, provide examples of good output, and explain the reasoning they expect. They are not writing longer prompts for the sake of it. They are giving the tool enough context to produce something useful on the first pass.
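As a concrete illustration, here is a minimal Python sketch of what that briefing can look like, assuming a generic chat-style API that accepts role-tagged messages. The analyst role, the example output, and the task are invented for illustration; none of it is drawn from the study.

```python
def build_briefed_prompt(task: str, example_output: str) -> list[dict]:
    """Assemble a prompt that sets a role, shows a model answer, and
    states the reasoning expected, instead of a bare instruction."""
    return [
        # Role and expectations travel with every request.
        {"role": "system", "content": (
            "You are a senior analyst reviewing draft client reports. "
            "Flag unsupported claims and explain the reasoning behind each flag."
        )},
        # An exemplar of good output, then the actual task.
        {"role": "user", "content": (
            "Here is an example of the output quality I expect:\n"
            f"{example_output}\n\n"
            f"Now apply the same standard to this task:\n{task}"
        )},
    ]

messages = build_briefed_prompt(
    task="Review the attached Q3 summary for unsupported revenue claims.",
    example_output="Claim: 'revenue doubled'. Flag: no year-on-year figures cited.",
)
```

The point is the structure, not the wording: the role, an exemplar, and the expected reasoning travel with the task, which is what gives the tool a chance at a useful first pass.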
3. They hand over real work
Most employees use AI for simple tasks: summarising an email, cleaning up a paragraph, generating a bullet list. The top performers delegate complex, multi-step work with clear constraints and success criteria. They define what done looks like before they start. The complexity is in the scope of the task, not just the length of the prompt.
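One way to make "defining what done looks like" concrete is a structured task brief. The sketch below is hypothetical: the TaskBrief fields and the example values are assumptions for illustration, not a format from the research.

```python
from dataclasses import dataclass

@dataclass
class TaskBrief:
    goal: str                    # the multi-step outcome, not a single step
    steps: list[str]             # the decomposition the delegator expects
    constraints: list[str]       # hard limits the output must respect
    success_criteria: list[str]  # what "done" looks like, written up front

    def to_prompt(self) -> str:
        """Render the brief as a single delegation prompt."""
        return "\n".join([
            f"Goal: {self.goal}",
            "Steps: " + "; ".join(self.steps),
            "Constraints: " + "; ".join(self.constraints),
            "Done means: " + "; ".join(self.success_criteria),
        ])

brief = TaskBrief(
    goal="Draft a client-ready summary of the attached market research",
    steps=["extract key findings", "group them by theme", "draft one page"],
    constraints=["no claim without a cited source", "under 400 words"],
    success_criteria=["every figure traceable to a source", "plain-English tone"],
)
print(brief.to_prompt())
```

The design choice that matters is that the success criteria exist before the prompt is sent, so the output can be judged against something fixed rather than accepted on feel.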
4. They use AI across their whole role
Most people find one use case and stay there. Sophisticated users apply AI to ideation, analysis, technical guidance, knowledge synthesis, and problem-solving. They have integrated it into how they think about their work, not bolted it onto one repetitive task.
Experience outperforms enthusiasm
One finding challenges a common assumption: that younger staff, who adopt new tools faster, will also use them better. They do adopt faster. But comfort and sophistication are different things, and senior employees outperformed junior ones.
Senior people had enough domain knowledge to delegate meaningfully. They knew what good output looked like because they had done the work themselves for years.
This has implications for how firms structure their AI capability programmes. Starting with the most technically comfortable people is not the same as starting with the people who will extract the most value. Your experienced practitioners already know what good looks like. They know the edge cases, the quality standards, and the shortcuts that cause problems later. Give them the tools and they will use them with the judgment that only comes from having done the work.
Junior staff still matter. But training them to use AI without first building their domain knowledge creates a different risk: confident use of tools they cannot evaluate.
Why most capability programmes miss the mark
The standard playbook runs like this: buy licences, send a company-wide email, run a lunch-and-learn, track adoption metrics, declare success. The tools are adopted. The working patterns do not change.
Three things are typically missing.
Clear standards
What does good AI-assisted work look like for each role? Without this, people default to the lowest-effort use case.
Hands-on training
Scenario-based practice using real work, not abstract exercises. People learn by solving problems they recognise.
Peer networks
Internal champions who share what works, debug what does not, and set the pace for their teams. Behaviour spreads through people, not policies.
The firms closing the sophistication gap are not buying more tools. They are investing in how their people use the ones they already have.
Measuring sophistication, not just adoption
If adoption metrics tell you whether people have opened the tool, sophistication metrics tell you whether they are getting value from it. That requires different questions; a short sketch after the list shows how some of them might be computed from prompt logs.
- How many workflows have been redesigned around AI? Not just which tasks use it, but which processes have fundamentally changed.
- What complexity of work is being delegated? Single-step tasks or multi-step workflows with defined success criteria?
- Are people iterating or accepting first outputs? Iteration signals that someone is working with the tool, not just querying it.
- Where has time been redirected? Freed capacity only creates value if it moves to higher-value work. Track where it goes.
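Here is a minimal sketch of how two of these signals, iteration and breadth, might be computed from prompt logs. The log shape and field names (conversation_id, use_case) are assumptions for illustration; real telemetry will differ.

```python
from collections import defaultdict

def sophistication_metrics(log: list[dict]) -> dict:
    """Summarise iteration depth and breadth of use from raw prompt logs."""
    turns_per_conversation = defaultdict(int)
    use_cases_per_user = defaultdict(set)
    for row in log:
        turns_per_conversation[row["conversation_id"]] += 1
        use_cases_per_user[row["user_id"]].add(row["use_case"])

    conversations = list(turns_per_conversation.values())
    # A conversation with more than one turn signals iteration, not a one-shot query.
    iterated = sum(1 for turns in conversations if turns > 1)
    return {
        "iteration_rate": iterated / len(conversations),
        "avg_use_cases_per_user": (
            sum(len(cases) for cases in use_cases_per_user.values())
            / len(use_cases_per_user)
        ),
    }

log = [
    {"user_id": "a", "conversation_id": 1, "use_case": "summarise"},
    {"user_id": "a", "conversation_id": 1, "use_case": "summarise"},
    {"user_id": "b", "conversation_id": 2, "use_case": "analysis"},
]
print(sophistication_metrics(log))
```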
A diagnostic that starts with these questions will surface where your team is already creating value with AI and where the gap is widest. That is where investment in capability will have the greatest return.
Close the gap in your organisation
Your team is already using AI. The question is whether they are using it well enough to create real value. We will diagnose where the sophistication gap sits in your operations and build a plan to close it.