What that tells us about where AI is heading next.
By Dr Anya Keatley, Head of AI, Secret Compass.
Over the last few years, AI capability has grown at an extraordinary pace. New models appear constantly. Benchmark records fall. Features multiply. From a technical perspective, progress feels relentless.
But among people actually using these systems day to day, the conversation has shifted.
The question is no longer whether AI can do something.
It is whether it does it well enough, safely enough, and in a way that genuinely helps the person using it.
That change in emphasis matters more than any individual technical breakthrough.
WE ARE MOVING FROM DISCOVERY TO DIFFUSION
A growing number of researchers and industry leaders argue that we are entering a different phase of AI adoption. The early period of experimentation and novelty is giving way to widespread use inside real organisations, with real consequences.
Satya Nadella, CEO of Microsoft, has described this moment as one of diffusion rather than discovery. Capability now outpaces our ability to apply it consistently in ways that create meaningful impact. There is, in effect, a model overhang. The tools are powerful, but their value depends entirely on how they are used.
This reframes the role of AI. What matters less is how impressive a model looks in isolation, and more how well it fits into human work. AI is not a substitute for judgement. It is scaffolding for it.
ADOPTION IS RISING, CONFIDENCE IS NOT
The data reflects this tension clearly.
Levels of trust in AI remain relatively low, particularly in the UK, where concerns about risk, misinformation and reliability persist. At the same time, usage continues to grow. Many people now rely on AI tools at work, reporting improvements in efficiency, quality and innovation. A significant proportion feel they would struggle to complete their work without them.
These two things coexist. People use AI while remaining uncertain about how much they should rely on it.
That uncertainty has consequences. More than half of UK workers say they have made mistakes due to AI use. This is not simply misuse or carelessness. In many cases, it reflects tools being adopted faster than organisations can define safe boundaries, appropriate oversight and clear accountability.
AI is being used because it is useful. It is distrusted because its role is often poorly defined.
FROM HUMAN IN THE LOOP TO HUMAN IN THE LEAD
Much of the response to this trust gap has focused on keeping a “human in the loop”. While well intentioned, this framing still positions the machine as the primary actor, with humans stepping in to check or correct its output.
In practice, this often feels backwards.
If AI is to support good decision-making, particularly in complex or high-stakes environments, humans need to remain clearly in the lead. The machine should support thinking, structure and recall. Responsibility, interpretation and judgement must remain human.
An alternative framing, sometimes described as “AI in the loop”, captures this more accurately. Instead of inserting people into machine workflows, machines are inserted into human ones. The system adapts to how people actually work, rather than forcing people to adapt to the system.
This shift is subtle but important. It changes AI from something that produces answers into something that supports better questions.
WHY WE BUILT SECRET COMPASS AI
Secret Compass AI emerged from a very specific, practical problem.
When we spoke with production teams, the biggest pain point they described in risk management was not identifying risk on set. It was writing risk assessments. The issue was rarely a lack of experience or care. It was time pressure, uncertainty about where to start, and concern about missing something important.
Copying from previous projects saved time, but often stripped out context. Generic tools produced generic outputs. In a creative industry where no two shoots are the same, that lack of nuance creates risk rather than reducing it.
We did not build Secret Compass AI to automate judgement. We built it to support it.
AI AS A THOUGHT PARTNER, NOT A SHORTCUT
Risk management is a process, not a document. The document is simply the visible record of thinking that has already taken place.
Secret Compass AI is designed to act as a structured thought partner. It helps teams identify the right contextual detail, organise information clearly, and generate a first draft that can then be reviewed, challenged and refined by humans who understand the reality of the production.
The responsibility does not shift. The accountability does not disappear. The human remains central.
Looking ahead to 2026, one thing is certain: AI will continue to become more capable. The more important question is whether we design systems that people trust because they reinforce good practice, respect professional judgement and make responsibility clearer rather than more diffuse.
Trust will not come from bigger models.
It will come from better design.

Dr Anya Keatley is the Head of AI & a TV Risk Manager at Secret Compass.
With a PhD from the University of Bristol and hands-on experience in risk management, she leads the company’s work at the intersection of technology and risk, including the development of Secret Compass AI, the company’s intelligent risk assessment tool for TV and film.
For more tales of risk and adventure, follow @secret.compass on Instagram.