Generative AI has become part of the daily workflow across the TV and film industry. It speeds up admin, helps structure ideas and supports teams under increasing pressure to move fast. But alongside the enthusiasm sits a growing concern: are we losing our ability to think critically when AI is doing so much of the work for us?
The question is often framed provocatively: is AI making us stupid?
The real issue is more nuanced. Emerging research shows that if we use AI passively, our cognitive engagement can drop. We think less deeply. We evaluate less rigorously. We remember less of what we read or write. And in risk-critical environments, that matters.
At Secret Compass, we have just launched our own AI tool for creating first-draft risk assessments for TV and film. This tool has not been built by tech specialists working in isolation. It has been designed by the people who do this work in real environments, for real productions. Every part of the workflow reflects lived experience of managing risk where it matters. Our stance is clear. AI can accelerate the work. It cannot replace human judgement. Understanding why means looking closely at what the research is telling us.
Why cognitive engagement matters
Risk assessments are not simply a compliance exercise. They rely on experience, context and the ability to interrogate assumptions. They are acts of judgement. If AI reduces the mental effort we invest in evaluating hazards and control measures, the output may be fast, but it will not be robust.
Several recent studies explore this shift in cognitive effort. The pattern across them is consistent and relevant for anyone working in production.
What the research shows: highlights from the latest studies
A 2025 MIT study found that when adults wrote essays using AI, their cognitive engagement dropped compared to those writing without tools. Their recall was weaker and they felt less ownership over their work. When they later wrote without AI, their engagement stayed low. The researchers proposed the concept of cognitive debt. If we stop using certain cognitive skills, it becomes harder to re-engage them later.
A separate study published in the British Journal of Educational Technology observed similar trends. Students using AI produced stronger essays, but their knowledge retention and ability to transfer what they had learned did not improve. The authors warn of metacognitive laziness. Users complete tasks successfully, but engage less deeply in planning, evaluating and revising.
Meanwhile, in April 2025, Microsoft researchers examined more than 900 examples of real-world AI use among knowledge workers. They found that AI shifts the cognitive effort from problem-solving to verification. As confidence in AI rises, critical thinking effort decreases. The risk is a long-term erosion of independent judgement.
Microsoft’s own research commentary from 2024 adds another dimension. While AI can increase creativity when used intentionally, many users produce a narrower range of ideas and apply less critical evaluation. Memory also weakens when AI intermediates the process. The authors describe this as weakening our cognitive muscles.
Across these studies, the message is consistent. AI does not prevent us from thinking. It changes when, where and how we apply mental effort. If used passively, it can reduce depth and weaken the habits of mind that support high-quality work.
Beyond hallucinations: the rise of workslop
Most users are familiar with AI hallucinations. Workslop is different. Coined by researchers at Stanford Social Media Lab and BetterUp Labs, workslop describes AI-generated content that looks polished but lacks real substance. The formatting is clean. The structure is logical. The tone is correct. But the thinking underneath is shallow.
Workslop is especially problematic in safety-focused disciplines, because it gives a false sense of completeness. It looks like good work and is therefore more likely to be trusted. In practice, it may miss nuance, context or critical hazards.
When AI becomes a shortcut rather than a tool for thought, it invites reduced engagement. That slide into cognitive passivity is precisely what the research warns against.
Why this matters for TV and film production
Production environments rely on the interplay between creative ambition and practical risk management. Every shoot location, contributor and creative idea introduces variables. No two productions are the same. And no AI system can understand creative intent or on-the-ground realities in the way a human can.
Risk assessments require professionals who:
- understand context
- translate creative ideas into practical plans
- anticipate real-world behaviours
- make ethical, safety-led decisions
AI can help identify hazards, organise information and create structure. But it cannot understand the complexities of contributor behaviour, crew dynamics or what is achievable in challenging locations. That final layer of judgement must come from the experienced professionals who know the production.
This is why cognitive engagement matters. If we become passive users of AI outputs, we risk missing the nuance and contextual thinking that make risk assessments meaningful.
How we designed Secret Compass AI with this research in mind
Secret Compass AI produces a fast, accurate draft of a risk assessment, giving production teams a structured starting point. But the design of the tool is rooted in real operational experience, not generic software logic. It has not been created by tech specialists guessing at what risk managers need. It has been built by the same practitioners who deliver this work on location, often in complex and high-risk environments.
Every question a user answers has been deliberately crafted to prompt the kind of thinking that elevates a risk assessment from a checklist to a meaningful document. Users are guided to reflect on elements they might not have considered if starting with a blank page. The tool is essentially a structured conversation with our risk logic, designed to draw out the details, assumptions and context that matter most.
Key design principles shaped the system
- It prompts human input at every stage. Users adapt the hazards, context and control measures rather than accepting a passive summary.
- It encourages judgement rather than replacing it. AI provides breadth and structure. The depth comes from the user.
- It provides transparency so users can interrogate the output. Nothing is hidden or automated beyond scrutiny.
- It reduces cognitive load without weakening cognitive habits. Because the tool handles the repetitive tasks, users have more time and mental space for the decisions that carry real consequence.
We call this a hands-on-the-wheel approach. AI accelerates the predictable parts of the process so the human expert can focus on the decisions where judgement matters most.
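To make the idea concrete, here is a minimal, purely illustrative sketch of what a hands-on-the-wheel review loop could look like in code. The names used here (HazardEntry, DraftRiskAssessment, sign_off) are hypothetical and do not describe how Secret Compass AI is actually built; the point is simply that an AI-drafted assessment cannot be signed off until a person has reviewed and adapted every entry.

```python
# Illustrative sketch only: a human-in-the-loop review gate for an AI-drafted
# risk assessment. All names are hypothetical, not Secret Compass AI internals.
from dataclasses import dataclass, field

@dataclass
class HazardEntry:
    description: str            # AI-drafted hazard wording
    controls: list[str]         # AI-suggested control measures
    reviewed_by_human: bool = False
    reviewer_notes: str = ""

@dataclass
class DraftRiskAssessment:
    production: str
    hazards: list[HazardEntry] = field(default_factory=list)

    def sign_off(self) -> bool:
        # The draft cannot be finalised until every hazard has been
        # explicitly reviewed and adapted (or confirmed) by a person.
        return all(h.reviewed_by_human for h in self.hazards)

# Usage: the AI populates the draft; a risk manager must touch every entry.
draft = DraftRiskAssessment("Example shoot", [
    HazardEntry("Working near open water", ["Brief crew", "Assign safety spotter"]),
])
draft.hazards[0].reviewed_by_human = True
draft.hazards[0].reviewer_notes = "Added local tide times; contributor is a weak swimmer."
print("Ready to sign off:", draft.sign_off())
```

The design choice the sketch illustrates is the same one described above: the AI supplies breadth and structure, but the workflow is built so that human review is a hard requirement rather than an optional extra.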
Hybrid intelligence: a more sustainable future for AI in production
The goal is not to resist AI. The goal is to use it in a way that strengthens human capability. This is the foundation of hybrid intelligence: human and machine working together, each doing what it does best.
AI brings speed, structure and coverage. Humans bring context, nuance, ethical reasoning and lived experience. When combined intentionally, they can produce outcomes that are stronger, clearer and more dependable.
The research is clear. AI can help us work faster. But if used without care, it risks narrowing our thinking and weakening the cognitive patterns needed for creative, judgement-based work. In production environments, where safety depends on quality of thought, that risk is not acceptable.
Hybrid intelligence offers a path forward. It ensures we keep our cognitive muscles engaged while benefiting from AI’s efficiency. It is not machines replacing humans. It is machines supporting humans to do their best work.
Conclusion
The question is not whether AI makes us less intelligent. The question is whether it changes how deeply we think. The studies suggest that without deliberate design and active engagement, it can. For production risk, where nuance and context shape real-world safety decisions, this matters.
AI should extend human thinking, not replace it. It should reduce cognitive load where appropriate without removing the reflective work that keeps us sharp.
This is the guiding principle behind Secret Compass AI. It accelerates the process while keeping people thinking. Because in the environments where we work, human judgement is what keeps people safe.
Download the full paper here
A detailed review of the latest studies from MIT, Microsoft and more, and what they mean for the future of risk.

Dr Anya Keatley is the Head of AI & a TV Risk Manager at Secret Compass.
With a PhD from the University of Bristol and hands-on experience in risk management, she leads the company’s work at the intersection of technology and risk – including the development of Secret Compass AI, our intelligent risk assessment tool for TV and film.
For more tales of risk and adventure, follow @secret.compass on Instagram.