AI IN TV RISK MANAGEMENT: THE PROMISE, THE PITFALLS, & THE HUMAN EDGE

In a world where uncertainty is both inevitable and accelerating, organisations in all industries – including TV & Film – are seeking new ways to anticipate, understand, and mitigate risk. Among the most significant technological advancements reshaping this space is Artificial Intelligence (AI), specifically the recent boom in Generative AI-powered products.

Traditional AI methods manipulate information in a formulaic way that is carefully pre-defined and scripted. They excel at processing vast amounts of data, identifying patterns, and automating routine tasks. Generative AI, by contrast, is a more general-purpose form of artificial intelligence, capable of handling information dynamically. It has emerged as a game-changer in the world of content creation, with OpenAI reporting in April that ChatGPT had reached the milestone of 20M paid subscribers. Whilst Generative AI’s “intelligence” improves daily, it still falls short in areas demanding contextual reasoning, complex non-standard problem-solving and ethical discernment.

At Secret Compass, we’ve spent over a decade navigating risk in high-consequence environments – from remote expeditions to high-pressure productions. Our approach is rooted in real-world expertise, and it’s through this lens that we explore Generative AI’s growing role in risk management.

This article looks at the dual nature of AI in this context: its potential to enhance safety and efficiency, and the inherent pitfalls that underscore the irreplaceable value of human judgement.

THE PROMISE OF AI IN RISK MANAGEMENT

Data Processing

One of the most immediate advantages traditional AI offers is its ability to process vast quantities of data at speeds and scales impossible for humans. AI systems, particularly those using machine learning and natural language processing, can sift through this data to identify patterns, flag anomalies, and generate insights more efficiently than traditional methods. It’s no surprise that AI is now routinely used by Open Source Intelligence (OSINT) providers & incident monitoring platforms.
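
To make this concrete, here is a minimal sketch of the kind of anomaly-flagging involved, using scikit-learn’s IsolationForest on invented incident figures. The feature names, numbers, and contamination threshold are illustrative, not drawn from any real monitoring platform.

```python
# Minimal sketch: flagging anomalous incident records with an isolation forest.
# The features and data below are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [incidents_per_week, mean_severity_score] for one location
incident_features = np.array([
    [2, 1.0], [3, 1.2], [2, 0.9], [4, 1.1],
    [3, 1.3], [2, 1.0], [18, 4.8],  # final row: a sudden spike
])

model = IsolationForest(contamination=0.15, random_state=0)
labels = model.fit_predict(incident_features)  # -1 = anomaly, 1 = normal

for row, label in zip(incident_features, labels):
    if label == -1:
        print(f"Flag for human review: {row}")
```

In practice, anything flagged this way would be routed to an analyst for judgement rather than acted on automatically.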

Enhancing Engagement & Compliance

AI-driven automation and chatbot interfaces are increasingly being used to guide individuals through risk management processes. This can produce more tailored results, ensure no steps are missed, and make risk management more accessible across an organisation.
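
As a minimal sketch of the underlying idea – a guided flow that refuses to let a user skip a step – consider the following. The step names are invented for illustration and are not a real production checklist.

```python
# Minimal sketch of a step-enforcing guided flow: the user cannot
# move on until every stage of the assessment has an answer.
RISK_STEPS = [
    "Describe the filming activity",
    "Identify the hazards involved",
    "Who might be harmed, and how?",
    "What control measures are in place?",
    "Name the person responsible for sign-off",
]

def run_guided_assessment() -> dict:
    answers = {}
    for step in RISK_STEPS:
        response = ""
        while not response.strip():  # refuse to skip a step
            response = input(f"{step}: ")
        answers[step] = response.strip()
    return answers

if __name__ == "__main__":
    completed = run_guided_assessment()
    print(f"All {len(completed)} steps captured; ready for review.")
```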

Time Saving

Large Language Models (LLMs) can understand and generate human language. They’re being used to draft documents based on user inputs and even mimic an organisation’s tone of voice. Used appropriately, this frees up risk managers to focus on complex or high-priority areas.
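
A minimal sketch of what this can look like, using the OpenAI Python SDK; the model name, tone instructions, and prompt are examples rather than a recommended setup, and any draft would still need review by a qualified risk manager.

```python
# Minimal sketch of LLM-assisted drafting via the OpenAI Python SDK.
# Model name and prompts are illustrative examples only.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model; substitute your own
    messages=[
        {"role": "system",
         "content": "You draft risk assessment sections in a plain, "
                    "measured house style for a TV production company."},
        {"role": "user",
         "content": "Draft a working-at-height section for a rooftop "
                    "drone shoot in central London."},
    ],
)
print(response.choices[0].message.content)
```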

Thinking “Outside the Box”

Generative AI can also be a useful brainstorming tool. By generating insights from diverse datasets or simulating alternative perspectives, it may highlight risks that hadn’t been considered.
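
One simple way to do this is to prompt the same model from several different perspectives. The sketch below assumes the same OpenAI SDK setup as above; the personas and scenario are invented for illustration.

```python
# Minimal sketch: brainstorming risks from several invented
# perspectives to surface blind spots.
from openai import OpenAI

client = OpenAI()
personas = ["local fixer", "drone operator", "medic", "insurer"]

for persona in personas:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # example model
        messages=[{
            "role": "user",
            "content": f"As a {persona}, list three risks you would "
                       "raise about a night shoot in a working harbour.",
        }],
    )
    print(f"--- {persona} ---\n{reply.choices[0].message.content}\n")
```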

THE PITFALLS AND LIMITATIONS OF AI IN RISK MANAGEMENT

Contextual Blind Spots

Despite its capabilities, AI lacks the ability to understand nuance and context. Critical thinking – the process of analysing information and forming a judgement – is a human skill. Generative AI is based on pattern recognition: a statistical function, not a cognitive one. While AI can facilitate decision-making, risk management also requires intuition, experience and situational awareness. Context matters – and AI does not “understand” the world as people do.

Inaccuracy

AI models can produce inaccurate or misleading information (often referred to as hallucinations) – with surprising confidence. They may also blend unrelated ideas in ways that appear coherent but don’t stand up to scrutiny. Human review remains essential, though this requires enough expertise to critically evaluate the output.
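
One practical safeguard is to hold AI-generated text in a review queue until a named human signs it off. The sketch below is a minimal, hypothetical workflow rather than a description of any particular tool.

```python
# Minimal sketch: AI-generated text only enters the final document
# once a named reviewer has approved it.
from dataclasses import dataclass

@dataclass
class DraftItem:
    text: str
    approved: bool = False
    reviewer: str = ""

def approve(item: DraftItem, reviewer: str) -> None:
    # A human with relevant expertise takes responsibility for the content.
    item.approved = True
    item.reviewer = reviewer

def publishable(items: list[DraftItem]) -> list[DraftItem]:
    return [i for i in items if i.approved]

queue = [DraftItem("AI-drafted hazard summary"), DraftItem("AI-drafted controls")]
approve(queue[0], reviewer="J. Smith")
print(f"{len(publishable(queue))} of {len(queue)} items cleared for use.")
```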

Over-Reliance and Automation Bias

Decision-makers can become overly reliant on AI outputs – a phenomenon known as automation bias. When AI systems present information with a veneer of objectivity, it’s easy to take results at face value. This can lead to poor decisions, especially if the system’s training data includes hidden biases.

Depth of Knowledge

Models have a distinct knowledge cut-off point, typically somewhere between a few months and several years in the past. On the surface, generative AI will give basic information about a given subject field, but it may fail to give accurate, detailed information about specific subjects or locations. Any changes or updates occurring after the knowledge cut-off will not be considered by the model.
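
A simple guard is to compare the dates a query concerns against the model’s published cut-off and warn when fresher sources are needed. In the sketch below, the cut-off date is an assumed example – always check the documentation for the model you are using.

```python
# Minimal sketch: warn when a query concerns events after the model's
# knowledge cut-off. The cut-off date here is an assumed example.
from datetime import date

MODEL_CUTOFF = date(2023, 10, 1)  # illustrative; check your model's docs

def needs_fresh_sources(event_date: date) -> bool:
    """True if the model cannot know about this event from training alone."""
    return event_date > MODEL_CUTOFF

shoot_date = date(2025, 6, 15)
if needs_fresh_sources(shoot_date):
    print("Recent event: supplement the model with current intelligence.")
```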

THE BEST OF BOTH WORLDS

The promise of AI in risk management is real – but so are the limitations. The most effective approaches do not rely on Generative AI or human judgement alone. Instead, they integrate the best of both.

That’s the approach we’ve taken at Secret Compass as we develop our own Generative AI-enhanced risk assessment tool: combining automation with the nuance, ethics, and critical thinking that only experienced professionals can provide.

Over the coming months, we’ll share more about how we’re using AI – thoughtfully, responsibly, and always in support of sound human judgement.

Anya Keatley is the Head of AI & a TV Risk Manager at Secret Compass.

With a background in academic research and hands-on risk management, she leads the company’s work at the intersection of technology and risk – including the development of Secret Compass AI, our intelligent risk assessment tool for TV and film.

For more tales of risk and adventure, follow @secret.compass on Instagram.