NCA boss says 'AI deep fake videos could derail criminal trials' in stark warning that mirrors BBC thriller The Capture - but is threat based on actual evidence or speculation?
- Mar 18
- 5 min read
Updated: Mar 31

VITAL CCTV evidence used to convict criminals during trials could be undermined by "deepfakes" as they get ever more sophisticated, the boss of the National Crime Agency (NCA) has warned.
NCA Director General Graeme Biggar gave the stark warning as he unveiled the agency's strategic threat assessment of organised crime for this year, published on March 17 2026. He said organised crime groups would use AI technology to exploit more victims and try to compromise the criminal justice system.
His warning has echoes of the BBC1 thriller series The Capture, starring Holliday Grainger as investigator Rachel Carey, who blows the whistle on a programme known as "Correction" after it was hijacked by criminals to spread misinformation and avoid detection.
Mr Biggar warned the series could become a reality over the next five years, saying "it is highly likely that trust in information and institutions will decline between 2026 and 2031."
The assessment said: "It is highly likely that exposure to mis and disinformation will increase, driven by shifting patterns of news consumption away from traditional news providers such as television and print towards less moderated channels such as social media, the increased use of generative artificial intelligence to create synthetic media, and the increasing amplification of false narratives by malicious actors.
"It is highly likely that SOC offenders will exploit less moderated online spaces and synthetic media to market illicit goods, target child sexual abuse, fraud and modern slavery and human trafficking victims, and advertise services including people smuggling."
But he also warned of the risk posed to our criminal justice system by ever more sophisticated deepfakes.
He said: "It is highly likely that trust in the criminal justice system will be reduced by the proliferation of deepfake media and the degrading effect it has on trust. While deep fake media detection capabilities are improving, it is almost certain they will at times lag behind improvements to generative artificial intelligence models. It is likely that SOC offenders will attempt to undermine the integrity of criminal trials by introducing deep fakes that provide false alibis or depict judges, jurors, or witnesses in ways that undermine their credibility or impartiality. It is also likely that as media generated by artificial intelligence becomes more realistic and commonplace in daily life, genuine video evidence such as CCTV footage will carry less weight with juries, potentially leading to acquittals where other sufficient corroborating evidence is lacking."
It comes almost a year after judges were warned to be increasingly vigilant about "deepfake" evidence entering British courtrooms as AI technology becomes ever more sophisticated.
The Judiciary has also been warned about the risks of AI in preparing submissions as its use increases among parties across British courtrooms.
Artificial Intelligence Guidance for Judicial Office Holders, published on April 14 2025, highlighted the benefits of careful use of AI, but also alerted judges to the risks, including deep fake evidence and "fictitious laws" creeping into cases.
The new guidance warned: "AI tools are now being used to produce fake material, including text, images and video. Courts and tribunals have always had to handle forgeries, and allegations of forgery, involving varying levels of sophistication. Judges should be aware of this new possibility and potential challenges posed by deepfake technology."
It added that AI tools could still generate "misinformation", "selective data" or data that is "out of date, incomplete or biased" or "based on US law while purporting to be English."
It said: "Information provided by AI tools may be inaccurate, incomplete, misleading or out of date. Even if it purports to represent English law, it may not do so. AI tools may make up fictitious cases, citations or quotes, or refer to legislation, articles or legal texts that do not exist, provide incorrect or misleading information regarding the law or how it might apply, and make factual errors."
Despite these ongoing concerns over the use of AI in courts, the guidance does encourage judges to use the private AI tool "Copilot Chat" developed by Microsoft, which is now available through an eJudiciary system.
The guidance said it could be useful for judges to summarise large bodies of text, to write presentations, and for administrative tasks.
They were urged to ensure the accuracy of any information it produced and not to use it for legal research or analysis.
However, they were told AI could be used as a secondary tool in preparing judgments, provided accuracy is checked.
It said: "Judicial office holders are personally responsible for material which is produced in their name. Judges are not generally obliged to describe the research or preparatory work which may have been done in order to produce a judgment. Provided these guidelines are appropriately followed, there is no reason why generative AI could not be a potentially useful secondary tool."
Lawyers are already using AI tools for disclosure of evidence and submissions, according to the document.
But courts and tribunals are increasingly seeing litigants in person preparing evidence submissions through AI chatbots, with judges having to warn them that they are responsible for the accuracy of any submissions.
The guidance said: "AI chatbots are now being used by unrepresented litigants. They may be the only source of advice or assistance some litigants receive. Litigants rarely have the skills independently to verify legal information provided by AI chatbots and may not be aware that they are prone to error. If it appears an AI chatbot may have been used to prepare submissions or other documents, it is appropriate to inquire about this, ask what checks for accuracy have been undertaken (if any), and inform the litigant that they are responsible for what they put to the court/tribunal."
Judges are warned to scrutinise submissions from litigants in person that reference cases that do not sound familiar or carry unfamiliar citations, that use American spelling or refer to overseas cases, or that contain content which "superficially at least appears to be highly persuasive and well written, but on closer inspection contains obvious substantive errors."
The NCA and CPS were asked if they could provide any examples of attempts to use AI to derail the justice system; neither provided any, suggesting the warning may be informed speculation rather than being based on documented cases as yet.
A CPS spokesperson said: "We asked internally but nothing has yet been flagged to us about defendants inappropriately using AI during the criminal trial process."
The NCA said the purpose of the strategic assessment was to evaluate potential and emerging threats as well as mature ones.
It was described as an intelligence assessment of the threat, rather than one based on judicial outcomes.
An NCA spokesman said: "The NCA rigorously assesses the threat from serious and organised crime using intelligence derived from across the UK and internationally.
"For each development identified, the NCA evaluates its probability using one of seven terms to clearly define the likelihood of it occurring, ranging from 'remote chance' to 'almost certain'."
