
AI incident reporting: Addressing a gap in the UK’s regulation of AI

by Tommy Shaffer Shane


Read the full policy paper here: AI incident reporting: Addressing a gap in the UK’s regulation of AI (PDF, 702KB)

Executive summary


AI has a history of failing in unanticipated ways, with over 10,000 safety incidents involving deployed AI systems recorded by news outlets since 2014. As AI becomes more deeply integrated into society, incidents are likely to grow in both number and scale of impact.


In other safety-critical industries, such as aviation and medicine, incidents like these are collected and investigated by authorities in a process known as ‘incident reporting’.


We – along with a broad consensus of experts, the U.S. and Chinese governments, and the EU – believe that a well-functioning incident reporting regime is critical for the regulation of AI, as it provides fast insights into how AI is going wrong.


However, such a regime is a concerning gap in the UK’s regulatory plans.


This report sets out our case and provides practical steps that the Department for Science, Innovation & Technology (DSIT) can take to address this gap.


The need for incident reporting


Incident reporting is a proven safety mechanism, and will support the UK Government’s ‘context-based approach’ to AI regulation by enabling it to:


1. Monitor how AI is causing safety risks in real-world contexts, providing a feedback loop that can allow course correction in how AI is regulated and deployed;


2. Coordinate responses to major incidents where speed is critical, followed by investigations into root causes to generate cross-sectoral learnings;


3. Identify early warnings of larger-scale harms that could arise in future, for use by the AI Safety Institute and Central AI Risk Function in risk assessments.


A critical gap


However, the UK’s regulation of AI currently lacks an effective incident reporting framework. If not addressed, DSIT will lack visibility of a range of incidents, including:


  • Incidents in highly capable foundation models, such as bias and discrimination or misaligned agents, which could cause widespread harm to individuals and societal functions;


  • Incidents from the UK Government’s own use of AI in public services, where failures in AI systems could directly harm the UK public, such as through improperly revoking access to benefits, creating miscarriages of justice, or incorrectly assessing students’ exams;


  • Incidents of misuse of AI systems, e.g. detected use in disinformation campaigns or biological weapon development, which may need urgent response to protect UK citizens;


  • Incidents of harm from AI companions, tutors and therapists, where deep levels of trust combined with extensive personal data could lead to abuse, manipulation, radicalisation, or dangerous advice, such as when an AI system encouraged a Belgian man to end his own life in 2023.


DSIT lacks a central, up-to-date picture of these types of incidents as they emerge. Though some regulators will collect some incident reports, we find that this coverage is unlikely to capture the novel harms posed by frontier AI.


DSIT should prioritise ensuring that the UK Government finds out about such novel harms not through the news, but through proven processes of incident reporting.


Recommended next steps for the UK Government


This is a gap that DSIT should urgently address. We recommend three immediate next steps:


1. Create a system for the UK Government to report incidents in its own use of AI in public services. This is low-hanging fruit that can help the government responsibly improve public services, and could involve simple steps such as expanding the Algorithmic Transparency Recording Standard (ATRS) to include a framework for reporting public sector AI incidents. These incidents could be fed directly to a government body, and possibly shared with the public for transparency and accountability.


2. Commission UK regulators and consult experts to confirm where there are the most concerning gaps. This is essential to ensure effective coverage of priority incidents, and for understanding the stakeholders and incentives required to establish a functional regime.


3. Build capacity within DSIT to monitor, investigate and respond to incidents, possibly including the creation of a pilot AI incident database. This could comprise part of DSIT’s ‘central function’, and begin the development of the policy and technical infrastructure for collecting and responding to AI incident reports. This should focus initially on the most urgent gap identified by stakeholders, but could eventually collect all reports from UK regulators.



If you're interested in discussing this work further, please reach out to the author, Tommy Shaffer Shane, at tommy@longtermresilience.org.

