This summer I was one of the lucky few to get a sneak preview of researcher Sean McGregor's work on incidents caused by artificial intelligence systems.
Because of a book I'm helping to write (I'll tell you about it next year), I needed access to a database collecting incidents – understood broadly as failures, errors, and aberrant results – caused by artificial intelligence algorithms over the years. Through a number of researchers I came across the project of Sean, who works for the XPRIZE Foundation and the Partnership on AI, an organisation set up by the world's leading technology companies to promote AI development and research.
The AI Incident Database (AIID), now available to all, contains over a thousand articles and references to incidents caused by automated systems: from the 'flash crash' that shocked the markets in 2010 to the killing of a pedestrian by a self-driving car in 2018, from the many facial recognition errors around the world to the multiple 'rebellions' of chatbots that refuse to behave as they should.
This work is inspired, as the author explains, by similar databases that already exist in other sectors, such as aviation and cybersecurity, and tries to combine their strengths. Obviously, the aim is not to hinder the unstoppable march of artificial intelligence, but to support its positive development by highlighting and cataloguing problems so that research can improve.
For further information, I recommend the author's own introductory article: When AI Systems Fail: Introducing the AI Incident Database.
The research paper can be found here: Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database.