Safety of Human–Artificial Intelligence Systems: Applying Safety Science to Analyze Loopholes in Interactions between Human Organizations, Artificial Intelligence, and Individual People

Stephen Fox (Corresponding Author), Juan G. Victores

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

Loopholes involve misalignments between rules about what should be done and what is actually done in practice. The focus of this paper is loopholes in interactions between human organizations’ implementations of task-specific artificial intelligence and individual people. The importance of identifying and addressing loopholes is recognized in safety science and in applications of AI. Here, an examination is provided of loophole sources in interactions between human organizations and individual people. Then, it is explained how task-specific AI applications can add new sources of loopholes. Next, an analytical framework, which is well-established in safety science, is applied to analyses of loopholes in interactions between human organizations, artificial intelligence, and individual people. The example used in the analysis is human–artificial intelligence systems in gig economy delivery driving work.
Original language: English
Article number: 36
Journal: Informatics
Volume: 11
Issue number: 2
DOIs
Publication status: Published - 29 May 2024
MoE publication type: A1 Journal article - refereed

Funding

This research was funded by the European Union (EU) Horizon 2020 project ALMA grant number 952091.
