Abstract
Loopholes involve misalignments between rules about what should be done and what is actually done in practice. The focus of this paper is loopholes in interactions between individual people and human organizations’ implementations of task-specific artificial intelligence. The importance of identifying and addressing loopholes is recognized in safety science and in applications of AI. First, sources of loopholes in interactions between human organizations and individual people are examined. Then, it is explained how deploying task-specific AI applications can create new sources of loopholes. Next, an analytical framework that is well established in safety science is applied to analyses of loopholes in interactions between human organizations, artificial intelligence, and individual people. The example used in the analysis is human–artificial intelligence systems in gig economy delivery driving work.
| Original language | English |
| --- | --- |
| Article number | 36 |
| Journal | Informatics |
| Volume | 11 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - 29 May 2024 |
| MoE publication type | A1 Journal article-refereed |
Funding
This research was funded by the European Union (EU) Horizon 2020 project ALMA, grant number 952091.
Keywords
- algebraic machine learning
- driving
- gig economy
- human–artificial intelligence (HAI) systems
- loopholes
- narrow AI
- quality management systems
- Swiss Cheese Model
- Theory of Active and Latent Failures