TY - GEN
T1 - The SPATIAL Architecture
T2 - 44th IEEE International Conference on Distributed Computing Systems, ICDCS 2024
AU - Ottun, Abdul Rasheed
AU - Marasinghe, Rasinthe
AU - Elemosho, Toluwani
AU - Liyanage, Mohan
AU - Ragab, Mohamad
AU - Bagave, Prachi
AU - Westberg, Marcus
AU - Asadi, Mehrdad
AU - Boerger, Michell
AU - Sandeepa, Chamara
AU - Senevirathna, Thulitha
AU - Siniarski, Bartlomiej
AU - Liyanage, Madhusanka
AU - La, Vinh Hoa
AU - Nguyen, Manh Dung
AU - De Oca, Edgardo Montes
AU - Oomen, Tessa
AU - Gonçalves, João Fernando Ferreira
AU - Tanasković, Illija
AU - Klopanovic, Sasa
AU - Kourtellis, Nicolas
AU - Soriente, Claudio
AU - Pridmore, Jason
AU - Cavalli, Ana Rosa
AU - Draskovic, Drasko
AU - Marchal, Samuel
AU - Wang, Shen
AU - Noguero, David Solans
AU - Tcholtchev, Nikolay
AU - Ding, Aaron Yi
AU - Flores, Huber
PY - 2024
Y1 - 2024
N2 - Despite its enormous economic and societal impact, a lack of human-perceived control and safety is redefining the design and development of emerging AI-based technologies. New regulatory requirements mandate increased human control and oversight of AI, transforming the development practices and responsibilities of individuals interacting with AI. In this paper, we present the SPATIAL architecture, a system that augments modern applications with capabilities to gauge and monitor trustworthy properties of AI inference capabilities. To design SPATIAL, we first explore the evolution of modern system architectures and how AI components and pipelines are integrated. With this information, we then develop a proof-of-concept architecture that analyzes AI models in a human-in-the-loop manner. SPATIAL provides an AI dashboard that allows individuals interacting with applications to obtain quantifiable insights about the AI decision process. This information is then used by human operators to comprehend possible issues that influence the performance of AI models and to adjust or counter them. Through rigorous benchmarks and experiments in real-world industrial applications, we demonstrate that SPATIAL can easily augment modern applications with metrics to gauge and monitor trustworthiness; however, this in turn increases the complexity of developing and maintaining systems implementing AI. Our work highlights lessons learned and experiences from augmenting modern applications with mechanisms that support regulatory compliance of AI. In addition, we present a roadmap of ongoing challenges that require attention to achieve robust trustworthiness analysis of AI and greater engagement of human oversight.
KW - Accountability
KW - AI Act
KW - Human Oversight
KW - Industrial Use Cases
KW - Practical Trustworthiness
KW - Resilience
KW - Trustworthy AI
UR - http://www.scopus.com/inward/record.url?scp=85203142463&partnerID=8YFLogxK
U2 - 10.1109/ICDCS60910.2024.00092
DO - 10.1109/ICDCS60910.2024.00092
M3 - Conference article in proceedings
AN - SCOPUS:85203142463
SN - 979-8-3503-8606-6
SP - 947
EP - 959
BT - 2024 IEEE 44th International Conference on Distributed Computing Systems (ICDCS)
PB - IEEE Institute of Electrical and Electronics Engineers
Y2 - 23 July 2024 through 26 July 2024
ER -