TY - GEN
T1 - Towards Accountable and Resilient AI-Assisted Networks
T2 - Joint European Conference on Networks and Communications and 6G Summit, EuCNC/6G Summit 2024
AU - Wang, Shen
AU - Sandeepa, Chamara
AU - Senevirathna, Thulitha
AU - Siniarski, Bartlomiej
AU - Nguyen, Manh Dung
AU - Marchal, Samuel
AU - Liyanage, Madhusanka
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Artificial Intelligence (AI) will play a critical role in future networks, exploiting real-time data collection for optimized utilization of network resources. However, current AI solutions predominantly emphasize model performance enhancement, engendering substantial risk when AI encounters irregularities such as adversarial attacks or unknown misbehavior due to its "black-box" decision process. Consequently, AI-driven network solutions necessitate enhanced accountability to stakeholders and robust resilience against known AI threats. This paper introduces a high-level process, integrating Explainable AI (XAI) techniques and illustrating their application across three typical use cases: encrypted network traffic classification, malware detection, and federated learning. Unlike existing task-specific qualitative approaches, the proposed process incorporates a new set of metrics, measuring model performance, explainability, security, and privacy, thus enabling users to iteratively refine their AI network solutions. The paper also elucidates future research challenges we deem critical to the actualization of trustworthy, AI-empowered networks.
AB - Artificial Intelligence (AI) will play a critical role in future networks, exploiting real-time data collection for optimized utilization of network resources. However, current AI solutions predominantly emphasize model performance enhancement, engendering substantial risk when AI encounters irregularities such as adversarial attacks or unknown misbehavior due to its "black-box" decision process. Consequently, AI-driven network solutions necessitate enhanced accountability to stakeholders and robust resilience against known AI threats. This paper introduces a high-level process, integrating Explainable AI (XAI) techniques and illustrating their application across three typical use cases: encrypted network traffic classification, malware detection, and federated learning. Unlike existing task-specific qualitative approaches, the proposed process incorporates a new set of metrics, measuring model performance, explainability, security, and privacy, thus enabling users to iteratively refine their AI network solutions. The paper also elucidates future research challenges we deem critical to the actualization of trustworthy, AI-empowered networks.
KW - AI
KW - Explainability
KW - Federated Learning
KW - Malware
KW - Privacy
KW - Security
KW - Traffic Classification
UR - http://www.scopus.com/inward/record.url?scp=85199867679&partnerID=8YFLogxK
U2 - 10.1109/EuCNC/6GSummit60053.2024.10597060
DO - 10.1109/EuCNC/6GSummit60053.2024.10597060
M3 - Conference article in proceedings
AN - SCOPUS:85199867679
SN - 979-8-3503-4500-1
T3 - European Conference on Networks and Communications
SP - 818
EP - 823
BT - 2024 Joint European Conference on Networks and Communications and 6G Summit, EuCNC/6G Summit 2024
PB - IEEE (Institute of Electrical and Electronics Engineers)
Y2 - 3 June 2024 through 6 June 2024
ER -