TY - JOUR
T1 - A Survey on XAI for 5G and Beyond Security
T2 - Technical Aspects, Challenges and Research Directions
AU - Senevirathna, Thulitha
AU - La, Vinh Hoa
AU - Marchal, Samuel
AU - Siniarski, Bartlomiej
AU - Liyanage, Madhusanka
AU - Wang, Shen
N1 - Publisher Copyright:
IEEE
PY - 2024/8
Y1 - 2024/8
N2 - With the advent of 5G commercialization, the need for more reliable, faster, and more intelligent telecommunication systems is envisaged for the next generation of beyond-5G (B5G) radio access technologies. Artificial Intelligence (AI) and Machine Learning (ML) are immensely popular in service layer applications and have been proposed as essential enablers in many aspects of 5G and beyond networks, from IoT devices and edge computing to cloud-based infrastructures. However, existing surveys on ML-based 5G security tend to emphasize AI/ML model performance and accuracy over the models’ accountability and trustworthiness. In contrast, this paper explores the potential of Explainable AI (XAI) methods, which allow stakeholders in 5G and beyond to inspect the intelligent black-box systems used to secure next-generation networks. The goal of using XAI in the security domain of 5G and beyond is to make the decision-making processes of ML-based security systems transparent and comprehensible to 5G and beyond stakeholders, thereby holding these systems accountable for their automated actions. This survey emphasizes the role of XAI in every facet of the forthcoming B5G era, including B5G technologies such as ORAN, zero-touch network management, and end-to-end slicing, whose benefits general users will ultimately enjoy. Furthermore, we present lessons learned from recent efforts and outline future research directions, building on currently ongoing projects involving XAI.
AB - With the advent of 5G commercialization, the need for more reliable, faster, and more intelligent telecommunication systems is envisaged for the next generation of beyond-5G (B5G) radio access technologies. Artificial Intelligence (AI) and Machine Learning (ML) are immensely popular in service layer applications and have been proposed as essential enablers in many aspects of 5G and beyond networks, from IoT devices and edge computing to cloud-based infrastructures. However, existing surveys on ML-based 5G security tend to emphasize AI/ML model performance and accuracy over the models’ accountability and trustworthiness. In contrast, this paper explores the potential of Explainable AI (XAI) methods, which allow stakeholders in 5G and beyond to inspect the intelligent black-box systems used to secure next-generation networks. The goal of using XAI in the security domain of 5G and beyond is to make the decision-making processes of ML-based security systems transparent and comprehensible to 5G and beyond stakeholders, thereby holding these systems accountable for their automated actions. This survey emphasizes the role of XAI in every facet of the forthcoming B5G era, including B5G technologies such as ORAN, zero-touch network management, and end-to-end slicing, whose benefits general users will ultimately enjoy. Furthermore, we present lessons learned from recent efforts and outline future research directions, building on currently ongoing projects involving XAI.
KW - 5G
KW - 6G mobile communication
KW - Accountability
KW - AI security
KW - B5G
KW - cyber-security
KW - Explainable security
KW - Trustworthy AI
KW - XAI
UR - http://www.scopus.com/inward/record.url?scp=85200232321&partnerID=8YFLogxK
U2 - 10.1109/COMST.2024.3437248
DO - 10.1109/COMST.2024.3437248
M3 - Article
AN - SCOPUS:85200232321
SN - 1553-877X
SP - 1
JO - IEEE Communications Surveys and Tutorials
JF - IEEE Communications Surveys and Tutorials
ER -