Abstract
Artificial Intelligence (AI) will play a critical role in future networks, exploiting real-time data collection for optimized utilization of network resources. However, current AI solutions predominantly emphasize model performance enhancement, engendering substantial risk when AI encounters irregularities such as adversarial attacks or unknown misbehaviours due to its "black-box" decision process. Consequently, AI-driven network solutions necessitate enhanced accountability to stakeholders and robust resilience against known AI threats. This paper introduces a high-level process, integrating Explainable AI (XAI) techniques and illustrating their application across three typical use cases: encrypted network traffic classification, malware detection, and federated learning. Unlike existing task-specific qualitative approaches, the proposed process incorporates a new set of metrics, measuring model performance, explainability, security, and privacy, thus enabling users to iteratively refine their AI network solutions. The paper also elucidates future research challenges we deem critical to the actualization of trustworthy, AI-empowered networks.
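As a rough illustration of the kind of model-agnostic XAI technique the abstract refers to, the sketch below computes permutation feature importance for a toy encrypted-traffic classifier: one feature column is shuffled and the resulting accuracy drop is taken as that feature's relevance. The synthetic features (`mean_pkt_size`, `inter_arrival`, `duration`), the threshold classifier, and all numbers are illustrative assumptions, not the paper's actual method or metrics.

```python
import random

random.seed(0)

# Synthetic "encrypted traffic" flows: [mean packet size, inter-arrival time,
# flow duration] with a label (0 = benign, 1 = suspicious). Purely illustrative.
def make_sample():
    label = random.randint(0, 1)
    pkt = random.gauss(900 if label else 400, 50)  # suspicious flows: larger packets
    iat = random.gauss(0.05, 0.01)                 # uninformative by construction
    dur = random.gauss(30 if label else 20, 5)     # weakly informative
    return [pkt, iat, dur], label

data = [make_sample() for _ in range(500)]

def classify(x):
    # Hypothetical threshold model standing in for a trained black-box classifier.
    return 1 if x[0] > 650 else 0

def accuracy(samples):
    return sum(classify(x) == y for x, y in samples) / len(samples)

def permutation_importance(samples, feature_idx):
    # Shuffle one feature across samples; the accuracy drop is a post-hoc,
    # model-agnostic estimate of how much the model relies on that feature.
    vals = [x[feature_idx] for x, _ in samples]
    random.shuffle(vals)
    permuted = [(x[:feature_idx] + [v] + x[feature_idx + 1:], y)
                for (x, y), v in zip(samples, vals)]
    return accuracy(samples) - accuracy(permuted)

for i, name in enumerate(["mean_pkt_size", "inter_arrival", "duration"]):
    print(f"{name}: importance = {permutation_importance(data, i):.3f}")
```

Because the toy classifier only consults the packet-size feature, shuffling the other columns leaves accuracy unchanged, so only `mean_pkt_size` receives a positive score; in the paper's setting, such scores would feed the quantitative explainability metrics rather than a fixed threshold model.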
Original language | English |
---|---|
Title of host publication | 2024 Joint European Conference on Networks and Communications and 6G Summit, EuCNC/6G Summit 2024 |
Publisher | IEEE, Institute of Electrical and Electronics Engineers |
Pages | 818-823 |
ISBN (Electronic) | 979-8-3503-4499-8 |
ISBN (Print) | 979-8-3503-4500-1 |
DOIs | |
Publication status | Published - 2024 |
MoE publication type | A4 Article in a conference publication |
Event | Joint European Conference on Networks and Communications and 6G Summit, EuCNC/6G Summit 2024 - Antwerp, Belgium. Duration: 3 Jun 2024 → 6 Jun 2024 |
Publication series
Series | European Conference on Networks and Communications |
---|---|
Volume | 2024 |
ISSN | 2475-6490 |
Conference
Conference | Joint European Conference on Networks and Communications and 6G Summit, EuCNC/6G Summit 2024 |
---|---|
Country/Territory | Belgium |
City | Antwerp |
Period | 3/06/24 → 6/06/24 |
Funding
This work is partly supported by the European Union under the SPATIAL project (Grant ID 101021808) and the AI4CYBER project (Grant ID 101070450), and by Science Foundation Ireland under the CONNECT Phase 2 project (Grant No. 13/RC/2077 P2) and a Fellowship (Grant No. 21/IRDIF/9839).
Keywords
- AI
- Explainability
- Federated Learning
- Malware
- Privacy
- Security
- Traffic Classification