Enhancing Accountability, Resilience, and Privacy of Intelligent Networks with XAI

  • Thulitha Senevirathna*
  • Chamara Sandeepa
  • Bartlomiej Siniarski
  • Manh Dung Nguyen
  • Samuel Marchal
  • Michell Boerger
  • Madhusanka Liyanage
  • Shen Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

1 Citation (Scopus)

Abstract

In the rapidly evolving landscape of networking and security, the adoption of artificial intelligence (AI) is accelerating to meet the demands of real-time, data-driven applications. Current AI development processes predominantly prioritize model utility metrics such as accuracy, precision, and recall, often overlooking critical trustworthiness aspects like accountability, resilience to adversarial attacks, and privacy. To address this gap, we propose a novel AI/Machine Learning (ML) development process that systematically integrates trustworthiness metrics alongside traditional model utility measures. Our process emphasizes the iterative development of trustworthy AI models by balancing performance, accountability, resilience, and privacy through the incorporation of eXplainable AI (XAI) techniques. We validate the effectiveness of our methodology across four distinct networking and security use cases. In encrypted traffic classification, LightGBM emerges as the most practical model, offering a strong balance of utility, accountability, and robustness despite Neural Networks achieving the highest raw performance. For malware detection, feature reduction in the MalDoc model yields a minimal utility loss (<0.7%) while substantially enhancing resilience to evasion attacks (10–80%). In assessing privacy trade-offs in Federated Learning, we observe that although strong differential privacy significantly degrades utility (up to 70% on MNIST), it enables early-stage privacy protection without fully masking poisoned client behaviour, which remains detectable through SHAP and t-SNE-based analysis. Lastly, in a smart healthcare emergency e-call scenario, our 1D CNN model achieves not only strong predictive performance (96.21% accuracy, 91.55% precision, 93.13% recall) but also provides stable and interpretable explanations using LRP and SHAP, with LRP demonstrating higher consistency across ECG segments. 
Therefore, unlike prior studies that focus on isolated aspects such as accountability or resilience, our work proposes a holistic, quantifiable process that balances the trade-offs among model utility, accountability, resilience, and privacy to support the development of trustworthy AI models in communication systems.
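The abstract notes that poisoned client behaviour in Federated Learning "remains detectable through SHAP and t-SNE-based analysis". As a minimal illustrative sketch (not the paper's actual pipeline), the idea of projecting per-client model updates with t-SNE to surface an anomalous client can be shown on synthetic data; the client count, update dimensionality, and poisoning shift below are all assumptions for illustration.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Hypothetical per-client model updates: 9 benign clients and
# 1 poisoned client whose update distribution is shifted.
benign = rng.normal(loc=0.0, scale=1.0, size=(9, 50))
poisoned = rng.normal(loc=5.0, scale=1.0, size=(1, 50))
updates = np.vstack([benign, poisoned])  # shape (10, 50)

# Project the high-dimensional updates to 2-D for visual inspection.
emb = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(updates)

# In a plot, the poisoned client would typically appear as an outlier
# far from the benign cluster; here we flag the point furthest from
# the benign centroid as a simple numeric proxy for that inspection.
centroid = emb[:9].mean(axis=0)
dists = np.linalg.norm(emb - centroid, axis=1)
suspect = int(dists.argmax())
print("most anomalous client index:", suspect)
```

In practice such a projection would be paired with per-feature SHAP attributions on the aggregated model, as the abstract describes, rather than used alone.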

Original language: English
Pages (from-to): 8389-8409
Number of pages: 21
Journal: IEEE Open Journal of the Communications Society
Volume: 6
DOIs
Publication status: Published - 2025
MoE publication type: A1 Journal article-refereed

Funding

This work is partly supported by the European Union under the SPATIAL project (Grant ID 101021808) and the AI4CYBER project (Grant ID 101070450), and by Science Foundation Ireland under the CONNECT phase 2 project (Grant no. 13/RC/2077 P2).

Keywords

  • AI
  • Explainability
  • Federated Learning
  • Malware
  • Privacy
  • Security
  • Smart Healthcare
  • Traffic Classification
