A Cognitive Load Theory (CLT) Analysis of Machine Learning Explainability, Transparency, Interpretability, and Shared Interpretability

Stephen Fox* (Corresponding Author), Vitor Fortes Rey


Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

Information that is complicated and ambiguous entails high cognitive load. Trying to understand such information can involve a lot of cognitive effort. An alternative to expending that effort is to engage in motivated cognition, which can involve selective attention to new information that matches existing beliefs. In accordance with principles of least action related to the management of cognitive effort, another alternative is to give up trying to understand new information that carries high cognitive load. In either case, high cognitive load can limit the potential for understanding new information and learning from it. Cognitive Load Theory (CLT) provides a framework for relating the characteristics of information to human cognitive load. Although CLT has been developed through more than three decades of scientific research, it has not been applied comprehensively to improve the explainability, transparency, interpretability, and shared interpretability (ETISI) of machine learning models and their outputs. Here, in order to illustrate the broad relevance of CLT to ETISI, it is applied to analyze a type of hybrid machine learning called Algebraic Machine Learning (AML). AML is used as the example because it has characteristics that offer high potential for ETISI. However, application of CLT reveals potential for high cognitive load that can limit ETISI even when AML is used in conjunction with decision trees. Following the AML example, the general relevance of CLT to machine learning ETISI is discussed with the examples of SHapley Additive exPlanations (SHAP), Local Interpretable Model-Agnostic Explanations (LIME), and the Contextual Importance and Utility (CIU) method. Overall, it is argued in this Perspective paper that CLT can provide science-based design principles that can contribute to improving the ETISI of all types of machine learning.

Original language: English
Pages (from-to): 1494-1509
Number of pages: 16
Journal: Machine Learning and Knowledge Extraction
Volume: 6
Issue number: 3
DOIs
Publication status: Published - Sept 2024
MoE publication type: A1 Journal article-refereed

Keywords

  • agreeable AI
  • algebraic machine learning
  • CIU
  • cognitive load theory
  • decision trees
  • explainability
  • interpretability
  • LIME
  • SHAP
  • shared interpretability
  • transparency
  • world models
