Large Language Model hallucination mitigation in three industrial use cases

Research output: Contribution to journal › Article › Scientific › peer-reviewed

Abstract

Large language models (LLMs) can produce factually false or ungrounded information. This occurs when the language model, being fundamentally a statistical model, generates inaccurate or non-existent information with high confidence in response to a given query. This phenomenon is particularly dangerous in safety-critical systems and involves risks when LLMs are used to access or control hardware such as robots, sensors, and other internet-of-things (IoT) applications. Additionally, in automation-rich industrial environments, effective human-machine cooperation depends on maintaining a shared and adaptive understanding of complex situations. Distributed Situation Awareness (DSA) provides a framework for how awareness emerges collectively across networks of operators, robots, sensors, digital interfaces, and LLM-based artificially intelligent systems. LLMs can fuse fragmented data streams into coherent, actionable context, enabling Extended Reality (XR) technologies to strengthen DSA by embedding digital cues into physical workflows. While this process improves coordination and adaptability, it also makes reliable and verifiable model outputs essential, as hallucinations can erode operator trust and compromise distributed decision-making. Mitigating hallucinations is therefore essential for sustaining stable human-AI teaming. We present three industrial projects in which LLMs are at the forefront, highlighting practical approaches to hallucination mitigation and demonstrating a transition from online to offline models. The scope is to demonstrate, through architectural means, how established mitigation mechanisms can be used in industrial systems.

Original language: English
Pages (from-to): 25564-25576
Number of pages: 13
Journal: IEEE Access
Volume: 14
DOIs
Publication status: Published - 2026
MoE publication type: A1 Journal article-refereed

Funding

This work was supported in part by Business Finland under Grant 5785/31/2023 and in part by VTT Technical Research Centre Ltd. The authors thank Stephen Fox for advice regarding the publication. GPT-4.1 and Qwen2.5-32B were used to generate the material presented in Appendix B in response to the query presented in Appendix A.

UN SDGs

This output contributes to the following UN Sustainable Development Goals (SDGs)

  1. SDG 9 - Industry, Innovation, and Infrastructure

Keywords

  • Distributed Situation Awareness
  • Hallucinations
  • Large Language Model

