| Issue | ITM Web Conf., Volume 80, 2025: 2025 2nd International Conference on Advanced Computer Applications and Artificial Intelligence (ACAAI 2025) |
|---|---|
| Article Number | 01035 |
| Number of page(s) | 7 |
| Section | Machine Learning & Deep Learning Algorithms |
| DOI | https://doi.org/10.1051/itmconf/20258001035 |
| Published online | 16 December 2025 |
Classification and solution of large language model hallucination
College of Electronic and Information Engineering, Tongji University, China
Large language models (LLMs) offer significant benefits across various fields, enhancing research efficiency and innovation. A critical challenge, however, is their tendency to produce "hallucinations": content that appears fluent and reasonable but is factually inaccurate or nonsensical. This phenomenon can undermine content credibility and lead to erroneous decisions in critical areas such as academic research, medical diagnosis, and news dissemination. This article provides a comprehensive overview of LLM hallucinations. It begins by defining LLMs as deep learning models trained on massive text datasets, typically built on the Transformer architecture and developed through stages such as pre-training, supervised fine-tuning (SFT), and Reinforcement Learning from Human Feedback (RLHF). The paper then defines and classifies hallucinations into two main types: internal (factual) hallucinations, where the output contradicts objective facts, and external (context-inconsistent) hallucinations, where the output is logically inconsistent with the given prompt or conversational context. Furthermore, the article explores detection methods based on output content analysis, which check for features such as semantic consistency and factual ambiguity. Finally, it reviews mitigation mechanisms, discussing strategies such as training on high-quality data and SFT to address factual hallucinations, and targeted instruction fine-tuning and context-aware decoding to address context-inconsistent hallucinations, thereby improving the overall reliability of LLMs.
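As a concrete illustration of the output-content-analysis idea mentioned in the abstract, the sketch below implements a sampling-based semantic-consistency check in the spirit of self-consistency detectors such as SelfCheckGPT: the same prompt is answered several times at non-zero temperature, and low agreement across the samples is treated as a hallucination signal. This is a minimal sketch, not the paper's method; the `generate` callable, the Jaccard token-overlap similarity, and the 0.6 threshold are all illustrative assumptions.

```python
import itertools
from typing import Callable, List


def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two answers (crude proxy for semantic agreement)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)


def consistency_score(generate: Callable[[str], str], prompt: str, n_samples: int = 5) -> float:
    """Sample several answers to the same prompt and return their mean pairwise similarity.

    Low cross-sample agreement suggests the model is not anchored to a fact,
    which is one observable signature of a factual hallucination.
    """
    if n_samples < 2:
        raise ValueError("need at least two samples to compare")
    answers: List[str] = [generate(prompt) for _ in range(n_samples)]
    pairs = list(itertools.combinations(answers, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)


if __name__ == "__main__":
    import random

    # `fake_generate` is a hypothetical stand-in for a real sampled LLM call
    # (temperature > 0); it deliberately includes one inconsistent answer.
    def fake_generate(prompt: str) -> str:
        return random.choice([
            "Tongji University is located in Shanghai.",
            "Tongji University is located in Shanghai, China.",
            "Tongji University is located in Beijing.",
        ])

    score = consistency_score(fake_generate, "Where is Tongji University located?")
    print(f"consistency = {score:.2f}")
    if score < 0.6:  # illustrative threshold, not taken from the paper
        print("Low cross-sample agreement: possible hallucination, flag for review.")
```

In practice the token-overlap measure would be replaced with a stronger semantic comparison (for example, sentence embeddings or an NLI model), but the control flow, sample, compare, threshold, is the same.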
© The Authors, published by EDP Sciences, 2025
This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

