Open Access

ITM Web Conf., Volume 78, 2025
International Conference on Computer Science and Electronic Information Technology (CSEIT 2025)

| Item | Value |
|---|---|
| Article Number | 04018 |
| Number of page(s) | 8 |
| Section | Foundations and Frontiers in Multimodal AI, Large Models, and Generative Technologies |
| DOI | https://doi.org/10.1051/itmconf/20257804018 |
| Published online | 08 September 2025 |

