Open Access
ITM Web Conf.
Volume 80, 2025
2025 2nd International Conference on Advanced Computer Applications and Artificial Intelligence (ACAAI 2025)
Article Number 01035
Number of page(s) 7
Section Machine Learning & Deep Learning Algorithms
DOI https://doi.org/10.1051/itmconf/20258001035
Published online 16 December 2025