| Issue | ITM Web Conf., Volume 84, 2026: 2026 International Conference on Advent Trends in Computational Intelligence and Data Science (ATCIDS 2026) |
|---|---|
| Article Number | 03003 |
| Number of page(s) | 12 |
| Section | Large Language Models, Generative AI, and Multimodal Learning |
| DOI | https://doi.org/10.1051/itmconf/20268403003 |
| Published online | 06 April 2026 |
In-Context Learning in Large Language Models: Mechanisms, Challenges, and Frontiers
School of Telecommunications Engineering, Xi’an Jiaotong University, 710049 Xi’an, China
* Corresponding author
Abstract
In recent years, large language models (LLMs) have shown the ability to learn directly from examples embedded in their input, a process known as in-context learning (ICL). This approach enables a model to interpret examples provided in context and generalize to new tasks without modifying its parameters, indicating that learning can take place entirely at inference time. The phenomenon was first observed in GPT-3 and has since become central to understanding reasoning in large models. Studies describe ICL as implicit Bayesian inference, as internal simulation of learning algorithms, or as the operation of induction heads within attention circuits. Recent work extends these perspectives: The Implicit Dynamics of ICL links activation updates to posterior inference, In-Context Learning with Long-Context Models explores many-shot scaling, and VL-ICL Bench evaluates multimodal adaptation. Taken together, these studies indicate that ICL combines statistical inference, algorithmic approximation, and mechanistic emergence. However, challenges persist in theoretical integration, reproducibility, and robustness. This article reviews the theoretical frameworks, empirical results, and latest research progress on ICL through 2025. The aim is to clarify its internal mechanisms, identify current limitations, and outline future directions, in order to promote a more systematic and coherent understanding of ICL.
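To make the setting concrete, the sketch below assembles a few-shot ICL prompt for a toy sentiment task: labeled demonstrations are placed directly in the model's input, and no parameters are updated. The task, the example texts, and the `build_icl_prompt` helper are illustrative assumptions, not material from the article; the actual call to a frozen LLM is left abstract.

```python
# Minimal sketch of in-context learning (ICL): the "learning" happens entirely
# at inference time, by embedding labeled demonstrations in the prompt.
# No model parameters are updated anywhere in this process.

def build_icl_prompt(demonstrations, query):
    """Concatenate (text, label) demonstrations, then append the unlabeled query."""
    lines = []
    for text, label in demonstrations:
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n".join(lines)

# Two in-context demonstrations (hypothetical toy data).
demos = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]

prompt = build_icl_prompt(demos, "A quietly moving film with superb acting.")
print(prompt)
# A frozen LLM completing this prompt would be expected to output "positive",
# inferring the task format and label space purely from the in-context examples.
```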
© The Authors, published by EDP Sciences, 2026
This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.