Open Access
ITM Web Conf.
Volume 80, 2025
2025 2nd International Conference on Advanced Computer Applications and Artificial Intelligence (ACAAI 2025)
Article Number: 02005
Number of pages: 5
Section: Reinforcement Learning, Bandits & Optimization
DOI: https://doi.org/10.1051/itmconf/20258002005
Published online: 16 December 2025