Open Access
ITM Web Conf., Volume 84, 2026
2026 International Conference on Advent Trends in Computational Intelligence and Data Science (ATCIDS 2026)
Article Number: 03006
Number of pages: 9
Section: Large Language Models, Generative AI, and Multimodal Learning
DOI: https://doi.org/10.1051/itmconf/20268403006
Published online: 06 April 2026