Open Access

| Issue | ITM Web Conf., Volume 78, 2025 — International Conference on Computer Science and Electronic Information Technology (CSEIT 2025) |
|---|---|
| Article Number | 01014 |
| Number of page(s) | 9 |
| Section | Deep Learning and Reinforcement Learning – Theories and Applications |
| DOI | https://doi.org/10.1051/itmconf/20257801014 |
| Published online | 08 September 2025 |

