Open Access

ITM Web Conf.
Volume 84, 2026
2026 International Conference on Advent Trends in Computational Intelligence and Data Science (ATCIDS 2026)

| | |
|---|---|
| Article Number | 04008 |
| Number of page(s) | 9 |
| Section | Computer Vision, Robotic Systems, and Intelligent Control |
| DOI | https://doi.org/10.1051/itmconf/20268404008 |
| Published online | 06 April 2026 |

