ITM Web Conf., Volume 78, 2025
International Conference on Computer Science and Electronic Information Technology (CSEIT 2025)
Article Number: 04019
Number of pages: 10
Section: Foundations and Frontiers in Multimodal AI, Large Models, and Generative Technologies
DOI: https://doi.org/10.1051/itmconf/20257804019
Published online: 08 September 2025
Open Access
