ITM Web Conf.
Volume 56, 2023
First International Conference on Data Science and Advanced Computing (ICDSAC 2023)
Number of page(s): 14
Section: Language & Image Processing
Published online: 09 August 2023
Arabic Grammatical Error Detection Using Transformers-based Pretrained Language Models
1 Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
2 Computer Science Division, Department of Mathematics, Faculty of Science, Ain Shams University, Cairo, Egypt
* Corresponding author: firstname.lastname@example.org
This paper presents a new study on using transformer-based pre-trained language models for Arabic grammatical error detection (GED). We propose fine-tuned models built on the pre-trained language models AraBERT and M-BERT to perform Arabic GED with two approaches: at the token level and at the sentence level. Fine-tuning was carried out on several publicly available Arabic datasets. The proposed models outperform similar studies, with an F1 of 0.87, recall of 0.90, and precision of 0.83 at the token level, and an F1 of 0.98, recall of 0.99, and precision of 0.97 at the sentence level, whereas other studies in the same field (i.e., GED) report lower results (e.g., an F0.5 of 69.21). Moreover, the study shows that fine-tuned models built on a monolingual pre-trained language model outperform those built on a multilingual one for Arabic.
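To make the two approaches concrete, the following is a minimal sketch of how such fine-tuning heads could be set up with the Hugging Face transformers library. It is not the authors' exact configuration: the abstract does not specify hyperparameters, datasets, or label schemes, so the model IDs, the binary labels, and the sample sentence below are illustrative assumptions only.

```python
# Minimal sketch (not the authors' exact setup): Arabic GED with a
# pre-trained language model at two granularities, as described in the
# abstract. Requires: pip install transformers torch
from transformers import (
    AutoTokenizer,
    AutoModelForTokenClassification,     # token level: tag each token
    AutoModelForSequenceClassification,  # sentence level: classify whole sentence
)

# Monolingual AraBERT; swap in "bert-base-multilingual-cased" for M-BERT.
MODEL_ID = "aubmindlab/bert-base-arabertv02"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Token-level GED as binary tagging (assumed labels: 0 = correct, 1 = erroneous).
token_model = AutoModelForTokenClassification.from_pretrained(MODEL_ID, num_labels=2)

# Sentence-level GED as binary classification (0 = correct, 1 = contains an error).
sent_model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=2)

# Illustrative Arabic input ("an Arabic sentence for testing").
inputs = tokenizer("جملة عربية للتجربة", return_tensors="pt")
print(token_model(**inputs).logits.shape)  # (1, sequence_length, 2)
print(sent_model(**inputs).logits.shape)   # (1, 2)
```

Both heads would then be fine-tuned with standard cross-entropy training on labeled GED data; the monolingual versus multilingual comparison in the abstract amounts to repeating this setup with the two MODEL_ID values above.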
© The Authors, published by EDP Sciences, 2023
This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.