Open Access
ITM Web Conf., Volume 65, 2024
International Conference on Multidisciplinary Approach in Engineering, Technology and Management for Sustainable Development: A Roadmap for Viksit Bharat @ 2047 (ICMAETM-24)
Article Number: 03003
Number of pages: 12
Section: Computer Engineering and Information Technology
DOI: https://doi.org/10.1051/itmconf/20246503003
Published online: 16 July 2024
1. Alikaniotis, D., Yannakoudakis, H., Rei, M.: Automatic text scoring using neural networks. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Berlin, Germany, pp. 715–725. Association for Computational Linguistics, August 2016. https://doi.org/10.18653/v1/P16-1068. https://www.aclweb.org/anthology/P16-1068
2. Angelov, P., Sperduti, A.: Challenges in deep learning. In: ESANN (2016)
3. Basu, S., Jacobs, C., Vanderwende, L.: Powergrading: a clustering approach to amplify human effort for short answer grading. Trans. Assoc. Comput. Linguist. 1, 391–402 (2013)
4. Beltagy, I., Peters, M.E., Cohan, A.: Longformer: the long-document transformer. arXiv preprint arXiv:2004.05150 (2020)
5. Benesty, J., Chen, J., Huang, Y., Cohen, I.: Pearson correlation coefficient. In: Benesty, J., Chen, J., Huang, Y., Cohen, I. (eds.) Noise Reduction in Speech Processing. STSP, vol. 2, pp. 1–4. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-00296-0_5
6. Brenner, H., Kliebsch, U.: Dependence of weighted kappa coefficients on the number of categories. Epidemiology 199–202 (1996)
7. Brown, T.B., et al.: Language models are few-shot learners. arXiv preprint arXiv:2005.14165 (2020)
8. Burrows, S., Gurevych, I., Stein, B.: The eras and trends of automatic short answer grading. Int. J. Artif. Intell. Educ. 25(1), 60–117 (2015)
9. Camus, L., Filighera, A.: Investigating transformers for automatic short answer grading. In: Bittencourt, I.I., Cukurova, M., Muldner, K., Luckin, R., Millán, E. (eds.) AIED 2020. LNCS (LNAI), vol. 12164, pp. 43–48. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-52240-7_8
10. Conneau, A., Kiela, D., Schwenk, H., Barrault, L., Bordes, A.: Supervised learning of universal sentence representations from natural language inference data. arXiv preprint arXiv:1705.02364 (2017)
11. Dai, Z., Yang, Z., Yang, Y., Carbonell, J., Le, Q.V., Salakhutdinov, R.: Transformer-XL: attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860 (2019)
12. Dwivedi, C.: A study of selected-response type assessment (MCQ) and essay type assessment methods for engineering students. J. Eng. Educ. Transform. 32(3), 91–95 (2019)
13. Dzikovska, M.O., et al.: SemEval-2013 task 7: the joint student response analysis and 8th recognizing textual entailment challenge. In: Second Joint Conference on Lexical and Computational Semantics (*SEM): Seventh International Workshop on Semantic Evaluation (SemEval 2013), vol. 2. Association for Computational Linguistics (2013)
14. Gomaa, W.H., Fahmy, A.A.: Ans2vec: a scoring system for short answers. In: Hassanien, A.E., Azar, A.T., Gaber, T., Bhatnagar, R., F. Tolba, M. (eds.) AMLTA 2019. AISC, vol. 921, pp. 586–595. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-14118-9_59
15. Gong, T., Yao, X.: An attention-based deep model for automatic short answer score. Int. J. Comput. Sci. Softw. Eng. 8(6), 127–132 (2019)
16. Grandini, M., Bagli, E., Visani, G.: Metrics for multi-class classification: an overview. arXiv preprint arXiv:2008.05756 (2020)
17. Guerra, L., Zhuang, B., Reid, I., Drummond, T.: Automatic pruning for quantized neural networks. arXiv preprint arXiv:2002.00523 (2020)
18. Hasanah, U., Permanasari, A.E., Kusumawardani, S.S., Pribadi, F.S.: A review of an information extraction technique approach for automatic short answer grading. In: 2016 1st International Conference on Information Technology, Information Systems and Electrical Engineering (ICITISEE), pp. 192–196. IEEE (2016)
19. Hyndman, R.J., Koehler, A.B.: Another look at measures of forecast accuracy. Int. J. Forecast. 22(4), 679–688 (2006)
20. Luo, J.: Automatic short answer grading using deep learning. https://ir.library.illinoisstate.edu/etd/1495. Accessed 22 Apr 2022
21. Kaggle: The Hewlett Foundation: Automated Essay Scoring—Kaggle. https://www.kaggle.com/c/asap-aes/. Accessed 04 Oct 2021
22. Kitaev, N., Kaiser, L., Levskaya, A.: Reformer: the efficient transformer. arXiv preprint arXiv:2001.04451 (2020)
23. Kumar, S., Chakrabarti, S., Roy, S.: Earth mover’s distance pooling over Siamese LSTMs for automatic short answer grading. In: IJCAI, pp. 2046–2052 (2017)
24. Kumar, Y., Aggarwal, S., Mahata, D., Shah, R.R., Kumaraguru, P., Zimmermann, R.: Get IT scored using AutoSAS - an automated system for scoring short answers. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 9662–9669 (2019)
25. Li, Z., Zhang, C., Jin, Y., Cang, X., Puntambekar, S., Passonneau, R.J.: Learning when to defer to humans for short answer grading. In: International Conference on Artificial Intelligence in Education, pp. 414–425. Springer (2023)
26. Liu, T., Ding, W., Wang, Z., Tang, J., Huang, G.Y., Liu, Z.: Automatic short answer grading via multiway attention networks. In: Isotani, S., Millán, E., Ogan, A., Hastings, P., McLaren, B., Luckin, R. (eds.) AIED 2019. LNCS (LNAI), vol. 11626, pp. 169–173. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-23207-8_32
27. Liu, Y., et al.: RoBERTa: a robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
28. Lopez, M.M., Kalita, J.: Deep learning applied to NLP. arXiv preprint arXiv:1703.03091 (2017)
29. Lun, J., Zhu, J., Tang, Y., Yang, M.: Multiple data augmentation strategies for improving performance on automatic short answer scoring. In: AAAI, pp. 13389–13396 (2020)
30. Thakkar, M., Joorabchi, A., Ahmed, A.: Finetuning transformer models to build ASAG system (2021). https://github.com/mithunthakkar26/NLP-Projects. Accessed 22 Apr 2022
31. Putnikovic, M., Jovanovic, J.: Embeddings for automatic short answer grading: a scoping review. IEEE Trans. Learn. Technol. (2023)
32. Bonthu, S., Rama Sree, S., Krishna Prasad, M.H.M.: Automated short answer grading using deep learning: a survey. In: Lecture Notes in Computer Science, vol. 12844, pp. 61–78. Springer (2021). https://doi.org/10.1007/978-3-030-84060-0_5
