Open Access
ITM Web Conf., Volume 65, 2024
International Conference on Multidisciplinary Approach in Engineering, Technology and Management for Sustainable Development: A Roadmap for Viksit Bharat @ 2047 (ICMAETM-24)
Article Number: 03004
Number of pages: 8
Section: Computer Engineering and Information Technology
DOI: https://doi.org/10.1051/itmconf/20246503004
Published online: 16 July 2024
- Zhang C, Wang J, Zhou Q, Xu T, Tang K, Gui H, et al. A Survey of Automatic Source Code Summarization. Symmetry. 2022 Mar;14(3):471. [CrossRef] [Google Scholar]
- Haiduc S, Aponte J, Moreno L, Marcus A. On the Use of Automated Text Summarization Techniques for Summarizing Source Code. In: 2010 17th Working Conference on Reverse Engineering [Internet]. Beverly, MA, USA: IEEE; 2010 [cited 2024 Apr 14]. p. 35–44. Available from: http://ieeexplore.ieee.org/document/5645482/ [Google Scholar]
- Liu B, Wang T, Zhang X, Fan Q, Yin G, Deng J. A Neural-Network based Code Summarization Approach by Using Source Code and its Call Dependencies. In: Proceedings of the 11th Asia-Pacific Symposium on Internetware [Internet]. Fukuoka Japan: ACM; 2019 [cited 2024 Apr 14]. p. 1–10. Available from: https://dl.acm.org/doi/10.1145/3361242.3362774 [Google Scholar]
- Bansal A, Haque S, McMillan C. Project-Level Encoding for Neural Source Code Summarization of Subroutines [Internet]. arXiv; 2021 [cited 2024 May 7]. Available from: http://arxiv.org/abs/2103.11599 [Google Scholar]
- Wang Y, Wang W, Joty S, Hoi SCH. CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation [Internet]. arXiv; 2021 [cited 2024 Apr 15]. Available from: http://arxiv.org/abs/2109.00859 [Google Scholar]
- Feng Z, Guo D, Tang D, Duan N, Feng X, Gong M, et al. CodeBERT: A Pre-Trained Model for Programming and Natural Languages. In: Cohn T, He Y, Liu Y, editors. Findings of the Association for Computational Linguistics: EMNLP 2020 [Internet]. Online: Association for Computational Linguistics; 2020 [cited 2024 Mar 27]. p. 1536–47. Available from: https://aclanthology.org/2020.findings-emnlp.139 [CrossRef] [Google Scholar]
- Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach [Internet]. arXiv; 2019 [cited 2024 May 7]. Available from: http://arxiv.org/abs/1907.11692 [Google Scholar]
- Zaidi SAJ, Hussain S, Brahim Belhaouari S. Implementation of Text Base Information Retrieval Technique. International Journal of Advanced Computer Science and Applications. 2020 Dec 1;11. [Google Scholar]
- Belwal RC, Rai S, Gupta A. Text summarization using topic-based vector space model and semantic measure. Information Processing & Management. 2021 May;58(3):102536. [CrossRef] [Google Scholar]
- Ahmad WU, Chakraborty S, Ray B, Chang KW. A Transformer-based Approach for Source Code Summarization [Internet]. arXiv; 2020 [cited 2024 Apr 14]. Available from: http://arxiv.org/abs/2005.00653 [Google Scholar]
- Hu X, Li G, Xia X, Lo D, Jin Z. Deep code comment generation with hybrid lexical and syntactical information. Empir Software Eng. 2020 May;25(3):2179–217. [CrossRef] [Google Scholar]
- Ghadimi A, Beigy H. Hybrid multi-document summarization using pre-trained language models. Expert Systems with Applications. 2022 Apr 15;192:116292. [CrossRef] [Google Scholar]
- Liu Y, Lapata M. Text Summarization with Pretrained Encoders [Internet]. arXiv; 2019 [cited 2024 May 7]. Available from: http://arxiv.org/abs/1908.08345 [Google Scholar]
- Raffel C, Shazeer N, Roberts A, Lee K, Narang S, Matena M, et al. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer [Internet]. arXiv; 2023 [cited 2024 Apr 14]. Available from: http://arxiv.org/abs/1910.10683 [Google Scholar]
- Mutlu B, Sezer EA, Akcayol MA. Candidate sentence selection for extractive text summarization. Information Processing & Management. 2020 Nov;57(6):102359. [CrossRef] [Google Scholar]
- Scialom T, Dray PA, Lamprier S, Piwowarski B, Staiano J. MLSUM: The Multilingual Summarization Corpus [Internet]. arXiv; 2020 [cited 2024 May 7]. Available from: http://arxiv.org/abs/2004.14900 [Google Scholar]
- Wang D, Chen J, Zhou H, Qiu X, Li L. Contrastive Aligned Joint Learning for Multilingual Summarization. In: Zong C, Xia F, Li W, Navigli R, editors. Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 [Internet]. Online: Association for Computational Linguistics; 2021 [cited 2024 May 7]. p. 2739–50. Available from: https://aclanthology.org/2021.findings-acl.242 [CrossRef] [Google Scholar]
- Zhu Q, Sun Z, Xiao YA, Zhang W, Yuan K, Xiong Y, et al. A Syntax-Guided Edit Decoder for Neural Program Repair [Internet]. arXiv; 2022 [cited 2024 May 7]. Available from: http://arxiv.org/abs/2106.08253 [Google Scholar]
- Libovický J, Rosa R, Fraser A. How Language-Neutral is Multilingual BERT? [Internet]. arXiv; 2019 [cited 2024 May 7]. Available from: http://arxiv.org/abs/1911.03310 [Google Scholar]
- Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, et al. Attention Is All You Need [Internet]. arXiv; 2023 [cited 2024 Apr 21]. Available from: http://arxiv.org/abs/1706.03762 [Google Scholar]
- Sharma T, Kechagia M, Georgiou S, Tiwari R, Vats I, Moazen H, et al. A Survey on Machine Learning Techniques for Source Code Analysis [Internet]. arXiv; 2022 [cited 2024 Mar 26]. Available from: http://arxiv.org/abs/2110.09610 [Google Scholar]
- Wang A, Singh A, Michael J, Hill F, Levy O, Bowman SR. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding [Internet]. arXiv; 2019 [cited 2024 Apr 24]. Available from: http://arxiv.org/abs/1804.07461 [Google Scholar]
- Zolotareva E, Tashu TM, Horváth T. Abstractive Text Summarization using Transfer Learning. [Google Scholar]
- Dong L, Satpute MN, Wu W, Du DZ. Two-Phase Multidocument Summarization Through Content-Attention-Based Subtopic Detection. IEEE Transactions on Computational Social Systems. 2021 Dec;8(6):1379–92. [CrossRef] [Google Scholar]
- Ghadhab L, Jenhani I, Mkaouer MW, Ben Messaoud M. Augmenting commit classification by using fine-grained source code changes and a pre-trained deep neural language model. Information and Software Technology. 2021 Jul 1;135:106566. [CrossRef] [Google Scholar]
- Wang B, Xu C, Wang S, Gan Z, Cheng Y, Gao J, et al. Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models [Internet]. arXiv; 2022 [cited 2024 Jun 15]. Available from: http://arxiv.org/abs/2111.02840 [Google Scholar]
- Chung HW, Hou L, Longpre S, Zoph B, Tay Y, Fedus W, et al. Scaling Instruction-Finetuned Language Models [Internet]. arXiv; 2022 [cited 2024 Apr 16]. Available from: http://arxiv.org/abs/2210.11416 [Google Scholar]
- Victor Ikechukwu A, S M. CX-Net: an efficient ensemble semantic deep neural network for ROI identification from chest-x-ray images for COPD diagnosis. Mach Learn: Sci Technol. 2023 Jun;4(2):025021. doi: 10.1088/2632-2153/acd2a5. [CrossRef] [Google Scholar]
- Agughasi VI. Leveraging Transfer Learning for Efficient Diagnosis of COPD Using CXR Images and Explainable AI Techniques. Inteligencia Artificial. 2024 Jun;27(74). doi: 10.4114/intartif.vol27iss74pp133-151. [Google Scholar]
- Agughasi VI. The Superiority of Fine-tuning over Full-training for the Efficient Diagnosis of COPD from CXR Images. Inteligencia Artificial. 2024 May;27(74). doi: 10.4114/intartif.vol27iss74pp62-79. [Google Scholar]
- Agughasi VI, Srinivasiah M. Semi-supervised labelling of chest x-ray images using unsupervised clustering for ground-truth generation. AET. 2023 Sep;2(3):188–202. doi: 10.31763/aet.v2i3.1143. [CrossRef] [Google Scholar]
- Ikechukwu AV, S M, B H. COPDNet: An Explainable ResNet50 Model for the Diagnosis of COPD from CXR Images. In: 2023 IEEE 4th Annual Flagship India Council International Subsections Conference (INDISCON). Mysore, India: IEEE; 2023. p. 1–7. doi: 10.1109/INDISCON58499.2023.10270604. [Google Scholar]
- Agughasi Victor I, Murali S. i-Net: a deep CNN model for white blood cancer segmentation and classification. IJATEE. 2022 Oct;9(95). doi: 10.19101/IJATEE.2021.875564. [Google Scholar]
- Ikechukwu AV, Murali S. xAI: An Explainable AI Model for the Diagnosis of COPD from CXR Images. In: 2023 IEEE 2nd International Conference on Data, Decision and Systems (ICDDS); 2023. p. 1–6. doi: 10.1109/ICDDS59137.2023.10434619. [Google Scholar]