| Issue | ITM Web Conf., Volume 78, 2025: International Conference on Computer Science and Electronic Information Technology (CSEIT 2025) |
|---|---|
| Article Number | 04033 |
| Number of page(s) | 13 |
| Section | Foundations and Frontiers in Multimodal AI, Large Models, and Generative Technologies |
| DOI | https://doi.org/10.1051/itmconf/20257804033 |
| Published online | 08 September 2025 |
Exploring The Principles and Prospects for Efficient Fine-Tuning of Transformer-Based Pre-Trained Large Language Models
FOB, City University of Macau, Macau, China
Abstract
In recent years, large language models (LLMs) have achieved breakthroughs in natural language processing and multimodal tasks. However, growing model sizes and the high cost of full-parameter fine-tuning pose challenges to their efficient adaptation. This paper focuses on Transformer-based parameter-efficient fine-tuning (PEFT) techniques for large models and analyzes three classes of methods, namely additive-based, specification-based, and reparameterization-based, from the perspectives of performance, engineering complexity, and applicability. It concludes that PEFT techniques achieve performance comparable to, or even better than, full-parameter fine-tuning across a variety of tasks, but still face stability and adaptability challenges in complex scenarios. Future research can further advance the field by improving flexibility, optimizing tuning strategies, and addressing privacy and security. The aim of this paper is to provide a basic reference for researchers seeking to understand PEFT algorithms and their system implementations, and to further promote the adoption of PEFT techniques in research and industry.
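To make the reparameterization-based family concrete, the sketch below illustrates the idea behind LoRA-style low-rank adaptation, one well-known reparameterization-based PEFT method: the pretrained weight matrix is frozen and only a small low-rank update is trained. This is a minimal illustrative sketch in PyTorch, not code from the paper; the class name `LoRALinear` and the hyperparameters `r` and `alpha` are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank update (LoRA-style)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # freeze all pretrained weights
            p.requires_grad = False
        # Low-rank factors: only these r * (in + out) parameters are trained.
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r           # common LoRA scaling convention

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = base(x) + scaling * x A^T B^T  (the low-rank correction)
        return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)

# Hypothetical usage: wrap a 768x768 projection and count trainable parameters.
layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable: {trainable} of {sum(p.numel() for p in layer.parameters())}")
# trainable: 12288 of 602880
```

Because only the factors A and B receive gradients, the trainable parameter count scales with r(d_in + d_out) rather than d_in x d_out, which is the core efficiency argument behind this PEFT family.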
© The Authors, published by EDP Sciences, 2025
This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

