Open Access

| | |
|---|---|
| Issue | ITM Web Conf., Volume 78 (2025): International Conference on Computer Science and Electronic Information Technology (CSEIT 2025) |
| Article Number | 04028 |
| Number of page(s) | 12 |
| Section | Foundations and Frontiers in Multimodal AI, Large Models, and Generative Technologies |
| DOI | https://doi.org/10.1051/itmconf/20257804028 |
| Published online | 08 September 2025 |