ITM Web Conf., Volume 70, 2025
2024 2nd International Conference on Data Science, Advanced Algorithm and Intelligent Computing (DAI 2024)

Open Access
Article Number: 02006
Number of pages: 8
Section: Machine Learning in Healthcare and Finance
DOI: https://doi.org/10.1051/itmconf/20257002006
Published online: 23 January 2025