Open Access

|   |   |
|---|---|
| Issue | ITM Web Conf., Volume 78, 2025: International Conference on Computer Science and Electronic Information Technology (CSEIT 2025) |
| Article Number | 04036 |
| Number of page(s) | 7 |
| Section | Foundations and Frontiers in Multimodal AI, Large Models, and Generative Technologies |
| DOI | https://doi.org/10.1051/itmconf/20257804036 |
| Published online | 08 September 2025 |

