ITM Web Conf., Volume 70, 2025
2024 2nd International Conference on Data Science, Advanced Algorithm and Intelligent Computing (DAI 2024)
Article Number: 02009
Number of pages: 8
Section: Machine Learning in Healthcare and Finance
DOI: https://doi.org/10.1051/itmconf/20257002009
Published online: 23 January 2025
Open Access
  1. L. Anderlini, L. Felli, and A. Riboni. “Legal efficiency and consistency.” European Economic Review 121 (2020).
  2. Y. Wang, et al. “Equality before the law: Legal judgment consistency analysis for fairness.” arXiv preprint arXiv:2103.13868 (2021).
  3. N. Xu, et al. “Distinguish confusing law articles for legal judgment prediction.” arXiv preprint arXiv:2004.02557 (2020).
  4. L. Yue, et al. “NeurJudge: A circumstance-aware neural framework for legal judgment prediction.” Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (2021).
  5. S. Gururangan, et al. “Don’t stop pretraining: Adapt language models to domains and tasks.” arXiv preprint arXiv:2004.10964 (2020).
  6. I. Chalkidis, et al. “LEGAL-BERT: The muppets straight out of law school.” arXiv preprint arXiv:2010.02559 (2020).
  7. S. B. Majumder, and D. Das. “Rhetorical role labelling for legal judgments using RoBERTa.” FIRE (Working Notes) (2020).
  8. H. Zhong, et al. “How does NLP benefit legal system: A summary of legal artificial intelligence.” arXiv preprint arXiv:2004.12158 (2020).
  9. S. Hochreiter, and J. Schmidhuber. “Long short-term memory.” Neural Computation 9(8):1735–1780 (1997).
  10. R. Johnson, and T. Zhang. “Deep pyramid convolutional neural networks for text categorization.” Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (2017).
  11. J. Devlin, et al. “BERT: Pre-training of deep bidirectional transformers for language understanding.” arXiv preprint arXiv:1810.04805 (2018).
  12. V. Tran, M. L. Nguyen, and K. Satoh. “Building legal case retrieval systems with lexical matching and summarization using a pre-trained phrase scoring model.” Proceedings of the Seventeenth International Conference on Artificial Intelligence and Law (2019).
  13. V. Tran, et al. “Encoded summarization: Summarizing documents into continuous vector space for legal case retrieval.” Artificial Intelligence and Law 28:441–467 (2020).
  14. A. Askari, et al. “Combining lexical and neural retrieval with Longformer-based summarization for effective case law retrieval.” DESIRES (2021).
  15. H. Chen, et al. “Knowledge is power: Understanding causality makes legal judgment prediction models more generalizable and robust.” arXiv preprint arXiv:2211.03046 (2022).
  16. X. Liu, et al. “Everything has a cause: Leveraging causal inference in legal text analysis.” arXiv preprint arXiv:2104.09420 (2021).
  17. Y. Shao, et al. “BERT-PLI: Modeling paragraph-level interactions for legal case retrieval.” IJCAI (2020).
  18. H. Li, et al. “THUIR@COLIEE 2023: Incorporating structural knowledge into pre-trained language models for legal case retrieval.” arXiv preprint arXiv:2305.06812 (2023).
  19. C. Xiao, et al. “CAIL2018: A large-scale legal dataset for judgment prediction.” arXiv preprint arXiv:1807.02478 (2018).
  20. C. Xiao, et al. “CAIL2019-SCM: A dataset of similar case matching in legal domain.” arXiv preprint arXiv:1911.08962 (2019).
  21. I. Chalkidis, et al. “LexGLUE: A benchmark dataset for legal language understanding in English.” arXiv preprint arXiv:2110.00976 (2021).
  22. J. Niklaus, et al. “MultiLegalPile: A 689GB multilingual legal corpus.” arXiv preprint arXiv:2306.02069 (2023).
  23. I. Chalkidis, et al. “LeXFiles and LegalLAMA: Facilitating English multinational legal language model development.” arXiv preprint arXiv:2305.07507 (2023).
  24. L. Zheng, et al. “When does pretraining help? Assessing self-supervised learning for law and the CaseHOLD dataset of 53,000+ legal holdings.” Proceedings of the Eighteenth International Conference on Artificial Intelligence and Law (2021).
