Open Access
ITM Web Conf., Volume 60, 2024
2023 5th International Conference on Advanced Information Science and System (AISS 2023)
Article Number: 00008
Number of pages: 5
DOI: https://doi.org/10.1051/itmconf/20246000008
Published online: 09 January 2024
  1. Z. Kang, J. Yang, Z. Yang, and S. Cheng, “A review of techniques for 3D reconstruction of indoor environments,” ISPRS Int. J. Geo-Inf., vol. 9, no. 5, p. 330, 2020.
  2. Y. Liu, B. N. Zhao, S. Zhao, and L. Zhang, “Progressive Motion Coherence for Remote Sensing Image Matching,” IEEE Trans. Geosci. Remote Sens., vol. 60, pp. 1–13, 2022.
  3. Y. Liu et al., “Motion Consistency-Based Correspondence Growing for Remote Sensing Image Matching,” IEEE Geosci. Remote Sens. Lett., vol. 19, pp. 1–5, 2021.
  4. Y. Liu, B. N. Zhao, and S. Zhao, “Rectified Neighborhood Construction for Robust Feature Matching With Heavy Outliers,” IEEE Geosci. Remote Sens. Lett., vol. 19, pp. 1–5, 2022.
  5. J. Ma, Y. Ma, and C. Li, “Infrared and visible image fusion methods and applications: A survey,” Inf. Fusion, vol. 45, pp. 153–178, 2018.
  6. D. G. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints,” Int. J. Comput. Vis., vol. 60, pp. 91–110, 2004.
  7. D. DeTone, T. Malisiewicz, and A. Rabinovich, “SuperPoint: Self-Supervised Interest Point Detection and Description,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. Workshops, 2018, pp. 224–236.
  8. J. Ma, X. Jiang, A. Fan, J. Jiang, and J. Yan, “Image Matching from Handcrafted to Deep Features: A Survey,” Int. J. Comput. Vis., vol. 129, pp. 23–79, 2020.
  9. M. A. Fischler and R. C. Bolles, “Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography,” Commun. ACM, vol. 24, no. 6, pp. 381–395, 1981.
  10. J. Pilet, V. Lepetit, and P. Fua, “Fast non-rigid surface detection, registration and realistic augmentation,” Int. J. Comput. Vis., vol. 76, no. 2, pp. 109–122, 2008.
  11. J. Ma, J. Zhao, J. Tian, A. L. Yuille, and Z. Tu, “Robust Point Matching via Vector Field Consensus,” IEEE Trans. Image Process., vol. 23, pp. 1706–1721, 2014.
  12. J. Bian, W.-Y. Lin, Y. Matsushita, S.-K. Yeung, T. D. Nguyen, and M.-M. Cheng, “GMS: Grid-Based Motion Statistics for Fast, Ultra-robust Feature Correspondence,” Int. J. Comput. Vis., vol. 128, pp. 1580–1593, 2020.
  13. J. Ma, J. Zhao, J. Jiang, H. Zhou, and X. Guo, “Locality preserving matching,” Int. J. Comput. Vis., vol. 127, no. 5, pp. 512–531, 2019.
  14. E. Brachmann, A. Krull, S. Nowozin, J. Shotton, F. Michel, S. Gumhold, and C. Rother, “DSAC - Differentiable RANSAC for Camera Localization,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2017, pp. 6684–6692.
  15. C. R. Qi, H. Su, K. Mo, and L. J. Guibas, “PointNet: Deep learning on point sets for 3D classification and segmentation,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2017, pp. 652–660.
  16. K. M. Yi, E. Trulls, Y. Ono, V. Lepetit, M. Salzmann, and P. Fua, “Learning to find good correspondences,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2018, pp. 2666–2674.
  17. J. Zhang et al., “OANet: Learning Two-View Correspondences and Geometry Using Order-Aware Network,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 44, no. 6, pp. 3110–3122, 2022.
  18. J. Ma, X. Jiang, J. Jiang, J. Zhao, and X. Guo, “LMR: Learning a two-class classifier for mismatch removal,” IEEE Trans. Image Process., vol. 28, no. 8, pp. 4045–4059, 2019.
  19. C. Zhao, Z. Cao, C. Li, X. Li, and J. Yang, “NM-Net: Mining reliable neighbors for robust feature correspondences,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2019, pp. 215–224.
  20. W. Sun, W. Jiang, E. Trulls, A. Tagliasacchi, and K. M. Yi, “ACNe: Attentive context normalization for robust permutation-equivariant learning,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2020, pp. 11286–11295.
  21. J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2018, pp. 7132–7141.
  22. B. Thomee, D. A. Shamma, G. Friedland, B. Elizalde, K. Ni, D. Poland, D. Borth, and L.-J. Li, “YFCC100M: The new data in multimedia research,” Commun. ACM, vol. 59, no. 2, pp. 64–73, 2016.
  23. X. Liu and J. Yang, “Progressive Neighbor Consistency Mining for Correspondence Pruning,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2023, pp. 9527–9537.
  24. L. Dai et al., “MS2DG-Net: Progressive correspondence learning via multiple sparse semantics dynamic graph,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2022, pp. 8973–8982.
  25. X. Liu, G. Xiao, R. Chen, and J. Ma, “PGFNet: Preference-guided filtering network for two-view correspondence learning,” IEEE Trans. Image Process., vol. 32, pp. 1367–1378, 2023.
  26. L. Dai et al., “Enhancing two-view correspondence learning by local-global self-attention,” Neurocomputing, vol. 459, pp. 176–187, 2021.
  27. Y. Liu et al., “Robust feature matching via advanced neighborhood topology consensus,” Neurocomputing, vol. 421, pp. 273–284, 2021.
