[1] I. Lluvia, E. Lazkano, and A. Ansuategi, “Active mapping and robot exploration: A survey,” Sensors, vol. 21, no. 7, p. 2445, 2021.
[2] J. N. Kundu, M. Rahul, A. Ganeshan, and R. V. Babu, “Object pose estimation from monocular image using multi-view keypoint correspondence,” in Proc. Eur. Conf. Comput. Vis., 2018, pp. 298–313.
[3] S. Gauglitz, T. Höllerer, and M. Turk, “Evaluation of interest point detectors and feature descriptors for visual tracking,” Int. J. Comput. Vis., vol. 94, no. 3, pp. 335–360, 2011.
[4] C. Harris and M. Stephens, “A combined corner and edge detector,” in Proc. Alvey Vis. Conf., 1988, pp. 1–6.
[5] D. Marimon, A. Bonnin, T. Adamek, and R. Gimeno, “DARTs: Efficient scale-space extraction of DAISY keypoints,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2010, pp. 2416–2423.
[6] D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vis., vol. 60, no. 2, pp. 91–110, 2004.
[7] S. A. K. Tareen and Z. Saleem, “A comparative analysis of SIFT, SURF, KAZE, AKAZE, ORB, and BRISK,” in Proc. Int. Conf. Comput., Math. Eng. Technol., 2018, pp. 1–10.
[8] H. Bay, T. Tuytelaars, and L. Van Gool, “SURF: Speeded up robust features,” in Proc. Eur. Conf. Comput. Vis., 2006, pp. 404–417.
[9] E. Rosten and T. Drummond, “Machine learning for high-speed corner detection,” in Proc. Eur. Conf. Comput. Vis., 2006, pp. 430–443.
[10] M. Calonder, V. Lepetit, C. Strecha, and P. Fua, “BRIEF: Binary robust independent elementary features,” in Proc. Eur. Conf. Comput. Vis., 2010, pp. 778–792.
[11] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “ORB: An efficient alternative to SIFT or SURF,” in Proc. IEEE Int. Conf. Comput. Vis., 2011, pp. 2564–2571.
[12] A. Mukundan et al., “Understanding and improving kernel local descriptors,” Int. J. Comput. Vis., vol. 127, no. 11, pp. 1723–1737, 2019.
[13] J. Ma and Y. Deng, “SDGMNet: Statistic-based dynamic gradient modulation for local descriptor learning,” arXiv preprint arXiv:2106.04434, 2021.
[14] V. Balntas, K. Lenc, A. Vedaldi, and K. Mikolajczyk, “HPatches: A benchmark and evaluation of handcrafted and learned local descriptors,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2017, pp. 5173–5182.
[15] K. Mikolajczyk and C. Schmid, “A performance evaluation of local descriptors,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 10, pp. 1615–1630, 2005.
[16] T. Lindeberg, “Linear scale-space and related multi-scale representations,” in Scale-Space Theory in Computer Vision. Berlin, Germany: Springer, 1994, pp. 31–54.
[17] E. Karami, S. Prasad, and M. Shehata, “Image matching using SIFT, SURF, BRIEF and ORB: Performance comparison for distorted images,” arXiv preprint arXiv:1710.02726, 2017.
[18] C. Strecha, A. Lindner, K. Ali, and P. Fua, “Training for task specific keypoint detection,” in Joint Pattern Recognition Symposium. Berlin, Germany: Springer, 2009, pp. 151–160.
[19] W. Hartmann, M. Havlena, and K. Schindler, “Predicting matchability,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2014, pp. 9–16.
[20] A. Barroso-Laguna, E. Riba, D. Ponsa, and K. Mikolajczyk, “Key.Net: Keypoint detection by handcrafted and learned CNN filters,” in Proc. IEEE/CVF Int. Conf. Comput. Vis., 2019, pp. 5836–5844.
[21] Y. Verdie, K. Yi, P. Fua, and V. Lepetit, “TILDE: A temporally invariant learned detector,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2015, pp. 5279–5288.
[22] Y. Ono, E. Trulls, P. Fua, and K. M. Yi, “LF-Net: Learning local features from images,” in Advances in Neural Information Processing Systems 31, 2018. [Online]. Available: https://proceedings.neurips.cc/paper/2018
[23] D. DeTone, T. Malisiewicz, and A. Rabinovich, “SuperPoint: Self-supervised interest point detection and description,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. Workshops, 2018, pp. 224–236.
[24] K. M. Yi, E. Trulls, V. Lepetit, and P. Fua, “LIFT: Learned invariant feature transform,” in Proc. Eur. Conf. Comput. Vis., 2016, pp. 467–483.
[25] J. Sun, Z. Shen, Y. Wang, H. Bao, and X. Zhou, “LoFTR: Detector-free local feature matching with transformers,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2021, pp. 8922–8931.
[26] D. Shi et al., “Multi-actor hierarchical attention critic with RNN-based feature extraction,” Neurocomputing, vol. 471, pp. 79–93, 2022.
[27] H. Naeem and A. A. Bin-Salem, “A CNN-LSTM network with multi-level feature extraction-based approach for automated detection of coronavirus from CT scan and X-ray images,” Appl. Soft Comput., vol. 113, no. 1, p. 107918, 2021.
[28] J. Revaud et al., “R2D2: Repeatable and reliable detector and descriptor,” arXiv preprint arXiv:1906.06195, 2019.
[29] Z. Luo et al., “ContextDesc: Local descriptor augmentation with cross-modality context,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2019, pp. 2527–2536.
[30] H. Noh, A. Araujo, J. Sim, T. Weyand, and B. Han, “Large-scale image retrieval with attentive deep local features,” in Proc. IEEE Int. Conf. Comput. Vis., 2017, pp. 3456–3465.
[31] Y. Tian, X. Yu, B. Fan, F. Wu, H. Heijnen, and V. Balntas, “SOSNet: Second order similarity regularization for local descriptor learning,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2019, pp. 11016–11025.
[32] A. G. Howard et al., “MobileNets: Efficient convolutional neural networks for mobile vision applications,” arXiv preprint arXiv:1704.04861, 2017.
[33] M. Tan and Q. Le, “EfficientNet: Rethinking model scaling for convolutional neural networks,” in Proc. Int. Conf. Mach. Learn., 2019, pp. 6105–6114.
[34] D. Mishkin, F. Radenovic, and J. Matas, “Repeatability is not enough: Learning affine regions via discriminability,” in Proc. Eur. Conf. Comput. Vis., 2018, pp. 284–300.
[35] A. Mishchuk, D. Mishkin, F. Radenovic, and J. Matas, “Working hard to know your neighbor’s margins: Local descriptor learning loss,” in Advances in Neural Information Processing Systems 30, 2017. [Online]. Available: https://proceedings.neurips.cc/paper/2017
[36] M. Dusmanu et al., “D2-Net: A trainable CNN for joint description and detection of local features,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2019, pp. 8092–8101.
[37] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
[38] Y. Tian, B. Fan, and F. Wu, “L2-Net: Deep learning of discriminative patch descriptor in Euclidean space,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2017, pp. 661–669.
[39] P. Truong, M. Danelljan, R. Timofte, and L. Van Gool, “PDC-Net+: Enhanced probabilistic dense correspondence network,” arXiv preprint arXiv:2109.13912, 2021.
[40] K. Li, L. Wang, L. Liu, Q. Ran, K. Xu, and Y. Guo, “Decoupling makes weakly supervised local feature better,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2022, pp. 15838–15848.
[41] Z. Huo et al., “D-MSCD: Mean-standard deviation curve descriptor based on deep learning,” IEEE Access, vol. 8, pp. 204509–204517, 2020.
[42] K. He, Y. Lu, and S. Sclaroff, “Local descriptors optimized for average precision,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2018, pp. 596–605.
[43] K. Mikolajczyk et al., “A comparison of affine region detectors,” Int. J. Comput. Vis., vol. 65, no. 1, pp. 43–72, 2005.
[44] F. Radenović, A. Iscen, G. Tolias, Y. Avrithis, and O. Chum, “Revisiting Oxford and Paris: Large-scale image retrieval benchmarking,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2018, pp. 5706–5715.
[45] T. Sattler et al., “Benchmarking 6-DOF outdoor visual localization in changing conditions,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2018, pp. 8601–8610.
[46] P. Wang et al., “Understanding convolution for semantic segmentation,” in Proc. IEEE Winter Conf. Appl. Comput. Vis., 2018, pp. 1451–1460.
[47] M. Brown, G. Hua, and S. Winder, “Discriminative learning of local image descriptors,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 1, pp. 43–57, 2010.