[1] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In European Conference on Computer Vision, pages 740–755. Springer, 2014.
[2] Jia Deng et al. ImageNet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255, 2009.
[3] Benoît Frénay and Michel Verleysen. Classification in the presence of label noise: A survey. IEEE Transactions on Neural Networks and Learning Systems, 25(5):845–869, 2014.
[4] Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir Bourdev, and Rob Fergus. Training convolutional networks with noisy labels. In International Conference on Learning Representations, Workshop Track, 2015.
[5] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. In International Conference on Learning Representations, 2017.
[6] Yann LeCun et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[7] Alex Krizhevsky et al. Learning multiple layers of features from tiny images. Technical report, 2009.
[8] Giorgio Patrini, Alessandro Rozza, Aditya Krishna Menon, Richard Nock, and Lizhen Qu. Making deep neural networks robust to label noise: A loss correction approach. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1944–1952, 2017.
[9] Dan Hendrycks, Mantas Mazeika, Duncan Wilson, and Kevin Gimpel. Using trusted data to train deep networks on labels corrupted by severe noise. In Advances in Neural Information Processing Systems, pages 10477–10486, 2018.
[10] Xiaobo Xia et al. Are anchor points really indispensable in label-noise learning? In Advances in Neural Information Processing Systems, pages 6838–6849, 2019.
[11] Karishma Sharma et al. NoiseRank: Unsupervised label noise reduction with dependence models. arXiv preprint arXiv:2003.06729, 2020.
[12] Jiangfan Han, Ping Luo, and Xiaogang Wang. Deep self-learning from noisy labels. In Proceedings of the IEEE International Conference on Computer Vision, pages 5138–5147, 2019.
[13] Daiki Tanaka, Daiki Ikami, Toshihiko Yamasaki, and Kiyoharu Aizawa. Joint optimization framework for learning with noisy labels. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5552–5560, 2018.
[14] Zhilu Zhang and Mert Sabuncu. Generalized cross entropy loss for training deep neural networks with noisy labels. In Advances in Neural Information Processing Systems, pages 8792–8802, 2018.
[15] Hongxin Wei et al. Combating noisy labels by agreement: A joint training method with co-regularization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13726–13735, 2020.
[16] Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. arXiv preprint arXiv:1610.02242, 2016.
[17] Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Advances in Neural Information Processing Systems, pages 1195–1204, 2017.
[18] Junnan Li et al. Learning to learn from noisy labeled data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5051–5059, 2019.
[19] Junnan Li, Richard Socher, and Steven C. H. Hoi. DivideMix: Learning with noisy labels as semi-supervised learning. arXiv preprint arXiv:2002.07394, 2020.
[20] Hwanjun Song et al. Learning from noisy labels with deep neural networks: A survey. arXiv preprint arXiv:2007.08199, 2020.
[21] R. Wang, T. Liu, and D. Tao. Multiclass learning with partially corrupted labels. IEEE Transactions on Neural Networks and Learning Systems, 29(6):2568–2580, 2017.
[22] Florian Schroff, Dmitry Kalenichenko, and James Philbin. FaceNet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 815–823, 2015.
[23] E. Arazo, D. Ortego, P. Albert, N. E. O'Connor, and K. McGuinness. Unsupervised label noise modeling and loss correction. In International Conference on Machine Learning, 2019.
[24] Duc Tam Nguyen et al. SELF: Learning to filter noisy labels with self-ensembling. arXiv preprint arXiv:1910.01842, 2019.
[25] A. Ghosh, H. Kumar, and P. Sastry. Robust loss functions under label noise for deep neural networks. In AAAI Conference on Artificial Intelligence, 2017.
[26] Z. Zhang and M. Sabuncu. Generalized cross entropy loss for training deep neural networks with noisy labels. In Advances in Neural Information Processing Systems, pages 8778–8788, 2018.
[27] Y. Wang, X. Ma, Z. Chen, Y. Luo, J. Yi, and J. Bailey. Symmetric cross entropy for robust learning with noisy labels. In Proceedings of the IEEE International Conference on Computer Vision, pages 322–330, 2019.
[28] Y. Yan, Z. Xu, I. W. Tsang, G. Long, and Y. Yang. Robust semi-supervised learning through label aggregation. In AAAI Conference on Artificial Intelligence, 2016.
[29] Dengyong Zhou et al. Learning with local and global consistency. In Advances in Neural Information Processing Systems, pages 321–328, 2004.
[30] Ahmet Iscen et al. Label propagation for deep semi-supervised learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5070–5079, 2019.
[31] Matthijs Douze, Arthur Szlam, Bharath Hariharan, and Hervé Jégou. Low-shot learning with large-scale diffusion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
[32] Aykut Erdem and Marcello Pelillo. Graph transduction as a noncooperative game. Neural Computation, 24, 2012.
[33] Myriam Bontonou et al. Introducing graph smoothness loss for training deep learning architectures. arXiv preprint arXiv:1905.00301, 2019.
[34] Tong Xiao, Tian Xia, Yi Yang, Chang Huang, and Xiaogang Wang. Learning from massive noisy labeled data for image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015.
[35] Kuang-Huei Lee, Xiaodong He, Lei Zhang, and Linjun Yang. CleanNet: Transfer learning for scalable image classifier training with label noise. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
[36] Lukas Bossard, Matthieu Guillaumin, and Luc J. Van Gool. Food-101 – Mining discriminative components with random forests. In European Conference on Computer Vision, 2014.
[37] Kaiming He et al. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[38] Florian Schroff, Dmitry Kalenichenko, and James Philbin. FaceNet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 815–823, 2015.
[39] Sheng Liu et al. Early-learning regularization prevents memorization of noisy labels. arXiv preprint arXiv:2007.00151, 2020.
[40] Zizhao Zhang et al. Distilling effective supervision from severe label noise. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9294–9303, 2020.
[41] Giorgio Patrini, Alessandro Rozza, Aditya Krishna Menon, Richard Nock, and Lizhen Qu. Making deep neural networks robust to label noise: A loss correction approach. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1944–1952, 2017.
[42] Weihe Zhang, Yali Wang, and Yu Qiao. MetaCleaner: Learning to hallucinate clean representations for noisy-labeled visual recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7373–7382, 2019.