|
[1] D. Berthelot, N. Carlini, I. G. Goodfellow, N. Papernot, A. Oliver, and C. Raffel. Mixmatch: A holistic approach to semi-supervised learning. In NeurIPS, 2019. [2] K. W. Bowyer, N. V. Chawla, L. O. Hall, and W. P. Kegelmeyer. Smote: Synthetic minority over-sampling technique. J. Artif. Intell. Res., 16:321–357, 2002. [3] S. R. Bulo, G. Neuhold, and P. Kontschieder. Loss max-pooling for semantic image` segmentation. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 7082–7091, 2017. [4] J. Byrd and Z. C. Lipton. What is the effect of importance weighting in deep learn- ing? In ICML, 2018. [5] K. Cao, C. Wei, A. Gaidon, N. Arechiga, and T. Ma. Learning imbalanced datasets with label-distribution-aware margin loss. In Advances in Neural Information Processing Systems, 2019. [6] Y.-A. Chung, H.-T. Lin, and S.-W. Yang. Cost-aware pre-training for multiclass cost-sensitive deep learning. In IJCAI, 2015. [7] Y. Cui, M. Jia, T.-Y. Lin, Y. Song, and S. Belongie. Class-balanced loss based on effective number of samples. In CVPR, 2019. [8] L. N. Darlow, E. Crowley, A. Antoniou, and A. J. Storkey. Cinic-10 is not imagenet or cifar-10. ArXiv, abs/1810.03505, 2018. [9] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, 2019. [10] T. DeVries and G. W. Taylor. Improved regularization of convolutional neural net- works with cutout. arXiv preprint arXiv:1708.04552, 2017. [11] C. Elkan. The foundations of cost-sensitive learning. In IJCAI, 2001. [12] H. He, Y. Bai, E. A. Garcia, and S. Li. Adasyn: Adaptive synthetic sampling ap- proach for imbalanced learning. 2008 IEEE International Joint Conference on Neu- ral Networks (IEEE World Congress on Computational Intelligence), pages 1322– 1328, 2008. [13] Y. N. D. D. L.-P. Hongyi Zhang, Moustapha Cisse. mixup: Beyond empirical risk minimization. International Conference on Learning Representations, 2018. [14] G. V. Horn, O. M. Aodha, Y. Song, Y. Cui, C. Sun, A. Shepard, H. Adam, P. Perona, and S. J. Belongie. The inaturalist species classification and detection dataset. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8769– 8778, 2017. [15] G. V. Horn and P. Perona. The devil is in the tails: Fine-grained classification in the wild. ArXiv, abs/1709.01450, 2017. [16] C. Huang, Y. Li, C. C. Loy, and X. Tang. Learning deep representation for im- balanced classification. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5375–5384, 2016. [17] J. S. Jaehyung Kim, Jongheon Jeong. Imbalanced classification via adversarial mi- nority over-sampling. OpenReview, 2019. [18] G. Lample, M. Ott, A. Conneau, L. Denoyer, and M. Ranzato. Phrase-based & neural unsupervised machine translation. In EMNLP, 2018. [19] Z.-Z. Lan, M. Chen, S. Goodman, K. Gimpel, P. Sharma, and R. Soricut. Al- bert: A lite bert for self-supervised learning of language representations. ArXiv, abs/1909.11942, 2019. [20] Z. Li, T. Dekel, F. Cole, R. Tucker, N. Snavely, C. Liu, and W. T. Freeman. Learning the depths of moving people by watching frozen people. In CVPR, 2019. [21] T.-Y. Lin, P. Goyal, R. B. Girshick, K. He, and P. Dollar. Focal loss for dense object´ detection. IEEE transactions on pattern analysis and machine intelligence, 2017. [22] W. Liu, Y. Wen, Z. Yu, M. Li, B. Raj, and L. Song. Sphereface: Deep hypersphere embedding for face recognition. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6738–6746, 2017. [23] W. Liu, Y. Wen, Z. Yu, and M. Yang. Large-margin softmax loss for convolutional neural networks. In ICML, 2016. [24] Z. Liu, Z. Miao, X. Zhan, J. Wang, B. Gong, and S. X. Yu. Large-scale long-tailed recognition in an open world. In CVPR, 2019. [25] S. S. Mullick, S. Datta, and S. Das. Generative adversarial minority oversampling. In The IEEE International Conference on Computer Vision (ICCV), October 2019. [26] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. D.-I. Kopf, E. Yang, Z. DeVito, M. Rai- son, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala. Py- torch: An imperative style, high-performance deep learning library. In NeurIPS 2019, 2019. [27] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blon- del, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011. [28] T. R. Shaham, T. Dekel, and T. Michaeli. Singan: Learning a generative model from a single natural image. In The IEEE International Conference on Computer Vision (ICCV), October 2019. [29] R. Takahashi, T. Matsubara, and K. Uehara. Ricap: Random image cropping and patching data augmentation for deep cnns. In Proceedings of The 10th Asian Con- ference on Machine Learning, 2018. [30] S. Thulasidasan, G. Chennupati, J. A. Bilmes, T. Bhattacharya, and S. E. Michalak. On mixup training: Improved calibration and predictive uncertainty for deep neural networks. In NeurIPS, 2019. [31] K. X. Tianyu Pang and J. Zhu. Mixup inference: Better exploiting mixup to defend adversarial attacks. In ICLR, 2020. [32] S. van Steenkiste, K. Greff, and J. Schmidhuber. A perspective on objects and sys- tematic generalization in model-based rl. ArXiv, abs/1906.01035, 2019. [33] V. Verma, A. Lamb, C. Beckham, A. Najafi, I. Mitliagkas, D. Lopez-Paz, and Y. Bengio. Manifold mixup: Better representations by interpolating hidden states. In K. Chaudhuri and R. Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learn- ing Research, pages 6438–6447, Long Beach, California, USA, 09–15 Jun 2019. PMLR. [34] V. Verma, A. Lamb, J. Kannala, Y. Bengio, and D. Lopez-Paz. Interpolation consis- tency training for semi-supervised learning. In IJCAI, 2019. [35] S. Wang, W. Liu, J. Wu, L. Cao, Q. Meng, and P. J. Kennedy. Training deep neural networks on imbalanced data sets. 2016 International Joint Conference on Neural Networks (IJCNN), pages 4368–4374, 2016. [36] T.-C. Wang, M.-Y. Liu, J.-Y. Zhu, G. Liu, A. Tao, J. Kautz, and B. Catanzaro. Video- to-video synthesis. In NeurIPS, 2018. [37] S. Yun, D. Han, S. J. Oh, S. Chun, J. Choe, and Y. Yoo. Cutmix: Regularization strat- egy to train strong classifiers with localizable features. In International Conference on Computer Vision (ICCV), 2019.
|