|
[1] Y. Li, N. Wang, J. Shi, J. Liu, and X. Hou, “Revisiting batch normalization for practical domain adaptation,” arXiv preprint arXiv:1603.04779, 2016. [2] M. Chen, H. Xue, and D. Cai, “Domain adaptation for semantic segmentation with maximum squares loss,” in Proc. ICCV, 2019. [3] Y. Zou, Z. Yu, X. Liu, B. Kumar, and J. Wang, “Confidence regularized self-training,” in Proc. ICCV, 2019. [4] T. Vu, H. Jain, M. Bucher, M. Cord, and P. Pérez, “Advent: Adversarial entropy minimization for domain adaptation in semantic segmentation,” in Proc. CVPR, 2019. [5] Y. Tsai, W. Hung, S. Schulter, K. Sohn, M. Yang, and M. Chandraker, “Learning to adapt structured output space for semantic segmentation,” in Proc. CVPR, 2018. [6] N. Araslanov and S. Roth, “Self-supervised augmentation consistency for adapting semantic segmentation,” in Proc. CVPR, 2021. [7] Y. Li, L. Yuan, and N. Vasconcelos, “Bidirectional learning for domain adaptation of semantic segmentation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6936–6945, 2019. [8] Y. Zou, Z. Yu, B. Kumar, and J. Wang, “Unsupervised domain adaptation for semantic segmentation via class-balanced self-training,” in Proceedings of the European conference on computer vision (ECCV), pp. 289–305, 2018. [9] X. Luo, W. Chen, Y. Tan, C. Li, Y. He, and X. Jia, “Exploiting negative learning for implicit pseudo label rectification in source-free domain adaptive semantic segmentation,” arXiv preprint arXiv:2106.12123, 2021. [10] P. T. S and F. Fleuret, “Uncertainty reduction for model adaptation in semantic segmentation,” in Proc. CVPR, 2021. [11] V. Prabhu, S. Khare, D. Kartik, and J. Hoffman, “S4t: Source-free domain adaptation for semantic segmentation via self-supervised selective self-training,” arXiv preprint arXiv:2107.10140, 2021. [12] R. Lopes, S. Fenu, and T. Starner, “Data-free knowledge distillation for deep neural networks,” CoRR, vol. abs/1710.07535, 2017. [13] H. Chen, Y. Wang, C. Xu, Z. Yang, C. Liu, B. Shi, C. Xu, C. Xu, and Q. Tian, “Data-free learning of student networks,” CoRR, vol. abs/1904.01186, 2019. [14] P. Micaelli and A. Storkey, “Zero-shot knowledge transfer via adversarial belief matching,” in NeurIPS, 2019. [15] Y. Liu, W. Zhang, and J. Wang, “Source-free domain adaptation for semantic segmentation,” in Proc. CVPR, 2021. [16] R. Li, Q. Jiao, W. Cao, H. Wong, and S. Wu, “Model adaptation: Unsupervised domain adaptation without source data,” in Proc. CVPR, 2020. [17] H. Yin, P. Molchanov, J. Alvarez, Z. Li, A. Mallya, D. Hoiem, N. Jha, and J. Kautz, “Dreaming to distill: Data-free knowledge transfer via deepinversion,” in Proc. CVPR, 2020. [18] Z. Qiu, Y. Zhang, H. Lin, S. Niu, Y. Liu, Q. Du, and M. Tan, “Source-free domain adaptation via avatar prototype generation and adaptation,” in International Joint Conference on Artificial Intelligence, 2021. [19] H. Xia, H. Zhao, and Z. Ding, “Adaptive adversarial network for source-free domain adaptation,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9010–9019, 2021. [20] J. Liang, D. Hu, Y. Wang, R. He, and J. Feng, “Source data-absent unsupervised domain adaptation through hypothesis transfer and labeling transfer,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021. [21] S. Yang, J. van de Weijer, L. Herranz, S. Jui, et al., “Exploiting the intrinsic neighborhood structure for source-free domain adaptation,” Advances in Neural Information Processing Systems, vol. 34, pp. 29393–29405, 2021. [22] J. Huang, D. Guan, A. Xiao, and S. Lu, “Model adaptation: Historical contrastive learning for unsupervised domain adaptation without source data,” in NeurIPS, 2021. [23] J. Kundu, A. Kulkarni, A. Singh, V. Jampani, and R. Babu, “Generalize then adapt: Source-free domain adaptive semantic segmentation,” in Proc. ICCV, 2021. [24] Y. Kim, J. Yim, J. Yun, and J. Kim, “Nlnl: Negative learning for noisy labels,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 101–110, 2019. [25] S. Richter, V. Vineet, S. Roth, and V. Koltun, “Playing for data: Ground truth from computer games,” in Proc. ECCV, 2016. [26] G. Ros, L. Sellart, J. Materzynska, D. Vazquez, and A. Lopez, “The synthia dataset: A large collection of synthetic images for semantic segmentation of urban scenes,” in Proc. CVPR, 2016. [27] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, “The cityscapes dataset for semantic urban scene understanding,” in Proc. CVPR, 2016. [28] Y. Luo, L. Zheng, T. Guan, J. Yu, and Y. Yang, “Taking a closer look at domain shift: Category-level adversaries for semantics consistent domain adaptation,” in Proc. CVPR, 2019. [29] J. Chang, Y.-T. Pang, and C.-T. Hsu, “Towards the target: Self-regularized progressive learning for unsupervised domain adaptation on semantic segmentation,” in Asian Conference on Pattern Recognition, pp. 299–313, Springer, 2022. [30] L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. Yuille, “Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs,” IEEE Trans. PAMI, vol. 40, no. 4, pp. 834–848, 2017. [31] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. CVPR, 2016. |