REFERENCES
[1] J. Z. Tsai, R.-S. Chang, and T.-Y. Li, “Detection of gap mura in TFT LCDs by the interference pattern and image sensing method,” IEEE Trans. Instrum. Meas., vol. 62, no. 11, pp. 3087–3092, Nov. 2013.
[2] A. Y. Jazi, J. J. Liu, and H. Lee, “Automatic inspection of TFT-LCD glass substrates using optimized support vector machines,” IFAC Proceedings Volumes, vol. 45, no. 15, pp. 325–330, 2012.
[3] S. Mei, H. Yang, and Z. Yin, “Unsupervised-learning-based feature-level fusion method for mura defect recognition,” IEEE Trans. Semicond. Manuf., vol. 30, no. 1, pp. 105–113, Feb. 2017.
[4] B. Chen, Z. Fang, Y. Xia, L. Zhang, Y. Huang, and L. Wang, “Accurate defect detection via sparsity reconstruction for weld radiographs,” NDT E Int., vol. 94, pp. 62–69, Mar. 2018.
[5] L.-F. Chen, C.-T. Su, and M.-H. Chen, “A neural-network approach for defect recognition in TFT-LCD photolithography process,” IEEE Trans. Electron. Packag. Manuf., vol. 32, no. 1, pp. 1–8, Jan. 2009.
[6] T. Nakazawa and D. V. Kulkarni, “Wafer map defect pattern classification and image retrieval using convolutional neural network,” IEEE Trans. Semicond. Manuf., vol. 31, no. 2, pp. 309–314, May 2018.
[7] T.-Y. Li, J.-Z. Tsai, R.-S. Chang, L.-W. Ho, and C.-F. Yang, “Pretest gap mura on TFT LCDs using the optical interference pattern sensing method and neural network classification,” IEEE Trans. Ind. Electron., vol. 60, no. 9, pp. 3976–3982, Sep. 2013.
[8] K. Taniguchi, K. Ueta, and S. Tatsumi, “A mura detection method,” Pattern Recognit., vol. 39, no. 6, pp. 1044–1052, 2006.
[9] C.-J. Lu and D.-M. Tsai, “Automatic defect inspection for LCDs using singular value decomposition,” Int. J. Adv. Manuf. Technol., vol. 25, nos. 1–2, pp. 53–61, 2005.
[10] D.-C. Tseng, Y.-C. Lee, and C.-E. Shie, “LCD mura detection with multi-image accumulation and multi-resolution background subtraction,” Int. J. Innov. Comput. Inf. Control, vol. 8, no. 7, pp. 4837–4850, 2012.
[11] K. Li, H. Li, Y. Liu, and P. Liang, “Background suppression of LCD mura defect using B-spline surface fitting,” Opto-Electron. Eng., vol. 41, no. 2, pp. 33–39, 2014.
[12] S. Jin, C. Ji, C. Yan, and J. Xing, “TFT-LCD mura defect detection using DCT and the dual-γ piecewise exponential transform,” Precis. Eng., vol. 54, pp. 371–378, 2018.
[13] Y.-J. Chen, T.-H. Lin, K.-H. Chang, and C.-F. Chien, “Feature extraction for defect classification and yield enhancement in color filter and micro-lens manufacturing: An empirical study,” J. Ind. Prod. Eng., vol. 30, no. 8, pp. 510–517, 2013.
[14] M. Kim, M. Lee, M. An, and H. Lee, “Effective automatic defect classification process based on CNN with stacking ensemble model for TFT-LCD panel,” J. Intell. Manuf., vol. 31, no. 5, pp. 1165–1174, 2020, doi: 10.1007/s10845-019-01502-y.
[15] H. Yang, S. Mei, K. Song, B. Tao, and Z. Yin, “Transfer-learning-based online mura defect classification,” IEEE Trans. Semicond. Manuf., vol. 31, no. 1, pp. 116–123, Feb. 2018.
[16] Z. Wei, J. Wang, H. Nichol, S. Wiebe, and D. Chapman, “A median Gaussian filtering framework for Moiré pattern noise removal from X-ray microscopy image,” Micron, vol. 43, nos. 2–3, pp. 170–176, 2012.
[17] F. Liu, J. Yang, and H. Yue, “Moiré pattern removal from texture images via low-rank and sparse matrix decomposition,” in Proc. IEEE Vis. Commun. Image Process. (VCIP), Singapore, 2015, pp. 1–4.
[18] J. Yang, X. Zhang, C. Cai, and K. Li, “Demoiréing for screen-shot images with multi-channel layer decomposition,” in Proc. IEEE Vis. Commun. Image Process. (VCIP), St. Petersburg, FL, USA, 2017, pp. 1–4.
[19] T. H. Kim and S. I. Park, “Deep context-aware descreening and rescreening of halftone images,” ACM Trans. Graph., vol. 37, no. 4, pp. 1–12, 2018.
[20] Y. Sun, Y. Yu, and W. Wang, “Moiré photo restoration using multiresolution convolutional neural networks,” IEEE Trans. Image Process., vol. 27, no. 8, pp. 4160–4172, Aug. 2018.
[21] B. Liu, X. Shu, and X. Wu, “Demoiréing of camera-captured screen images using deep convolutional neural network,” 2018, arXiv:1804.03809.
[22] T. Gao, Y. Guo, X. Zheng, Q. Wang, and X. Luo, “Moiré pattern removal with multi-scale feature enhancing network,” in Proc. IEEE Int. Conf. Multimedia Expo Workshops (ICMEW), Shanghai, China, 2019, pp. 240–245.
[23] S. Yuan, R. Timofte, G. Slabaugh, and A. Leonardis, “AIM 2019 challenge on image demoireing: Dataset and study,” in Proc. IEEE/CVF Int. Conf. Comput. Vis. Workshop (ICCVW), Seoul, South Korea, Oct. 2019, pp. 3526–3533. [Online]. Available: https://arxiv.org/abs/1911.02498
[24] Z. Wang, Q. She, and T. E. Ward, “Generative adversarial networks: A survey and taxonomy,” IEEE Trans. Emerg. Topics Comput. Intell., 2019. [Online]. Available: https://arxiv.org/abs/1906.01529
[25] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Proc. 27th Int. Conf. Neural Inf. Process. Syst. (NIPS), Dec. 2014, pp. 2672–2680.
[26] D. W. Kim, J. R. Chung, and S.-W. Jung, “GRDN: Grouped residual dense network for real image denoising and GAN-based real-world noise modeling,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. Workshops (CVPRW), Long Beach, CA, USA, 2019, pp. 2086–2094. [Online]. Available: https://arxiv.org/abs/1905.11172
[27] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2017, pp. 5967–5976.
[28] M. Mirza and S. Osindero, “Conditional generative adversarial nets,” 2014. [Online]. Available: arXiv:1411.1784
[29] X. Yi and P. Babyn, “Sharpness-aware low-dose CT denoising using conditional generative adversarial network,” J. Digit. Imag., vol. 31, no. 5, pp. 655–669, 2018.
[30] H.-J. Kim and D. Lee, “Image denoising with conditional generative adversarial networks (CGAN) in low dose chest images,” Nucl. Instrum. Methods Phys. Res. A, Accel. Spectrom. Detect. Assoc. Equip., vol. 954, Feb. 2020, Art. no. 161914.
[31] M. P. Heinrich, M. Stille, and T. M. Buzug, “Residual U-Net convolutional neural network architecture for low-dose CT denoising,” Curr. Directions Biomed. Eng., vol. 4, no. 1, pp. 297–300, 2018. [Online]. Available: https://doi.org/10.1515/cdbme-2018-0072
[32] M. Livne, J. Rieger, O. U. Aydin, A. A. Taha, E. M. Akay, T. Kossen, J. Sobesky, J. D. Kelleher, K. Hildebrand, D. Frey, and V. I. Madai, “A U-Net deep learning framework for high performance vessel segmentation in patients with cerebrovascular disease,” Front. Neurosci., vol. 13, p. 97, 2019, doi: 10.3389/fnins.2019.00097.
[33] M. Kolařík, R. Burget, V. Uher, K. Říha, and M. K. Dutta, “Optimized high resolution 3D Dense-U-Net network for brain and spine segmentation,” Appl. Sci., vol. 9, no. 3, pp. 1–17, 2019.
[34] J. Wu, Y. Zhang, K. Wang, and X. Tang, “Skip connection U-Net for white matter hyperintensities segmentation from MRI,” IEEE Access, vol. 7, pp. 155194–155202, 2019.
Tang, "Skip connection U-Net for white matter hyperintensities segmentation from MRI," IEEE Access, vol. 7, pp. 155194-155202, 2019. [35] V. Mnih, N. Heess, A. Graves, and K. kavukcuoglu, “Recurrent models of visual attention,” In Neural Information Processing Systems (NIPS), pp. 2204-2212, 2014. [36] J.H. Kim, S.W. Lee, D. Kwak, M.O. Heo, J. Kim, J.W. Ha, and B.T. Zhang, “Multimodal residual learning for visual Qa,” In Advances in Neural Information Processing Systems, pp. 361-369, 2016. [37] H. Noh, S. Hong, and B. Han, “Learning deconvolution network for semantic segmentation,” in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), Santiago, Chile, pp. 1520–1528, 2015. [38] J. K. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, and Y. Bengio, “Attention-based models for speech recognition,” in Neural Information Processing Systems (NeurIPS), pp. 577–585, 2015. [39] L. Gao, X. Li, J. Song, and H. T. Shen, “Hierarchical LSTMs with adaptive attention for visual captioning,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 42, no. 5, pp.1112–1131, May 2020. [40] V. Mnih, N. Heess, A. Graves, and K. Kavukcuoglu, “Recurrent models of visual attention,” in Neural Information Processing Systems (NIPS). Red Hook, NY, USA: Curran, 2014, pp. 2204–2212. [41] F. Wang, M. Jiang, C. Qian, S. Yang, C. Li, H. Zhang, X. Wang, and X. Tang, “Residual attention network for image classification,” in Proc. Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 2017 [Online]. Available: https://arxiv.org/abs/1704.06904 [42] X. Nie, M. Duan, H. Ding, B. Hu, and E. K. Wong, “Attention mask R-CNN for ship detection and segmentation from remote sensing images,” IEEE Access, vol. 8, pp. 9325-9334, 2020. [43] M. Oquab, L. Bottou, I. Laptev, and J. Sivic, “Learning and transferring mid-level image representations using convolutional neural networks,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Columbus, OH, USA, 2014, pp. 1717–1724. [44] K. Simonyan, and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in Proc. ICLR, San Diego, CA, USA, May 7-9, 2015, pp. 1-14. [Online]. Available: https://arxiv.org/abs/1409.1556. [45] C. Tan, F. Sun, T. Kong, W. Zhang, C. Yang, and C. Liu, “A Survey on deep transfer learning,” in Proc. 27th ICANN, Rhodes, Greece, 4–7 October, pp. 270-279, 2018. [46] Y. Guo, H. Shi, A. Kumar, K. Grauman, T. Rosing, and R. S. Feris, “SpotTune: Transfer learning through adaptive fine-tuning,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2019, pp. 4805–4814. [47] R. Zhang, H. Tao, L. Wu and Y. Guan, "Transfer learning with neural networks for bearing fault diagnosis in changing working conditions." IEEE Access 5 (2017): 14347-14357. [48] J. Margeta, A. Criminisi, R. C. Lozoya, D. C. Lee, and N. Ayache, “Fine-tuned convolutional neural nets for cardiac MRI acquisition plane recognition,” Comput. Methods Biomech. Biomed. Eng. Imag. Visual., vol. 5, no. 5, pp. 339–349, 2016, doi: 10.1080/21681163.2015.1061448. [49] W. Liu, M. Zhang, Z. Luo, and Y. Cai, “An ensemble deep learning method for vehicle type classification on visual traffic surveillance sensors,” IEEE Access, vol. 5, pp. 24417-24425, 2017. [50] L. Rokach, “Ensemble-based classifiers,” Artif. Intell. Rev., vol. 33, no. 1–2, pp. 1–39, 2010. [51] M. M. Fraz, P. Remagnino, A. Hoppe, B. Uyyanonvara, A. R. Rudnicka, C. G. Owen, S. A. Barman, “An ensemble classification-based approach applied to retinal blood vessel segmentation,” IEEE Trans. Biomed. Eng., vol. 59, no. 9, pp. 2538–2548, Sep. 2012. [52] X. 
[53] L. Breiman, “Bagging predictors,” Mach. Learn., vol. 24, no. 2, pp. 123–140, 1996.
[54] Y. Freund and R. E. Schapire, “A decision-theoretic generalization of on-line learning and an application to boosting,” J. Comput. Syst. Sci., vol. 55, no. 1, pp. 119–139, 1997.
[55] D. H. Wolpert, “Stacked generalization,” Neural Netw., vol. 5, no. 2, pp. 241–259, 1992.
[56] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Comput., vol. 9, no. 8, pp. 1735–1780, 1997.
[57] J.-H. Kim, S.-W. Lee, D. Kwak, M.-O. Heo, J. Kim, J.-W. Ha, and B.-T. Zhang, “Multimodal residual learning for visual QA,” in Proc. Adv. Neural Inf. Process. Syst. (NIPS), 2016, pp. 361–369.
[58] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Intervent. (MICCAI), Munich, Germany, 2015, pp. 234–241. [Online]. Available: https://arxiv.org/abs/1505.04597
[59] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Boston, MA, USA, 2015, doi: 10.1109/CVPR.2015.7298965.
[60] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, “Feature pyramid networks for object detection,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Honolulu, HI, USA, 2017. [Online]. Available: https://arxiv.org/abs/1612.03144
[61] S. Iizuka, E. S. Serra, and H. Ishikawa, “Globally and locally consistent image completion,” ACM Trans. Graph., vol. 36, no. 4, pp. 107–115, 2017.
[62] K. Zhang, Y. Guo, X. Wang, J. Yuan, and Q. Ding, “Multiple feature reweight DenseNet for image classification,” IEEE Access, vol. 7, pp. 9872–9880, 2019.
[63] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, Apr. 2004.
[64] Q. Dong, S. Gong, and X. Zhu, “Imbalanced deep learning by minority class incremental rectification,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 41, no. 6, pp. 1367–1381, 2018.
[65] G. Hu, X. Peng, Y. Yang, T. M. Hospedales, and J. Verbeek, “Frankenstein: Learning deep face representations using small data,” IEEE Trans. Image Process., vol. 27, no. 1, pp. 293–303, 2018.
[66] C. Qiu, S. Zhang, C. Wang, Z. Yu, H. Zheng, and B. Zheng, “Improving transfer learning and squeeze-and-excitation networks for small-scale fine-grained fish image classification,” IEEE Access, vol. 6, pp. 78503–78512, 2018.
[67] J. Wang, S. Li, B. Han, Z. An, H. Bao, and S. Ji, “Generalization of deep neural networks for imbalanced fault classification of machinery using generative adversarial networks,” IEEE Access, vol. 7, pp. 111168–111180, 2019.
[68] D. Lee, S. Lee, H. Lee, K. Lee, and H.-J. Lee, “Resolution-preserving generative adversarial networks for image enhancement,” IEEE Access, vol. 7, pp. 110344–110357, 2019.
[69] Y. Yang, C. Hou, Y. Lang, G. Yue, and Y. He, “One-class classification using generative adversarial networks,” IEEE Access, vol. 7, pp. 37970–37979, 2019.