[1] H. Talebi and P. Milanfar. “Learned perceptual image enhancement”. In: 2018 IEEE International Conference on Computational Photography (ICCP). 2018, pp. 1–13. doi: 10.1109/ICCPHOT.2018.8368474.
[2] Y. Chen et al. “Deep Photo Enhancer: Unpaired Learning for Image Enhancement from Photographs with GANs”. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2018, pp. 6306–6314. doi: 10.1109/CVPR.2018.00660.
[3] S. Wang et al. “Naturalness Preserved Enhancement Algorithm for Non-Uniform Illumination Images”. In: IEEE Transactions on Image Processing 22.9 (2013), pp. 3538–3548. issn: 1941-0042. doi: 10.1109/TIP.2013.2261309.
[4] V. Bychkovsky et al. “Learning photographic global tonal adjustment with a database of input/output image pairs”. In: CVPR 2011. 2011, pp. 97–104. doi: 10.1109/CVPR.2011.5995332.
[5] Ian Goodfellow et al. “Generative Adversarial Nets”. In: Advances in Neural Information Processing Systems 27. Ed. by Z. Ghahramani et al. Curran Associates, Inc., 2014, pp. 2672–2680. url: http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf.
[6] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. “U-Net: Convolutional Networks for Biomedical Image Segmentation”. In: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. Ed. by Nassir Navab et al. Cham: Springer International Publishing, 2015, pp. 234–241. isbn: 978-3-319-24574-4.
[7] K. He et al. “Deep Residual Learning for Image Recognition”. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016, pp. 770–778. doi: 10.1109/CVPR.2016.90.
[8] Y. LeCun et al. “Gradient-Based Learning Applied to Document Recognition”. In: Intelligent Signal Processing. IEEE Press, 2001, pp. 306–351.
[9] L. Kang et al. “Convolutional Neural Networks for No-Reference Image Quality Assessment”. In: 2014 IEEE Conference on Computer Vision and Pattern Recognition. 2014, pp. 1733–1740. doi: 10.1109/CVPR.2014.224.
[10] S. Bosse et al. “A deep neural network for image quality assessment”. In: 2016 IEEE International Conference on Image Processing (ICIP). 2016, pp. 3773–3777. doi: 10.1109/ICIP.2016.7533065.
[11] N. Murray, L. Marchesotti, and F. Perronnin. “AVA: A large-scale database for aesthetic visual analysis”. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition. 2012, pp. 2408–2415. doi: 10.1109/CVPR.2012.6247954.
[12] Y. Kao, C. Wang, and K. Huang. “Visual aesthetic quality assessment with a regression model”. In: 2015 IEEE International Conference on Image Processing (ICIP). 2015, pp. 1583–1587. doi: 10.1109/ICIP.2015.7351067.
[13] B. Jin, M. V. O. Segovia, and S. Süsstrunk. “Image aesthetic predictors based on weighted CNNs”. In: 2016 IEEE International Conference on Image Processing (ICIP). 2016, pp. 2291–2295. doi: 10.1109/ICIP.2016.7532767.
[14] Hui Zeng, Lei Zhang, and Alan C. Bovik. A Probabilistic Quality Representation Approach to Deep Blind Image Quality Prediction. 2017. arXiv: 1708.08190 [cs.CV].
[15] H. Talebi and P. Milanfar. “NIMA: Neural Image Assessment”. In: IEEE Transactions on Image Processing 27.8 (2018), pp. 3998–4011. issn: 1941-0042. doi: 10.1109/TIP.2018.2831899.
[16] K. Lata, M. Dave, and K. N. Nishanth. “Image-to-Image Translation Using Generative Adversarial Network”. In: 2019 3rd International Conference on Electronics, Communication and Aerospace Technology (ICECA). 2019, pp. 186–189. doi: 10.1109/ICECA.2019.8822195.
[17] K. Schwarz, P. Wieschollek, and H. P. A. Lensch. “Will People Like Your Image? Learning the Aesthetic Space”. In: 2018 IEEE Winter Conference on Applications of Computer Vision (WACV). 2018, pp. 2048–2057. doi: 10.1109/WACV.2018.00226.
[18] Andrew Howard et al. Searching for MobileNetV3. 2019. arXiv: 1905.02244 [cs.CV].
[19] J. Deng et al. “ImageNet: A large-scale hierarchical image database”. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition. 2009, pp. 248–255. doi: 10.1109/CVPR.2009.5206848.
[20] Mark Sandler et al. MobileNetV2: Inverted Residuals and Linear Bottlenecks. 2018. arXiv: 1801.04381 [cs.CV].
[21] K. He et al. “Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition”. In: IEEE Transactions on Pattern Analysis and Machine Intelligence 37.9 (2015), pp. 1904–1916. issn: 1939-3539. doi: 10.1109/TPAMI.2015.2389824.
[22] Sergey Ioffe and Christian Szegedy. “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift”. In: Proceedings of the 32nd International Conference on Machine Learning. Ed. by Francis Bach and David Blei. Vol. 37. Proceedings of Machine Learning Research. Lille, France: PMLR, 2015, pp. 448–456. url: http://proceedings.mlr.press/v37/ioffe15.html.
[23] Prajit Ramachandran, Barret Zoph, and Quoc V. Le. Searching for Activation Functions. 2017. arXiv: 1710.05941 [cs.NE].
[24] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer Normalization. 2016. arXiv: 1607.06450 [stat.ML].
[25] Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. “BinaryConnect: Training Deep Neural Networks with binary weights during propagations”. In: Advances in Neural Information Processing Systems 28. Ed. by C. Cortes et al. Curran Associates, Inc., 2015, pp. 3123–3131. url: http://papers.nips.cc/paper/5647-binaryconnect-training-deep-neural-networks-with-binary-weights-during-propagations.pdf.
[26] Martín Abadi et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. 2016. arXiv: 1603.04467 [cs.DC].
[27] Christian Szegedy et al. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. 2017. url: https://www.aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14806.
[28] Shu Kong et al. “Photo Aesthetics Ranking Network with Attributes and Content Adaptation”. In: CoRR abs/1606.01621 (2016). arXiv: 1606.01621. url: http://arxiv.org/abs/1606.01621.