[1] Yapeng Tian, Yulun Zhang, Yun Fu, and Chenliang Xu, "TDAN: Temporally-deformable alignment network for video super-resolution," in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 3357–3366.
[2] Hua Wang, Dewei Su, Chuangchuang Liu, Longcun Jin, Xianfang Sun, and Xinyi Peng, "Deformable non-local network for video super-resolution," IEEE Access, vol. 7, pp. 177734–177744, 2019.
[3] Xintao Wang, Kelvin CK Chan, Ke Yu, Chao Dong, and Chen Change Loy, "EDVR: Video restoration with enhanced deformable convolutional networks," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.
[4] Xinyi Ying, Longguang Wang, Yingqian Wang, Weidong Sheng, Wei An, and Yulan Guo, "Deformable 3D convolution for video super-resolution," IEEE Signal Processing Letters, vol. 27, pp. 1500–1504, 2020.
[5] Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong Zhang, Han Hu, and Yichen Wei, "Deformable convolutional networks," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 764–773.
[6] Nian-Hui Lin, "Memory-efficient deformable convolution engine with location-confined for video super-resolution," Master's thesis, National Tsing Hua University (NTHU), 2022.
[7] François Chollet, "Xception: Deep learning with depthwise separable convolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1251–1258.
[8] Chao-Tsung Huang, Yu-Chun Ding, Huan-Ching Wang, Chi-Wen Weng, Kai-Ping Lin, Li-Wei Wang, and Li-De Chen, "eCNN: A block-based and highly-parallel CNN accelerator for edge inference," in Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture, 2019, pp. 182–195.
[9] Yu-Chun Ding, Kai-Pin Lin, Chi-Wen Weng, Li-Wei Wang, Huan-Ching Wang, Chun-Yeh Lin, Yong-Tai Chen, and Chao-Tsung Huang, "A 4.6–8.3 TOPS/W 1.2–4.9 TOPS CNN-based computational imaging processor with overlapped stripe inference achieving 4K ultra-HD 30 fps," in European Solid-State Circuits Conference (ESSCIRC), 2022.
[10] Kelvin CK Chan, Xintao Wang, Ke Yu, Chao Dong, and Chen Change Loy, "Understanding deformable alignment in video super-resolution," in Proceedings of the AAAI Conference on Artificial Intelligence, 2021, vol. 35, pp. 973–981.
[11] Ce Liu and Deqing Sun, "On Bayesian adaptive video super resolution," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 2, pp. 346–360, 2013.
[12] Tianfan Xue, Baian Chen, Jiajun Wu, Donglai Wei, and William T Freeman, "Video enhancement with task-oriented flow," International Journal of Computer Vision, vol. 127, no. 8, pp. 1106–1125, 2019.
[13] Jose Caballero, Christian Ledig, Andrew Aitken, Alejandro Acosta, Johannes Totz, Zehan Wang, and Wenzhe Shi, "Real-time video super-resolution with spatio-temporal networks and motion compensation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 4778–4787.
[14] Ding Liu, Zhaowen Wang, Yuchen Fan, Xianming Liu, Zhangyang Wang, Shiyu Chang, and Thomas Huang, "Robust video super-resolution with learned temporal dynamics," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2507–2515.
[15] Xin Tao, Hongyun Gao, Renjie Liao, Jue Wang, and Jiaya Jia, "Detail-revealing deep video super-resolution," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 4472–4480.
[16] Mehdi SM Sajjadi, Raviteja Vemulapalli, and Matthew Brown, "Frame-recurrent video super-resolution," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 6626–6634.
[17] Younghyun Jo, Seoung Wug Oh, Jaeyeon Kang, and Seon Joo Kim, "Deep video super-resolution network using dynamic upsampling filters without explicit motion compensation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 3224–3232.
[18] Takashi Isobe, Songjiang Li, Xu Jia, Shanxin Yuan, Gregory Slabaugh, Chunjing Xu, Ya-Li Li, Shengjin Wang, and Qi Tian, "Video super-resolution with temporal group attention," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 8008–8017.
[19] Seungjun Nah, Sungyong Baik, Seokil Hong, Gyeongsik Moon, Sanghyun Son, Radu Timofte, and Kyoung Mu Lee, "NTIRE 2019 challenge on video deblurring and super-resolution: Dataset and study," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.
[20] Saehyun Ahn, Jung-Woo Chang, and Suk-Ju Kang, "An efficient accelerator design methodology for deformable convolutional networks," in IEEE International Conference on Image Processing (ICIP), 2020, pp. 3075–3079.
[21] Sanghamitra Dutta, Ziqian Bai, Tze Meng Low, and Pulkit Grover, "CodeNet: Training large scale neural networks in presence of soft-errors," arXiv preprint arXiv:1903.01042, 2019.
[22] Wenzhe Shi, Jose Caballero, Ferenc Huszár, Johannes Totz, Andrew P Aitken, Rob Bishop, Daniel Rueckert, and Zehan Wang, "Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 1874–1883.
[23] Wei-Sheng Lai, Jia-Bin Huang, Narendra Ahuja, and Ming-Hsuan Yang, "Deep Laplacian pyramid networks for fast and accurate super-resolution," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 624–632.
[24] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al., "PyTorch: An imperative style, high-performance deep learning library," Advances in Neural Information Processing Systems, vol. 32, 2019.
[25] Diederik P Kingma and Jimmy Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.