[1] C. Huang, Y. Wang, L. Huang, J. Chin, and L. Chen, “Fast physically correct refocusing for sparse light fields using block-based multi-rate view interpolation,” IEEE Transactions on Image Processing, vol. 26, no. 2, pp. 603–618, 2017.
[2] S. Meyer, O. Wang, H. Zimmer, M. Grosse, and A. Sorkine-Hornung, “Phase-based frame interpolation for video,” in IEEE Conference on Computer Vision and Pattern Recognition, pp. 735–744, 2015.
[3] N. Wadhwa, M. Rubinstein, F. Durand, and W. T. Freeman, “Phase-based video motion processing,” ACM Transactions on Graphics, vol. 32, no. 4, pp. 80:1–80:10, 2013.
[4] P. Didyk, P. Sitthi-Amorn, W. T. Freeman, F. Durand, and W. Matusik, “Joint view expansion and filtering for automultiscopic 3D displays,” ACM Transactions on Graphics, vol. 32, no. 6, pp. 221:1–221:8, 2013.
[5] Y.-Y. Li, “Phase-based frame interpolation for non-pixel-wise movements,” 2019.
[6] E. P. Simoncelli and W. T. Freeman, “The steerable pyramid: A flexible architecture for multi-scale derivative computation,” in International Conference on Image Processing, vol. 3, pp. 444–447, 1995.
[7] J. Portilla and E. P. Simoncelli, “A parametric texture model based on joint statistics of complex wavelet coefficients,” International Journal of Computer Vision, vol. 40, no. 1, pp. 49–70, 2000.
[8] W. T. Freeman and E. H. Adelson, “The design and use of steerable filters,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 9, pp. 891–906, 1991.
[9] S. Meyer, A. Sorkine-Hornung, and M. Gross, “Phase-based modification transfer for video,” in European Conference on Computer Vision, pp. 633–648, 2016.
[10] P. Kellnhofer, P. Didyk, S. Wang, P. Sitthi-Amorn, W. T. Freeman, F. Durand, and W. Matusik, “3DTV at home: Eulerian-Lagrangian stereo-to-multiview conversion,” ACM Transactions on Graphics, vol. 36, no. 4, pp. 146:1–146:13, 2017.
[11] K. Honauer, O. Johannsen, D. Kondermann, and B. Goldlücke, “A dataset and evaluation methodology for depth estimation on 4D light fields,” in Asian Conference on Computer Vision, pp. 19–34, 2016.
[12] S. Wanner, S. Meister, and B. Goldlücke, “Datasets and benchmarks for densely sampled 4D light fields,” in Vision, Modeling, and Visualization, pp. 225–226, 2013.
[13] C. Huang, J. Chin, H. Chen, Y. Wang, and L. Chen, “Fast realistic refocusing for sparse light fields,” in IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 1176–1180, 2015.
[14] “The (New) Stanford Light Field Archive,” http://lightfield.stanford.edu/.
[15] “Middlebury Stereo Datasets,” http://vision.middlebury.edu/stereo/data/.