[1] C.-K. Liang, C.-C. Cheng, Y.-C. Lai, L.-G. Chen, and H. H. Chen, “Hardware-efficient belief propagation,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 21, no. 5, pp. 525–537, 2011.
[2] J. Noraky and V. Sze, “Low power depth estimation of rigid objects for time-of-flight imaging,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 30, no. 6, pp. 1524–1534, 2019.
[3] J. J. Clark, “Active photometric stereo,” in IEEE Conference on Computer Vision and Pattern Recognition, 1992, pp. 29–34.
[4] J. L. G. Bello and M. Kim, “Self-supervised deep monocular depth estimation with ambiguity boosting,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 12, pp. 9131–9149, 2021.
[5] K. Zhou, X. Meng, and B. Cheng, “Review of stereo matching algorithms based on deep learning,” Computational Intelligence and Neuroscience, vol. 2020, no. 1, p. 8562323, 2020.
[6] L. Liu, Y. Liu, Y. Lv, and J. Xing, “LANet: Stereo matching network based on linear-attention mechanism for depth estimation optimization in 3D reconstruction of inter-forest scene,” Frontiers in Plant Science, vol. 13, p. 978564, 2022.
[7] H. Hirschmuller, “Accurate and efficient stereo processing by semi-global matching and mutual information,” in IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, 2005, pp. 807–814.
[8] A. Gershun, “The light field,” Journal of Mathematics and Physics, vol. 18, no. 1-4, pp. 51–151, 1939.
[9] M. Levoy and P. Hanrahan, “Light field rendering,” in Seminal Graphics Papers: Pushing the Boundaries, Volume 2, 2023, pp. 441–452.
[10] Y. Furukawa, C. Hernández et al., “Multi-view stereo: A tutorial,” Foundations and Trends® in Computer Graphics and Vision, vol. 9, no. 1-2, pp. 1–148, 2015.
[11] G. Luo and Y. Zhu, “Hole filling for view synthesis using depth guided global optimization,” IEEE Access, vol. 6, pp. 32874–32889, 2018.
[12] S. Wanner and B. Goldluecke, “Variational light field analysis for disparity estimation and super-resolution,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 3, pp. 606–619, 2014.
[13] V. Masselus, P. Peers, P. Dutré, and Y. D. Willems, “Relighting with 4D incident light fields,” ACM Transactions on Graphics, vol. 22, no. 3, pp. 613–620, 2003.
[14] Z. Li, L. Song, Z. Chen, X. Du, L. Chen, J. Yuan, and Y. Xu, “Relit-NeuLF: Efficient relighting and novel view synthesis via neural 4D light field,” in ACM International Conference on Multimedia, 2023, pp. 7007–7016.
[15] C.-T. Huang, J. Chin, H.-H. Chen, Y.-W. Wang, and L.-G. Chen, “Fast realistic refocusing for sparse light fields,” in IEEE International Conference on Acoustics, Speech and Signal Processing, 2015, pp. 1176–1180.
[16] T. Gregory, M. P. Edgar, G. M. Gibson, and P.-A. Moreau, “A gigapixel computational light-field camera,” arXiv preprint arXiv:1910.08338, 2019.
[17] Y.-L. Hsiao, “VLSI architecture for high-quality belief propagation with large tiles and conditional random fields,” NTHU, 2017.
[18] S. Wanner, S. Meister, and B. Goldluecke, “Datasets and benchmarks for densely sampled 4D light fields,” in Vision, Modeling & Visualization, vol. 13, 2013, pp. 225–226.
[19] J. Li, Z. Lu, G. Zeng, R. Gan, and H. Zha, “Similarity-aware patchwork assembly for depth image super-resolution,” in IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 3374–3381.
[20] “Middlebury stereo,” vision.middlebury.edu/stereo/, accessed: 2024-06-10.
[21] H. H. Chen, C. T. Huang, S. S. Wu, C. L. Hung, T. C. Ma, and L. G. Chen, “23.2 A 1920×1080 30 fps 611 mW five-view depth-estimation processor for light-field applications,” in IEEE International Solid-State Circuits Conference, 2015, pp. 1–3.
[22] J. Lee, D. Shin, K. Lee, and H.-J. Yoo, “A 31.2 pJ/disparity·pixel stereo matching processor with stereo SRAM for mobile UI application,” in Symposium on VLSI Circuits, 2017, pp. C158–C159.
[23] W. Wang, J. Yan, N. Xu, Y. Wang, and F.-H. Hsu, “Real-time high-quality stereo vision system in FPGA,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 10, pp. 1696–1708, 2015.
[24] Y.-T. Lu, “Memory-efficient VLSI architecture of cost volume generation for high-range light-field stereo matching,” NTHU, 2017.
[25] J.-X. Chai, X. Tong, S.-C. Chan, and H.-Y. Shum, “Plenoptic sampling,” in ACM Special Interest Group on GRAPHics and Interactive Techniques, 2000, pp. 307–318.
[26] G. Wetzstein, D. Lanman, M. Hirsch, and R. Raskar, “Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting,” ACM Transactions on Graphics, vol. 31, no. 4, pp. 1–11, 2012.
[27] D. D. Lee and H. S. Seung, “Learning the parts of objects by non-negative matrix factorization,” Nature, vol. 401, no. 6755, pp. 788–791, 1999.
[28] D. Zachariah, M. Sundin, M. Jansson, and S. Chatterjee, “Alternating least-squares for low-rank matrix reconstruction,” IEEE Signal Processing Letters, vol. 19, no. 4, pp. 231–234, 2012.
[29] Y.-X. Wang and Y.-J. Zhang, “Nonnegative matrix factorization: A comprehensive review,” IEEE Transactions on Knowledge and Data Engineering, vol. 25, no. 6, pp. 1336–1353, 2012.
[30] M. W. Berry, M. Browne, A. N. Langville, V. P. Pauca, and R. J. Plemmons, “Algorithms and applications for approximate nonnegative matrix factorization,” Computational Statistics & Data Analysis, vol. 52, no. 1, pp. 155–173, 2007.
[31] A. Cichocki, S.-i. Amari, R. Zdunek, R. Kompass, G. Hori, and Z. He, “Extended SMART algorithms for non-negative matrix factorization,” in Artificial Intelligence and Soft Computing, vol. 4029, 2006, pp. 548–562.
[32] N. Gillis and F. Glineur, “Accelerated multiplicative updates and hierarchical ALS algorithms for nonnegative matrix factorization,” Neural Computation, vol. 24, no. 4, pp. 1085–1105, 2012.
[33] C.-J. Hsieh and I. S. Dhillon, “Fast coordinate descent methods with variable selection for non-negative matrix factorization,” in ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2011, pp. 1064–1072.
[34] F. Heide, G. Wetzstein, R. Raskar, and W. Heidrich, “Adaptive image synthesis for compressive displays,” ACM Transactions on Graphics, vol. 32, no. 4, pp. 1–12, 2013.
[35] D. Lanman, M. Hirsch, Y. Kim, and R. Raskar, “Content-adaptive parallax barriers: Optimizing dual-layer 3D displays using low-rank light field factorization,” in ACM Special Interest Group on GRAPHics and Interactive Techniques Asia, 2010, pp. 1–10.
[36] G. Turk, “Large Geometric Models Archive,” https://graphics.stanford.edu/data/3Dscanrep/, accessed: 2024-06-10.
[37] Panba, “Shattering Cubes,” https://www.blendswap.com/blend/20738, accessed: 2024-06-10.
[38] clive9, “Newtons Cradle,” https://www.blendswap.com/blend/18816, accessed: 2024-06-10.
[39] D. Haupt, “3D Wolf Animated and Game-Ready,” https://3dhaupt.com/, accessed: 2024-06-10.
[40] Blender Online Community, Blender – a 3D modelling and rendering package, http://www.blender.org, Stichting Blender Foundation, Amsterdam, 2018.
[41] Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
[42] G. Luo and Y. Zhu, “Foreground removal approach for hole filling in 3D video and FVV synthesis,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 27, no. 10, pp. 2118–2131, 2016.
[43] X. Liu, Y. Zhang, S. Hu, S. Kwong, C.-C. J. Kuo, and Q. Peng, “Subjective and objective video quality assessment of 3D synthesized views with texture/depth compression distortion,” IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 4847–4861, 2015.
[44] A. Torralba and A. Oliva, “Depth estimation from image structure,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 9, pp. 1226–1238, 2002.
[45] C. Chen, H. Lin, Z. Yu, S. B. Kang, and J. Yu, “Light field stereo matching using bilateral statistics of surface cameras,” in IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1518–1525.
[46] C. Kim, H. Zimmer, Y. Pritch, A. Sorkine-Hornung, and M. Gross, “Scene reconstruction from high spatio-angular resolution light fields,” ACM Transactions on Graphics, vol. 32, no. 4, p. 1, 2013.
[47] Q. Yang, “A non-local cost aggregation method for stereo matching,” in IEEE Conference on Computer Vision and Pattern Recognition, 2012, pp. 1402–1409.
[48] X. Mei, X. Sun, W. Dong, H. Wang, and X. Zhang, “Segment-tree based cost aggregation for stereo matching,” in IEEE Conference on Computer Vision and Pattern Recognition, 2013, pp. 313–320.
[49] “The Stanford light field archive, Stanford University Computer Graphics Laboratory,” http://lightfield.stanford.edu/papers.html, accessed: 2017-06-08.
[50] K. Honauer, O. Johannsen, D. Kondermann, and B. Goldluecke, “A dataset and evaluation methodology for depth estimation on 4D light fields,” in Asian Conference on Computer Vision, 2016.
[51] C. C. Cheng, C. T. Li, C. K. Liang, Y. C. Lai, and L. G. Chen, “Architecture design of stereo matching using belief propagation,” in IEEE International Symposium on Circuits and Systems, 2010, pp. 4109–4112.
[52] S. Birchfield and C. Tomasi, “A pixel dissimilarity measure that is insensitive to image sampling,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 4, pp. 401–406, 1998.
[53] H. Hirschmuller and D. Scharstein, “Evaluation of stereo matching costs on images with radiometric differences,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 9, pp. 1582–1599, 2009.
[54] N. Y. C. Chang, T. H. Tsai, B. H. Hsu, Y. C. Chen, and T. S. Chang, “Algorithm and architecture of disparity estimation with mini-census adaptive support weight,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 20, no. 6, pp. 792–805, 2010.
[55] J. Ko and Y. S. Ho, “Stereo matching using census transform of adaptive window sizes with gradient images,” in Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, 2016, pp. 1–4.
[56] D. Scharstein, R. Szeliski, and R. Zabih, “A taxonomy and evaluation of dense two-frame stereo correspondence algorithms,” in IEEE Workshop on Stereo and Multi-Baseline Vision, 2001, pp. 131–140.
[57] J. Sun, N.-N. Zheng, and H.-Y. Shum, “Stereo matching using belief propagation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 7, pp. 787–800, 2003.
[58] S. Wanner and B. Goldluecke, “Globally consistent depth labeling of 4D light fields,” in IEEE Conference on Computer Vision and Pattern Recognition, 2012, pp. 41–48.
[59] S. Z. Li, Markov Random Field Modeling in Image Analysis. Springer Science & Business Media, 2009.
[60] A. Hosni, C. Rhemann, M. Bleyer, C. Rother, and M. Gelautz, “Fast cost-volume filtering for visual correspondence and beyond,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 2, pp. 504–511, 2013.
[61] C. Ttofis and T. Theocharides, “High-quality real-time hardware stereo matching based on guided image filtering,” in Design, Automation & Test in Europe Conference & Exhibition, 2014, pp. 1–6.
[62] D. W. Yang, L. C. Chu, C. W. Chen, J. Wang, and M. D. Shieh, “Depth-reliability-based stereo-matching algorithm and its VLSI architecture design,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 6, pp. 1038–1050, 2015.
[63] Y. Y. Boykov and M. P. Jolly, “Interactive graph cuts for optimal boundary and region segmentation of objects in N-D images,” in IEEE International Conference on Computer Vision, vol. 1, 2001, pp. 105–112.
[64] T. Peterka, R. L. Kooima, D. J. Sandin, A. Johnson, J. Leigh, and T. A. DeFanti, “Advances in the Dynallax solid-state dynamic parallax barrier autostereoscopic visualization display system,” IEEE Transactions on Visualization and Computer Graphics, vol. 14, no. 3, pp. 487–499, 2008.
[65] K. Sakamoto and T. Morii, “Multi-view 3D display using parallax barrier combined with polarizer,” in Advanced Free-Space Optical Communication Techniques/Applications II and Photonic Components/Architectures for Microwave Systems and Displays, vol. 6399. SPIE, 2006, pp. 214–221.
[66] H. Seetzen, W. Heidrich, W. Stuerzlinger, G. Ward, L. Whitehead, M. Trentacoste, A. Ghosh, and A. Vorozcovs, “High dynamic range display systems,” in ACM Special Interest Group on GRAPHics and Interactive Techniques, 2004, pp. 760–768.
[67] K. Takahashi, Y. Kobayashi, and T. Fujii, “From focal stack to tensor light-field display,” IEEE Transactions on Image Processing, vol. 27, no. 9, pp. 4571–4584, 2018.
[68] L. Si, Q. Wang, and Z. Xiao, “Matching cost fusion in dense depth recovery for camera-array via global optimization,” in IEEE International Conference on Virtual Reality and Visualization, 2014, pp. 180–185.
[69] X. Mei, X. Sun, M. Zhou, S. Jiao, H. Wang, and X. Zhang, “On building an accurate stereo matching system on graphics hardware,” in IEEE International Conference on Computer Vision Workshops, 2011, pp. 467–474.
[70] M. Levoy, “Light fields and computational imaging,” Computer, vol. 39, no. 8, pp. 46–55, 2006.
[71] S. B. Gokturk, H. Yalcin, and C. Bamji, “A time-of-flight depth sensor – system description, issues and solutions,” in IEEE Conference on Computer Vision and Pattern Recognition Workshop, 2004, pp. 35–35.
[72] Y. S. Heo, K. M. Lee, and S. U. Lee, “Robust stereo matching using adaptive normalized cross-correlation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 4, pp. 807–822, 2010.
[73] G. Li, “Stereo matching using normalized cross-correlation in LogRGB space,” in IEEE International Conference on Computer Vision in Remote Sensing, 2012, pp. 19–23.
[74] G.-Q. Wei, W. Brauer, and G. Hirzinger, “Intensity- and gradient-based stereo matching using hierarchical Gaussian basis functions,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 11, pp. 1143–1160, 1998.
[75] X. Song, X. Zhao, L. Fang, H. Hu, and Y. Yu, “EdgeStereo: An effective multi-task learning network for stereo matching and edge detection,” International Journal of Computer Vision, vol. 128, no. 4, pp. 910–930, 2020.
[76] M. Poggi, D. Pallotti, F. Tosi, and S. Mattoccia, “Guided stereo matching,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 979–988.
[77] J. Joglekar, S. S. Gedam, and B. K. Mohan, “Image matching using SIFT features and relaxation labeling technique – a constraint initializing method for dense stereo matching,” IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 9, pp. 5643–5652, 2014.
[78] G. Saygili, L. van der Maaten, and E. A. Hendriks, “Improving segment based stereo matching using SURF key points,” in IEEE International Conference on Image Processing, 2012, pp. 2973–2976.
[79] Y. Boykov and O. Veksler, Graph Cuts in Vision and Graphics: Theories and Applications. Springer US, 2006, pp. 79–96.
[80] J. Sun, N.-N. Zheng, and H.-Y. Shum, “Stereo matching using belief propagation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 7, pp. 787–800, 2003.
[81] Y. Ohta and T. Kanade, “Stereo by intra- and inter-scanline search using dynamic programming,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-7, no. 2, pp. 139–154, 1985.
[82] X. Sun, X. Mei, S. Jiao, M. Zhou, and H. Wang, “Stereo matching with reliable disparity propagation,” in IEEE Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission, 2011, pp. 132–139.
[83] N. A. Dodgson, “Autostereoscopic 3D displays,” Computer, vol. 38, no. 8, pp. 31–36, 2005.
[84] J. Arai, F. Okano, M. Kawakita, M. Okui, Y. Haino, M. Yoshimura, M. Furuya, and M. Sato, “Integral three-dimensional television using a 33-megapixel imaging system,” Journal of Display Technology, vol. 6, no. 10, pp. 422–430, 2010.
[85] W.-X. Zhao, Q.-H. Wang, A.-H. Wang, and D.-H. Li, “Autostereoscopic display based on two-layer lenticular lenses,” Optics Letters, vol. 35, no. 24, pp. 4127–4129, 2010.
[86] H.-J. Im, B.-J. Lee, H.-K. Hong, and H.-H. Shin, “Auto-stereoscopic 60-view 3D using slanted lenticular lens arrays,” Journal of Information Display, vol. 8, no. 4, pp. 23–26, 2007.
[87] Y. Takaki and N. Nago, “Multi-projection of lenticular displays to construct a 256-view super multi-view display,” Optics Express, vol. 18, no. 9, pp. 8824–8835, 2010.
[88] D. Lanman, G. Wetzstein, M. Hirsch, W. Heidrich, and R. Raskar, “Polarization fields: Dynamic light field display using multi-layer LCDs,” in ACM Special Interest Group on GRAPHics and Interactive Techniques Asia, 2011, pp. 1–10.
[89] A. Shashua and T. Hazan, “Non-negative tensor factorization with applications to statistics and computer vision,” in International Conference on Machine Learning, 2005, pp. 792–799.
[90] V. D. Blondel, N.-D. Ho, and P. Van Dooren, “Weighted nonnegative matrix factorization and face feature extraction,” Image and Vision Computing, vol. 1, p. 17, 2008.
[91] A. H. Andersen and A. C. Kak, “Simultaneous algebraic reconstruction technique (SART): A superior implementation of the ART algorithm,” Ultrasonic Imaging, vol. 6, no. 1, pp. 81–94, 1984.
[92] G. Wetzstein, D. Lanman, W. Heidrich, and R. Raskar, “Layered 3D: Tomographic image synthesis for attenuation-based light field and high dynamic range displays,” in ACM Special Interest Group on GRAPHics and Interactive Techniques, 2011, pp. 1–12.
[93] J. Zhang, Z. Fan, D. Sun, and H. Liao, “Unified mathematical model for multilayer-multiframe compressive light field displays using LCDs,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 3, pp. 1603–1614, 2018.
[94] K. Maruyama, Y. Inagaki, K. Takahashi, T. Fujii, and H. Nagahara, “A 3D display pipeline from coded-aperture camera to tensor light-field display through CNN,” in IEEE International Conference on Image Processing, 2019, pp. 1064–1068.
[95] K. Maruyama, K. Takahashi, and T. Fujii, “Comparison of layer operations and optimization methods for light field display,” IEEE Access, vol. 8, pp. 38767–38775, 2020.
[96] P.-H. Chen, S.-W. Yang, and C.-T. Huang, “A 250-mW 5.4-G-rendered-pixel/s realistic refocusing processor for high-performance five-camera mobile devices,” IEEE Open Journal of the Solid-State Circuits Society, vol. 3, pp. 52–62, 2023.
[97] F.-C. Huang, D. P. Luebke, and G. Wetzstein, “The light field stereoscope,” in ACM Special Interest Group on GRAPHics and Interactive Techniques Emerging Technologies, 2015, pp. 24–1.
[98] X. Liu, K. Kang, and Y. Liu, “Stereoscopic image quality assessment based on depth and texture information,” IEEE Systems Journal, vol. 11, no. 4, pp. 2829–2838, 2016.