Detailed Record

Author (Chinese): 林學澤
Author (English): Lin, Hsueh-Tse
Title (Chinese): 基於關注物體考量之雙魚眼鏡頭全景拼接
Title (English): Salient Object-aware Panorama Stitching of Dual-fisheye Camera
Advisor (Chinese): 林嘉文
Advisor (English): Lin, Chia-Wen
Committee (Chinese): 蔡文錦, 施皇嘉, 胡敏君
Committee (English): Tsai, Wen-Jiin; Shih, Huang-Chia; Hu, Min-Chun
Degree: Master's
University: National Tsing Hua University
Department: Department of Electrical Engineering
Student ID: 107061516
Publication Year (ROC calendar): 109
Graduation Academic Year: 109
Language: English
Number of Pages: 34
Keywords (Chinese): 拼接, 雙魚眼鏡頭, 顯著物體切割, 人物偵測
Keywords (English): Stitching; Dual-fisheye camera; Salient object segmentation; Human detection
Recently, dual-fisheye cameras have been widely used in many applications, such as extreme sports, virtual reality, and meeting-room cameras. Taking the meeting-room camera as an example, it is important to keep every attendee's face intact, because people are the salient and important objects in that setting. Consequently, producing a seamless, artifact-free stitch becomes challenging when a person moves across the overlapping region between the two fisheye lenses.
The warping and seam-finding stages play a decisive role in the quality of the final panorama. However, existing methods do not take moving-object information into account in these two stages, especially in seam finding. In video, changes in the seam position directly affect the viewing experience, so in addition to the commonly used regularization terms, we design an algorithm tailored to moving and salient objects.
In this thesis, we propose a moving- and salient-object-aware panorama stitching algorithm for dual-fisheye cameras. First, to make the alignment term in the warping stage focus on matching points that lie on moving objects, we compute a moving weight m_t from the matching points' coordinates. Second, since an appropriate seam position helps reduce stitching artifacts caused by warping error, we compare the warping results of consecutive frames to obtain an adaptive weight λ_t for the seam regularization term. Third, to improve the stitching quality around moving and salient objects, we use human detection and segmentation results to add corresponding penalties to the energy map used in seam finding.
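The three components above can be illustrated with a short sketch. The abstract does not spell out the formulas, so the functional forms below (a Gaussian-style moving weight derived from matching-point displacement, and a constant additive penalty on the seam-finding energy map) and all names (`moving_weights`, `penalized_energy`, `find_vertical_seam`, `sigma`, `penalty`) are illustrative assumptions, not the author's implementation; the seam search itself follows the standard seam-carving dynamic program [13].

```python
import numpy as np

def moving_weights(pts_prev, pts_curr, sigma=2.0):
    """Per-point weight m_t: matching points that moved more between
    frames get a weight near 1, so an alignment term weighted by this
    focuses on moving objects; static points get a weight near 0."""
    disp = np.linalg.norm(pts_curr - pts_prev, axis=1)  # displacement per point
    return 1.0 - np.exp(-(disp ** 2) / (2.0 * sigma ** 2))

def penalized_energy(grad_energy, salient_mask, penalty=1e3):
    """Add a large constant penalty wherever the salient-object mask
    (e.g., from human detection/segmentation) is set, so the optimal
    seam avoids cutting through salient objects."""
    return grad_energy.astype(np.float64) + penalty * salient_mask.astype(np.float64)

def find_vertical_seam(energy):
    """Classic seam-carving DP: minimum-cost 8-connected top-to-bottom path."""
    h, w = energy.shape
    cost = energy.astype(np.float64).copy()
    for y in range(1, h):
        left = np.r_[np.inf, cost[y - 1, :-1]]   # predecessor one column left
        right = np.r_[cost[y - 1, 1:], np.inf]   # predecessor one column right
        cost[y] += np.minimum(np.minimum(left, cost[y - 1]), right)
    # Backtrack the cheapest path from the bottom row upward.
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
    return seam
```

With a uniform gradient energy and a salient mask covering the central columns, the returned seam stays entirely outside the masked region, which is the behaviour the penalty term is meant to enforce.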
Abstract (Chinese) ..... ii
Abstract ..... iii
Contents ..... iv
Chapter 1 Introduction ..... 6
Chapter 2 Related Work ..... 8
  2.1 Control Points ..... 8
  2.2 Mesh Warping ..... 8
Chapter 3 Proposed Method ..... 10
  3.1 Overview ..... 10
  3.2 Image Preprocessing ..... 11
    A. Lens Shading Correction ..... 11
    B. Image Unwarping ..... 12
    C. AWB Correction ..... 13
  3.3 Matching Points Finding ..... 14
    A. SuperPoint ..... 14
    B. Adaptive Threshold for RANSAC ..... 16
  3.4 Stitching ..... 17
    A. REW with Moving Weight ..... 17
    B. Salient Object-aware Seam Carving ..... 19
    C. Multiband Blending ..... 20
Chapter 4 Experiments and Discussion ..... 21
  4.1 Dataset ..... 21
  4.2 Implementation Details ..... 22
  4.3 Evaluation ..... 23
  4.4 Visualization ..... 24
    A. Single Image Test ..... 24
    B. Video Test ..... 27
  4.5 Blind Test ..... 31
Chapter 5 Conclusion ..... 32
References ..... 33
1. "LUNA 360 Camera." [Online]. Available: http://luna.camera/
2. D. G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.
3. D. DeTone, T. Malisiewicz, and A. Rabinovich, "SuperPoint: Self-Supervised Interest Point Detection and Description," in Proc. IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2018, pp. 337-349.
4. E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: An Efficient Alternative to SIFT or SURF," in Proc. IEEE International Conference on Computer Vision (ICCV), 2011, pp. 2564-2571.
5. G. Sharma, W. Wu, and E. N. Dalal, "The CIEDE2000 Color-Difference Formula: Implementation Notes, Supplementary Test Data, and Mathematical Observations," Color Research & Application, vol. 30, no. 1, pp. 21-30, 2005.
6. I. C. Lo, K. T. Shih, and H. H. Chen, "Image Stitching for Dual Fisheye Cameras," in Proc. IEEE International Conference on Image Processing (ICIP), 2018, pp. 3164-3168.
7. I. C. Lo, K. T. Shih, and H. H. Chen, "360° Video Stitching for Dual Fisheye Cameras," in Proc. IEEE International Conference on Image Processing (ICIP), 2019, pp. 3522-3526.
8. J. Redmon and A. Farhadi, "YOLOv3: An Incremental Improvement," arXiv:1804.02767, 2018.
9. J. Li, Z. Wang, S. Lai, Y. Zhai, and M. Zhang, "Parallax-Tolerant Image Stitching Based on Robust Elastic Warping," IEEE Transactions on Multimedia, vol. 20, no. 7, 2018.
10. M. A. Fischler and R. C. Bolles, "Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography," Communications of the ACM, vol. 24, pp. 381-395, 1981.
11. M. Brown and D. G. Lowe, "Automatic Panoramic Image Stitching Using Invariant Features," International Journal of Computer Vision, vol. 74, no. 1, pp. 59-73, 2007.
12. M. Afifi, B. Price, S. Cohen, and M. S. Brown, "When Color Constancy Goes Wrong: Correcting Improperly White-Balanced Images," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 1535-1544.
13. S. Avidan and A. Shamir, "Seam Carving for Content-Aware Image Resizing," ACM Transactions on Graphics, vol. 26, no. 3, 2007.
14. S. Schaefer, T. McPhail, and J. Warren, "Image Deformation Using Moving Least Squares," in ACM SIGGRAPH 2006 Papers, 2006, pp. 533-540.
15. T. Ho and M. Budagavi, "Dual-Fisheye Lens Stitching for 360-Degree Imaging," in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017.
16. T. Ho, I. Schizas, K. R. Rao, and M. Budagavi, "360-Degree Video Stitching for Dual-Fisheye Lens Cameras Based on Rigid Moving Least Squares," in Proc. IEEE International Conference on Image Processing (ICIP), 2017, pp. 51-55.
17. T. Souza et al., "360 Stitching from Dual-Fisheye Cameras Based on Feature Cluster Matching," in Proc. SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), 2018, pp. 313-320.