
Detailed Record

Author (Chinese): 吳易展
Author (English): Wu, Yi Chan
Title (Chinese): 用於移動相機的基於位移分類及混合樣本之前景偵測
Title (English): Motion Clustering with Hybrid-Sample-based Foreground Segmentation for Moving Cameras
Advisor (Chinese): 邱瀞德
Advisor (English): Chiu, Ching Te
Committee (Chinese): 陳煥宗、范倫達、楊家輝
Committee (English): Chen, Hwann Tzong; Van, Lan Da; Yang, Jar Ferr
Degree: Master's
Institution: National Tsing Hua University
Department: Department of Computer Science
Student ID: 103062599
Publication Year (ROC calendar): 105 (2016)
Graduation Academic Year: 104
Language: Chinese
Number of Pages: 72
Keywords (Chinese): 前景偵測、移動相機、光流、自適應回饋
Keywords (English): Foreground segmentation; Moving camera; Optical flow; Adaptive feedback
Usage statistics:
  • Recommendations: 0
  • Views: 428
  • Rating: *****
  • Downloads: 8
  • Bookmarks: 0
Abstract (Chinese):
Foreground segmentation is a crucial step in many video surveillance applications. Although there is already an extensive literature on foreground segmentation, most studies are built on the assumption of a stationary scene: they treat each pixel as a fixed and independent position in the scene. When the scene moves, the position each pixel represents changes over time, so these methods cannot handle the effects of scene motion, and applying them directly to a moving scene produces a large number of detection errors. Some geometric transformation methods can handle moving scenes, but because of the errors introduced by the transformation, they cannot precisely align every pixel; moreover, geometric transformation methods cannot handle sudden large movements of the scene. Many stationary-scene foreground segmentation methods, such as the Gaussian Mixture Model, build their models on color information alone, so they usually cannot effectively detect objects whose color is close to the background. In addition, these methods use predefined global parameters and do not consider that each pixel may represent a different kind of background; for example, to reduce detection errors in a dynamic background, the sensitivity of that pixel's model to the background must be lowered. In this thesis, we propose a hybrid-sample-based foreground segmentation method for these problems, targeting in particular the pan and tilt motions of pan-tilt-zoom cameras. First, we align two consecutive frames using a homography transform, then apply our proposed motion clustering registration to reduce the impact of geometric transformation errors. Second, for sudden large movements of the scene, we propose a model reinitialization mechanism that quickly replaces unsuitable models. Next, we add the texture information of the scene to our model, adopting an effective and simple binary descriptor that can detect objects hidden in the background. Finally, we dynamically adjust the parameters of each pixel model according to the observed behavior, handling dynamic backgrounds by controlling the sensitivity and adaptation speed of each pixel model. We evaluate our method on the ChangeDetection.NET 2014 dataset. Experimental results show that our method achieves better recognition in camouflaged object regions, and that our motion clustering registration effectively removes noise. Quantitatively, our method improves on other methods by at least ten percentage points on the panning sequence, and it is also at least two percentage points better than other methods on the traffic sequence of the camera jitter category.
Abstract (English):
Foreground segmentation is a vital step for many high-level applications such as elderly surveillance, public safety, and traffic monitoring. While extensive methods have been proposed for foreground segmentation/background subtraction, most of them assume that cameras are stationary, which means they treat each pixel individually. Under this assumption they cannot handle movements caused by a moving camera, because each pixel represents varying positions when the camera moves. False detections therefore increase due to the lack of alignment between observed pixels and background models. Although a few methods have been proposed to address the alignment problem, they do not consider the impact of registration errors or sudden large movements between consecutive frames. Most of them are also unable to detect subtle changes in camouflaged objects because they usually use only color intensities to model the background. Besides, in the classification step, most of them use a global threshold which cannot represent the varied behaviors of background scenes such as waving trees. In this paper, we propose a robust hybrid-sample-based foreground segmentation method for moving cameras, especially pan-tilt-zoom cameras, to address these problems. First, we propose a motion clustering registration to reduce the impact of registration errors. We estimate a homography matrix between two consecutive frames, so the movements of pixels can be predicted by the homography transform, and a motion clustering registration refinement is adopted to minimize the impact of registration errors. Second, a frame-level reinitialization scheme is proposed to handle a sudden large movement between consecutive frames. Third, we propose a hybrid-sample-based background modeling technique in which each pixel is modeled not only by a color intensity value but also by texture information. A novel robust binary descriptor is presented for the background modeling. This allows us to easily detect camouflaged foreground objects whose color is similar to the background scene. Last, in order to deal with dynamic backgrounds, we adopt pixel-level feedback schemes to dynamically and locally control the sensitivity and the adaptation speed of the background model. We evaluate the proposed method on the ChangeDetection.NET 2014 dataset. Experimental results show that our detection results are more robust, especially in camouflaged foreground regions, and that the shape of the detection results is more accurate. The motion clustering registration eliminates most of the noise caused by registration errors. The proposed method is 10 percent better than other state-of-the-art algorithms in terms of the overall F-score on panning sequences, and it also achieves the highest F-score in camera jitter scenarios.
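The registration step outlined in the abstracts (sparse feature tracking between consecutive frames, robust homography estimation, then warping) can be sketched with standard tools. Below is a minimal Python/OpenCV illustration, not the thesis's implementation: the motion clustering refinement is omitted, and the function names, thresholds, and the coarse center-shift test used to trigger reinitialization are all hypothetical assumptions.

import cv2
import numpy as np

def register_frames(prev_gray, curr_gray):
    """Estimate the homography mapping prev_gray onto curr_gray."""
    # Detect corners in the previous frame (Shi-Tomasi variant of Harris).
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
    if pts_prev is None:
        return None
    # Track the corners into the current frame with pyramidal Lucas-Kanade.
    pts_curr, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                      pts_prev, None)
    good = status.ravel() == 1
    if good.sum() < 4:  # a homography needs at least 4 correspondences
        return None
    # Fit the homography robustly; RANSAC rejects tracks on foreground objects.
    H, _inliers = cv2.findHomography(pts_prev[good], pts_curr[good],
                                     cv2.RANSAC, ransacReprojThreshold=3.0)
    return H

def align_previous(prev_frame, curr_shape, H, max_motion=40.0):
    """Warp the previous frame (or background model) into the current view.

    Returns None when the estimated motion is implausibly large, signaling
    that a frame-level reinitialization should rebuild the model instead.
    """
    h, w = curr_shape[:2]
    # Use the displacement of the image center under H as a coarse motion score.
    center = np.array([[[w / 2.0, h / 2.0]]], dtype=np.float32)
    shift = float(np.linalg.norm(cv2.perspectiveTransform(center, H) - center))
    if shift > max_motion:
        return None  # hypothetical trigger for model reinitialization
    return cv2.warpPerspective(prev_frame, H, (w, h))

Returning None on a large center shift loosely mirrors the frame-level reinitialization idea: when frame-to-frame motion is too large for the warp to be trustworthy, the model is rebuilt rather than aligned.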
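Similarly, a sample-based pixel model with pixel-level feedback can be illustrated in the spirit of the methods the thesis builds on (ViBe, SuBSENSE). The sketch below keeps only grayscale intensity samples; the thesis's binary texture descriptor and its exact feedback rules are not reproduced, and the class name and all parameter values are placeholders.

import numpy as np

rng = np.random.default_rng(0)

class SampleBasedModel:
    """Toy per-pixel sample model with adaptive sensitivity (grayscale only)."""

    def __init__(self, first_frame, n_samples=20):
        h, w = first_frame.shape[:2]
        # Per-pixel sample set, seeded with copies of the first frame.
        self.samples = np.repeat(first_frame[None].astype(np.float32),
                                 n_samples, axis=0)
        self.R = np.full((h, w), 20.0)  # per-pixel decision threshold
        self.T = np.full((h, w), 16.0)  # per-pixel update rate (prob. 1/T)

    def segment(self, frame, min_matches=2):
        frame = frame.astype(np.float32)
        # Background if enough samples lie within this pixel's own threshold R.
        dist = np.abs(self.samples - frame[None])           # (N, H, W)
        matches = (dist < self.R[None]).sum(axis=0)
        fg = matches < min_matches

        # Crude feedback stand-in: raise R where the pixel keeps firing
        # (as a dynamic region would), relax it slowly elsewhere.
        self.R = np.clip(self.R + np.where(fg, 1.0, -0.05), 10.0, 80.0)

        # Conservative, stochastic update of background pixels only.
        update = (~fg) & (rng.random(fg.shape) < 1.0 / self.T)
        slot = rng.integers(0, self.samples.shape[0], size=fg.shape)
        ys, xs = np.nonzero(update)
        self.samples[slot[ys, xs], ys, xs] = frame[ys, xs]
        return fg

# Usage sketch: model = SampleBasedModel(gray0); mask = model.segment(gray1)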
Table of Contents:
1 Introduction
  1.1 Basis of Background Subtraction
  1.2 Background Subtraction in PTZ Cameras
  1.3 Motivation
  1.4 Contribution
  1.5 Thesis Organization
2 Related Works
  2.1 Stationary Background Subtraction Methods
    2.1.1 Parametric-based Methods
    2.1.2 Sample-based Methods
    2.1.3 Descriptor-based Methods
  2.2 Non-Stationary Background Subtraction Methods
    2.2.1 Frame-to-global Methods
    2.2.2 Frame-to-frame Methods
3 Proposed Motion Clustering with Hybrid-Sample-based Foreground Segmentation for Moving Cameras
  3.1 Hybrid-Sample-based Pixel-level Modeling
  3.2 Motion Clustering Registration
  3.3 Reinitialization Scheme
  3.4 Classification and Adaptive Feedback
4 Experimental Results
  4.1 Performance Comparison
    4.1.1 Different Configuration Settings
    4.1.2 Pan-tilt-zoom Camera Category
    4.1.3 Camera Jitter Category
  4.2 Processing Time
5 Conclusion and Future Work
  5.1 Conclusion
  5.2 Future Work