A Robust Moving Object Detection in Multi-Scenario Big Data for Video Surveillance

Abstract:

Advanced wireless imaging sensors and cloud data storage contribute to video surveillance by enabling the generation of large amounts of video footage every second. Consequently, surveillance videos have become one of the largest sources of unstructured data. Because multi-scenario surveillance videos are produced continuously, detecting moving objects in them is challenging for conventional moving object detection methods. This paper presents a novel model that harnesses both sparsity and low-rankness with contextual regularization to detect moving objects in multi-scenario surveillance data. In the proposed model, we cast moving object detection as a contiguous outlier detection problem through a low-rank constraint with contextual regularization, and we construct dedicated backgrounds for multiple scenarios using dictionary learning-based sparse representation, which ensures that our model can be applied effectively to multi-scenario videos. Quantitative and qualitative assessments indicate that the proposed model outperforms existing methods and achieves substantially more robust performance than other state-of-the-art methods.
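As background for readers, the sketch below shows the generic low-rank plus sparse decomposition that underlies this family of models (cf. [1], [10]): vectorized video frames are stacked as columns of a matrix D, which is split into a low-rank background L and a sparse foreground S by the inexact augmented Lagrange multiplier (ALM) method. This is a minimal robust PCA sketch, not the paper's full model, which additionally uses dictionary learning and contextual regularization; all parameter values here are illustrative assumptions.

    import numpy as np

    def soft_threshold(x, tau):
        # Element-wise shrinkage operator used by both updates below.
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    def rpca_decompose(D, max_iter=200, tol=1e-7):
        # Split D (pixels x frames) into a low-rank background L and a
        # sparse foreground S by minimizing ||L||_* + lam * ||S||_1.
        m, n = D.shape
        lam = 1.0 / np.sqrt(max(m, n))                # standard RPCA weight
        norm_D = np.linalg.norm(D, 'fro')
        Y = D / max(np.linalg.norm(D, 2), np.linalg.norm(D, np.inf) / lam)
        mu = 1.25 / np.linalg.norm(D, 2)              # penalty parameter
        L = np.zeros_like(D)
        S = np.zeros_like(D)
        for _ in range(max_iter):
            # Background update: singular value thresholding.
            U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
            L = (U * soft_threshold(sig, 1.0 / mu)) @ Vt
            # Foreground update: element-wise shrinkage of the residual.
            S = soft_threshold(D - L + Y / mu, lam / mu)
            # Dual ascent on the equality constraint D = L + S.
            residual = D - L - S
            Y = Y + mu * residual
            mu = min(mu * 1.5, 1e7)
            if np.linalg.norm(residual, 'fro') / norm_D < tol:
                break
        return L, S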

Existing System:

Conventional moving object detection methods build a background model for a single, fixed scenario and flag pixels that deviate from it. Representative approaches include self-organizing background subtraction [3], neural network-based background modeling [4], multilayer codebook models [5], sample-based subtraction such as ViBe [6], locally adaptive schemes [7], [8], adaptive background modeling [9], and low-rank decomposition frameworks such as robust PCA [1], [10], [11]. Because these methods assume a single scenario with a relatively stable background, their performance degrades on surveillance footage that is continuously produced from multiple scenarios, and they cannot maintain a dedicated background for each incoming scenario.
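For contrast, the snippet below runs a conventional per-pixel statistical background subtractor (OpenCV's MOG2, in the spirit of [2]-[6]) on a single video; the file name and parameter values are hypothetical. Such single-scenario pipelines are exactly what continuously produced multi-scenario footage breaks.

    import cv2

    cap = cv2.VideoCapture("surveillance.mp4")        # hypothetical input file
    subtractor = cv2.createBackgroundSubtractorMOG2(
        history=500, varThreshold=16, detectShadows=True)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg_mask = subtractor.apply(frame)             # 255 = foreground, 127 = shadow
        # Opening removes the speckle noise that per-pixel models typically produce.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
        fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN, kernel)
    cap.release()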

Proposed System:

The proposed method not only precisely detects moving objects in diverse scenarios but also reconstructs complete backgrounds for individual incoming scenarios. These results can be attributed to the method's cost function, which is constrained by the low-rankness of the backgrounds and the sparse representation of diverse scenarios, as well as to its contextual constraints for foreground detection. The proposed model suppresses false positives from the background while preserving fine foreground pixels. Hence, the proposed method yields foreground masks with greater accuracy than current state-of-the-art methods. Quantitative and qualitative assessments demonstrate that the proposed method outperforms these state-of-the-art methods on multi-scenario video sequences.
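The dedicated-background idea can be illustrated with off-the-shelf dictionary learning. The hedged sketch below uses scikit-learn's MiniBatchDictionaryLearning on placeholder data: background frames from several scenarios are vectorized, a dictionary of atoms is learned, and an incoming frame's background is re-synthesized from a sparse code over those atoms. The data, atom count, and sparsity level are assumptions for illustration, not the authors' optimization.

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning

    # Placeholder stack of vectorized background frames from several scenarios,
    # one row per frame (n_frames x n_pixels).
    backgrounds = np.random.rand(200, 64 * 64)

    # Learn a dictionary whose atoms capture the distinct scenarios.
    dico = MiniBatchDictionaryLearning(
        n_components=30,                  # number of atoms (assumed)
        transform_algorithm='omp',        # sparse coding by orthogonal matching pursuit
        transform_n_nonzero_coefs=5,      # each background uses only a few atoms
        random_state=0)
    dico.fit(backgrounds)

    # Sparse-code an incoming frame against the dictionary and re-synthesize
    # a dedicated background from the selected atoms.
    incoming = backgrounds[0].reshape(1, -1)
    code = dico.transform(incoming)                   # sparse coefficients
    dedicated_background = code @ dico.components_    # reconstructed background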

Conclusions:

In this paper, we presented a novel sparse and low-rank representation model with contextual regularization for motion detection. Foreground and background models were carefully considered in the development of the proposed model; thus, the model can accurately decompose a multi-scenario video sequence into background and foreground, improving on the performance of single scenario-based moving object detection. Because the model suppresses background false positives while preserving fine foreground pixels, it yields more accurate foreground masks than current state-of-the-art methods, as confirmed by our quantitative and qualitative assessments on multi-scenario video sequences.
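One common way to realize such contextual constraints, and a plausible reading of the regularizer described above, is an Ising-style Markov random field that relabels each pixel to agree with its neighborhood unless the per-pixel evidence disagrees strongly. The sketch below solves it with iterated conditional modes (ICM); the energy weights are assumed values, and the paper's actual regularizer may differ.

    import numpy as np
    from scipy.ndimage import convolve

    def contextual_smooth(mask, beta=0.8, data_weight=1.0, n_iter=5):
        # mask: binary (0/1) foreground mask from the decomposition step.
        labels = mask.astype(np.int8)
        kernel = np.array([[0, 1, 0],
                           [1, 0, 1],
                           [0, 1, 0]], dtype=np.int8)   # 4-neighborhood
        for _ in range(n_iter):
            fg_neighbors = convolve(labels, kernel, mode='constant')
            # Data term keeps the original per-pixel decision; the smoothness
            # term charges beta for every disagreeing neighbor.
            e_fg = data_weight * (1 - mask) + beta * (4 - fg_neighbors)
            e_bg = data_weight * mask + beta * fg_neighbors
            labels = (e_fg < e_bg).astype(np.int8)
        return labels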

REFERENCES:

[1] T. Bouwmans, A. Sobral, S. Javed, S. K. Jung, and E.-H. Zahzah, “Decomposition into low-rank plus additive matrices for background/foreground separation: A review for a comparative evaluation with a large-scale dataset,” Computer Science Review, vol. 23, pp. 1–71, 2017.

[2] B. H. Chen and S. C. Huang, “An advanced moving object detection algorithm for automatic traffic monitoring in real-world limited bandwidth networks,” IEEE Transactions on Multimedia, vol. 16, no. 3, pp. 837–847, April 2014.

[3] L. Maddalena and A. Petrosino, “A self-organizing approach to background subtraction for visual surveillance applications,” IEEE Transactions on Image Processing, vol. 17, no. 7, pp. 1168–1177, July 2008.

[4] D. Culibrk, O. Marques, D. Socek, H. Kalva, and B. Furht, “Neural network approach to background modeling for video object segmentation,” IEEE Transactions on Neural Networks, vol. 18, no. 6, pp. 1614–1627, Nov 2007.

[5] J. M. Guo, C. H. Hsia, Y. F. Liu, M. H. Shih, C. H. Chang, and J. Y. Wu, “Fast background subtraction based on a multilayer codebook model for moving object detection,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 23, no. 10, pp. 1809–1821, Oct 2013.

[6] O. Barnich and M. V. Droogenbroeck, “ViBe: A universal background subtraction algorithm for video sequences,” IEEE Transactions on Image Processing, vol. 20, no. 6, pp. 1709–1724, June 2011.

[7] P. L. St-Charles, G. A. Bilodeau, and R. Bergevin, “Flexible background subtraction with self-balanced local sensitivity,” in 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops, June 2014, pp. 414–419.

[8] P. L. St-Charles and G. A. Bilodeau, “Improving background subtraction using local binary similarity patterns,” in IEEE Winter Conference on Applications of Computer Vision, March 2014, pp. 509–515.

[9] Z. Zhong, B. Zhang, G. Lu, Y. Zhao, and Y. Xu, “An adaptive background modeling method for foreground segmentation,” IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 5, pp. 1109–1121, May 2017.

[10] T. Bouwmans and E. H. Zahzah, “Robust PCA via principal component pursuit: A review for a comparative evaluation in video surveillance,” Computer Vision and Image Understanding, vol. 122, pp. 22–34, 2014.

[11] X. Shu, F. Porikli, and N. Ahuja, “Robust orthonormal subspace learning: Efficient recovery of corrupted low-rank matrices,” in 2014 IEEE Conference on Computer Vision and Pattern Recognition, June 2014, pp. 3874–3881.