Ⅰ. Introduction
In general, foreground object detection consists of background modeling followed by background subtraction in video frames. A major challenge to accurate foreground detection is a sudden illumination change in the scene. In such situations, the background is no longer stable and can be mistakenly classified as foreground, yielding false positives. Conventional background modeling approaches are either parametric or non-parametric. In both, pixels of the current frame that differ significantly from the background image are selected as foreground pixels [1-4].
The performance of foreground detection depends highly on a reliable background model. The state-of-the-art methods [5-11] are mostly devised for gradual illumination changes and fail to handle sudden changes such as lights being switched on or off. An illumination change condition (ICC) degrades foreground object detection by producing many false alarms and may lead to system malfunction.
In this letter, a non-parametric background modeling employing double backgrounds together with illumination compensation is proposed. Its two main components are the use of double backgrounds and their fast compensation to a new illumination condition. Fig. 1 shows the overall flow of the proposed method.
Fig. 1. The overall block diagram of the proposed method.
Ⅱ. Proposed Method
The illumination-robust foreground detection uses two background models with slow and fast adaptation speeds. Let $B^L_t(i)$ and $B^S_t(i)$ denote the long-term background model (LTBM) and the short-term background model (STBM), respectively. Comparing the $i$-th pixel of the $t$-th frame, $I_t(i)$, with the LTBM, a foreground binary mask $FG^L_t(i)$ is obtained as

$$FG^L_t(i) = \begin{cases} 1, & \text{if } |I_t(i) - B^L_{t-1}(i)| > T_L \\ 0, & \text{otherwise,} \end{cases}$$
where $T_L$ is the long-term background threshold. A similar thresholding is performed for the STBM with a threshold $T_S$, resulting in a binary mask $FG^S_t(i)$. Since $FG^S_t(i)$ is used to extract all pixels with significant temporal activity, a smaller threshold is chosen ($T_S = 0.4\,T_L$); $T_L = 50$ and $T_S = 20$ are used in the experiments.
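The thresholding above can be sketched as follows; the function and array names are illustrative, and only the thresholds $T_L = 50$ and $T_S = 0.4\,T_L = 20$ come from the text.

```python
import numpy as np

def foreground_mask(frame, background, threshold):
    """Per-pixel test |I_t(i) - B_{t-1}(i)| > threshold, returning a binary mask."""
    diff = np.abs(frame.astype(np.float64) - background.astype(np.float64))
    return (diff > threshold).astype(np.uint8)

# Thresholds from the text: T_L = 50 and T_S = 0.4 * T_L = 20.
T_L = 50
T_S = int(0.4 * T_L)  # = 20

# Toy 2x2 frame and long-term background (illustrative values only).
frame = np.array([[10.0, 200.0], [90.0, 45.0]])
long_term_bg = np.array([[12.0, 100.0], [30.0, 40.0]])
fg_long = foreground_mask(frame, long_term_bg, T_L)  # -> [[0, 1], [1, 0]]
```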
The current LTBM is updated by integrating the current frame $I_t(i)$ into the previous model:

$$B^L_t(i) = \alpha_L I_t(i) + (1 - \alpha_L)\, B^L_{t-1}(i),$$

where $\alpha_L$ is an adaptation parameter. Similarly, the STBM is computed using an adaptation parameter $\alpha_S$.
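The running-average update above can be sketched as follows; the letter does not state the values of $\alpha_L$ and $\alpha_S$, so the ones below are illustrative, with the STBM given the larger rate so that it adapts faster.

```python
import numpy as np

def update_background(background, frame, alpha):
    """Running-average update: B_t = alpha * I_t + (1 - alpha) * B_{t-1}."""
    return alpha * frame + (1.0 - alpha) * background

# alpha values are illustrative (not from the letter); alpha_S > alpha_L
# makes the short-term model the fast-adapting one.
alpha_L, alpha_S = 0.01, 0.1

bg = np.full((2, 2), 100.0)      # previous background estimate
frame = np.full((2, 2), 200.0)   # current frame after a brightness jump
ltbm = update_background(bg, frame, alpha_L)  # 101.0 everywhere (slow)
stbm = update_background(bg, frame, alpha_S)  # 110.0 everywhere (fast)
```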
Double backgrounds are utilized for the ICC. The proposed method evaluates the responses of the STBM and the LTBM against their respective thresholds $T_S$ and $T_L$.
The proposed updating strategy is used only in the ICC, i.e., when the ratio of the number of foreground pixels to background pixels in $FG^L_t$ exceeds a threshold $T_R$. The updating process consists of computing the average illumination change, followed by illumination compensation of the background models.
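A minimal sketch of this ICC handling is shown below. The letter does not give the exact EAIC gain estimation, so the mean frame-to-background ratio over the non-foreground ("effective") pixels is used here as one plausible stand-in, and the threshold value of `T_R` is illustrative.

```python
import numpy as np

def compensate_on_icc(frame, background, fg_mask, T_R=0.5):
    """If the foreground/background pixel ratio exceeds T_R (an ICC),
    rescale the background model by an estimated illumination gain.

    The gain here (mean frame/background ratio over stable pixels) is an
    assumption standing in for the letter's EAIC computation."""
    n_fg = int(fg_mask.sum())
    n_bg = fg_mask.size - n_fg
    if n_bg == 0 or n_fg / n_bg <= T_R:
        return background                 # no ICC: keep the model as is
    stable = fg_mask == 0                 # effective pixels for gain estimation
    gain = frame[stable].mean() / max(background[stable].mean(), 1e-6)
    return background * gain              # compensate the whole model

# Example: the lights brighten the scene by a factor of 2, flagging most
# pixels as foreground; the model is rescaled instead of being corrupted.
bg = np.full((2, 2), 50.0)
frame = np.full((2, 2), 100.0)
fg = np.array([[1, 1], [1, 0]], dtype=np.uint8)   # 3 of 4 pixels flagged
compensated = compensate_on_icc(frame, bg, fg)    # 100.0 everywhere
```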
The selective updating is performed in the same manner for the STBM. From the updated background models, the final foreground mask $FG_t(i)$ is obtained by combining the binary masks $FG^L_t(i)$ and $FG^S_t(i)$.
Ⅲ. Experimental Results
The performance of the proposed system is compared with five foreground detection methods: double backgrounds (DBG) [9], Eigen background [4], MoG [5], KDE [2] and ViBe [6]. In Seq1, there are no moving objects during the illumination change and humans enter the room after the sudden change, while in the two other sequences moving humans are present during the illumination changes. First, we examined the performance of the algorithms during a sudden illumination change in terms of FP (false positive) and TP (true positive) rates. The accuracy of the foreground binary mask is evaluated using Recall = TP/(TP+FN) and Precision = TP/(TP+FP); the F-score compares the binary masks with the ground truth (GT).
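The evaluation metrics above can be computed from a predicted mask and the ground-truth mask as follows (a straightforward sketch; the input arrays are illustrative):

```python
import numpy as np

def evaluate(pred, gt):
    """Recall, precision and F-score of a binary mask against ground truth."""
    tp = int(np.sum((pred == 1) & (gt == 1)))   # true positives
    fp = int(np.sum((pred == 1) & (gt == 0)))   # false positives
    fn = int(np.sum((pred == 0) & (gt == 1)))   # false negatives
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    return recall, precision, f_score

# Toy masks: one correct detection, one miss, one false alarm.
pred = np.array([1, 1, 0, 0])
gt = np.array([1, 0, 1, 0])
r, p, f = evaluate(pred, gt)  # r = 0.5, p = 0.5, f = 0.5
```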
Table 1 compares the overall performance of the proposed algorithm with the other methods on the three sequences. As shown in the table, the proposed approach significantly outperforms the five methods in all sequences and detects the moving objects with high accuracy. Fig. 2 shows the foreground objects detected by the five comparative methods and the proposed method, and confirms this advantage. The processing speed of the proposed method is higher than that of all other approaches except DBG (218 fps, vs. 145 fps for Eigen background, 71 fps for MoG, 66 fps for ViBe, 42 fps for KDE and 296 fps for DBG on Seq3).
Table 1. Performance comparison of different methods.
Fig. 2. Foreground objects extracted by the five comparative methods and the proposed method. GT = ground truth.
Algorithms such as Eigen background and ViBe cannot adapt to a new illumination condition as fast as methods like MoG because of their updating methodology. Our method adapts the background models to the new illumination condition immediately after the change occurs. The key step is the illumination compensation of the background models with an appropriate EAIC gain value: first, the amount of illumination change is estimated accurately by choosing the effective pixels, and then a pixel-selective background update is performed in which a correct compensation gain value is assigned to each pixel of the background model.
Ⅳ. Conclusion
A novel foreground detection method was proposed that addresses the illumination change problem. The algorithm utilizes two background models with slow and fast adaptation rates for accurate illumination compensation. The proposed method delivers promising detection results under sudden illumination changes and outperforms several state-of-the-art methods.
References
- M. Oral and U. Deniz, ″Centre of mass model - A novel approach to background modelling for segmentation of moving objects″, Image Vision and Computing, 25, pp. 1365–1376, 2007. https://doi.org/10.1016/j.imavis.2006.10.001
- A. Elgammal, D. Harwood and L. Davis, ″Non-parametric model for background subtraction″, Proc. European Conference on Computer Vision, Dublin, Ireland, pp. 751-767, 2000.
- K. Kim, T. Chalidabhongse, D. Harwood and L. Davis, ″Real-time foreground background segmentation using codebook model″, Real-Time Imaging, 11(3), pp. 167-256, 2005. https://doi.org/10.1016/j.rti.2005.06.001
- N. Oliver, R. Rosario and A. Pentland, ″A bayesian computer vision system for modeling human interactions″, IEEE Trans. Pattern Analysis and Machine Intelligence, 22(8), pp. 831-843, 2000. https://doi.org/10.1109/34.868684
- Z. Zivkovic and F. Heijden, ″Efficient adaptive density estimation per image pixel for the task of background subtraction″, Pattern Recognition Letters, 27, pp. 773–780, 2006. https://doi.org/10.1016/j.patrec.2005.11.005
- O. Barnich and M. Droogenbroeck, ″ViBe: a powerful random technique to estimate the background in video sequences″, Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing, pp. 945-948, 2009.
- L. Maddalena and A. Petrosino, ″A self-organizing approach to background subtraction for visual surveillance applications″, IEEE Trans. Image Processing, 17(7), pp. 1168-1177, 2008. https://doi.org/10.1109/TIP.2008.924285
- E. Jaraba, C. Urunuela, and J. Senar, ″Detected motion classification with a double-background and a neighborhood based difference″, Pattern Recognition Letters, 24, pp. 2079–2092, 2003. https://doi.org/10.1016/S0167-8655(03)00045-X
- S. Gruenwedel, N. Petrovic, L. Jovanov, J. Castaneda, A. Pizurica and W. Philips, ″Efficient foreground detection for real-time surveillance applications″, Electronics Letters, 49(18), 2013. https://doi.org/10.1049/el.2013.1944
- S. Hasan, and S. S. Cheung, "Background subtraction under sudden illumination change," IEEE International Workshop on Multimedia Signal Processing (MMSP), 2014.
- S. Parthipan, M. Sahfree, F. Li, and A. Wong, ″PRIM: fast background subtraction under sudden, local illumination changes via probabilistic illumination range modelling″, IEEE International Conference on Image Processing (ICIP), 2015.