Crowd Density Estimation with Multi-class Adaboost in an Elevator

  • Kim, Dae-Hun (Dept. of Electrical Engineering, Korea University) ;
  • Lee, Young-Hyun (Dept. of Visual Information Processing, Korea University) ;
  • Ku, Bon-Hwa (Dept. of Visual Information Processing, Korea University) ;
  • Ko, Han-Seok (Dept. of Electrical Engineering, Korea University)
  • Received : 2012.02.01
  • Accepted : 2012.05.16
  • Published : 2012.07.31

Abstract

In this paper, a method for estimating crowd density inside an elevator based on a multi-class Adaboost classifier is proposed. Conventional methods based on the SOM (Self-Organizing Map) have shown insufficient performance in practical scenarios and suffer from low reproducibility. The proposed method estimates crowd density using a multi-class Adaboost classifier with texture features, namely the GLDM (Grey-Level Dependency Matrix) and the GGDM (Grey-Gradient Dependency Matrix). To handle multi-class classification, the weight update equation of the standard Adaboost algorithm is modified so that weak classifiers with better performance are generated. The crowd density is classified into four categories according to the number of people present: 0 people, 1-2 people, 3-4 people, and 5 or more people. Experimental results on video captured inside an elevator show that the proposed method improves the detection rate by about 20% compared to the conventional method.
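The GLDM features mentioned in the abstract are Haralick-style statistics computed from a grey-level dependency (co-occurrence) matrix. The following is a minimal pure-Python sketch of that idea; the displacement `(dx, dy)`, the number of grey levels, and the choice of statistics (energy, entropy, contrast, homogeneity) are illustrative assumptions, not the paper's exact configuration.

```python
import math

def gldm_features(img, dx=1, dy=0, levels=8):
    """Texture features from a grey-level dependency (co-occurrence) matrix.

    img: 2-D list of grey values in [0, levels).  The displacement (dx, dy)
    and the four Haralick-style statistics below are common choices in the
    crowd-density literature; treat them as placeholders.
    """
    h, w = len(img), len(img[0])
    mat = [[0.0] * levels for _ in range(levels)]
    total = 0
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                mat[img[y][x]][img[ny][nx]] += 1
                total += 1
    # normalise counts to joint probabilities
    p = [[c / total for c in row] for row in mat]
    energy = sum(v * v for row in p for v in row)
    entropy = -sum(v * math.log(v) for row in p for v in row if v > 0)
    contrast = sum((i - j) ** 2 * p[i][j]
                   for i in range(levels) for j in range(levels))
    homogeneity = sum(p[i][j] / (1 + abs(i - j))
                      for i in range(levels) for j in range(levels))
    return [energy, entropy, contrast, homogeneity]
```

A uniform patch yields maximal energy and homogeneity with zero entropy and contrast, while a fine checkerboard drives contrast up; crowded elevator frames tend toward the high-entropy end of this range, which is what makes such statistics usable as density cues.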

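The multi-class weight-update modification described in the abstract is in the spirit of the SAMME algorithm of Zhu et al. (reference 21), whose extra `log(K-1)` term keeps the classifier weight positive whenever a weak learner beats random K-way guessing. The sketch below is a minimal pure-Python SAMME with decision stumps over hypothetical texture-feature vectors and K = 4 density classes; it illustrates the weight update only and is not the authors' exact modification.

```python
import math
from collections import Counter

K = 4  # density classes: 0 people, 1-2, 3-4, 5 or more

def stump_predict(stump, x):
    feat, thresh, left, right = stump
    return left if x[feat] <= thresh else right

def fit_stump(X, y, w):
    # exhaustive search for the threshold stump with minimum weighted error
    best, best_err = None, float("inf")
    for f in range(len(X[0])):
        for t in sorted({x[f] for x in X}):
            left_w, right_w = Counter(), Counter()
            for xi, yi, wi in zip(X, y, w):
                (left_w if xi[f] <= t else right_w)[yi] += wi
            stump = (f, t,
                     max(left_w, key=left_w.get) if left_w else 0,
                     max(right_w, key=right_w.get) if right_w else 0)
            err = sum(wi for xi, yi, wi in zip(X, y, w)
                      if stump_predict(stump, xi) != yi)
            if err < best_err:
                best, best_err = stump, err
    return best, best_err

def fit_samme(X, y, rounds=20):
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        stump, err = fit_stump(X, y, w)
        err = min(max(err, 1e-10), 1 - 1e-10)
        if err >= 1 - 1.0 / K:  # no better than random K-way guessing
            break
        # SAMME: log(K-1) keeps alpha positive for err < 1 - 1/K
        alpha = math.log((1 - err) / err) + math.log(K - 1)
        ensemble.append((alpha, stump))
        # up-weight misclassified samples, then renormalise
        w = [wi * (math.exp(alpha) if stump_predict(stump, xi) != yi else 1.0)
             for xi, yi, wi in zip(X, y, w)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    votes = [0.0] * K
    for alpha, stump in ensemble:
        votes[stump_predict(stump, x)] += alpha
    return max(range(K), key=votes.__getitem__)
```

Note that with binary-output stumps and four classes, no single weak learner can exceed 50% accuracy on balanced data; the `log(K-1)` term is precisely what still rewards such learners, which a two-class AdaBoost weight update would discard.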

Keywords

References

  1. W. Hu, T. Tan, L. Wang and S. Maybank, "A survey on visual surveillance of object motion and behaviors", IEEE transactions on Systems, Man and Cybernetics - Part C, Vol. 34, No. 3, pp. 334-352, 2004 https://doi.org/10.1109/TSMCC.2004.829274
  2. H. Rahmalan, M. S. Nixon, and J. N. Carter, "On crowd density estimation for surveillance", In International Conference on Crime Detection and Prevention, pp. 540-545, 2006.
  3. N. Dong, F. Liu and Z. Li, "Crowd Density Estimation Using Sparse Texture Features," Journal of Convergence Information Technology, vol. 5, pp. 125-137, 2010.
  4. J. C. S. J. Junior, S. R. Musse and C. R. Jung, "Crowd analysis using computer vision techniques: A survey", IEEE Signal Processing Magazine, vol. 27, no. 5, pp. 66-77, 2010.
  5. B. Zhan, D. N. Monekosso, P. Remagnino, S. A. Velastin, and L. Xu. "Crowd analysis: A survey.", Journal of Machine Vision and Applications, vol.19, pp.345-357, 2008. https://doi.org/10.1007/s00138-008-0132-4
  6. P. Viola, M. Jones, and D. Snow, "Detecting pedestrians using patterns of motion and appearance", International Journal of Computer Vision, Springer US, vol.63, no.2, pp.153-161, 2005. https://doi.org/10.1007/s11263-005-6644-8
  7. H. Tao, D. Kong, D. Gray, "Counting Pedestrians in Crowds Using Viewpoint Invariant Training", In Proceedings of the British Machine Vision Conference, pp.1-8, 2005.
  8. T. Zhao and R. Nevatia, "Bayesian human segmentation in crowded situations", In Proceedings of the Conference on Computer Vision and Pattern Recognition, pp.459-466, 2003.
  9. S.F. Lin, J.Y. Chen, and H.X. Chao, "Estimation of number of people in crowded scenes using perspective transformation", IEEE Trans. System, Man, and Cybernetics, IEEE, vol.31, no.6, pp.645-654, 2001. https://doi.org/10.1109/3468.983420
  10. A. Mohan, C. Papageorgiou, and T. Poggio, "Example-Based Object Detection in Images by Components", IEEE Trans. on PAMI, IEEE, vol.23, no.4, pp.349-361, 2001.
  11. K. Mikolajczyk, C. Schmid, and A. Zisserman, "Human Detection Based on a Probabilistic Assembly of Robust Part Detector", In Proceedings of the European Conference on Computer Vision, pp.69-82, 2004.
  12. B. Wu and R. Nevatia, "Detection of multiple, partially occluded humans in a single image by bayesian combination of edgelet part detectors", In Proceedings of the International Conference on Computer Vision, pp.90-97, 2005.
  13. V. Rabaud and S.J. Belongie, "Counting crowded moving objects", In Proceedings of the Conference on Computer Vision and Pattern Recognition, pp.705-711, 2006.
  14. A.N. Marana, M.A. Cavenaghi, R.S. Ulson, and F.L. Drumond, "Real-Time Crowd Density Estimation Using Images", Lecture Notes in Computer Science, Springer, no.3804, pp.355-362, 2005.
  15. A.N. Marana, L.F. Costa, R.A. Lotufo, and S.A. Velastin, "On the efficacy of texture analysis for crowd monitoring", In Proceedings of the Computer Graphics, Image Processing, and Vision, pp.354-361, 1998.
  16. A.N. Marana, L. Da F. Costa, R.A. Lotufo, S.A. Velastin, "Estimating crowd density with Minkowski fractal dimension", In Proceedings of the Conference on Acoust., Speech, Signal Processing, pp.3521-3524, 1999.
  17. S.Y. Cho, T.W.S. Chow, and C.T. Leung, "A neural-based crowd estimation by hybrid global learning algorithm", IEEE Trans. Syst, Man, Cybern, IEEE, vol.29, no.3, pp.535-541, 1999. https://doi.org/10.1109/3477.775269
  18. W.L. Hsu, K.F. Lin and C.L. Tsai, "Crowd density estimation based on frequency analysis", 7th IIH-MSP, pp. 348-351, 2011.
  19. R.M. Haralick, "Statistical and structural approaches to texture", Proceedings of the IEEE, volume 67, pp. 786-804, 1979. https://doi.org/10.1109/PROC.1979.11328
  20. P. Viola, M. Jones, "Rapid object detection using a boosted cascade of simple features.", CVPR, vol. 1, pp. 511-518, 2001
  21. J. Zhu, S. Rosset, H. Zou, and T. Hastie, "Multi-class AdaBoost", Technical Report, 2005.
  22. R.E. Schapire and Y. Singer, "Improved Boosting Algorithms Using Confidence-Rated Predictions", Machine Learning, vol. 37, no. 3, pp. 297-336, 1999. https://doi.org/10.1023/A:1007614523901