• Title/Abstract/Keyword: Surveillance video

Search results: 490

Detection using Optical Flow and EMD Algorithm and Tracking using Kalman Filter of Moving Objects (이동물체들의 Optical flow와 EMD 알고리즘을 이용한 식별과 Kalman 필터를 이용한 추적)

  • Lee, Jung Sik;Joo, Yung Hoon
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.64 no.7
    • /
    • pp.1047-1055
    • /
    • 2015
  • We propose a method for improving the identification and tracking of moving objects in intelligent video surveillance systems. The proposed method consists of three parts: object detection, object recognition, and object tracking. First, we use a GMM (Gaussian Mixture Model) to eliminate the background and extract the moving objects. Next, we propose a labeling technique for recognizing the moving objects, together with a method for identifying each recognized object by using optical flow and the EMD algorithm. Lastly, we propose a method to track the locations of the identified moving object regions by using the location information of the moving objects and a Kalman filter. Finally, we demonstrate the feasibility and applicability of the proposed algorithms through experiments.
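The tracking stage above relies on a standard Kalman filter over object centroids. As a minimal, hedged sketch (not the authors' implementation; the constant-velocity model, noise parameters, and function name are illustrative assumptions), a position-only-measurement filter can be written as:

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=1.0):
    """Constant-velocity Kalman filter over 2D centroid measurements.

    State x = [px, py, vx, vy]; only the position is observed.
    Returns the filtered positions, one per measurement.
    """
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)  # state transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)   # observe position only
    Q = q * np.eye(4)                           # process noise
    R = r * np.eye(2)                           # measurement noise
    x = np.array([*measurements[0], 0.0, 0.0])  # start with zero velocity
    P = np.eye(4)
    out = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the measured centroid
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.asarray(z, float) - H @ x)
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2].copy())
    return np.array(out)
```

In the paper's pipeline, the measurements would be the centroids of the regions identified by the optical-flow/EMD stage; the filter smooths them and can predict the next position during brief occlusions.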

Tracking Object Movement via Two Stage Median Operation and State Transition Diagram under Various Light Conditions

  • Park, Goo-Man
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers
    • /
    • v.21 no.4
    • /
    • pp.11-18
    • /
    • 2007
  • A moving object detection algorithm for surveillance video is proposed here, which employs background initialization based on two-stage median filtering and a background updating method based on a state transition diagram. In the background initialization, the spatiotemporal similarity is measured in each subinterval. From the accumulated difference between the base frame and the other frames in a subinterval, the regions affected by moving objects are located. The median is applied over the subsequence in the subinterval in which regions share similarity. The outputs from each subinterval are filtered by a two-stage median filter. The background of every frame is updated by the suggested state transition diagram. The object is detected by the difference between the current frame and the updated background. The proposed method showed good results even for busy, crowded sequences that included moving objects from the first frame.
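The core idea behind median-based background initialization is that a moving object covers any given pixel in only a minority of frames, so the temporal median recovers the static scene. A minimal sketch (a single-stage simplification of the paper's two-stage scheme; the function names and threshold are illustrative):

```python
import numpy as np

def median_background(frames):
    """Estimate a static background as the per-pixel temporal median.

    Works when each pixel is covered by a moving object in fewer than
    half of the frames in the (sub)interval.
    """
    return np.median(np.stack(frames), axis=0)

def detect_object(frame, background, thresh=20):
    """Foreground mask: pixels that differ noticeably from the background."""
    return np.abs(frame.astype(int) - background) > thresh
```

The paper's method refines this with subinterval medians, a second median stage over the subinterval outputs, and a state-transition-diagram update so the background tracks illumination changes over time.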

Individual Pig Detection using Fast Region-based Convolution Neural Network (고속 영역기반 컨볼루션 신경망을 이용한 개별 돼지의 탐지)

  • Choi, Jangmin;Lee, Jonguk;Chung, Yongwha;Park, Daihee
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.2
    • /
    • pp.216-224
    • /
    • 2017
  • Abnormal situations caused by the aggressive behavior of pigs adversely affect their growth and lead to economic losses in intensive pigsties. Therefore, an IT-based video surveillance system is needed to continuously monitor abnormal situations in a pigsty in order to minimize the economic damage. Recently, some advances have been made in pig monitoring; however, detecting each pig is still a challenging problem. In this paper, we propose a new color image-based monitoring system for the detection of individual pigs using a fast region-based convolutional neural network, with consideration of detecting touching pigs in a crowded pigsty. The experimental results with the color images obtained from a pig farm located in Sejong city illustrate the efficiency of the proposed method.

A Recognition Method for Moving Objects Using Depth and Color Information (깊이와 색상 정보를 이용한 움직임 영역의 인식 방법)

  • Lee, Dong-Seok;Kwon, Soon-Kak
    • Journal of Korea Multimedia Society
    • /
    • v.19 no.4
    • /
    • pp.681-688
    • /
    • 2016
  • In intelligent video surveillance, recognizing moving objects is an important issue. However, conventional moving object recognition methods have some problems, such as sensitivity to lighting and difficulty distinguishing between similar colors. Recognition methods for moving objects using depth information have also been studied, but these methods are limited in accuracy because the depth camera cannot measure depth values accurately. In this paper, we propose a recognition method for moving objects that uses both depth and color information. The depth information is used for extracting the areas of a moving object, and the color information is then used for correcting the extracted areas. Through tests with typical videos including moving objects, we confirmed that the proposed method could extract areas of moving objects more accurately than a method using only one of the two types of information. The proposed method can be used not only in the CCTV field but also in other fields that involve recognizing moving objects.
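One common way to fuse the two cues (a hedged sketch only; the fusion rule, thresholds, and names are illustrative assumptions, not the paper's exact correction step) is to trust depth where it is valid and fall back to color where the sensor returned no measurement:

```python
import numpy as np

def moving_region(depth, depth_bg, color, color_bg, d_thresh=50, c_thresh=30):
    """Depth-first moving-region mask with color-based correction.

    A pixel is foreground if its (valid) depth changed noticeably versus the
    depth background, or if the depth is missing (zero) but the grayscale
    color changed versus the color background.
    """
    invalid = depth == 0  # depth sensor returned no value at these pixels
    depth_mask = (np.abs(depth.astype(int) - depth_bg.astype(int)) > d_thresh) & ~invalid
    color_mask = np.abs(color.astype(int) - color_bg.astype(int)) > c_thresh
    return depth_mask | (invalid & color_mask)
```

Note that a color change alone (e.g. a shadow) does not mark a pixel as foreground when its depth is valid and unchanged, which is the kind of lighting robustness the abstract attributes to using depth.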

Realtime Object Region Detection Robust to Vehicle Headlight (차량의 헤드라이트에 강인한 실시간 객체 영역 검출)

  • Yeon, Sungho;Kim, Jaemin
    • Journal of Korea Multimedia Society
    • /
    • v.18 no.2
    • /
    • pp.138-148
    • /
    • 2015
  • Object detection methods based on background learning are widely used in video surveillance. However, when a car runs with headlights on, these methods are likely to detect the car region and the area illuminated by the headlights as one connected change region. This paper describes a method of separating the car region from the area illuminated by the headlights. First, we detect change regions with a background learning method, and extract blobs, connected components in the detected change region. If a blob is larger than the maximum object size, we extract candidate object regions from the blob by clustering the intensity histogram of the frame difference between the mean of background images and an input image. Finally, we compute the similarity between the mean of background images and the input image within each candidate region and select a candidate region with weak similarity as an object region.
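The histogram-clustering step can be illustrated with a simple two-means split of the difference intensities inside an oversized blob (a minimal sketch under the assumption that headlight-lit road pixels change the background weakly while car-body pixels change it strongly; the function name and iteration count are illustrative):

```python
import numpy as np

def split_blob(diff, iters=10):
    """Two-means clustering of difference intensities inside a blob.

    Returns a boolean mask of the high-difference cluster, approximating
    the 'object' pixels as opposed to the 'illumination' pixels.
    """
    vals = diff.ravel().astype(float)
    c_lo, c_hi = vals.min(), vals.max()  # initial cluster centers
    for _ in range(iters):
        hi = np.abs(vals - c_hi) < np.abs(vals - c_lo)
        c_lo, c_hi = vals[~hi].mean(), vals[hi].mean()
    hi = np.abs(vals - c_hi) < np.abs(vals - c_lo)
    return hi.reshape(diff.shape)
```

The paper then keeps, among the candidate regions, the one whose appearance is least similar to the background mean, since headlight-lit road still resembles the road underneath while the car body does not.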

Object-based video summarization in a wide-area surveillance system (광범위한 지역 감시시스템에서의 물체기반 비디오 요약)

  • Kwon, HyeYoung;Lee, Kyoung-Mi
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2006.10b
    • /
    • pp.544-548
    • /
    • 2006
  • This paper proposes an object-based video summarization system for videos acquired from multiple cameras installed to monitor a wide area. The proposed system divides the videos captured by multiple CCTV cameras with non-overlapping fields of view into 30-minute units to build a video database, and supports video retrieval by time and by camera. Object-based keyframes are extracted from the videos so that they can be summarized by camera and by person. In addition, the degree of video summarization can be adjusted by controlling the keyframe retrieval level through a threshold. Through per-camera and per-time statistics on the retrieved keyframes, object-based events in the surveillance area can be checked at a glance.
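The adjustable summarization level can be sketched as threshold-controlled keyframe selection (an illustrative simplification, not the paper's object-based extractor; the function name and the mean-absolute-difference criterion are assumptions):

```python
import numpy as np

def keyframes(frames, thresh):
    """Pick a keyframe whenever the scene changes more than `thresh`
    relative to the last keyframe.

    Raising the threshold yields a coarser summary; lowering it yields a
    finer one, mirroring the adjustable summarization level above.
    """
    picked = [0]  # always keep the first frame
    for i in range(1, len(frames)):
        change = np.abs(frames[i].astype(int) - frames[picked[-1]].astype(int)).mean()
        if change > thresh:
            picked.append(i)
    return picked
```

The paper's system applies the idea per object and per camera, so the same threshold knob tunes how many keyframes each person's appearance contributes to the summary.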


A Study on the Intention-based Context-aware Model for Video Surveillance System (영상 감시 시스템에 적용 가능한 의도기반 상황인식 모델에 관한 연구)

  • Kim Hyoung-Nyoun;Park Ji-Hyung
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2006.06a
    • /
    • pp.235-237
    • /
    • 2006
  • Research on how to utilize the multimodal sensors included in a context-aware system, and how to use long-term accumulated sensor information (long-term values) as context information, has recently been actively pursued. A context-aware system should allow the context-aware model to be reused regardless of whether sensors are replaced, added, or removed, and the sensor information and context information should be usable for process execution. Therefore, this paper proposes a context-aware model expressed as a Bayesian network in which sensor information and context information are organized as nodes and their interaction determines the execution of processes. The model exploits the fact that the role of the system, or the intention behind its construction, is maintained regardless of sensor replacement, addition, or removal, and represents the relationships among these elements as a Bayesian network. The validity of the model is confirmed by applying it to an experimentally implemented location-based video surveillance system.
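The inference step underlying such a network is Bayesian updating of an event belief from a sensor reading. As a deliberately tiny sketch (a single sensor node and a single event node, not the paper's multi-node network; all names and probabilities are illustrative):

```python
def posterior(p_event, p_sensor_given_event, p_sensor_given_no_event):
    """P(event | sensor fired) by Bayes' rule.

    p_event plays the role of the long-term context prior; the two
    likelihoods characterize the sensor, so the sensor can be replaced
    without changing the event node's role in the model.
    """
    num = p_sensor_given_event * p_event
    den = num + p_sensor_given_no_event * (1 - p_event)
    return num / den
```

A full Bayesian network chains many such conditionals, but the separation shown here (prior context vs. per-sensor likelihood) is what lets the model survive sensor replacement, as the abstract argues.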


3D Convolutional Neural Networks based Fall Detection with Thermal Camera (열화상 카메라를 이용한 3D 컨볼루션 신경망 기반 낙상 인식)

  • Kim, Dae-Eon;Jeon, BongKyu;Kwon, Dong-Soo
    • The Journal of Korea Robotics Society
    • /
    • v.13 no.1
    • /
    • pp.45-54
    • /
    • 2018
  • This paper presents a vision-based fall detection system to automatically monitor and detect people's fall accidents, particularly those of elderly people or patients. For video analysis, the system should be able to extract both spatial and temporal features so that the model captures appearance and motion information simultaneously. Our approach is based on 3-dimensional convolutional neural networks, which can learn spatiotemporal features. In addition, we adopt a thermal camera in order to handle several issues regarding usability, day-and-night surveillance, and privacy concerns. We design a pan-tilt camera with two actuators to extend the range of view. Performance is evaluated on our thermal dataset, the TCL Fall Detection Dataset. The proposed model achieves 90.2% average clip accuracy, which is better than other approaches.
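What distinguishes a 3D convolution from the ordinary 2D kind is that the kernel also spans neighbouring frames, so motion (e.g. the sudden vertical displacement of a fall) shows up directly in the feature maps. A minimal NumPy sketch of the operation itself (not the paper's network; an unoptimized valid-mode cross-correlation for illustration):

```python
import numpy as np

def conv3d(clip, kernel):
    """Valid-mode 3D cross-correlation over a (T, H, W) clip.

    A 3D kernel mixes neighbouring frames as well as neighbouring pixels,
    which is how a 3D CNN layer captures motion alongside appearance.
    """
    T, H, W = clip.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(clip[i:i + t, j:j + h, k:k + w] * kernel)
    return out
```

For example, the kernel `[[[1.0]], [[-1.0]]]` (shape `(2, 1, 1)`) is a pure temporal-difference detector: it responds only where pixel values change between consecutive frames, which a 2D convolution cannot express.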

Abnormal Crowd Behavior Detection in Video Surveillance System (영상 감시 시스템에서의 비정상 집단행동 탐지)

  • Park, Seung-Jin;Oh, Seung-Geun;Kang, Bong-Su;Park, Dai-Hee
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2011.06c
    • /
    • pp.347-350
    • /
    • 2011
  • Abnormal crowd behavior detection in a surveillance camera environment refers to quickly and accurately detecting and recognizing situations in which multiple objects in the incoming video are in danger. This paper proposes a prototype system that detects abnormal situations within a crowd using motion vectors and SVDD in surveillance camera environments such as CCTV. The proposed system extracts and represents the motion information in the video using motion vectors, reinterprets the problem of discriminating abnormal crowd behavior as a practical one-class classification problem, and designs the detector with SVDD, a representative one-class SVM model. The performance of the proposed abnormal crowd behavior detection system is experimentally verified using PETS 2009 and UMN, publicly available benchmark datasets.
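The one-class framing can be sketched with a crude stand-in for SVDD (SVDD proper finds the *smallest* enclosing hypersphere with slack variables and kernels; here we just centre a sphere on the mean of normal motion features, and all names are illustrative):

```python
import numpy as np

def fit_sphere(train):
    """Fit a data-enclosing sphere to normal-behavior motion features.

    Centre = mean of the training features; radius = distance to the
    farthest training point. A simplified stand-in for SVDD training.
    """
    c = train.mean(axis=0)
    r = np.linalg.norm(train - c, axis=1).max()
    return c, r

def is_abnormal(x, c, r):
    """Flag a motion feature vector that falls outside the learned sphere."""
    return np.linalg.norm(x - c) > r
```

The key property shared with SVDD is that training uses only normal footage: anything whose motion statistics fall outside the learned region (e.g. a sudden crowd dispersal) is flagged, with no abnormal examples needed at training time.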

Gait Recognition Based on GF-CNN and Metric Learning

  • Wen, Junqin
    • Journal of Information Processing Systems
    • /
    • v.16 no.5
    • /
    • pp.1105-1112
    • /
    • 2020
  • Gait recognition, as a promising biometric, can be used in video-based surveillance and other security systems. However, due to the complexity of leg movement and differences in external sampling conditions, gait recognition still faces many problems to be addressed. In this paper, an improved convolutional neural network (CNN) based on Gabor filters is therefore proposed to achieve gait recognition. Firstly, a gait feature extraction layer based on Gabor filters is inserted into the traditional CNN, which is used to extract gait features from gait silhouette images. Then, in the gait classification process, using the output of the CNN as input, we utilize metric learning techniques to calculate the distance between two gaits and achieve gait classification with a k-nearest neighbors classifier. Finally, several experiments are conducted on two openly accessible gait datasets and demonstrate that our method achieves state-of-the-art performance in terms of correct recognition rate on the OULP and CASIA-B datasets.
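The Gabor layer's building block is the 2D Gabor kernel: a Gaussian window modulating an oriented sinusoid, which responds to edges and stripes at a chosen orientation and scale. A minimal sketch of the standard real-valued kernel (the parameter defaults are illustrative, not the paper's settings):

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam, psi=0.0, gamma=0.5):
    """Real 2D Gabor kernel: a Gaussian-windowed cosine at orientation theta.

    sigma: Gaussian envelope width; lam: sinusoid wavelength;
    psi: phase offset; gamma: spatial aspect ratio of the envelope.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lam + psi)
    return envelope * carrier
```

A bank of such kernels at several orientations, convolved with the gait silhouettes, gives the orientation-sensitive features that the proposed Gabor layer feeds into the rest of the CNN.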