• Title/Summary/Keyword: video-surveillance


Fire Detection Algorithm based on Color and Motion Information (색상과 움직임 정보 기반의 화재 감지 알고리즘)

  • Kim, Alla;Kim, Yoon-Ho
    • Journal of Advanced Navigation Technology / v.13 no.6 / pp.1011-1016 / 2009
  • In this paper, we propose a method of fire detection. The wide deployment of CCTV (closed-circuit television) cameras in public areas can be used not only for video surveillance but also for detecting fire occurrence. The proposed approach is based on visual information from a static camera. Video sequences are analyzed to find fire candidates, and a spatial analysis procedure is then carried out on the detected fire-like color foreground. Simulation results show that our method performs best when the spatial and temporal fire candidates change rapidly, in a manner close to fire motion.
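The paper's own implementation is not included here; as a minimal sketch of the color-plus-motion idea, assuming a simple brightness/redness rule and hand-picked thresholds that are not the authors' values, a check over a video stream could look like:

```python
import cv2
import numpy as np

def fire_candidate_mask(frame_bgr):
    """Flag bright, reddish pixels as fire-like (illustrative rule and thresholds)."""
    f = frame_bgr.astype(np.int32)
    b, g, r = f[..., 0], f[..., 1], f[..., 2]
    mask = (r > 180) & (r > g) & (g > b)
    return mask.astype(np.uint8) * 255

def detect_fire(video_path):
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        color_mask = fire_candidate_mask(frame)
        # Temporal cue: flames flicker, so also require change between consecutive frames.
        diff = cv2.absdiff(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY),
                           cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY))
        motion_mask = cv2.threshold(diff, 15, 255, cv2.THRESH_BINARY)[1]
        fire_mask = cv2.bitwise_and(color_mask, motion_mask)
        if cv2.countNonZero(fire_mask) > 500:  # assumed minimum pixel count
            print("possible fire region in this frame")
        prev = frame
    cap.release()
```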


A Segmentation Method for a Moving Object on a Static Complex Background Scene (복잡한 배경에서 움직이는 물체의 영역분할에 관한 연구)

  • Park, Sang-Min;Kwon, Hui-Ung;Kim, Dong-Sung;Jeong, Kyu-Sik
    • The Transactions of the Korean Institute of Electrical Engineers A / v.48 no.3 / pp.321-329 / 1999
  • Moving object segmentation extracts a moving object of interest from consecutive image frames and has been used for factory automation, autonomous navigation, video surveillance, and VOP (Video Object Plane) detection in MPEG-4. This paper proposes a new segmentation method in which difference images computed from three consecutive input frames are used to obtain both a coarse object area (AI) and its movement area (OI). The AI is extracted by removing the background using background area projection (BAP). Missing parts of the AI are recovered with the help of the OI: the boundary information of the OI confines the missing parts of the object and provides initial curves for active contour optimization. The optimized contours, together with the AI, form the boundaries of the moving object. Experimental results for a fast-moving object in a complex background scene are included.
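The BAP and active-contour stages are specific to the paper; a loose sketch of just the three-frame differencing that yields the coarse motion area, with an assumed threshold and kernel size, might be:

```python
import cv2
import numpy as np

def three_frame_motion_mask(f0, f1, f2, thresh=20):
    """Coarse motion area around the middle frame from three consecutive grayscale frames."""
    m01 = cv2.threshold(cv2.absdiff(f1, f0), thresh, 255, cv2.THRESH_BINARY)[1]
    m12 = cv2.threshold(cv2.absdiff(f2, f1), thresh, 255, cv2.THRESH_BINARY)[1]
    # Pixels that changed in both frame pairs roughly localize the object at f1.
    motion = cv2.bitwise_and(m01, m12)
    # Morphological closing fills small holes before any contour refinement.
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(motion, cv2.MORPH_CLOSE, kernel)
```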


Human Face Identification using KL Transform and Neural Networks (KL 변환과 신경망을 이용한 개인 얼굴 식별)

  • Kim, Yong-Joo;Ji, Seung-Hwan;Yoo, Jae-Hyung;Kim, Jung-Hwan;Park, Mignon
    • The Transactions of the Korean Institute of Electrical Engineers A / v.48 no.1 / pp.68-75 / 1999
  • Machine recognition of faces from still and video images is emerging as an active research area spanning several disciplines such as image processing, pattern recognition, computer vision, and neural networks. In addition, human face identification has numerous applications such as human-interface-based systems and real-time video systems for surveillance and security. In this paper, we propose an algorithm that can identify a particular individual face. We consider a human face identification system in color space, which has seldom been considered in conventional methods. To make the algorithm insensitive to luminance, we convert the conventional RGB coordinates into normalized CIE coordinates. The normalized-CIE-based facial images are KL-transformed, the transformed data are used as the input of a multi-layer neural network, and the network is trained using error back-propagation. Finally, we verify the performance of the proposed algorithm through experiments.
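Since the KL transform is essentially PCA, a minimal sketch of the coefficients-into-a-network pipeline, with random stand-in data, assumed layer sizes, and the normalized CIE color conversion omitted, could be:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

# Stand-in data: flattened face images and identity labels (assumed, not a real dataset).
rng = np.random.default_rng(0)
X = rng.random((200, 32 * 32))
y = rng.integers(0, 10, size=200)

# KL transform (PCA) reduces each face to a small coefficient vector.
kl = PCA(n_components=40).fit(X)
features = kl.transform(X)

# Multi-layer network trained with error back-propagation on the KL coefficients.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(features, y)
print(clf.predict(kl.transform(X[:5])))
```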


RAVIP: Real-Time AI Vision Platform for Heterogeneous Multi-Channel Video Stream

  • Lee, Jeonghun;Hwang, Kwang-il
    • Journal of Information Processing Systems / v.17 no.2 / pp.227-241 / 2021
  • Deep-learning-based object detection techniques such as YOLO offer high detection performance and precision on a single-channel video stream. Expanding to real-time multi-channel object detection, however, normally requires high-performance hardware. In this paper, we propose a novel back-end server framework, the real-time AI vision platform (RAVIP), which extends object detection from a single channel to simultaneous multi-channel operation and works well even on low-end server hardware. RAVIP assembles appropriate component modules from the RODEM (real-time object detection module) base to create a per-channel instance for each channel, and continuously monitors resource utilization to parallelize the detection instances efficiently on limited hardware. Practical experiments show that RAVIP can optimize CPU, GPU, and memory utilization while serving multi-channel object detection, and that it sustains 25 FPS on all 16 channels simultaneously.
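RODEM and the RAVIP internals are not described in code here; purely as a sketch of the one-detection-instance-per-channel idea, with placeholder stream sources and a dummy detector standing in for a real model such as YOLO, a worker-per-stream layout might look like:

```python
import threading
import queue
import cv2

def run_channel(channel_id, source, detect_fn, results):
    """One detection instance per video channel (illustrative, not the RODEM API)."""
    cap = cv2.VideoCapture(source)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results.put((channel_id, detect_fn(frame)))
    cap.release()

def dummy_detector(frame):
    return []  # placeholder for a real object detector

results = queue.Queue()
sources = {0: "channel0.mp4", 1: "channel1.mp4"}  # assumed stream sources
workers = [threading.Thread(target=run_channel,
                            args=(cid, src, dummy_detector, results), daemon=True)
           for cid, src in sources.items()]
for w in workers:
    w.start()
```

A production platform would additionally monitor CPU/GPU/memory utilization and rebalance instances across channels, which is the part RAVIP automates.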

Low Resolution Rate Face Recognition Based on Multi-scale CNN

  • Wang, Ji-Yuan;Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.21 no.12 / pp.1467-1472 / 2018
  • To address the problem that face images in surveillance video cannot be accurately identified due to low resolution, this paper proposes a low-resolution face recognition solution based on a convolutional neural network (CNN) model with multi-scale input. The model is an improvement over the existing "two-step method": low-resolution images are up-sampled using simple bicubic interpolation, and the up-sampled images are then mixed with high-resolution images as training samples. The CNN model learns the common feature space of the high- and low-resolution images and measures feature similarity through the cosine distance to produce the recognition result. Experiments on the CMU PIE and Extended Yale B datasets show that the accuracy of the model is better than the comparison methods; compared with CMDA_BGE, the algorithm with the highest recognition rate, the accuracy improves by 2.5%~9.9%.
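The trained multi-scale CNN is the core of the paper and is not reproduced here; as a bare sketch of the surrounding steps, bicubic up-sampling plus cosine similarity, with raw pixels standing in for the learned features, one might write:

```python
import cv2
import numpy as np

def bicubic_upsample(face_lr, size=(64, 64)):
    """Up-sample a low-resolution face crop with bicubic interpolation."""
    return cv2.resize(face_lr, size, interpolation=cv2.INTER_CUBIC)

def cosine_similarity(a, b):
    a, b = a.ravel().astype(np.float64), b.ravel().astype(np.float64)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# In the paper the compared vectors come from a CNN trained on mixed HR/LR samples;
# here raw up-sampled pixels are used only to keep the sketch self-contained.
gallery_hr = np.random.rand(64, 64).astype(np.float32)  # assumed enrolled face
probe_lr = cv2.resize(gallery_hr, (16, 16))              # simulated low-resolution probe
score = cosine_similarity(bicubic_upsample(probe_lr), gallery_hr)
print(f"cosine similarity: {score:.3f}")
```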

Real-Time Cattle Action Recognition for Estrus Detection

  • Heo, Eui-Ju;Ahn, Sung-Jin;Choi, Kang-Sun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.4 / pp.2148-2161 / 2019
  • In this paper, we present a real-time cattle action recognition algorithm to detect the estrus phase of cattle from a live video stream. To classify cattle movement, and specifically to detect the mounting action, the most observable sign of the estrus phase, a simple yet effective feature description exploiting motion history images (MHI) is designed. By learning the proposed features within the support vector machine framework, various representative cattle actions, such as mounting, walking, tail wagging, and foot stamping, can be recognized robustly in complex scenes. Thanks to the low complexity of the proposed action recognition algorithm, multiple cattle in three enclosures can be monitored simultaneously using a single fisheye camera. Through extensive experiments with real video streams, we confirmed that the proposed algorithm outperforms a conventional human action recognition algorithm by 18% in recognition accuracy, even with a much lower-dimensional feature description.
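The paper's exact MHI descriptor is not specified in the abstract; a minimal sketch, assuming a simple motion-recency histogram as the feature and dummy training labels, of how MHI features could feed an SVM:

```python
import cv2
import numpy as np
from sklearn.svm import SVC

MHI_DURATION = 1.0  # seconds of motion memory (assumed value)

def update_mhi(mhi, prev_gray, gray, timestamp, thresh=30):
    """Motion history image: pixels that moved recently hold large timestamps."""
    moved = cv2.absdiff(gray, prev_gray) > thresh
    mhi[moved] = timestamp
    mhi[mhi < timestamp - MHI_DURATION] = 0.0
    return mhi

def mhi_feature(mhi, timestamp, bins=16):
    """Illustrative descriptor: histogram of motion recency over the frame."""
    recency = np.clip((mhi - (timestamp - MHI_DURATION)) / MHI_DURATION, 0.0, 1.0)
    hist, _ = np.histogram(recency, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

# Features from labeled clips (mounting vs. other actions) would train the SVM.
X_train = np.random.rand(40, 16)       # assumed pre-computed MHI features
y_train = np.random.randint(0, 2, 40)  # 1 = mounting, 0 = other (dummy labels)
svm = SVC(kernel="rbf").fit(X_train, y_train)
```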

Abnormal Crowd Behavior Detection Using Heuristic Search and Motion Awareness

  • Usman, Imran;Albesher, Abdulaziz A.
    • International Journal of Computer Science & Network Security / v.21 no.4 / pp.131-139 / 2021
  • Anomaly detection is currently a primary concern of administrative authorities. Suspicious activity identification is shifting from human operators to machine-assisted monitoring in order to assist the operator and react quickly to unexpected incidents. These automatic surveillance systems face many challenges due to the intrinsically complex characteristics of video sequences and foreground human motion patterns. In this paper, we propose a novel approach to detecting anomalous human activity using a hybrid of a statistical model and Genetic Programming. A feature set of local motion patterns is generated from the video data by a statistical model in an unsupervised way. This feature set is fed to an enhanced Genetic-Programming-based classifier to separate normal and abnormal patterns. Experiments are performed on publicly available benchmark datasets under different real-life scenarios. Results show that the proposed methodology can detect and locate anomalous activity in real time, and its accuracy exceeds the existing state of the art in anomalous activity detection.
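The Genetic Programming classifier itself is beyond a short example; as a rough sketch of the unsupervised local-motion-feature side, with the GP stage replaced by a plain z-score test and all data assumed, one could write:

```python
import cv2
import numpy as np

def flow_magnitude_features(prev_gray, gray, grid=8):
    """Local motion pattern features: mean optical-flow magnitude per grid cell."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    h, w = mag.shape
    cells = mag[:h - h % grid, :w - w % grid].reshape(grid, h // grid, grid, w // grid)
    return cells.mean(axis=(1, 3)).ravel()

# Unsupervised statistical model of normal motion: per-cell mean and spread.
normal_feats = np.random.rand(100, 64)  # assumed features from normal training clips
mu, sigma = normal_feats.mean(axis=0), normal_feats.std(axis=0) + 1e-6

def is_abnormal(feat, z_thresh=3.0):
    """Flag a frame whose local motion deviates strongly from the normal model."""
    return bool(np.any(np.abs(feat - mu) / sigma > z_thresh))

print(is_abnormal(np.random.rand(64)))
```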

Traffic-Accident-in-Alley Prevention System by Object Tracking in Video Surveillance Camera Streaming Video (비디오 감시 카메라 내 사물 추적을 통한 골목길 교차로 사고 예방 시스템)

  • Kim, Hyungjin;Kim, Juneyoung;Park, Juhong;Shim, Jaeuk;Ko, Seokju;Kim, Jeongseok
    • Proceedings of the Korea Information Processing Society Conference / 2020.05a / pp.536-539 / 2020
  • Because alleys are narrow and have no separation between the roadway and the sidewalk, they contain many blind spots and pedestrian movement is hard to predict, so many traffic accidents occur there. This paper therefore proposes a system that uses AI to track objects in video and prevent accidents in alleys. The system uses object detection and tracking to identify and follow pedestrians and vehicles, and raises an accident-prevention alarm when two or more objects approach an intersection at the same time. If applied to the CCTV cameras already installed nationwide, the system is expected to be deployable across the country without additional cost or installation time.
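The detector and tracker are not named beyond "object detection & tracking"; the alarm rule itself is simple enough to sketch, assuming pixel coordinates for the intersection and an approach radius that are purely illustrative:

```python
import math

INTERSECTION = (320, 240)  # assumed image coordinates of the alley intersection
APPROACH_RADIUS = 80       # assumed pixel radius counted as "approaching"

def approaching(track_xy, center=INTERSECTION, radius=APPROACH_RADIUS):
    """A tracked centroid counts as approaching if it lies inside the radius."""
    return math.dist(track_xy, center) < radius

def accident_alarm(tracks):
    """Raise an alarm when two or more tracked objects approach the intersection."""
    near = [t for t in tracks if approaching(t)]
    return len(near) >= 2

# Example: a pedestrian and a car near the corner trigger the alarm.
print(accident_alarm([(300, 250), (350, 230), (50, 60)]))  # True
```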

Implementation of H.264/SVC Decoder Based on Embedded DSP (임베디드 DSP 기반 H.264/SVC 복호기 구현)

  • Kim, Youn-Il;Baek, Doo-San;Kim, Jae-Gon;Kim, Jin-Soo
    • Journal of Broadcast Engineering / v.16 no.6 / pp.1018-1025 / 2011
  • The Scalable Video Coding (SVC) extension of H.264/AVC is a video coding standard for media convergence that provides videos of different spatial, temporal, and quality layers within a single bitstream. Recently, real-time SVC codecs have been developed for application areas such as surveillance video and mobile video. This paper presents the design and implementation of an H.264/SVC decoder on an embedded DSP using the Open SVC Decoder (OSD), a real-time software decoder designed for the PC environment. The implementation consists of porting the C code of the OSD software from the PC to the DSP environment, profiling and further optimizing its complexity, and integrating the optimized decoder into the TI DaVinci EVM (Evaluation Module). The resulting DSP-based SVC decoder can decode 50 QCIF/CIF frames or 15 SD frames per second.

A real-time multiple vehicle tracking method for traffic congestion identification

  • Zhang, Xiaoyu;Hu, Shiqiang;Zhang, Huanlong;Hu, Xing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.6 / pp.2483-2503 / 2016
  • Traffic congestion is a severe problem in many modern cities around the world. Real-time and accurate traffic congestion identification gives advanced traffic management systems a reliable basis for taking action. The most commonly used data sources for congestion monitoring are loop detectors, GPS data, and video surveillance. Video-based traffic monitoring systems have gained much attention due to their advantages, such as low cost, flexibility in redesigning the system, and a rich information source for human understanding. In general, most existing video-based systems for monitoring road traffic rely on stationary cameras and multiple vehicle tracking. However, most commonly used multiple vehicle tracking methods lack effective track initiation schemes. Based on the observation that vehicle motion usually obeys a constant velocity model, a novel vehicle recognition method is proposed: the state of a recognized vehicle is sent to the GM-PHD filter as a birth target, which relieves the insensitivity of the GM-PHD filter to newly entering vehicles. Combined with advanced vehicle detection and data association techniques, this multiple vehicle tracking method is used to identify traffic congestion. It can be implemented in real time with high accuracy and robustness. The advantages of the proposed method are validated on four real traffic datasets.
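A full GM-PHD filter is well beyond a short sketch; as a minimal illustration of the constant velocity assumption behind the birth targets, with the state layout and frame interval assumed, the initialization and prediction steps could look like:

```python
import numpy as np

DT = 1.0 / 25.0  # assumed frame interval (25 FPS video)

# State [x, y, vx, vy] with a constant-velocity transition matrix.
F = np.array([[1, 0, DT, 0],
              [0, 1, 0, DT],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)

def birth_target_from_detections(xy_now, xy_prev):
    """Initialize a birth-target state for a newly detected vehicle."""
    vx, vy = (np.asarray(xy_now) - np.asarray(xy_prev)) / DT
    return np.array([xy_now[0], xy_now[1], vx, vy], dtype=float)

def predict(state):
    """Constant-velocity prediction of a vehicle state for the next frame."""
    return F @ state

track = birth_target_from_detections((120.0, 80.0), (118.0, 79.0))
print(predict(track))  # position advanced by one frame of motion
```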