• Title/Summary/Keyword: Surveillance video


An Implementation of Embedded Video Surveillance System (임베디드 영상감시 시스템 구현)

  • Ahn, Sung-Ho;Lee, Kyung-Hee;Kwak, Ji-Young;Kim, Doo-Hyun
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2003.05a
    • /
    • pp.47-50
    • /
    • 2003
  • One of the areas drawing the most attention in IT recently is information appliances. In particular, the home server is the central system in home automation (HA). A home server installed in the home networks the information appliances internally and uses the Internet as the basic communication environment externally. In this environment, video surveillance for home security is one of the key functions. To perform video surveillance, a surveillance module is mounted on the home server. The home server is an embedded system running Qplus, an embedded Linux-based operating system, and the video surveillance module operates on the basis of SIP (Session Initiation Protocol). SIP is an application-layer signaling protocol that has recently gained prominence and is widely used as a core technology in VoIP (Voice over IP). The video codec follows the ITU-T H.261 standard, and the surveillance function is provided not only through the home server but also through handheld devices such as PDAs. This paper describes the design and implementation of the embedded video surveillance system.
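
The abstract names SIP signaling and the H.261 codec but gives no protocol details. As a rough illustration only (the host addresses, tags, and identifiers below are hypothetical, not taken from the paper), the following sketch assembles the kind of SIP INVITE with an SDP body that a surveillance module could send to offer an H.261 video stream (RTP/AVP payload type 31 per RFC 3551):

```python
# Hypothetical example: building a SIP INVITE that offers an H.261 video stream.
# All addresses, tags, and identifiers are made up for illustration.
def build_invite(local_ip="192.0.2.10", remote="viewer@203.0.113.5", rtp_port=49170):
    sdp = "\r\n".join([
        "v=0",
        f"o=homeserver 2890844526 2890844526 IN IP4 {local_ip}",
        "s=Home surveillance session",
        f"c=IN IP4 {local_ip}",
        "t=0 0",
        f"m=video {rtp_port} RTP/AVP 31",   # payload type 31 = H.261 (RFC 3551)
        "a=rtpmap:31 H261/90000",
    ]) + "\r\n"
    headers = "\r\n".join([
        f"INVITE sip:{remote} SIP/2.0",
        f"Via: SIP/2.0/UDP {local_ip}:5060;branch=z9hG4bK776asdhds",
        "Max-Forwards: 70",
        f"From: <sip:homeserver@{local_ip}>;tag=1928301774",
        f"To: <sip:{remote}>",
        "Call-ID: a84b4c76e66710",
        "CSeq: 314159 INVITE",
        f"Contact: <sip:homeserver@{local_ip}:5060>",
        "Content-Type: application/sdp",
        f"Content-Length: {len(sdp)}",
    ])
    return headers + "\r\n\r\n" + sdp

if __name__ == "__main__":
    print(build_invite())
```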


Implementation of fall-down detection algorithm based on Image Processing (영상처리 기반 낙상 감지 알고리즘의 구현)

  • Kim, Seon-Gi;Ahn, Jong-Soo;Kim, Won-Ho
    • Journal of Satellite, Information and Communications
    • /
    • v.12 no.2
    • /
    • pp.56-60
    • /
    • 2017
  • This paper describes the design and implementation of a fall-down detection algorithm based on image processing. The algorithm separates objects by applying background subtraction and binarization after grayscale conversion of the input image acquired from the camera, and recognizes the human body using a labeling operation. The recognized human body can be monitored on the display image, and an alarm is generated when a fall is detected. In computer simulation, the proposed algorithm showed a detection rate of 90%. We verify the feasibility of the proposed system through function tests on a prototype implemented on a DSP image processing board.
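
The processing chain described above (grayscale conversion, background subtraction, binarization, labeling) maps directly onto standard OpenCV operations. A minimal sketch of that chain is shown below; the threshold values and the aspect-ratio rule used to flag a fall are illustrative assumptions, not values taken from the paper.

```python
import cv2
import numpy as np

def detect_fall(frame, background, diff_thresh=30, min_area=500):
    """Rough sketch of the described pipeline: grayscale -> background
    subtraction -> binarization -> labeling -> simple fall heuristic.
    `background` is assumed to be a grayscale background image."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, background)                             # background subtraction
    _, binary = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)   # labeling
    for i in range(1, n):                                            # skip background label 0
        x, y, w, h, area = stats[i]
        if area < min_area:
            continue
        # Illustrative heuristic: a body region wider than it is tall is
        # treated as a possible fall (not the paper's exact rule).
        if w > 1.5 * h:
            return True, (x, y, w, h)
    return False, None
```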

A Frequency Spectrum Analysis based on FFT of Fire Thermal Image (FFT를 이용한 화재 열영상의 주파수 스펙트럼 분석)

  • Kim, Won-Ho;Jang, Bok-Gyu
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.12 no.1
    • /
    • pp.33-37
    • /
    • 2011
  • This paper presents a frequency spectrum analysis, based on the FFT, of infrared fire thermal images; the objective is to derive conditions for determining a fire alarm through image processing in the frequency domain. After candidate regions are separated using a predefined brightness value, the fast Fourier transform is applied to consecutive infrared thermal images, and the DC and AC frequency distributions of the thermal images are analyzed. A fire criterion for thermal images is presented based on the analysis, and its practicality is confirmed through computer simulation.
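
Since the paper analyzes DC and AC frequency content of consecutive thermal frames, a plausible reading is that the FFT is taken over the temporal intensity signal of candidate-region pixels. The sketch below follows that reading; the brightness threshold and the use of the region's mean intensity are assumptions for illustration.

```python
import numpy as np

def temporal_spectrum(frames, brightness_thresh=200):
    """frames: (T, H, W) stack of consecutive infrared thermal images.
    Returns the DC component and mean AC magnitude of the candidate
    region's mean intensity over time."""
    frames = np.asarray(frames, dtype=np.float64)
    candidate = frames[0] >= brightness_thresh       # pre-defined brightness value
    signal = frames[:, candidate].mean(axis=1)       # mean region intensity per frame
    spectrum = np.abs(np.fft.rfft(signal))           # one-sided FFT magnitude
    dc = spectrum[0] / len(signal)
    ac = spectrum[1:].mean() if len(spectrum) > 1 else 0.0
    return dc, ac
```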

Robust Object Detection Algorithm Using Spatial Gradient Information (SG 정보를 이용한 강인한 물체 추출 알고리즘)

  • Joo, Young-Hoon;Kim, Se-Jin
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.18 no.3
    • /
    • pp.422-428
    • /
    • 2008
  • In this paper, we propose a robust object detection algorithm that uses spatial gradient information. First, we eliminate error values caused by complex environments and varying illumination by applying prior hue- and intensity-based methods to the input video and background. Visible shadows are removed from the foreground using an RGB color model and a qualified RGB color model, and unnecessary values are removed using the HSI color model. The background is then removed completely, leaving a silhouette that is restored using spatial gradients and the HSI color model. Finally, we validate the applicability of the proposed method under various indoor and outdoor conditions in complex environments.
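
As a rough sketch of the kind of operations the abstract names (color-based shadow suppression and a spatial-gradient silhouette), the snippet below combines an HSV mask with Sobel gradients; the thresholds and the use of HSV in place of HSI are assumptions for illustration, not the paper's method.

```python
import cv2
import numpy as np

def silhouette_from_gradient(frame, background, sat_min=40, grad_thresh=40):
    """Illustrative combination of color masking and spatial gradients."""
    fg = cv2.absdiff(frame, background)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)            # HSV stands in for HSI here
    color_mask = (hsv[..., 1] > sat_min).astype(np.uint8)   # drop low-saturation, shadow-like pixels
    gray = cv2.cvtColor(fg, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)         # spatial gradient
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    edge_mask = (cv2.magnitude(gx, gy) > grad_thresh).astype(np.uint8)
    silhouette = cv2.bitwise_and(color_mask, edge_mask) * 255
    # Close small gaps so the silhouette is contiguous.
    return cv2.morphologyEx(silhouette, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
```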

Salient Motion Information Detection Method Using Weighted Subtraction Image and Motion Vector (가중치 차 영상과 움직임 벡터를 이용한 두드러진 움직임 정보 검출 방법)

  • Kim, Sun-Woo;Ha, Tae-Ryeong;Park, Chun-Bae;Choi, Yeon-Sung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.11 no.4
    • /
    • pp.779-785
    • /
    • 2007
  • Moving object detection is very important for modern video surveillance. In certain cases, motion can be categorized into two types: salient and non-salient. In this paper, we first compute a temporal difference image to extract moving objects and adapt to dynamic environments. We then propose a new algorithm that detects salient motion information in complex environments by combining the temporal difference image with a binary block image computed from motion vectors obtained using MPEG-4 and EPZS. The algorithm is effective for detecting objects in complex environments where many different motions are mixed.
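
The core combination described, a temporal difference image gated by a block-level motion-vector map, can be sketched as follows. This sketch uses exhaustive block matching in place of the paper's MPEG-4/EPZS estimator, and the block size and thresholds are assumptions.

```python
import numpy as np

def block_motion_mask(prev, curr, block=16, search=4, mv_thresh=2.0):
    """Binary block image: 1 where the best-matching block moved noticeably."""
    h, w = curr.shape
    mask = np.zeros((h // block, w // block), dtype=np.uint8)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = curr[y:y + block, x:x + block].astype(np.int32)
            best, best_dxy = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue
                    cand = prev[yy:yy + block, xx:xx + block].astype(np.int32)
                    sad = np.abs(ref - cand).sum()      # sum of absolute differences
                    if best is None or sad < best:
                        best, best_dxy = sad, (dx, dy)
            if np.hypot(*best_dxy) >= mv_thresh:
                mask[by, bx] = 1
    return mask

def salient_motion(prev, curr, diff_thresh=25, block=16):
    """Keep only pixels where both the frame difference and block motion agree."""
    H = (prev.shape[0] // block) * block
    W = (prev.shape[1] // block) * block
    diff = np.abs(curr[:H, :W].astype(np.int32) - prev[:H, :W].astype(np.int32)) > diff_thresh
    blocks = block_motion_mask(prev[:H, :W], curr[:H, :W], block=block)
    motion = np.kron(blocks, np.ones((block, block), dtype=np.uint8)) > 0
    return (diff & motion).astype(np.uint8) * 255
```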

A Study on the Moving Object Tracking System Using Multi-feature Matching (다양한 특징 매칭을 이용한 움직이는 물체 추적 시스템에 관한 연구)

  • Piao, Zai-Jun;Kim, Sun-Woo;Choi, Yeon-Sung;Park, Chun-Bae;Ha, Tae-Ryeong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.11 no.4
    • /
    • pp.786-792
    • /
    • 2007
  • Moving object tracking is very important in video surveillance systems. This paper presents a method for tracking moving objects in an outdoor environment. To track a moving object, we first extract it using a weighted subtraction image and apply a morphological close operator to reduce noise. The detected object is then tracked by matching the extracted multi-feature information. Because it relies on multi-feature matching, the proposed technique can accurately track objects that suddenly move or stop, and it tracks moving objects efficiently by combining spatial position, shape, and intensity information.
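
A minimal sketch of the multi-feature matching idea follows: position, shape (area and aspect ratio), and intensity are combined into one weighted matching score. The feature weights and the feature vector layout are illustrative assumptions, not the paper's values.

```python
import numpy as np

def feature_vector(cx, cy, area, aspect, mean_intensity):
    """Pack spatial position, shape, and intensity into one feature vector."""
    return np.array([cx, cy, area, aspect, mean_intensity], dtype=np.float64)

def match_track(track_feat, candidates, weights=(1.0, 1.0, 0.01, 50.0, 0.5)):
    """Pick the candidate detection whose weighted feature distance to the
    current track is smallest; returns (index, score)."""
    w = np.asarray(weights)
    scores = [np.sqrt(((w * (c - track_feat)) ** 2).sum()) for c in candidates]
    best = int(np.argmin(scores))
    return best, scores[best]

# Usage sketch: update the track with the best-matching detection each frame.
# track = feature_vector(120, 80, 900, 2.1, 130)
# idx, score = match_track(track, detections_this_frame)
```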

Development of Access Management System based on Face Recognition using ResNet (ResNet을 이용한 얼굴 인식 기반 출입관리시스템 개발)

  • Rhyou, Se-Yeol;Kim, Hye-Jin;Cha, Kyung-Ae
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.8
    • /
    • pp.823-831
    • /
    • 2019
  • In recent years, surveillance and access control systems that use face recognition instead of a password or an RFID chip have been developed, reducing the risk of forgery. Moreover, deep learning has been applied to real-time face recognition in video, making it possible to develop access control systems with improved recognition accuracy and management efficiency. In this paper, we propose a real-time access management system based on face recognition using ResNet. The system is built on a web server that manages access by recognizing the person in the camera image against access information stored in a database, and a user application can retrieve various information from it. The implemented system identifies a person in real time and controls access by accurately distinguishing whether they are a member; in our tests it recognizes a person in 0.2 seconds, and the recognition accuracy reaches up to about 97% depending on the experimental environment. With this system, access can be managed quickly and effectively even when many people arrive at once.
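
As a hedged sketch of how a ResNet can back a face-recognition gate (the paper does not publish its code), the snippet below uses a torchvision ResNet as a generic embedding network and compares embeddings with cosine similarity. The model choice, the similarity threshold, and the assumption that faces are already cropped and normalized are all illustrative.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Illustrative only: a generic ImageNet ResNet-50 trunk used as an embedding
# extractor; a real deployment would fine-tune it on face data.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()      # drop the classifier, keep 2048-d features
backbone.eval()

@torch.no_grad()
def embed(face_batch):
    """face_batch: (N, 3, 224, 224) tensor of normalized, cropped faces."""
    return F.normalize(backbone(face_batch), dim=1)

def is_member(face, member_embeddings, threshold=0.8):
    """Grant access if the face is close enough to any registered member.
    member_embeddings: (M, 2048) L2-normalized embeddings from the database."""
    query = embed(face.unsqueeze(0))          # (1, 2048)
    sims = member_embeddings @ query.T        # cosine similarity
    return bool(sims.max() >= threshold)
```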

Adaptive Background Modeling Considering Stationary Object and Object Detection Technique based on Multiple Gaussian Distribution

  • Jeong, Jongmyeon;Choi, Jiyun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.23 no.11
    • /
    • pp.51-57
    • /
    • 2018
  • In this paper, we studied the extraction of parameters and the implementation of a speechreading system to recognize the eight Korean vowels. Facial features are detected by amplifying and reducing image values and comparing the values, which vary across different color spaces. The eye positions, the nose position, the inner lip boundary, the outer boundary of the upper lip, and the outer line of the teeth are found as features; from this analysis, the inner-lip area, the height and width of the inner lip, the ratio of the outer tooth-line length to the inner mouth area, and the distance between the nose and the outer boundary of the upper lip are used as parameters. 2,400 data samples were gathered and analyzed. Based on this analysis, a neural network was constructed and recognition experiments were performed. Five normal subjects participated in the experiments, and the observational error between samples was corrected using a normalization method. The experiments show very encouraging results regarding the usefulness of the parameters.
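
The abstract as indexed here describes feeding a small set of geometric lip parameters to a neural network. A minimal sketch of such a classifier is shown below; the five-feature input, layer sizes, and per-subject normalization are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Minimal sketch of a feed-forward classifier over the geometric lip parameters
# the abstract lists (inner-lip area, inner-lip height and width, tooth-line
# ratio, nose-to-upper-lip distance). Layer sizes are assumptions.
class VowelClassifier(nn.Module):
    def __init__(self, n_features=5, n_vowels=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, n_vowels),
        )

    def forward(self, x):          # x: (batch, n_features), normalized per subject
        return self.net(x)
```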

Separation of Occluding Pigs using Deep Learning-based Image Processing Techniques (딥 러닝 기반의 영상처리 기법을 이용한 겹침 돼지 분리)

  • Lee, Hanhaesol;Sa, Jaewon;Shin, Hyunjun;Chung, Youngwha;Park, Daihee;Kim, Hakjae
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.2
    • /
    • pp.136-145
    • /
    • 2019
  • The crowded environment of a domestic pig farm is highly vulnerable to the spread of infectious diseases such as foot-and-mouth disease, and studies have been conducted to automatically analyze the behavior of pigs in a crowded pig farm through a camera-based video surveillance system. Although occluding pigs must be correctly separated in order to track each individual pig, extracting their boundaries quickly and accurately is a challenging issue due to complicated occlusion patterns such as X and T shapes. In this study, we propose a fast and accurate method to separate occluding pigs by exploiting the strengths of You Only Look Once (YOLO), one of the fast deep learning-based object detectors, while overcoming its limitation as a bounding box-based detector through test-time data augmentation with rotation. Experimental results with two-pig occlusion patterns show that the proposed method provides better accuracy and processing speed than Mask R-CNN, one of the widely used state-of-the-art deep learning-based segmentation techniques (the improvement over Mask R-CNN was about 11 times in terms of the accuracy/processing speed performance metric).
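
The key trick described, compensating for YOLO's axis-aligned bounding boxes with test-time rotation, can be sketched as below. The `detect` function stands in for any YOLO-style detector and is a placeholder, and the choice of rotation angles is an assumption.

```python
import cv2
import numpy as np

def detect(image):
    """Placeholder for a YOLO-style detector returning [(x, y, w, h, score), ...]."""
    raise NotImplementedError

def rotate_detect(image, angles=(0, 15, 30, 45)):
    """Test-time rotation augmentation: rotate, detect, map box centers back."""
    h, w = image.shape[:2]
    center = (w / 2, h / 2)
    results = []
    for angle in angles:
        M = cv2.getRotationMatrix2D(center, angle, 1.0)
        rotated = cv2.warpAffine(image, M, (w, h))
        inv = cv2.invertAffineTransform(M)
        for (x, y, bw, bh, score) in detect(rotated):
            cx, cy = x + bw / 2, y + bh / 2
            # Map the detected box center back into the original image's coordinates.
            ox = inv[0, 0] * cx + inv[0, 1] * cy + inv[0, 2]
            oy = inv[1, 0] * cx + inv[1, 1] * cy + inv[1, 2]
            results.append((angle, ox, oy, bw, bh, score))
    return results
```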

Constrained adversarial loss for generative adversarial network-based faithful image restoration

  • Kim, Dong-Wook;Chung, Jae-Ryun;Kim, Jongho;Lee, Dae Yeol;Jeong, Se Yoon;Jung, Seung-Won
    • ETRI Journal
    • /
    • v.41 no.4
    • /
    • pp.415-425
    • /
    • 2019
  • Generative adversarial networks (GANs) have been successfully used in many image restoration tasks, including image denoising, super-resolution, and compression artifact reduction. By fully exploiting their characteristics, state-of-the-art image restoration techniques can generate images with photorealistic details. However, many applications require faithful rather than visually appealing reconstruction, such as medical imaging, surveillance, and video coding. We found that previous GAN-training methods, which use a loss function formed as a weighted sum of fidelity and adversarial losses, fail to reduce the fidelity loss. This results in non-negligible degradation of objective image quality, including peak signal-to-noise ratio. Our approach alternates between the fidelity and adversarial losses so that minimizing the adversarial loss does not deteriorate fidelity. Experimental results on compression-artifact reduction and super-resolution tasks show that the proposed method performs faithful and photorealistic image restoration.
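
The core idea, alternating between a fidelity objective and an adversarial objective instead of minimizing their weighted sum, can be sketched as a PyTorch-style training step. The optimizer setup, loss choices (L1 for fidelity, BCE for the adversarial term), and the simple even/odd alternation schedule are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def train_step(generator, discriminator, g_opt, d_opt, degraded, target, step):
    """One illustrative step that alternates fidelity and adversarial updates."""
    restored = generator(degraded)

    # Discriminator update (standard GAN loss on real vs. restored images).
    d_opt.zero_grad()
    real_logits = discriminator(target)
    fake_logits = discriminator(restored.detach())
    d_loss = (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))
    d_loss.backward()
    d_opt.step()

    # Generator update: alternate the objective instead of summing the two losses.
    g_opt.zero_grad()
    if step % 2 == 0:
        g_loss = F.l1_loss(restored, target)                 # fidelity step
    else:
        fake_logits = discriminator(restored)
        g_loss = F.binary_cross_entropy_with_logits(          # adversarial step
            fake_logits, torch.ones_like(fake_logits))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```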