• Title/Summary/Keyword: background difference image

Search results: 337

Smoke Detection using Block-based Difference Images and Projections (블록기반 차영상과 투영 그래프를 이용한 연기검출)

  • Kim, Dong-Keun;Kim, Won-Ho
    • The KIPS Transactions:PartB
    • /
    • v.14B no.5
    • /
    • pp.361-368
    • /
    • 2007
  • In this paper, we propose a smoke detection method based on block-wise differences between image frames in video. The proposed method consists of three steps: (a) detection of regions that have changed relative to the background, (b) update of the background, and (c) determination of smoke within the changed regions. We first construct a block mean image for each frame. To extract the changed regions, we compute the block-wise difference between the background's block mean image and the current frame's block mean image. By applying projections to this block-based difference image, we determine the changed regions as rectangles. We also propose an update scheme for the background's block mean image that uses the same projections. Finally, we decide whether a changed region contains smoke using the temporal statistics of its central position and the YUV color within the region.
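
The block-wise difference and projection step described in this abstract can be illustrated with a short NumPy sketch; the block size, threshold, and function names below are assumptions, not the authors' implementation.

```python
import numpy as np

def block_mean_image(gray, block=8):
    """Average each non-overlapping block x block region of a grayscale frame."""
    h, w = gray.shape
    h, w = h - h % block, w - w % block          # crop to a multiple of the block size
    g = gray[:h, :w].reshape(h // block, block, w // block, block)
    return g.mean(axis=(1, 3))

def changed_rect(bg_blocks, cur_blocks, thresh=15.0):
    """Locate the changed region as a rectangle from row/column projections
    of the block-based difference image."""
    diff = np.abs(cur_blocks.astype(np.float32) - bg_blocks.astype(np.float32)) > thresh
    rows = diff.any(axis=1)                      # vertical projection
    cols = diff.any(axis=0)                      # horizontal projection
    if not rows.any():
        return None                              # nothing changed against the background
    r0, r1 = np.flatnonzero(rows)[[0, -1]]
    c0, c1 = np.flatnonzero(cols)[[0, -1]]
    return r0, r1, c0, c1                        # block coordinates of the bounding rectangle
```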

Motion Detection using Adaptive Background Image and Pixel Space (적응적 배경영상과 픽셀 간격을 이용한 움직임 검출)

  • 지정규;이창수;오해석
    • Journal of Information Technology Applications and Management
    • /
    • v.10 no.3
    • /
    • pp.45-54
    • /
    • 2003
  • Security systems using web cameras have developed remarkably in the Internet era. Using images transmitted from a remote camera, such a system can recognize the current situation and take appropriate action through the web. Existing motion detection methods simply use difference images, background image techniques, or block matching algorithms that establish an initial block, set a search area, and search for the most similar block. However, these methods have difficulty detecting motion accurately because of noise. The proposed method takes the difference between the input image and the initial background image and then updates the changed parts of the background image over time in units of an $N{\times}M$ pixel mask. Motion is then detected efficiently by examining pixels at a fixed interval instead of processing every pixel.
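
A rough sketch of the pixel-interval check and adaptive background update; the running-average update below stands in for the paper's $N{\times}M$ mask scheme, and the step size, threshold, and blending factor are assumptions.

```python
import numpy as np

def detect_motion_sampled(frame, background, step=4, thresh=25):
    """Compare only every `step`-th pixel against the background
    instead of the full image (step and threshold are assumed values)."""
    f = frame[::step, ::step].astype(np.int16)
    b = background[::step, ::step].astype(np.int16)
    changed = np.abs(f - b) > thresh
    return changed.mean() > 0.01                 # motion if enough sampled pixels changed

def update_background(frame, background, alpha=0.05):
    """Blend the current frame into the background so it adapts over time."""
    return (alpha * frame + (1 - alpha) * background).astype(frame.dtype)
```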


Low-light Image Enhancement Based on Frame Difference and Tone Mapping (프레임 차와 톤 매핑을 이용한 저조도 영상 향상)

  • Jeong, Yunju;Lee, Yeonghak;Shim, Jaechang;Jung, Soon Ki
    • Journal of Korea Multimedia Society
    • /
    • v.21 no.9
    • /
    • pp.1044-1051
    • /
    • 2018
  • In this paper, we propose a new method to improve low-light images. To bring the quality of a night image containing a moving object close to that of a daytime image, the following tasks are performed. First, we reduce the noise of the input night image and enhance it with a tone mapping method. Second, we segment the input night image into a foreground with motion and a background without motion. Motion is detected using both the difference between the current frame and the previous frame and the difference between the current frame and the night background image. The background region of the output takes pixels from the corresponding positions of the daytime image, while the foreground regions take pixels from the corresponding positions of the tone-mapped image. Experimental results show that the proposed method improves visual quality more clearly than existing methods.
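
The compositing step (daytime pixels for the static background, tone-mapped night pixels for moving regions) might look roughly like the following sketch; the threshold and the way the two frame differences are combined are assumptions.

```python
import numpy as np

def composite_low_light(cur, prev, night_bg, day_bg, tone_mapped, thresh=20):
    """Build the output frame: moving pixels come from the tone-mapped night frame,
    static pixels from the daytime background (threshold and AND-combination assumed)."""
    d1 = np.abs(cur.astype(np.int16) - prev.astype(np.int16)).max(axis=2)
    d2 = np.abs(cur.astype(np.int16) - night_bg.astype(np.int16)).max(axis=2)
    motion = (d1 > thresh) & (d2 > thresh)       # foreground where both differences agree
    out = day_bg.copy()
    out[motion] = tone_mapped[motion]            # keep enhanced night pixels for moving objects
    return out
```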

Codebook-Based Foreground Extraction Algorithm with Continuous Learning of Background (연속적인 배경 모델 학습을 이용한 코드북 기반의 전경 추출 알고리즘)

  • Jung, Jae-Young
    • Journal of Digital Contents Society
    • /
    • v.15 no.4
    • /
    • pp.449-455
    • /
    • 2014
  • Detection of moving objects is a fundamental task in most computer vision applications, such as video surveillance, activity recognition, and human motion analysis. It is difficult due to many challenges in realistic scenarios, including irregular background motion, illumination changes, cast shadows, changes in scene geometry, and noise. In this paper, we propose a foreground extraction algorithm based on a codebook, a database of information about background pixels obtained from the input image sequence. Initially, we take the first frame as the background image and compute the difference between the next input image and it to detect moving objects. The resulting difference image may contain noise as well as genuine moving objects. Second, we look up the codebook using the color and brightness of each foreground pixel in the difference image; if a match is found, the pixel is judged to be falsely detected and is removed from the foreground. Finally, the background image is updated so that the next input frame can be processed iteratively: pixels detected as background are estimated from the input image, while the others are copied from the previous background image. We apply our algorithm to the PETS2009 data and compare the results with those of the GMM and standard codebook algorithms.
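
The codebook look-up that rejects falsely detected foreground pixels could be sketched as below; the codeword layout and tolerance values are assumptions rather than the paper's actual parameters.

```python
import numpy as np

def match_codebook(pixel, codewords, color_tol=10.0, bright_lo=0.8, bright_hi=1.2):
    """Return True if the pixel matches a background codeword by color and brightness,
    i.e. it was falsely detected as foreground (tolerances are assumed values)."""
    b = float(np.linalg.norm(pixel))                       # pixel brightness
    for mean_color, mean_brightness in codewords:          # assumed codeword layout
        color_dist = np.linalg.norm(pixel - mean_color)
        if color_dist < color_tol and bright_lo * mean_brightness <= b <= bright_hi * mean_brightness:
            return True
    return False
```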

Convergence Control of Moving Object using Opto-Digital Algorithm in the 3D Robot Vision System

  • Ko, Jung-Hwan;Kim, Eun-Soo
    • Journal of Information Display
    • /
    • v.3 no.2
    • /
    • pp.19-25
    • /
    • 2002
  • In this paper, a new target extraction algorithm is proposed in which the coordinates of the target are obtained adaptively by using difference image information and an optical BPEJTC (binary phase extraction joint transform correlator), with which the target object can be segmented from the input image and background noise removed in a stereo vision system. The proposed algorithm first extracts the target object by removing background noise through the difference image of sequential left images, and then controls the pan/tilt and convergence angle of the stereo camera using the coordinates of the target position obtained from the optical BPEJTC between the extracted target image and the input image. Experimental results show that the proposed algorithm can extract the target object from an input image with background noise and then track it effectively in real time. Finally, the possibility of implementing an adaptive stereo object tracking system based on the proposed algorithm is also suggested.
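
Only the difference-image preprocessing lends itself to a software sketch; the BPEJTC correlation itself is performed optically and is not modeled here. The threshold below is an assumption.

```python
import numpy as np

def extract_target(prev_left, cur_left, thresh=30):
    """Segment the moving target from sequential left-camera frames by thresholding
    their difference; the resulting image would then feed the optical correlator."""
    diff = np.abs(cur_left.astype(np.int16) - prev_left.astype(np.int16))
    mask = diff > thresh
    target = np.where(mask, cur_left, 0)                   # suppress background pixels
    ys, xs = np.nonzero(mask)
    centroid = (ys.mean(), xs.mean()) if ys.size else None # rough target coordinates
    return target, centroid
```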

Effects of Gas Background Temperature Difference(Emissivity) on OGI(Optical Gas Image) Clarity (가스의 배경 온도 차이(방사율)가 OGI(Optical Gas Image)의 선명도에 미치는 영향)

  • Park, Su-Ri;Han, Sang-Wook;Kim, Byung-Jick;Hong, Cheol-Jae
    • Journal of the Korean Institute of Gas
    • /
    • v.21 no.5
    • /
    • pp.1-8
    • /
    • 2017
  • Currently, gas safety management in industrial fields is performed with LDAR as a contact method or with methane leak detectors as a non-contact method. However, LDAR requires a great deal of manpower, and methane leak detectors are limited to methane only. Research on OGI (optical gas imaging) has therefore attracted considerable industrial attention. This study was undertaken to examine the effect of the background temperature difference of a gas cloud on the clarity of the OGI. A background temperature control panel was constructed to cool the background, and OGI images were taken at various methane ejection rates and designed temperature differences. The experimental results showed that the OGI at a temperature difference of $-6^{\circ}C$ is clearer than the OGI at a temperature difference of zero. To quantify the difference in clarity, an RGB analysis in MATLAB was employed: the RGB value of the OGI at ${\Delta}T = -6^{\circ}C$ was 20% lower than that at ${\Delta}T = 0^{\circ}C$. The difference in clarity with temperature difference can be explained by the total radiation law. When the background temperature of the gas is lower than the air temperature, the radiation energy entering the OGI lens increases, and as the energy increases the OGI image becomes clearer.
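
The RGB-based clarity measurement was done in MATLAB; a comparable check in Python might look like the sketch below, where the file names are hypothetical and only the mean RGB comparison is reproduced.

```python
import numpy as np
from PIL import Image

def mean_rgb(path):
    """Average R, G, B values of an OGI frame (a stand-in for the MATLAB analysis)."""
    return np.asarray(Image.open(path).convert("RGB"), dtype=np.float64).mean(axis=(0, 1))

# Hypothetical file names; a lower mean RGB corresponds to a clearer gas plume here.
# cold = mean_rgb("ogi_dT_minus6.png")
# base = mean_rgb("ogi_dT_zero.png")
# print((base - cold) / base * 100)   # percentage drop in RGB value
```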

Design and Implementation of the Security System for the Moving Object Detection (이동물체 검출을 위한 보안 시스템의 설계 및 구현)

  • 안용학;안일영
    • Convergence Security Journal
    • /
    • v.2 no.1
    • /
    • pp.77-86
    • /
    • 2002
  • In this paper, we propose a segmentation algorithm that can reliably separate moving objects from a noisy background in an image sequence received from a camera at a fixed position. Image segmentation is one of the most difficult steps in image processing, and adaptation to changes in the environment must be considered to increase accuracy. The proposed algorithm consists of four processes: generating the difference image between the input image and the reference image, removing background noise by modeling the noise in the difference image histogram, selecting candidate initial regions using local maxima of the difference image, and gradually expanding the connected regions, region by region, using shape information. Test results show that the proposed algorithm can detect moving objects such as intruders very effectively in noisy environments.
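
The first two stages (histogram-based noise removal and local-maxima seed selection) could be sketched as follows; the percentile cutoff stands in for the paper's statistical noise model and is an assumption.

```python
import numpy as np

def noise_threshold(diff, percentile=95):
    """Estimate a noise cutoff from the difference-image histogram; pixels below
    it are treated as background noise (percentile chosen as an assumption)."""
    return np.percentile(diff, percentile)

def candidate_seeds(diff, thresh):
    """Pick local maxima of the thresholded difference image as initial regions
    for the subsequent region-growing step."""
    d = np.where(diff > thresh, diff, 0)
    seeds = []
    for y in range(1, d.shape[0] - 1):
        for x in range(1, d.shape[1] - 1):
            patch = d[y - 1:y + 2, x - 1:x + 2]
            if d[y, x] > 0 and d[y, x] == patch.max():
                seeds.append((y, x))
    return seeds
```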


Design Of Intrusion Detection System Using Background Machine Learning

  • Kim, Hyung-Hoon;Cho, Jeong-Ran
    • Journal of the Korea Society of Computer and Information
    • /
    • v.24 no.5
    • /
    • pp.149-156
    • /
    • 2019
  • The existing subtract-image-based intrusion detection system for CCTV digital images cannot distinguish intruders from moving backgrounds that exist in the natural environment. In this paper, we address this problem by designing a real-time intrusion detection system for CCTV digital images that combines subtract-image-based intrusion detection with background-learning artificial neural network technology. The proposed system consists of three stages: subtract-image-based intrusion detection, background artificial neural network learning, and background artificial neural network evaluation. The final intrusion detection result combines the result of the subtract-image-based detection with the final result of the background artificial neural network. In the subtract-image-based stage, intrusion is determined from the difference image between the background cumulative average image and the current frame. In the learning stage, the background is learned in a situation in which no intrusion occurs, divided into detection-window units set by the user. In the evaluation stage, the learned network classifies each detection window as background or intrusion. The proposed background-learning intrusion detection system detects intrusion more precisely than the existing subtract-image-based system and adaptively performs machine learning on the background, so it can operate as a highly practical intrusion detection system.
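
The subtract-image stage (cumulative average background, per-window difference check) might be sketched as below; the window size, threshold, and hit ratio are assumptions, and the background neural network is not modeled.

```python
import numpy as np

class CumulativeBackground:
    """Running (cumulative) average background; a detection window is flagged when
    enough of its pixels differ from the current frame (parameters are assumed)."""

    def __init__(self):
        self.mean = None
        self.count = 0

    def update(self, frame):
        """Fold the current frame into the cumulative average background."""
        f = frame.astype(np.float64)
        self.count += 1
        self.mean = f if self.mean is None else self.mean + (f - self.mean) / self.count

    def intrusion_windows(self, frame, win=32, thresh=25, ratio=0.2):
        """Return top-left corners of detection windows whose difference against
        the background exceeds the ratio of changed pixels."""
        diff = np.abs(frame.astype(np.float64) - self.mean) > thresh
        h, w = diff.shape
        hits = []
        for y in range(0, h - win + 1, win):
            for x in range(0, w - win + 1, win):
                if diff[y:y + win, x:x + win].mean() > ratio:
                    hits.append((y, x))          # these windows would go to the background ANN
        return hits
```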

A Video Traffic Flow Detection System Based on Machine Vision

  • Wang, Xin-Xin;Zhao, Xiao-Ming;Shen, Yu
    • Journal of Information Processing Systems
    • /
    • v.15 no.5
    • /
    • pp.1218-1230
    • /
    • 2019
  • This study proposes a novel video traffic flow detection method based on machine vision technology. The three-frame difference method, a kind of motion evaluation method, is used to establish the initial background image, and a statistical scoring strategy is then chosen to update the background image in real time. Finally, the background difference method is used to detect moving objects. Meanwhile, a simple but effective shadow elimination method is introduced to improve the accuracy of moving object detection. Furthermore, the study also proposes a vehicle matching and tracking strategy that combines characteristics such as the vehicle's location, color, and fractal dimension information. Experimental results show that this detection method can quickly and effectively detect various traffic flow parameters, laying a solid foundation for enhancing the degree of automation in traffic management.
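
A minimal sketch of the three-frame difference used to build the initial background; the threshold and averaging scheme are assumptions, and the statistical scoring update and shadow elimination are not modeled.

```python
import numpy as np

def three_frame_motion(f1, f2, f3, thresh=20):
    """Classic three-frame difference: a pixel is moving only if it differs from
    both the previous and the next frame (threshold is an assumed value)."""
    d12 = np.abs(f2.astype(np.int16) - f1.astype(np.int16)) > thresh
    d23 = np.abs(f3.astype(np.int16) - f2.astype(np.int16)) > thresh
    return d12 & d23

def init_background(frames, thresh=20):
    """Average only the pixels that the three-frame difference marks as static
    to obtain an initial background image."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    cnt = np.zeros(frames[0].shape, dtype=np.float64)
    for f1, f2, f3 in zip(frames, frames[1:], frames[2:]):
        static = ~three_frame_motion(f1, f2, f3, thresh)
        acc[static] += f2[static]
        cnt[static] += 1
    return (acc / np.maximum(cnt, 1)).astype(frames[0].dtype)
```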