• Title/Summary/Keyword: video object extraction

Composition of Foreground and Background Images using Optical Flow and Weighted Border Blending (옵티컬 플로우와 가중치 경계 블렌딩을 이용한 전경 및 배경 이미지의 합성)

  • Gebreyohannes, Dawit;Choi, Jung-Ju
    • Journal of the Korea Computer Graphics Society
    • /
    • v.20 no.3
    • /
    • pp.1-8
    • /
    • 2014
  • We propose a method to compose a foreground object into a background image, where the foreground object is a part (or a region) of an image taken by a front-facing camera and the background image is a whole image taken by a back-facing camera of a smartphone at the same time. Recent high-end cell phones have two cameras and provide users with a preview video before taking photos. We extract the foreground object that is moving along with the front-facing camera using the optical flow during the preview, and compose the extracted foreground object into a background image using a simple image composition technique. For a better-looking result in the composed image, we apply a border smoothing technique using a weighted-border mask to blend transparency from background to foreground. Since constructing and grouping pixel-level dense optical flow is quite slow even on high-end cell phones, we compute the mask for extracting the foreground object on a low-resolution image, which greatly reduces the computational cost. Experimental results show the effectiveness of our extraction and composition techniques, with much less computational time for extracting the foreground object and better composition quality compared with the Poisson image editing technique that is widely used in image composition. The proposed method can also partially mitigate the color-bleeding artifacts observed in Poisson image editing by using weighted-border blending.
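The weighted-border blending step described in this abstract can be illustrated with a short sketch. The snippet below is not the authors' code; it assumes a binary foreground mask has already been extracted (for example by grouping optical-flow vectors, as the paper does) and simply feathers the mask border so that alpha falls off from foreground to background. Function and file names are illustrative.

```python
import cv2
import numpy as np

def weighted_border_blend(foreground, background, mask, border=15):
    """Blend a foreground region into a background image.

    A binary mask (255 = foreground) is softened near its border so that
    alpha decays smoothly from foreground to background, which is the
    general idea behind weighted-border blending.
    """
    # Feather the mask: blurring turns the hard border into a smooth ramp.
    alpha = cv2.GaussianBlur(mask.astype(np.float32) / 255.0,
                             (2 * border + 1, 2 * border + 1), 0)
    alpha = alpha[..., None]                      # broadcast over color channels
    fg = foreground.astype(np.float32)
    bg = background.astype(np.float32)
    composed = alpha * fg + (1.0 - alpha) * bg    # per-pixel convex combination
    return composed.clip(0, 255).astype(np.uint8)

# Example usage (file names are placeholders):
# fg  = cv2.imread("front_camera_frame.png")
# bg  = cv2.imread("back_camera_frame.png")
# m   = cv2.imread("foreground_mask.png", cv2.IMREAD_GRAYSCALE)
# out = weighted_border_blend(fg, bg, m)
```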

A Background Initialization for Video Surveillance

  • Lim Kang Mo;Lee Se Yeun;Shin Chang Hoon;Kim Yoon Ho;Lee Joo Shin
    • Proceedings of the IEEK Conference
    • /
    • 2004.08c
    • /
    • pp.810-813
    • /
    • 2004
  • In this paper, a background initialization method for video surveillance is proposed. In the proposed algorithm, $n$ background frames are sampled during ${\Delta}t$, and every sampled frame is divided into blocks of size $M{\times}N$. The average pixel value of each block location is taken across the sampled frames during ${\Delta}t$, and the maximum intensity $\alpha$ and the minimum intensity $\beta$ are obtained, respectively, giving the initial intensity range of the background image. An incoming frame is likewise divided into $M{\times}N$ blocks, and the average intensity $\eta$ of the pixels in each block is obtained. If the average intensity $\eta$ is outside the initial range of the background image, the block is decided to be a moving object; if the average intensity $\eta$ lies within the initial range, it is decided to be background. To examine the validity of the proposed algorithm, accuracy and robustness are evaluated for humans and cars in indoor and outdoor environments. The error rate of the proposed method is lower and its extraction rate is higher than those of existing methods.
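A minimal sketch of this block-based range initialization and classification is given below. It assumes grayscale frames and an illustrative block size, and follows the abstract's description rather than the authors' implementation.

```python
import numpy as np

def init_background_range(frames, block=16):
    """Per-block [beta, alpha] intensity range from n sampled grayscale frames."""
    h, w = frames[0].shape
    gh, gw = h // block, w // block
    means = np.empty((len(frames), gh, gw), dtype=np.float32)
    for k, f in enumerate(frames):
        blocks = f[:gh * block, :gw * block].reshape(gh, block, gw, block)
        means[k] = blocks.mean(axis=(1, 3))       # average intensity per block
    return means.min(axis=0), means.max(axis=0)   # (beta, alpha)

def classify_blocks(frame, beta, alpha, block=16):
    """True where a block's average intensity eta lies outside [beta, alpha],
    i.e. the block is declared a moving object; False means background."""
    gh, gw = beta.shape
    blocks = frame[:gh * block, :gw * block].reshape(gh, block, gw, block)
    eta = blocks.mean(axis=(1, 3))
    return (eta < beta) | (eta > alpha)
```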

Improved Extraction of Representative Motion Vector Using Background Information in Digital Cinema Environment (디지털 시네마 환경에서 배경정보를 이용한 대표 움직임 정보 추출)

  • Park, Il-Cheol;Kwon, Goo-Rak
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.6
    • /
    • pp.731-736
    • /
    • 2012
  • Digital cinema has been attracting growing interest in recent years. The combination of visually immersive 3D movies with chair movements and other physical effects adds to the enjoyment. In current digital cinemas, the movement of the chair is controlled manually; by analyzing the cinema's video sequences, the chair movement can be controlled automatically. In the proposed method, the motions of the focused object and of the background are first identified, and then the motion vector information is extracted using the 9-search range. The representative motion vector is determined only from the movement of the background while the object is stationary. The motion information extracted from the digital cinema sequences is used to control the movement of the chair. The experimental results show that the proposed method outperforms existing methods in terms of accuracy.
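As a rough illustration of how a per-block motion vector can be extracted within a bounded search range, the sketch below performs exhaustive block matching with the sum of absolute differences. The block size, search range, and the way a representative vector would be aggregated are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def block_motion_vector(prev, curr, top, left, block=16, search=9):
    """Best (dy, dx) displacement for one block of `curr`, searched in `prev`
    by SAD over displacements in [-search, search]."""
    ref = curr[top:top + block, left:left + block].astype(np.int32)
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > prev.shape[0] or x + block > prev.shape[1]:
                continue
            cand = prev[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(ref - cand).sum()          # sum of absolute differences
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

# A representative motion vector for the background could then be taken,
# for example, as the median of the vectors computed on background blocks only.
```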

Adaptive Background Modeling Considering Stationary Object and Object Detection Technique based on Multiple Gaussian Distribution

  • Jeong, Jongmyeon;Choi, Jiyun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.23 no.11
    • /
    • pp.51-57
    • /
    • 2018
  • In this paper, we study parameter extraction and the implementation of a speechreading system to recognize the eight Korean vowels. Facial features are detected by amplifying and reducing image values and by comparing values represented in various color spaces. The eye positions, the nose position, the inner boundary of the lips, the outer boundary of the upper lip, and the outer line of the teeth are found as features. From this analysis, the area of the inner lips, the height and width of the inner lips, the ratio of the outer tooth line length to the inner mouth area, and the distance between the nose and the outer boundary of the upper lip are used as parameters. 2,400 data samples are gathered and analyzed. Based on this analysis, a neural network is constructed and recognition experiments are performed. In the experiments, five normal subjects were sampled, and the observational error between samples was corrected using a normalization method. The experiments show very encouraging results regarding the usefulness of the parameters.

Anomaly detection of isolating switch based on single shot multibox detector and improved frame differencing

  • Duan, Yuanfeng;Zhu, Qi;Zhang, Hongmei;Wei, Wei;Yun, Chung Bang
    • Smart Structures and Systems
    • /
    • v.28 no.6
    • /
    • pp.811-825
    • /
    • 2021
  • High-voltage isolating switches play a paramount role in ensuring the safety of power supply systems. However, their exposure to outdoor environmental conditions may cause serious physical defects, which may pose great risk to power supply systems and society. Image processing-based methods have been used for anomaly detection, but their accuracy is affected by numerous uncertainties arising from manually extracted features, which keeps the anomaly detection of isolating switches challenging. In this paper, a vision-based anomaly detection method for isolating switches is proposed that uses the rotational angle of the switch system for more accurate and direct anomaly detection, with the help of deep learning (DL) and image processing methods: the Single Shot Multibox Detector (SSD), an improved frame differencing method, and the Hough transform. The SSD is a deep learning method for object classification and localization. In addition, an improved frame differencing method is introduced for better feature extraction, and a Hough transform method is adopted for rotational angle calculation. A number of experiments are conducted for anomaly detection of single and multiple switches using video frames. The results demonstrate that the SSD outperforms the You-Only-Look-Once network. The effectiveness and robustness of the proposed method are shown under various conditions, such as different illumination and camera locations, using 96 videos from the experiments.
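As a hedged illustration of the frame-differencing-plus-Hough-transform part of such a pipeline (the SSD detection and cropping are assumed to have happened already), the sketch below thresholds the frame difference and estimates the orientation of the dominant moving line. Threshold and parameter values are illustrative, not those of the paper.

```python
import cv2
import numpy as np

def arm_angle(prev_gray, curr_gray, diff_thresh=25):
    """Estimate the orientation (degrees) of the moving switch arm.

    Frame differencing isolates moving pixels; the dominant line among
    them is found with the Hough transform, and its angle is returned.
    """
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, motion = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    edges = cv2.Canny(motion, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 80)
    if lines is None:
        return None
    rho, theta = lines[0][0]                     # strongest detected line
    return np.degrees(theta)

# The rotational angle of the switch is then the difference between the
# arm orientations estimated at two time instants.
```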

Auto-Analysis of Traffic Flow through Semantic Modeling of Moving Objects (움직임 객체의 의미적 모델링을 통한 차량 흐름 자동 분석)

  • Choi, Chang;Cho, Mi-Young;Choi, Jun-Ho;Choi, Dong-Jin;Kim, Pan-Koo
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.8 no.6
    • /
    • pp.36-45
    • /
    • 2009
  • Recently, there has been growing interest in automatic traffic-flow analysis and accident detection using various kinds of low-level information from road video. In this paper, an algorithm for automatic traffic-flow analysis and its application to traffic accident detection in traffic management systems are studied. To achieve this, spatio-temporal relation models using topological and directional relations are built, and the proposed models are matched to the directional motion verbs from Levin's class of verbs of inherently directed motion. Finally, synonyms and antonyms are added using WordNet. To measure the similarity between the proposed models and the trajectories of moving objects in the video, the objects are extracted and their trajectories are compared with the proposed models. Because each proposed model has different features, the generated rules are applied to similarity measurement by TSR (Tangent Space Representation). Through this research, the results can be extended to automatic vehicle accident detection using CCTV.
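One common way to realize a tangent-space comparison of trajectories is sketched below under the assumption of simple 2-D point trajectories: each trajectory is described by its turning angle as a function of normalized arc length, resampled to a fixed grid, and two signatures are compared by an L2 distance. This is an illustrative reading of TSR, not the paper's exact rules.

```python
import numpy as np

def tangent_space(traj, samples=64):
    """Turning-angle signature of a 2-D trajectory (sequence of (x, y) points)."""
    traj = np.asarray(traj, dtype=np.float64)
    seg = np.diff(traj, axis=0)                             # segment vectors
    angles = np.unwrap(np.arctan2(seg[:, 1], seg[:, 0]))    # heading along the path
    arc = np.cumsum(np.hypot(seg[:, 0], seg[:, 1]))
    arc = arc / arc[-1]                                      # normalized arc length
    grid = np.linspace(0.0, 1.0, samples)
    return np.interp(grid, arc, angles)                      # resampled signature

def tsr_distance(traj_a, traj_b, samples=64):
    """L2 distance between the tangent-space signatures of two trajectories."""
    a = tangent_space(traj_a, samples)
    b = tangent_space(traj_b, samples)
    return float(np.linalg.norm(a - b) / np.sqrt(samples))
```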

Real-Time Place Recognition for Augmented Mobile Information Systems (이동형 정보 증강 시스템을 위한 실시간 장소 인식)

  • Oh, Su-Jin;Nam, Yang-Hee
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.14 no.5
    • /
    • pp.477-481
    • /
    • 2008
  • Place recognition is necessary for a mobile user to be provided with place-dependent information. This paper proposes a real-time, video-based place recognition system that identifies the user's current place while moving in a building. For scene feature extraction, existing methods based on global feature analysis have the drawback of being sensitive to partial occlusion and noise. Local-feature-based methods, which usually attempt object recognition, are hard to apply in real-time systems because of their high computational cost. On the other hand, statistical methods such as HMMs (hidden Markov models) and Bayesian networks have been used to derive place recognition results from the feature data. The former, however, is not practical because it requires a huge effort to gather the training data, while the latter usually depends on object recognition only. This paper proposes a combined approach of global and local feature analysis to complement the drawbacks of both. The proposed method is applied to a mobile information system and shows real-time performance with competitive recognition results.
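A combined global-plus-local scene descriptor could look something like the sketch below, which concatenates a global HSV color histogram with mean-pooled ORB keypoint descriptors. This is only one plausible way to combine the two kinds of features and is not the authors' design.

```python
import cv2
import numpy as np

def scene_feature(image_bgr, n_keypoints=100):
    """Concatenate a global HSV color histogram with pooled local ORB descriptors."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [16, 16], [0, 180, 0, 256])
    hist = cv2.normalize(hist, None).flatten()             # global part

    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=n_keypoints)
    _, desc = orb.detectAndCompute(gray, None)             # local part
    local = desc.mean(axis=0) / 255.0 if desc is not None else np.zeros(32)

    return np.concatenate([hist, local])                   # combined descriptor
```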

3D Position Information Extraction of Video Image for Motion Simulation (모션 시뮬레이션을 위한 동영상에서의 3D 위치 정보 추출)

  • 박혜선;강신국;박민호;김항준
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2003.04c
    • /
    • pp.223-225
    • /
    • 2003
  • A pattern-based AR (Augmented Reality) system is a good way to accurately register virtual objects in real-time video. To implement an AR system, the 3D position information of the scene viewed by the camera must first be extracted. In this paper, we propose a system that automatically extracts the 3D position information of a chessboard image viewed by the camera and renders a virtual object that moves in synchronization with it. The proposed method can extract relatively accurate 3D position information with a single camera, without any sensors or markers, using only temporal information, and can realize natural 3D motion simulation from the extracted 3D position information.
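The standard single-camera way to recover the 3D pose of a chessboard pattern, which could serve as a starting point for the kind of pattern-based registration described above, is sketched below with OpenCV. The camera intrinsics are assumed to be known from prior calibration, and the pattern size and square size are illustrative values rather than the paper's settings.

```python
import cv2
import numpy as np

def chessboard_pose(gray, camera_matrix, dist_coeffs, pattern=(7, 6), square=0.025):
    """Rotation and translation of a chessboard relative to the camera.

    `pattern` is the number of inner corners (cols, rows) and `square`
    is the square size in meters; both are placeholder values.
    """
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        return None
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
    # 3-D coordinates of the corners in the board's own coordinate frame.
    obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square
    ok, rvec, tvec = cv2.solvePnP(obj, corners, camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None
```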

A study of Real-Time Face Recognition using Web CAM and Ideal Hair style Adaption Method (웹캠을 이용한 실시간 얼굴인식과 이상적 헤어스타일 적용방법에 관한 연구)

  • Kang, Nam-Soon
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.11 no.2
    • /
    • pp.532-539
    • /
    • 2010
  • This paper proposes a system that combines the existing field of hair styling with the field of image/video processing to search for and apply hair styles. The proposed system stores various hair styles in a database; users then capture images of their faces with a webcam and send them over the internet, and the system finds suitable hair styles for them.

Web-based Video Monitoring System on Real Time using Object Extraction (객체 추출을 이용한 실시간 웹기반 영상감시 시스템)

  • Lee, Keun-Wang;Oh, Taek-Hwan
    • Proceedings of the KAIS Fall Conference
    • /
    • 2006.05a
    • /
    • pp.426-429
    • /
    • 2006
  • Object tracking in real-time video has been one of the topics of interest in computer vision and many practical application areas for years. However, the target object is sometimes missed because noise in the background image is mistakenly recognized as an object. In this paper, we propose a method for extracting objects from real-time video using an adaptive background image. To remove noise in the background region of the input image and to extract objects robustly against illumination, the adaptive background image is generated by updating, in real time, only the background region and not the object region. Objects are then extracted using the difference between the background image and the input image from the camera.
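A minimal sketch of this selective background-update idea is given below, assuming grayscale frames. The learning rate and threshold are illustrative values, and the code is an interpretation of the abstract rather than the authors' implementation.

```python
import cv2
import numpy as np

def update_and_extract(frame_gray, background, alpha=0.02, thresh=30):
    """One step of adaptive background subtraction.

    The background model is updated only where no object was detected,
    so foreground objects do not bleed into the model; the object mask
    comes from thresholding the background difference.
    """
    frame = frame_gray.astype(np.float32)
    diff = cv2.absdiff(frame, background)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)  # object mask

    bg_region = mask == 0                                  # pixels judged background
    background[bg_region] = ((1 - alpha) * background[bg_region]
                             + alpha * frame[bg_region])   # selective running average
    return mask.astype(np.uint8), background

# Typical use: initialize `background` with the first grayscale frame
# (as float32) and call update_and_extract() for every subsequent frame.
```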
