• Title/Summary/Keyword: Foreground Extraction

Real-time Video Matting for Mobile Device (모바일 환경에서 실시간 영상 전경 추출 연구)

  • Yoon, Jong-Chul
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.11 no.5 / pp.487-492 / 2018
  • Recently, various image processing applications have been ported to the mobile environment as video shooting on mobile devices has become widespread. However, extracting the image foreground, one of the most important functions in image synthesis, is difficult on mobile hardware because it requires complex computation. In this paper, we propose a video synthesis technique that divides video captured by mobile devices into foreground and background and composites the foreground onto target images in real time. Considering the characteristics of mobile shooting, our system can automatically extract the foreground of input video that contains weak camera motion. Using SIMD and GPGPU-based acceleration algorithms, SD-quality video can be processed on a mobile device in real time.
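
The abstract outlines a real-time pipeline that separates a mobile-shot video into foreground and background and composites the foreground onto a target image. The paper's SIMD/GPGPU-accelerated matting is not reproduced here; the sketch below only illustrates that general pipeline in Python/OpenCV, using a stock background subtractor as a stand-in and placeholder file names.

```python
import cv2
import numpy as np

# Illustrative sketch only: OpenCV's stock MOG2 background subtractor stands in
# for the paper's mobile-optimized matting; "input.mp4" and "stage.jpg" are
# placeholder file names, and compositing uses a hard binary mask.
cap = cv2.VideoCapture("input.mp4")
target = cv2.imread("stage.jpg")
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                        # raw foreground mask
    mask = cv2.medianBlur(mask, 5)                        # suppress speckle noise
    _, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
    bg = cv2.resize(target, (frame.shape[1], frame.shape[0]))
    composite = np.where(mask[..., None] > 0, frame, bg)  # paste foreground onto target
    cv2.imshow("composite", composite)
    if cv2.waitKey(1) == 27:                              # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```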

A Basic Study on the Fire Flame Extraction of Non-Residential Facilities Based on Core Object Extraction (핵심 객체 추출에 기반한 비주거 시설의 화재불꽃 추출에 관한 기초 연구)

  • Park, Changmin
    • Journal of Korea Society of Digital Industry and Information Management / v.13 no.4 / pp.71-79 / 2017
  • Recently, fire watching and dangerous-substance monitoring systems have been developed to enhance fire-related security, and fire flame extraction plays a very important role in such monitoring systems. In this study, we propose a fire flame extraction method for non-residential facilities based on core object extraction in the image. A core object is defined as a comparatively large object at the center of the image. First, an input image and its decreased-resolution image are segmented. Segmented regions are classified as outer or inner regions: an outer region is adjacent to the boundaries of the image, and the rest are inner regions. Core object regions and core background regions are then selected from the inner and outer regions, respectively. Core object regions are the representative regions for the object and are selected using information about region size and location. Each inner region is classified as foreground or background by comparing its color histogram intersection values against the core object region and the core background region. Finally, the extracted core object region is taken as the fire flame object in the image. Through experiments, we find that the method can provide a basic measure for responding effectively and quickly to fires in non-residential facilities.
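
The foreground/background decision above rests on a color histogram intersection between each inner region and the core object and core background regions. Below is a minimal sketch of that comparison, assuming the segmentation step has already yielded each region's pixels as an (N, 3) RGB array; the function names and bin count are illustrative.

```python
import numpy as np

def color_histogram(pixels, bins=8):
    """Normalized 3-D color histogram of an (N, 3) array of RGB pixels."""
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return hist / max(hist.sum(), 1)

def intersection(h1, h2):
    """Histogram intersection: sum of element-wise minima (1.0 = identical)."""
    return np.minimum(h1, h2).sum()

def classify_region(region_pixels, core_object_pixels, core_background_pixels):
    """Label an inner region as foreground or background by comparing its
    histogram-intersection score against the core object and core background."""
    h_region = color_histogram(region_pixels)
    score_obj = intersection(h_region, color_histogram(core_object_pixels))
    score_bg = intersection(h_region, color_histogram(core_background_pixels))
    return "foreground" if score_obj >= score_bg else "background"
```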

Enhanced Object Extraction Method Based on Multi-channel Saliency Map (Saliency Map 다중 채널을 기반으로 한 개선된 객체 추출 방법)

  • Choi, Young-jin;Cui, Run;Kim, Kwang-Rag;Kim, Hyoung Joong
    • Journal of the Institute of Electronics and Information Engineers / v.53 no.2 / pp.53-61 / 2016
  • Extracting a focused object with a saliency map remains one of the most demanding research problems in computer vision because saliency is hard to estimate. In this paper, we propose an enhanced object extraction method based on a multi-channel saliency map that works automatically, without machine learning. The proposed method achieves higher object extraction accuracy than the Itti method by combining SLIC superpixels, Euclidean color distance, and the LBP texture descriptor. Experimental results show that our approach can be used for automatic object extraction without any prior training procedure, focusing on the main object in the image instead of estimating the whole image from background to foreground.
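
The abstract names SLIC, Euclidean distance, and LBP as the ingredients of the multi-channel saliency map. The sketch below combines a Lab color-contrast channel and an LBP texture channel over SLIC superpixels into a crude saliency map; the specific channels, weighting, and file name are assumptions, not the paper's exact formulation.

```python
import numpy as np
from skimage import io, color
from skimage.segmentation import slic
from skimage.feature import local_binary_pattern

# Hedged sketch of the multi-channel idea only.
img = io.imread("input.jpg")                  # placeholder file name
lab = color.rgb2lab(img)
gray = color.rgb2gray(img)
segments = slic(img, n_segments=200, compactness=10)
labels = np.unique(segments)

lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
mean_lab = np.array([lab[segments == s].mean(axis=0) for s in labels])
mean_lbp = np.array([lbp[segments == s].mean() for s in labels])

# Per-superpixel contrast against the global mean of each channel (Euclidean).
col_contrast = np.linalg.norm(mean_lab - mean_lab.mean(axis=0), axis=1)
tex_contrast = np.abs(mean_lbp - mean_lbp.mean())
score = col_contrast / col_contrast.max() + tex_contrast / tex_contrast.max()

# Broadcast superpixel scores back to a per-pixel saliency map.
saliency = np.zeros(segments.shape)
for s, v in zip(labels, score):
    saliency[segments == s] = v
```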

Green Chroma Keying for Robot Performances in Public Places (공공장소에서 로봇 공연용 그린 크로마키 합성)

  • Hwang, Heesoo
    • Journal of the Korea Convergence Society / v.8 no.7 / pp.7-13 / 2017
  • Robot performances in public places such as events, exhibitions, and streets, rather than on dedicated stages, are conducted to promote robot technology and attract interest. This paper extracts robot images in real time from a robot operating in front of a green chroma key cloth and composites them onto various stage images. A simple and robust method is proposed for extracting the foreground robot from the chroma key background without any user preset. After increasing the color difference between the background and the foreground, the method automatically removes the background based on the histogram of the difference information, thereby eliminating the need for a user preset. The simulation shows a foreground extraction rate of 98.8%, and experimental results demonstrate that the robots can be effectively extracted from the background.
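
A hedged sketch of the preset-free keying idea: a green-dominance difference image amplifies the background/foreground color difference, and a histogram-based threshold (Otsu's method is used here as a stand-in for the paper's own histogram analysis) removes the background automatically.

```python
import cv2
import numpy as np

def chroma_key_mask(frame_bgr):
    """Foreground mask for a green-screen frame.

    The green-dominance map G - max(R, B) amplifies the color difference
    between background and foreground; Otsu's histogram-based threshold then
    removes the background without a user preset.
    """
    f = frame_bgr.astype(np.int16)
    b, g, r = f[..., 0], f[..., 1], f[..., 2]
    green_dominance = np.clip(g - np.maximum(r, b), 0, 255).astype(np.uint8)
    _, background = cv2.threshold(green_dominance, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.bitwise_not(background)        # foreground = not strongly green

# Usage (placeholder files): composite the keyed robot onto a stage image.
# frame = cv2.imread("robot_frame.png"); stage = cv2.imread("stage.png")
# mask = chroma_key_mask(frame)
# out = np.where(mask[..., None] > 0, frame, stage)
```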

A Novel Segment Extraction and Stereo Matching Technique using Color, Motion and Initial Depth from Depth Camera (컬러, 움직임 정보 및 깊이 카메라 초기 깊이를 이용한 분할 영역 추출 및 스테레오 정합 기법)

  • Um, Gi-Mun;Park, Ji-Min;Bang, Gun;Cheong, Won-Sik;Hur, Nam-Ho;Kim, Jin-Woong
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.12C / pp.1147-1153 / 2009
  • We propose a novel image segmentation and segment-based stereo matching technique that uses color, depth, and motion information. The proposed technique first splits reference images into foreground and background regions using depth information from a depth camera. Each region is then segmented into small segments using color information. Furthermore, segments extracted in the current frame are tracked into the next frame in order to maintain depth consistency between frames. The initial depth from the depth camera is also used to set the depth search range for stereo matching. The proposed segment-based stereo matching technique was compared with a conventional technique without foreground/background separation and another without motion tracking of segments. Simulation results showed that the proposed technique improves the consistency of segment extraction and depth estimation compared to the conventional ones, especially in static background regions.
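
One concrete element above is using the depth camera's initial depth to restrict the disparity search range. The simplified per-segment SAD matcher below illustrates only that idea; the function, its parameters, and the cost are illustrative stand-ins for the paper's segment-based matcher.

```python
import numpy as np

def segment_disparity(left, right, segment_mask, d_init, d_margin=8):
    """Pick one disparity for a segment by SAD matching, searching only within
    +/- d_margin of the disparity predicted from the depth camera's initial
    depth. left/right are rectified grayscale arrays; segment_mask is boolean."""
    ys, xs = np.nonzero(segment_mask)
    best_d, best_cost = d_init, np.inf
    for d in range(max(0, d_init - d_margin), d_init + d_margin + 1):
        valid = xs - d >= 0                      # stay inside the right image
        if not valid.any():
            continue
        cost = np.abs(left[ys[valid], xs[valid]].astype(float) -
                      right[ys[valid], xs[valid] - d].astype(float)).mean()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```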

A Study on Extraction of Central Objects in Color Images (칼라 영상에서의 중심 객체 추출에 관한 연구)

  • 김성영;박창민;권규복;김민환
    • Journal of Korea Multimedia Society / v.5 no.6 / pp.616-624 / 2002
  • In this paper, an extraction method for central objects in color images is proposed. A central object is defined as a comparatively large object at the center of the image. First, an input image and its decreased-resolution image are segmented. Segmented regions are classified as outer or inner regions: an outer region is adjacent to the boundaries of the image, and the rest are inner regions. Core object regions and core background regions are then selected from the inner and outer regions, respectively. Core object regions are the representative regions for the object and are selected using information about region size and location. Each inner region is classified as foreground or background by comparing its color histogram intersection values against the core object regions and the core background regions. The core object regions and the foreground regions together constitute the central object in the image.
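
As a small illustration of the outer/inner classification used here (and in the core-object method above), the sketch below separates region labels that touch the image boundary from those that do not; the later histogram-intersection comparison is the same as in the earlier sketch.

```python
import numpy as np

def split_outer_inner(labels):
    """Split segmentation labels into outer regions (those touching the image
    boundary) and inner regions (all others). `labels` is an (H, W) array of
    region labels from any segmentation algorithm."""
    border = np.concatenate([labels[0, :], labels[-1, :],
                             labels[:, 0], labels[:, -1]])
    outer = set(np.unique(border).tolist())
    inner = set(np.unique(labels).tolist()) - outer
    return outer, inner
```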

Deep Learning-based Vehicle Anomaly Detection using Road CCTV Data (도로 CCTV 데이터를 활용한 딥러닝 기반 차량 이상 감지)

  • Shin, Dong-Hoon;Baek, Ji-Won;Park, Roy C.;Chung, Kyungyong
    • Journal of the Korea Convergence Society / v.12 no.2 / pp.1-6 / 2021
  • In modern society, traffic problems are increasing as vehicle ownership grows. In particular, the incidence of highway traffic accidents is low, but the fatality rate is high, so technologies for detecting vehicle anomalies are being studied, among them deep learning-based vehicle anomaly detection. Such technology detects vehicle abnormalities such as a vehicle stopped due to an accident or engine failure, so that when an abnormality occurs on the road, a quick response can be made at the driver's location. In this study, we propose deep learning-based vehicle anomaly detection using road CCTV data. The proposed method first preprocesses the road CCTV data with the MOG2 background subtraction algorithm to separate the background and the foreground. The foreground corresponds to vehicles with displacement, while an abnormal vehicle on the road shows no displacement and is therefore judged as background. Objects are then detected with YOLOv4 in the image from which the background has been extracted, and such a vehicle is determined to be abnormal.
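
The preprocessing step named in the abstract, MOG2 background subtraction, is available directly in OpenCV. The sketch below shows that step together with a simple stationarity check; the YOLOv4 detector is only indicated by a hypothetical yolo_v4 call, and the file name and thresholds are placeholders.

```python
import cv2
import numpy as np

# Hedged sketch of the preprocessing step: MOG2 separates moving vehicles
# (foreground) from the static scene; a vehicle that stops produces little
# foreground response inside its box and is flagged as a candidate anomaly.
cap = cv2.VideoCapture("cctv.mp4")            # placeholder file name
mog2 = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

def is_stationary(fg_mask, box, motion_thresh=0.05):
    """True if the fraction of foreground pixels inside the box is small."""
    x, y, w, h = box
    roi = fg_mask[y:y + h, x:x + w]
    return roi.size > 0 and (roi > 0).mean() < motion_thresh

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = mog2.apply(frame)
    # detections = yolo_v4(frame)  # hypothetical: list of (x, y, w, h) vehicle boxes
    # anomalies = [b for b in detections if is_stationary(fg, b)]

cap.release()
```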

Realtime Smoke Detection using Hidden Markov Model and DWT (은닉마르코프모델과 DWT를 이용한 실시간 연기 검출)

  • Kim, Hyung-O
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.9 no.4 / pp.343-350 / 2016
  • In this paper, we propose real-time smoke detection using a hidden Markov model (HMM) and the DWT. Smoke has no fixed appearance: its color, shape, and spread direction vary with the environment, so smoke detection based on a single specific cue has a high detection error rate. For dynamic object detection, a foreground extraction method robust to environmental changes is used. Smoke recognition then integrates the color, shape, and DWT energy information of the detected object. The proposed method runs in real time with an average processing speed of 30 fps, and with an average detection time of about 7 seconds it enables rapid early detection.
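
Of the cues listed above, the DWT energy feature is easy to illustrate: smoke blurs fine detail, so high-frequency wavelet energy drops where it appears. The sketch below (using PyWavelets) computes only that feature; the color/shape cues and the HMM classifier are not shown.

```python
import numpy as np
import pywt

def dwt_energy(gray_patch, wavelet="haar"):
    """High-frequency DWT energy of a grayscale patch. Smoke tends to blur
    fine detail, so this energy drops in regions it covers; such a value could
    feed a temporal classifier like the HMM described in the paper."""
    _, (cH, cV, cD) = pywt.dwt2(gray_patch.astype(float), wavelet)
    return float((cH ** 2 + cV ** 2 + cD ** 2).sum())
```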

Visual Attention Detection By Adaptive Non-Local Filter

  • Anh, Dao Nam
    • IEIE Transactions on Smart Processing and Computing / v.5 no.1 / pp.49-54 / 2016
  • Considering the global and local factors of a set of features for a given single image, or for multiple images, is a common approach in image processing. This paper introduces an application of an adaptive version of the non-local filter, whose original version searches for non-local similarity to remove noise. Since most images contain texture patterns in both the foreground and the background, extraction of significant regions with texture is a challenging task. Aiming at the detection of visual attention regions in images with texture, we present a contrast analysis of image patches that lie within the same image but are not nearby, assisted by the adaptive filter for estimating non-local divergence. The method allows extraction of significant textured regions in images of wildlife. Experimental results on a benchmark demonstrate the ability of the proposed method to deal with this challenge.
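
A rough way to picture the non-local contrast analysis described above is to score each image patch by its distance to patches that are not nearby. The sketch below is a crude stand-in for the paper's adaptive non-local divergence, with illustrative patch size and gap parameters.

```python
import numpy as np

def nonlocal_patch_contrast(gray, patch=16, min_gap=2):
    """Score each patch by its mean Euclidean distance to all patches that are
    NOT nearby (at least `min_gap` patch units away). gray is a 2-D float array."""
    gh, gw = gray.shape[0] // patch, gray.shape[1] // patch
    feats = np.array([gray[i*patch:(i+1)*patch, j*patch:(j+1)*patch].ravel()
                      for i in range(gh) for j in range(gw)])
    coords = np.array([(i, j) for i in range(gh) for j in range(gw)])
    scores = np.zeros(len(feats))
    for k in range(len(feats)):
        far = np.abs(coords - coords[k]).max(axis=1) >= min_gap
        if far.any():
            scores[k] = np.linalg.norm(feats[far] - feats[k], axis=1).mean()
    return scores.reshape(gh, gw)
```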

Fingerprint region and table segmentation in fingerprint document (지문원지의 영역분할 및 도표 인식)

  • 정윤주;이영화;이준재;심재창
    • Proceedings of the IEEK Conference / 1999.11a / pp.552-555 / 1999
  • In this paper, a method is presented for extracting the fingerprint regions and the table from a fingerprint document, an A4-sized form containing ten fingerprint images within a table. Each fingerprint region is extracted by segmenting the foreground fingerprint region with a block filtering method and detecting its center point. The table is extracted by detecting horizontal lines using line tracing and detecting vertical lines from their orthogonal equations. A T-shaped mask is proposed for finding the starting points of vertical lines, which intersect a horizontal line in the form of a 'T'. Experimental results show a correct extraction rate above 95% for the fingerprint regions and the table.
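
The fingerprint regions are found by block filtering followed by center-point detection. The sketch below uses block intensity variance as a simple stand-in for the paper's block filter and returns the centroid of foreground blocks; the threshold and block size are assumptions.

```python
import numpy as np

def fingerprint_center(gray, block=16, var_thresh=150.0):
    """Mark blocks whose intensity variance exceeds a threshold as fingerprint
    foreground, then return the centroid of those blocks as the center point."""
    h, w = gray.shape
    fg_blocks = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            if gray[i:i + block, j:j + block].var() > var_thresh:
                fg_blocks.append((i + block // 2, j + block // 2))
    if not fg_blocks:
        return None
    cy, cx = np.mean(fg_blocks, axis=0)
    return int(round(cy)), int(round(cx))
```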
