• Title/Summary/Keyword: Foreground object segmentation

Salient Object Detection via Adaptive Region Merging

  • Zhou, Jingbo; Zhai, Jiyou; Ren, Yongfeng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.9 / pp.4386-4404 / 2016
  • Most existing salient object detection algorithms employ segmentation techniques to eliminate background noise and reduce computation by treating each segment as a processing unit. However, individual small segments provide little information about global content, so such schemes have limited capability for modeling global perceptual phenomena. In this paper, a novel salient object detection algorithm based on region merging is proposed. An adaptive merging scheme is developed to reassemble regions based on their color dissimilarities: a region R is merged into its adjacent region Q if, among all of Q's adjacent regions, R has the lowest dissimilarity with Q. To guide the merging process, superpixels located at the image boundary are treated as seeds. However, part of the image boundary may be occupied by the foreground object; to avoid this, we optimize the boundary influence by locating and eliminating erroneous boundaries before region merging. We show that encouraging performance can be obtained even though only three simple region saliency measurements are adopted for each region. Experiments on four benchmark datasets (MSRA-B, SOD, SED and iCoSeg) show that the proposed method produces uniform object enhancement and achieves state-of-the-art performance compared with nine existing methods.
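
A minimal sketch of the color-based merging rule described above, assuming regions are summarized by mean colors (e.g., in CIE Lab) and a region adjacency list; the function names and the toy data are illustrative, not taken from the paper:

```python
import numpy as np

def region_dissimilarity(c1, c2):
    """Euclidean distance between two mean region colors (e.g., in CIE Lab)."""
    return float(np.linalg.norm(np.asarray(c1, dtype=float) - np.asarray(c2, dtype=float)))

def best_merge_candidate(colors, adjacency, seed):
    """Return the neighbor of `seed` with the lowest dissimilarity to it.
    In a full merging pass this neighbor would be absorbed into the seed,
    the mean color and adjacency updated, and the search repeated.
    colors   : dict region_id -> mean color vector
    adjacency: dict region_id -> set of adjacent region_ids"""
    neighbors = adjacency[seed]
    return min(neighbors, key=lambda r: region_dissimilarity(colors[r], colors[seed]))

# Toy example: region 0 plays the role of a background seed from the image boundary.
colors = {0: [20, 0, 0], 1: [22, 1, 0], 2: [80, 40, 30], 3: [25, 2, 1]}
adjacency = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}
print(best_merge_candidate(colors, adjacency, seed=0))   # -> 1 (most similar neighbor)
```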

Fusion of Background Subtraction and Clustering Techniques for Shadow Suppression in Video Sequences

  • Chowdhury, Anuva; Shin, Jung-Pil; Chong, Ui-Pil
    • Journal of the Institute of Convergence Signal Processing / v.14 no.4 / pp.231-234 / 2013
  • This paper introduces a combination of a background subtraction technique and the K-Means clustering algorithm for removing shadows from video sequences. Changing lighting conditions make segmentation difficult; the proposed method can successfully remove artifacts associated with lighting changes, such as highlights, reflections, and cast shadows of moving objects, from the segmentation. In this paper, the K-Means clustering algorithm is applied to the foreground, which is initially segmented by a background subtraction technique. The estimated shadow region is then superimposed on the background to eliminate the effects that cause redundancy in object detection. Simulation results show that the proposed approach is capable of removing shadows and reflections from moving objects with an accuracy of more than 95% in every case considered.
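
A rough sketch of the kind of pipeline the abstract describes: frame differencing gives a foreground mask, K-Means splits the foreground pixels into two intensity clusters, and the darker cluster is treated as shadow. The threshold, the two-cluster split, and the "darker cluster = shadow" rule are illustrative assumptions, not values from the paper:

```python
import numpy as np
from sklearn.cluster import KMeans

def foreground_without_shadow(frame, background, diff_thresh=30):
    """Background subtraction followed by K-Means on the foreground pixels.
    frame, background: uint8 grayscale images of the same shape."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    fg_mask = diff > diff_thresh                        # rough foreground from subtraction
    if fg_mask.sum() < 2:
        return fg_mask
    # cluster foreground pixel intensities into two groups
    vals = frame[fg_mask].reshape(-1, 1).astype(np.float32)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vals)
    means = [vals[labels == k].mean() for k in (0, 1)]
    shadow_cluster = int(np.argmin(means))              # assume the darker cluster is shadow
    cleaned = fg_mask.copy()
    fg_idx = np.flatnonzero(fg_mask.ravel())
    cleaned.ravel()[fg_idx[labels == shadow_cluster]] = False
    return cleaned
```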

Multi-Level Segmentation of Infrared Images with Region of Interest Extraction

  • Yeom, Seokwon
    • International Journal of Fuzzy Logic and Intelligent Systems / v.16 no.4 / pp.246-253 / 2016
  • Infrared (IR) imaging has been researched for various applications such as surveillance. IR radiation can capture the thermal characteristics of objects under low-light conditions. However, automatic segmentation to find the object of interest is challenging, since an IR detector often provides images of low spatial and contrast resolution without color or texture information, and the image can be further degraded by noise and clutter. This paper proposes a multi-level segmentation for extracting regions of interest (ROIs) and objects of interest (OOIs) in an IR scene. Each level of the multi-level segmentation is composed of a k-means clustering algorithm, an expectation-maximization (EM) algorithm, and a decision process. The k-means clustering initializes the parameters of a Gaussian mixture model (GMM), and the EM algorithm estimates those parameters iteratively. During the multi-level segmentation, the area extracted at one level becomes the input to the next level, so segmentation is performed successively on a progressively narrower area. The foreground objects are individually extracted from the final ROI windows. In the experiments, the effectiveness of the proposed method is demonstrated using several IR images in which human subjects are captured at a long distance. The average probability of error is shown to be lower than that obtained with conventional methods such as the Gonzalez, Otsu, k-means, and EM methods.
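
A minimal single-level sketch of the k-means-initialized GMM/EM idea, fit on raw pixel intensities with scikit-learn; the two-component mixture and the "brightest component is the candidate ROI" decision are assumptions made here for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_level(ir_image, n_components=2):
    """Fit a GMM (k-means initialization, EM refinement) to pixel intensities
    and return a mask of the brightest component (candidate ROI in an IR scene)."""
    x = ir_image.reshape(-1, 1).astype(np.float64)
    gmm = GaussianMixture(n_components=n_components,
                          init_params='kmeans',      # k-means initializes the EM iterations
                          random_state=0).fit(x)
    labels = gmm.predict(x).reshape(ir_image.shape)
    hottest = int(np.argmax(gmm.means_.ravel()))     # warm targets appear bright in IR
    return labels == hottest

# Multi-level use: crop the image to the ROI bounding box and call segment_level again.
```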

Video Segmentation Using DCT and Guided Filter in real time (DCT와 Guided 필터를 이용한 실시간 영상 분류)

  • Shin, Hyunhak; Lee, Zucheul; Kim, Wonha
    • Journal of Broadcast Engineering / v.20 no.5 / pp.718-727 / 2015
  • In this paper, we present a novel segmentation method that extracts new foreground objects from the current frame in real time by detecting differences between the current frame and a reference frame taken from a fixed camera, while minimizing the computational complexity required for real-time video processing. First, the DCT (Discrete Cosine Transform) is used to generate a rough binary segmentation map in which foreground and background regions are separated. The DCT yields better texture analysis than previous methods that operate in the spatial domain, because texture analysis is easier in the frequency domain and the DCT accounts for intensity and texture at the same time. We maximize the run-time efficiency of the DCT stage by using color information to analyze the object region before the DCT is applied. Finally, a guided filter is used for natural matting of the generated binary segmentation map. In general, a guided filter can enhance the quality of an intermediate result by incorporating guidance information, but it shows some limitations in homogeneous areas; we therefore present an additional method that overcomes them.
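
A sketch of how an 8x8 block DCT can combine intensity (the DC coefficient) and texture (AC energy) when comparing the current frame against a reference frame; the block size, threshold, and scoring rule are illustrative assumptions, not the paper's parameters:

```python
import numpy as np
from scipy.fft import dctn

def dct_block_map(current, reference, block=8, thresh=500.0):
    """Rough per-block binary segmentation map from DCT coefficient differences.
    current, reference: grayscale frames (float) with the same shape."""
    h, w = current.shape
    seg = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            ys, xs = by * block, bx * block
            c = dctn(current[ys:ys + block, xs:xs + block], norm='ortho')
            r = dctn(reference[ys:ys + block, xs:xs + block], norm='ortho')
            d = np.abs(c - r)
            dc_diff = d[0, 0]              # intensity change (DC coefficient)
            ac_diff = d.sum() - d[0, 0]    # texture change (AC coefficients)
            seg[by, bx] = (dc_diff + ac_diff) > thresh
    return seg
```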

A Mode Selection Algorithm using Scene Segmentation for Multi-view Video Coding (객체 분할 기법을 이용한 다시점 영상 부호화에서의 예측 모드 선택 기법)

  • Lee, Seo-Young; Shin, Kwang-Mu; Chung, Ki-Dong
    • Journal of KIISE: Information Networking / v.36 no.3 / pp.198-203 / 2009
  • With the growing demand for multimedia services and advances in display technology, new applications for 3D scene communication have emerged. While the multi-view video used by these emerging applications may provide users with a more realistic scene experience, the drastic increase in bandwidth is a major problem to solve. In this paper, we propose a fast prediction mode decision algorithm that can significantly reduce the complexity and time consumption of the encoding process. It is based on object segmentation, which can effectively identify fast-moving foreground objects. Since a foreground object with fast motion is more likely to be encoded in the view-directional prediction mode, we can limit motion-compensated coding accordingly in such cases. As a result, the proposed algorithm saved up to 45% of encoding time on average without much loss in the quality of the image sequence.
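
A toy sketch of the kind of mode pre-selection the abstract suggests: macroblocks dominated by a fast-moving foreground mask are hinted toward view-directional (inter-view) prediction, and the rest toward temporal prediction. The mask source, block size, threshold, and mode names are assumptions for illustration, not the paper's encoder integration:

```python
import numpy as np

def preselect_modes(fg_mask, mb_size=16, overlap_thresh=0.5):
    """Return a per-macroblock hint: 'view' for blocks dominated by fast-moving
    foreground, 'temporal' otherwise. fg_mask is a boolean H x W array."""
    h, w = fg_mask.shape
    hints = {}
    for y in range(0, h - mb_size + 1, mb_size):
        for x in range(0, w - mb_size + 1, mb_size):
            overlap = fg_mask[y:y + mb_size, x:x + mb_size].mean()
            hints[(y // mb_size, x // mb_size)] = (
                'view' if overlap >= overlap_thresh else 'temporal')
    return hints
```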

Effective Covariance Tracker based on Adaptive Foreground Segmentation in Tracking Window (적응적인 물체분리를 이용한 효과적인 공분산 추적기)

  • Lee, Jin-Wook; Cho, Jae-Soo
    • Journal of Institute of Control, Robotics and Systems / v.16 no.8 / pp.766-770 / 2010
  • In this paper, we present an effective covariance tracking algorithm based on adaptive resizing of the tracking window. Recent research has advocated using a covariance matrix of object image features for tracking objects, instead of the conventional histogram object models used in popular algorithms. However, the general covariance tracking algorithm cannot deal with scale changes of moving objects. The scale of a moving object often changes in various tracking environments, so the tracking window (or object kernel) has to be adapted accordingly, and the covariance matrix of the moving object should be adaptively updated in consideration of the tracking window size. We provide a solution to this problem by segmenting the moving object from the background pixels of the tracking window, thereby improving the tracking performance of the covariance tracking method. Several simulations demonstrate the effectiveness of the proposed method.
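
A compact sketch of the covariance region descriptor and the generalized-eigenvalue distance commonly used in covariance tracking; the feature set (position, intensity, gradient magnitudes) is a typical choice and an assumption here, not necessarily the exact one used in the paper:

```python
import numpy as np
from scipy.linalg import eigvalsh

def covariance_descriptor(patch):
    """Covariance of per-pixel features [x, y, I, |dI/dx|, |dI/dy|] over a grayscale patch."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(patch.astype(np.float64))
    feats = np.stack([xs, ys, patch, np.abs(gx), np.abs(gy)], axis=-1).reshape(-1, 5)
    return np.cov(feats, rowvar=False) + 1e-6 * np.eye(5)   # regularize for stability

def covariance_distance(c1, c2):
    """Riemannian distance between covariance matrices via generalized eigenvalues."""
    lam = eigvalsh(c1, c2)                 # solves c1 v = lam * c2 v
    return float(np.sqrt(np.sum(np.log(lam) ** 2)))
```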

Background Segmentation in Color Image Using Self-Organizing Feature Selection (자기 조직화 기법을 활용한 컬러 영상 배경 영역 추출)

  • Shin, Hyun-Kyung
    • The KIPS Transactions: Part B / v.15B no.5 / pp.407-412 / 2008
  • Color segmentation is one of the most challenging problems in image processing, especially for images with cluttered backgrounds. A great number of color segmentation methods have been developed and applied to real problems. In this paper, we suggest a new methodology focused on background extraction, as a complementary operation to standard foreground object segmentation, using the self-organizing feature-selection property of an unsupervised self-learning paradigm based on a competitive algorithm. The results of our studies show that background segmentation can be achieved in an efficient manner.
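
A minimal sketch of winner-take-all competitive learning on pixel colors, followed by labeling as background those clusters that dominate the image border; the learning rate, number of units, subsampling, and border rule are illustrative assumptions rather than the paper's formulation:

```python
import numpy as np

def competitive_color_clusters(image, n_units=8, lr=0.05, epochs=3, seed=0):
    """Winner-take-all competitive learning over RGB pixels.
    Returns (unit_colors, per-pixel winner labels)."""
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1, 3).astype(np.float64)
    units = pixels[rng.choice(len(pixels), n_units, replace=False)].copy()
    for _ in range(epochs):
        for p in pixels[rng.permutation(len(pixels))[:5000]]:    # subsample for speed
            winner = np.argmin(((units - p) ** 2).sum(axis=1))
            units[winner] += lr * (p - units[winner])            # move winner toward the sample
    labels = np.argmin(((pixels[:, None, :] - units[None]) ** 2).sum(-1), axis=1)
    return units, labels.reshape(image.shape[:2])

def background_mask(label_map, border=5):
    """Clusters that dominate the image border are treated as background."""
    edge = np.concatenate([label_map[:border].ravel(), label_map[-border:].ravel(),
                           label_map[:, :border].ravel(), label_map[:, -border:].ravel()])
    counts = np.bincount(edge, minlength=label_map.max() + 1)
    bg_labels = np.flatnonzero(counts > 0.1 * edge.size)
    return np.isin(label_map, bg_labels)
```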

An Improved Method for Detection of Moving Objects in Image Sequences Using Statistical Hypothesis Tests

  • Park, Jae-Gark; Kim, Munchurl; Lee, Myoung-Ho; Ahn, Chei-Teuk
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1998.06b / pp.171-176 / 1998
  • This paper presents a spatio-temporal video segmentation method. The algorithm segments each frame of a video sequence captured by a static or moving camera into moving objects (foreground) and background using a statistical hypothesis test. In the proposed method, three consecutive image frames are exploited, and a hypothesis test is performed by comparing two means from the two consecutive difference images, which results in a t-test. This hypothesis test yields a change detection mask that indicates moving areas (foreground) and non-moving areas (background). Moreover, an effective method for extracting the object mask from the change detection mask is proposed.
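
One plausible per-pixel-block reading of the idea, not the paper's exact statistic: form two difference images from three consecutive frames and run a two-sample t-test on corresponding local windows, marking a block as moving when the two differences agree and both are well above a crude noise level. The window size, significance level, and noise factor are assumptions:

```python
import numpy as np
from scipy.stats import ttest_ind

def change_mask(f0, f1, f2, win=5, alpha=0.05):
    """Three-frame change detection mask via per-block t-tests on the two
    consecutive difference images d1 = |f1 - f0| and d2 = |f2 - f1|."""
    d1 = np.abs(f1.astype(np.float64) - f0)
    d2 = np.abs(f2.astype(np.float64) - f1)
    h, w = d1.shape
    mask = np.zeros((h, w), dtype=bool)
    noise = np.median(d1)                         # crude noise level estimate
    half = win // 2
    for y in range(half, h - half, win):
        for x in range(half, w - half, win):
            w1 = d1[y - half:y + half + 1, x - half:x + half + 1].ravel()
            w2 = d2[y - half:y + half + 1, x - half:x + half + 1].ravel()
            _, p = ttest_ind(w1, w2, equal_var=False)
            consistent = p > alpha                # means of the two differences agree
            active = min(w1.mean(), w2.mean()) > 3 * noise
            mask[y - half:y + half + 1, x - half:x + half + 1] = consistent and active
    return mask
```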

A Basic Study on the Fire Flame Extraction of Non-Residential Facilities Based on Core Object Extraction (핵심 객체 추출에 기반한 비주거 시설의 화재불꽃 추출에 관한 기초 연구)

  • Park, Changmin
    • Journal of Korea Society of Digital Industry and Information Management / v.13 no.4 / pp.71-79 / 2017
  • Recently, fire-watching and dangerous-substance monitoring systems have been developed to enhance various aspects of fire-related security, and fire flame extraction is generally assumed to play a very important role in such monitoring systems. In this study, we propose a fire flame extraction method for non-residential facilities based on core object extraction in an image. A core object is defined as a comparatively large object at the center of the image. First, an input image and its reduced-resolution version are segmented. Segmented regions are classified as outer or inner regions: an outer region is adjacent to the image boundaries, and the rest are inner regions. Core object regions and core background regions are then selected from the inner and outer regions, respectively. Core object regions are the representative regions for the object and are selected using information about region size and location. Each inner region is classified as foreground or background by comparing its color histogram intersection with the core object regions against that with the core background regions. Finally, the extracted core object region is taken as the fire flame object in the image. Through experiments, we find that the method provides a basic measure for responding effectively and quickly to fires in non-residential facilities.
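
A sketch of the histogram-intersection decision described above: an inner region is labeled foreground if its color histogram overlaps more with the core object histogram than with the core background histogram. The bin count and RGB color space are assumptions for illustration:

```python
import numpy as np

def color_histogram(pixels, bins=8):
    """Normalized 3D RGB histogram of an (N, 3) uint8 pixel array."""
    hist, _ = np.histogramdd(pixels, bins=(bins,) * 3, range=((0, 256),) * 3)
    return hist.ravel() / max(hist.sum(), 1)

def histogram_intersection(h1, h2):
    return float(np.minimum(h1, h2).sum())

def classify_inner_region(region_pixels, core_object_pixels, core_background_pixels):
    """Foreground if the region's histogram intersects more with the core object
    histogram than with the core background histogram."""
    hr = color_histogram(region_pixels)
    ho = color_histogram(core_object_pixels)
    hb = color_histogram(core_background_pixels)
    return histogram_intersection(hr, ho) >= histogram_intersection(hr, hb)
```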

A Multi-Layer Graphical Model for Constrained Spectral Segmentation

  • Kim, Tae Hoon; Lee, Kyoung Mu; Lee, Sang Uk
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2011.07a / pp.437-438 / 2011
  • Spectral segmentation is a major trend in image segmentation. In particular, constrained spectral segmentation, guided by user-given inputs, remains a challenging task. Since it makes use of the spectrum of the affinity matrix of a given image, its overall quality depends mainly on how the graphical model is designed. In this work, we propose a sparse, multi-layer graphical model in which both the pixels and the over-segmented regions are graph nodes. The graph affinities are computed using the must-link and cannot-link constraints as well as the likelihood that each node has a specific label. They are then used to simultaneously cluster all pixels and regions into visually coherent groups across all layers in a single multi-layer Normalized Cuts framework. Although we incorporate only adjacent connections in the multi-layer graph, the foreground object can be efficiently extracted in the spectral framework. Experimental results demonstrate the relevance of our algorithm compared to popular existing algorithms.
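
A very small sketch of the spectral machinery involved: given an affinity matrix over graph nodes (strengthened for must-link pairs and severed for cannot-link pairs), compute the normalized Laplacian and split nodes by the sign of the second eigenvector. The constraint-weighting scheme here is an illustrative assumption, not the paper's multi-layer formulation:

```python
import numpy as np

def spectral_bipartition(W, must_link=(), cannot_link=(), boost=1.0):
    """Two-way normalized-cut-style split of a graph with affinity matrix W (n x n).
    must_link / cannot_link are lists of (i, j) node pairs used to edit affinities."""
    W = W.copy().astype(np.float64)
    for i, j in must_link:
        W[i, j] = W[j, i] = W[i, j] + boost    # strengthen must-link edges
    for i, j in cannot_link:
        W[i, j] = W[j, i] = 0.0                # sever cannot-link edges
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L_sym = np.eye(len(W)) - (d_inv_sqrt[:, None] * W) * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L_sym)
    fiedler = vecs[:, 1]                       # second-smallest eigenvector
    return fiedler >= 0                        # boolean group labels
```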
