• Title/Summary/Keyword: Salient object detection


AANet: Adjacency auxiliary network for salient object detection

  • Li, Xialu;Cui, Ziguan;Gan, Zongliang;Tang, Guijin;Liu, Feng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.10
    • /
    • pp.3729-3749
    • /
    • 2021
  • At present, deep convolutional network-based salient object detection (SOD) has achieved impressive performance. However, making full use of the multi-scale information in the extracted features, and choosing an appropriate fusion method for the resulting feature maps, remain challenging problems. In this paper, we propose a new adjacency auxiliary network (AANet) based on multi-scale feature fusion for SOD. First, we design a parallel-connection feature enhancement module (PFEM) for each feature extraction layer, which increases feature density by connecting dilated convolution branches with different rates in parallel and adds a channel attention flow to fully extract the contextual information of the features. Then, adjacent-layer features, which have similar levels of abstraction but different characteristics, are fused through the adjacency auxiliary module (AAM) to suppress feature ambiguity and noise. In addition, to refine the features and obtain more accurate object boundaries, we design an adjacency decoder (AAM_D) based on the AAM, which concatenates adjacent-layer features, extracts their spatial attention, and combines the result with the output of the AAM. The AAM_D outputs, which carry both semantic information and spatial detail, are used as saliency prediction maps for joint multi-level supervision. Experimental results on six benchmark SOD datasets demonstrate that the proposed method outperforms comparable previous methods.
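The parallel dilated-convolution idea behind PFEM can be illustrated independently of the paper's implementation. The following NumPy snippet is a minimal sketch, not the authors' code: the averaging kernel, the dilation rates, and the top-left cropping used to align branch outputs are all assumptions made for brevity.

```python
import numpy as np

def dilated_conv2d(img, kernel, dilation):
    """Valid-mode 2D convolution with a dilated 3x3 kernel (single channel)."""
    k = kernel.shape[0]
    span = dilation * (k - 1)          # receptive-field extent of the dilated kernel
    h, w = img.shape
    out = np.zeros((h - span, w - span))
    for i in range(k):
        for j in range(k):
            out += kernel[i, j] * img[i*dilation : i*dilation + h - span,
                                      j*dilation : j*dilation + w - span]
    return out

def parallel_feature_enhance(img, kernel, dilations=(1, 2, 3)):
    """Run several dilated branches in parallel and sum them.
    Branches are cropped to a common size (centre alignment omitted for brevity)."""
    branches = [dilated_conv2d(img, kernel, d) for d in dilations]
    size = min(b.shape[0] for b in branches)
    return sum(b[:size, :size] for b in branches)

img = np.random.rand(16, 16)
kernel = np.ones((3, 3)) / 9.0        # simple averaging kernel as a stand-in
fused = parallel_feature_enhance(img, kernel)
print(fused.shape)  # (10, 10): limited by the largest dilation (3 -> span 6)
```

Each branch sees the same input at a different receptive-field size, which is the multi-scale effect the module exploits.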

Salient Motion Information Detection Method Using Weighted Subtraction Image and Motion Vector (가중치 차 영상과 움직임 벡터를 이용한 두드러진 움직임 정보 검출 방법)

  • Kim, Sun-Woo;Ha, Tae-Ryeong;Park, Chun-Bae;Choi, Yeon-Sung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.11 no.4
    • /
    • pp.779-785
    • /
    • 2007
  • Moving object detection is very important for modern video surveillance. In this context, motions can be categorized into two types: salient and non-salient. In this paper, we first calculate a temporal difference image to extract moving objects while adapting to dynamic environments. We then propose a new algorithm that detects salient motion information in complex environments by combining the temporal difference image with a binary block image computed from the motion vectors of MPEG-4 and EPZS. The method is very effective at detecting objects in complex environments where many different motions are mixed.
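The two ingredients described above can be sketched in a few lines of NumPy: a pixel-level temporal difference mask, and a block-level motion mask (standing in for the MPEG-4/EPZS motion vectors) that gates it. The block size and threshold here are assumed values, not the paper's.

```python
import numpy as np

def temporal_difference(prev, curr, thresh=0.1):
    """Binary mask of pixels whose intensity changed between frames."""
    return (np.abs(curr.astype(float) - prev.astype(float)) > thresh).astype(np.uint8)

def combine_with_block_mask(diff_mask, block_mask, block=8):
    """AND the pixel-level difference with a block-level motion-vector mask."""
    up = np.kron(block_mask, np.ones((block, block), dtype=np.uint8))
    return diff_mask & up

prev = np.zeros((16, 16))
curr = np.zeros((16, 16)); curr[4:8, 4:8] = 1.0   # a small moving patch
diff = temporal_difference(prev, curr)
blocks = np.zeros((2, 2), dtype=np.uint8)
blocks[0, 0] = 1                                   # motion vectors fire in the top-left block
salient = combine_with_block_mask(diff, blocks, block=8)
print(int(salient.sum()))  # 16: the 4x4 changed patch lies inside the active block
```

Pixels that changed but fall in blocks without motion-vector support are suppressed, which is how non-salient flicker is filtered out.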

Background Prior-based Salient Object Detection via Adaptive Figure-Ground Classification

  • Zhou, Jingbo;Zhai, Jiyou;Ren, Yongfeng;Lu, Ali
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.3
    • /
    • pp.1264-1286
    • /
    • 2018
  • In this paper, a novel background prior-based salient object detection framework is proposed to handle more complicated images. We take the superpixels located on the four image borders into consideration and exploit a mechanism based on image boundary information to remove foreground noise, forming the background prior. Afterward, an initial foreground prior is obtained by selecting the superpixels that are most dissimilar to the background prior. A threshold is needed to assign regions to foreground or background based on these priors: given a fixed threshold, the remaining superpixels are iteratively assigned according to their proximity to the foreground or background prior. As the threshold changes, different foreground priors generate multiple partitions, each assigned a likelihood of being foreground. Finally, all segments are combined into a saliency map based on the idea of similarity voting. Experiments on five benchmark databases demonstrate that the proposed method compares well with state-of-the-art methods in terms of accuracy and robustness.
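The core border-prior idea can be sketched with plain pixels instead of superpixels (a simplification of the paper's method, which also filters foreground noise from the border before forming the prior):

```python
import numpy as np

def border_background_prior(img):
    """Mean color of the image border, used as a simple background prior."""
    border = np.concatenate([img[0], img[-1], img[:, 0], img[:, -1]], axis=0)
    return border.mean(axis=0)

def saliency_from_prior(img, prior):
    """Per-pixel dissimilarity to the background prior, scaled to [0, 1]."""
    d = np.linalg.norm(img - prior, axis=-1)
    return d / (d.max() + 1e-8)

img = np.zeros((20, 20, 3))
img[8:12, 8:12] = 1.0                  # bright object on a dark background
sal = saliency_from_prior(img, border_background_prior(img))
print(sal[10, 10] > 0.9, sal[0, 0] < 0.1)  # True True
```

Regions unlike the border statistics score high; the paper then iterates this assignment over multiple thresholds and votes across the resulting partitions.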

A Study on the salient points detection and object representation for object matching (물체 정합을 위한 특징점 추출 및 물체 표현에 관한 연구)

  • Park, Jeong-Min;Sohn, Kwang-Hoon;Huh, Young
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.35S no.6
    • /
    • pp.101-108
    • /
    • 1998
  • An efficient approach to recognizing occluded objects is to detect a number of essential features on the boundary of the unknown shape. Salient points, including corner points, tangential points, and inflection points, are detected from the relations among neighboring pixels along the boundary. Corner points are detected from the curvature function, while tangential points and inflection points are detected by median filtering the curvature function to avoid the effect of quantization noise, since corner points alone are not sufficient to represent an object composed of lines and arcs. These salient points are then used as features for object matching, which is performed with a discrete Hopfield neural network. Experimental results show that matching with salient points gives better results than matching with corner points alone when an object consists of lines and arcs.
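The curvature-function step can be illustrated with discrete turning angles along a closed boundary, taking corners as curvature maxima (a minimal sketch; the median filtering used for tangential and inflection points is omitted):

```python
import numpy as np

def turning_angles(boundary):
    """Discrete curvature: exterior angle between successive boundary segments
    of a closed contour given as an (N, 2) array of points."""
    prev = np.roll(boundary, 1, axis=0)
    nxt = np.roll(boundary, -1, axis=0)
    v1 = boundary - prev
    v2 = nxt - boundary
    a1 = np.arctan2(v1[:, 1], v1[:, 0])
    a2 = np.arctan2(v2[:, 1], v2[:, 0])
    d = a2 - a1
    return np.abs((d + np.pi) % (2 * np.pi) - np.pi)   # wrap to [-pi, pi]

# closed boundary of an axis-aligned square, traversed counter-clockwise
side = [(i, 0) for i in range(4)] + [(4, i) for i in range(4)] + \
       [(4 - i, 4) for i in range(4)] + [(0, 4 - i) for i in range(4)]
boundary = np.array(side, dtype=float)
k = turning_angles(boundary)
corners = np.where(k > 1.0)[0]          # pi/2 turns stand out against 0 elsewhere
print(corners.tolist())  # [0, 4, 8, 12]: the four square corners
```

On a straight segment the turning angle is zero, so only genuine direction changes survive the threshold.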


Multi-Object Detection Using Image Segmentation and Salient Points (영상 분할 및 주요 특징 점을 이용한 다중 객체 검출)

  • Lee, Jeong-Ho;Kim, Ji-Hun;Moon, Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.45 no.2
    • /
    • pp.48-55
    • /
    • 2008
  • In this paper, we propose a novel method for an image retrieval system using image segmentation and salient points. The proposed method consists of four steps. First, images are segmented into several regions by the JSEG algorithm. Second, for each segmented region, dominant colors and the corresponding color histogram are constructed; using them, we identify candidate regions where objects may exist. Third, true object regions are detected among the candidate regions by SIFT matching. Finally, we measure the similarity between the query image and a database image using the color correlogram technique, computed over the query image and the object region of the database image. Experimental results show that the proposed method detects multiple objects well and provides better retrieval performance than existing object-based retrieval systems.
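The region-histogram comparison used to shortlist candidate regions can be sketched with a quantized RGB histogram and histogram intersection (a simplified stand-in for the paper's dominant-color descriptor and correlogram; the bin count is an assumed value):

```python
import numpy as np

def region_color_histogram(img, mask, bins=4):
    """Quantized RGB histogram of the pixels inside a segmented region."""
    pixels = img[mask]                                   # (N, 3) pixels of the region
    q = np.minimum((pixels * bins).astype(int), bins - 1)
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    hist = np.bincount(idx, minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def histogram_similarity(h1, h2):
    """Histogram intersection in [0, 1]; 1 means identical distributions."""
    return np.minimum(h1, h2).sum()

img = np.zeros((8, 8, 3))
img[:4] = [0.9, 0.1, 0.1]                                # red region on black
mask_red = np.zeros((8, 8), bool); mask_red[:4] = True
h_red = region_color_histogram(img, mask_red)
h_black = region_color_histogram(img, ~mask_red)
print(round(histogram_similarity(h_red, h_red), 2),
      round(histogram_similarity(h_red, h_black), 2))    # 1.0 0.0
```

Regions whose histograms match a query's dominant colors become candidates; the more expensive SIFT matching then runs only on those.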

Salient Object Detection via Multiple Random Walks

  • Zhai, Jiyou;Zhou, Jingbo;Ren, Yongfeng;Wang, Zhijian
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.4
    • /
    • pp.1712-1731
    • /
    • 2016
  • In this paper, we propose a novel saliency detection framework based on multiple random walks (MRW), which simulate multiple agents on a graph simultaneously. In the MRW system, two agents, representing the background and foreground seeds, traverse the graph according to a transition matrix and interact with each other until they reach a state of equilibrium. The proposed algorithm is divided into three steps. First, an initial segmentation partitions the input image into homogeneous regions (i.e., superpixels) for saliency computation. From these regions, we construct a graph whose nodes correspond to the superpixels and whose edges between neighboring nodes represent the similarities of the corresponding superpixels. Second, to generate the background seeds, we filter out the one of the four image boundaries that is least likely to belong to the background; the superpixels on the three remaining sides are labeled as background seeds. To generate the foreground seeds, we use the center prior, i.e., that foreground objects tend to appear near the image center. In the last step, the foreground and background seeds are treated as two different agents in the multiple random walks to complete salient object detection. Experimental results on three benchmark databases demonstrate that the proposed method performs well against state-of-the-art methods in terms of accuracy and robustness.
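The equilibrium of a random walk on a superpixel graph can be sketched by row-normalizing an affinity matrix and iterating a distribution to its stationary state. This is a single-walker simplification of the paper's interacting multi-agent scheme, and the toy affinity values are assumptions:

```python
import numpy as np

def transition_matrix(W):
    """Row-normalize a symmetric affinity matrix into a transition matrix."""
    return W / W.sum(axis=1, keepdims=True)

def equilibrium(P, steps=200):
    """Iterate a uniform distribution until it reaches the walk's stationary state."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(steps):
        pi = pi @ P
    return pi

# toy 4-node graph: nodes 0-1 tightly connected (a coherent cluster),
# nodes 2-3 only weakly linked to them
W = np.array([[0., 5., 1., 1.],
              [5., 0., 0., 1.],
              [1., 0., 0., 1.],
              [1., 1., 1., 0.]])
pi = equilibrium(transition_matrix(W))
print(np.round(pi, 2))   # stationary mass is proportional to node degree
```

For such a reversible walk the stationary probability of node i equals its degree over the total degree (here [7, 6, 2, 3] / 18), so well-connected regions accumulate more mass, which is the signal the seeded walkers exploit.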

A Saliency Map based on Color Boosting and Maximum Symmetric Surround

  • Huynh, Trung Manh;Lee, Gueesang
    • Smart Media Journal
    • /
    • v.2 no.2
    • /
    • pp.8-13
    • /
    • 2013
  • Nowadays, salient region detection has become a popular research topic because of its use in many applications, such as object recognition and object segmentation. Some recent methods apply color distinctiveness, based on an analysis of the statistics of color image derivatives, to boost color saliency, and can produce good saliency maps. However, if the salient regions comprise more than half the pixels of the image, or if the background is complex, this may produce poor results. In this paper, we introduce a method that handles these problems using the maximum symmetric surround. The results show that our method outperforms the previous algorithms. We also show segmentation results obtained with Otsu's method.
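The maximum-symmetric-surround idea, comparing each pixel against the mean of the largest window centred on it that still fits inside the image, can be sketched on a grayscale image (a simplification; the published method operates on blurred Lab color channels):

```python
import numpy as np

def mss_saliency(img):
    """Maximum symmetric surround saliency for a 2D grayscale image."""
    h, w = img.shape
    sal = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            ry = min(y, h - 1 - y)       # symmetric half-extent vertically
            rx = min(x, w - 1 - x)       # and horizontally
            window = img[y - ry : y + ry + 1, x - rx : x + rx + 1]
            sal[y, x] = abs(img[y, x] - window.mean())
    return sal / (sal.max() + 1e-8)

img = np.zeros((15, 15))
img[6:9, 6:9] = 1.0                      # bright blob at the centre
sal = mss_saliency(img)
print(sal[7, 7] > sal[0, 0])  # True: the blob centre outscores the corner
```

Because central pixels get large surrounds and border pixels get small ones, large salient regions are still contrasted against enough background, which is what fixes the more-than-half-the-image failure case.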


Background memory-assisted zero-shot video object segmentation for unmanned aerial and ground vehicles

  • Kimin Yun;Hyung-Il Kim;Kangmin Bae;Jinyoung Moon
    • ETRI Journal
    • /
    • v.45 no.5
    • /
    • pp.795-810
    • /
    • 2023
  • Unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) require advanced video analytics for various tasks, such as moving object detection and segmentation, which has led to increasing demand for these methods. We propose a zero-shot video object segmentation method specifically designed for UAV and UGV applications that focuses on the discovery of moving objects in challenging scenarios. The method employs a background memory model that enables training from sparse annotations along the time axis, using temporal modeling of the background to detect moving objects effectively. It addresses a limitation of existing state-of-the-art methods, which detect salient objects within images regardless of their movement. In particular, our method achieved mean J and F values of 82.7 and 81.2, respectively, on DAVIS'16. We also conducted extensive ablation studies that highlight the contributions of various input compositions and combinations of training datasets. In future developments, we will integrate the proposed method with additional systems, such as tracking and obstacle avoidance functionalities.
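The notion of a background memory can be illustrated with a classical running-average model — a hand-crafted stand-in for the paper's learned memory module, with assumed values for the update rate and detection threshold:

```python
import numpy as np

def update_background(memory, frame, alpha=0.05):
    """Exponential moving average keeps a slowly adapting background memory."""
    return (1 - alpha) * memory + alpha * frame

def moving_mask(memory, frame, thresh=0.2):
    """Pixels far from the remembered background are flagged as moving."""
    return np.abs(frame - memory) > thresh

rng = np.random.default_rng(0)
memory = np.zeros((12, 12))
for _ in range(50):                          # static scene with mild sensor noise
    frame = rng.normal(0.0, 0.01, (12, 12))
    memory = update_background(memory, frame)

frame = memory.copy()
frame[4:7, 4:7] += 1.0                       # an object enters the scene
mask = moving_mask(memory, frame)
print(int(mask.sum()))  # 9: only the 3x3 moving object is detected
```

The key property, shared with the learned model, is that the memory absorbs slow background variation while genuinely moving objects stand out against it.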

Window Production Method based on Low-Frequency Detection for Automatic Object Extraction of GrabCut (GrabCut의 자동 객체 추출을 위한 저주파 영역 탐지 기반의 윈도우 생성 기법)

  • Yoo, Tae-Hoon;Lee, Gang-Seong;Lee, Sang-Hun
    • Journal of Digital Convergence
    • /
    • v.10 no.8
    • /
    • pp.211-217
    • /
    • 2012
  • The conventional GrabCut algorithm is semi-automatic: the user must set a rectangular window surrounding the object. This paper studies automatic object detection to solve this problem by detecting salient regions based on the human visual system. A saliency map is computed in the Lab color space, based on the color-opponent theory of 'red-green' and 'blue-yellow' channels. Salient points are then computed from the boundaries of the low-frequency regions extracted from the saliency map. Finally, rectangular windows are obtained from the coordinates of the salient points, and these windows are used in the GrabCut algorithm to extract objects. Various experiments show that the proposed algorithm computes rectangular windows of salient regions and extracts objects effectively.
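The pipeline of opponent-color saliency followed by an automatic rectangular window can be sketched as follows. Simple RGB opponent channels stand in for the paper's Lab-based map, and the saliency threshold is an assumed value:

```python
import numpy as np

def opponent_saliency(img):
    """Distance of each pixel's opponent channels from the image mean
    (a simplified 'red-green' / 'blue-yellow' saliency)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    rg = r - g                              # red-green opponent channel
    by = b - (r + g) / 2                    # blue-yellow opponent channel
    feat = np.stack([rg, by], axis=-1)
    d = np.linalg.norm(feat - feat.mean(axis=(0, 1)), axis=-1)
    return d / (d.max() + 1e-8)

def saliency_window(sal, thresh=0.5):
    """Bounding rectangle of the salient pixels, usable as a GrabCut window."""
    ys, xs = np.where(sal > thresh)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

img = np.full((20, 20, 3), 0.5)
img[5:10, 6:12] = [1.0, 0.0, 0.0]           # red object on a gray background
win = saliency_window(opponent_saliency(img))
print(win)  # (6, 5, 11, 9): the rectangle around the red region
```

In the real pipeline this rectangle would be handed to GrabCut (e.g. OpenCV's `cv2.grabCut` in rect-initialization mode) in place of the user-drawn window.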

Saliency Detection based on Global Color Distribution and Active Contour Analysis

  • Hu, Zhengping;Zhang, Zhenbin;Sun, Zhe;Zhao, Shuhuan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.12
    • /
    • pp.5507-5528
    • /
    • 2016
  • In computer vision, salient object detection is important for extracting useful information about the foreground. With active contour analysis at its core, this paper proposes a bottom-up saliency detection algorithm that combines a Bayesian model with the global color distribution. With the support of the active contour model, a more accurate foreground can be obtained as a foundation for both the Bayesian model and the global color distribution. Furthermore, we establish a contour-based selection mechanism to optimize the global color distribution, which also serves as an effective correction for the Bayesian model. To obtain a good object contour, we first intensify the object region in the source gray-scale image with a seed-based method. Once both components are available, the final saliency map is obtained by weighting the color distribution into the Bayesian saliency map. The contribution of this paper is that, compared with the Harris-based convex hull algorithm, the active contour can extract a more accurate, non-convex foreground. Moreover, through mutual complementation, the global color distribution resolves the saliency-scattering drawback of the Bayesian model. The detected results show that the final saliency maps, generated by considering both the global color distribution and the active contour, are much improved.
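The Bayesian component, i.e., the posterior probability of foreground given pixel color under a rough contour-derived foreground estimate, can be sketched on a grayscale image (a minimal illustration; the hand-set mask below stands in for the active-contour result, and the bin count is an assumed value):

```python
import numpy as np

def bayes_saliency(img, fg_mask, bins=8):
    """Posterior P(foreground | color) using quantized grayscale histograms
    of a rough foreground region as the likelihoods (Laplace-smoothed)."""
    q = np.minimum((img * bins).astype(int), bins - 1)
    p_fg = np.bincount(q[fg_mask], minlength=bins) + 1.0
    p_bg = np.bincount(q[~fg_mask], minlength=bins) + 1.0
    p_fg /= p_fg.sum()
    p_bg /= p_bg.sum()
    prior = fg_mask.mean()                       # prior P(foreground)
    return p_fg[q] * prior / (p_fg[q] * prior + p_bg[q] * (1 - prior))

img = np.zeros((10, 10))
img[3:7, 3:7] = 0.9                              # bright object
rough = np.zeros((10, 10), bool)
rough[2:8, 2:8] = True                           # coarse contour-based estimate
sal = bayes_saliency(img, rough)
print(sal[5, 5] > 0.9, sal[0, 0] < 0.5)  # True True
```

Even though the rough mask leaks background pixels, colors unique to the object still score a high posterior, which is why a more accurate (non-convex) contour from the active contour model sharpens the result further.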