• Title/Abstract/Keyword: image saliency detection


Co-saliency Detection Based on Superpixel Matching and Cellular Automata

  • Zhang, Zhaofeng;Wu, Zemin;Jiang, Qingzhu;Du, Lin;Hu, Lei
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.5
    • /
    • pp.2576-2589
    • /
    • 2017
  • Co-saliency detection is the task of detecting the same or similar objects across multiple scenes, and it has become an important preprocessing step for multi-scene image processing. However, existing methods lack the efficiency to match similar areas across different images, and they are confined to single-image detection without a unified framework for calculating co-saliency. In this paper, we propose a novel model called Superpixel Matching-Cellular Automata (SMCA). We use the Hausdorff distance between adjacent superpixel sets instead of single superpixels, since the feature-matching accuracy of a single superpixel is poor. We further introduce Cellular Automata to exploit the intrinsic relevance of similar regions through interactions with neighbors across scenes. Extensive evaluations show that the SMCA model achieves leading performance compared to state-of-the-art methods in both efficiency and accuracy.
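The set-to-set matching step described in this abstract can be illustrated with a small sketch. This is not the authors' code: the 2-D feature vectors, the `region1`/`region2` names, and the plain Euclidean metric are illustrative assumptions.

```python
# Minimal sketch: symmetric Hausdorff distance between two sets of
# superpixel feature vectors, the kind of set-level matching SMCA uses
# instead of comparing single superpixels.
def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def hausdorff(set_a, set_b):
    """Symmetric Hausdorff distance between two sets of feature vectors."""
    def directed(u, v):
        # Worst-case nearest-neighbour distance from u into v.
        return max(min(euclidean(p, q) for q in v) for p in u)
    return max(directed(set_a, set_b), directed(set_b, set_a))

# Two superpixel neighbourhoods described by 2-D mean-colour features.
region1 = [(0.1, 0.2), (0.15, 0.25)]
region2 = [(0.12, 0.22), (0.5, 0.9)]
distance = hausdorff(region1, region2)
```

Because the distance is taken over whole adjacent-superpixel sets, one noisy superpixel feature perturbs the match far less than it would in a one-to-one comparison.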

Saliency Detection based on Global Color Distribution and Active Contour Analysis

  • Hu, Zhengping;Zhang, Zhenbin;Sun, Zhe;Zhao, Shuhuan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.12
    • /
    • pp.5507-5528
    • /
    • 2016
  • In computer vision, salient object detection is important for extracting useful foreground information. With active contour analysis at its core, this paper proposes a bottom-up saliency detection algorithm that combines a Bayesian model with the global color distribution. Supported by the active contour model, a more accurate foreground can be obtained as a foundation for both the Bayesian model and the global color distribution. Furthermore, we establish a contour-based selection mechanism to optimize the global color distribution, which also serves as an effective revision of the Bayesian model. To obtain a good object contour, we first intensify the object region in the source gray-scale image with a seed-based method. Once both components are available, the final saliency map is obtained by weighting the color distribution onto the Bayesian saliency map. The contribution of this paper is that, compared with the Harris-based convex hull algorithm, the active contour can extract a more accurate, non-convex foreground. Moreover, through mutual complementation, the global color distribution remedies the saliency-scattering drawback of the Bayesian model. The detected results show that the final saliency maps, generated by considering both the global color distribution and the active contour, are much improved.
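A "global color distribution" cue of the kind mentioned above can be sketched loosely as follows. The binning, the compactness score, and the `color_compactness` helper are assumptions for illustration, not the paper's formulation.

```python
# Loose sketch: quantise colours into bins and score each bin by how
# spatially compact it is. A colour concentrated in one place reads as
# more salient than a colour spread across the whole image.
def color_compactness(pixels):
    """pixels: list of (x, y, colour_bin). Returns bin -> score in (0, 1]."""
    groups = {}
    for x, y, c in pixels:
        groups.setdefault(c, []).append((x, y))
    scores = {}
    for c, pts in groups.items():
        # Mean position of the colour bin, then its average spread.
        mx = sum(p[0] for p in pts) / len(pts)
        my = sum(p[1] for p in pts) / len(pts)
        spread = sum(((x - mx) ** 2 + (y - my) ** 2) ** 0.5
                     for x, y in pts) / len(pts)
        scores[c] = 1.0 / (1.0 + spread)
    return scores

# Colour 0 is compact (two adjacent pixels); colour 1 spans the image.
scores = color_compactness([(0, 0, 0), (0, 1, 0), (0, 0, 1), (9, 9, 1)])
```

The compact colour scores higher, which is the behaviour the contour-based selection mechanism would then refine.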

A Saliency-Based Focusing Region Selection Method for Robust Auto-Focusing

  • Jeon, Jaehwan;Cho, Changhun;Paik, Joonki
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.1 no.3
    • /
    • pp.133-142
    • /
    • 2012
  • This paper presents a salient region detection algorithm for auto-focusing based on the characteristics of a human's visual attention. To describe the saliency at the local, regional, and global levels, this paper proposes a set of novel features including multi-scale local contrast, variance, center-surround entropy, and closeness to the center. Those features are then prioritized to produce a saliency map. The major advantage of the proposed approach is twofold; i) robustness to changes in focus and ii) low computational complexity. The experimental results showed that the proposed method outperforms the existing low-level feature-based methods in the sense of both robustness and accuracy for auto-focusing.
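One of the listed global-level cues, "closeness to the center", can be sketched as a Gaussian fall-off from the image centre. The `sigma` value and the normalisation are assumptions, not parameters taken from the paper.

```python
import math

# Toy sketch: weight every pixel by its distance to the image centre,
# reflecting the bias of human visual attention toward the centre.
def center_closeness(h, w, sigma=0.3):
    cy, cx = (h - 1) / 2, (w - 1) / 2
    norm = (h ** 2 + w ** 2) ** 0.5          # normalise by the diagonal
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            d = ((y - cy) ** 2 + (x - cx) ** 2) ** 0.5 / norm
            row.append(math.exp(-(d ** 2) / (2 * sigma ** 2)))
        out.append(row)
    return out

weights = center_closeness(5, 5)   # peak weight at the centre pixel
```

Such a map would be combined with the contrast, variance, and entropy features before prioritisation; this block only shows the single cue.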


The Method to Measure Saliency Values for Salient Region Detection from an Image

  • Park, Seong-Ho;Yu, Young-Jung
    • Journal of information and communication convergence engineering
    • /
    • v.9 no.1
    • /
    • pp.55-58
    • /
    • 2011
  • In this paper, we introduce an improved method for measuring the saliency values of pixels in an image. The proposed saliency measure is formulated using local color features and a statistical framework. In the preprocessing step, rough salient pixels are determined from the local contrast of an image region with respect to its neighborhood at various scales. Then, the saliency value of each pixel is calculated by Bayes' rule using the rough salient pixels. The experiments show that our approach outperforms an existing Bayes' rule-based method.
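The Bayes'-rule step can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the grey-level binning, Laplace smoothing, and the `bayes_saliency` helper are all assumptions.

```python
from collections import Counter

# Sketch: given a rough binary salient mask from the preprocessing step,
# estimate feature likelihoods for salient vs. background pixels and
# compute each pixel's posterior probability of being salient.
def bayes_saliency(pixels, rough_mask, bins=4):
    """pixels: grey levels in [0, 1); rough_mask: 0/1 rough saliency."""
    quant = [min(int(p * bins), bins - 1) for p in pixels]
    sal = [q for q, m in zip(quant, rough_mask) if m]
    bg = [q for q, m in zip(quant, rough_mask) if not m]
    prior = len(sal) / len(pixels)
    h_sal, h_bg = Counter(sal), Counter(bg)
    out = []
    for q in quant:
        # Laplace-smoothed p(feature | salient) and p(feature | background).
        l_sal = (h_sal[q] + 1) / (len(sal) + bins)
        l_bg = (h_bg[q] + 1) / (len(bg) + bins)
        out.append(l_sal * prior / (l_sal * prior + l_bg * (1 - prior)))
    return out

posterior = bayes_saliency([0.9, 0.9, 0.1, 0.1], [1, 1, 0, 0])
```

Pixels whose features resemble the rough salient set get posteriors above 0.5; background-like pixels fall below it.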

Far Distance Face Detection from The Interest Areas Expansion based on User Eye-tracking Information (시선 응시 점 기반의 관심영역 확장을 통한 원 거리 얼굴 검출)

  • Park, Heesun;Hong, Jangpyo;Kim, Sangyeol;Jang, Young-Min;Kim, Cheol-Su;Lee, Minho
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.49 no.9
    • /
    • pp.113-127
    • /
    • 2012
  • Face detection methods using image processing have been proposed in many different forms. The most widely used method is the Adaboost detector proposed by Viola and Jones, which learns Haar-like features from images; its detection performance therefore depends on the learned images. It performs well for faces within a certain distance range, but when the subject is far from the camera, face images become so small that they may not be detected with the pre-learned Haar-like features. In this paper, we propose a far-distance face detection method that combines the Viola-Jones Adaboost detector with a saliency map and the user's attention information. The saliency map is used to select candidate face regions in the input image, and faces are finally detected among the candidate regions using Adaboost with the pre-learned Haar-like features. The user's eye-tracking information is used to select the regions of interest. When a subject is so far from the camera that the face is difficult to detect, we expand the small eye-gaze region with linear interpolation, reuse it as the input image, and thereby increase face detection performance. We confirmed that the proposed model outperforms conventional Adaboost in terms of both face detection performance and computational time.

Video Saliency Detection Using Bi-directional LSTM

  • Chi, Yang;Li, Jinjiang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.6
    • /
    • pp.2444-2463
    • /
    • 2020
  • Saliency detection for video can allocate computing resources more rationally and reduce the amount of computation while improving accuracy. Deep learning can extract the edge features of images, providing technical support for video saliency. This paper proposes a new detection method: we combine a Convolutional Neural Network (CNN) and a Deep Bidirectional LSTM network (DB-LSTM) to learn spatio-temporal features, exploiting object motion information to generate a continuous sequence of saliency maps for the video frames. We also analyzed the sample database and found that human attention and saliency transitions are time-dependent, so we additionally considered cross-frame saliency detection for video. Finally, experiments show that our method is superior to other advanced methods.

Building Change Detection Using Deep Learning for Remote Sensing Images

  • Wang, Chang;Han, Shijing;Zhang, Wen;Miao, Shufeng
    • Journal of Information Processing Systems
    • /
    • v.18 no.4
    • /
    • pp.587-598
    • /
    • 2022
  • To increase building change recognition accuracy, we present a deep learning-based building change detection method using remote sensing images. In the proposed approach, by merging pixel-level and object-level information of multitemporal remote sensing images, we create the difference image (DI), and a frequency-domain saliency technique is used to generate the DI saliency map. The fuzzy C-means clustering technique pre-classifies the coarse change detection map by thresholding the DI saliency map. We then extract the neighborhood features of the unchanged and changed (building) pixels from the pixel-level and object-level feature images, which are used as valid deep neural network (DNN) training samples. The trained DNNs are then utilized to identify changes in the DI. The suggested strategy was evaluated and compared to current detection methods using two datasets. The results suggest that our proposed technique can detect more building change information and improve change detection accuracy.

Saliency Detection Using Entropy Weight and Weber's Law (엔트로피 가중치와 웨버 법칙을 이용한 세일리언시 검출)

  • Lee, Ho Sang;Moon, Sang Whan;Eom, Il Kyu
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.54 no.1
    • /
    • pp.88-95
    • /
    • 2017
  • In this paper, we present a saliency detection method using entropy weights and Weber contrast in the wavelet transform domain. Our method builds on commonly exploited conventional algorithms composed of a local bottom-up approach and a global top-down approach. First, we perform a multi-level wavelet transform of the CIE Lab color image and obtain global saliency by adding the local Weber contrasts to the corresponding low-frequency wavelet coefficients. Next, the local saliency is obtained by applying a Gaussian filter weighted by the entropy of the wavelet high-frequency subbands. The final saliency map is obtained by non-linearly combining the local and global saliencies. To evaluate the proposed saliency detection method, we perform computer simulations on two image databases. Simulation results show that the proposed method outperforms the conventional algorithms.
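The Weber-contrast measure used above follows Weber's law: perceived change is relative to the background intensity. A minimal 1-D sketch, assuming a simple neighbourhood mean as the background term (the paper applies this to wavelet coefficients, not raw rows):

```python
# Illustrative sketch: Weber contrast of each value against the mean of
# its immediate neighbourhood, |I - I_bg| / I_bg.
def weber_contrast(values, i, radius=1):
    neigh = [values[j] for j in range(max(0, i - radius),
                                      min(len(values), i + radius + 1))
             if j != i]
    mean = sum(neigh) / len(neigh)
    return abs(values[i] - mean) / (mean + 1e-8)   # epsilon avoids /0

row = [10.0, 10.0, 30.0, 10.0, 10.0]
contrasts = [weber_contrast(row, i) for i in range(len(row))]
```

The isolated bright value gets the largest contrast, which is the local pop-out behaviour the global saliency term accumulates.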

Saliency Attention Method for Salient Object Detection Based on Deep Learning (딥러닝 기반의 돌출 객체 검출을 위한 Saliency Attention 방법)

  • Kim, Hoi-Jun;Lee, Sang-Hun;Han, Hyun Ho;Kim, Jin-Soo
    • Journal of the Korea Convergence Society
    • /
    • v.11 no.12
    • /
    • pp.39-47
    • /
    • 2020
  • In this paper, we propose a deep learning-based detection method using Saliency Attention to detect salient objects in images. Salient object detection separates the object on which the human eye focuses from the background and determines the most relevant part of the image. It is useful in various fields such as object tracking, detection, and recognition. Existing deep learning-based methods are mostly Autoencoder structures, and many feature losses occur in the encoder, which compresses and extracts features, and the decoder, which decompresses and expands them. These losses cause the salient object area to be lost or the background to be detected as an object. In the proposed method, Saliency Attention is introduced to reduce the feature loss and suppress the background region within the Autoencoder structure. The influence of the feature values is determined using the ELU activation function, and Attention is performed separately on the feature values in the normalized negative and positive regions. Through this Attention method, the background area is suppressed and the salient object area is emphasized. Experimental results showed improved detection results compared to existing deep learning methods.
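One plausible reading of the ELU-based gating can be sketched on scalar features. This is an interpretation for illustration only: the paper's attention operates on CNN feature maps, and the `saliency_attention` gating below is an assumption, not the published formulation.

```python
import math

# Hedged sketch: gate each feature by its shifted ELU response, so
# strongly negative (background-like) values are suppressed toward zero
# while positive (object-like) values are amplified.
def elu(x, alpha=1.0):
    return x if x > 0 else alpha * (math.exp(x) - 1)

def saliency_attention(features):
    # elu(f) + 1 lies in (0, f + 1], so the gate shrinks negatives
    # without flipping their sign and boosts positives.
    return [f * (elu(f) + 1.0) for f in features]

weighted = saliency_attention([2.0, -2.0])
```

The positive response grows while the negative response collapses toward zero, mirroring the suppress-background / emphasize-object behaviour described in the abstract.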

Salient Object Detection via Multiple Random Walks

  • Zhai, Jiyou;Zhou, Jingbo;Ren, Yongfeng;Wang, Zhijian
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.4
    • /
    • pp.1712-1731
    • /
    • 2016
  • In this paper, we propose a novel saliency detection framework via multiple random walks (MRW), which simulates multiple agents on a graph simultaneously. In the MRW system, two agents, representing the seeds of the background and the foreground, traverse the graph according to a transition matrix and interact with each other to reach a state of equilibrium. The proposed algorithm is divided into three steps. First, an initial segmentation partitions the input image into homogeneous regions (i.e., superpixels) for saliency computation. Based on these regions, we construct a graph whose nodes correspond to the superpixels and whose edges between neighboring nodes represent the similarities of the corresponding superpixels. Second, to generate the background seeds, we filter out the one of the four image boundaries that is least likely to belong to the background; the superpixels on the three remaining sides are labeled as background seeds. To generate the foreground seeds, we utilize the center prior that foreground objects tend to appear near the image center. In the last step, the foreground and background seeds are treated as two different agents in multiple random walks to complete salient object detection. Experimental results on three benchmark databases demonstrate that the proposed method performs well against state-of-the-art methods in terms of accuracy and robustness.
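The equilibrium idea behind a single agent can be sketched on a toy graph. The 4-node similarity matrix and its weights are made up for illustration; the paper's system couples two interacting agents, which this single-agent sketch does not show.

```python
# Sketch under assumptions: one random-walk agent's distribution is
# propagated through a row-stochastic transition matrix (built from
# superpixel similarities) until it stabilises at equilibrium.
def normalise_rows(w):
    return [[v / sum(row) for v in row] for row in w]

def equilibrium(p, trans, iters=200):
    n = len(p)
    for _ in range(iters):
        p = [sum(p[i] * trans[i][j] for i in range(n)) for j in range(n)]
    return p

# Similarity weights between 4 superpixels: nodes {0,1} and {2,3} form
# two tightly linked clusters with weak links between them.
weights = [[1.0, 2.0, 0.1, 0.1],
           [2.0, 1.0, 0.1, 0.1],
           [0.1, 0.1, 1.0, 2.0],
           [0.1, 0.1, 2.0, 1.0]]
trans = normalise_rows(weights)
seed = [1.0, 0.0, 0.0, 0.0]        # agent starts at one seed node
state = equilibrium(seed, trans)   # stationary distribution of the walk
```

For a symmetric similarity matrix, the stationary distribution is proportional to each node's total edge weight; here all row sums are equal, so the walk forgets its seed and converges to the uniform distribution.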