• Title/Summary/Keyword: Saliency Detection

Search Results: 72

A Study on Visual Saliency Detection in Infrared Images Using Boolean Map Approach

  • Truong, Mai Thanh Nhat; Kim, Sanghoon
    • Journal of Information Processing Systems, v.16 no.5, pp.1183-1195, 2020
  • Visual saliency detection is an essential component of various vision-based applications. Many techniques exist for saliency detection in color images; however, methods for saliency detection in infrared images remain limited. In this paper, we introduce a simple approach for saliency detection in infrared images based on thresholding. The input image is thresholded into several Boolean maps, and an initial saliency map is calculated as a weighted sum of the created Boolean maps. The initial map is further refined using thresholding, a morphological operation, and a Gaussian filter to produce the final, high-quality saliency map. Experiments showed that the proposed method achieves high performance when applied to real-life data.
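The pipeline described in this abstract (threshold into Boolean maps, weighted sum, then refinement by thresholding, morphology, and Gaussian smoothing) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the threshold spacing, weighting scheme, and refinement parameters are all assumptions.

```python
import numpy as np
from scipy import ndimage

def boolean_map_saliency(gray, thresholds=None, sigma=3.0):
    """Hedged sketch of Boolean-map saliency for a single-channel
    (e.g. infrared) image. All parameter choices are assumptions."""
    gray = gray.astype(np.float64)
    if thresholds is None:
        # Assumed: evenly spaced thresholds across the intensity range
        thresholds = np.linspace(gray.min(), gray.max(), 8, endpoint=False)[1:]
    saliency = np.zeros_like(gray)
    for i, t in enumerate(thresholds):
        bmap = (gray > t).astype(np.float64)   # Boolean map at threshold t
        weight = (i + 1) / len(thresholds)     # assumed weighting scheme
        saliency += weight * bmap
    saliency /= saliency.max() + 1e-12         # normalize to [0, 1]
    # Refinement: binarize, morphological opening, Gaussian smoothing
    binary = saliency > 0.5
    opened = ndimage.binary_opening(binary, structure=np.ones((3, 3)))
    return ndimage.gaussian_filter(opened.astype(np.float64), sigma=sigma)
```

On a synthetic image with one bright region, the bright region ends up with a high, smoothly falling-off saliency value while the background stays near zero.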

An Improved Saliency Detection for Different Light Conditions

  • Ren, Yongfeng; Zhou, Jingbo; Wang, Zhijian; Yan, Yunyang
    • KSII Transactions on Internet and Information Systems (TIIS), v.9 no.3, pp.1155-1172, 2015
  • In this paper, we propose a novel saliency detection framework based on illumination-invariant features to improve the accuracy of saliency detection under different lighting conditions. The proposed algorithm consists of three steps. First, we extract illumination-invariant features based on locality-sensitive histograms to reduce the effect of illumination. Second, a preliminary saliency map is obtained in the CIE Lab color space. Last, we use a region-growing method to fuse the illumination-invariant features and the preliminary saliency map into a new framework. In addition, we integrate spatial-distinctness information, since salient objects are usually compact. Experiments on a benchmark dataset show that the proposed saliency detection framework outperforms state-of-the-art algorithms under different illumination conditions.

Pedestrian identification in infrared images using visual saliency detection technique

  • Truong, Mai Thanh Nhat; Kim, Sanghoon
    • Proceedings of the Korea Information Processing Society Conference, 2019.05a, pp.615-618, 2019
  • Visual saliency detection is an important part of various vision-based applications. A myriad of techniques exist for saliency detection in color images; however, methods for saliency detection in infrared images remain scarce. In this paper, we introduce a simple approach for pedestrian identification in infrared images using saliency. The input image is thresholded into several Boolean maps, and an initial saliency map is then calculated as a weighted sum of the created Boolean maps. The initial map is further refined using thresholding, a morphological operation, and a Gaussian filter to produce the final, high-quality saliency map. Experiments showed that the proposed method produces high-performance results when applied to real-life data.

Visual Saliency Detection Based on Color Frequency Features under Bayesian Framework

  • Ayoub, Naeem; Gao, Zhenguo; Chen, Danjie; Tobji, Rachida; Yao, Nianmin
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.2, pp.676-692, 2018
  • Saliency detection has been an active research area in neurobiology in recent years; several cognitive and interactive systems have been designed to simulate the saliency model (an attentional mechanism that focuses on the most significant part of an image). In this paper, a bottom-up saliency detection model is proposed that takes into account color and luminance frequency features in the RGB and CIE L*a*b* color spaces of the image. We employ low-level image features and apply a band-pass filter to estimate and highlight the salient region. We compute the likelihood probability by applying a Bayesian framework at the pixel level. Experiments on two publicly available datasets (MSRA and SED2) show that our saliency model performs better than ten state-of-the-art algorithms, achieving higher precision, better recall, and a higher F-measure.
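The band-pass filtering step the abstract describes is reminiscent of frequency-tuned saliency, in which a pixel is salient when its low-pass-filtered feature value differs from the global (DC) mean of that channel. A hedged sketch under that reading, with the channel set, filter width, and combination rule all assumed:

```python
import numpy as np
from scipy import ndimage

def bandpass_saliency(channels, low_sigma=1.0):
    """Sketch of band-pass saliency over per-channel features (e.g. the
    L*, a*, b* planes). The difference-of-means form is an assumption,
    not the authors' exact filter."""
    saliency = np.zeros_like(channels[0], dtype=np.float64)
    for ch in channels:
        ch = ch.astype(np.float64)
        # Low-pass keeps low/mid frequencies; subtracting the global
        # mean removes the DC component -> a band-pass response
        smoothed = ndimage.gaussian_filter(ch, sigma=low_sigma)
        saliency += (smoothed - ch.mean()) ** 2
    return np.sqrt(saliency)   # Euclidean combination across channels
```

A pixel whose neighborhood deviates strongly from the channel's global mean receives a large response, which matches the intuition that rare colors are salient.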

Pothole Detection Algorithm Based on Saliency Map for Improving Detection Performance (포트홀 탐지 정확도 향상을 위한 Saliency Map 기반 포트홀 탐지 알고리즘)

  • Jo, Young-Tae; Ryu, Seung-Ki
    • The Journal of The Korea Institute of Intelligent Transport Systems, v.15 no.4, pp.104-114, 2016
  • Potholes cause diverse problems such as wheel damage and car accidents, so pothole detection technology is essential for efficient road maintenance. Previously, pothole detection was performed through manual reporting, so the problems caused by potholes went unresolved. Recently, many pothole detection systems based on video cameras, which can be implemented at low cost, have been studied. In this paper, we propose a new pothole detection algorithm based on saliency map information to improve on our previously developed algorithm, which produces false detections in complicated situations such as potholes overlapping with shadows or having surface textures similar to the normal road surface. To address these problems, the proposed algorithm extracts more accurate pothole regions using saliency map information in two stages: candidate extraction and decision. Experimental results show that the proposed algorithm outperforms our previous algorithm.

Robust Face Detection Based on Knowledge-Directed Specification of Bottom-Up Saliency

  • Lee, Yu-Bu; Lee, Suk-Han
    • ETRI Journal, v.33 no.4, pp.600-610, 2011
  • This paper presents a novel approach to face detection that localizes faces as goal-specific saliencies in a scene, using the framework of the selective visual attention of a human with a particular goal in mind. The proposed approach aims at achieving human-like robustness as well as efficiency in face detection under large scene variations. The key is to establish how knowledge specific to the goal interacts with the bottom-up processing of external visual stimuli for saliency detection. We propose directly incorporating goal-related knowledge into the specification and/or modification of the internal process of a general bottom-up saliency detection framework. More specifically, prior knowledge of the human face, such as its size, skin color, and shape, directly sets the window size and color signature used for computing the center of difference, and modifies the importance weights, thereby transforming the process into goal-specific saliency detection. Experimental evaluation shows that the proposed method reaches a detection rate of 93.4% with a false positive rate of 7.1%, indicating robustness against wide variations in scale and rotation.

Saliency Detection based on Global Color Distribution and Active Contour Analysis

  • Hu, Zhengping; Zhang, Zhenbin; Sun, Zhe; Zhao, Shuhuan
    • KSII Transactions on Internet and Information Systems (TIIS), v.10 no.12, pp.5507-5528, 2016
  • In computer vision, salient object detection is important for extracting useful foreground information. In this paper, we propose a bottom-up saliency detection algorithm with active contour analysis at its core, combined with a Bayesian model and a global color distribution. With the support of the active contour model, a more accurate foreground can be obtained as a foundation for the Bayesian model and the global color distribution. Furthermore, we establish a contour-based selection mechanism to optimize the global color distribution, which also serves as an effective refinement of the Bayesian model. To obtain an accurate object contour, we first intensify the object region in the source gray-scale image using a seed-based method. Once both components are available, the final saliency map is obtained by weighting the color distribution against the Bayesian saliency map. The contribution of this paper is that, compared with the Harris-based convex hull algorithm, the active contour extracts a more accurate, non-convex foreground. Moreover, through mutual complementation, the global color distribution resolves the scattered-saliency drawback of the Bayesian model. The detected results show that the final saliency maps generated by considering both the global color distribution and the active contour are much improved.

Face Detection through Implementation of Adaptive Saliency Map (적응적인 Saliency map 모델 구현을 통한 얼굴 검출)

  • Kim, Gi-Jung; Han, Yeong-Jun; Han, Hyeon-Su
    • Proceedings of the Korean Institute of Intelligent Systems Conference, 2007.04a, pp.153-156, 2007
  • The human visual system uses selective attention to extract only the necessary information from the many objects reaching the visual receptors and to perform the desired task. Itti and Koch proposed a computational model, inspired by the nervous system, that can control visual attention, but their saliency map is fixed with respect to the illumination environment. In this paper, we therefore present a technique for constructing a saliency map model that adapts to the illumination environment in order to detect regions of interest (ROI) in images. To emphasize the desired features in a changing environment, context-adaptive dynamic weights are assigned. The dynamic weights are implemented by applying the PIM (Picture Information Measure), proposed by S.K. Chang, to each conspicuity map to measure its information content, and then assigning normalized values accordingly. The performance of the proposed illumination-robust adaptive saliency map model is verified through face detection experiments.
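The PIM-based dynamic weighting can be sketched as follows. PIM, as defined by Chang, is the total pixel count minus the largest histogram bin, so a nearly uniform conspicuity map (little information) receives a low weight. The normalization of the per-map PIM values into weights is an assumption of this sketch, not necessarily the authors' scheme.

```python
import numpy as np

def picture_information_measure(img, bins=256):
    """PIM: total pixel count minus the largest histogram bin.
    Zero for a perfectly uniform image; large for a varied one."""
    hist, _ = np.histogram(img.ravel(), bins=bins, range=(0, bins))
    return hist.sum() - hist.max()

def dynamic_weights(conspicuity_maps):
    """Assumed normalization: per-map PIM values scaled to sum to 1,
    so maps carrying more information get larger weights."""
    pims = np.array(
        [picture_information_measure((m * 255).astype(np.uint8))
         for m in conspicuity_maps],
        dtype=np.float64,
    )
    total = pims.sum()
    if total == 0:
        return np.full(len(pims), 1.0 / len(pims))  # fall back to uniform
    return pims / total
```

A flat conspicuity map then contributes nothing to the combined saliency map, while a richly varied one dominates.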


Analysis of the effect of class classification learning on the saliency map of Self-Supervised Transformer (클래스분류 학습이 Self-Supervised Transformer의 saliency map에 미치는 영향 분석)

  • Kim, JaeWook; Kim, Hyeoncheol
    • Proceedings of the Korean Society of Computer Information Conference, 2022.07a, pp.67-70, 2022
  • As Transformer models, first widely adopted in NLP, have been applied to vision, they have overcome the stagnating performance of existing CNN-based models in various areas such as object detection and segmentation. In addition, a ViT (Vision Transformer) model trained with self-supervised learning on images alone, without label data, can extract saliency maps that detect the regions of important objects in an image; as a result, research on object detection and semantic segmentation using self-supervised ViT is being actively conducted. In this paper, we attach a classifier to a ViT model and visually compare the saliency maps of a model trained with ordinary supervised learning and a model transfer-learned from self-supervised pretrained weights. Through this comparison, we confirm the effect of class-classification-based transfer learning on the Transformer's saliency maps.


Automatic Change Detection Using Unsupervised Saliency Guided Method with UAV and Aerial Images

  • Farkoushi, Mohammad Gholami; Choi, Yoonjo; Hong, Seunghwan; Bae, Junsu; Sohn, Hong-Gyoo
    • Korean Journal of Remote Sensing, v.36 no.5_3, pp.1067-1076, 2020
  • In this paper, an unsupervised saliency-guided change detection method using UAV and aerial imagery is proposed. Regions that differ markedly from other areas are salient, which makes them more distinct; the substantial difference between two images therefore makes saliency well suited to guiding the change detection process. Change Vector Analysis (CVA), which can extract the overall magnitude and direction of change from multi-spectral and multi-temporal remote sensing data, is used to generate an initial difference image. Principal Component Analysis (PCA) is then combined with the unsupervised CVA and the saliency to guide change detection in the UAV and aerial images. By applying saliency generation to the difference map extracted via CVA, potentially changed areas are obtained, and by thresholding the saliency map, most of the areas of interest are correctly extracted. Finally, PCA is applied to extract features, and K-means clustering is used to label the extracted areas as changed or unchanged. The proposed method was applied to image sets over flooded and typhoon-damaged areas and, compared against manually extracted ground truth, produced results about 95 percent better than the PCA approach for all data sets. Finally, we compared our approach with the PCA K-means method to show the effectiveness of the method.
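Two steps of the pipeline above, the CVA difference image and the final two-class K-means labeling, can be sketched as follows. The saliency-guided masking and PCA feature extraction steps are omitted, and all names and parameters here are illustrative rather than the authors' code.

```python
import numpy as np

def cva_difference(img1, img2):
    """Change Vector Analysis: per-pixel magnitude of the spectral
    change vector between two co-registered multi-band images."""
    d = img1.astype(np.float64) - img2.astype(np.float64)
    return np.sqrt((d ** 2).sum(axis=-1))   # magnitude over bands

def kmeans_two_class(features, iters=20):
    """Minimal 2-class K-means over feature vectors, assigning each
    pixel to a changed or unchanged cluster."""
    # Initialize the two centers at the feature extremes
    centers = np.array([features.min(axis=0), features.max(axis=0)])
    for _ in range(iters):
        dists = ((features[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = features[labels == k].mean(axis=0)
    return labels
```

A minimal usage would be `labels = kmeans_two_class(cva_difference(img1, img2).reshape(-1, 1))`; in the full method the clustering would instead run on PCA features restricted to the saliency-selected areas.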