• Title/Summary/Keyword: Saliency Attention

Saliency Map Based Color Image Compression for Visual Quality Enhancement of Image (영상의 시각적 품질향상을 위한 Saliency 맵 기반의 컬러 영상압축)

  • Jung, Sung-Hwan
    • Journal of Korea Multimedia Society / v.20 no.3 / pp.446-455 / 2017
  • A color image compression method based on a saliency map is proposed. At a given bitrate, the method provides higher quality in saliency blocks, where people's attention is focused, than in non-saliency blocks, where attention is weaker. It uses three different quantization tables according to each block's saliency level. In experiments on six typical images, the proposed method was compared with JPEG and other conventional methods. The proposed method (Qup=0.5*Qx) achieved about 1.2 to 3.1 dB higher PSNR than JPEG and the other methods in saliency blocks at an almost identical bitrate. In a comparison of the resulting images, it also showed less error than the others in saliency blocks.
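
The abstract's core mechanism is that each 8x8 block is quantized with one of three tables chosen by its saliency level, with salient blocks getting a finer table (Qup=0.5*Qx). The Python sketch below is only illustrative; the thresholds, the coarser 2x table for non-salient blocks, and the function names are assumptions, not the paper's exact settings.

```python
# Illustrative sketch (not the authors' code): assign one of three JPEG-style
# quantization tables to each 8x8 block according to its mean saliency.
import numpy as np

# Standard JPEG luminance quantization table, used as the baseline Q.
Q_BASE = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
], dtype=np.float64)

def quant_table_for_block(mean_saliency: float) -> np.ndarray:
    """Pick a quantization table by saliency level (thresholds are assumptions)."""
    if mean_saliency > 0.66:      # highly salient block: finer quantization (Qup = 0.5 * Q)
        return 0.5 * Q_BASE
    if mean_saliency > 0.33:      # medium saliency: baseline table
        return Q_BASE
    return 2.0 * Q_BASE           # non-salient block: coarser quantization (assumed factor)

def quantize_blocks(dct_blocks: np.ndarray, saliency: np.ndarray) -> np.ndarray:
    """dct_blocks: (H/8, W/8, 8, 8) DCT coefficients; saliency: (H/8, W/8) in [0, 1]."""
    out = np.empty_like(dct_blocks)
    for i in range(dct_blocks.shape[0]):
        for j in range(dct_blocks.shape[1]):
            q = quant_table_for_block(float(saliency[i, j]))
            out[i, j] = np.round(dct_blocks[i, j] / q)
    return out
```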

Saliency Attention Method for Salient Object Detection Based on Deep Learning (딥러닝 기반의 돌출 객체 검출을 위한 Saliency Attention 방법)

  • Kim, Hoi-Jun;Lee, Sang-Hun;Han, Hyun Ho;Kim, Jin-Soo
    • Journal of the Korea Convergence Society / v.11 no.12 / pp.39-47 / 2020
  • In this paper, we propose a deep learning-based detection method that uses Saliency Attention to detect salient objects in images. Salient object detection separates the object on which the human eye focuses from the background and identifies the most relevant part of the image; it is useful in various fields such as object tracking, detection, and recognition. Existing deep learning-based methods mostly use Autoencoder structures, in which substantial feature loss occurs in the encoder, which compresses and extracts features, and in the decoder, which decompresses and expands them. These losses cause the salient object area to be lost or the background to be detected as an object. In the proposed method, Saliency Attention is introduced to reduce feature loss and suppress the background region in the Autoencoder structure. The influence of the feature values is determined using the ELU activation function, and attention is applied separately to the feature values in the normalized negative and positive regions. Through this attention method, the background area is suppressed and the salient object area is emphasized. Experimental results show improved detection compared with existing deep learning methods.
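
A minimal sketch of one plausible reading of the ELU-based attention described above: ELU responses are split into normalized positive and negative parts, and each part contributes its own attention term. The combination rule and normalization are assumptions; the paper's actual network is not reproduced here.

```python
# Hedged sketch of an ELU-split attention over an encoder feature map.
import numpy as np

def elu(x: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def saliency_attention(features: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """features: (C, H, W) encoder feature map; returns re-weighted features."""
    a = elu(features)
    pos = np.maximum(a, 0.0)              # object-like (positive) responses
    neg = -np.minimum(a, 0.0)             # magnitude of background-like responses
    pos_n = pos / (pos.max() + eps)       # normalize each region to [0, 1]
    neg_n = neg / (neg.max() + eps)
    # Positive-region attention boosts strong responses; negative-region
    # attention pushes background-dominated locations toward zero.
    attention = 0.5 * (pos_n + (1.0 - neg_n))
    return features * attention
```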

Implementation of Image Adaptive Saliency Map (적응적인 Saliency Map 모델 구현)

  • Park, Sang-Bum;Kim, Ki-Joong;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korean Society for Precision Engineering / v.25 no.2 / pp.131-139 / 2008
  • This paper presents a new saliency map constructed by assigning dynamic weights to individual features of an input image in order to find the ROI (Region Of Interest) or FOA (Focus Of Attention). To build a saliency map when no a priori information is available, three feature maps are first constructed, emphasizing the orientation, color, and intensity of individual pixels, respectively. From these feature maps, conspicuity maps are generated using Itti's algorithm, and their information content is measured in terms of entropy. The final saliency map is constructed by summing the conspicuity maps weighted by their individual entropies. The effectiveness of the proposed algorithm is demonstrated by showing that the ROIs it detects in ten different images are similar to those selected by the naked eyes of one hundred people.
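
The entropy-weighted fusion of conspicuity maps described above can be sketched as follows; the histogram bin count and the per-map normalization are assumptions rather than the paper's settings.

```python
# Minimal sketch: weight each conspicuity map by the entropy of its values.
import numpy as np

def map_entropy(cmap: np.ndarray, bins: int = 64) -> float:
    """Shannon entropy of a conspicuity map's intensity histogram."""
    hist, _ = np.histogram(cmap, bins=bins)
    p = hist.astype(np.float64) / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def fuse_by_entropy(conspicuity_maps: list[np.ndarray]) -> np.ndarray:
    """Sum the (normalized) orientation, color, and intensity maps, weighted by entropy."""
    weights = np.array([map_entropy(m) for m in conspicuity_maps])
    weights = weights / (weights.sum() + 1e-8)
    return sum(w * (m - m.min()) / (m.max() - m.min() + 1e-8)
               for w, m in zip(weights, conspicuity_maps))
```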

A New Covert Visual Attention System by Object-based Spatiotemporal Cues and Their Dynamic Fusioned Saliency Map (객체기반의 시공간 단서와 이들의 동적결합된 돌출맵에 의한 상향식 인공시각주의 시스템)

  • Cheoi, Kyungjoo
    • Journal of Korea Multimedia Society / v.18 no.4 / pp.460-472 / 2015
  • Most previous visual attention systems find attention regions from a saliency map obtained by combining multiple extracted features, and they differ mainly in how the features are extracted and combined. This paper presents a new system that improves the feature extraction of color and motion and the weight decision for spatial and temporal features. The system dynamically extracts the single color with the strongest response among the two opponent colors, and detects moving objects rather than moving pixels. To combine spatial and temporal features, the proposed system sets the weights dynamically according to each feature's relative activity. Comparative results show that the proposed feature extraction and integration methods improve the detection rate of attention regions.
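
A short sketch of the dynamic spatial/temporal weighting described above, under the assumption that a map's "activity" is measured by its mean response; the paper may define activity differently.

```python
# Hedged sketch of activity-driven fusion of spatial and temporal saliency maps.
import numpy as np

def fuse_spatiotemporal(spatial: np.ndarray, temporal: np.ndarray,
                        eps: float = 1e-8) -> np.ndarray:
    """spatial, temporal: normalized saliency maps of the same shape."""
    act_s = spatial.mean()                      # "activity" of the spatial cue (assumed measure)
    act_t = temporal.mean()                     # "activity" of the temporal cue
    w_s = act_s / (act_s + act_t + eps)         # relative-activity weights
    w_t = act_t / (act_s + act_t + eps)
    return w_s * spatial + w_t * temporal
```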

A Saliency-Based Focusing Region Selection Method for Robust Auto-Focusing

  • Jeon, Jaehwan;Cho, Changhun;Paik, Joonki
    • IEIE Transactions on Smart Processing and Computing / v.1 no.3 / pp.133-142 / 2012
  • This paper presents a salient region detection algorithm for auto-focusing based on the characteristics of human visual attention. To describe saliency at the local, regional, and global levels, the paper proposes a set of novel features including multi-scale local contrast, variance, center-surround entropy, and closeness to the center. These features are then prioritized to produce a saliency map. The major advantage of the proposed approach is twofold: i) robustness to changes in focus and ii) low computational complexity. Experimental results show that the proposed method outperforms existing low-level feature-based methods in both robustness and accuracy for auto-focusing.
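
As an illustration of the feature-combination idea only (not the paper's implementation), the sketch below ranks candidate focusing blocks using two of the four listed features, block variance and closeness to the image center; the block grid and the equal weights are assumptions.

```python
# Illustrative sketch: pick a focusing block from a grayscale frame.
import numpy as np

def block_variance(gray: np.ndarray, bsize: int) -> np.ndarray:
    """Per-block intensity variance, normalized to [0, 1]."""
    h, w = gray.shape[0] // bsize, gray.shape[1] // bsize
    v = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            v[i, j] = gray[i*bsize:(i+1)*bsize, j*bsize:(j+1)*bsize].var()
    return v / (v.max() + 1e-8)

def center_closeness(h: int, w: int) -> np.ndarray:
    """1 at the grid center, falling toward 0 at the corners."""
    yy, xx = np.mgrid[0:h, 0:w]
    d = np.hypot(yy - (h - 1) / 2, xx - (w - 1) / 2)
    return 1.0 - d / (d.max() + 1e-8)

def select_focus_block(gray: np.ndarray, bsize: int = 32):
    """Return (row, col) of the most salient block under the assumed features."""
    var_map = block_variance(gray, bsize)
    close = center_closeness(*var_map.shape)
    saliency = 0.5 * var_map + 0.5 * close      # equal weights: an assumption
    return np.unravel_index(np.argmax(saliency), saliency.shape)
```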

Visual Information Selection Mechanism Based on Human Visual Attention (인간의 주의시각에 기반한 시각정보 선택 방법)

  • Cheoi, Kyung-Joo;Park, Min-Chul
    • Journal of Korea Multimedia Society / v.14 no.3 / pp.378-391 / 2011
  • In this paper, we propose a novel method for selecting visual information based on human bottom-up visual attention. The proposed model improves the accuracy of attention-region detection by using depth information in addition to low-level spatial features such as color, lightness, orientation, and form, and the temporal feature of motion. Motion is an important cue for deriving temporal saliency, but noise introduced during input and computation degrades its accuracy; our system exploits results from psychological studies to remove this noise from the motion information. Whereas typical systems have difficulty determining saliency when several salient regions are partially occluded or have nearly equal saliency, our system separates such regions with high accuracy. The spatiotemporally separated prominent regions from the first stage are then prioritized one by one using their depth values in the second stage. Experimental results show that our system describes salient regions more accurately than previous approaches.
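
The second-stage depth prioritization can be sketched as below, assuming regions are given as binary masks and that nearer regions are ranked first; both assumptions are mine, not the paper's.

```python
# Hedged sketch: re-rank detected salient regions by their mean depth.
import numpy as np

def prioritize_by_depth(region_masks: list[np.ndarray],
                        depth: np.ndarray) -> list[int]:
    """Return region indices ordered by mean depth (nearest first)."""
    mean_depths = [float(depth[mask].mean()) for mask in region_masks]
    return sorted(range(len(region_masks)), key=lambda i: mean_depths[i])
```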

Detecting Salient Regions based on Bottom-up Human Visual Attention Characteristic (인간의 상향식 시각적 주의 특성에 바탕을 둔 현저한 영역 탐지)

  • 최경주;이일병
    • Journal of KIISE: Software and Applications / v.31 no.2 / pp.189-202 / 2004
  • In this paper, we propose a new method for detecting salient regions in an image, based on the characteristics of human bottom-up visual attention. Several features known to influence human visual attention, such as color and intensity, are extracted from each region of the image. These features are then converted into importance values for each region using a local competition function and combined to produce a saliency map, which represents the saliency at every location in the image as a scalar quantity and guides the selection of attended locations according to the spatial distribution of salient regions and their perceptual importance. The results indicate that the computed saliency maps correlate well with human perception of visually important regions.
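
One hedged reading of the "local competition" step is sketched below: a location's importance is how far its feature response rises above its local neighborhood mean. The neighborhood size and the use of a simple mean filter are assumptions, not the paper's definition.

```python
# Illustrative sketch of local competition followed by feature combination.
import numpy as np
from scipy.ndimage import uniform_filter

def local_competition(feature_map: np.ndarray, size: int = 15) -> np.ndarray:
    """Importance = positive difference between a location and its local mean."""
    local_mean = uniform_filter(feature_map.astype(np.float64), size=size)
    return np.maximum(feature_map - local_mean, 0.0)

def combine_features(feature_maps: list[np.ndarray]) -> np.ndarray:
    """Convert each feature (color, intensity, ...) to importance and sum."""
    return sum(local_competition(f) for f in feature_maps)
```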

Robust Face Detection Based on Knowledge-Directed Specification of Bottom-Up Saliency

  • Lee, Yu-Bu;Lee, Suk-Han
    • ETRI Journal / v.33 no.4 / pp.600-610 / 2011
  • This paper presents a novel approach to face detection that localizes faces as goal-specific saliencies in a scene, using the framework of selective visual attention of a human with a particular goal in mind. The proposed approach aims to achieve human-like robustness as well as efficiency in face detection under large scene variations. The key is establishing how knowledge specific to the goal interacts with the bottom-up processing of external visual stimuli for saliency detection. We propose directly incorporating the goal-related knowledge into the specification and/or modification of the internal process of a general bottom-up saliency detection framework. More specifically, prior knowledge of the human face, such as its size, skin color, and shape, directly sets the window size and color signature used for computing center-surround differences and modifies the importance weights, thereby transforming the process into goal-specific saliency detection. The experimental evaluation shows that the proposed method reaches a detection rate of 93.4% with a false positive rate of 7.1%, indicating robustness against wide variations in scale and rotation.
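
A loose sketch of the knowledge-directed idea: a prior on face color re-weights a bottom-up saliency map (a prior on face size would similarly set the analysis window). The normalized-rg skin bounds and the boost factor are illustrative assumptions, not the paper's color signature.

```python
# Hedged sketch: bias a bottom-up saliency map with a crude skin-color prior.
import numpy as np

def skin_likelihood(rgb: np.ndarray) -> np.ndarray:
    """Crude skin mask in normalized-rg space (assumed bounds, not the paper's model)."""
    s = rgb.sum(axis=2) + 1e-8
    r = rgb[..., 0] / s
    g = rgb[..., 1] / s
    return ((r > 0.35) & (r < 0.55) & (g > 0.25) & (g < 0.40)).astype(np.float64)

def goal_specific_saliency(bottom_up: np.ndarray, rgb: np.ndarray,
                           skin_weight: float = 2.0) -> np.ndarray:
    """Boost bottom-up saliency where goal-related (skin-color) evidence is present."""
    s = bottom_up * (1.0 + skin_weight * skin_likelihood(rgb))
    return s / (s.max() + 1e-8)
```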

Face Detection through Implementation of adaptive Saliency map (적응적인 Saliency map 모델 구현을 통한 얼굴 검출)

  • Kim, Gi-Jung;Han, Yeong-Jun;Han, Hyeon-Su
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2007.04a / pp.153-156 / 2007
  • The human visual system uses selective attention to extract only the necessary information from the many objects that reach the visual receptors and to carry out the desired task. Itti and Koch proposed a computational model, inspired by the nervous system, that can control visual attention, but their saliency map is fixed with respect to the lighting environment. This paper therefore presents a technique for constructing a saliency map model that adapts to the lighting environment in order to detect the ROI (region of interest) in an image. To emphasize the desired features in a changing environment, dynamic weights adapted to the situation are assigned. The dynamic weights are implemented by applying the PIM (Picture Information Measure) proposed by S.K. Chang to the conspicuity maps, measuring their information content, and assigning normalized values accordingly. The performance of the proposed adaptive saliency map model, which is robust to the lighting environment, is verified through face detection experiments.
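
A compact sketch of the PIM-weighted fusion described above, assuming Chang's picture information measure in its common form (total histogram count minus the count of the most frequent level); the weight normalization is my assumption.

```python
# Hedged sketch: weight conspicuity maps by their PIM before summing.
import numpy as np

def pim(cmap: np.ndarray, bins: int = 256) -> float:
    """PIM = sum(h) - max(h): low when one level dominates, higher when varied."""
    hist, _ = np.histogram(cmap, bins=bins)
    return float(hist.sum() - hist.max())

def adaptive_saliency(conspicuity_maps: list[np.ndarray]) -> np.ndarray:
    """Weight each conspicuity map by its normalized PIM and sum."""
    pims = np.array([pim(m) for m in conspicuity_maps], dtype=np.float64)
    weights = pims / (pims.sum() + 1e-8)
    return sum(w * m for w, m in zip(weights, conspicuity_maps))
```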

Query-based Visual Attention Algorithm for Object Recognition of A Mobile Robot (이동로봇의 물체인식을 위한 질의 기반 시각 집중 알고리즘)

  • Ryu, Gwang-Geun;Lee, Sang-Hoon;Suh, Il-Hong
    • Journal of the Institute of Electronics Engineers of Korea SC / v.44 no.1 / pp.50-58 / 2007
  • In this paper, we propose a query-based visual attention algorithm for effective object finding by a vision-based mobile robot. The algorithm extends conventional bottom-up visual attention algorithms: various conspicuity maps are merged into a saliency map, with the weighting values determined by query-dependent object properties. The saliency map is then used to find likely attentive locations of the queried object. To show the validity of the proposed algorithm, several objects are used to compare its performance with that of conventional bottom-up approaches. Here, color is used as an exemplar query-dependent object property.
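
A minimal sketch of the query-based weighting: conspicuity maps are merged with weights taken from the queried object's properties, e.g. boosting the color channel for a color-defined query. The map names and weight values are illustrative assumptions.

```python
# Hedged sketch: merge conspicuity maps with query-dependent weights.
import numpy as np

def query_weighted_saliency(conspicuity: dict[str, np.ndarray],
                            query_weights: dict[str, float]) -> np.ndarray:
    """conspicuity: e.g. {'color': ..., 'intensity': ..., 'orientation': ...}."""
    total = sum(query_weights.get(name, 1.0) for name in conspicuity)
    return sum(query_weights.get(name, 1.0) / total * cmap
               for name, cmap in conspicuity.items())

# Example: a query for a red object emphasizes the color conspicuity map.
# saliency = query_weighted_saliency(
#     {"color": c_map, "intensity": i_map, "orientation": o_map},
#     {"color": 3.0, "intensity": 1.0, "orientation": 1.0})
```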