• Title/Summary/Keyword: Visual Saliency


Object Classification based on Weakly Supervised E2LSH and Saliency map Weighting

  • Zhao, Yongwei;Li, Bicheng;Liu, Xin;Ke, Shengcai
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.1
    • /
    • pp.364-380
    • /
    • 2016
  • The most popular approach to object classification is based on the bag-of-visual-words model, which suffers from several fundamental problems that restrict its performance, such as low time efficiency, the synonymy and polysemy of visual words, and the lack of spatial information between visual words. In view of this, an object classification method based on weakly supervised E2LSH and saliency map weighting is proposed. Firstly, E2LSH (Exact Euclidean Locality Sensitive Hashing) is employed to generate a group of randomized visual dictionaries by clustering SIFT features of the training dataset, and the selection of hash functions is effectively supervised, inspired by random forest ideas, to reduce the randomness of E2LSH. Secondly, the graph-based visual saliency (GBVS) algorithm is applied to detect the saliency map of each image and to weight the visual words according to the saliency prior. Finally, a saliency-map-weighted visual language model is applied to accomplish object classification. Experimental results on the Pascal 2007 and Caltech-256 datasets indicate that the distinguishability of objects is effectively improved and that our method is superior to state-of-the-art object classification methods.
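The saliency-weighted visual-word histogram at the core of this pipeline can be sketched as follows (a minimal illustration, not the paper's implementation: the E2LSH hashing step is replaced by plain nearest-neighbor word assignment, and all function and variable names are hypothetical):

```python
import numpy as np

def weighted_bow_histogram(descriptors, keypoints_xy, dictionary, saliency_map):
    """Build a bag-of-visual-words histogram in which each visual word's
    vote is weighted by the saliency value at its keypoint location."""
    # Assign each descriptor to its nearest visual word
    # (a stand-in for the paper's E2LSH-based assignment).
    dists = np.linalg.norm(descriptors[:, None, :] - dictionary[None, :, :], axis=2)
    words = np.argmin(dists, axis=1)
    hist = np.zeros(len(dictionary))
    for word, (x, y) in zip(words, keypoints_xy):
        hist[word] += saliency_map[y, x]  # saliency prior as the vote weight
    # L1-normalize so images with different feature counts are comparable.
    total = hist.sum()
    return hist / total if total > 0 else hist
```

Salient regions thereby contribute more to the image representation than background clutter, which is what improves the distinguishability of objects.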

Saliency Map Based Color Image Compression for Visual Quality Enhancement of Image (영상의 시각적 품질향상을 위한 Saliency 맵 기반의 컬러 영상압축)

  • Jung, Sung-Hwan
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.3
    • /
    • pp.446-455
    • /
    • 2017
  • A color image compression method based on a saliency map is proposed. At a given bitrate, the proposed method provides higher quality in saliency blocks, on which people's attention focuses, than in non-saliency blocks, which attract less attention. The method uses three different quantization tables according to each block's saliency level. In an experiment using six typical images, we compared the proposed method with JPEG and other conventional methods. The results show that the proposed method (Qup=0.5*Qx) achieves about 1.2 to 3.1 dB higher PSNR than JPEG and the other methods in saliency blocks at nearly the same bitrate. In a comparison of result images, the proposed method also shows less error than the others in saliency blocks.
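The three-table scheme can be sketched roughly as follows (a minimal sketch under stated assumptions: the saliency thresholds of 0.33/0.66 and the coarse scale factor are illustrative guesses, while q_up=0.5 mirrors the abstract's Qup=0.5*Qx setting; BASE_Q is the standard JPEG luminance table):

```python
import numpy as np

# Standard JPEG luminance quantization table (ITU-T T.81, Annex K).
BASE_Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def quant_table_for_block(block_saliency, q_up=0.5, q_down=2.0):
    """Choose one of three quantization tables by the block's saliency level:
    highly salient blocks get finer quantization (higher quality), and
    non-salient blocks get coarser quantization (more compression)."""
    if block_saliency > 0.66:                      # high-saliency block
        return np.maximum(1, np.round(BASE_Q * q_up)).astype(int)
    if block_saliency > 0.33:                      # mid-saliency block
        return BASE_Q
    return np.round(BASE_Q * q_down).astype(int)   # low-saliency block
```

Finer tables mean smaller quantization steps, so more of the bit budget is spent on the blocks viewers actually look at.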

A Study on Visual Saliency Detection in Infrared Images Using Boolean Map Approach

  • Truong, Mai Thanh Nhat;Kim, Sanghoon
    • Journal of Information Processing Systems
    • /
    • v.16 no.5
    • /
    • pp.1183-1195
    • /
    • 2020
  • Visual saliency detection is an essential task because it is an important part of various vision-based applications. There are many techniques for saliency detection in color images; however, the number of methods for saliency detection in infrared images is limited. In this paper, we introduce a simple approach for saliency detection in infrared images based on thresholding. The input image is thresholded into several Boolean maps, and an initial saliency map is calculated as a weighted sum of the created Boolean maps. The initial map is further refined by thresholding, morphological operations, and a Gaussian filter to produce the final, high-quality saliency map. Experiments show that the proposed method achieves high performance when applied to real-life data.
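The threshold-and-sum pipeline described here can be sketched in a few lines (a rough illustration only: the number of maps, the linear weighting, the refinement threshold, and the pure-NumPy stand-ins for the morphological opening and Gaussian filter are all simplifying assumptions):

```python
import numpy as np

def _minmax_filter3(img, op):
    """3x3 min or max filter (erosion/dilation) via padding and stacking."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    shifts = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return op(np.stack(shifts), axis=0)

def boolean_map_saliency(image, n_maps=8):
    """Sketch of Boolean-map saliency for a grayscale (e.g. infrared) image:
    threshold at several levels, take a weighted sum of the Boolean maps,
    then refine with thresholding, morphological opening, and smoothing."""
    levels = np.linspace(image.min(), image.max(), n_maps + 2)[1:-1]
    saliency = np.zeros(image.shape, dtype=float)
    for weight, t in enumerate(levels, start=1):
        saliency += weight * (image > t)      # hotter regions survive more thresholds
    saliency /= saliency.max() + 1e-12        # normalize to [0, 1]
    mask = (saliency > 0.5).astype(float)     # refinement: binarize ...
    mask = _minmax_filter3(_minmax_filter3(mask, np.min), np.max)  # ... open ...
    kernel = np.array([0.25, 0.5, 0.25])      # ... and lightly smooth (separable blur)
    mask = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 0, mask)
    mask = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 1, mask)
    return mask
```

Because hot objects in infrared imagery exceed many thresholds, they accumulate high weights across the Boolean maps, while the cooler background falls away.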

Visual Search Model based on Saliency and Scene-Context in Real-World Images (실제 이미지에서 현저성과 맥락 정보의 영향을 고려한 시각 탐색 모델)

  • Choi, Yoonhyung;Oh, Hyungseok;Myung, Rohae
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.41 no.4
    • /
    • pp.389-395
    • /
    • 2015
  • According to much research in cognitive science, the impact of scene-context on human visual search in real-world images can be as important as that of saliency. Therefore, this study proposes a method of modeling visual search in real-world images with Adaptive Control of Thought-Rational (ACT-R), based on saliency and scene-context. The modeling method uses the utility system of ACT-R to describe the influences of saliency and scene-context in real-world images. The model was then validated by comparing its output with eye-tracking data from experiments in a simple task in which subjects searched for targets in indoor bedroom images. The results show that the model data fit the eye-tracking data quite well. In conclusion, the modeling method proposed in this study should be used to provide an accurate model of human performance in visual search tasks in real-world images.
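The combination of saliency and scene-context can be illustrated with a toy fixation-selection rule (a hypothetical sketch: the abstract does not give the utility equation, so the linear form, the equal default weights, and all names here are assumptions; the paper encodes this trade-off through ACT-R's utility system):

```python
def next_fixation(candidates, w_saliency=0.5, w_context=0.5):
    """Pick the next fixation target as the candidate region with the
    highest utility, here a weighted sum of its saliency score and its
    scene-context score (how likely the target is at that location)."""
    utility = lambda c: w_saliency * c["saliency"] + w_context * c["context"]
    return max(candidates, key=utility)
```

For example, a visually striking lamp may lose out to a dim pillow when the task is to find a pillow and the context score dominates.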

Pedestrian identification in infrared images using visual saliency detection technique

  • Truong, Mai Thanh Nhat;Kim, Sanghoon
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2019.05a
    • /
    • pp.615-618
    • /
    • 2019
  • Visual saliency detection is an important part of various vision-based applications. There is a myriad of techniques for saliency detection in color images; however, the number of methods for saliency detection in infrared images is limited. In this paper, we introduce a simple saliency-based approach for pedestrian identification in infrared images. The input image is thresholded into several Boolean maps, and an initial saliency map is then calculated as a weighted sum of the created Boolean maps. The initial map is further refined by thresholding, morphological operations, and a Gaussian filter to produce the final, high-quality saliency map. Experiments show that the proposed method produces high-performance results when applied to real-life data.

Video Coding Method Using Visual Perception Model based on Motion Analysis (움직임 분석 기반의 시각인지 모델을 이용한 비디오 코딩 방법)

  • Oh, Hyung-Suk;Kim, Won-Ha
    • Journal of Broadcast Engineering
    • /
    • v.17 no.2
    • /
    • pp.223-236
    • /
    • 2012
  • We develop a video processing method that enables more advanced, human-perception-oriented video coding. The proposed method reflects both rate-distortion-based optimization and human visual perception, which is affected by visual saliency, limited spatio-temporal resolution, and regional motion history. To capture these perceptual effects, we devise an online moving-pattern classifier using the Hedge algorithm. We then embed an existing visual saliency model into the proposed moving patterns to establish a human visual perception model, and we extend the conventional foveation filtering method to realize it. Whereas the conventional foveation filter only smooths less salient video signals, the developed foveation filter can locally smooth and enhance signals according to human visual perception without causing artifacts. Owing to this signal enhancement, the developed filter transfers the bandwidth saved on smoothed signals to the enhanced signals more efficiently. Performance evaluation verifies that the proposed method maintains overall video quality while improving perceptual quality by 12%~44%.

Salient Region Extraction based on Global Contrast Enhancement and Saliency Cut for Image Information Recognition of the Visually Impaired

  • Yoon, Hongchan;Kim, Baek-Hyun;Mukhriddin, Mukhiddinov;Cho, Jinsoo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.5
    • /
    • pp.2287-2312
    • /
    • 2018
  • Extracting key visual information from natural scene images is a challenging task and an important step in enabling the visually impaired to recognize information through tactile graphics. In this study, a novel method is proposed for extracting salient regions based on global contrast enhancement and saliency cuts in order to improve image recognition for the visually impaired. To accomplish this, an image enhancement technique is applied to natural scene images, and a saliency map is acquired to measure the color contrast of homogeneous regions against other areas of the image. The saliency maps also enable automatic salient region extraction, referred to as saliency cuts, and help obtain a high-quality binary mask. Finally, outer boundaries and inner edges are detected in natural scene images to identify visually significant edges. Experimental results indicate that the proposed method extracts salient objects effectively and achieves remarkable performance compared to conventional methods. Our method offers benefits in extracting salient objects, generating simple but important edges from natural scene images, and providing information to the visually impaired.

Robust Face Detection Based on Knowledge-Directed Specification of Bottom-Up Saliency

  • Lee, Yu-Bu;Lee, Suk-Han
    • ETRI Journal
    • /
    • v.33 no.4
    • /
    • pp.600-610
    • /
    • 2011
  • This paper presents a novel approach to face detection that localizes faces as goal-specific saliencies in a scene, using the framework of the selective visual attention of a human with a particular goal in mind. The proposed approach aims to achieve human-like robustness as well as efficiency in face detection under large scene variations. The key is to establish how specific knowledge relevant to the goal interacts with the bottom-up processing of external visual stimuli in saliency detection. We propose directly incorporating goal-related knowledge into the specification and/or modification of the internal process of a general bottom-up saliency detection framework. More specifically, prior knowledge of the human face, such as its size, skin color, and shape, is used directly to set the window size and color signature for computing the center-surround difference, as well as to modify the importance weight, as a means of transforming the framework into goal-specific saliency detection. Experimental evaluation shows that the proposed method reaches a detection rate of 93.4% with a false positive rate of 7.1%, indicating robustness against wide variations in scale and rotation.

Small Object Segmentation Based on Visual Saliency in Natural Images

  • Manh, Huynh Trung;Lee, Gueesang
    • Journal of Information Processing Systems
    • /
    • v.9 no.4
    • /
    • pp.592-601
    • /
    • 2013
  • Object segmentation is a challenging task in image processing and computer vision. In this paper, we present a visual-attention-based segmentation method to segment small interesting objects in natural images. Unlike traditional methods, we first search for the region of interest using our novel saliency-based method, which is mainly based on band-pass filtering, to obtain the appropriate frequency. Secondly, we apply a Gaussian mixture model (GMM) to locate the object region. By incorporating visual attention analysis into object segmentation, our approach narrows the search region, so that accuracy is increased and computational complexity is reduced. Experimental results indicate that our approach is efficient for object segmentation in natural images, especially for small objects, and that it significantly outperforms traditional GMM-based segmentation.
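The band-pass search step can be approximated with a difference-of-Gaussians filter, a common band-pass construction (a sketch under stated assumptions: the sigma values are illustrative, not the paper's parameters, and the paper's actual filter may differ):

```python
import numpy as np

def _gaussian_blur(img, sigma):
    """Separable Gaussian blur with a truncated 1-D kernel. The kernel
    must be shorter than the image side for 'same' convolution to work."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    blur = lambda v: np.convolve(v, k, mode="same")
    return np.apply_along_axis(blur, 1, np.apply_along_axis(blur, 0, img))

def bandpass_saliency(image, sigma_fine=1.0, sigma_coarse=4.0):
    """Band-pass (difference-of-Gaussians) saliency: subtracting a coarse
    blur from a fine blur keeps mid frequencies, where small objects
    stand out against both noise and large uniform regions."""
    response = np.abs(_gaussian_blur(image, sigma_fine) - _gaussian_blur(image, sigma_coarse))
    return response / (response.max() + 1e-12)
```

The resulting map can then seed the GMM stage: only pixels with high band-pass response need to be modeled, which is what narrows the search region.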

Visual Information Selection Mechanism Based on Human Visual Attention (인간의 주의시각에 기반한 시각정보 선택 방법)

  • Cheoi, Kyung-Joo;Park, Min-Chul
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.3
    • /
    • pp.378-391
    • /
    • 2011
  • In this paper, we propose a novel method of selecting visual information based on human bottom-up visual attention. We propose a new model that improves the accuracy of detecting attention regions by using depth information in addition to low-level spatial features such as color, lightness, orientation, and form, and the temporal feature of motion. Motion is an important cue for deriving temporal saliency, but noise introduced during input and computation deteriorates the accuracy of temporal saliency. Our system exploits the results of psychological studies to remove this noise from the motion information. Although typical systems have difficulty determining saliency when several salient regions are partially occluded and/or have almost equal saliency, our system is able to separate such regions with high accuracy. The spatiotemporally separated prominent regions from the first stage are prioritized one by one using depth values in the second stage. Experimental results show that our system describes salient regions with higher accuracy than previous approaches.