• Title/Abstract/Keywords: Visual attention information

Stereo Image Quality Assessment Using Visual Attention and Distortion Predictors

  • Hwang, Jae-Jeong;Wu, Hong Ren
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.5 no.9
    • /
    • pp.1613-1631
    • /
    • 2011
  • Several metrics for assessing stereo image quality have been reported in the literature, mostly based on visual attention or on human-visual-sensitivity-based distortion prediction aided by disparity information; these do not consider the combined aspects of human visual processing. In this paper, a visual attention and depth assisted stereo image quality assessment model (VAD-SIQAM) is devised that consists of three main components: a stereo attention predictor (SAP), depth variation (DV), and a stereo distortion predictor (SDP). Visual attention is modeled based on entropy and inverse contrast to detect regions or objects of interest. Depth variation is fused into the attention probability to account for the amount of depth change in distorted stereo images. Finally, the stereo distortion predictor is designed by integrating distortion probability, which is based on low-level human visual system (HVS) responses, into the actual attention probabilities. The results show that regions of attention are detected among the visually significant distortions in the stereo image pair, and that drawbacks of human-visual-sensitivity-based picture quality metrics are alleviated by integrating visual attention and depth information. We also show that the positive correlation with ground-truth attention and depth maps increases up to 0.949 and 0.936 in terms of the Pearson and Spearman correlation coefficients, respectively.
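
The Pearson and Spearman coefficients quoted above are standard agreement measures between a predicted attention map and a ground-truth map. The following is a minimal sketch of that evaluation step, assuming both maps are 2-D arrays of equal shape and using SciPy; it is not the authors' code.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def attention_map_correlation(predicted: np.ndarray, ground_truth: np.ndarray):
    """Flatten both maps and return (Pearson r, Spearman rho)."""
    p = predicted.ravel().astype(float)
    g = ground_truth.ravel().astype(float)
    pearson_r, _ = pearsonr(p, g)
    spearman_rho, _ = spearmanr(p, g)
    return pearson_r, spearman_rho

# Example with random stand-in maps (real maps would come from the model
# and from eye-tracking or labeled ground truth).
rng = np.random.default_rng(0)
pred = rng.random((64, 64))
gt = pred + 0.1 * rng.random((64, 64))   # correlated by construction
print(attention_map_correlation(pred, gt))
```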

Computer Vision System using the mechanisms of human visual attention (인간의 시각적 주의 능력을 이용한 컴퓨터 시각 시스템)

  • 최경주;이일병
    • Proceedings of the IEEK Conference
    • /
    • 2001.06d
    • /
    • pp.239-242
    • /
    • 2001
  • As systems for real-time computer vision are confronted with prodigious amounts of visual information, it has become a priority to locate and analyze just the information essential to the task at hand while ignoring the vast flow of irrelevant detail. One way to achieve this is to use the human visual attention mechanism. This paper gives a short review of human visual attention mechanisms and of several computational models of visual attention. It can serve as basic material for research on developing visual attention systems that perform various complex tasks more efficiently.

Visual Attention Model Based on Particle Filter

  • Liu, Long;Wei, Wei;Li, Xianli;Pan, Yafeng;Song, Houbing
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.8
    • /
    • pp.3791-3805
    • /
    • 2016
  • The visual attention mechanism includes two attention models, the bottom-up (B-U) and the top-down (T-D), whose physiology has not yet been accurately described. In this paper, the visual attention mechanism is regarded as a Bayesian fusion process, and a visual attention model based on a particle filter is proposed. Under certain assumed conditions, a calculation formula for the Bayesian posterior probability is derived. The visual attention fusion process based on the particle filter is realized through importance sampling, particle weight updating, and resampling, and visual attention is finally determined by the particle distribution state. Test results on multiple groups of images show that this model yields better subjective and objective results than other models.
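
The importance sampling, weight update, and resampling steps named in this abstract form the standard particle filter loop. The sketch below is a generic illustration under assumed choices (a random-walk motion model and a non-negative saliency map standing in for the observation likelihood); it is not the authors' implementation.

```python
import numpy as np

def particle_filter_attention(saliency, n_particles=500, n_steps=10,
                              motion_sigma=3.0, seed=0):
    """Return an estimated attention location (row, col) for a 2-D saliency map."""
    rng = np.random.default_rng(seed)
    h, w = saliency.shape
    # Importance sampling: initialize particles uniformly over the image plane.
    particles = np.column_stack([rng.uniform(0, h, n_particles),
                                 rng.uniform(0, w, n_particles)])
    weights = np.full(n_particles, 1.0 / n_particles)
    for _ in range(n_steps):
        # Prediction with an assumed random-walk motion model.
        particles += rng.normal(0.0, motion_sigma, particles.shape)
        particles[:, 0] = np.clip(particles[:, 0], 0, h - 1)
        particles[:, 1] = np.clip(particles[:, 1], 0, w - 1)
        # Weight update: the saliency value acts as the observation likelihood.
        rows, cols = particles[:, 0].astype(int), particles[:, 1].astype(int)
        weights = weights * (saliency[rows, cols] + 1e-12)
        weights /= weights.sum()
        # Resampling in proportion to the weights.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)
    # Visual attention is summarized by the particle distribution (here its mean).
    return particles.mean(axis=0)
```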

Visible Distortion Predictors Based on Visual Attention in Color Images

  • Cho, Sang-Gyu;Hwang, Jae-Jeong;Kwak, Nae-Joung
    • Journal of information and communication convergence engineering
    • /
    • v.10 no.3
    • /
    • pp.300-306
    • /
    • 2012
  • An image attention model and its application to image quality assessment are discussed in this paper. The attention model is based on rarity quantification, which is related to the self-information that attracts attention in an image. It is simpler than other models yet gives more weight to the global contrast between a pixel and the whole image. The visual attention model is used to develop a local distortion predictor for color images, named the color visual differences predictor (CVDP), in order to effectively detect luminance and color distortions.
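
Rarity-based attention via self-information is commonly computed as -log p(feature), with p estimated over the whole image, which is what links the model to global contrast. A minimal grayscale sketch under that assumption follows; the paper's CVDP additionally handles color and distortion prediction, which this does not reproduce.

```python
import numpy as np

def self_information_saliency(gray: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """gray: float image scaled to [0, 1]. Returns a normalized rarity map."""
    # Global intensity distribution over the whole image.
    hist, edges = np.histogram(gray, bins=n_bins, range=(0.0, 1.0))
    prob = hist / max(hist.sum(), 1)
    # Self-information of each pixel's intensity bin: rare intensities score high.
    bin_idx = np.clip(np.digitize(gray, edges[1:-1]), 0, n_bins - 1)
    saliency = -np.log(prob[bin_idx] + 1e-12)
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-12)
```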

A New Performance Evaluation Method for Visual Attention System (시각주의 탐색 시스템을 위한 새로운 성능 평가 기법)

  • Cheoi, Kyungjoo
    • Journal of Information Technology Services
    • /
    • v.16 no.1
    • /
    • pp.55-72
    • /
    • 2017
  • Many current studies of visual attention seek to build application systems that can be used in practice, and they have obtained good results not only on simulated images but also on real-world images. However, although previous models of selective visual attention are intended to emulate human vision, few experiments have verified them against actual human observers, and there are neither standardized data nor a standardized experimental method for real images. Therefore, in this paper, we propose a new performance evaluation technique for visual attention systems. The method evaluates a visual attention system by comparing its output with the results of human experiments on visual attention, in which people's instinctive, unconscious gaze behavior is recorded as they view the given images; this makes it well suited to evaluating bottom-up attention systems. We also propose a new selective attention system that effectively detects ROI regions by using spatial and temporal features adaptively selected according to the input image. We evaluated the proposed system with the developed evaluation method and confirmed that its results are similar to those of human visual attention.
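
One simple, hedged illustration of comparing a bottom-up attention system against human experimental data is to measure how often recorded human fixation points fall inside the most salient portion of the system's map; the paper's actual protocol may differ.

```python
import numpy as np

def fixation_hit_rate(saliency: np.ndarray, fixations, top_percent: float = 10.0) -> float:
    """fixations: iterable of (row, col) human fixation points on the same image."""
    fixations = list(fixations)
    # Keep only the top-K percent most salient pixels as the "attended" region.
    threshold = np.percentile(saliency, 100.0 - top_percent)
    hits = sum(1 for r, c in fixations if saliency[r, c] >= threshold)
    return hits / max(len(fixations), 1)
```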

A Novel Feature Map Generation and Integration Method for Attention Based Visual Information Processing System using Disparity of a Stereo Pair of Images (주의 기반 시각정보처리체계 시스템 구현을 위한 스테레오 영상의 변위도를 이용한 새로운 특징맵 구성 및 통합 방법)

  • Park, Min-Chul;Cheoi, Kyung-Joo
    • The KIPS Transactions:PartB
    • /
    • v.17B no.1
    • /
    • pp.55-62
    • /
    • 2010
  • The human visual attention system has a remarkable ability to interpret complex scenes with ease by selecting or focusing on a small region of the visual field without scanning the whole image. In this paper, a novel feature map generation and integration method for an attention-based visual information processing system is proposed. In our approach, the depth information obtained from a stereo pair of images is exploited as one of the spatial visual features used to form a set of topographic feature maps. Comparative experiments show that the correct detection rate of visual attention regions improves when the depth feature is used compared to when it is not.
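
Attention systems of this kind build topographic feature maps, normalize them, and combine them into a single saliency map; the abstract adds disparity-derived depth as one more such map. The sketch below illustrates that integration step under an assumed equal weighting, not the paper's exact feature set or combination rule.

```python
import numpy as np

def normalize_map(m: np.ndarray) -> np.ndarray:
    """Rescale a feature map to [0, 1] so maps with different ranges are comparable."""
    m = m.astype(float)
    return (m - m.min()) / (m.max() - m.min() + 1e-12)

def integrate_feature_maps(feature_maps, weights=None) -> np.ndarray:
    """feature_maps: list of 2-D arrays of equal shape (e.g. intensity, edges, depth)."""
    if weights is None:
        weights = [1.0 / len(feature_maps)] * len(feature_maps)
    saliency = sum(w * normalize_map(f) for w, f in zip(weights, feature_maps))
    return normalize_map(saliency)
```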

A Study on Visual Behavior for Presenting Consumer-Oriented Information on an Online Fashion Store

  • Kim, Dahyun;Lee, Seunghee
    • Journal of the Korean Society of Clothing and Textiles
    • /
    • v.44 no.5
    • /
    • pp.789-809
    • /
    • 2020
  • Growth in online channels has created fierce competition; consequently, retailers have to invest increasing effort into attracting consumers. In this study, eye-tracking technology was used to examine consumers' visual behavior and gain an understanding of how they search product information for fashion products. Product attribute information was classified into two image-based elements (model image information and detail image information) and two text-based elements (basic text information and detail text information), after which consumers' visual behavior for each information element was analyzed. Furthermore, whether involvement affects consumers' information search behavior was investigated. The results demonstrated that model image information attracted visual attention the quickest, while detail text information and model image information received the most visual attention. Additionally, high-involvement consumers tended to pay more attention to detailed information, while low-involvement consumers tended to pay more attention to image-based and basic information. This study is expected to help broaden the understanding of consumer behavior and provide implications for efficiently organizing product information for online fashion stores.

Modeling of Visual Attention Probability for Stereoscopic Videos and 3D Effect Estimation Based on Visual Attention (3차원 동영상의 시각 주의 확률 모델 도출 및 시각 주의 기반 입체감 추정)

  • Kim, Boeun;Song, Wonseok;Kim, Taejeong
    • Journal of KIISE
    • /
    • v.42 no.5
    • /
    • pp.609-620
    • /
    • 2015
  • Viewers of videos are likely to absorb more information from the part of the screen that attracts visual attention. This fact has led to visual attention models that are used in producing and evaluating videos. In this paper, we investigate the factors that significantly affect visual attention and the mathematical form of the visual attention model, and we estimate the visual attention probability using the statistical design of experiments. An analysis of variance (ANOVA) verifies that motion velocity, distance from the screen, and amount of defocus blur significantly affect human visual attention. Using response surface modeling (RSM), we build a visual attention score model over these three factors, from which we calculate the visual attention probabilities (VAPs) of image pixels. The VAPs are applied directly to an existing gradient-based measure of 3D effect perception: by weighting the measure according to the VAPs, our algorithm achieves more accurate measurement than the existing method. The performance of the proposed measurement is assessed by comparing it with subjective evaluation as well as with existing methods, and the comparison verifies that the proposed measurement outperforms the existing ones.
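
Weighting a per-pixel quality or 3D-effect score by a visual attention probability map is a straightforward operation; the sketch below illustrates it with placeholder inputs, since the actual gradient-based measure and the RSM-derived VAP model are defined only in the paper.

```python
import numpy as np

def attention_weighted_score(gradient_score: np.ndarray, vap: np.ndarray) -> float:
    """Both inputs are 2-D maps of equal shape; the VAP map is normalized to sum to 1,
    so attended regions contribute proportionally more to the overall estimate."""
    weights = vap / (vap.sum() + 1e-12)
    return float((gradient_score * weights).sum())
```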

Study on Visual Recognition Enhancement of Yellow Carpet Placed at Near Pedestrian Crossing Areas : Visual Attention Software Implementation (횡단보도 옐로카펫 설치에 따른 시인성 증진효과 연구 : Visual Attention Software 분석 중심으로)

  • Ahn, Hyo-Sub;Kim, Jin-Tae
    • Journal of Information Technology Services
    • /
    • v.15 no.4
    • /
    • pp.73-83
    • /
    • 2016
  • Pedestrian safety has recently been addressed with the yellow carpet, a yellow-colored pavement marking prepared for children waiting at pedestrian crossings, but its effectiveness had not been validated in practice. It is a promising device for highway safety because it nudges pedestrians to step onto the yellow-colored area. This paper reports a study that checked the effectiveness of the yellow carpet from the vehicle driver's perspective by applying a recently introduced information technology (IT) service, Visual Attention Software (VAS). It was assumed that VAS, developed by 3M in the United States, could explain Korean drivers' visual reaction behavior, since the technology embedded in VAS was developed and validated on data from various countries. Sets of pictures were taken at thirteen field sites in seven school zones in the Seoul metropolitan area before and after the installation of a yellow carpet. The picture sets were analyzed with VAS, and the results were compared using selected safety measures: the likelihood of the driver focusing on a standing pedestrian (waiting for the pedestrian green signal) as affected by the contrasting yellow-colored pavement behind him or her. The before-and-after comparison showed that placing a yellow carpet would (1) increase drivers' visual attention on the pedestrian crossing area by 71% and (2) move that area 2.4 steps earlier in the sequential order of visual attention. These findings support wider deployment of the measure and thus greater children's safety at pedestrian crossings. The results also highlight how advanced IT services can support change in the conservative traffic safety engineering field, although more robust research is recommended to overcome the simplifications of this study.

Region of Interest Detection Based on Visual Attention and Threshold Segmentation in High Spatial Resolution Remote Sensing Images

  • Zhang, Libao;Li, Hao
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.8
    • /
    • pp.1843-1859
    • /
    • 2013
  • The continuous increase in the spatial resolution of remote sensing images poses a great challenge to image analysis and processing. Traditional prior-knowledge-based region detection and target recognition algorithms for high-resolution remote sensing images generally employ a global search, which results in prohibitive computational complexity. In this paper, a more efficient region of interest (ROI) detection algorithm based on visual attention and threshold segmentation (VA-TS) is proposed, wherein a visual attention mechanism is used to avoid applying image segmentation and feature detection to the entire image. The input image is subsampled to decrease the amount of data, and the discrete moment transform (DMT) feature is extracted to provide a finer description of the edges. The feature maps are combined with weights according to the number of "strong points" and "salient points". A threshold segmentation strategy is employed to obtain more accurate ROI shape information with very low computational complexity. Experimental statistics show that the proposed algorithm is computationally efficient and provides more visually accurate detection results; its calculation time is only about 0.7% of that of the traditional Itti model.
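
Threshold segmentation of a saliency map into ROI candidates can be illustrated as below; the threshold rule (a fraction of the map maximum) and the use of scipy.ndimage connected-component labeling are assumptions for illustration, not the VA-TS procedure itself.

```python
import numpy as np
from scipy import ndimage

def rois_from_saliency(saliency: np.ndarray, frac: float = 0.6):
    """Return a list of (row_min, col_min, row_max, col_max) boxes for salient regions."""
    # Keep pixels whose saliency exceeds a fraction of the global maximum.
    mask = saliency >= frac * saliency.max()
    # Group the surviving pixels into connected regions.
    labeled, n_regions = ndimage.label(mask)
    boxes = []
    for rows, cols in ndimage.find_objects(labeled):
        boxes.append((rows.start, cols.start, rows.stop - 1, cols.stop - 1))
    return boxes
```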