• Title/Summary/Keyword: Visual Attention


A Study on the Visual Attention of Sexual Appeal Advertising Image Utilizing Eye Tracking (아이트래킹을 활용한 성적소구광고 이미지의 시각적 주의에 관한 연구)

  • Hwang, Mi-Kyung;Kwon, Mahn-Woo;Lee, Sang-Ho;Kim, Chee-Yong
    • Journal of the Korea Convergence Society / v.11 no.10 / pp.207-212 / 2020
  • This study analyzes soju (Korean alcohol) advertisement images, which are relatively easy to interpret subjectively, among the sexual appeal advertisements that stimulate consumers' curiosity. The images were examined through three AOIs (areas of interest): face, body, and product, using eye tracking, one of the psychophysiological indicators. The analysis reveals that visual attention to the advertising model was higher for the face than for the body. Contrary to the prediction that men would be more interested in body shape than women, both men and women showed higher interest in the face than in the body. In addition, recognition and recollection of the product were not significant. This study is significant in examining patterns of visual attention, such as the gaze points and gaze durations of male and female consumers, on sexual appeal advertisements. Further, the study aims to bring a positive influence to soju advertisement images by presenting the expression methods such images should pursue as well as an appropriate marketing direction.
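The AOI analysis described in this abstract can be sketched in a few lines: classify each fixation into one of the three areas of interest and accumulate dwell time per area. The AOI rectangles and fixation records below are hypothetical placeholders, not data from the study.

```python
# Sketch of an AOI (area-of-interest) dwell-time analysis. The AOI
# rectangles and fixation records are hypothetical; a real study would
# export both from eye-tracking software.

# Each AOI is a named rectangle: (x_min, y_min, x_max, y_max) in pixels.
AOIS = {
    "face":    (300, 50, 500, 250),
    "body":    (250, 250, 550, 700),
    "product": (600, 400, 760, 700),
}

def classify_fixation(x, y):
    """Return the name of the AOI containing point (x, y), or None."""
    for name, (x0, y0, x1, y1) in AOIS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def dwell_times(fixations):
    """Sum fixation durations (ms) per AOI.

    fixations: iterable of (x, y, duration_ms) tuples.
    """
    totals = {name: 0 for name in AOIS}
    for x, y, dur in fixations:
        aoi = classify_fixation(x, y)
        if aoi is not None:
            totals[aoi] += dur
    return totals

# Hypothetical fixation sequence for one viewer.
sample = [(400, 120, 310), (410, 140, 250), (350, 500, 180), (650, 550, 90)]
print(dwell_times(sample))  # {'face': 560, 'body': 180, 'product': 90}
```

Comparing the per-AOI totals across male and female viewers is then a straightforward aggregation over participants.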

A Neural Network Model for Visual Selection: Top-down mechanism of Feature Gate model (시각적 선택에 대한 신경망 모형: FeatureGate 모형의 하향식 기제)

  • 김민식
    • Korean Journal of Cognitive Science / v.10 no.3 / pp.1-15 / 1999
  • Based on known physiological and psychophysical results, a neural network model for visual selection, called FeatureGate, is proposed. The model consists of a hierarchy of spatial maps, and the flow of information from each level of the hierarchy to the next is controlled by attentional gates. The gates are jointly controlled by a bottom-up system favoring locations with unique features and a top-down mechanism favoring locations with features designated as target features. The present study focuses on the top-down mechanism of the FeatureGate model, which produces results similar to those of Moran and Desimone (1985) that many current models have failed to explain. The FeatureGate model allows a consistent interpretation of many different experimental results in visual attention, including parallel feature searches and serial conjunction searches, attentional gradients triggered by cuing, feature-driven spatial selection, split attention, inhibition of distractor locations, and flanking inhibition. This framework can be extended to produce a model of shape recognition using upper-level units that respond to configurations of features.
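The joint gating idea summarized above can be illustrated with a toy sketch: each location's gate opens in proportion to a bottom-up term (how unique its feature is) plus a top-down term (how well it matches the designated target feature). The combination rule and parameters here are illustrative assumptions, not the published FeatureGate equations.

```python
import numpy as np

# Toy sketch of jointly controlled attentional gates: bottom-up
# uniqueness plus top-down target match. Illustrative only.

def gate_strengths(features, target, top_down_weight=1.0):
    """features: 1-D array of feature values, one per location.
    target: the feature value designated as the target."""
    features = np.asarray(features, dtype=float)
    # Bottom-up: locations whose feature deviates from the average of
    # the display (i.e., unique features) open their gates wider.
    bottom_up = np.abs(features - features.mean())
    # Top-down: locations whose feature matches the target are favored.
    top_down = 1.0 / (1.0 + np.abs(features - target))
    return bottom_up + top_down_weight * top_down

# Seven identical distractors (0.0) and one unique item (1.0),
# with the unique item's feature designated as the target.
display = [0.0] * 7 + [1.0]
gates = gate_strengths(display, target=1.0)
print(int(np.argmax(gates)))  # 7 -- the unique target wins the competition
```

With `top_down_weight` set to zero the same code reduces to a purely bottom-up pop-out, which is one way to see how the two systems share control of the gates.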


Detecting Salient Regions based on Bottom-up Human Visual Attention Characteristic (인간의 상향식 시각적 주의 특성에 바탕을 둔 현저한 영역 탐지)

  • 최경주;이일병
    • Journal of KIISE: Software and Applications / v.31 no.2 / pp.189-202 / 2004
  • In this paper, we propose a new salient region detection method for images. The algorithm is based on the characteristics of human bottom-up visual attention. Several features known to influence human visual attention, such as color and intensity, are extracted from each region of an image. These features are then converted to importance values for each region using a local competition function and combined to produce a saliency map, which represents the saliency at every location in the image by a scalar quantity and guides the selection of attended locations based on the spatial distribution of salient regions in relation to their perceptual importance. The results indicate that the calculated saliency maps correlate well with human perception of visually important regions.
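The pipeline in this abstract — extract feature maps, let each compete, combine into a scalar saliency map — can be sketched minimally. The feature choice and the competition function below (a global normalization that promotes maps with one strong peak) are simplified assumptions standing in for the paper's actual method.

```python
import numpy as np

# Minimal bottom-up saliency sketch: feature maps -> competition ->
# combined scalar saliency map. Simplified, illustrative only.

def normalize(feature_map):
    """Crude stand-in for local competition: rescale to [0, 1] and
    weight the map by (max - mean), which favors maps with a single
    conspicuous peak over maps with many similar peaks."""
    m = feature_map - feature_map.min()
    if m.max() > 0:
        m = m / m.max()
    return m * (m.max() - m.mean())

def saliency_map(image_rgb):
    """image_rgb: H x W x 3 float array with values in [0, 1]."""
    r, g, b = image_rgb[..., 0], image_rgb[..., 1], image_rgb[..., 2]
    intensity = (r + g + b) / 3.0
    red_green = r - g                  # crude color-opponency features
    blue_yellow = b - (r + g) / 2.0
    maps = [intensity, red_green, blue_yellow]
    return sum(normalize(m) for m in maps)

# A dark image with one bright red patch: the patch should be most salient.
img = np.zeros((8, 8, 3))
img[2:4, 5:7, 0] = 1.0  # red patch
sal = saliency_map(img)
y, x = np.unravel_index(np.argmax(sal), sal.shape)
print(y, x)  # falls inside the red patch
```

The maximum of the combined map then gives the first attended location; inhibiting it and repeating yields a scanpath over successively less salient regions.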

Effects of Object- and Space-Based Attention on Working Memory (대상- 및 공간-기반 주의가 작업기억에 미치는 영향)

  • Min, Yoon-Ki;Kim, Bo-Seong;Chung, Chong-Wook
    • Korean Journal of Cognitive Science / v.19 no.2 / pp.125-142 / 2008
  • This study investigated the effects of space- and object-based attention on spatial and visual working memory by measuring working memory recognition on a spatial Stroop task involving two modalities of attentional resources. The similarity of stimulus arrangement between the working memory task and the spatial Stroop task was manipulated to examine the effect of space-based attention on spatial rehearsal during the working memory task, while the Stroop condition was manipulated to examine the effect of object-based attention on object rehearsal. The results showed that when the stimulus arrangement was highly similar between the spatial working memory task and the spatial Stroop task, recognition accuracy for spatial working memory was high, but it did not differ significantly across Stroop conditions. In contrast, recognition accuracy for visual working memory was lower in the incongruent Stroop condition than in the congruent condition, but it did not differ significantly across similarity conditions (25% vs. 75%). These results indicate that selective attention affects working memory only when the resource modality of working memory matches that of selective attention.


2D-to-3D Conversion System using Depth Map Enhancement

  • Chen, Ju-Chin;Huang, Meng-yuan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.3 / pp.1159-1181 / 2016
  • This study introduces an image-based 2D-to-3D conversion system that provides significant stereoscopic visual effects for human viewers. Linear and atmospheric perspective cues, which complement each other, are employed to estimate depth information. Rather than retrieving a precise depth value for each pixel from the depth cues, a direction angle of the image is estimated, and the depth gradient corresponding to that angle is integrated with superpixels to obtain the depth map. However, the stereoscopic effects of views synthesized from this depth map alone are limited and can dissatisfy viewers. To obtain more impressive visual effects, the viewer's main focus is considered: salient object detection is performed to locate the region of visual attention, and the depth map is refined by locally modifying the depth values within that region. The refinement process not only maintains global depth consistency by correcting non-uniform depth values but also enhances the stereoscopic effect. Experimental results show that in subjective evaluation, the degree of satisfaction with the proposed method is approximately 7% greater than with both existing commercial conversion software and a state-of-the-art approach.
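The two-stage idea above — a global depth gradient from an estimated direction angle, then local refinement inside the salient region — can be sketched as a toy example. Superpixel integration is omitted, and the refinement rule (pulling the salient region uniformly closer) is an illustrative assumption, not the paper's exact procedure.

```python
import numpy as np

# Toy sketch: global depth gradient from a direction angle, then local
# refinement of depth inside a detected salient region. Illustrative only.

def gradient_depth(h, w, angle_deg):
    """Depth increases along the direction angle (0 deg = left-to-right),
    normalized so 0 is nearest and 1 is farthest."""
    ys, xs = np.mgrid[0:h, 0:w]
    theta = np.deg2rad(angle_deg)
    proj = xs * np.cos(theta) + ys * np.sin(theta)
    return (proj - proj.min()) / (proj.max() - proj.min())

def refine_with_saliency(depth, salient_mask, pop=0.3):
    """Pull the salient region uniformly closer to the viewer, which
    also flattens non-uniform depth values inside the object."""
    refined = depth.copy()
    region_mean = refined[salient_mask].mean()
    refined[salient_mask] = np.clip(region_mean - pop, 0.0, 1.0)
    return refined

depth = gradient_depth(6, 6, angle_deg=90)   # depth increases top-to-bottom
mask = np.zeros((6, 6), dtype=bool)
mask[2:4, 2:4] = True                        # hypothetical salient object
refined = refine_with_saliency(depth, mask)
print(bool(refined[3, 3] < depth[3, 3]))  # True: the object is now nearer
```

Synthesizing a stereo pair from `refined` then shifts the salient object by a larger disparity than its background, which is the "pop-out" effect the abstract describes.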

A Pilot MEG Study During A Visual Search Task (시각추적과제의 뇌자도 : 예비실험)

  • Kim, Sung Hun;Lee, Sang Kun;Kim, Kwang-Ki
    • Annals of Clinical Neurophysiology / v.8 no.1 / pp.44-47 / 2006
  • Background: The present study used magnetoencephalography (MEG) to investigate the neural substrates of a modified version of Treisman's visual search task. Methods: Two volunteers who gave informed consent participated in the MEG experiment: a 27-year-old male and a 24-year-old female, both right-handed. Experiments were performed using a 306-channel biomagnetometer (Neuromag Ltd.). There were three task conditions: searching for an open circle among seven closed circles (open condition), searching for a closed circle among seven uni-directionally open circles (closed condition), and searching for a closed circle among seven eight-directionally open circles (random closed condition). Each run contained one task condition, so one session of the experiment consisted of three runs, with 128 trials performed across the three runs. Each participant underwent one session and pressed a button upon finding the target. Magnetic source localization images were generated using software that allowed interactive identification of a common set of fiduciary points in the MRI and MEG coordinate frames. Results: In each participant we found activations of the anterior cingulate, primary visual and association cortices, posterior parietal cortex, and brain areas in the vicinity of the thalamus. Conclusions: We found activations corresponding to the anterior and posterior visual attention systems.


Measuring Visual Attention Processing of Virtual Environment Using Eye-Fixation Information

  • Kim, Jong Ha;Kim, Ju Yeon
    • Architectural Research / v.22 no.4 / pp.155-162 / 2020
  • Numerous scholars have explored the modeling, control, and optimization of energy systems in buildings, offering new insights about technologies and environments that can advance industry innovation. Eye trackers deliver objective eye-gaze data about visual and attentional processes. Owing to its flexibility, accuracy, and efficiency, eye tracking makes it possible to measure rapid eye movement in three-dimensional space (e.g., virtual reality, augmented reality). Because eye movement is an effective modality for digital interaction with a virtual environment, tracking how users scan a visual field and fixate on various digital objects can help designers optimize building environments and materials. Although several scholars have conducted virtual reality studies in three-dimensional space, there is no agreed-upon, consistent way to analyze eye-tracking data. We conducted eye-tracking experiments using objects in three-dimensional space to find an objective way to process quantitative visual data. By applying a 12 × 12 grid framework for eye-tracking analysis, we investigated how people gazed at objects in a virtual space while wearing a head-mounted display. The findings provide an empirical base for a standardized protocol for analyzing eye-tracking data in virtual environments.
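The 12 × 12 grid framework mentioned above amounts to binning gaze coordinates into grid cells and accumulating fixation counts per cell. The gaze samples below are hypothetical, and normalized screen coordinates are assumed.

```python
import numpy as np

# Sketch of a 12 x 12 grid analysis of gaze data: bin normalized gaze
# coordinates into grid cells and count fixations per cell.

GRID = 12

def grid_heatmap(gaze_points, grid=GRID):
    """gaze_points: iterable of (x, y) with x, y normalized to [0, 1]."""
    counts = np.zeros((grid, grid), dtype=int)
    for x, y in gaze_points:
        col = min(int(x * grid), grid - 1)  # clamp x == 1.0 to last cell
        row = min(int(y * grid), grid - 1)
        counts[row, col] += 1
    return counts

# Hypothetical gaze samples from one head-mounted-display session.
samples = [(0.51, 0.49), (0.52, 0.50), (0.05, 0.95), (0.51, 0.50)]
heat = grid_heatmap(samples)
row, col = np.unravel_index(np.argmax(heat), heat.shape)
print(row, col, heat[row, col])  # the most-fixated cell and its count
```

Aggregating such heatmaps over participants gives a grid-level description of attention that is comparable across scenes, which is the kind of standardization the abstract argues for.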

Analysis of Visual Art Elements of Game Characters Illustrated by the Case of Glory of Kings

  • He, Yangyang;Choi, Chulyoung
    • International Journal of Internet, Broadcasting and Communication / v.12 no.3 / pp.213-219 / 2020
  • Visual art elements most intuitively convey the information expressed by a game character and play an important role in shaping a successful one. We compare and analyze the character designs in the skins of the "Five Tiger-like Generals" series in Glory of Kings in terms of color, line, graphics, and other visual elements. Different visual elements create different visual impacts, bringing people different psychological feelings and presenting different emotional tones. The reasonable use of visual elements gives players a refined visual experience and a comfortable psychological feeling, adding further to the "immersion" of the game. It not only strongly attracts the attention and affection of players but also spreads the traditional culture of the nation and the country. Therefore, whether the visual art elements of a game character are used appropriately is an important part of the success of a game. The analysis of visual art elements of game characters thus has substantial learning and research value.

An Approach for Localization Around Indoor Corridors Based on Visual Attention Model (시각주의 모델을 적용한 실내 복도에서의 위치인식 기법)

  • Yoon, Kook-Yeol;Choi, Sun-Wook;Lee, Chong-Ho
    • Journal of Institute of Control, Robotics and Systems / v.17 no.2 / pp.93-101 / 2011
  • For a mobile robot, recognizing its current location is essential for autonomous navigation. In particular, loop-closing detection, in which the robot recognizes a location it has visited before, is a key problem in localization. A considerable amount of research has been conducted on appearance-based loop-closing detection and localization, because vision sensors have an advantage in cost and offer various approaches to the problem. In scenes consisting of repeated structures, such as corridors, perceptual aliasing, in which two different locations are recognized as the same, occurs frequently. In this paper, we propose an improved method for recognizing locations in scenes with similar structures. We extract salient regions from images using a visual attention model and calculate weights using distinctive features within the salient regions. This makes it possible to emphasize the unique features of a scene and thus distinguish similar-looking locations. In corridor recognition experiments, the proposed method showed improved performance: 78.2% accuracy for single-floor corridor recognition and 71.5% for multi-floor corridors.

Using similarity based image caption to aid visual question answering (유사도 기반 이미지 캡션을 이용한 시각질의응답 연구)

  • Kang, Joonseo;Lim, Changwon
    • The Korean Journal of Applied Statistics / v.34 no.2 / pp.191-204 / 2021
  • Visual question answering (VQA) and image captioning are tasks that require understanding both the visual features of images and the linguistic features of text. Therefore, co-attention, which can connect image and text, may be the key to both tasks. In this paper, we propose a model that achieves high VQA performance using image captions generated by a standard transformer model pretrained on the MSCOCO dataset. Captions unrelated to the question can interfere with answering, so only captions similar to the question were selected for use. In addition, since stopwords in the captions cannot help answering and may interfere with it, the experiments were conducted after removing stopwords. Experiments were conducted on the VQA-v2 data to compare the proposed model with the deep modular co-attention network (MCAN) model, which has shown good performance using co-attention between images and text. As a result, the proposed model outperformed the MCAN model.
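The caption-filtering step described above can be sketched as follows: remove stopwords, rank candidate captions by similarity to the question, and keep only the top-k. The bag-of-words cosine similarity and the stopword list here are simplifying assumptions standing in for whatever similarity measure the paper actually uses.

```python
import math
from collections import Counter

# Sketch of similarity-based caption selection before VQA: stopword
# removal + bag-of-words cosine similarity. Illustrative only.

STOPWORDS = {"the", "a", "an", "is", "are", "of", "in", "on", "what"}

def tokens(text):
    return [w for w in text.lower().replace("?", "").split()
            if w not in STOPWORDS]

def cosine(a, b):
    """Cosine similarity between two token lists."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_captions(question, captions, k=2):
    """Keep the k captions most similar to the question."""
    q = tokens(question)
    ranked = sorted(captions, key=lambda c: cosine(q, tokens(c)),
                    reverse=True)
    return ranked[:k]

question = "What color is the dog?"
captions = [
    "a brown dog running on grass",
    "a city street at night",
    "a dog playing with a ball",
]
print(select_captions(question, captions))  # the two dog captions are kept
```

Only the selected captions are then fed, alongside the image features, into the co-attention model, so captions unrelated to the question never reach the answering stage.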