• Title/Summary/Keyword: Visual context

365 search results

Utilization of Visual Context for Robust Object Recognition in Intelligent Mobile Robots (지능형 이동 로봇에서 강인 물체 인식을 위한 영상 문맥 정보 활용 기법)

  • Kim, Sung-Ho;Kim, Jun-Sik;Kweon, In-So
    • The Journal of Korea Robotics Society / v.1 no.1 / pp.36-45 / 2006
  • In this paper, we introduce visual contexts, in terms of their types and utilization methods, for robust object recognition by intelligent mobile robots. Visual object recognition is one of the core technologies for intelligent robots. Robust techniques are strongly required because there are many sources of visual variation, such as geometric and photometric changes and noise. To meet these requirements, we define spatial context, hierarchical context, and temporal context. Depending on the object recognition domain, the appropriate visual contexts can be selected. We also propose a unified framework that can utilize all of these contexts, and we validate it in a real working environment. Finally, we discuss future research directions for object recognition technologies for intelligent robots.
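The three context types the abstract names can be illustrated with a toy score-fusion rule. Every function name, weight, and the additive combination below are hypothetical assumptions for illustration, not taken from the paper:

```python
# Toy illustration of fusing appearance evidence with the three context
# types named in the abstract. The additive rule and all weights are
# hypothetical; the paper's actual unified framework differs.

def recognize(appearance_score, spatial_ctx, hierarchical_ctx, temporal_ctx,
              weights=(0.55, 0.15, 0.15, 0.15)):
    """Combine per-object appearance evidence with context cues.

    spatial_ctx      -- compatibility with co-occurring objects in the scene
    hierarchical_ctx -- compatibility with the scene/place category
    temporal_ctx     -- consistency with recognitions in previous frames
    """
    wa, ws, wh, wt = weights
    return (wa * appearance_score + ws * spatial_ctx
            + wh * hierarchical_ctx + wt * temporal_ctx)

# A weak appearance match can still win when every context cue supports it.
ambiguous = recognize(0.4, spatial_ctx=0.9, hierarchical_ctx=0.9, temporal_ctx=0.9)
clean = recognize(0.4, spatial_ctx=0.1, hierarchical_ctx=0.1, temporal_ctx=0.1)
```

This is the sense in which context makes recognition robust: the same appearance evidence yields different decisions depending on the surrounding scene and recent history.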

Visual Search Model based on Saliency and Scene-Context in Real-World Images (실제 이미지에서 현저성과 맥락 정보의 영향을 고려한 시각 탐색 모델)

  • Choi, Yoonhyung;Oh, Hyungseok;Myung, Rohae
    • Journal of Korean Institute of Industrial Engineers / v.41 no.4 / pp.389-395 / 2015
  • According to much research in cognitive science, the impact of scene-context on human visual search in real-world images can be as important as that of saliency. This study therefore proposes a method for Adaptive Control of Thought-Rational (ACT-R) modeling of visual search in real-world images based on saliency and scene-context. The modeling method uses the utility system of ACT-R to describe the influences of saliency and scene-context in real-world images. The model was then validated by comparing its data with eye-tracking data from experiments on a simple task in which subjects searched for targets in indoor bedroom images. The results show that the model data fit the eye-tracking data quite well. In conclusion, the modeling method proposed in this study can provide an accurate model of human performance in visual search tasks in real-world images.
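The utility-based choice the abstract describes can be sketched in a few lines. The additive form with Gaussian noise follows the general ACT-R utility idea; the weights, noise level, and region names below are hypothetical, not the paper's parameters:

```python
import random

# Toy sketch of an ACT-R-style utility for picking the next fixation,
# trading off bottom-up saliency against scene-context plausibility
# (e.g., "pillows appear on beds"). Weights and noise are illustrative.

def fixation_utility(saliency, context_prob, w_sal=0.5, w_ctx=0.5, noise_sd=0.1):
    return w_sal * saliency + w_ctx * context_prob + random.gauss(0.0, noise_sd)

def next_fixation(candidates, **kwargs):
    """candidates: dict of region -> (saliency, scene-context probability)."""
    return max(candidates, key=lambda r: fixation_utility(*candidates[r], **kwargs))

regions = {"bed": (0.4, 0.9), "wall_poster": (0.8, 0.1), "floor": (0.2, 0.2)}
choice = next_fixation(regions)  # "bed" and "wall_poster" trade off via noise
```

The noise term is what lets such a model reproduce the trial-to-trial variability seen in eye-tracking data rather than always fixating the single highest-scoring region.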

Types of metonymy applied to emoticons and their salience attributes - Focusing on the comparison of high-context and low-context emoticons - (이모티콘에 적용된 환유 유형과 현저성 속성 - 고 맥락과 저 맥락 이모티콘의 비교를 중심으로 -)

  • Kim, Chan Hee;You, Si Cheon
    • Smart Media Journal / v.10 no.4 / pp.91-101 / 2021
  • Visual communication based on socio-cultural context, such as emoticons on social media, is increasing. It is therefore necessary to study the visual expression of metonymy as a means to correctly understand communication in the age of visual culture. The purpose of this study is to explore how metonymy is visualized within a cultural context. Specifically, salience, a typical phenomenon underlying metonymic expression, and the expression principles of the various metonymies reproduced through it are identified by pairing them with the cultural context. Based on context theory, a representative discourse in the social sciences, emoticons from high-context and low-context cultures were selected and compared as case study subjects. The major findings are as follows. First, a visual application model of metonymy was proposed for the process through which metonymy is reproduced as a visual result. Second, the types of metonymy and their salience attributes applied to emoticon expression were identified in detail. Third, based on context theory, how the characteristics of high-context visual metonymy differ from those of low-context visual metonymy was presented. The results of this study can be used as a criterion for judging the local acceptability and suitability of design outcomes in design development processes that require localization strategies.

Multimodal Context Embedding for Scene Graph Generation

  • Jung, Gayoung;Kim, Incheol
    • Journal of Information Processing Systems / v.16 no.6 / pp.1250-1260 / 2020
  • This study proposes a novel deep neural network model that can accurately detect objects and their relationships in an image and represent them as a scene graph. The proposed model utilizes several multimodal features, including linguistic features and visual context features, to accurately detect objects and relationships. In addition, in the proposed model, context features are embedded using graph neural networks to depict the dependencies between two related objects in the context feature vector. This study demonstrates the effectiveness of the proposed model through comparative experiments using the Visual Genome benchmark dataset.
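One round of the graph message passing that such context embedding relies on can be sketched in plain Python. The weighted-average update, the weights, and the toy features are illustrative assumptions, not the model's learned parameters:

```python
# One message-passing round: each object's feature vector is mixed with the
# mean of its graph neighbors' features, so a downstream relationship
# classifier sees each object together with its context.

def message_pass(node_feats, edges, w_self=0.6, w_nbr=0.4):
    """node_feats: list of feature vectors; edges: undirected (i, j) pairs."""
    n, d = len(node_feats), len(node_feats[0])
    agg = [[0.0] * d for _ in range(n)]
    deg = [0] * n
    for i, j in edges:
        for k in range(d):
            agg[i][k] += node_feats[j][k]
            agg[j][k] += node_feats[i][k]
        deg[i] += 1
        deg[j] += 1
    return [[w_self * node_feats[i][k] + w_nbr * agg[i][k] / max(deg[i], 1)
             for k in range(d)] for i in range(n)]

# Toy scene: "man" (node 0) is linked to "horse" (1) and "hat" (2).
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
ctx = message_pass(feats, edges=[(0, 1), (0, 2)])
```

After the update, each context vector encodes the dependency between related objects, which is exactly what the relationship predictor consumes.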

Multi-Object Goal Visual Navigation Based on Multimodal Context Fusion (멀티모달 맥락정보 융합에 기초한 다중 물체 목표 시각적 탐색 이동)

  • Jeong Hyun Choi;In Cheol Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.9 / pp.407-418 / 2023
  • Multi-Object Goal Visual Navigation (MultiOn) is a visual navigation task in which an agent must visit multiple object goals in an unknown indoor environment in a given order. Existing models for the MultiOn task cannot exploit an integrated view of multimodal context because they use only a unimodal context map. To overcome this limitation, this paper proposes a novel deep neural network-based agent model for the MultiOn task. The proposed model, MCFMO, uses a multimodal context map containing visual appearance features, semantic features of environmental objects, and goal object features. The model effectively fuses these three heterogeneous feature types into a global multimodal context map using a point-wise convolutional neural network module. It also adopts an auxiliary task learning module that predicts the observation status, goal direction, and goal distance, which guides the agent toward learning the navigation policy efficiently. Through various quantitative and qualitative experiments using the Habitat-Matterport3D simulation environment and scene dataset, we demonstrate the superiority of the proposed model.
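A point-wise (1x1) convolution fuses per-cell features without mixing neighboring cells: at every map cell it applies the same linear layer to the concatenation of the per-cell feature vectors. The grid size, channel counts, and fixed weights below are illustrative assumptions, not the paper's:

```python
# Point-wise (1x1) convolution over three context maps: each input is an
# H x W grid whose cells hold channel lists; the same C_out x C_in weight
# matrix is applied at every cell to the concatenated channels.

def pointwise_fuse(appearance, semantics, goal, weights):
    """Fuse three H x W feature grids into one with a shared linear layer."""
    fused = []
    for row_a, row_s, row_g in zip(appearance, semantics, goal):
        fused_row = []
        for a, s, g in zip(row_a, row_s, row_g):
            cell = a + s + g  # channel-wise concatenation at one cell
            fused_row.append([sum(w * x for w, x in zip(w_row, cell))
                              for w_row in weights])
        fused.append(fused_row)
    return fused
```

Because the kernel is 1x1, the fusion is purely per-location; spatial reasoning over the resulting global context map is left to later layers of the policy network.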

Online Visual Merchandising: an Impression Formation Perspective

  • Kwon, Wi-Suk
    • International Journal of Costume and Fashion / v.9 no.2 / pp.19-34 / 2009
  • The purpose of this paper was to provide an overview of the existing literature on online visual merchandising and to propose an alternative theoretical framework in which online visual merchandising research can be conducted. Two streams of research including the e-tail service quality literature and the store environment literature from environmental psychology perspectives were reviewed in the context of online visual merchandising. An impression formation paradigm from social psychology was adopted to establish the alternative framework to supplement the existing online visual merchandising research and generate deeper insights into the online visual merchandising phenomenon.

A Comparison Study on the Error Criteria in Nonparametric Regression Estimators

  • Chung, Sung-S.
    • Journal of the Korean Data and Information Science Society / v.11 no.2 / pp.335-345 / 2000
  • Most error criteria use the classical norms on function spaces. Since these norms are all based on the vertical distances between curves, they can be quite inappropriate from a visual notion of distance. The visual errors of Marron and Tsybakov (1995) correspond more closely to "what the eye sees". A simulation is performed to compare the performance of regression smoothers in terms of MISE and the visual error. It shows that the visual error can be used as a candidate error criterion in kernel regression estimation.
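The vertical-distance criterion the abstract contrasts can be made concrete with a Nadaraya-Watson smoother and its integrated squared error. This is a minimal sketch of the classical criterion only; the visual-error metric of Marron and Tsybakov is not reproduced here, and the test function, bandwidths, and sample sizes are illustrative:

```python
import math, random

def nw_estimate(x, xs, ys, h):
    """Nadaraya-Watson kernel regression with a Gaussian kernel."""
    w = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs]
    s = sum(w)
    return sum(wi * yi for wi, yi in zip(w, ys)) / s if s else 0.0

def ise(f, xs, ys, h, grid):
    """Integrated squared (vertical) error on a uniform evaluation grid."""
    step = grid[1] - grid[0]
    return sum((nw_estimate(g, xs, ys, h) - f(g)) ** 2 for g in grid) * step

random.seed(0)
f = math.sin
xs = [random.uniform(0, math.pi) for _ in range(200)]
ys = [f(x) + random.gauss(0, 0.1) for x in xs]
grid = [i * math.pi / 100 for i in range(101)]
```

Averaging such ISE values over repeated samples approximates the MISE; the paper's point is that a curve can score well on this vertical criterion yet still look wrong to the eye, which is what the visual error is designed to capture.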

Conformance Test for MPEG-4 Shape Decoders (MPEG-4 Shape Decoder의 적합성 검사)

  • 황혜전;박인수;박수현;이병욱
    • The Journal of Korean Institute of Communications and Information Sciences / v.25 no.6B / pp.1060-1067 / 2000
  • MPEG-4 visual coding is an object-based system. The current video coding standards H.261, MPEG-1, and MPEG-2 encode frame by frame. MPEG-4, on the other hand, separately encodes several objects in the same frame, such as video objects and audio objects. Each transmitted object is decoded and composed into one frame. Shape coding is the process of coding the shapes of visual objects in a frame. In this paper we present a conformance test method for MPEG-4 shape decoders. The paper reviews the basic shape decoding standard and proposes conformance test methods for the BAB-type decoder and for the CAE decoder for intra and inter VOPs. Our test generates all possible cases of shape motion vector difference and context.
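The "context" that the test must exercise is the integer a context-based arithmetic (CAE) coder builds from already-decoded neighbor pixels to select its probability model. The 10-pixel intra template below follows the general MPEG-4 scheme (two rows above plus two pixels to the left, giving 2^10 contexts), but the exact pixel ordering shown is illustrative, not the normative layout from the standard:

```python
# Sketch of CAE context formation for a binary alpha block (BAB): the
# neighbor pixels are packed into an index that selects the probability
# model of the arithmetic coder. Pixel ordering is illustrative only.

INTRA_TEMPLATE = [(-2, -1), (-2, 0), (-2, 1),
                  (-1, -2), (-1, -1), (-1, 0), (-1, 1), (-1, 2),
                  (0, -2), (0, -1)]  # (row, col) offsets; 2**10 contexts

def context_index(bab, r, c):
    """bab: 2-D list of 0/1 shape pixels; out-of-bounds pixels read as 0."""
    idx = 0
    for dr, dc in INTRA_TEMPLATE:
        rr, cc = r + dr, c + dc
        bit = bab[rr][cc] if 0 <= rr < len(bab) and 0 <= cc < len(bab[0]) else 0
        idx = (idx << 1) | bit
    return idx
```

A conformance test that claims to cover "all possible cases of context" must therefore drive the decoder through every reachable value of this index, which is why the BABs in the test bitstreams are constructed systematically rather than sampled.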

Afterlife with Image: Life and Death in Portraiture (이미지 속에서 살아남다? 초상화에서의 삶과 죽음)

  • Shin, Seung-Chol
    • The Journal of Art Theory & Practice / no.16 / pp.139-174 / 2013
  • Pliny the Elder wrote that multiple cultures agree that painting began as a shadow trace: a daughter of Butades, the potter of Corinth, traced an outline around a man's shadow, and this was the very beginning of painting. In this anecdote the profile, i.e. the portrait, substitutes for the body of the absent lover: it makes the absent body present and takes his place. In this context Hans Belting attributed anthropological value to this visual practice. Human beings made images to cope actively with the shock of death and the disappearance of the body. With the aid of the representation of bodily presence, the image struggles to resist death. This paper studies the critical meaning of representation in the context of bodily survival through the image. Representation is a paradoxical trick of consciousness, an ability to see something as 'there' and 'not there' at the same time, so the connection between the image and the body becomes suspect. Although this connection was tight in ancient shadow painting and medieval effigies, modern visual practice forsakes it and exposes the trick of representation, insisting that the image is not real and even expelling medieval visual practice from the boundary of fine art. The genealogy of portraiture is thus formed by two different visual practices. Belief and disbelief in the image are observed in the processes of representation and anti-representation, and this ambivalence transforms the ontological meaning of the portrait in visual representation.

Visual Mapping from Spatiotemporal Table Information to 3-Dimensional Map (시-공간 도표정보의 3차원 지도 기반 가시화기법)

  • Lee, Seok-Jun;Jung, Soon-Ki
    • Journal of the HCI Society of Korea / v.1 no.2 / pp.51-58 / 2006
  • Information visualization generally consists of three steps: transformation from raw data to a data model, visual mapping from the data model to a visual structure, and transformation from the visual structure to an information model. In this paper, we propose a visual mapping method from spatiotemporal table information, related to events in a large-scale building, to a 3D map metaphor. The process likewise has three steps. First, after analyzing the table attributes, we carefully define a context that fully represents the table information. Second, we choose meaningful attribute sets from the context. Third, each meaningful attribute set is mapped to one well-defined visual structure. Our method has several advantages. First, users can intuitively acquire non-spatial information through the 3D map, which is a powerful spatial metaphor. Second, the system demonstrates various visual mapping methods applicable to other tabular data models, especially in GIS. After describing the whole concept of our visual mapping, we show implementation results for several requests.
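The second and third steps can be sketched as a lookup from attribute sets to visual structures. The table columns, event data, and visual-structure names below are hypothetical examples, not the paper's actual schema:

```python
# Toy sketch: each meaningful attribute set from the event table is mapped
# to one visual structure on the 3-D building map. All attribute names and
# structures here are hypothetical examples.

VISUAL_MAPPING = {
    ("room", "floor"): "highlight_location",     # where: mark on the 3-D map
    ("start_time", "end_time"): "timeline_bar",  # when: draw as a time bar
    ("attendees",): "bar_height",                # how many: encode as height
}

def map_event(event):
    """event: dict of table attributes -> list of (structure, values) pairs."""
    directives = []
    for attrs, structure in VISUAL_MAPPING.items():
        if all(a in event for a in attrs):
            directives.append((structure, tuple(event[a] for a in attrs)))
    return directives

seminar = {"room": "201", "floor": 2, "start_time": "10:00",
           "end_time": "11:30", "attendees": 35}
```

Keeping the mapping as data rather than code is what makes the approach transferable to other tabular models: swapping the table schema only means editing the mapping dictionary.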
