• Title/Summary/Keyword: visual model


Visual Mapping from Spatiotemporal Table Information to 3-Dimensional Map (시-공간 도표정보의 3차원 지도 기반 가시화기법)

  • Lee, Seok-Jun;Jung, Soon-Ki
    • Journal of the HCI Society of Korea
    • /
    • v.1 no.2
    • /
    • pp.51-58
    • /
    • 2006
  • Information visualization generally consists of three steps: transforming raw data into a data model, visually mapping the data model to a visual structure, and transforming the visual structure into an information model. In this paper, we propose a visual mapping method from spatiotemporal table information, related to events in a large-scale building, to a 3D map metaphor. The process likewise has three steps. First, after analyzing the table attributes, we carefully define a context that fully represents the table information. Second, we choose meaningful attribute sets from the context. Third, each meaningful attribute set is mapped to one well-defined visual structure. Our method has several advantages. First, users can intuitively grasp non-spatial information through the 3D map, which is a powerful spatial metaphor. Second, the system demonstrates various visual mapping methods applicable to other tabular data models, especially in GIS. After describing the overall concept of our visual mapping, we show the implementation results for several queries.

  • PDF
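The three-step mapping in the abstract can be sketched in miniature as follows. The event attributes (room, floor, hour, headcount) and the glyph encoding are hypothetical illustrations chosen here, not the paper's actual schema:

```python
# Toy sketch of the abstract's pipeline: table records -> meaningful
# attribute sets -> visual structures placed on a 3D map.
events = [
    {"room": "101", "floor": 1, "hour": 9,  "headcount": 12},
    {"room": "305", "floor": 3, "hour": 14, "headcount": 30},
]

def to_visual_structure(event):
    """Map one attribute set to a 3D-map glyph: spatial attributes give
    the anchor position, non-spatial ones drive height and colour."""
    return {
        "anchor": (event["room"], event["floor"]),  # placed on the 3D map
        "bar_height": event["headcount"],           # quantitative attribute
        "hue": event["hour"] / 24.0,                # temporal attribute
    }

glyphs = [to_visual_structure(e) for e in events]
```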

Visual Attention Model Based on Particle Filter

  • Liu, Long;Wei, Wei;Li, Xianli;Pan, Yafeng;Song, Houbing
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.8
    • /
    • pp.3791-3805
    • /
    • 2016
  • The visual attention mechanism includes two attention models, bottom-up (B-U) and top-down (T-D), whose physiology has not yet been accurately described. In this paper, the visual attention mechanism is regarded as a Bayesian fusion process, and a visual attention model based on a particle filter is proposed. Under certain assumed conditions, a calculation formula for the Bayesian posterior probability is deduced. The visual attention fusion process based on the particle filter is realized through importance sampling, particle weight updating, and resampling, and visual attention is finally determined by the particle distribution. Test results on multiple groups of images show that this model achieves better subjective and objective results than other models.
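The importance-sampling, weight-update, and resampling loop described in the abstract can be sketched roughly as below. Here a saliency map stands in for the bottom-up likelihood, the top-down prior is assumed flat, and all parameters (particle count, diffusion noise, iteration count) are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def particle_filter_attention(saliency, n_particles=500, n_iters=5, rng=None):
    """Sketch of a particle-filter attention model: particles over pixel
    locations are weighted by bottom-up saliency (the likelihood), then
    resampled; the final particle density approximates attention."""
    rng = np.random.default_rng(0) if rng is None else rng
    h, w = saliency.shape
    # Initialize particles uniformly over the image (flat top-down prior).
    ys = rng.uniform(0, h, n_particles)
    xs = rng.uniform(0, w, n_particles)
    for _ in range(n_iters):
        # Importance weights from the saliency value at each particle.
        wts = saliency[ys.astype(int) % h, xs.astype(int) % w] + 1e-12
        wts /= wts.sum()
        # Resample particles in proportion to their weights.
        idx = rng.choice(n_particles, size=n_particles, p=wts)
        ys, xs = ys[idx], xs[idx]
        # Diffuse particles slightly (simple transition model).
        ys = np.clip(ys + rng.normal(0, 1.0, n_particles), 0, h - 1)
        xs = np.clip(xs + rng.normal(0, 1.0, n_particles), 0, w - 1)
    return ys, xs

# Attention should concentrate near the single salient blob.
sal = np.zeros((64, 64))
sal[40:48, 10:18] = 1.0
ys, xs = particle_filter_attention(sal)
```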

Visual Dynamics Model for 3D Text Visualization

  • Lim, Sooyeon
    • International Journal of Contents
    • /
    • v.14 no.4
    • /
    • pp.86-91
    • /
    • 2018
  • Text has evolved along with the history of art as a means of communicating human intentions and emotions. In addition, text visualization artworks have been combined with the social forms and contents of new media to produce social messages and related meanings. Recently, in text visualization artworks combined with digital media, the forms of communication with viewers are changing instantly and interactively, and viewers actively participate in creating artworks through direct engagement. Interactive text visualization, driven by the viewer's interaction, generates external dynamics from text shapes and internal dynamics from the embedded meanings of the text. The purpose of this study is to propose a visual dynamics model for expressing the dynamics of text and to implement a text visualization system based on the model. It uses the deconstruction of imaged text to create an interactive text visualization system that reacts to the gestures of the viewer in real time. Visual transformation synchronized with the intentions of the viewer prevents the text from remaining a mere interpretation of language symbols and extends the various meanings of the text. The text, visualized in various forms, shows visual dynamics whose meaning is interpreted according to the cultural background of the viewer.

Image Understanding for Visual Dialog

  • Cho, Yeongsu;Kim, Incheol
    • Journal of Information Processing Systems
    • /
    • v.15 no.5
    • /
    • pp.1171-1178
    • /
    • 2019
  • This study proposes a deep neural network model based on an encoder-decoder structure for visual dialog. Ongoing linguistic understanding of the dialog history and context is important for generating correct answers to the questions in a visual dialog, which consists of successive questions and answers about an image. Nevertheless, in many cases, a visual understanding that can identify the scenes or object attributes contained in images is beneficial. Hence, at the encoding stage the proposed model employs a separate person detector and an attribute recognizer, in addition to visual features extracted from the entire input image by a convolutional neural network, to emphasize attributes such as the gender, age, and dress concept of the people in the image, and it uses these attributes to generate answers. Experiments conducted on VisDial v0.9, a large benchmark dataset, confirmed that the proposed model performs well.

A Spatial Planning Model for Supporting Facilities Allocation and Visual Evaluation in Improvement of Rural Villages (농촌마을개발의 시설배치 및 시각적 평가 지원을 위한 공간계획 모형)

  • 김대식;정하우
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.44 no.6
    • /
    • pp.71-82
    • /
    • 2002
  • The purpose of this study is to develop a 3-dimensional spatial planning model (3DSPLAM) for facility allocation and visual evaluation in rural village improvement planning. For the model development, this study devised both planning layers and a modelling process for the spatial planning of rural villages. The 3DSPLAM automatically generates road networks and village facility locations from a built-area plan map and a digital elevation model produced with a geographic information system. The model also simulates a 3-dimensional villagescape for visual presentation of the planned results. The 3DSPLAM can be conveniently used for the automatic allocation of roads, easy partition of land lots, and reasonable siting of facilities. The planned results can also be presented as stereoscopic models with varied viewing positions and angles.

A Basic Study on the Curriculum Evaluation of Gifted Education in Visual Art (미술영재 교육과정 평가를 위한 이론적 기초)

  • Lee, Kyung-Jin;Kim, Sun-Ah
    • Journal of Gifted/Talented Education
    • /
    • v.22 no.3
    • /
    • pp.639-662
    • /
    • 2012
  • The purpose of this study is to develop an evaluation model for gifted curricula in visual art. For this purpose, first, it discusses what kinds of issues have been raised about gifted education in visual art. Second, it critically reviews existing evaluation models for gifted curricula and investigates which model is suitable as a basis for a curriculum evaluation model for the gifted in visual art. Third, it suggests an appropriate perspective and evaluation model for gifted curricula in visual art. Along with the change in the concept of creativity, recent studies on gifted education in visual art emphasize that gifted learners who have the potential should find their own way of creating art. They also emphasize contextual implementation, which recognizes the significance of interaction among field, domain, and individual. Based on this inquiry, existing evaluation models of gifted curricula have limited suitability as an evaluation model for gifted curricula in visual art. This study suggests that the curriculum evaluation of visual art gifted programs should be approached from a decision-making perspective. It also develops a conceptual framework and an evaluation model for gifted curricula in visual art based on the CIPP model, the representative model of the decision-making approach. It concludes with implications and a discussion of the role of evaluators.

Runway visual range prediction using Convolutional Neural Network with Weather information

  • Ku, SungKwan;Kim, Seungsu;Hong, Seokmin
    • International Journal of Advanced Culture Technology
    • /
    • v.6 no.4
    • /
    • pp.190-194
    • /
    • 2018
  • The runway visual range is one of the important factors that decide whether airplanes can take off and land at local airports. The runway visual range is affected by weather conditions such as fog and wind. Pilots and aviation workers check local weather forecasts, including the runway visual range, for safe flight. However, there are several local airfields where no forecasting function is provided because of practical problems such as the deterioration, breakdown, or high purchase cost of the measurement equipment. To address this, this study proposes a prediction model of the runway visual range for a local airport by applying a convolutional neural network, the kind of network most commonly used for image/video recognition, image classification, natural language processing, and so on. To construct the prediction model, we use previous time-series data of wind speed, humidity, temperature, and runway visibility. This paper shows the usefulness of the proposed prediction model by comparing its predictions with measured data.
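The first stage of such a model, a convolution sliding over the multichannel weather time series, can be sketched as below. The channel layout, kernel width, and kernel values are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def conv1d_features(series, kernels):
    """Valid-mode 1D convolution over a multichannel time series,
    (channels, time) -> (n_kernels, time - k + 1), followed by ReLU.
    A minimal stand-in for the first layer of a time-series CNN."""
    n_ch, T = series.shape
    n_k, _, k = kernels.shape  # (n_kernels, channels, kernel width)
    out = np.zeros((n_k, T - k + 1))
    for j in range(n_k):
        for t in range(T - k + 1):
            # Weighted sum over all channels in a width-k time window.
            out[j, t] = np.sum(kernels[j] * series[:, t:t + k])
    return np.maximum(out, 0.0)  # ReLU activation

# Toy input: 4 channels (wind speed, humidity, temperature, visibility),
# 24 past time steps, each channel a rising ramp for simplicity.
x = np.vstack([np.linspace(0, 1, 24)] * 4)
k = np.ones((2, 4, 3)) / 12.0  # two averaging kernels
feats = conv1d_features(x, k)
```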

Motion Detection Model Based on PCNN

  • Yoshida, Minoru;Tanaka, Masaru;Kurita, Takio
    • Proceedings of the IEEK Conference
    • /
    • 2002.07a
    • /
    • pp.273-276
    • /
    • 2002
  • The Pulse-Coupled Neural Network (PCNN), which can explain the synchronous bursting of neurons in the cat visual cortex, is a fundamental model for biomimetic vision. The PCNN is a kind of pulse-coded neural network model. To gain a deep understanding of visual information processing, it is important to simulate the visual system with such a biologically plausible neural network model. In this paper, we construct a motion detection model based on the PCNN together with receptive field models of neurons in the lateral geniculate nucleus and the primary visual cortex. We then show that this motion detection model can effectively detect movements and the direction of motion.

  • PDF
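One PCNN iteration, with its linking field, modulated internal activity, and jumping threshold, can be sketched as below. The constants (linking strength, decay rates, threshold jump) are illustrative choices, not values from the paper:

```python
import numpy as np

def pcnn_step(stim, L_prev, theta_prev, y_prev, kernel,
              beta=0.2, aL=0.7, aT=0.8, VT=20.0):
    """One iteration of a simplified PCNN over a 2D stimulus."""
    # Linking input: pulses from the 3x3 neighbourhood of each neuron.
    pad = np.pad(y_prev, 1)
    link = sum(kernel[i, j] * pad[i:i + stim.shape[0], j:j + stim.shape[1]]
               for i in range(3) for j in range(3))
    L = aL * L_prev + link                # leaky linking field
    U = stim * (1.0 + beta * L)           # modulated internal activity
    Y = (U > theta_prev).astype(float)    # pulse output
    theta = aT * theta_prev + VT * Y      # threshold jumps after firing
    return L, theta, Y

# One iteration on a toy frame: the bright square pulses first.
stim = np.zeros((8, 8))
stim[2:6, 2:6] = 1.0
kernel = np.array([[0.5, 1, 0.5], [1, 0, 1], [0.5, 1, 0.5]])
L1, th1, Y1 = pcnn_step(stim, np.zeros_like(stim), np.full_like(stim, 0.5),
                        np.zeros_like(stim), kernel)
```

In the paper's setting, motion would be detected by feeding successive frames (filtered through LGN/V1-like receptive fields) and comparing the resulting pulse maps; only a single PCNN iteration is shown here.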

Dongeui Visual-PERT/CPM for R&D Project Management (연구개발 프로젝트관리를 위한 시각화모델)

  • 황흥석
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 2000.10a
    • /
    • pp.268-271
    • /
    • 2000
  • These days, technical advances and complexity generate many difficulties in managing project resources, both time and cost, to accomplish a project in the most efficient manner. The project manager is frequently required to make judgements concerning schedule and resource adjustments. This research develops an analytical model for schedule-cost and risk analysis based on a visual PERT/CPM. We used a two-step approach: in step 1, a deterministic PERT/CPM model finds the critical path and estimates the project time schedule and the related resource plan; in step 2, we developed a heuristic model for crash and stretch-out analysis based on the time-cost trade-off associated with crashing and stretching out the project. A computer implementation of this model is provided, based on GUI-type object-oriented programming, which displays all inputs and outputs in visual graphical form. We also developed a GUI-type program, Dongeui Visual-PERT/CPM. The results of this research provide project managers with an efficient management tool.

  • PDF
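The deterministic step-1 computation can be sketched as a standard CPM forward/backward pass; the task graph below is a made-up example, not from the paper:

```python
def critical_path(tasks):
    """Deterministic CPM. tasks: {name: (duration, [predecessors])}
    -> (project length, list of critical tasks with zero slack)."""
    # Forward pass: earliest start/finish, in dependency order.
    ES, EF = {}, {}
    remaining = dict(tasks)
    while remaining:
        for name, (dur, preds) in list(remaining.items()):
            if all(p in EF for p in preds):
                ES[name] = max((EF[p] for p in preds), default=0.0)
                EF[name] = ES[name] + dur
                del remaining[name]
    T = max(EF.values())
    # Backward pass: latest finish/start; zero slack marks the critical path.
    LF = {n: T for n in tasks}
    LS = {}
    for name in sorted(tasks, key=lambda n: -EF[n]):
        dur, _ = tasks[name]
        succs = [s for s, (_, ps) in tasks.items() if name in ps]
        if succs:
            LF[name] = min(LS[s] for s in succs)
        LS[name] = LF[name] - dur
    critical = [n for n in tasks if abs(LS[n] - ES[n]) < 1e-9]
    return T, critical

# A -> {B, C} -> D; the longer branch through C is critical.
T, crit = critical_path({"A": (3, []), "B": (2, ["A"]),
                         "C": (4, ["A"]), "D": (1, ["B", "C"])})
```

The crash/stretch-out heuristic of step 2 would then repeatedly shorten or lengthen tasks on this critical path according to their time-cost slopes.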

Visual Processing System based on Client-Server Model (Client-Server 모델에 의한 시각처리시스템)

  • Moon, Yong-Seon;Her, Hyung-Pal;Lim, Seung-Woo;Park, Kyung-Sug
    • Journal of the Korean Institute of Telematics and Electronics T
    • /
    • v.36T no.2
    • /
    • pp.42-47
    • /
    • 1999
  • This paper suggests a model of a visual information system for factory automation. The model is composed of a visual processing system based on the client-server model and RPC (Remote Procedure Call); a main server that controls the whole factory automation process; and a processing server that handles only the visual information. To verify its efficiency, the suggested model was realized in a bill recognition system.

  • PDF
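The main-server/processing-server split can be sketched with Python's standard-library RPC in place of the original mechanism. The brightness-based `recognize` stub and the bill labels are purely illustrative stand-ins for the paper's recognition routine:

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

def recognize(pixels):
    """Hypothetical vision routine on the processing server:
    classifies a bill by mean brightness (a toy stand-in)."""
    mean = sum(pixels) / len(pixels)
    return "1000-won" if mean > 0.5 else "5000-won"

# Processing server: exposes only the visual routine over RPC.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]
server.register_function(recognize)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Main server acts as the client, delegating visual processing via RPC.
proxy = ServerProxy(f"http://127.0.0.1:{port}")
label = proxy.recognize([0.9, 0.8, 0.7, 0.6])
server.shutdown()
```

Isolating the vision workload behind an RPC boundary is the point of the paper's design: the main server can keep driving the factory process while the processing server does the heavy image work.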