• Title/Summary/Keyword: Interactive user interface

Search Result 306

Image-Based Approach for Modeling 3D Shapes with Curved Surfaces (곡면을 포함하는 형상의 영상을 이용한 모델링)

  • Lee, Man-Hee;Park, In-Kyu
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.1
    • /
    • pp.38-48
    • /
    • 2007
  • In this paper, we propose an image-based method for modeling 3D objects with curved surfaces based on the NURBS (Non-Uniform Rational B-Splines) representation. Starting from a few calibrated images, the user specifies the corresponding curves by means of an interactive user interface. Then, the 3D curves are recovered using stereo reconstruction. NURBS curves and surfaces are employed so that the curves can be fitted easily through the interactive user interface. The proposed surface modeling techniques include surface-building methods such as bilinear surfaces, ruled surfaces, generalized cylinders, and surfaces of revolution. In addition, we propose various advanced surface modeling techniques, including skinned surfaces, swept surfaces, and boundary patches. Based on these techniques, it is possible to build various types of 3D shape models with textured curved surfaces without much effort. More realistic surfaces can also be reconstructed using the proposed view-dependent texture acquisition algorithm. The constructed 3D shape model, with its curves and curved surfaces, can be exported in VRML format so that it can be used in other 3D graphics software.
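The bilinear surface is the simplest of the surface-building methods listed above: it interpolates four corner points. A minimal sketch in Python (the corner points and parameterization below are illustrative, not taken from the paper):

```python
def bilinear_surface(p00, p10, p01, p11):
    """Return a function S(u, v) evaluating the bilinear patch
    S(u, v) = (1-u)(1-v)P00 + u(1-v)P10 + (1-u)v P01 + uv P11."""
    def surface(u, v):
        return tuple(
            (1 - u) * (1 - v) * a + u * (1 - v) * b
            + (1 - u) * v * c + u * v * d
            for a, b, c, d in zip(p00, p10, p01, p11)
        )
    return surface

# Example: a unit square lifted at one corner.
S = bilinear_surface((0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 1))
center = S(0.5, 0.5)
```

Ruled surfaces generalize this by linearly blending two boundary curves instead of two pairs of points.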

A Study on Interaction of Gaze-based User Interface in Mobile Virtual Reality Environment (모바일 가상현실 환경에서의 시선기반 사용자 인터페이스 상호 작용에 관한 연구)

  • Kim, Mingyu;Lee, Jiwon;Jeon, Changyu;Kim, Jinmo
    • Journal of the Korea Computer Graphics Society
    • /
    • v.23 no.3
    • /
    • pp.39-46
    • /
    • 2017
  • This study proposes a gaze-based user interface to provide user-oriented interaction suitable for virtual reality environments on mobile platforms. For this purpose, mobile platform-based three-dimensional interactive content is produced to test whether the proposed interface increases user satisfaction through interactions in a mobile virtual reality environment. The gaze-based interface, the most common input method for mobile virtual reality content, is designed by considering two factors: the field of view and the feedback system. The performance of the proposed interface is analyzed through experiments on whether it motivates user interest, enhances immersion, differentiates itself from existing formats, and offers convenient content operation.
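Gaze interfaces of this kind commonly select a target by dwell time, with the elapsed fraction driving the feedback system (e.g. a filling progress ring). A sketch of that selection loop; the frame-based timing and circular hit test are assumptions for illustration, not details from the paper:

```python
def gaze_dwell(samples, target, dwell_frames):
    """samples: iterable of (x, y) gaze points, one per frame.
    target: (x, y, radius) circle. Returns the frame index at which
    the target is selected, or None if it never is."""
    tx, ty, r = target
    held = 0
    for i, (x, y) in enumerate(samples):
        if (x - tx) ** 2 + (y - ty) ** 2 <= r * r:
            held += 1  # gaze is on the target: advance the dwell timer
            if held >= dwell_frames:
                return i
        else:
            held = 0  # gaze left the target: reset the dwell timer
    return None
```

Resetting the timer on exit is what makes the field of view matter: a target too close to the edge of comfortable head rotation keeps getting deselected.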

Technical Development of Interactive Game Interface Using Multi-Channel EMG Signal (다채널 근전도 신호를 이용한 체감형 게임 인터페이스 개발)

  • Kim, Kang-Soo;Han, Yong-Hee;Jung, Won-Beom;Lee, Young-Ho;Kang, Jung-Hoon;Choi, Heung-Ho;Mun, Chi-Woong
    • Journal of Korea Game Society
    • /
    • v.10 no.5
    • /
    • pp.65-73
    • /
    • 2010
  • In this paper, we developed a device for an interactive game interface that recognizes the user's motion intention from EMG signals, and applied it to games that need information about the direction of muscle motion. The EMG acquisition module consists of four channels, and wrist motions were defined as up, right, down, and left states. The user's intent was recognized by thresholding and comparing the signals of each channel. The classified motion direction controls the arrow keys on a PC keyboard and can be applied to various games. The proposed game device can be expected to induce effective exercise with interest and enjoyment, and it can be used with both self-developed and commercial games.
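The thresholding-and-comparison step can be sketched as follows; the channel-to-direction mapping and the threshold value are illustrative assumptions, not the paper's calibration:

```python
DIRECTIONS = ("up", "right", "down", "left")  # one per channel, assumed order

def classify_direction(channel_levels, threshold=0.3):
    """channel_levels: four rectified/smoothed EMG amplitudes in [0, 1].
    Returns one of DIRECTIONS, or None while the muscles are at rest."""
    # compare channels: the strongest one names the candidate direction
    strongest = max(range(4), key=lambda ch: channel_levels[ch])
    if channel_levels[strongest] < threshold:
        return None  # below the activation threshold: no intentional motion
    return DIRECTIONS[strongest]
```

The returned direction would then be mapped to a synthetic arrow-key event for the game.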

Investigating the Combination of Bag of Words and Named Entities Approach in Tracking and Detection Tasks among Journalists

  • Mohd, Masnizah;Bashaddadh, Omar Mabrook A.
    • Journal of Information Science Theory and Practice
    • /
    • v.2 no.4
    • /
    • pp.31-48
    • /
    • 2014
  • The proliferation of interactive Topic Detection and Tracking (iTDT) systems has motivated researchers to design systems that track and detect news better. iTDT focuses on user interaction, user evaluation, and user interfaces. Recently, increasing effort has been devoted to user interfaces to improve TDT systems by investigating not just the user interaction aspect but also user- and task-oriented evaluation. This study investigates the combination of the bag of words and named entities approaches implemented in the iTDT interface, called Interactive Event Tracking (iEvent), including which TDT tasks these approaches facilitate. iEvent is composed of three components: Cluster View (CV), Document View (DV), and Term View (TV). User experiments were carried out among journalists to compare three settings of iEvent: Setup 1 and Setup 2 (baseline setups), and Setup 3 (experimental setup). Setup 1 used bag of words and Setup 2 used named entities, while Setup 3 used a combination of bag of words and named entities. Journalists were asked to perform the TDT tasks of Tracking and Detection. Findings revealed that the combination of the bag of words and named entities approaches generally helped the journalists perform well in the TDT tasks. This study confirms that the combination approach in iTDT is useful and enhances the effectiveness of users' performance in the TDT tasks. It also suggests which features and approaches facilitated the journalists in performing the TDT tasks.
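The Setup 3 representation, bag of words combined with named entities, can be sketched as one merged feature vector; the "NE:" feature prefix and whitespace tokenization are assumptions for illustration, not the paper's notation:

```python
from collections import Counter

def combined_features(text, entities):
    """Bag of words over all tokens, plus named-entity mentions
    as their own weighted features."""
    features = Counter(text.lower().split())
    for entity in entities:
        # an entity mention counts separately from its surface tokens
        features["NE:" + entity.lower()] += 1
    return features

doc = "Reuters reports flooding in Jakarta as Jakarta officials respond"
feats = combined_features(doc, ["Reuters", "Jakarta", "Jakarta"])
```

Clusters or documents represented this way can be compared with any standard vector similarity, so the combination slots into the existing CV/DV/TV views.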

A Study on Interactive Sound Installation and User Intention Analysis - Focusing on an Installation: Color note (인터렉티브 사운드 설치와 사용자 의도 분석에 관한 연구 - 작품 Color note 를 중심으로)

  • Han, Yoon-Jung;Han, Byeong-Jun
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2008.02b
    • /
    • pp.268-273
    • /
    • 2008
  • This work defines user intention according to intention range and proposes an interactive sound installation that reflects and varies these features. User intention is decomposed into several concepts, elemental intentions, partial intentions, and a universal intention, and each concept is defined by inclusion/affiliation relationships with the others. To represent elemental intention, we implemented a musical interface, Color Note, which presents colors and notes in response to participants. We also propose Harmonic Defragmentation (HD), which arranges the partial intentions according to harmonic rules. Finally, the universal intention is inferred from the overall direction of the elemental intentions, using the Karhunen-Loève (K-L) transform for the inference. To verify the validity of the proposed interface, Color Note, and the various techniques, we installed the work and surveyed users to evaluate HD and the statistical techniques. We also commissioned another survey to measure the satisfaction used for expressing the universal intention.
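For finite samples the Karhunen-Loève transform coincides with PCA: mean-centre the data and project it onto the eigenvectors of its covariance matrix. A two-dimensional sketch with the 2x2 eigenproblem solved in closed form (the dimensionality and data are assumptions for illustration, not the installation's actual features):

```python
import math

def kl_transform_2d(points):
    """Project 2-D points onto the principal axes of their covariance."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    # entries of the 2x2 covariance matrix
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # principal-axis angle of a symmetric 2x2 matrix
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    c, s = math.cos(theta), math.sin(theta)
    # rotate the centred points into the decorrelated eigenbasis
    return [((x - mx) * c + (y - my) * s,
             -(x - mx) * s + (y - my) * c) for x, y in points]

# Points on the line y = x collapse onto a single principal axis.
projected = kl_transform_2d([(0, 0), (1, 1), (2, 2), (3, 3)])
```

In the installation's terms, the dominant eigen-direction would summarize the elemental intentions as one universal direction.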


A Real-time Interactive Shadow Avatar with Facial Emotions (감정 표현이 가능한 실시간 반응형 그림자 아바타)

  • Lim, Yang-Mi;Lee, Jae-Won;Hong, Euy-Seok
    • Journal of Korea Multimedia Society
    • /
    • v.10 no.4
    • /
    • pp.506-515
    • /
    • 2007
  • In this paper, we propose a Real-time Interactive Shadow Avatar (RISA) that can express facial emotions that change in response to the user's gestures. The avatar's shape is a virtual shadow constructed from a real-time sampled picture of the user. Several predefined facial animations are overlaid on the face area of the virtual shadow according to the type of hand gesture. We use the background subtraction method to separate the virtual shadow, and a simplified region-based tracking method is adopted for tracking hand positions and detecting hand gestures. To express smooth changes of emotion, we use a refined morphing method that uses many more frames than traditional dynamic emoticons. RISA can be directly applied to interface media art, and we expect its detection scheme to be utilized in the near future as an alternative media interface for DMB and camera phones, which need simple input devices.
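The background-subtraction step that separates the virtual shadow can be sketched as per-pixel differencing against a pre-captured background frame; the grayscale pixel model and threshold value are assumptions, since the paper does not specify them here:

```python
def shadow_mask(frame, background, threshold=30):
    """Return a binary mask (1 = foreground/user, 0 = background).
    Frames are nested lists of 0-255 grayscale intensities."""
    return [
        [1 if abs(p - b) > threshold else 0
         for p, b in zip(frame_row, bg_row)]
        for frame_row, bg_row in zip(frame, background)
    ]

bg    = [[10, 10, 10], [10, 10, 10]]
frame = [[10, 200, 10], [10, 200, 200]]
mask  = shadow_mask(frame, bg)
```

The resulting mask is what gets rendered as the shadow silhouette, with the facial animations composited onto its face region.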


A Study on the Creation of Interactive Text Collage using Viewer Narratives (관람자 내러티브를 활용한 인터랙티브 텍스트 콜라주 창작 연구)

  • Lim, Sooyeon
    • The Journal of the Convergence on Culture Technology
    • /
    • v.8 no.4
    • /
    • pp.297-302
    • /
    • 2022
  • Contemporary viewers, familiar with digital space, show a desire for self-expression and use voice, text, and gestures as tools for expression. The purpose of this study is to create interactive art that expresses the narrative uttered by the viewer in the form of a collage using the viewer's figure, and that reproduces and expands the story through the viewer's movement. The proposed interactive art visualizes audio and video information acquired from the viewer as a text collage, and uses gesture information and a natural user interface to interact easily and conveniently in real time and to express personalized emotions. The three pieces of information obtained from the viewer are connected to each other to express the viewer's current, transient emotions. The rigid narrative of the text gains a degree of freedom through the viewer's portrait images and gestures, and at the same time produces and expands a story structure close to reality. The artwork space created in this way is an experiential space where the viewer's narrative is reflected, updated, and created in real time, and it is a reflection of the self. It also induces active appreciation through the viewer's active intervention and action.

An Implementation of User Interface Simulator dedicated to a Mobile Terminal (이동 단말기용 사용자 인터페이스 시뮬레이터 구현)

  • 이효상;허혜선;홍윤식
    • Proceedings of the IEEK Conference
    • /
    • 1999.06a
    • /
    • pp.1049-1052
    • /
    • 1999
  • We present a user interface (UI) simulator for developing a mobile phone. The simulator consists of three major modules: the Graphic Tool Editor, the User Interface Software (UIS), and the Network Command Processor (NCP). The Graphic Tool Editor designs a virtual mobile terminal. The NCP sends a command to the phone and then receives its status from the phone after the command completes. Many features can easily be added to or modified in the phone using the UIS module. These modules interact with each other by sharing a common area in memory; by doing so, they can exchange status and data and operate in real time. We designed and tested a virtual prototype of the LGP 3200 phone manufactured by LGIC using the simulator. Through a series of experiments, we believe our interactive virtual prototyping simulator can shorten the development and testing cycle when applied in the early design phase.
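The shared-memory coupling between the modules can be sketched as functions reading and writing one common structure instead of calling each other directly; the module responsibilities follow the abstract, but the field names are assumptions for illustration:

```python
# One common area stands in for the shared memory region.
shared = {"command": None, "phone_status": "idle", "display": ""}

def ncp_send(command):
    """Network Command Processor: issue a command to the (virtual) phone,
    then record the phone's reply status in the shared area."""
    shared["command"] = command
    shared["phone_status"] = "done:" + command  # simulated phone response

def ui_refresh():
    """UI module: render whatever status is currently in the shared area."""
    shared["display"] = "status = " + shared["phone_status"]

ncp_send("dial")
ui_refresh()
```

Because each module only reads and writes the shared area, modules can be modified or replaced without changing the others, which is what makes adding features to the virtual phone cheap.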


Interactive Rehabilitation Support System for Dementia Patients

  • Kim, Sung-Ill
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.11 no.3
    • /
    • pp.221-225
    • /
    • 2010
  • This paper presents a preliminary study of an interactive rehabilitation support system for both dementia patients and their caregivers, the goal of which is to improve the quality of life (QOL) of patients suffering from dementia through virtual interaction. To achieve this virtual interaction, three kinds of recognition modules, for speech, facial images, and pen-mouse gestures, are studied. The results of both practical tests and questionnaire surveys show that the proposed system needs further improvement, especially in speech recognition and in the user interface, for real-world applications. The surveys also revealed that pen-mouse gesture recognition, as one possible interactive aid, shows promise for compensating for the weaknesses of speech recognition.