• Title/Summary/Keyword: human and computer interaction

Search Results: 614

A Study on the Reduction in VR Cybersickness using an Interactive Wind System (Interactive Wind System을 이용한 VR 사이버 멀미 개선 연구)

  • Lim, Dojeon;Lee, Yewon;Cho, Yesol;Ryoo, Taedong;Han, Daseong
    • Journal of the Korea Computer Graphics Society / v.27 no.3 / pp.43-53 / 2021
  • This paper presents an interactive wind system that generates artificial winds in a virtual reality (VR) environment according to online user inputs from a steering wheel and an acceleration pedal. The system combines a head-mounted display (HMD) with three electric fans so that the user feels winds blowing from three different directions in a racing-car VR application. To evaluate how effectively the winds reduce VR cybersickness, we employ the simulator sickness questionnaire (SSQ), one of the most common measures of cybersickness. We conducted experiments on 13 subjects who experienced the racing content first with the winds and then without them, or vice versa. The results show that the VR content with artificial winds clearly reduces cybersickness while providing a positive user experience.
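The abstract gives no implementation details, but the core control loop — mapping pedal and steering input to per-fan wind intensity — can be sketched as follows. The blending weights and fan layout here are hypothetical assumptions, not taken from the paper:

```python
def fan_intensities(speed, steering, max_speed=1.0):
    """Map driving state to duty cycles for three fans: left, front, right.

    speed:    normalized pedal input in [0, 1]
    steering: wheel angle in [-1 (full left), 1 (full right)]
    Returns duty cycles in [0, 1]. Weights are illustrative assumptions.
    """
    base = min(max(speed / max_speed, 0.0), 1.0)    # headwind grows with speed
    # When turning, shift some airflow toward the outside of the turn.
    left = base * max(0.0, steering) * 0.5          # turning right -> wind from left
    right = base * max(0.0, -steering) * 0.5        # turning left  -> wind from right
    front = base * (1.0 - 0.5 * abs(steering))
    return {"left": left, "front": front, "right": right}
```

A real system would feed these duty cycles to the three fans' motor controllers each frame.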

Multi Function Console display configuration and HCI design to improve Naval Combat System operability

  • Park, Dae-Young;Jung, Dong-Han;Yang, Moon-Seok
    • Journal of the Korea Society of Computer and Information / v.24 no.12 / pp.75-84 / 2019
  • The Naval Combat System comprises several pieces of equipment needed to operate the system, such as radar, underwater sensors, gun and missile control, and armament control equipment, and a multi-function console is configured to control them. The multi-function console runs HCI (Human Computer Interaction)-based software for displaying equipment status information and controlling the equipment, and the operator uses this software to operate the Naval Combat System. However, when the Naval Combat System is operated for a long time, problems arise such as physical discomfort caused by the structure of the multi-function console display and increased fatigue from operating a varied and complicated user interface. These issues are important factors that reduce Naval Combat System operability. To address them, this paper proposes, based on a questionnaire survey of Naval Combat System development personnel, a multi-function console screen design that reduces physical discomfort and an HCI design that reduces fatigue and increases intuitiveness. The proposed design is expected to provide convenience to future Naval Combat System operators and to improve operability over existing systems.

A Study on Verification of Back TranScription(BTS)-based Data Construction (Back TranScription(BTS)기반 데이터 구축 검증 연구)

  • Park, Chanjun;Seo, Jaehyung;Lee, Seolhwa;Moon, Hyeonseok;Eo, Sugyeong;Lim, Heuiseok
    • Journal of the Korea Convergence Society / v.12 no.11 / pp.109-117 / 2021
  • Recently, the use of speech-based interfaces is increasing as a means of human-computer interaction (HCI). Accordingly, interest in post-processors that correct errors in speech recognition results is also increasing. However, building a sequence-to-sequence (S2S) speech recognition post-processor requires a great deal of human labor for data construction. To alleviate the limitations of the existing construction methodology, a new data construction method called Back TranScription (BTS) was proposed. BTS combines TTS and STT technology to create a pseudo-parallel corpus. This methodology eliminates the role of a phonetic transcriptor and can automatically generate vast amounts of training data, saving cost. Extending the existing BTS research, this paper verifies through experiments that data should be constructed in consideration of text style and domain rather than without any criteria.
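The BTS idea — pipe clean text through TTS and then STT, and pair the noisy recognition output with the original — can be sketched as below. A real pipeline would call actual speech synthesis and recognition engines; here the round trip is stubbed with a hypothetical character-level noise function, so only the corpus-building shape is shown:

```python
import random

def tts_stt_roundtrip(text, error_rate=0.1, rng=None):
    """Stand-in for a real TTS -> STT round trip: injects character
    substitutions to mimic recognition errors. A real BTS pipeline would
    invoke actual TTS and STT engines here instead."""
    rng = rng or random.Random(0)
    chars = list(text)
    for i, c in enumerate(chars):
        if c.isalpha() and rng.random() < error_rate:
            chars[i] = rng.choice("abcdefghijklmnopqrstuvwxyz")
    return "".join(chars)

def build_bts_corpus(sentences, **kw):
    """Each pair is (noisy STT output, clean original) -- the input and
    target of an S2S post-processor that restores the clean text."""
    return [(tts_stt_roundtrip(s, **kw), s) for s in sentences]
```

The paper's point is that the `sentences` fed in should be filtered by text style and domain rather than collected without criteria.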

Adaptive Mass-Spring Method for the Synchronization of Dual Deformable Model (듀얼 가변형 모델 동기화를 위한 적응성 질량-스프링 기법)

  • Cho, Jae-Hwan;Park, Jin-Ah
    • Journal of the Korea Computer Graphics Society / v.15 no.3 / pp.1-9 / 2009
  • Traditional computer simulation uses only conventional input and output devices. With the recent emergence of haptic techniques, which give users kinetic and tactile feedback, the field of computer simulation is diversifying. In particular, as virtual-reality-based surgical simulation has been recognized as an effective training tool in medical education, practical virtual simulation of surgery has become a stimulating new research area. A surgical simulation framework should represent the realistic properties of human organs so that the user is highly immersed in the interaction with a virtual object, and it should provide proper haptic and visual feedback for such an environment. However, a single model may not be suitable for simulating both haptic and visual feedback, because the perceptive channels of the two kinds of feedback differ, as do their system requirements. Therefore, we separate the two models to simulate haptic and visual feedback independently but simultaneously. We propose an adaptive mass-spring method as a multi-modal simulation technique to synchronize the two separated models, and present a dual-model simulation framework that can realistically simulate the behavior of the soft, pliable human body along with haptic feedback from the user's interaction.
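As background for the method's name, one integration step of a plain (non-adaptive) mass-spring model is sketched below in 1-D with explicit Euler. The paper's actual contribution — the adaptive scheme that synchronizes the separate haptic and visual models — is not reproduced; this only shows the baseline both models would share:

```python
def step_mass_spring(positions, velocities, springs, k=10.0, mass=1.0,
                     damping=0.5, dt=0.01):
    """One explicit-Euler step of a 1-D mass-spring chain.

    positions, velocities -- one float per mass node
    springs -- (i, j, rest_length) tuples, node j to the right of node i
    A surgical simulator would run a loop like this at a high rate for
    haptics and a lower rate for rendering, then synchronize the two.
    """
    forces = [0.0] * len(positions)
    for i, j, rest in springs:
        stretch = (positions[j] - positions[i]) - rest
        f = k * stretch            # positive stretch pulls the nodes together
        forces[i] += f
        forces[j] -= f
    new_v = [(v + dt * f / mass) * (1.0 - damping * dt)
             for v, f in zip(velocities, forces)]
    new_p = [p + dt * v for p, v in zip(positions, new_v)]
    return new_p, new_v
```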

The User Interface of Button Type for Stereo Video-See-Through (Stereo Video-See-Through를 위한 버튼형 인터페이스)

  • Choi, Young-Ju;Seo, Young-Duek
    • Journal of the Korea Computer Graphics Society / v.13 no.2 / pp.47-54 / 2007
  • This paper proposes a user interface based on a video see-through environment that shows images from stereo cameras so that the user can easily control computer systems or other processes. We apply AR technology to synthesize virtual buttons; the graphic images are overlaid in real time on the frames captured by the camera. We search for the hand position in the frames to judge whether the user has selected a button, and the result of this judgment is visualized by changing the button color. The user can easily interact with the system by selecting a virtual button on the screen, watching the screen while moving his or her fingers in the air.
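The per-frame selection logic described above reduces to a hit test between the detected hand position and each button's rectangle, with the color change as feedback. A minimal sketch, with hypothetical geometry and colors (the paper works on stereo camera frames with an AR overlay):

```python
def update_buttons(hand_xy, buttons):
    """Per-frame hit test: set each button's 'pressed' state and display
    color depending on whether the detected hand position falls inside
    its rectangle. Rect format: (x, y, width, height)."""
    hx, hy = hand_xy
    for b in buttons:
        x, y, w, h = b["rect"]
        b["pressed"] = x <= hx <= x + w and y <= hy <= y + h
        b["color"] = "red" if b["pressed"] else "gray"
    return buttons
```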

Towards Establishing a Touchless Gesture Dictionary based on User Participatory Design

  • Song, Hae-Won;Kim, Huhn
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.515-523 / 2012
  • Objective: The aim of this study is to investigate users' intuitive stereotypes about non-touch gestures and to establish a gesture dictionary that can be applied to gesture-based interaction designs. Background: Recently, interaction based on non-touch gestures is emerging as an alternative for natural interaction between humans and systems. However, for non-touch gestures to become a universal interaction method, studies on what kinds of gestures are intuitive and effective are a prerequisite. Method: In this study, four devices (TV, audio, computer, car navigation) and sixteen basic operations (power on/off, previous/next page, volume up/down, list up/down, zoom in/out, play, cancel, delete, search, mute, save) were drawn from a focus group interview and a survey as applicable domains for non-touch gestures. Then a user participatory design was performed: participants designed three gestures suitable for each operation on each device and evaluated the intuitiveness, memorability, convenience, and satisfaction of their derived gestures. Through the participatory design, agreement scores, frequencies, and planning times of each distinguished gesture were measured. Results: The derived gestures did not differ across the four devices, but diverse yet common gestures were derived across kinds of operations. In particular, manipulative gestures were suitable for all kinds of operations, whereas semantic or descriptive gestures suited one-shot operations like power on/off, play, cancel, or search. Conclusion: The touchless gesture dictionary was established by mapping intuitive and valuable gestures onto each operation. Application: The dictionary can be applied to interaction designs based on non-touch gestures and can serve as a basic reference for standardizing non-touch gestures.
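In gesture-elicitation studies, the agreement score mentioned above is commonly computed per operation as the sum of squared proportions of identical gesture proposals; whether this paper uses exactly that definition is an assumption. A minimal sketch:

```python
from collections import Counter

def agreement_score(proposals):
    """Agreement score for one operation: sum over groups of identical
    gesture proposals of (group size / total proposals) squared.
    1.0 means all participants proposed the same gesture."""
    total = len(proposals)
    return sum((n / total) ** 2 for n in Counter(proposals).values())
```

For example, if two of four participants propose "swipe" and the others "wave" and "tap", the score is (2/4)² + (1/4)² + (1/4)² = 0.375.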

The interaction between emotion recognition through facial expression based on cognitive user-centered television (이용자 중심의 얼굴 표정을 통한 감정 인식 TV의 상호관계 연구 -인간의 표정을 통한 감정 인식기반의 TV과 인간의 상호 작용 연구)

  • Lee, Jong-Sik;Shin, Dong-Hee
    • Journal of the HCI Society of Korea / v.9 no.1 / pp.23-28 / 2014
  • In this study we focus on the interaction between humans and a reactive television that uses emotion recognition through facial expressions. Most user interfaces in today's electronic products are passive and not properly fitted to users' needs. From the perspective of user-centered devices, we propose that an emotion-based reactive television is more effective in interaction than other passive input products. We have developed and researched user-centered next-generation cognitive TV models. In this paper we present the results of an experiment, conducted with the Fraunhofer IIS $SHORE^{TM}$ demo software, that measured emotion recognition. This approach was based on real-time cognitive TV models, through which we studied the relationship between humans and cognitive TV. The study considers the following behaviors: 1) the cognitive TV can switch automatically between ON and OFF modes in response to people's motions; 2) it can select channels directly as the viewer's facial expression changes (e.g., Neutral, Happy, Sad, and Angry modes); 3) it can detect the viewer's emotion from facial expressions within a fixed time and then, if Happy mode is detected, shift to funny or interesting shows, or, if Angry mode is detected, change to moving or touching shows. In addition, we focus on improving emotion recognition through facial expression, and on adapting the cognitive TV to personal characteristics to account for users' different personalities in human-computer interaction. In this manner, how people feel, how the cognitive TV responds accordingly, and the effects of media as a cognitive mechanism are thoroughly discussed.
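The channel-switching behavior described in the abstract amounts to a small emotion-to-action dispatch. A hypothetical sketch; the dwell threshold and action names are assumptions, and only the Happy and Angry mappings are stated in the abstract:

```python
def tv_action(emotion, dwell_seconds, threshold=3.0):
    """Map an emotion detected from facial expression, once held for at
    least `threshold` seconds, to a TV action following the abstract's
    scenario (Happy -> funny shows, Angry -> moving/touching shows)."""
    if dwell_seconds < threshold:
        return "keep_current_channel"
    mapping = {"happy": "switch_to_funny_show",
               "angry": "switch_to_touching_show"}
    return mapping.get(emotion, "keep_current_channel")
```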

The Modified Block Matching Algorithm for a Hand Tracking of an HCI system (HCI 시스템의 손 추적을 위한 수정 블록 정합 알고리즘)

  • Kim Jin-Ok
    • Journal of Internet Computing and Services / v.4 no.4 / pp.9-14 / 2003
  • A GUI (graphical user interface) has been the dominant platform for HCI (human-computer interaction). GUI-based interaction has made computers simpler and easier to use, but it does not easily support the range of interaction needed to meet users' needs in a way that is natural, intuitive, and adaptive. In this paper, a modified BMA (block matching algorithm) is proposed to track a hand in an image sequence and recognize it in each video frame, so that the hand can replace a mouse as a pointing device for virtual reality. An HCI system running at 30 frames per second is realized. The modified BMA estimates the position of the hand and performs segmentation using the orientation of motion and the color distribution of the hand region for real-time processing. Experimental results show that the modified BMA with the YCbCr (luminance Y, component blue, component red) color coordinate guarantees real-time processing and a good recognition rate. Hand tracking by the modified BMA can be applied to virtual reality, games, or HCI systems for the disabled.
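The core of any BMA is a sum-of-absolute-differences (SAD) minimization over a search window; the baseline is sketched below. The paper's modification — YCbCr skin-color segmentation combined with motion orientation — is not reproduced, and the block and window sizes are illustrative:

```python
def block_match(prev, curr, block_xy, block=8, search=4):
    """Find the displacement of a block between two grayscale frames
    (lists of lists of ints) by minimizing SAD over a +/-`search` pixel
    window. Returns the best (dx, dy)."""
    bx, by = block_xy
    ref = [row[bx:bx + block] for row in prev[by:by + block]]
    best, best_sad = None, float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = bx + dx, by + dy
            if y < 0 or x < 0 or y + block > len(curr) or x + block > len(curr[0]):
                continue  # candidate block would leave the frame
            cand = [row[x:x + block] for row in curr[y:y + block]]
            sad = sum(abs(a - b) for r1, r2 in zip(ref, cand)
                      for a, b in zip(r1, r2))
            if sad < best_sad:
                best, best_sad = (dx, dy), sad
    return best
```

Tracking a hand then amounts to running this per frame on the block(s) covering the segmented hand region.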

A Study on Multibiometrics derived from Calling Activity Context using Smartphone for Implicit User Authentication System

  • Negara, Ali Fahmi Perwira;Yeom, Jaekeun;Choi, Deokjai
    • International Journal of Contents / v.9 no.2 / pp.14-21 / 2013
  • Current smartphone authentication systems are deemed inconvenient and difficult for users, who must remember passwords, and raise privacy issues around stolen or forged biometrics. A new authentication system is demanded that is implicit to users, requiring minimal user involvement. This idea aims toward a future model of authentication for smartphone users that works without their realizing they are being authenticated. We use the most frequent activity users carry out with their smartphone: calling. We derive two related interactions from it: the first factor is the arm's flex (AF) action of picking the phone up to one's ear, and the second factor, once the phone is near the ear, is the ear-shape image. Here we combine behavioral biometrics from the AF in the first factor with physical biometrics from the ear image in the second. Our study shows that this dual-factor authentication system requires no explicit user interaction, improving convenience and alleviating the burden of persistently having to remember passwords. These findings will aid the development of a novel implicit authentication system that is transparent, easy, and unobtrusive for users.
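One plausible way to combine the two factors is weighted score-level fusion, sketched below. The weights, threshold, and the assumption that both matchers output normalized similarity scores are illustrative; the abstract does not state how the paper fuses the factors:

```python
def authenticate(af_score, ear_score, w_af=0.4, w_ear=0.6, threshold=0.7):
    """Weighted score-level fusion of the two factors: arm-flex (AF)
    behavioral similarity and ear-shape image similarity, both assumed
    normalized to [0, 1]. Returns (accepted, fused_score)."""
    fused = w_af * af_score + w_ear * ear_score
    return fused >= threshold, fused
```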

Development of an Organism-specific Protein Interaction Database with Supplementary Data from the Web Sources (다양한 웹 데이터를 이용한 특정 유기체의 단백질 상호작용 데이터베이스 개발)

  • Hwang, Doo-Sung
    • The KIPS Transactions:PartD / v.9D no.6 / pp.1091-1096 / 2002
  • This paper presents the development of a protein interaction database. The developed system is characterized as follows. First, it maintains not only interaction data collected by experiments but also the genomic information of the protein data. Second, it can extract details on interacting proteins through the developed wrappers. Third, it is wrapper-based in order to extract biologically meaningful data from various web sources and integrate them into a relational database. The system adopts a layered, modular architecture based on a wrapper-mediator approach, which resolves the syntactic and semantic heterogeneity among multiple data sources. Currently the system has wrapped the relevant data for about 40% of roughly 11,500 proteins on average from the various accessible sources. The wrapper-mediator approach makes the protein interaction data comprehensive and useful by supporting data interoperability and integration. The database will be useful for further knowledge mining and analysis in proteomics studies.
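The wrapper-mediator architecture can be sketched as below: each wrapper normalizes one source into a common record schema, and the mediator merges the wrappers' answers. The source names, fields, and merge policy are hypothetical, not taken from the paper:

```python
class Wrapper:
    """Extracts protein records from one web source into a common schema.
    `fetch` is a callable protein_id -> dict (raw fields) or None."""
    def __init__(self, source_name, fetch):
        self.source_name = source_name
        self.fetch = fetch

    def get(self, protein_id):
        raw = self.fetch(protein_id)
        if raw is None:
            return None
        return {"id": protein_id, "source": self.source_name, **raw}

class Mediator:
    """Queries all wrappers and merges their normalized answers into one
    record, hiding the heterogeneity of the underlying sources."""
    def __init__(self, wrappers):
        self.wrappers = wrappers

    def get(self, protein_id):
        merged = {}
        for w in self.wrappers:
            rec = w.get(protein_id)
            if rec:
                merged.update({k: v for k, v in rec.items() if k != "source"})
        return merged or None
```

A relational database layer would then persist the merged records.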