• Title/Summary/Keyword: Auditory User Interface (청각적 사용자 인터페이스)


Development of a Hand Shape Editor for Sign Language Expression (수화 표현을 위한 손 모양 편집 프로그램의 개발)

  • Oh, Young-Joon;Park, Kwang-Hyun;Bien, Zeung-Nam
    • Journal of the Institute of Electronics Engineers of Korea SC / v.44 no.4 s.316 / pp.48-54 / 2007
  • Hand shape is one of the important elements in Korean Sign Language (KSL), a communication method for the deaf. To express sign motion in an OpenGL-based virtual reality environment, we need an editor that can insert and modify sign motion data. However, it is very difficult for people who lack knowledge of sign language to edit and express hand shapes exactly with the existing editors. We also need a program to construct and store the hand shape data efficiently, because the amount of data in a sign word dictionary is very large. In this paper we developed a KSL hand shape editor with which hand shapes can easily be constructed and edited through a graphical user interface (GUI) and stored in a database. The resulting hand shape codes are used in a sign word editor to synthesize sign motion and decrease the total amount of KSL data.
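
As a rough illustration of the hand-shape-code idea in this abstract, the sketch below stores each hand shape once in a database and lets sign-word entries reference it by code. The SQLite schema, field names, and joint-angle layout are assumptions for illustration, not the paper's actual design.

```python
import json
import sqlite3

conn = sqlite3.connect("ksl_handshapes.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS hand_shape (
    code         TEXT PRIMARY KEY,   -- e.g. 'HS-001'
    joint_angles TEXT NOT NULL       -- JSON list of finger joint angles in degrees
);
CREATE TABLE IF NOT EXISTS sign_word (
    word        TEXT,
    frame_index INTEGER,
    hand        TEXT,                -- 'left' or 'right'
    shape_code  TEXT REFERENCES hand_shape(code)
);
""")

# A fist-like shape is described once and reused by every sign word that needs it.
fist = [90.0] * 12 + [45.0, 30.0, 10.0]   # hypothetical joint layout
conn.execute("INSERT OR REPLACE INTO hand_shape VALUES (?, ?)",
             ("HS-001", json.dumps(fist)))
conn.execute("INSERT INTO sign_word VALUES (?, ?, ?, ?)",
             ("친구", 0, "right", "HS-001"))
conn.commit()
```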

Development of A Haptic Interactive Virtual Exhibition Space (햅틱 상호작용을 제공하는 가상 전시공간 개발)

  • You, Yong-Hee;Cho, Yun-Hye;Choi, Geon-Suk;Sung, Mee-Young
    • Journal of KIISE:Computing Practices and Letters / v.13 no.6 / pp.412-416 / 2007
  • In this paper, we present a haptic virtual exhibition space that allows users to interact with 3D graphic objects not only through the sense of sight but also through the sense of touch. The haptic virtual exhibition space offers users in different places efficient ways to experience the exhibits of a virtual musical museum through the basic human senses of perception, such as vision, audition, and touch. We apply different haptic properties to the 3D graphic objects so that each of them feels realistic. We also provide haptic-device-based navigation, which keeps users from switching back and forth between other interfaces such as the keyboard and mouse. The haptic virtual museum is based on a client-server architecture, and clients are represented in the 3D space in the form of avatars. In this paper, we mainly discuss the design of the haptic virtual exhibition space in detail and, in the end, provide a performance analysis in comparison with similar applications such as QTVR and VRML.
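
A minimal sketch of the per-object haptic properties mentioned above: each exhibit is given its own stiffness, damping, and friction values so it feels different under the haptic device. The object names and numeric values are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class HapticMaterial:
    stiffness: float          # N/m, how hard the surface feels
    damping: float            # N*s/m
    static_friction: float
    dynamic_friction: float

# Different exhibits get different tactile "feels" (values are made up).
MATERIALS = {
    "bronze_bell":  HapticMaterial(stiffness=900.0, damping=2.0,
                                   static_friction=0.4, dynamic_friction=0.3),
    "drum_skin":    HapticMaterial(stiffness=150.0, damping=5.0,
                                   static_friction=0.7, dynamic_friction=0.6),
    "wooden_flute": HapticMaterial(stiffness=600.0, damping=1.5,
                                   static_friction=0.5, dynamic_friction=0.4),
}

def material_for(object_id: str) -> HapticMaterial:
    """Look up the haptic material to apply when the haptic proxy touches an object."""
    return MATERIALS[object_id]
```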

Design and Implementation of Korean Voice Web Browser (한국어 음성 웹브라우저 설계 및 구현)

  • Jang, Young-Gun;Jo, Kyoung-Hwan
    • Journal of KIISE:Computing Practices and Letters / v.7 no.5 / pp.458-466 / 2001
  • This paper describes the design and implementation of a Korean voice web browser that uses voice technologies to control the browser, select contents in a web document, and convert them to speech after HTML analysis. The main feature of this web browser is a universal design that serves both sighted and visually impaired users and allows a multi-modal interface. As a voice interface for the visually impaired, it presents the document as a tree structure, so that users can recognize the structure of a web page through voice guidance alone, regardless of frame usage. It can handle every element described by a tag in the web document and identifies each of them with a predefined, distinct voice property according to the element type. This method eliminates the additional guidance speech for element types without requiring an audio style sheet or extra programming effort.

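The tag-to-voice mapping described in this abstract can be sketched as follows; the voice profiles and the tiny parser are assumptions, not the paper's implementation, and a real browser would pass the utterances to a Korean TTS engine.

```python
from html.parser import HTMLParser

# Hypothetical voice profiles: (pitch, speaking rate) per element type.
VOICE_BY_TAG = {
    "h1": ("high", "slow"),
    "a":  ("high", "normal"),
    "li": ("normal", "normal"),
    "p":  ("normal", "normal"),
}
DEFAULT_VOICE = ("normal", "normal")

class VoiceBrowser(HTMLParser):
    """Collects (voice, text) utterances; a real system would feed them to TTS."""
    def __init__(self):
        super().__init__()
        self.tag_stack = ["#document"]
        self.utterances = []

    def handle_starttag(self, tag, attrs):
        self.tag_stack.append(tag)

    def handle_endtag(self, tag):
        if len(self.tag_stack) > 1:
            self.tag_stack.pop()

    def handle_data(self, data):
        text = data.strip()
        if text:
            voice = VOICE_BY_TAG.get(self.tag_stack[-1], DEFAULT_VOICE)
            self.utterances.append((voice, text))

browser = VoiceBrowser()
browser.feed("<h1>News</h1><p>Today is <a href='#'>sunny</a>.</p>")
for (pitch, rate), text in browser.utterances:
    print(f"[pitch={pitch}, rate={rate}] {text}")
```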

Real-Time Stereoscopic Visualization of Very Large Volume Data on CAVE (CAVE상에서의 방대한 볼륨 데이타의 실시간 입체 영상 가시화)

  • 임무진;이중연;조민수;이상산;임인성
    • Journal of KIISE:Computing Practices and Letters / v.8 no.6 / pp.679-691 / 2002
  • Volume visualization is an important subarea of scientific visualization, concerned with techniques for generating meaningful visual information from abstract and complex volume datasets defined in three- or higher-dimensional space. It has become increasingly important in various fields, including meteorology, medical science, and computational fluid dynamics. Virtual reality, on the other hand, is a research field focusing on techniques that help users experience virtual worlds through visual, auditory, and tactile senses. In this paper, we have developed a visualization system for CAVE, an immersive 3D virtual environment system, which generates stereoscopic images from huge human volume datasets in real time using an improved volume visualization technique. To complement 3D texture-mapping-based volume rendering methods, which easily slow down as data sizes increase, our system uses an image-based rendering technique to guarantee real-time performance. The system has been designed to offer a variety of user interface functions for effective visualization. In this article, we describe our real-time stereoscopic visualization system in detail and show how the Visible Korean Human dataset is effectively visualized on CAVE.
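
One way to read the "image-based fallback for real-time performance" idea is a frame-budget switch like the sketch below; the budget, the stub renderers, and the switching rule are assumptions for illustration only, not the authors' system.

```python
import time

FRAME_BUDGET_S = 1.0 / 30.0   # assumed 30 fps target per eye

def render_volume_full(view):
    """Stub for the full 3D-texture-mapped volume rendering pass."""
    ...

def render_cached_image(view):
    """Stub for re-projecting a previously rendered image (image-based rendering)."""
    ...

last_full_render_cost = 0.0   # seconds taken by the last full render

def render_frame(view):
    """Use full volume rendering while it fits the budget, else reuse a cached image."""
    global last_full_render_cost
    if last_full_render_cost <= FRAME_BUDGET_S:
        start = time.perf_counter()
        render_volume_full(view)
        last_full_render_cost = time.perf_counter() - start
    else:
        render_cached_image(view)
```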

Speech Visualization of Korean Vowels Based on the Distances Among Acoustic Features (음성특징의 거리 개념에 기반한 한국어 모음 음성의 시각화)

  • Pok, Gouchol
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.12 no.5 / pp.512-520 / 2019
  • Visual representation of speech is quite useful for learners of foreign languages as well as for the hearing impaired, who cannot hear speech directly, and a number of studies have been presented in the literature. They remain, however, at the level of representing the characteristics of speech with colors or showing the changing shape of the lips and mouth with animation. As a result, such methods cannot tell users how far their pronunciation is from the standard one, and they make it technically difficult to build a system in which users can correct their pronunciation interactively. To address these drawbacks, this paper proposes a speech visualization model based on the relative distance between the user's speech and the standard one, and suggests a concrete implementation by applying the proposed model to the visualization of Korean vowels. The method extracts the three formants F1, F2, and F3 from the speech signal and feeds them into a Kohonen self-organizing map (SOM), mapping the result onto a 2-D screen so that each utterance is represented as a point. We present a real system, implemented with open-source formant analysis software, applied to the speech of a Korean instructor and several foreign students studying Korean; its user interface was built with JavaScript for the screen display.
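
The formant-to-screen pipeline described above could look roughly like this sketch, with made-up formant values, the third-party minisom package standing in for the Kohonen SOM, and real F1-F3 values assumed to come from a formant analyzer.

```python
import numpy as np
from minisom import MiniSom

# (F1, F2, F3) in Hz for a few reference Korean vowels (illustrative numbers only).
reference = {
    "ㅏ": [800.0, 1200.0, 2500.0],
    "ㅣ": [300.0, 2300.0, 3000.0],
    "ㅜ": [350.0,  800.0, 2400.0],
}
raw = np.array(list(reference.values()))
mean, std = raw.mean(axis=0), raw.std(axis=0)
data = (raw - mean) / std                       # normalize each formant

som = MiniSom(10, 10, input_len=3, sigma=1.5, learning_rate=0.5, random_seed=1)
som.train_random(data, 500)                     # a real system trains on many samples

def screen_position(f1, f2, f3):
    """Map a formant triple to 2-D map coordinates for on-screen display."""
    x = (np.array([f1, f2, f3]) - mean) / std
    return som.winner(x)

# A learner's vowel appears near or far from the reference point on the 2-D map.
print("reference ㅏ:", screen_position(*reference["ㅏ"]))
print("learner   ㅏ:", screen_position(820.0, 1150.0, 2450.0))
```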

Development and Utility Evaluation of Portable Respiration Training Device for Image-guided Stereotactic Body Radiation Therapy (SBRT) (영상유도 체부정위방사선 치료시 호흡동조를 위한 휴대형 호흡연습장치의 개발 및 유용성 평가)

  • Hwang, Seon Bung;Park, Mun Kyu;Park, Seung Woo;Cho, Yu Ra;Lee, Dong Han;Jung, Hai Jo;Ji, Young Hoon;Kwon, Soo-Il
    • Progress in Medical Physics / v.25 no.4 / pp.264-270 / 2014
  • This study developed a portable respiratory training device to improve breathing stability, which is an important element in using the CyberKnife Synchrony respiratory tracking system, one of the representative stereotactic radiation therapy (SRT) devices. We built an interface that lets users select one of two displays, a graph type and a bar type, added an auditory cue that helps them anticipate the next breath by improving the sense of rhythm of their respiratory period, and thereby provided comfortable respiratory guidance. For five volunteers, individual respiratory periods were detected with a self-developed program; signal data for "guided respiration" induced by the auditory cue were acquired and compared with signal data from "free respiration", and usability was evaluated by comparing the mean deviations of respiratory period and respiratory amplitude. The respiratory period deviation decreased by 55.74 ± 0.14% and the respiratory amplitude deviation decreased by 28.12 ± 0.10% compared with free respiration, confirming the consistency and stability of respiration. SBRT for liver or lung cancer using the portable respiratory training device developed from these results is expected to help reduce treatment delays caused by respiratory instability and to improve treatment accuracy; if it is further applied to respiratory training applications for Android-based portable devices, greater convenience and economic efficiency can also be expected.
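
The percent reductions quoted above can be reproduced in principle from recorded breathing periods as in the short example below; the numbers are invented and only illustrate the arithmetic.

```python
import statistics

# Respiratory periods in seconds (invented values).
free_periods  = [3.8, 4.6, 3.2, 5.1, 4.0, 3.5]   # free breathing
guide_periods = [4.0, 4.1, 3.9, 4.2, 4.0, 3.9]   # guided by the training device

free_dev  = statistics.stdev(free_periods)
guide_dev = statistics.stdev(guide_periods)
reduction = (free_dev - guide_dev) / free_dev * 100.0

print(f"period variability reduced by {reduction:.1f}% under guided breathing")
```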