• Title/Summary/Keyword: human and computer interaction


A Comparison of Effective Feature Vectors for Speech Emotion Recognition (음성신호기반의 감정인식의 특징 벡터 비교)

  • Shin, Bo-Ra;Lee, Soek-Pil
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.67 no.10
    • /
    • pp.1364-1369
    • /
    • 2018
  • Speech emotion recognition, which aims to classify a speaker's emotional state from speech signals, is one of the essential tasks for making human-machine interaction (HMI) more natural and realistic. Voice expression is one of the main information channels in interpersonal communication. However, existing speech emotion recognition technology has not achieved satisfactory performance, probably because of the lack of effective emotion-related features. This paper provides a survey of the various features used for speech emotion recognition and discusses which features, or which combinations of features, are valuable and meaningful for emotion classification. The main aim of this paper is to discuss and compare various approaches to feature extraction and to propose a basis for extracting useful features in order to improve speech emotion recognition (SER) performance.
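
As a rough illustration of the kind of frame-level features such surveys compare (this abstract does not list a specific feature set), the sketch below computes three classic prosodic features with plain NumPy: log energy, zero-crossing rate, and an autocorrelation-based pitch estimate. The frame length, hop size, and pitch search range are illustrative assumptions, not values from the paper.

```python
import numpy as np

def frame_signal(x, frame_len=400, hop=160):
    """Split a 1-D signal into overlapping frames (25 ms / 10 ms at 16 kHz)."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def prosodic_features(x, sr=16000):
    """Per-frame log energy, zero-crossing rate, and autocorrelation pitch."""
    frames = frame_signal(x)
    energy = np.log(np.sum(frames ** 2, axis=1) + 1e-10)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    pitch = []
    for f in frames:
        ac = np.correlate(f, f, mode="full")[len(f) - 1:]
        lo, hi = sr // 400, sr // 60          # search the 60-400 Hz range
        lag = lo + int(np.argmax(ac[lo:hi]))  # lag of the autocorrelation peak
        pitch.append(sr / lag)
    return np.column_stack([energy, zcr, np.array(pitch)])

# 0.5 s of a synthetic 200 Hz "voiced" tone at 16 kHz
t = np.arange(8000) / 16000
feats = prosodic_features(np.sin(2 * np.pi * 200 * t))
print(feats.shape)   # (n_frames, 3): energy, ZCR, pitch per frame
```

Stacking such per-frame features (or their statistics over an utterance) is the usual starting point for the feature-comparison experiments the paper describes.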

Adaptive Postural Control for Trans-Femoral Prostheses Based on Neural Networks and EMG Signals

  • Lee Ju-Won;Lee Gun-Ki
    • International Journal of Precision Engineering and Manufacturing
    • /
    • v.6 no.3
    • /
    • pp.37-44
    • /
    • 2005
  • The gait control capacity of most trans-femoral prostheses differs significantly from that of a normal person, and long-term training is required before a patient can walk properly. People tire easily when wearing a prosthesis or orthosis for a long period, typically because the gait angle cannot be adjusted smoothly while it is worn. Therefore, to improve the gait control of a trans-femoral prosthesis, the proper gait angle is estimated from surface EMG (electromyogram) signals on the normal leg, and the gait posture that the trans-femoral prosthesis should take is then calculated by a neural network that learns the gait kinetics from the normal leg's gait angle. Based on this predicted angle, a postural control method that adapts to the patient's gait habits is proposed and tested. In this study, the gait angle prediction showed an accuracy of over 97%, and the posture control an accuracy of over 90%.
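
The core regression step, EMG features in, gait angle out, can be sketched with a small neural network. The paper's actual architecture, channels, and data are not given in this abstract; the sketch below assumes four hypothetical EMG channel amplitudes, a synthetic linear mapping to a standardized angle, and a one-hidden-layer network trained by plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 rectified surface-EMG channel amplitudes mapped to
# one (standardized) gait angle; both inputs and targets here are synthetic.
X = rng.uniform(0.0, 1.0, (200, 4))
y = (X @ np.array([30.0, -10.0, 20.0, 5.0]))[:, None]
y = (y - y.mean()) / y.std()          # standardize the target angle

# One-hidden-layer network trained by full-batch gradient descent.
W1 = rng.normal(0.0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # predicted (standardized) angle
    err = pred - y
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)  # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(mse)
```

In the system the abstract describes, the predicted angle would then drive the prosthesis's postural controller rather than be printed.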

A Research on Improvement of Usability Testing Techniques for the Military Logistics Software (군수 정보체계 사용성 평가 기법개발에 관한 연구)

  • Um, Tae-Woong;Park, K.S.;Kim, Sang-Soo
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.8 no.4 s.23
    • /
    • pp.77-84
    • /
    • 2005
  • In this research, usability testing techniques were applied to the software of a currently operating Military Logistics System, and the resulting improvements were verified. Furthermore, this research aims to assist the army in conducting its own usability testing research in the future. The various usability testing techniques and guidelines presented in this paper can be easily adapted to the army's research environment. Consequently, the results of this research should play an important role in the future development and evaluation of Military Logistics System software.

An ANN-based gesture recognition algorithm for smart-home applications

  • Huu, Phat Nguyen;Minh, Quang Tran;The, Hoang Lai
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.5
    • /
    • pp.1967-1983
    • /
    • 2020
  • The goal of this paper is to analyze and build an algorithm that recognizes hand gestures for smart home applications. The proposed algorithm combines image processing techniques with artificial neural network (ANN) approaches to help users interact with computers through common gestures. We use five types of gestures: Stop, Forward, Backward, Turn Left, and Turn Right. Users control devices through a camera connected to a computer. The algorithm analyzes gestures and performs the appropriate action according to the user's request. The results show that the average accuracy of the proposed algorithm is 92.6 percent for images and more than 91 percent for video, both of which satisfy the performance requirements of real-world applications, specifically smart home services. The processing time is approximately 0.098 seconds on 10 frames/sec datasets. However, the accuracy rate still depends on the number of training images (videos) and their resolution.
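
The abstract does not describe the paper's ANN or image features, so as a stand-in the sketch below trains a multinomial logistic (softmax) classifier on hypothetical hand-shape features for the five gesture classes, just to illustrate the classification step that follows the image processing. The features, prototypes, and noise level are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
GESTURES = ["Stop", "Forward", "Backward", "Turn Left", "Turn Right"]

# Hypothetical per-frame features (e.g. fingertip count, convex-hull area
# ratio, hand orientation), drawn as noisy samples around one prototype
# vector per gesture; real features would come from the image pipeline.
protos = rng.normal(0.0, 1.0, (5, 3))
X = np.vstack([c + rng.normal(0.0, 0.15, (40, 3)) for c in protos])
y = np.repeat(np.arange(5), 40)

# Multinomial logistic ("softmax") classifier as a stand-in for the ANN.
W = np.zeros((3, 5)); b = np.zeros(5)
onehot = np.eye(5)[y]
for _ in range(500):
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    g = (p - onehot) / len(X)                 # cross-entropy gradient
    W -= 0.5 * (X.T @ g); b -= 0.5 * g.sum(axis=0)

acc = float(np.mean((X @ W + b).argmax(axis=1) == y))
print(round(acc, 2))
```

A deployed system would map the winning class index through a table like `GESTURES` to the device command to execute.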

Gesture Recognition by Analyzing a Trajectory on Spatio-Temporal Space (시공간상의 궤적 분석에 의한 제스쳐 인식)

  • 민병우;윤호섭;소정;에지마 도시야끼
    • Journal of KIISE: Software and Applications
    • /
    • v.26 no.1
    • /
    • pp.157-157
    • /
    • 1999
  • Gesture recognition has become a very interesting topic in the computer vision area. Gesture recognition from visual images has a number of potential applications such as HCI (Human Computer Interaction), VR (Virtual Reality), and machine vision. To overcome the technical barriers in visual processing, conventional approaches have employed cumbersome devices such as datagloves or color-marked gloves. In this research, we capture gesture images without using external devices and generate a gesture trajectory composed of point-tokens. The trajectory is spotted using phase-based velocity constraints and recognized using a discrete left-right HMM. Input vectors to the HMM are obtained by applying the LBG clustering algorithm on a polar-coordinate space, to which the point-tokens on the Cartesian space are converted. The gesture vocabulary is composed of twenty-two dynamic hand gestures for editing drawing elements. In our experiment, one hundred samples per gesture were collected from twenty persons: fifty were used for training and another fifty for the recognition experiment. The results show about a 95% recognition rate and suggest that these results can be applied to several potential systems operated by gestures. The developed system runs in real time for editing basic graphic primitives on a Pentium Pro (200 MHz) with a Matrox Meteor graphics board and a CCD camera, under Windows 95 with Visual C++.
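
The LBG clustering step that turns polar-coordinate point-tokens into discrete HMM symbols can be sketched as follows. This is a generic LBG (split-and-refine) vector quantizer on synthetic data, not the paper's actual codebook; the cluster positions, split factor, and codebook size are assumptions.

```python
import numpy as np

def lbg_codebook(data, n_codes=8, eps=0.01, iters=20):
    """LBG vector quantization: start from the global mean and repeatedly
    split every codeword, refining each level with k-means-style updates."""
    codebook = data.mean(axis=0, keepdims=True)
    while len(codebook) < n_codes:
        # Split each codeword into a slightly perturbed pair.
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(iters):
            d = ((data[:, None] - codebook[None]) ** 2).sum(-1)
            assign = d.argmin(1)
            for k in range(len(codebook)):
                pts = data[assign == k]
                if len(pts):
                    codebook[k] = pts.mean(0)   # recenter on assigned points
    return codebook

rng = np.random.default_rng(2)
# Synthetic trajectory point-tokens in polar coordinates (angle, radius).
pts = np.vstack([rng.normal(c, 0.1, (50, 2)) for c in
                 [(0.0, 1.0), (1.5, 1.0), (3.0, 0.5), (-1.5, 0.8)]])
cb = lbg_codebook(pts, n_codes=4)
symbols = ((pts[:, None] - cb[None]) ** 2).sum(-1).argmin(1)
print(cb.shape, len(set(symbols.tolist())))
```

The resulting `symbols` sequence is the kind of discrete observation stream a left-right HMM would then be trained on.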

An Injection Molding Process Management System based on Mobile Augmented Reality (모바일 증강현실 기반 사출성형공정 관리시스템)

  • Hong, Won-Pyo;Song, Jun-Yeob
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.31 no.7
    • /
    • pp.591-596
    • /
    • 2014
  • Augmented reality is a novel human-machine interaction that overlays virtual computer-generated information on a real world environment. It has found good potential applications in many fields, such as training, surgery, entertainment, maintenance, assembly, product design and other manufacturing operations. In this study, a smartphone-based augmented reality system was developed for the purpose of monitoring and managing injection molding production lines. Required management items were drawn from a management content analysis, and then the items were divided into two broad management categories: line management and equipment management. Effective work management was enabled by providing those working on the shop floor with management content information combined with the actual images of an injection molding production line through augmented reality.

Haptic Technology for the Mobile Device: Future Research and Opportunity in Business

  • Park, Joo-Won;Jo, Soo-Ran;Jeon, Se-Bom;Moon, Jung-Hoon
    • Proceedings of the Korea Society of Management Information Systems Conference
    • /
    • 2008.06a
    • /
    • pp.79-84
    • /
    • 2008
  • Haptics, the science and physiology of the sense of touch, has been investigated in engineering and HCI to provide better computing environments for users. Previous haptic technology focused mainly on PC environments; however, beginning with Apple's iPhone, haptic technology has recently entered our daily lives. Despite its popularization, the business opportunities the technology will bring have not yet been investigated thoroughly. This research forecasts the application of haptic technology to mobile devices and the consequent business opportunities, and proposes directions for future research in the field of MIS.


Cognitive Analysis of User Interactions with UNIX and its Application to System Design (시스템 디자인을 위한 유닉스 사용성의 인지적 분석)

  • Son, Yeong-U;Lee, Ji-Seon;Yuk, Hyeong-Min
    • Journal of the Ergonomics Society of Korea
    • /
    • v.22 no.3
    • /
    • pp.93-111
    • /
    • 2003
  • This research extends a general theory of cognition to address the cognitive constraints on complex command production and allows us to make system design recommendations. The research described in this paper addresses the cognitive origins of the problems users have producing sequence-dependent command strings while interacting with the UNIX operating system. We describe an empirical and theoretical analysis of user difficulties, and then show how our analyses lead to design recommendations. In addition, we summarize results from testing the impact of our design recommendations on system usability.

Development of User Interface Design Guidelines for Education Software Designers (교육용 소프트웨어 설계자를 위한 사용자 인터페이스 설계지침 개발)

  • Yun, Cheol-Ho
    • Journal of the Ergonomics Society of Korea
    • /
    • v.22 no.3
    • /
    • pp.45-56
    • /
    • 2003
  • This study was conducted to develop user interface design guidelines for those who design educational software products (web sites or CD-ROM titles). To establish the guideline scheme, international standards, commercial design guidelines, and research papers were surveyed; in particular, ISO 9241 served as the basic model for the scheme. First, the research group developed draft guidelines. Educational software developers, designers, and a user group then reviewed the draft, which was revised based on their comments. Five components were selected as the primary classes of the guideline scheme: general principles, dialogue design, user guidance, visual interface, and information presentation. Each component was divided into several subcomponents as a secondary class. Finally, 45 items were selected as user interface design guidelines for educational software design.

Constructing a Noise-Robust Speech Recognition System using Acoustic and Visual Information (청각 및 시각 정보를 이용한 강인한 음성 인식 시스템의 구현)

  • Lee, Jong-Seok;Park, Cheol-Hoon
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.13 no.8
    • /
    • pp.719-725
    • /
    • 2007
  • In this paper, we present an audio-visual speech recognition system for noise-robust human-computer interaction. Unlike usual speech recognition systems, our system utilizes the visual signal containing the speaker's lip movements along with the acoustic signal to obtain robust recognition performance against environmental noise. The procedures of acoustic speech processing, visual speech processing, and audio-visual integration are described in detail. Experimental results demonstrate that the constructed system significantly enhances recognition performance in noisy circumstances compared to acoustic-only recognition, by exploiting the complementary nature of the two signals.
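
A common way to realize the audio-visual integration step is late fusion: combine per-class log-likelihoods from the two streams with a reliability weight. The abstract does not specify the paper's integration method, so the sketch below is a generic weighted-fusion illustration with invented scores; in practice the weight would come from an acoustic reliability estimate such as SNR.

```python
import numpy as np

def fuse_scores(audio_loglik, visual_loglik, audio_weight):
    """Weighted late fusion of per-class log-likelihoods. audio_weight in
    [0, 1] reflects how much the acoustic stream is trusted; noisy audio
    shifts the decision toward the visual (lip-movement) stream."""
    lam = np.clip(audio_weight, 0.0, 1.0)
    return lam * audio_loglik + (1.0 - lam) * visual_loglik

# Toy scores: the audio model favors word 1, the visual model word 0.
audio = np.array([-5.0, -1.0, -6.0])    # log P(obs | word) from audio
visual = np.array([-1.5, -4.0, -5.0])   # log P(obs | word) from lips
print(fuse_scores(audio, visual, 0.9).argmax())  # clean audio: picks word 1
print(fuse_scores(audio, visual, 0.1).argmax())  # noisy audio: picks word 0
```

The two calls show the complementary behavior the abstract describes: the same observations yield different decisions as trust shifts between the acoustic and visual channels.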