• Title/Abstract/Keywords: human and computer interaction

Search results: 607 items, processing time 0.029 seconds

Highly Stretchable, Transparent Ionic Touch Panel

  • Sun, Jeong-Yun
    • 한국표면공학회:학술대회논문집
    • /
    • 한국표면공학회 2017년도 춘계학술대회 논문집
    • /
    • pp.51-63
    • /
    • 2017
  • The touch panel was developed decades ago and has become a popular input device in daily life. Because human-computer interaction is becoming more important, the next generation of touch panels requires stretchability and bio-compatibility to allow integration with the human body. However, because most touch panels have been developed on stiff, brittle electrodes, electronic touch panels face difficulties in meeting such requirements. In this paper, for the first time, we demonstrate an ionic touch panel based on a polyacrylamide hydrogel containing LiCl ions. The panel is soft and stretchable and can therefore sustain large deformation. The panel can freely transmit light information because the hydrogel is transparent, with 99% transmittance for visible light. A 1-dimensional touch strip was investigated to reveal the basic sensing mechanism, and a 2-dimensional touch panel was developed to demonstrate its functionalities. The ionic touch panel was operated under high deformation, with more than 1000% areal strain, without sacrificing its functionalities. Furthermore, an epidermal touch panel on the skin was developed to demonstrate the mechanical and optical invisibility of the ionic touch panel through writing words, playing the piano, and playing games.
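
The 1-dimensional sensing mechanism summarized above is, in essence, surface-capacitive: a touch couples the strip to ground through the finger, and the current drawn at each end of the strip depends on how far the touch point is from that end. The Python sketch below illustrates that readout idea only; the function names, the uniform-strip assumption, and the simple current-ratio estimate are illustrative assumptions, not the paper's actual circuit.

```python
# Hypothetical readout sketch for a 1-D surface-capacitive (ionic) touch strip.
# Assumes a uniform strip: the touch point splits the strip into two resistances
# proportional to distance, so the current at each electrode is larger the
# closer the touch is to that electrode.

def estimate_touch_position(i_left: float, i_right: float, length_mm: float) -> float:
    """Estimate the touch position (mm from the left electrode) from the two
    electrode currents."""
    total = i_left + i_right
    if total <= 0:
        raise ValueError("no touch detected (zero total current)")
    alpha = i_left / total            # fraction of current drawn at the left end
    return (1.0 - alpha) * length_mm  # more left current => touch nearer the left end


if __name__ == "__main__":
    # Example: currents (in microamperes) measured at both ends of a 100 mm strip.
    print(estimate_touch_position(i_left=3.0, i_right=1.0, length_mm=100.0))  # ~25 mm from the left
```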


A Proposal of the Olfactory Information Presentation Method and Its Application for Scent Generator Using Web Service

  • Kim, Jeong-Do;Byun, Hyung-Gi
    • 센서학회지
    • /
    • Vol. 21, No. 4
    • /
    • pp.249-255
    • /
    • 2012
  • Among the human senses, olfactory information still lacks a proper data presentation method, unlike vision and auditory information. This makes it impossible to present the sense of smell as multimedia information, which may be an exploratory field in human-computer interaction. In this paper, we propose an olfactory information presentation method, which is a way to use smell as multimedia information, and show an application for scent generation and odor display using a web service. The olfactory information can represent smell characteristics such as intensity, persistence, hedonic tone, and odor description. The structure of the data format based on olfactory information can also be organized according to data types such as integer, float, char, string, and bitmap. Furthermore, it can be used for data transmission via a web service and for odor display using a scent generator. The scent generator, which can display smell information, was developed to generate 6 odors using 6 aroma solutions and a diluting solution with 14 micro-valves and a micro-pump. Through the experiment, we confirm that a remote user can grasp smell information transmitted by a messenger service and request odor display from the computer-controlled scent generator. This work contributes to enlarging existing virtual reality and is proposed as a standard reference method for olfactory information presentation in future multimedia technology.
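
The abstract describes a structured record for olfactory information (intensity, persistence, hedonic tone, odor description) built from primitive data types. A minimal sketch of such a record is given below; the field names, their types, and the JSON serialization for web-service transport are assumptions chosen for illustration, not the paper's actual format.

```python
# Hypothetical olfactory-information record, sketched after the smell
# characteristics listed in the abstract. Field names/types and the JSON
# transport encoding are illustrative assumptions, not the paper's format.
import json
from dataclasses import dataclass, asdict

@dataclass
class OlfactoryInfo:
    intensity: int          # e.g. 0 (none) .. 5 (very strong)
    persistence: float      # how long the smell lingers, in seconds
    hedonic_tone: int       # e.g. -4 (very unpleasant) .. +4 (very pleasant)
    odor_description: str   # free-text descriptor, e.g. "lavender"

    def to_message(self) -> str:
        """Serialize the record for transmission over a web/messenger service."""
        return json.dumps(asdict(self))

if __name__ == "__main__":
    msg = OlfactoryInfo(intensity=3, persistence=12.5, hedonic_tone=2,
                        odor_description="lavender").to_message()
    print(msg)  # this string could be sent to a scent-generator controller
```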

스마트폰 기반 행동인식 기술 동향 (Trends in Activity Recognition Using Smartphone Sensors)

  • 김무섭;정치윤;손종무;임지연;정승은;정현태;신형철
    • 전자통신동향분석
    • /
    • Vol. 33, No. 3
    • /
    • pp.89-99
    • /
    • 2018
  • Human activity recognition (HAR) is a technology that aims to automatically recognize what a person is doing from their body motion and gestures. HAR is essential in many applications such as human-computer interaction, health care, rehabilitation engineering, video surveillance, and artificial intelligence. Smartphones are becoming the most popular platform for activity recognition owing to their convenience, portability, and ease of use. The most noticeable change in smartphone-based activity recognition is the adoption of deep learning algorithms, which has led to successful learning outcomes. In this article, we analyze technology trends in activity recognition using smartphone sensors, challenging issues for future development, and a strategy change in terms of the generation of activity recognition datasets.
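
As a concrete illustration of the deep-learning approach the survey highlights, the sketch below shows a minimal 1-D convolutional classifier over fixed-length windows of tri-axial accelerometer data. The window length, number of activity classes, and network shape are assumptions chosen for illustration; they are not taken from any specific system discussed in the article.

```python
# Minimal 1-D CNN for smartphone accelerometer-based activity recognition.
# Window length (128 samples), 3 channels (x, y, z), and 6 activity classes
# are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def build_har_model(window_len: int = 128, channels: int = 3, n_classes: int = 6):
    return tf.keras.Sequential([
        layers.Input(shape=(window_len, channels)),
        layers.Conv1D(32, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(n_classes, activation="softmax"),
    ])

if __name__ == "__main__":
    model = build_har_model()
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # Random arrays stand in for windowed sensor recordings and activity labels.
    x = np.random.randn(32, 128, 3).astype("float32")
    y = np.random.randint(0, 6, size=32)
    model.fit(x, y, epochs=1, verbose=0)
    print(model.predict(x[:1]).shape)  # (1, 6) class probabilities
```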

Half-Against-Half Multi-class SVM Classify Physiological Response-based Emotion Recognition

  • ;고광은;박승민;심귀보
    • 한국지능시스템학회논문지
    • /
    • Vol. 23, No. 3
    • /
    • pp.262-267
    • /
    • 2013
  • The recognition of human emotional state is one of the most important components for efficient human-human and human-computer interaction. In this paper, the main emotion recognition problem was to classify four emotions (fear, disgust, joy, and neutral), and the experiment was designed around visual stimuli for eliciting emotions while recording the physiological signals of skin conductance (SC), skin temperature (SKT), and blood volume pulse (BVP). To solve this problem, a half-against-half (HAH) multi-class support vector machine (SVM) with a Gaussian radial basis function (RBF) kernel is proposed as an effective technique for improving the accuracy of emotion classification. The experimental results show that the proposed method is efficient for emotion recognition, with accuracy rates of 90% for neutral, 86.67% for joy, 85% for disgust, and 80% for fear.
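
A half-against-half multi-class SVM arranges the classes in a binary tree: a single RBF-kernel SVM at the root separates one half of the classes from the other, and each half is split again until single classes remain. The sketch below shows this structure for the four emotions in the abstract; the particular split ({fear, disgust} vs. {joy, neutral}) and the synthetic features standing in for SC/SKT/BVP features are assumptions.

```python
# Sketch of a half-against-half (HAH) multi-class SVM with an RBF kernel for
# four classes: 0=fear, 1=disgust, 2=joy, 3=neutral. The half split and the
# synthetic features standing in for SC/SKT/BVP features are assumptions.
import numpy as np
from sklearn.svm import SVC

LEFT, RIGHT = [0, 1], [2, 3]          # {fear, disgust} vs. {joy, neutral}

def train_hah(X, y):
    """Train the root half-vs-half SVM plus one SVM inside each half."""
    root = SVC(kernel="rbf").fit(X, np.isin(y, RIGHT).astype(int))
    left_svm = SVC(kernel="rbf").fit(X[np.isin(y, LEFT)], y[np.isin(y, LEFT)])
    right_svm = SVC(kernel="rbf").fit(X[np.isin(y, RIGHT)], y[np.isin(y, RIGHT)])
    return root, left_svm, right_svm

def predict_hah(models, X):
    root, left_svm, right_svm = models
    half = root.predict(X)            # 0 -> left half, 1 -> right half
    out = np.empty(len(X), dtype=int)
    if (half == 0).any():
        out[half == 0] = left_svm.predict(X[half == 0])
    if (half == 1).any():
        out[half == 1] = right_svm.predict(X[half == 1])
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = np.repeat(np.arange(4), 50)                 # toy labels
    X = rng.normal(size=(200, 6)) + y[:, None]      # toy features, one cluster per class
    models = train_hah(X, y)
    print((predict_hah(models, X) == y).mean())     # training accuracy on toy data
```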

캡스톤 디자인을 통한 3D Depth 센서 기반 HRI 시스템의 위치추정 알고리즘 연구 (A Study of Localization Algorithm of HRI System based on 3D Depth Sensor through Capstone Design)

  • 이동명
    • 공학교육연구
    • /
    • Vol. 19, No. 6
    • /
    • pp.49-56
    • /
    • 2016
  • In this paper, a Human Robot Interface (HRI) system based on a 3D depth sensor for a docent robot is developed, and a localization algorithm based on the extended Kalman filter (EKFLA) is proposed, through a capstone design project carried out by graduate students. In addition, the performance of the proposed EKFLA is analyzed. The developed HRI system consists of the route generation and localization algorithm, the user behavior pattern awareness algorithm, the map data generation and building algorithm, and the obstacle detection and avoidance algorithm, running on the robot control modules that govern the entire behavior of the robot. It is confirmed that the localization error of EKFLA on scenarios 1-3 improves on the Kalman-filter-based localization algorithm (KFLA) by 21.96%, 25.81%, and 15.03%, respectively.
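
The extended Kalman filter at the core of EKFLA alternates a motion-model prediction with a measurement update that linearizes the (generally nonlinear) models through their Jacobians. Below is a generic, minimal EKF step in Python; the motion and measurement models, noise covariances, and state layout used in the paper are not given in the abstract, so the ones here are placeholders.

```python
# Generic extended Kalman filter step (predict + update). The motion model f,
# measurement model h, their Jacobians, and the noise covariances Q and R are
# placeholders; the paper's actual models are not specified in the abstract.
import numpy as np

def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
    """One EKF iteration for state x with covariance P, control u, measurement z."""
    # Predict: propagate the state through the motion model and linearize it.
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction using the measurement.
    H = H_jac(x_pred)
    y = z - h(x_pred)                       # innovation
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

if __name__ == "__main__":
    # Toy example: 2-D position state, odometry-like motion, direct position measurement.
    f = lambda x, u: x + u
    F_jac = lambda x, u: np.eye(2)
    h = lambda x: x
    H_jac = lambda x: np.eye(2)
    x, P = np.zeros(2), np.eye(2)
    x, P = ekf_step(x, P, u=np.array([0.10, 0.00]), z=np.array([0.12, 0.01]),
                    f=f, F_jac=F_jac, h=h, H_jac=H_jac,
                    Q=0.01 * np.eye(2), R=0.05 * np.eye(2))
    print(x)
```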

Motion Pattern Detection for Dynamic Facial Expression Understanding

  • Mizoguchi, Hiroshi;Hiramatsu, Seiyo;Hiraoka, Kazuyuki;Tanaka, Masaru;Shigehara, Takaomi;Mishima, Taketoshi
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2002년도 ITC-CSCC -3
    • /
    • pp.1760-1763
    • /
    • 2002
  • In this paper, the authors present their attempt to realize a motion pattern detector that finds a specified sequence of images within an input motion image. The detector is intended to be used for understanding time-varying facial expressions. Needless to say, facial expression understanding by machine is crucial and enriches the quality of human-machine interaction. Among the various facial expressions there are some, such as blinking, that cannot be recognized if the input expression image is static: a still image of blinking cannot be distinguished from one of sleeping. In this paper, the authors discuss the implementation of their motion pattern detector and describe experiments using it. The experimental results confirm the feasibility of the idea behind the implemented detector.
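
The abstract does not detail the detector's internals, but the basic task it names, finding a specified short sequence of frames inside a longer motion image, can be illustrated with a simple sliding-window template match over the frame sequence. The distance measure and the use of the best-scoring window below are illustrative assumptions, not the authors' method.

```python
# Illustrative sliding-window matcher: locate where a short template clip best
# matches a longer frame sequence. The mean-squared-difference score is an
# assumption standing in for whatever matching the paper's detector uses.
import numpy as np

def match_motion_pattern(frames: np.ndarray, template: np.ndarray):
    """frames: (N, H, W) grayscale sequence; template: (T, H, W) with T <= N.
    Returns (best start index, best score); a lower score means a better match."""
    n, t = len(frames), len(template)
    scores = [np.mean((frames[i:i + t] - template) ** 2) for i in range(n - t + 1)]
    best = int(np.argmin(scores))
    return best, float(scores[best])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    seq = rng.random((50, 32, 32))           # 50 random "frames"
    tmpl = seq[20:26].copy()                 # plant the target pattern at frame 20
    idx, score = match_motion_pattern(seq, tmpl)
    print(idx, score)                        # expect idx == 20 and score ~ 0
```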


Development and Evaluation of the V-Catch Vision System

  • Kim, Dong Keun;Cho, Yongjoo;Park, Kyoung Shin
    • 한국컴퓨터정보학회논문지
    • /
    • Vol. 27, No. 3
    • /
    • pp.45-52
    • /
    • 2022
  • A motion-based (exergame-style) sports game is an exercise game that uses sensors or cameras to track the user's body movements and create a sense of realism. Recently, virtual-reality indoor sports room systems have been installed to bring such motion-based sports games into school physical education. However, these systems mainly rely on screen-touch interaction. In this study, we developed the V-Catch vision system, which uses AI image recognition technology to track the user's movements in 3D space rather than through 2D wall-touch interaction. We also conducted a usability evaluation experiment to examine the exercise effects of the system. In the experiment, we measured the participants' blood oxygen saturation, real-time heart rate variability, Kinect skeleton displacement, and joint angle change to assess the quantitative exercise effect. The results showed a statistically significant increase in heart rate and in the amount of body movement when using the V-Catch vision system, indicating a genuine exercise effect. In the post-experiment subjective questionnaire, most participants found exercising with the system enjoyable and satisfying.
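
One of the quantitative measures above, the Kinect skeleton displacement, can be summarized as the accumulated joint movement between consecutive skeleton frames. The sketch below shows that accumulation; the joint count, units, and sampling rate are illustrative assumptions rather than the study's exact measurement protocol.

```python
# Illustrative accumulation of Kinect skeleton movement: the sum of per-joint
# displacements between consecutive frames. Joint count (25), units (meters),
# and sampling rate are assumptions, not the study's exact protocol.
import numpy as np

def total_skeleton_displacement(joints: np.ndarray) -> float:
    """joints: (frames, n_joints, 3) array of 3-D joint positions.
    Returns the summed Euclidean displacement of all joints over all frames."""
    deltas = np.diff(joints, axis=0)                  # frame-to-frame motion vectors
    return float(np.linalg.norm(deltas, axis=2).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # ~10 s of data at 30 fps: 300 frames of 25 joints drifting slightly.
    skeleton = np.cumsum(rng.normal(scale=0.002, size=(300, 25, 3)), axis=0)
    print(f"total displacement: {total_skeleton_displacement(skeleton):.2f} m")
```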

Robust Person Identification Using Optimal Reliability in Audio-Visual Information Fusion

  • Tariquzzaman, Md.;Kim, Jin-Young;Na, Seung-You;Choi, Seung-Ho
    • The Journal of the Acoustical Society of Korea
    • /
    • Vol. 28, No. 3E
    • /
    • pp.109-117
    • /
    • 2009
  • Identity recognition in real environments with a reliable mode is a key issue in human-computer interaction (HCI). In this paper, we present a robust person identification system that considers a score-based optimal reliability measure of the audio-visual modalities. We propose an extension of the modified reliability function by introducing optimizing parameters for both the audio and visual modalities. To degrade the visual signals, we applied JPEG compression to the test images. In addition, to create a mismatch between the enrollment and test sessions, acoustic babble noise and artificial illumination were added to the test audio and visual signals, respectively. Local PCA was used on both modalities to reduce the dimension of the feature vectors. We applied a swarm intelligence algorithm, particle swarm optimization, to optimize the modified reliability function's optimizing parameters. The overall person identification experiments were performed on the VidTIMIT DB. The experimental results show that the proposed optimal reliability measures enhanced the identification accuracy by 7.73% and 8.18% under, respectively, a changed illumination direction on the visual signal and the corresponding babble noise on the audio signal, in comparison with the best classifier in the fusion system, while maintaining the modality reliability statistics in terms of performance, thus verifying the consistency of the proposed extension.
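
Score-level audio-visual fusion of the kind described above typically combines per-modality matching scores with reliability-dependent weights; in the paper, the weighting parameters are tuned with particle swarm optimization. The sketch below shows the weighted-fusion step, with a simple grid search standing in for PSO; the weighting form and the toy scores are assumptions.

```python
# Reliability-weighted score-level fusion of audio and visual identification
# scores. A coarse grid search stands in for the paper's particle swarm
# optimization; the weighting form and the toy scores are illustrative assumptions.
import numpy as np

def fuse(audio_scores, visual_scores, w):
    """Weighted sum of per-identity scores; w in [0, 1] is the audio weight."""
    return w * audio_scores + (1.0 - w) * visual_scores

def tune_weight(audio_scores, visual_scores, true_ids, grid=101):
    """Pick the audio weight that maximizes identification accuracy."""
    best_w, best_acc = 0.0, -1.0
    for w in np.linspace(0.0, 1.0, grid):
        pred = fuse(audio_scores, visual_scores, w).argmax(axis=1)
        acc = float((pred == true_ids).mean())
        if acc > best_acc:
            best_w, best_acc = float(w), acc
    return best_w, best_acc

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n_trials, n_ids = 200, 20
    true_ids = rng.integers(0, n_ids, n_trials)
    # Toy per-identity scores: noise plus a bump at the true identity.
    audio = rng.normal(size=(n_trials, n_ids)); audio[np.arange(n_trials), true_ids] += 1.0
    visual = rng.normal(size=(n_trials, n_ids)); visual[np.arange(n_trials), true_ids] += 2.0
    print(tune_weight(audio, visual, true_ids))   # (best audio weight, best accuracy)
```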

A Decision Tree based Real-time Hand Gesture Recognition Method using Kinect

  • Chang, Guochao;Park, Jaewan;Oh, Chimin;Lee, Chilwoo
    • 한국멀티미디어학회논문지
    • /
    • Vol. 16, No. 12
    • /
    • pp.1393-1402
    • /
    • 2013
  • Hand gestures are one of the most popular communication methods in everyday life. In human-computer interaction applications, hand gesture recognition provides a natural way of communication between humans and computers. There are mainly two approaches to hand gesture recognition: glove-based methods and vision-based methods. In this paper, we propose a vision-based hand gesture recognition method using Kinect. Using the depth information makes the hand detection process efficient and robust. Finger labeling lets the system classify poses according to the finger names and the relationships between fingers, and it also makes the classification more effective and accurate. Two kinds of gesture sets can be recognized by our system. According to the experiments, the average accuracy on the American Sign Language (ASL) number gesture set is 94.33%, and that on the general gesture set is 95.01%. Since our system runs in real time and has a high recognition rate, it can be embedded into various applications.
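
The classifier named in the title is a decision tree over finger-level features. The minimal sketch below trains a decision tree on hand-crafted features such as the number of extended fingers and the spread angle between them; the feature set and the synthetic training data are assumptions standing in for the paper's finger-labeling output.

```python
# Minimal decision-tree classifier over hand-crafted finger features
# (extended-finger count, spread angle). The feature set and the synthetic
# training data are assumptions, not the paper's finger-labeling output.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

if __name__ == "__main__":
    rng = np.random.default_rng(4)

    def make_samples(finger_count, spread_deg, label, n=50):
        # Features per sample: [extended_finger_count, max_inter_finger_angle_deg]
        feats = np.column_stack([np.full(n, finger_count) + rng.normal(0, 0.2, n),
                                 np.full(n, spread_deg) + rng.normal(0, 5.0, n)])
        return feats, np.full(n, label)

    X0, y0 = make_samples(1, 10, 0)   # e.g. gesture "one"
    X1, y1 = make_samples(2, 30, 1)   # e.g. gesture "two" / "V"
    X2, y2 = make_samples(5, 70, 2)   # e.g. gesture "open hand"
    X, y = np.vstack([X0, X1, X2]), np.concatenate([y0, y1, y2])

    clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print(clf.predict([[2.1, 28.0]]))  # expected: class 1 (the "two" gesture)
```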

바이오센서 기반 특징 추출 기법 및 감정 인식 모델 개발 (Development of Bio-sensor-Based Feature Extraction and Emotion Recognition Model)

  • 조예리;배동성;이윤규;안우진;임묘택;강태구
    • 전기학회논문지
    • /
    • Vol. 67, No. 11
    • /
    • pp.1496-1505
    • /
    • 2018
  • Emotion recognition technology is necessary for human-computer interaction and communication. There are many cases where one cannot communicate without considering the other's emotion; as such, emotion recognition technology is an essential element in the field of communication and is highly utilized in various fields. Various bio-sensors are used for human emotion recognition and can be used to measure emotions. This paper proposes a system for recognizing human emotions using two physiological sensors. For emotion classification, Russell's two-dimensional emotion model was used, and a classification method based on personality was proposed by extracting sensor-specific features. In addition, the emotion model was divided into four emotions using the support vector machine (SVM) classification algorithm. Finally, the proposed emotion recognition system was evaluated through a practical experiment.
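
Russell's circumplex model places emotions on a valence-arousal plane, and dividing that plane into four quadrants yields four emotion classes that an SVM can learn from physiological features. The sketch below illustrates that pipeline on synthetic valence/arousal-like features; the quadrant labels and the toy data are assumptions, not the paper's sensor features.

```python
# Sketch: four-quadrant emotion classification in Russell's valence-arousal
# plane with an SVM. The synthetic features standing in for bio-sensor-derived
# valence/arousal estimates, and the quadrant labels, are assumptions.
import numpy as np
from sklearn.svm import SVC

QUADRANTS = {0: "happy (+valence, +arousal)", 1: "angry (-valence, +arousal)",
             2: "sad (-valence, -arousal)", 3: "relaxed (+valence, -arousal)"}

def quadrant_label(valence: np.ndarray, arousal: np.ndarray) -> np.ndarray:
    """Map (valence, arousal) points to the four quadrants of Russell's model."""
    return np.select(
        [(valence >= 0) & (arousal >= 0),
         (valence < 0) & (arousal >= 0),
         (valence < 0) & (arousal < 0)],
        [0, 1, 2], default=3)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    X = rng.uniform(-1, 1, size=(400, 2))        # toy (valence, arousal) features
    y = quadrant_label(X[:, 0], X[:, 1])
    clf = SVC(kernel="rbf").fit(X, y)
    pred = int(clf.predict([[0.6, -0.4]])[0])
    print(QUADRANTS[pred])                       # expected: the "relaxed" quadrant
```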