• Title/Abstract/Keyword: Facial signals


Implementation of communication system using signals originating from facial muscle constructions

  • Kim, EungSoo; Eum, TaeWan
    • International Journal of Fuzzy Logic and Intelligent Systems, Vol. 4, No. 2, pp. 217-222, 2004
  • People communicate with each other using language. However, a severely disabled person may be unable to convey his or her ideas even through writing or gestures. We implemented a communication system using EEG so that disabled people can communicate. After extracting features from EEG that includes facial muscle signals, the facial muscle activity is converted into a control signal, which is then used to select characters and convey the user's ideas.
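
The abstract does not detail how facial-muscle activity embedded in the EEG becomes a control signal. Below is a minimal sketch of one plausible pipeline in Python: a crude high-pass step to isolate EMG-band energy, an RMS envelope, and a threshold that turns deliberate contractions into binary selection events. The sampling rate, window length, threshold factor, and function name are illustrative assumptions, not details from the paper.

```python
import numpy as np

def facial_muscle_events(eeg, fs=256, win_sec=0.25, k=3.0):
    """Turn facial-muscle bursts embedded in an EEG channel into selection events."""
    eeg = np.asarray(eeg, dtype=float)
    # crude high-pass: subtract a moving average to suppress slow EEG rhythms
    slow_win = int(fs * 0.5)
    slow = np.convolve(eeg, np.ones(slow_win) / slow_win, mode="same")
    emg_band = eeg - slow
    # short-window RMS envelope of the remaining high-frequency activity
    win = int(fs * win_sec)
    rms = np.sqrt(np.convolve(emg_band ** 2, np.ones(win) / win, mode="same"))
    # threshold at k times the resting (median) level -> binary control signal
    control = rms > k * np.median(rms)
    # rising edges of the control signal mark 'select' events
    events = np.flatnonzero(np.diff(control.astype(int)) == 1)
    return control, events
```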

Discrimination of Emotional States In Voice and Facial Expression

  • Kim, Sung-Ill; Yasunari Yoshitomi; Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea, Vol. 21, No. 2E, pp. 98-104, 2002
  • The present study describes a combination method to recognize human affective states such as anger, happiness, sadness, and surprise. For this, we extracted emotional features from voice signals and facial expressions and trained them for emotional-state recognition using a hidden Markov model (HMM) and a neural network (NN). For voice, we used prosodic parameters such as pitch, energy, and their derivatives, which were trained with HMMs for recognition. For facial expressions, on the other hand, we used feature parameters extracted from thermal and visible images, which were trained with an NN for recognition. The recognition rates for the combined voice and facial-expression parameters were better than those for either set of parameters alone. The simulation results were also compared with human questionnaire results.
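
As a rough illustration of the prosodic features the voice branch relies on (pitch, energy, and their derivatives), the sketch below computes frame-level pitch with a plain autocorrelation peak search plus log energy, and appends their deltas. The frame sizes, pitch search range, and voicing threshold are assumptions for illustration; the paper's exact front end and HMM training are not reproduced.

```python
import numpy as np

def prosodic_features(speech, fs=16000, frame_ms=25, hop_ms=10):
    """Frame-level [pitch, log-energy] features plus their first derivatives."""
    frame = int(fs * frame_ms / 1000)
    hop = int(fs * hop_ms / 1000)
    lo, hi = int(fs / 400), int(fs / 60)        # search pitch between ~60 and 400 Hz

    feats = []
    for start in range(0, len(speech) - frame, hop):
        x = speech[start:start + frame] - np.mean(speech[start:start + frame])
        energy = np.log(np.sum(x ** 2) + 1e-10)
        ac = np.correlate(x, x, mode="full")[frame - 1:]     # autocorrelation, lags >= 0
        lag = lo + np.argmax(ac[lo:hi])
        pitch = fs / lag if ac[lag] > 0.3 * ac[0] else 0.0   # 0.0 marks an unvoiced frame
        feats.append([pitch, energy])

    feats = np.asarray(feats)
    deltas = np.gradient(feats, axis=0)          # first-order derivatives over frames
    return np.hstack([feats, deltas])            # shape: (n_frames, 4)
```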

HMM-Based Automatic Speech Recognition using EMG Signal

  • Lee, Ki-Seung
    • 대한의용생체공학회:의공학회지, Vol. 27, No. 3, pp. 101-109, 2006
  • It is known that there is a strong relationship between the human voice and the movements of the articulatory facial muscles. In this paper, we exploit this relationship to implement an automatic speech recognition scheme that uses only surface electromyogram (EMG) signals. The EMG signals were acquired from three articulatory facial muscles. As a preliminary study, 10 Korean digits were used as the recognition vocabulary. Various feature parameters, including filter bank outputs, linear predictive coefficients, and cepstral coefficients, were evaluated to find appropriate parameters for EMG-based speech recognition. The sequence of EMG signals for each word is modeled within a hidden Markov model (HMM) framework. A continuous word recognition approach was investigated in this work; hence, the model for each word is obtained by concatenating subword models, and embedded re-estimation techniques were employed in the training stage. The findings indicate that such a system may recognize speech with an accuracy of up to 90% when mel-filter bank outputs are used as the recognition features.
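
The filter-bank features mentioned in the abstract can be illustrated with a short sketch: framed EMG is windowed, its power spectrum is computed, and band energies are log-compressed. The linearly spaced bands, frame sizes, and sampling rate below are assumptions; the paper uses mel-spaced filter banks and its own parameter choices.

```python
import numpy as np

def emg_filterbank_features(emg, fs=1000, n_bands=8, frame_ms=50, hop_ms=25):
    """Log filter-bank energies of one EMG channel, frame by frame."""
    frame = int(fs * frame_ms / 1000)
    hop = int(fs * hop_ms / 1000)
    window = np.hamming(frame)

    n_fft = 256
    # band edges: n_bands equal-width bands over the positive FFT bins
    edges = np.linspace(0, n_fft // 2 + 1, n_bands + 1, dtype=int)

    feats = []
    for start in range(0, len(emg) - frame, hop):
        spec = np.abs(np.fft.rfft(emg[start:start + frame] * window, n_fft)) ** 2
        bands = [np.log(spec[edges[b]:edges[b + 1]].sum() + 1e-10)
                 for b in range(n_bands)]
        feats.append(bands)
    return np.asarray(feats)            # shape: (n_frames, n_bands)
```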

Robust Extraction of Heartbeat Signals from Mobile Facial Videos (모바일 얼굴 비디오로부터 심박 신호의 강건한 추출)

  • 로말리자쟝피에르; 박한훈
    • 융합신호처리학회논문지, Vol. 20, No. 1, pp. 51-56, 2019
  • This paper proposes an improved heartbeat-signal extraction method for BCG-based heart rate measurement in mobile environments. First, by simultaneously tracking facial features and background features in a video of the user's face captured with a mobile camera, a head motion signal is extracted from which the effect of hand tremor has been removed. Then, to accurately separate the heartbeat signal from the head motion signal, a new method for computing the periodicity of the signal is proposed. The proposed method can robustly extract heartbeat signals from mobile facial videos and measures heart rate more accurately than existing methods, reducing the measurement error by 3-4 bpm.
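
The paper's key step is a periodicity measure that isolates the heartbeat component of the head-motion signal. As a stand-in, the sketch below scores periodicity with a simple autocorrelation peak search restricted to a plausible heart-rate band and converts the winning lag to beats per minute. The camera frame rate, bpm range, and function name are assumptions, not the method proposed in the paper.

```python
import numpy as np

def estimate_heart_rate(head_motion, fs=30.0, bpm_range=(45, 150)):
    """Estimate heart rate (bpm) and a periodicity score from a BCG-style head-motion trace."""
    x = np.asarray(head_motion, dtype=float) - np.mean(head_motion)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # autocorrelation, lags >= 0

    # lags corresponding to the allowed heart-rate band
    lag_min = int(fs * 60.0 / bpm_range[1])
    lag_max = int(fs * 60.0 / bpm_range[0])
    lag = lag_min + np.argmax(ac[lag_min:lag_max])

    periodicity = ac[lag] / ac[0]    # 1.0 would mean a perfectly periodic signal
    bpm = 60.0 * fs / lag
    return bpm, periodicity
```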

Development of Character Input System using Facial Muscle Signal and Minimum List Keyboard (안면근 신호를 이용한 최소 자판 문자 입력 시스템의 개발)

  • 김홍현; 박현석; 김응수
    • 한국정보통신학회 학술대회논문집 (한국해양정보통신학회 2009년도 추계학술대회), pp. 289-292, 2009
  • People express their intentions to one another mainly through language. However, severely disabled people who cannot speak, and in particular those with total paralysis, cannot convey their intentions even through writing or gestures. In this paper, we therefore implemented a communication device using facial muscle signals so that such severely disabled people can communicate. In particular, features are extracted from EEG containing facial muscle signals and converted into a generic control signal; by linking this control signal to a minimal keyboard for character selection, severely disabled users can convey their intentions efficiently.
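
To make the "minimum list keyboard" idea concrete, here is a rough sketch of how a single binary control signal derived from facial-muscle activity could drive a two-pass scanning keyboard (pick a character group, then a character within it). The character groups, dwell time, and helper names are hypothetical; the paper's actual keyboard layout and timing are not specified here.

```python
import itertools
import time

# Hypothetical character groups for a scanning "minimum list" keyboard.
GROUPS = ["ABCDEF", "GHIJKL", "MNOPQR", "STUVWX", "YZ .,?"]

def scan_select(read_control, items, dwell=1.0):
    """Highlight items in turn; return the one active when read_control() fires."""
    for item in itertools.cycle(items):
        print("highlighted:", item)
        t0 = time.time()
        while time.time() - t0 < dwell:
            if read_control():                 # e.g. a facial-muscle event detector
                return item
            time.sleep(0.02)

def type_one_character(read_control):
    group = scan_select(read_control, GROUPS)        # first pass: pick a group
    return scan_select(read_control, list(group))    # second pass: pick a letter
```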


Development of Character Input System using Facial Muscle Signal and Minimum List Keyboard (안면근 신호를 이용한 최소 자판 문자 입력 시스템의 개발)

  • 김홍현; 김응수
    • 한국정보통신학회논문지, Vol. 14, No. 6, pp. 1338-1344, 2010
  • People express their intentions to one another mainly through language. However, severely disabled people who cannot speak, and in particular those with total paralysis, cannot effectively convey their intentions even through writing or gestures. In this paper, we therefore implemented a communication device using facial muscle signals so that such severely disabled people can communicate. In particular, features are extracted from EEG containing facial muscle signals and converted into a generic control signal; by linking this control signal to a minimal keyboard for character selection, severely disabled users can convey their intentions effectively.

Emotion Recognition Method Based on Multimodal Sensor Fusion Algorithm

  • Moon, Byung-Hyun; Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems, Vol. 8, No. 2, pp. 105-110, 2008
  • Humans recognize emotion by fusing information from the other person's speech signal, facial expression, gestures, and bio-signals. Computers need technologies for recognizing emotion the way humans do, using such combined information. In this paper, we recognize five emotions (normal, happiness, anger, surprise, sadness) from speech signals and facial images, and we propose a multimodal method that fuses the two recognition results. Emotion recognition from the speech signal and from the facial image each uses Principal Component Analysis (PCA), and the multimodal step fuses the resulting decisions with a fuzzy membership function. In our experiments, the average emotion recognition rate was 63% using speech signals and 53.4% using facial images; that is, the speech signal offers a better recognition rate than the facial image. To raise the recognition rate further, we propose a decision fusion method using an S-type membership function. With the proposed method, the average recognition rate is 70.4%, showing that decision fusion offers a better recognition rate than either the facial image or the speech signal alone.
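
The decision-fusion step can be illustrated with a small sketch: each modality's per-class scores are weighted by an S-type membership function of that modality's own confidence before the fused class is chosen. The membership breakpoints and the use of the maximum class score as the confidence input are assumptions for illustration, not the exact formulation in the paper.

```python
import numpy as np

def s_membership(x, a=0.2, b=0.8):
    """Standard S-shaped membership function: 0 at or below a, 1 at or above b."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    mid = (a + b) / 2.0
    if x <= mid:
        return 2.0 * ((x - a) / (b - a)) ** 2
    return 1.0 - 2.0 * ((x - b) / (b - a)) ** 2

def fuse_decisions(speech_scores, face_scores):
    """Weight each modality's per-class scores by an S-type membership of its
    own top score (used here as a confidence), then pick the best fused class."""
    speech_scores = np.asarray(speech_scores, dtype=float)
    face_scores = np.asarray(face_scores, dtype=float)
    w_speech = s_membership(float(speech_scores.max()))
    w_face = s_membership(float(face_scores.max()))
    fused = w_speech * speech_scores + w_face * face_scores
    return int(np.argmax(fused)), fused

# usage: per-class scores for (normal, happiness, anger, surprise, sadness)
label, fused = fuse_decisions([0.1, 0.6, 0.1, 0.1, 0.1], [0.2, 0.3, 0.3, 0.1, 0.1])
```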

Emotion Recognition using Short-Term Multi-Physiological Signals

  • Kang, Tae-Koo
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 16, No. 3, pp. 1076-1094, 2022
  • Emotion recognition technology is an essential part of human personality analysis. Existing approaches to defining personality characteristics have relied on surveys, yet in many cases communication cannot take place without considering emotions; emotion recognition is therefore an essential element of communication and has been adopted in many other fields as well. A person's emotions are revealed in various ways, typically through facial, speech, and biometric responses, so emotions can be recognized from images, voice signals, and physiological signals. Physiological signals are measured with biological sensors and analyzed to identify emotions. This study employed two sensor types. First, the existing binary arousal-valence scheme was subdivided into four levels to classify emotions in more detail; the model was thus extended from the current High/Low classification to multiple levels. Signal characteristics were then extracted using a 1-D Convolutional Neural Network (CNN), and sixteen emotion classes were distinguished. Although CNNs are typically used to learn 2-D images, 1-D sensor data was used as the input in this paper. Finally, the proposed emotion recognition system was evaluated with measurements from actual sensors.
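
A minimal sketch of the kind of 1-D CNN the abstract describes is shown below: multi-channel physiological windows go in, and one of sixteen arousal/valence classes comes out. The channel count, window length, and layer sizes are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class Emotion1DCNN(nn.Module):
    """Small 1-D CNN: raw physiological windows in, 16 emotion classes out."""

    def __init__(self, in_channels=2, n_classes=16, window=512):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        # two pooling stages of 4 reduce the window length by a factor of 16
        self.classifier = nn.Linear(32 * (window // 16), n_classes)

    def forward(self, x):               # x: (batch, channels, window)
        h = self.features(x)
        return self.classifier(h.flatten(1))

# usage: 8 windows of 512 samples from 2 physiological sensors -> (8, 16) class scores
logits = Emotion1DCNN()(torch.randn(8, 2, 512))
```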

The Effects of a Massage and Oro-facial Exercise Program on Spastic Dysarthrics' Lip Muscle Function

  • Hwang, Young-Jin; Jeong, Ok-Ran; Yeom, Ho-Joon
    • 음성과학, Vol. 11, No. 1, pp. 55-64, 2004
  • This study examined the effects of a massage and oro-facial exercise program on the lip muscle function of patients with spastic dysarthria, using electromyography (EMG). Three subjects with spastic dysarthria participated in the study. Surface electrodes were positioned on the levator labii superioris muscle (LLSM), depressor labii inferioris muscle (DLIM), and orbicularis oris muscle (OOM). To examine improvements in lip muscle function, the EMG signals were analyzed in terms of RMS (root mean square) values and median frequency. In addition, diadochokinetic movements and the sentence reading rate were measured. The results revealed that the RMS values decreased and the median frequency shifted toward higher frequencies. Diadochokinetic and sentence reading rates improved.
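
The two EMG measures used in the analysis, RMS amplitude and median frequency, are standard and easy to sketch: RMS summarizes signal amplitude, and the median frequency is the point that splits the spectral power into two equal halves. The sampling rate in the sketch is an assumed value, not one reported in the study.

```python
import numpy as np

def rms_and_median_frequency(emg, fs=1000):
    """Return the RMS amplitude and the median frequency of one EMG recording."""
    emg = np.asarray(emg, dtype=float) - np.mean(emg)
    rms = np.sqrt(np.mean(emg ** 2))

    # median frequency: frequency below which half of the spectral power lies
    spectrum = np.abs(np.fft.rfft(emg)) ** 2
    freqs = np.fft.rfftfreq(len(emg), d=1.0 / fs)
    cumulative = np.cumsum(spectrum)
    median_freq = freqs[np.searchsorted(cumulative, cumulative[-1] / 2.0)]
    return rms, median_freq
```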


Normalization Framework of BCI-based Facial Interface

  • Sung, Yunsick; Gong, Suhyun
    • Journal of Multimedia Information System, Vol. 2, No. 3, pp. 275-280, 2015
  • Recently, brainwaves have been utilized in diverse fields such as medicine, entertainment, and education. In medicine, brainwaves are analyzed to estimate patients' diseases. Entertainment applications, however, usually use brainwaves as control signals without characterizing them. Given that users' brainwaves differ from one another, a normalization method is essential. Traditional brainwave normalization approaches rely on the normal distribution, but they assume that enough brainwave data has been collected to fit one. When only a small amount of brainwave data is measured, the accuracy of the control signal derived from it becomes low. In this paper, we propose a normalization framework for BCI-based facial interfaces, applied to a novel volume controller, which can normalize a small amount of brainwave data and then generate the control signals of the BCI-based facial interface. In the experiments, two subjects participated to validate the proposed framework, and the normalization processes are described.
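
The core problem the framework addresses, turning user-dependent brainwave amplitudes into a bounded control value from only a few samples, can be sketched with a simple running min-max scaler standing in for the paper's normalization method. The value range, number of volume levels, and class name are assumptions for illustration only.

```python
# Hypothetical stand-in for the paper's normalization: map raw brainwave
# amplitudes onto discrete volume levels even when few samples are available.
class FewSampleNormalizer:
    def __init__(self, levels=11):
        self.lo = None        # smallest amplitude seen so far
        self.hi = None        # largest amplitude seen so far
        self.levels = levels  # discrete control levels 0..levels-1

    def update(self, amplitude):
        """Fold one new brainwave amplitude into the running range and
        return the corresponding discrete control level."""
        self.lo = amplitude if self.lo is None else min(self.lo, amplitude)
        self.hi = amplitude if self.hi is None else max(self.hi, amplitude)
        if self.hi == self.lo:                    # too few samples to scale yet
            return self.levels // 2
        normalized = (amplitude - self.lo) / (self.hi - self.lo)
        return round(normalized * (self.levels - 1))

# usage with a few hypothetical band-power values
norm = FewSampleNormalizer()
for amp in [12.0, 18.5, 9.3, 15.2]:
    print(norm.update(amp))
```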