• Title/Summary/Keyword: Human speech

Emotion Recognition Method Based on Multimodal Sensor Fusion Algorithm

  • Moon, Byung-Hyun;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / v.8 no.2 / pp.105-110 / 2008
  • Humans recognize emotion by fusing information from speech, facial expression, gesture, and bio-signals, and computers need technologies that recognize emotion as humans do by combining such information. In this paper, we recognize five emotions (normal, happiness, anger, surprise, sadness) from the speech signal and the facial image, and we propose a multimodal method that fuses the two recognition results. Both the speech signal and the facial image are processed with Principal Component Analysis (PCA) for emotion recognition, and the multimodal stage fuses the resulting scores with a fuzzy membership function. In our experiments, the average emotion recognition rate was 63% using speech signals and 53.4% using facial images; that is, the speech signal offers a better recognition rate than the facial image. To raise the recognition rate further, we propose a decision fusion method using an S-type membership function. With the proposed method, the average recognition rate is 70.4%, showing that decision fusion outperforms either the speech signal or the facial image alone.
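
A minimal sketch of this kind of S-type decision fusion over per-emotion classifier scores. The paper does not give its exact membership parameterization, so the smoothstep form, the score dictionaries, and the additive fusion rule below are illustrative assumptions:

```python
import numpy as np

def s_membership(x, a=0.0, b=1.0):
    """Standard S-type (smoothstep) membership function on [a, b]:
    rises smoothly from 0 at x <= a to 1 at x >= b."""
    m = (a + b) / 2.0
    if x <= a:
        return 0.0
    if x <= m:
        return 2.0 * ((x - a) / (b - a)) ** 2
    if x <= b:
        return 1.0 - 2.0 * ((x - b) / (b - a)) ** 2
    return 1.0

def fuse_decisions(speech_scores, face_scores):
    """Fuse per-emotion scores from the speech and face classifiers
    and return the emotion with the highest fused membership."""
    fused = {emo: s_membership(speech_scores[emo]) + s_membership(face_scores[emo])
             for emo in speech_scores}
    return max(fused, key=fused.get)

# Made-up scores for the paper's five emotion classes.
emotions = ["normal", "happiness", "anger", "surprise", "sadness"]
speech = dict(zip(emotions, [0.2, 0.7, 0.1, 0.4, 0.3]))
face = dict(zip(emotions, [0.3, 0.5, 0.2, 0.6, 0.1]))
print(fuse_decisions(speech, face))  # -> happiness
```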

Natural-Language-Based Robot Action Control Using a Hierarchical Behavior Model

  • Ahn, Hyunsik;Ko, Hyun-Bum
    • IEIE Transactions on Smart Processing and Computing / v.1 no.3 / pp.192-200 / 2012
  • In order for humans and robots to interact in daily life, robots need to understand human speech and link it to their actions. This paper proposes a hierarchical behavior model for robot action control using natural language commands. The model, which consists of episodes, primitive actions, and atomic functions, uses a sentential cognitive system that includes multiple modules for perception, action, reasoning, and memory. Human speech commands are translated by a natural language processor into sentences that are syntactically parsed. A semantic parsing procedure then analyzes the verbs and phrases of the sentences and links them to the cognitive information, and the cognitive system performs according to the hierarchical behavior model as implemented in the system. In the experiments, a representative episode, "Water the pot," was tested and its feasibility was evaluated.
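
The episode / primitive action / atomic function hierarchy lends itself to a simple compositional structure. A minimal sketch, with class names and the "Water the pot" decomposition assumed for illustration rather than taken from the paper:

```python
# Episode -> primitive action -> atomic function hierarchy; the names
# below are illustrative, not the authors' implementation.

class AtomicFunction:
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def run(self):
        self.fn()

class PrimitiveAction:
    """A primitive action executes an ordered list of atomic functions."""
    def __init__(self, name, atomic_functions):
        self.name, self.atomic_functions = name, atomic_functions
    def run(self):
        for af in self.atomic_functions:
            af.run()

class Episode:
    """An episode executes an ordered list of primitive actions."""
    def __init__(self, name, primitive_actions):
        self.name, self.primitive_actions = name, primitive_actions
    def run(self):
        for pa in self.primitive_actions:
            pa.run()

# Hypothetical decomposition of the "Water the pot" episode.
grasp = PrimitiveAction("grasp_can", [
    AtomicFunction("open_gripper", lambda: print("open gripper")),
    AtomicFunction("close_gripper", lambda: print("close gripper")),
])
pour = PrimitiveAction("pour", [
    AtomicFunction("tilt_wrist", lambda: print("tilt wrist")),
])
Episode("water_the_pot", [grasp, pour]).run()
```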

Automatic Human Emotion Recognition from Speech and Face Display - A New Approach (인간의 언어와 얼굴 표정에 통하여 자동적으로 감정 인식 시스템 새로운 접근법)

  • Luong, Dinh Dong;Lee, Young-Koo;Lee, Sung-Young
    • Proceedings of the Korean Information Science Society Conference / 2011.06b / pp.231-234 / 2011
  • Audiovisual human emotion recognition is a promising approach for multimodal human-computer interaction. However, optimal fusion of the multimodal information remains a challenge. To overcome these limitations and make the interface robust, we propose a framework for automatic human emotion recognition from speech and face display. In this paper, we develop a new approach for model-level information fusion, based on the relationship between speech and facial expression, to detect temporal segments automatically and perform multimodal information fusion.

Korean Phoneme Sequence based Word Embedding (한국어 음소열 기반 워드 임베딩 기술)

  • Chung, Euisok;Jeon, Hwa Jeon;Lee, Sung Joo;Park, Jeon-Gue
    • Annual Conference on Human and Language Technology / 2017.10a / pp.225-227 / 2017
  • This paper addresses subword-based word embedding techniques for Korean. To apply to Korean a new word embedding technique that can replace existing techniques suffering from the out-of-vocabulary problem, we validate phoneme-sequence-based subword features. Existing subword features use character n-grams, but in Korean the pronunciation of certain single syllables varies with the word, so phoneme-sequence n-grams have the advantage of securing discriminative power for such subword features. We re-implement the subword embedding technique and, in an English setting, obtain better performance than an existing word embedding baseline. Experiments with Korean phoneme-sequence features also show that semantically more similar words are placed closer together in the vector space.
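
A fastText-style sketch of subword embedding where the subword units are phoneme n-grams rather than character n-grams, as the paper proposes. The bucket count, dimensions, hashing scheme, and toy phoneme sequence are assumptions; real phoneme sequences would come from a Korean grapheme-to-phoneme converter:

```python
import numpy as np

DIM, BUCKETS = 50, 100_000
rng = np.random.default_rng(0)
ngram_table = rng.normal(scale=0.1, size=(BUCKETS, DIM))  # hashed n-gram vectors

def phoneme_ngrams(phonemes, n_min=2, n_max=4):
    seq = ["<"] + list(phonemes) + [">"]  # word-boundary markers
    for n in range(n_min, n_max + 1):
        for i in range(len(seq) - n + 1):
            yield " ".join(seq[i:i + n])

def word_vector(phonemes):
    """Word vector = mean of its hashed phoneme n-gram vectors.
    (Python's str hash is salted per process; consistent within a run.)"""
    idxs = [hash(g) % BUCKETS for g in phoneme_ngrams(phonemes)]
    return ngram_table[idxs].mean(axis=0)

# Toy phoneme sequence for a hypothetical word.
print(word_vector(["k", "a", "m", "s", "a"]).shape)  # (50,)
```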

Machine Scoring Method for a Speech Recognizer Detecting Mispronunciation of Foreign Language (외국어 발화오류 검출 음성인식기를 위한 스코어링 기법)

  • Kang, Hyo-Won;Bae, Min-Young;Lee, Jae-Kang;Kwon, Chul-Hong
    • Proceedings of the KSPS conference / 2004.05a / pp.239-242 / 2004
  • An automatic pronunciation correction system provides users with correction guidelines for each pronunciation error. For this purpose, we propose a speech recognition system that automatically classifies pronunciation errors when Koreans speak a foreign language. In this paper, we also propose machine scoring methods for automatic assessment of pronunciation quality by the speech recognizer. Scores obtained from an expert human listener are used as the reference to evaluate the different machine scores and to provide targets when training some of the algorithms. We use a log-likelihood score and a normalized log-likelihood score as machine scores. Experimental results show that the normalized log-likelihood score correlates more highly with human scores than the plain log-likelihood score.
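
A minimal sketch of the two machine scores being compared. It assumes per-frame acoustic log-likelihoods from a forced alignment of the learner's utterance; normalizing by frame count removes the bias toward short utterances:

```python
import numpy as np

def log_likelihood_score(frame_loglikes):
    """Raw log-likelihood score: sum over the aligned frames."""
    return float(np.sum(frame_loglikes))

def normalized_log_likelihood_score(frame_loglikes):
    """Duration-normalized score: average log-likelihood per frame."""
    return float(np.mean(frame_loglikes))

# Toy per-frame log-likelihoods for two utterances of different length.
short = np.array([-3.0, -2.5, -2.8])
long_ = np.array([-3.0, -2.5, -2.8, -2.9, -2.7, -2.6])
print(log_likelihood_score(short), log_likelihood_score(long_))        # raw sums differ with length
print(normalized_log_likelihood_score(short),
      normalized_log_likelihood_score(long_))                          # per-frame means are comparable
```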

Sequence-to-sequence Autoencoder based Korean Text Error Correction using Syllable-level Multi-hot Vector Representation (음절 단위 Multi-hot 벡터 표현을 활용한 Sequence-to-sequence Autoencoder 기반 한글 오류 보정기)

  • Song, Chisung;Han, Myungsoo;Cho, Hoonyoung;Lee, Kyong-Nim
    • Annual Conference on Human and Language Technology / 2018.10a / pp.661-664 / 2018
  • Posts on online bulletin boards and conversations exchanged in chat windows form a text corpus that reflects the characteristics of spoken language as it is actually used, which makes them good training data for the language model of a speech recognizer. However, because online text contains a great deal of noise, it is difficult to use directly for training. This paper proposes a sequence-to-sequence denoising autoencoder model for correcting Hangul errors in sentences that contain many user input errors.
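
The syllable-level multi-hot input can be sketched with the standard Unicode decomposition of a precomposed Hangul syllable into initial, medial, and final jamo, each setting one bit; whether the paper uses exactly this decomposition is an assumption:

```python
N_LEAD, N_VOWEL, N_TAIL = 19, 21, 28  # Unicode Hangul jamo counts

def syllable_multihot(ch):
    """68-dimensional multi-hot vector: one bit each for the syllable's
    initial consonant, vowel, and final consonant (standard Unicode formula)."""
    vec = [0] * (N_LEAD + N_VOWEL + N_TAIL)
    idx = ord(ch) - 0xAC00
    if not (0 <= idx < 11172):
        raise ValueError(f"not a precomposed Hangul syllable: {ch!r}")
    lead = idx // (N_VOWEL * N_TAIL)
    vowel = (idx % (N_VOWEL * N_TAIL)) // N_TAIL
    tail = idx % N_TAIL
    vec[lead] = 1
    vec[N_LEAD + vowel] = 1
    vec[N_LEAD + N_VOWEL + tail] = 1
    return vec

v = syllable_multihot("한")
print(sum(v), len(v))  # 3 active bits out of 68
```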

Detecting Data which Represent Emotion Features from the Speech Signal

  • Park, Chang-Hyun;Sim, Kwee-Bo;Lee, Dong-Wook;Joo, Young-Hoon
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 2001.10a / pp.138.1-138 / 2001
  • Usually, when we hold a conversation with someone, we can perceive their emotion as well as their ideas. Recently, applications using speech recognition have appeared, but they recognize only the content of the information the speaker provides. In the future, machines familiar to humans will be a requirement for more convenient life, so we need to extract emotion features. In this paper, we collect a variety of reference data that represent emotion features in the speech signal. Since our final target is to recognize emotion from a stream of speech, we must understand which features represent emotion. Humans can show many emotions, and the delicate differences between them make this recognition problem difficult.
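
Two classic prosodic features often used to represent emotion in speech are short-time energy and pitch. The paper does not list its exact features, so the sketch below (frame energy plus a crude autocorrelation pitch estimate) is illustrative only:

```python
import numpy as np

def frame_energy(frame):
    """Short-time energy of one analysis frame."""
    return float(np.sum(frame.astype(np.float64) ** 2))

def pitch_autocorr(frame, sr, fmin=60, fmax=400):
    """Crude pitch estimate: peak of the autocorrelation within the
    plausible lag range for human speech."""
    x = frame - frame.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

sr = 16000
t = np.arange(0, 0.032, 1 / sr)        # one 32 ms frame
frame = np.sin(2 * np.pi * 150 * t)    # synthetic 150 Hz "voiced" frame
print(frame_energy(frame), pitch_autocorr(frame, sr))  # pitch ~150 Hz
```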

Computer-Based Fluency Evaluation of English Speaking Tests for Koreans (한국인을 위한 영어 말하기 시험의 컴퓨터 기반 유창성 평가)

  • Jang, Byeong-Yong;Kwon, Oh-Wook
    • Phonetics and Speech Sciences / v.6 no.2 / pp.9-20 / 2014
  • In this paper, we propose an automatic fluency evaluation algorithm for English speaking tests. In the proposed algorithm, acoustic features are extracted from an input spoken utterance and a fluency score is computed using support vector regression (SVR). We estimate the parameters of the feature model and the SVR from speech signals and the corresponding scores given by human raters. Correlation analysis shows that speech rate, articulation rate, and mean length of runs are the best features for fluency evaluation. Experimental results show that the correlation between the human score and the SVR score is 0.87 across three speaking tests, which suggests that the proposed algorithm could serve as a secondary fluency evaluation tool.
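
A minimal sketch of the SVR-based scorer using the three features the paper found best. The toy data and hyperparameters are assumptions:

```python
import numpy as np
from sklearn.svm import SVR

# Each row: [speech_rate, articulation_rate, mean_length_of_runs]
X_train = np.array([
    [3.1, 4.0, 5.2],
    [2.2, 3.1, 3.0],
    [4.0, 4.6, 7.1],
    [1.8, 2.9, 2.4],
])
y_train = np.array([3.5, 2.0, 4.5, 1.5])  # human rater fluency scores

model = SVR(kernel="rbf", C=1.0)
model.fit(X_train, y_train)

X_test = np.array([[3.5, 4.2, 6.0]])
print(model.predict(X_test))  # predicted fluency score for a new utterance
```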

Multi-channel Speech Enhancement Using Blind Source Separation and Cross-channel Wiener Filtering

  • Jang, Gil-Jin;Choi, Chang-Kyu;Lee, Yong-Beom;Kim, Jeong-Su;Kim, Sang-Ryong
    • The Journal of the Acoustical Society of Korea / v.23 no.2E / pp.56-67 / 2004
  • Despite abundant research outcomes for blind source separation (BSS) in many types of simulated environments, its performance is still not satisfactory in real environments. The major obstacles appear to be the finite filter length of the assumed mixing model and nonlinear sensor noise. This paper presents a two-step speech enhancement method with multiple microphone inputs. The first step performs a frequency-domain BSS algorithm to produce multiple outputs without any prior knowledge of the mixed source signals. The second step further removes the remaining cross-channel interference through a spectral cancellation approach that uses a probabilistic source absence/presence detection technique. The desired primary source is detected in every frame of the signal, and the secondary source is estimated in the power spectral domain using the other BSS output as a reference interfering source. The estimated secondary source is then subtracted to reduce the cross-channel interference. Our experimental results show good separation and enhancement performance on real recordings of speech and music signals compared to conventional BSS methods.
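
A sketch of the second step's cross-channel cancellation in the power-spectral domain, using the other BSS output as the reference interferer. The Wiener-style gain rule and leakage factor below are assumptions, and the paper's probabilistic source absence/presence detector is omitted:

```python
import numpy as np

def cross_channel_wiener(primary_stft, reference_stft, leak=0.3, eps=1e-10):
    """primary_stft, reference_stft: complex STFTs (freq x frames).
    Subtracts an assumed leakage fraction of the reference power from the
    primary channel and applies the resulting spectral gain."""
    p_pow = np.abs(primary_stft) ** 2
    r_pow = leak * np.abs(reference_stft) ** 2
    gain = np.maximum(p_pow - r_pow, 0.0) / (p_pow + eps)
    return gain * primary_stft

# Toy STFTs: desired source plus a scaled copy of the interference.
rng = np.random.default_rng(1)
src = rng.normal(size=(257, 100)) + 1j * rng.normal(size=(257, 100))
intf = rng.normal(size=(257, 100)) + 1j * rng.normal(size=(257, 100))
primary = src + 0.3 * intf
enhanced = cross_channel_wiener(primary, intf)
print(enhanced.shape)  # (257, 100)
```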

Primary Study for a Dialogue-Based Ordering Chatbot

  • Kim, Ji-Ho;Park, JongWon;Moon, Ji-Bum;Lee, Yulim;Yoon, Andy Kyung-yong
    • Journal of Multimedia Information System / v.5 no.3 / pp.209-214 / 2018
  • Today is the era of artificial intelligence. With its development, machines have begun to emulate various human characteristics, and the chatbot is one instance of such interactive artificial intelligence: a computer program that can conduct natural conversations with people. While chatbots have traditionally conversed in text, the chatbot in this study evolves to execute commands based on speech recognition. For a chatbot to emulate human dialogue well, it must analyze each sentence correctly and extract an appropriate response. To accomplish this, sentences are classified into three types: objects, actions, and preferences. This study shows how objects are analyzed and processed, and demonstrates the possibility of evolving from an elementary model to an advanced intelligent system; it evaluates whether a speech-recognition-based chatbot improves order-processing time efficiency compared to a text-based chatbot. Once this is done, speech-recognition-based chatbots have the potential to automate customer service and reduce human effort.
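
A minimal sketch of splitting an order sentence into the three types the study names (objects, actions, preferences). The vocabularies and the simple keyword matching are assumptions standing in for the study's actual analyzer:

```python
ACTIONS = {"order", "cancel", "add"}
OBJECTS = {"americano", "latte", "croissant"}
PREFERENCES = {"iced", "hot", "large", "small", "decaf"}

def parse_order(sentence):
    """Classify each token of an order sentence as action, object, or preference."""
    tokens = sentence.lower().replace(",", " ").split()
    return {
        "action": [t for t in tokens if t in ACTIONS],
        "objects": [t for t in tokens if t in OBJECTS],
        "preferences": [t for t in tokens if t in PREFERENCES],
    }

print(parse_order("Order one large iced americano"))
# {'action': ['order'], 'objects': ['americano'], 'preferences': ['large', 'iced']}
```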