• Title/Summary/Keyword: Speech signals

Context Recognition Using Environmental Sound for Client Monitoring System (피보호자 모니터링 시스템을 위한 환경음 기반 상황 인식)

  • Ji, Seung-Eun;Jo, Jun-Yeong;Lee, Chung-Keun;Oh, Siwon;Kim, Wooil
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.2
    • /
    • pp.343-350
    • /
    • 2015
  • This paper presents a context recognition method using environmental sound signals, applied to a mobile-based client monitoring system. Seven acoustic contexts are defined and the corresponding environmental sound signals are collected for the experiments. To evaluate context recognition performance, MFCC and LPCC are employed for feature extraction, and statistical pattern recognition is performed with GMM and HMM acoustic models. The experimental results show that LPCC and HMM improve context recognition accuracy compared to MFCC and GMM, respectively; the system using LPCC and HMM achieves 96.03% recognition accuracy. These results demonstrate that LPCC is effective for representing environmental sounds, which contain a wider range of frequency components than human speech, and that HMM models the time-varying nature of environmental sounds more effectively than GMM.
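
A minimal sketch of the kind of pipeline this abstract describes, with librosa and scikit-learn assumed as stand-ins: frame-level MFCC features and one GMM per acoustic context. The LPCC features and the HMM variant reported as best are analogous but omitted here, and the context labels and file lists are placeholders, not the paper's definitions.

    # Hedged sketch: MFCC features + one GMM per acoustic context (cf. the abstract).
    # Assumes librosa and scikit-learn; the LPCC/HMM variants are analogous.
    import numpy as np
    import librosa
    from sklearn.mixture import GaussianMixture

    def mfcc_features(path, sr=16000, n_mfcc=13):
        """Load an audio file and return frame-level MFCC vectors (frames x coeffs)."""
        y, sr = librosa.load(path, sr=sr)
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

    def train_context_models(files_by_context, n_components=8):
        """Fit one GMM on the pooled MFCC frames of each context's training files."""
        models = {}
        for context, paths in files_by_context.items():
            frames = np.vstack([mfcc_features(p) for p in paths])
            models[context] = GaussianMixture(n_components=n_components).fit(frames)
        return models

    def recognize(path, models):
        """Pick the context whose GMM gives the highest average frame log-likelihood."""
        frames = mfcc_features(path)
        return max(models, key=lambda c: models[c].score(frames))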

Monophthong Recognition Optimizing Muscle Mixing Based on Facial Surface EMG Signals (안면근육 표면근전도 신호기반 근육 조합 최적화를 통한 단모음인식)

  • Lee, Byeong-Hyeon;Ryu, Jae-Hwan;Lee, Mi-Ran;Kim, Deok-Hwan
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.3
    • /
    • pp.143-150
    • /
    • 2016
  • In this paper, we propose a Korean monophthong recognition method that optimizes the combination of facial muscles based on surface EMG signals. We observed that EMG signal patterns and muscle activity vary with the pronounced Korean monophthong. As features we use RMS, VAR, MMAV1, and MMAV2, which showed high recognition accuracy in a previous study, together with cepstral coefficients, and we classify the monophthongs with QDA (Quadratic Discriminant Analysis) and HMM (Hidden Markov Model). The muscle combination is optimized from the input data in the training phase, and the optimized result is applied in the recognition phase, where new data are input and the Korean monophthong is recognized. Experimental results show average recognition accuracies of 85.7% with QDA and 75.1% with HMM.
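
A rough illustration of the time-domain features named above (RMS, VAR, and a mean-absolute-value variant) feeding a QDA classifier. The windowing, the muscle-combination search, and the cepstral coefficients are left out, and the MMAV1 weighting shown is one common textbook form, not necessarily the paper's.

    # Hedged sketch: per-channel sEMG time-domain features + QDA (cf. the abstract).
    # The MMAV1 weighting below is a common textbook definition, assumed here.
    import numpy as np
    from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

    def emg_features(window):
        """window: samples x channels. Returns RMS, VAR, MMAV1 per channel, concatenated."""
        rms = np.sqrt(np.mean(window ** 2, axis=0))
        var = np.var(window, axis=0)
        n = window.shape[0]
        idx = np.arange(n)
        w = np.where((idx >= 0.25 * n) & (idx <= 0.75 * n), 1.0, 0.5)
        mmav1 = np.mean(w[:, None] * np.abs(window), axis=0)
        return np.concatenate([rms, var, mmav1])

    def train_qda(X_windows, y):
        """X_windows: list of (samples x channels) arrays; y: monophthong labels."""
        X = np.vstack([emg_features(w) for w in X_windows])
        return QuadraticDiscriminantAnalysis().fit(X, y)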

Ultrasensitive Crack-based Mechanosensor Inspired by Spider's Sensory Organ (거미의 감각기관을 모사한 초민감 균열기반 진동압력센서)

  • Suyoun Oh;Tae-il Kim
    • Journal of the Microelectronics and Packaging Society
    • /
    • v.31 no.1
    • /
    • pp.1-6
    • /
    • 2024
  • Spiders detect even tiny vibrations through their vibrational sensory organs. Leveraging these exceptional vibration-sensing abilities, they can detect vibrations caused by prey or predators to plan attacks or perceive threats, using them for survival. This paper introduces a nanoscale crack-based sensor mimicking the spider's sensory organ. Inspired by the slit sensory organ spiders use to detect vibrations, the cracked sensor detects vibration and pressure with high sensitivity. By controlling the depth of the cracks, the authors developed a sensor capable of detecting external mechanical signals with remarkable sensitivity: it achieves a gauge factor of 16,000 at 2% strain under an applied tensile stress of 10 N. With a high signal-to-noise ratio, it accurately recognizes the desired vibrations, as confirmed through various evaluations with external forces and biological signals (speech patterns, heart rate, etc.). This underscores the potential of biomimetic technology for developing new sensors and applying them across diverse industrial fields.
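
For scale, the reported figure can be read through the standard resistive gauge-factor definition (an assumption; the paper's measurement protocol is not given in the abstract):

    # Hedged arithmetic check, assuming the standard definition GF = (dR/R0) / strain.
    gauge_factor = 16_000
    strain = 0.02                                        # 2% strain
    relative_resistance_change = gauge_factor * strain   # = 320, i.e. dR is 320 x R0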

Performance Comparison of State-of-the-Art Vocoder Technology Based on Deep Learning in a Korean TTS System (한국어 TTS 시스템에서 딥러닝 기반 최첨단 보코더 기술 성능 비교)

  • Kwon, Chul Hong
    • The Journal of the Convergence on Culture Technology
    • /
    • v.6 no.2
    • /
    • pp.509-514
    • /
    • 2020
  • A conventional TTS system consists of several modules, including text preprocessing, parsing, grapheme-to-phoneme conversion, boundary analysis, prosody control, acoustic feature generation by an acoustic model, and speech waveform generation. A deep-learning-based TTS system, in contrast, is composed of a Text2Mel stage that generates a spectrogram from text and a vocoder that synthesizes speech signals from the spectrogram. In this paper, to build an optimal Korean TTS system, we apply Tacotron2 to the Text2Mel stage and, as vocoders, introduce and implement WaveNet, WaveRNN, and WaveGlow to verify and compare their performance. Experimental results show that WaveNet obtains the highest MOS and its trained model is hundreds of megabytes in size, but its synthesis time is about 50 times real time. WaveRNN shows MOS performance similar to WaveNet with a model size of several tens of megabytes, but it also cannot run in real time. WaveGlow can run in real time, but its model is several gigabytes in size and its MOS is the worst of the three vocoders. From these results, the paper presents reference criteria for selecting an appropriate vocoder according to the hardware environment in which the TTS system is deployed.
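
The real-time comparison above amounts to measuring a real-time factor (synthesis time divided by audio duration). A small hedged helper for making that comparison, independent of any specific vocoder implementation; `vocoder_fn` is an assumed callable interface, not an API of WaveNet/WaveRNN/WaveGlow themselves.

    # Hedged sketch: real-time factor (RTF) measurement for comparing vocoders.
    import time

    def real_time_factor(vocoder_fn, mel, sample_rate):
        """Return synthesis_seconds / audio_seconds; > 1 means slower than real time."""
        start = time.perf_counter()
        waveform = vocoder_fn(mel)            # vocoder inference on a mel spectrogram
        elapsed = time.perf_counter() - start
        audio_seconds = len(waveform) / sample_rate
        return elapsed / audio_seconds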

Multimodal Approach for Summarizing and Indexing News Video

  • Kim, Jae-Gon;Chang, Hyun-Sung;Kim, Young-Tae;Kang, Kyeong-Ok;Kim, Mun-Churl;Kim, Jin-Woong;Kim, Hyung-Myung
    • ETRI Journal
    • /
    • v.24 no.1
    • /
    • pp.1-11
    • /
    • 2002
  • A video summary abstracts the gist from an entire video and also enables efficient access to the desired content. In this paper, we propose a novel method for summarizing news video based on multimodal analysis of the content. The proposed method exploits closed caption data to locate semantically meaningful highlights in a news video and uses speech signals in the audio stream to align the closed caption data with the video on a time-line. The detected highlights are then described using the MPEG-7 Summarization Description Scheme, which allows efficient browsing of the content through functionalities such as multi-level abstracts and navigation guidance. Multimodal search and retrieval are also supported within the proposed framework: by indexing the synchronized closed caption data, video clips can be searched with a text query. Intensive experiments with prototype systems demonstrate the validity and reliability of the proposed method in real applications.
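
A toy sketch of the alignment-and-retrieval idea above: caption entries receive timestamps from detected speech segments, after which a text query returns matching clip times. This is an illustrative simplification, not the paper's algorithm (which relies on MPEG-7 description schemes).

    # Hedged toy sketch: align captions to speech-segment times, then search by text.
    def align_captions(captions, speech_segments):
        """captions: list of text strings in broadcast order.
        speech_segments: list of (start_sec, end_sec) from audio analysis, same order.
        Returns (start, end, text) triples; assumes a one-to-one ordering for simplicity."""
        return [(s, e, text) for (s, e), text in zip(speech_segments, captions)]

    def search_clips(indexed_captions, query):
        """Return the time ranges of captions containing the query string."""
        return [(s, e) for s, e, text in indexed_captions if query.lower() in text.lower()]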

Crack Detection of Rotating Blade using Hidden Markov Model (회전 블레이드의 크랙 발생 예측을 위한 은닉 마르코프모델을 이용한 해석)

  • Lee, Seung-Kyu;Yoo, Hong-Hee
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2009.10a
    • /
    • pp.99-105
    • /
    • 2009
  • A crack detection method for a rotating blade is suggested in this paper. The rotating blade is modeled as a cantilever beam connected to a rotating hub. The existence and location of a crack can be recognized from the vertical response of the tip of the rotating cantilever beam by employing a Discrete Hidden Markov Model (DHMM) and Empirical Mode Decomposition (EMD). DHMM is a well-known stochastic method in the field of speech recognition, and recent research has shown that it can also be used for machine health monitoring. EMD, suggested by Huang et al., decomposes a signal into several mono-component signals; here it is used to extract the feature vectors needed to train the DHMMs. The developed DHMMs showed good crack detection ability for the rotating blade.
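
A hedged outline of the modelling idea: EMD of the blade-tip response, feature vectors built from the IMFs, and one HMM per blade condition. PyEMD and hmmlearn are assumed as stand-ins, and a Gaussian HMM on IMF energies is used here instead of the paper's discrete HMM.

    # Hedged sketch: EMD-based features + HMMs for healthy/cracked blade responses.
    import numpy as np
    from PyEMD import EMD
    from hmmlearn.hmm import GaussianHMM

    def imf_energy_features(signal, frame_len=256, n_imfs=4):
        """Decompose the tip response into IMFs and return per-frame IMF energies."""
        imfs = EMD()(signal)[:n_imfs]
        n_frames = signal.shape[0] // frame_len
        feats = np.zeros((n_frames, n_imfs))
        for k, imf in enumerate(imfs):
            for i in range(n_frames):
                frame = imf[i * frame_len:(i + 1) * frame_len]
                feats[i, k] = np.sum(frame ** 2)
        return feats

    def train_condition_model(signals, n_states=3):
        """Fit one HMM on responses recorded under one condition (healthy or cracked)."""
        feats = [imf_energy_features(s) for s in signals]
        return GaussianHMM(n_components=n_states).fit(
            np.vstack(feats), lengths=[f.shape[0] for f in feats])

    def classify(signal, models):
        """Pick the condition whose HMM gives the highest log-likelihood."""
        feats = imf_energy_features(signal)
        return max(models, key=lambda c: models[c].score(feats))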

Training of Fuzzy-Neural Network for Voice-Controlled Robot Systems by a Particle Swarm Optimization

  • Watanabe, Keigo;Chatterjee, Amitava;Pulasinghe, Koliya;Jin, Sang-Ho;Izumi, Kiyotaka;Kiguchi, Kazuo
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2003.10a
    • /
    • pp.1115-1120
    • /
    • 2003
  • The present paper shows the development of particle swarm optimization (PSO)-trained fuzzy-neural networks (FNNs), which can be employed as an important building block in real-life robot systems controlled by voice commands. The PSO is employed to train the FNNs, which output the crisp control signals for the robot system based on fuzzy linguistic spoken commands issued by a user. The FNN is also trained to capture the user's spoken directive in the context of the current performance of the robot system. Hidden Markov Model (HMM) based automatic speech recognizers are developed as part of the system so that it can identify important user directives from running utterances. The system is successfully employed in a real-life setting for motion control of a redundant manipulator.
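
A stripped-down sketch of the training idea: PSO searching the weights of a small network that maps fuzzy membership values of a spoken command (e.g. "move a little faster") to a crisp control signal. All dimensions, bounds, and the fitness definition here are placeholder assumptions, not the paper's.

    # Hedged sketch: PSO searching the weights of a tiny network that maps fuzzy
    # command memberships to a crisp control signal (placeholder dimensions/fitness).
    import numpy as np

    def network(weights, x, n_in=4, n_hidden=6):
        """Tiny one-hidden-layer network; `weights` is a flat parameter vector."""
        W1 = weights[:n_in * n_hidden].reshape(n_in, n_hidden)
        w2 = weights[n_in * n_hidden:n_in * n_hidden + n_hidden]
        return np.tanh(x @ W1) @ w2

    def pso_train(X, y, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
        """Minimise the mean squared error of the network output over training pairs."""
        rng = np.random.default_rng(0)
        pos = rng.uniform(-1, 1, (n_particles, dim))
        vel = np.zeros_like(pos)
        fitness = lambda p: np.mean((network(p, X) - y) ** 2)
        pbest = pos.copy()
        pbest_val = np.array([fitness(p) for p in pos])
        gbest = pbest[np.argmin(pbest_val)].copy()
        for _ in range(iters):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = pos + vel
            vals = np.array([fitness(p) for p in pos])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[np.argmin(pbest_val)].copy()
        return gbest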

Recognizing Five Emotional States Using Speech Signals (음성 신호를 이용한 화자의 5가지 감성 인식)

  • Kang Bong-Seok;Han Chul-Hee;Woo Kyoung-Ho;Yang Tae-Young;Lee Chungyong;Youn Dae-Hee
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • autumn
    • /
    • pp.101-104
    • /
    • 1999
  • In this paper, three systems for recognizing a speaker's emotion from speech signals are built and their performance is compared. The target emotions are joy, sadness, anger, fear, boredom, and a neutral state, and an emotional speech database was constructed for each emotion. Pitch and energy information are used as features for emotion recognition, and MLB (Maximum-Likelihood Bayes), NN (Nearest Neighbor), and HMM (Hidden Markov Model) classifiers are used as recognition algorithms. For the MLB and NN classifiers, statistical information such as the mean, standard deviation, and maximum of pitch and energy is used as the feature vector, while the HMM classifier uses temporal information such as the delta and delta-delta pitch and the delta and delta-delta energy of each frame. The experiments were speaker-dependent and sentence-independent, and the recognition accuracies were 68.9% with MLB, 66.7% with NN, and 89.3% with HMM.
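
A hedged sketch of the MLB/NN-style branch described above: utterance-level statistics of pitch and energy fed to a nearest-neighbour classifier. librosa's YIN pitch tracker and scikit-learn are assumed stand-ins for the paper's front end, and the pitch range is a placeholder.

    # Hedged sketch: utterance-level pitch/energy statistics + nearest-neighbour
    # emotion classification (librosa/scikit-learn assumed as stand-ins).
    import numpy as np
    import librosa
    from sklearn.neighbors import KNeighborsClassifier

    def emotion_features(y, sr=16000):
        """Mean/std/max of frame pitch (YIN) and frame energy for one utterance."""
        f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)
        energy = librosa.feature.rms(y=y)[0]
        stats = lambda v: [np.mean(v), np.std(v), np.max(v)]
        return np.array(stats(f0) + stats(energy))

    def train_nn(utterances, labels):
        """utterances: list of waveforms; labels: emotion tags (joy, sadness, ...)."""
        X = np.vstack([emotion_features(y) for y in utterances])
        return KNeighborsClassifier(n_neighbors=1).fit(X, labels)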

A Study on Intelligent Control Algorithm Development for Cooperation Working of Human and Robot (인간과 로봇 협력작업을 위한 로봇 지능제어알고리즘 개발에 관한 연구)

  • Lee, Woo-Song;Jung, Yang-Guen;Park, In-Man;Jung, Jong-Gyu;Kim, Hui-Jin;Kim, Min-Seong;Han, Sung-Hyun
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.20 no.4
    • /
    • pp.285-297
    • /
    • 2017
  • This study proposes a new approach to developing an intelligent control algorithm for cooperative work between humans and robots based on voice recognition. For speaker verification, a Gaussian Mixture Model is generally used to model the feature vectors of reference speech signals, while Dynamic Time Warping (DTW) based template matching has long been used for voice recognition. We combine these two concepts in a single method and implement it as a real-time voice recognition system whose reference models satisfy 95% recognition performance. The reliability of the voice recognition is illustrated by simulations and experiments with a humanoid robot with 18 joints.
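
A compact sketch of the DTW template-matching side mentioned above: a spoken command is matched to the stored reference whose feature sequence has the smallest warped distance. The feature extraction and the GMM fusion are omitted, and the distance and step pattern here are the plain textbook form, not necessarily the paper's.

    # Hedged sketch: plain dynamic time warping for command template matching.
    import numpy as np

    def dtw_distance(a, b):
        """a, b: feature sequences (frames x dims). Returns accumulated DTW cost."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    def recognize_command(features, templates):
        """templates: dict mapping command name -> reference feature sequence."""
        return min(templates, key=lambda c: dtw_distance(features, templates[c]))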

A Noise-Robust Adaptive NLMS Algorithm with Variable Convergence Factor for Acoustic Echo Cancellation (음향 반향 제어를 위한 가변수렴인자를 갖는 잡음에 강건한 적응 NLMS 알고리즘)

  • Park, Jang-Sik;Son, Kyung-Sik
    • Journal of Korea Multimedia Society
    • /
    • v.2 no.1
    • /
    • pp.99-108
    • /
    • 1999
  • In this paper, a new robust adaptive algorithm is proposed to improve the performance of acoustic echo cancellation (AEC) without additional computational burden. The proposed algorithm is based on the NLMS algorithm, and its step size varies with the reference input signal power and the desired signal power: the step size is normalized by the sum of the powers of the reference input signal and the desired signal. When near-end speech and noise enter the microphone, the step size becomes small and the misalignment of the coefficients is reduced. The convergence speed is comparable to that of the NLMS algorithm in AEC applications because the echo signals are attenuated by about 10∼20 dB SPL. The characteristics of the algorithm are also analyzed and compared with conventional ones.
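
A hedged sketch of the filter update described above: an NLMS-style adaptation whose step size is normalized by the sum of the reference (far-end) and desired (microphone) signal powers, so that near-end speech and noise shrink the update. Parameter values and the windowing are illustrative assumptions, not the paper's.

    # Hedged sketch: NLMS echo canceller with the step size normalized by the sum of
    # the reference (far-end) and desired (microphone) signal powers (cf. the abstract).
    import numpy as np

    def vs_nlms_aec(x, d, filter_len=256, mu=0.5, eps=1e-8):
        """x: far-end reference, d: microphone signal. Returns (error signal, filter)."""
        w = np.zeros(filter_len)
        e = np.zeros(len(d))
        for n in range(filter_len, len(d)):
            x_vec = x[n - filter_len:n][::-1]          # most recent reference samples
            e[n] = d[n] - w @ x_vec                    # residual after echo estimate
            # Normalizing by both powers keeps the update small when near-end
            # speech/noise dominates d[n], reducing coefficient misalignment.
            norm = x_vec @ x_vec + d[n - filter_len:n] @ d[n - filter_len:n] + eps
            w = w + (mu / norm) * e[n] * x_vec
        return e, w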
