• Title/Summary/Keyword: Speech pattern

412 search results

Adaptive Speech Streaming Based on Packet Loss Prediction Using Support Vector Machine for Software-Based Multipoint Control Unit over IP Networks

  • Kang, Jin Ah; Han, Mikyong; Jang, Jong-Hyun; Kim, Hong Kook
    • ETRI Journal / v.38 no.6 / pp.1064-1073 / 2016
  • An adaptive speech streaming method is proposed to improve the perceived speech quality of a software-based multipoint control unit (SW-based MCU) over IP networks. First, the proposed method predicts whether the speech packet to be transmitted will be lost; to this end, it learns the pattern of packet losses in the IP network and then predicts the loss of each packet to be transmitted over that network. Next, it classifies each speech frame as silence, unvoiced, speech onset, or voiced. Based on the results of packet loss prediction and speech classification, the method determines the proper amount and bitrate of redundant speech data (RSD) sent along with the primary speech data (PSD) to help the speech decoder restore the speech signals of lost packets. Specifically, when a packet is predicted to be lost, the amount and bitrate of the RSD are increased by reducing the bitrate of the PSD. The effectiveness of the proposed method in learning the packet loss pattern and assigning different speech coding rates is demonstrated using a support vector machine (SVM) and the adaptive multirate-narrowband (AMR-NB) codec, respectively. The results show that, compared with conventional methods for restoring lost speech signals, the proposed method remarkably improves the perceived speech quality of an SW-based MCU under various packet loss conditions in an IP network.
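
To make the prediction step concrete, here is a minimal sketch of SVM-based packet loss prediction with scikit-learn. The window length, the synthetic loss trace, and the adaptation messages are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: predict whether the next packet will be lost from the
# recent loss history, using an SVM as in the paper. The window size and
# the synthetic loss trace are assumptions for illustration.
import numpy as np
from sklearn.svm import SVC

WINDOW = 8  # number of past loss outcomes used as features (assumed)

def make_dataset(loss_trace):
    """Slide a window over a 0/1 loss trace to build (features, label) pairs."""
    X = [loss_trace[i:i + WINDOW] for i in range(len(loss_trace) - WINDOW)]
    y = [loss_trace[i + WINDOW] for i in range(len(loss_trace) - WINDOW)]
    return np.array(X), np.array(y)

# Synthetic loss trace standing in for observed network behavior.
rng = np.random.default_rng(0)
trace = (rng.random(2000) < 0.1).astype(int)

X, y = make_dataset(trace)
clf = SVC(kernel="rbf").fit(X[:1500], y[:1500])

# The paper's adaptation rule: on a predicted loss, raise the RSD amount
# and bitrate by lowering the PSD bitrate.
if clf.predict(X[1500:1501])[0] == 1:
    print("predicted loss -> increase RSD, decrease PSD bitrate")
else:
    print("no loss predicted -> keep current PSD/RSD allocation")
```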

Denasalization error pattern for typically developing and SSD children (일반 및 말소리장애 아동의 탈비음화 오류패턴)

  • Kim, Min Jung
    • Phonetics and Speech Sciences / v.7 no.2 / pp.3-8 / 2015
  • Denasalization, in which nasals are replaced by stops, is an unusual error pattern related to manner of articulation. The purpose of this study is to investigate the prevalence of denasalization and to scrutinize nasal production according to phonological context in typically developing children and children with speech sound disorders (SSD). A total of 220 typically developing children and 48 children with SSD, aged 2 to 6 years, were tested with a formal word test, and those who demonstrated denasalization were selected. In addition, the nasal production of the SSD children with denasalization was analyzed for correctness and error types using the formal word test and spontaneous conversation. The results were as follows: (1) Denasalization appeared in fewer than 10% of typically developing children aged 2-3 years and in more than 20% of children with SSD aged 2-5 years. (2) The SSD children who demonstrated denasalization fell into four types according to the context of their nasal errors: errors in all word positions, errors in word-final and word-medial positions, errors in word-medial position preceding vowels, and errors in word-medial position preceding obstruents. These results indicate that denasalization is a clinically important error pattern and that word-medial position preceding obstruents is an essential context for denasalization in terms of Korean phonotactics.

HearCAM Embedded Platform Design (히어 캠 임베디드 플랫폼 설계)

  • Hong, Seon Hack; Cho, Kyung Soon
    • Journal of Korea Society of Digital Industry and Information Management / v.10 no.4 / pp.79-87 / 2014
  • In this paper, we implemented the HearCAM platform on the Raspberry Pi B+ model, an open-source platform. The Raspberry Pi B+ consists of a dual step-down (buck) power supply with polarity protection and hot-swap protection circuits, a Broadcom BCM2835 SoC running at 700 MHz with 512 MB of RAM soldered on top of the Broadcom chip, and a Pi camera serial connector. We used the Google speech recognition engine to recognize voice characteristics, implemented pattern matching with OpenCV software, and extended the platform's speech capability with SVOX TTS (text-to-speech) so that the matching result is spoken back to the user. The HearCAM thus identifies voice and pattern characteristics by scanning target images with the Pi camera while gathering temperature sensor data in an IoT environment. Speech recognition, pattern matching, and temperature sensor data logging operate over Wi-Fi wireless communication. Finally, we directly designed and fabricated the HearCAM enclosure using 3D printing technology.
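
A rough sketch of the two software pieces the abstract names, Google speech recognition and OpenCV pattern matching, is shown below. The community SpeechRecognition package and template matching are stand-ins chosen for illustration; the paper does not specify its exact APIs, and the file names are hypothetical.

```python
# Illustrative sketch of the HearCAM pipeline's software pieces. The
# library choices (SpeechRecognition package, OpenCV template matching)
# and file names are assumptions, not the paper's implementation.
import cv2
import speech_recognition as sr

def recognize_voice():
    """Capture one utterance from the microphone and send it to Google's API."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)

def match_pattern(frame_path, template_path, threshold=0.8):
    """Locate a template image inside a camera frame; threshold is assumed."""
    frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_val >= threshold, max_loc

if __name__ == "__main__":
    print("heard:", recognize_voice())
    found, location = match_pattern("frame.png", "target.png")  # hypothetical files
    print("pattern found:", found, "at", location)
```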

The Pattern Recognition Methods for Emotion Recognition with Speech Signal (음성신호를 이용한 감성인식에서의 패턴인식 방법)

  • Park Chang-Hyun; Sim Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.12 no.3 / pp.284-288 / 2006
  • In this paper, we apply several pattern recognition algorithms to an emotion recognition system based on speech signals and compare the results. First, emotional speech databases are needed, and the speech features for emotion recognition are determined in the database analysis step. Second, recognition algorithms are applied to these speech features. The algorithms we evaluate are an artificial neural network, Bayesian learning, principal component analysis, and the LBG algorithm. The performance gap among these methods is then presented in the experimental results section. Emotion recognition is not yet a mature technique: the choice of emotion features and of a suitable classification method both remain open questions, so we hope this paper serves as a reference in those debates.
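
As a rough illustration of such a comparison, the sketch below runs several scikit-learn classifiers on the same feature matrix. The features and labels are random placeholders rather than an emotional speech database, and k-means stands in for the closely related LBG codebook training.

```python
# Sketch of the comparison the paper describes: several classifiers on the
# same speech features. Placeholder data only; not the paper's database.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))    # placeholder prosodic/spectral features
y = rng.integers(0, 4, size=200)  # placeholder labels for four emotions

models = {
    "ANN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000),
    "Bayesian": GaussianNB(),
    "PCA+ANN": make_pipeline(PCA(n_components=5),
                             MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)),
}
for name, model in models.items():
    print(name, "accuracy:", cross_val_score(model, X, y, cv=5).mean())

# LBG-style vector quantization: one codebook per emotion class; classify
# by the codebook giving the lowest quantization distortion (k-means is
# used here as a stand-in for the classic LBG splitting procedure).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
codebooks = {c: KMeans(n_clusters=4, n_init=10).fit(X_tr[y_tr == c]).cluster_centers_
             for c in np.unique(y_tr)}
preds = [min(codebooks, key=lambda c: np.linalg.norm(codebooks[c] - x, axis=1).min())
         for x in X_te]
print("LBG-style accuracy:", np.mean(np.array(preds) == y_te))
```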

The Pattern Recognition Methods for Emotion Recognition with Speech Signal (음성신호를 이용한 감성인식에서의 패턴인식 방법)

  • Park Chang-Hyeon; Sim Gwi-Bo
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2006.05a / pp.347-350 / 2006
  • In this paper, we apply several pattern recognition algorithms to an emotion recognition system based on speech signals and compare the results. First, emotional speech databases are needed, and the speech features for emotion recognition are determined in the database analysis step. Second, recognition algorithms are applied to these speech features. The algorithms we evaluate are an artificial neural network, Bayesian learning, principal component analysis, and the LBG algorithm. The performance gap among these methods is then presented in the experimental results section. Emotion recognition is not yet a mature technique: the choice of emotion features and of a suitable classification method both remain open questions, so we hope this paper serves as a reference in those debates.

The Effectiveness of a Prolonged-speech Treatment Program for School-age Children with Stuttering (학령기 말더듬 아동의 첫음연장기법을 이용한 치료프로그램 효과 연구)

  • Oh Seung Ah
    • Journal of Families and Better Life / v.22 no.6 s.72 / pp.143-152 / 2004
  • The purpose of this study was to determine the effectiveness of a prolonged-speech treatment program for school-age children with stuttering. Two male and one female subjects participated in this study. The speech of the three subjects was assessed for stuttering frequency, stuttering pattern, and stuttering severity. The program was adapted from the steps of Ryan's traditional therapy program and a prolonged-speech technique program, and then modified in accordance with the purpose of this study. The treatment program consisted of four stages. The results were as follows: First, all three subjects spoke with a greatly reduced stuttering frequency after treatment. Second, regarding the stuttering pattern, all subjects shifted from part-word repetition to prolongation. All subjects also showed a similar effect during the maintenance phase.

Speech Rate Variation in Synchronous Speech (동시발화에 나타나는 발화 속도 변이 분석)

  • Kim, Miran; Nam, Hosung
    • Phonetics and Speech Sciences / v.4 no.4 / pp.19-27 / 2012
  • When two speakers read a text together, the resulting speech has been shown to exhibit a reduced degree of variability (e.g., in pause duration and placement, and in speech rate). This paper provides a quantitative analysis of the speech rate variation exhibited in synchronous speech by examining global and local patterns in two dialects of Mandarin Chinese (Taiwan and Shanghai). We analyzed the speech data in terms of mean speech rate, with reference to the just noticeable difference (JND), both within a subject and across subjects. Our findings show that speakers exhibit lower and less variable speech rates when they read a text synchronously than when they read alone. This global pattern is observed consistently across speakers and dialects, while the unique local speech rate variation patterns of each dialect are maintained. We conclude that paired speakers lower their speech rates and decrease variability in order to ensure the synchrony of their speech.
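
A minimal sketch of this kind of analysis follows, assuming invented syllable counts and durations and an assumed 5% relative JND threshold (not the paper's values).

```python
# Minimal sketch: mean speech rate and its variability for solo vs.
# synchronous readings. Syllable counts and durations are invented, and
# the 5% JND is an assumed figure used only to illustrate the comparison.
import numpy as np

def speech_rates(syllable_counts, durations_sec):
    """Speech rate per utterance in syllables per second."""
    return np.asarray(syllable_counts) / np.asarray(durations_sec)

solo = speech_rates([42, 40, 45, 43], [10.1, 9.2, 10.8, 9.9])
sync = speech_rates([42, 40, 45, 43], [11.5, 11.2, 11.9, 11.6])

for name, rates in (("solo", solo), ("synchronous", sync)):
    cv = rates.std() / rates.mean()  # coefficient of variation
    print(f"{name}: mean {rates.mean():.2f} syll/s, CV {cv:.3f}")

jnd = 0.05  # assumed relative JND for speech rate
print("rate difference exceeds JND:",
      abs(solo.mean() - sync.mean()) / solo.mean() > jnd)
```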

Speech/Music Discrimination Using Spectrum Analysis and Neural Network (스펙트럼 분석과 신경망을 이용한 음성/음악 분류)

  • Keum, Ji-Soo; Lim, Sung-Kil; Lee, Hyon-Soo
    • The Journal of the Acoustical Society of Korea / v.26 no.5 / pp.207-213 / 2007
  • In this research, we propose an efficient speech/music discrimination method that uses spectrum analysis and a neural network. The proposed method extracts a duration feature parameter (MSDF) from the spectral peak track obtained by analyzing the spectrum, and this parameter is combined with the MFSC as the feature set for the speech/music discriminator. A neural network is used as the discriminator, and we performed various experiments to evaluate the proposed method according to training pattern selection, training set size, and neural network architecture. The discrimination results show improved performance and stability, depending on training pattern selection and model composition, in comparison with the previous method. When the MSDF and MFSC are used as feature parameters with more than 50 seconds of training patterns, the discrimination rate is 94.97% for speech and 92.38% for music, a performance improvement of 1.25% for speech and 1.69% for music compared with using the MFSC alone.
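
The sketch below guesses at the flavor of such a duration feature: it tracks the dominant spectral peak per frame and measures how long it stays in roughly the same frequency bin, on the intuition that sustained peaks suggest music and short-lived peaks suggest speech. This is an assumption about the feature's spirit, not the paper's definition of the MSDF.

```python
# Rough sketch: durations of dominant spectral peak tracks, a guess at the
# flavor of the MSDF. Window sizes and the tolerance are assumptions.
import numpy as np
from scipy.signal import spectrogram

def peak_track_durations(signal, fs, tolerance=1):
    """Lengths of runs where the dominant peak bin shifts by <= tolerance."""
    _, _, S = spectrogram(signal, fs=fs, nperseg=512, noverlap=256)
    peaks = S.argmax(axis=0)  # dominant frequency bin per frame
    durations, run = [], 1
    for prev, cur in zip(peaks, peaks[1:]):
        if abs(int(cur) - int(prev)) <= tolerance:
            run += 1
        else:
            durations.append(run)
            run = 1
    durations.append(run)
    return np.array(durations)

fs = 16000
t = np.arange(fs * 2) / fs
tone = np.sin(2 * np.pi * 440 * t)                     # music-like: stable peak
noise = np.random.default_rng(0).normal(size=fs * 2)   # speech-like proxy
for name, x in (("tone", tone), ("noise", noise)):
    d = peak_track_durations(x, fs)
    print(f"{name}: mean peak-track duration {d.mean():.1f} frames")
```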

Study of Boundary Tone according to Speech Rate in Korean (발화 속도에 따른 국어의 경계 성조 연구)

  • Park Mi Young
    • Proceedings of the KSPS conference / 2002.11a / pp.73-76 / 2002
  • The purpose of this paper is to investigate the Korean boundary tones of different sentence types, and listeners' perception of the speaker's attitude, at three speech rates. According to previous studies, the meaning of Korean intonation is determined by the boundary tone. The experimental results likewise show that each Korean sentence type has a preferred boundary tone; however, the boundary tone of a sentence type is not affected by speech rate. Changes among the three speech rate patterns do influence listeners' perceptual responses. The relationship between the pitch contour of the boundary tone and speech rate is not significant.
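
For readers who want to inspect boundary tones themselves, a hedged sketch using librosa's pYIN pitch tracker follows; the file name, frequency range, and 300 ms window are placeholders, not the paper's method.

```python
# Hedged sketch: extract the F0 contour over an utterance's final stretch
# and classify the boundary tone as rising or falling. The recording,
# frequency bounds, and window length are illustrative assumptions.
import librosa
import numpy as np

y, sr = librosa.load("utterance.wav", sr=16000)  # hypothetical recording
f0, voiced, _ = librosa.pyin(y, fmin=75, fmax=400, sr=sr)

# Keep voiced F0 values from the final 300 ms, then compare endpoints.
frames_per_sec = len(f0) / (len(y) / sr)
tail = f0[-int(0.3 * frames_per_sec):]
tail = tail[~np.isnan(tail)]
if len(tail) >= 2:
    print("boundary tone:", "rising" if tail[-1] > tail[0] else "falling")
```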

Prosodic characteristics of French language in conversational discourse (프랑스어의 대화 담화에 나타난 운율 연구)

  • Ko, Young-Lim; Yoon, Ae-Sun
    • Speech Sciences / v.8 no.2 / pp.165-180 / 2001
  • In this paper, the prosodic characteristics of French are analyzed using a corpus of radio interviews. Intonation patterns are interpreted in terms of a rising pattern, a focal rising pattern, and a falling pattern. Accentual prominence is classified into two types, rhythmic accent and focal accent; the focal accent helps explain cohesion within an utterance or between two utterances. As a prosodic variable of discourse, pauses are described by their form of realization (filled pause, silent pause, hesitation, etc.), their distribution, and their function in the utterance.
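
One of the pause measurements described here, locating silent pauses, can be approximated with a short-time energy criterion. A minimal sketch follows; the frame length, energy threshold, and minimum pause duration are assumptions, and filled pauses or hesitations would need more than an energy test.

```python
# Sketch: find silent pauses by short-time energy. Thresholds are assumed.
import numpy as np

def silent_pauses(signal, fs, frame_ms=20, rel_threshold=0.02, min_ms=200):
    """Return (start_sec, end_sec) spans of low-energy frames >= min_ms long."""
    frame = int(fs * frame_ms / 1000)
    n = len(signal) // frame
    energy = np.array([np.mean(signal[i*frame:(i+1)*frame] ** 2) for i in range(n)])
    quiet = energy < rel_threshold * energy.max()
    pauses, start = [], None
    for i, q in enumerate(quiet):
        if q and start is None:
            start = i
        elif not q and start is not None:
            if (i - start) * frame_ms >= min_ms:
                pauses.append((start * frame_ms / 1000, i * frame_ms / 1000))
            start = None
    return pauses

# Demo on a synthetic loud/quiet/loud signal.
fs = 16000
rng = np.random.default_rng(1)
speech = rng.normal(size=fs)
silence = 0.001 * rng.normal(size=fs // 2)
print(silent_pauses(np.concatenate([speech, silence, speech]), fs))
```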
