• Title/Summary/Keyword: Korean speech

An acoustical analysis of synchronous English speech using automatic intonation contour extraction (영어 동시발화의 자동 억양궤적 추출을 통한 음향 분석)

  • Yi, So Pae
    • Phonetics and Speech Sciences
    • /
    • v.7 no.1
    • /
    • pp.97-105
    • /
    • 2015
  • This research focuses on the intonational characteristics of synchronous English speech. Intonation contours were extracted from 1,848 utterances produced in two speaking modes (solo vs. synchronous) by 28 native speakers of English (12 women and 16 men). Synchronous speech is found to be slower than solo speech, and women are found to speak more slowly than men. The effect of speaking mode on speech rate is larger than that of gender, and there is no interaction between the two factors in terms of speech rate. Analysis of pitch point features shows that synchronous speech has smaller Pt (pitch point movement time), Pr (pitch point pitch range), Ps (pitch point slope) and Pd (pitch point distance) than solo speech, again with no interaction between speaking mode and gender. Analysis of sentence level features reveals that synchronous speech has smaller Sr (sentence level pitch range), Ss (sentence slope), MaxNr (normalized maximum pitch) and MinNr (normalized minimum pitch) but greater Min (minimum pitch) and Sd (sentence duration) than solo speech. It is also shown that the higher the Mid (median pitch), MaxNr and MinNr in the solo speaking mode, the more they are reduced in the synchronous mode. Max (maximum pitch), Min and Mid show greater speaker discriminability than the other features.
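
As a rough illustration of the pitch point measures named in the abstract, here is a minimal numpy sketch. The definitions used for Pt, Pr, Ps and Pd (per-pair time span, pitch excursion, their ratio, and Euclidean distance) are assumptions for illustration; the paper's exact formulas may differ.

```python
import numpy as np

def pitch_point_features(points):
    """Per-pair measures over a sequence of (time_s, pitch_hz) turning points.
    Definitions are illustrative assumptions, not the paper's exact formulas."""
    pts = np.asarray(points, dtype=float)
    pt = np.diff(pts[:, 0])                       # Pt: movement time (s)
    pr = np.abs(np.diff(pts[:, 1]))               # Pr: pitch range (Hz)
    ps = pr / pt                                  # Ps: slope (Hz/s)
    pd = np.hypot(np.diff(pts[:, 0]), np.diff(pts[:, 1]))  # Pd: distance
    return {"Pt": pt.mean(), "Pr": pr.mean(), "Ps": ps.mean(), "Pd": pd.mean()}

# A fall-rise contour sketched as three pitch points:
print(pitch_point_features([(0.10, 180.0), (0.35, 140.0), (0.60, 165.0)]))
```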

An Encrypted Speech Retrieval Scheme Based on Long Short-Term Memory Neural Network and Deep Hashing

  • Zhang, Qiu-yu;Li, Yu-zhou;Hu, Ying-jie
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.6
    • /
    • pp.2612-2633
    • /
    • 2020
  • Due to the explosive growth of multimedia speech data, how to protect the privacy of speech data and how to retrieve it efficiently have become hot topics for researchers in recent years. In this paper, we propose an encrypted speech retrieval scheme based on a long short-term memory (LSTM) neural network and deep hashing. The scheme not only achieves efficient retrieval of massive speech data in a cloud environment, but also effectively avoids the risk of sensitive information leakage. Firstly, a novel speech encryption algorithm based on a 4D quadratic autonomous hyperchaotic system is proposed to ensure the privacy and security of speech data in the cloud. Secondly, an integrated LSTM network model and deep hashing algorithm are used to extract high-level features of the speech data, addressing its high dimensionality and temporal structure and increasing the retrieval efficiency and accuracy of the proposed scheme. Finally, the normalized Hamming distance algorithm is used for matching. Compared with existing algorithms, the proposed scheme offers good discrimination and robustness, with high recall, precision and retrieval efficiency under various content-preserving operations. Meanwhile, the proposed speech encryption algorithm has a large key space and can effectively resist exhaustive attacks.
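
The final matching step, normalized Hamming distance between binary hash codes, is simple to sketch; the code length and threshold below are toy values, not taken from the paper.

```python
import numpy as np

def normalized_hamming(a, b):
    """Fraction of positions at which two equal-length binary hash codes differ."""
    a = np.asarray(a, dtype=np.uint8)
    b = np.asarray(b, dtype=np.uint8)
    return np.count_nonzero(a != b) / a.size

# Retrieval returns stored codes whose distance to the query falls below a
# threshold (8-bit codes and the 0.25 threshold are illustrative).
stored = {"utt_001": np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)}
query = np.array([1, 0, 1, 0, 0, 0, 1, 0], dtype=np.uint8)
dists = {k: normalized_hamming(query, v) for k, v in stored.items()}
print({k: d for k, d in dists.items() if d <= 0.25})
```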

An Analysis of Acoustic Features Caused by Articulatory Changes for Korean Distant-Talking Speech

  • Kim Sunhee;Park Soyoung;Yoo Chang D.
    • The Journal of the Acoustical Society of Korea
    • /
    • v.24 no.2E
    • /
    • pp.71-76
    • /
    • 2005
  • Compared to normal speech, distant-talking speech is characterized by acoustic effects due to interfering sound and echoes, as well as by articulatory changes resulting from the speaker's effort to be more intelligible. In this paper, the acoustic features of distant-talking speech due to these articulatory changes are analyzed and compared with those of the Lombard effect. To examine the effect of different distances and articulatory changes, speech recognition experiments were conducted with HTK for normal speech as well as distant-talking speech at different distances. The speech data used in this study consist of 4,500 distant-talking utterances and 4,500 normal utterances from 90 speakers (56 males and 34 females). The acoustic features selected for the analysis were duration, formants (F1 and F2), fundamental frequency, total energy and energy distribution. The results show that the acoustic-phonetic features of distant-talking speech correspond mostly to those of Lombard speech: the main acoustic changes from normal to distant-talking speech are an increase in vowel duration, shifts in the first and second formants, an increase in fundamental frequency, an increase in total energy and a shift in energy from the low frequency band to the middle and high bands.
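
For a sense of how such measures are obtained, here is a minimal numpy sketch of three of them (duration, fundamental frequency via autocorrelation, and band energy distribution). Formant estimation is omitted, and the band edges are illustrative choices, not the paper's.

```python
import numpy as np

def utterance_features(x, sr, fmin=75, fmax=400):
    """Crude versions of some measures above: duration, F0 via FFT-based
    autocorrelation, total energy, and low/mid/high band energy shares.
    Band edges (1 kHz, 3 kHz) are illustrative, not the paper's."""
    duration = len(x) / sr
    energy = float(np.sum(x ** 2))

    # Autocorrelation peak within the plausible pitch period range.
    ac = np.fft.irfft(np.abs(np.fft.rfft(x, 2 * len(x))) ** 2)
    lo, hi = sr // fmax, sr // fmin
    f0 = sr / (lo + np.argmax(ac[lo:hi]))

    # Energy distribution across spectral bands.
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / sr)
    bands = [(0, 1000), (1000, 3000), (3000, sr / 2)]
    shares = [float(spec[(freqs >= a) & (freqs < b)].sum() / spec.sum())
              for a, b in bands]
    return duration, f0, energy, shares

sr = 16000
t = np.arange(int(0.5 * sr)) / sr
x = np.sin(2 * np.pi * 150 * t)  # toy 150 Hz "voiced" signal
print(utterance_features(x, sr))
```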

Speech Rhythm and the Three Aspects of Speech Timing: Articulatory, Acoustic and Auditory

  • Yun, Il-Sung
    • Speech Sciences
    • /
    • v.8 no.1
    • /
    • pp.67-76
    • /
    • 2001
  • This study introduces the three aspects of speech timing (articulatory, acoustic and auditory) and discusses their strengths and weaknesses in describing speech timing. Traditional (extrinsic) articulatory timing theories exclude timing representation from the speaker's articulatory plan for an utterance, while the (intrinsic) articulatory timing theories headed by Fowler incorporate time into that plan. Compared with articulatory timing studies, which face severe constraints in data collection, acoustic timing studies can handle even several hours of speech relatively easily; this enables suprasegmental as well as segmental timing studies. Perception of speech timing, on the other hand, is related to psychology rather than physiology and physics, so auditory timing studies enhance our understanding of speech timing from the psychological point of view. Traditionally, some theories of speech timing (e.g. the typology of speech rhythm: stress-timing, syllable-timing or mora-timing) have been based on perception. However, auditory timing, despite some validity, is problematic in that it can be subjective. Many questions about speech timing call for more objective answers, and acoustic and articulatory descriptions of timing offer a way to resolve such problems of auditory timing.

Feature Parameter Extraction and Analysis in the Wavelet Domain for Discrimination of Music and Speech (음악과 음성 판별을 위한 웨이브렛 영역에서의 특징 파라미터)

  • Kim, Jung-Min;Bae, Keun-Sung
    • MALSORI
    • /
    • no.61
    • /
    • pp.63-74
    • /
    • 2007
  • Discriminating music from speech in a multimedia signal is an important task in audio coding and broadcast monitoring systems. This paper deals with feature parameter extraction for the discrimination of music and speech. The wavelet transform is a multi-resolution analysis method useful for analyzing the temporal and spectral properties of non-stationary signals such as speech and audio. We propose new feature parameters, extracted from the wavelet-transformed signal, for discriminating music from speech. First, wavelet coefficients are obtained on a frame-by-frame basis, with the analysis frame size set to 20 ms. A parameter $E_{sum}$ is then defined by summing the magnitude differences between adjacent wavelet coefficients in each scale. The maximum and minimum values of $E_{sum}$ over a period of 2 seconds, which corresponds to the discrimination window, are used as the feature parameters. To evaluate their performance, discrimination accuracy was measured for various types of music and speech signals. In the experiment, each 2-second segment was classified as music or speech, and about 93% of the segments were detected correctly.
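
A minimal sketch of the $E_{sum}$ feature using PyWavelets, under the assumption that a Daubechies-4 wavelet with a 3-level decomposition is an acceptable choice (the abstract does not name the wavelet or depth):

```python
import numpy as np
import pywt  # PyWavelets

def esum(frame, wavelet="db4", level=3):
    """E_sum for one frame: the sum over scales of absolute magnitude
    differences between adjacent wavelet coefficients."""
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    return sum(np.sum(np.abs(np.diff(np.abs(c)))) for c in coeffs)

def esum_extrema(x, sr, frame_ms=20, window_s=2.0):
    """Max and min of per-frame E_sum over one 2-second decision window,
    the feature pair described in the abstract."""
    flen = int(sr * frame_ms / 1000)
    n = int(sr * window_s) // flen
    vals = [esum(x[i * flen:(i + 1) * flen]) for i in range(n)]
    return max(vals), min(vals)

sr = 16000
x = np.random.default_rng(0).standard_normal(2 * sr)  # stand-in for 2 s of audio
print(esum_extrema(x, sr))
```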

Speech treatment of velopharyngeal insufficiency using biofeedback technique with NM II; A case report (Nasometer 활용 바이오피드백 기법을 이용한 비인강폐쇄전환자의 치험 사례)

  • Yang Ji-Hyung;Choi Jin-Young
    • Korean Journal of Cleft Lip And Palate
    • /
    • v.8 no.1
    • /
    • pp.45-52
    • /
    • 2005
  • Velopharyngeal insufficiency (VPI), the failure of the velum, the lateral pharyngeal walls and the posterior pharyngeal wall to separate the nasal cavity from the pharyngeal cavity during speech, can be caused by congenital conditions including cleft palate, submucous cleft palate and congenital palatal insufficiency. The speech problems of VPI are characterized by hypernasality, nasal air emission, increased nasal air flow and decreased intelligibility. They can be treated with surgery, temporary prostheses and speech therapy. Biofeedback with the Nasometer is a speech treatment method for VPI commonly used as one component of a comprehensive procedure for improving the speech of patients with VPI. This article describes a case of VPI treated by biofeedback with the Nasometer, which showed satisfactory results in nasalance and formant analysis after 9 months of speech therapy.
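
The nasalance score the Nasometer reports is nasal acoustic energy as a percentage of combined nasal-plus-oral energy. Below is a minimal RMS-based sketch of that ratio, omitting the band-pass filtering the device applies to both channels:

```python
import numpy as np

def nasalance_percent(nasal, oral):
    """Nasalance = nasal energy / (nasal + oral energy) * 100, RMS-based."""
    n = np.sqrt(np.mean(np.square(nasal)))
    o = np.sqrt(np.mean(np.square(oral)))
    return 100.0 * n / (n + o)

rng = np.random.default_rng(1)
nasal_ch = 0.8 * rng.standard_normal(16000)  # toy stand-ins for the two
oral_ch = 1.0 * rng.standard_normal(16000)   # Nasometer microphone channels
print(f"nasalance: {nasalance_percent(nasal_ch, oral_ch):.1f}%")
```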

Speech Recognition in Noise Environment by Independent Component Analysis and Spectral Enhancement (독립 성분 분석과 스펙트럼 향상에 의한 잡음 환경에서의 음성인식)

  • Choi Seung-Ho
    • MALSORI
    • /
    • no.48
    • /
    • pp.81-91
    • /
    • 2003
  • In this paper, we propose a speech recognition method based on independent component analysis (ICA) and spectral enhancement techniques. While ICA tries to separate the speech signal from noisy speech using multiple channels, some noise remains due to its algorithmic limitations. Spectral enhancement techniques can compensate for the limits of ICA's separation ability. Speech recognition experiments in instantaneous and convolutive mixing environments show that the proposed approach yields much higher recognition accuracy than conventional methods.
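
A sketch of this two-stage front end, assuming scikit-learn's FastICA for the separation stage and plain magnitude spectral subtraction for the enhancement stage; the noise-only lead-in and the choice of component 0 as the speech source are illustrative assumptions, not the paper's method.

```python
import numpy as np
from scipy.signal import stft, istft
from sklearn.decomposition import FastICA

def ica_spectral_front_end(mixtures, sr, noise_s=0.25):
    """ICA separation followed by spectral subtraction on the chosen source.
    Assumes the first noise_s seconds are speech-free and that component 0
    is the speech source (in practice it must be identified)."""
    src = FastICA(n_components=mixtures.shape[1],
                  random_state=0).fit_transform(mixtures)[:, 0]
    f, t, Z = stft(src, fs=sr, nperseg=512)
    mag, phase = np.abs(Z), np.angle(Z)
    noise = mag[:, t < noise_s].mean(axis=1, keepdims=True)  # noise estimate
    clean = np.maximum(mag - noise, 0.05 * mag)              # spectral floor
    _, y = istft(clean * np.exp(1j * phase), fs=sr, nperseg=512)
    return y

rng = np.random.default_rng(0)
mix = rng.standard_normal((16000, 2))  # stand-in for a 2-channel noisy recording
print(ica_spectral_front_end(mix, 16000).shape)
```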

Robustness of Bimodal Speech Recognition on Degradation of Lip Parameter Estimation Performance (음성인식에서 입술 파라미터 열화에 따른 견인성 연구)

  • Kim, Jin-Young;Min, So-Hee;Choi, Seung-Ho
    • Speech Sciences
    • /
    • v.10 no.2
    • /
    • pp.27-33
    • /
    • 2003
  • Bimodal speech recognition based on lip reading has been studied as a representative method of speech recognition in noisy environments. There are three methods of integrating the speech and lip modalities: direct identification, separate identification and dominant recoding. In this paper we evaluate the robustness of these lip reading methods under the assumption that the lip parameters are estimated with errors. Through lip reading experiments, we show that the dominant-recoding approach is more robust than the other methods.
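
The abstract does not detail the three integration schemes, but separate identification, at least, is easy to sketch: each modality is scored on its own and the scores are fused with a reliability weight. The weight value and toy scores below are illustrative, not from the paper.

```python
import numpy as np

def separate_identification(audio_loglik, visual_loglik, alpha=0.7):
    """Score-level fusion: combine per-class log-likelihoods from the audio
    and lip models with reliability weight alpha (value illustrative)."""
    fused = alpha * np.asarray(audio_loglik) + (1 - alpha) * np.asarray(visual_loglik)
    return int(np.argmax(fused))

audio = [-4.2, -1.1, -3.8]   # toy per-word log-likelihoods from the audio model
visual = [-2.0, -2.5, -0.9]  # ... and from the lip reading model
print(separate_identification(audio, visual))  # index of the winning word
```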

The Literature Review of Speech Intelligibility in Congenitally Deafened Children with Cochlear Implantation (선천성 청각장애 아동의 와우이식 후 말 명료도에 관한 문헌 고찰)

  • Yoon Misun
    • MALSORI
    • /
    • no.47
    • /
    • pp.141-151
    • /
    • 2003
  • The speech intelligibility of congenitally deafened children changes after cochlear implantation. Predictors of this change include age at implantation, duration of implant use and communication mode. Among these, age at implantation appears to be one of the most important, yet these factors explain only part of the variance. Further study is therefore needed to identify the factors that affect speech intelligibility.

A Study on the Endpoint Detection by FIR Filtering (FIR filtering에 의한 끝점추출에 관한 연구)

  • Lee, Chang-Young
    • Speech Sciences
    • /
    • v.5 no.1
    • /
    • pp.81-88
    • /
    • 1999
  • This paper presents a method for speech detection. After first-order FIR filtering of the speech signals, we applied the conventional energy-based method of endpoint detection, which separates the signal from background noise. With this FIR filtering, only the Fourier components with large values of amplitude × frequency remain significant in the energy profile. Applying the procedure to the 445-word database constructed by ETRI, we confirmed that low-amplitude and/or low-frequency noise is clearly separated from the speech signal, thereby enhancing the feasibility of ideal endpoint detection.
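
A minimal sketch of the pipeline the abstract describes: a first-order FIR filter y[n] = x[n] - a·x[n-1], which boosts components roughly in proportion to frequency, followed by conventional frame-energy thresholding. The filter coefficient, threshold rule and noise-estimation window are illustrative assumptions.

```python
import numpy as np

def detect_endpoints(x, sr, frame_ms=10, k=3.0, a=0.95):
    """First-order FIR filtering, then frame-energy thresholding against the
    background level estimated from the leading (assumed speech-free) frames."""
    y = np.append(x[0], x[1:] - a * x[:-1])      # first-order FIR filter
    flen = int(sr * frame_ms / 1000)
    e = np.array([np.sum(y[i * flen:(i + 1) * flen] ** 2)
                  for i in range(len(y) // flen)])
    noise = e[:10].mean()                        # background energy estimate
    active = np.flatnonzero(e > k * noise)
    if active.size == 0:
        return None
    return active[0] * flen, (active[-1] + 1) * flen  # endpoint sample indices

sr = 16000
t = np.arange(sr) / sr
sig = np.where((t > 0.3) & (t < 0.7), np.sin(2 * np.pi * 300 * t), 0.0)
sig += 0.01 * np.random.default_rng(2).standard_normal(sr)
print(detect_endpoints(sig, sr))  # roughly (4800, 11200)
```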
