• Title/Summary/Keyword: speech database

Intonation Patterns of Korean Spontaneous Speech (한국어 자유 발화 음성의 억양 패턴)

  • Kim, Sun-Hee
    • Phonetics and Speech Sciences / v.1 no.4 / pp.85-94 / 2009
  • This paper investigates the intonation patterns of Korean spontaneous speech through an analysis of four dialogues in the domain of travel planning. The speech corpus, a subset of the spontaneous speech database recorded and distributed by ETRI, is labeled in APs and IPs according to the K-ToBI system using Momel, an intonation stylization algorithm. It was found that, unlike in English, a significant number of APs and IPs include hesitation lengthening, a disfluency phenomenon attributed to speech planning. This paper also argues that hesitation lengthening differs from IP-final lengthening and should be treated as a separate category, since it greatly affects the intonation patterns of the language. Apart from the 19.09% of APs that show hesitation lengthening, spontaneous speech exhibits the same AP patterns as read speech, but with a higher frequency of falling patterns such as LHL, whereas read speech shows more LH and LHLH patterns. The IP boundary tones of spontaneous speech show the same five patterns as read speech (L%, HL%, LHL%, H%, LH%), but with a higher frequency of rising patterns (H%, LH%) and contour tones (HL%, LH%, LHL%), whereas read speech shows more falling patterns and simple tones at the end of IPs.
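As a rough illustration of the kind of frequency analysis reported in this abstract, the sketch below tallies AP tone patterns and IP boundary tones from K-ToBI-style label strings. The label values are invented placeholders, not data from the ETRI corpus.

```python
# Minimal sketch: relative frequencies of AP patterns and IP boundary tones
# from hypothetical K-ToBI-style labels (illustrative values only).
from collections import Counter

ap_labels = ["LHL", "LH", "LHL", "LHLH", "LHL", "LH"]          # hypothetical AP patterns
ip_boundary_tones = ["L%", "HL%", "H%", "LH%", "LHL%", "H%"]   # hypothetical IP boundary tones

def relative_frequencies(labels):
    """Return each label's share of the total as a percentage."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: 100.0 * n / total for label, n in counts.items()}

print(relative_frequencies(ap_labels))
print(relative_frequencies(ip_boundary_tones))
```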

Performance Improvement of Automatic Speech Segmentation and Labeling System (자동 음성분할 및 레이블링 시스템의 성능향상)

  • Hong Seong Tae;Kim Je-U;Kim Hyeong-Sun
    • MALSORI / no.35_36 / pp.175-188 / 1998
  • A database segmented and labeled down to the phoneme level plays an important role in phonetic research and speech engineering. However, it usually requires manual segmentation and labeling, which is time-consuming and may also lead to inconsistent results. Automatic segmentation and labeling can be introduced to solve these problems. In this paper, we investigate methods to improve the performance of an automatic segmentation and labeling system, considering the Spectral Variation Function (SVF), modification of the silence model, and the use of energy variation in a postprocessing stage. SVF is applied in three ways: (1) as an addition to the feature parameters, (2) in postprocessing of phoneme boundaries, and (3) to restrict the Viterbi path so that the resulting phoneme boundaries fall in frames around SVF peaks. In the postprocessing stage, the positions with the greatest energy variation during the transition between silence and other phonemes are used to adjust boundaries. To evaluate the performance of the system, we used a 452 phonetically balanced word (PBW) database for training phoneme models and a phonetically balanced sentence (PBS) database for testing. In our experiments, 83.1% (a 6.2% improvement) and 95.8% (a 0.9% improvement) of phoneme boundaries fell within 20 ms and 40 ms of the manually segmented boundaries, respectively.
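As a rough illustration of two elements mentioned in this abstract, the sketch below computes a simple Spectral Variation Function over feature frames and the 20 ms / 40 ms boundary-agreement measure. The frame shift, feature source, and test values are assumptions, not the authors' settings.

```python
import numpy as np

def spectral_variation(frames):
    """SVF approximated as the Euclidean distance between consecutive feature frames."""
    diffs = np.diff(frames, axis=0)
    return np.linalg.norm(diffs, axis=1)

def boundary_agreement(auto, manual, tol):
    """Fraction of automatic boundaries within `tol` seconds of some manual boundary."""
    manual = np.asarray(manual)
    hits = sum(np.min(np.abs(manual - b)) <= tol for b in auto)
    return hits / len(auto)

# Hypothetical usage: random "MFCC" frames at a 10 ms shift and toy boundary times.
frames = np.random.randn(200, 13)
svf = spectral_variation(frames)                    # peaks suggest phoneme transitions
auto_bounds = [0.120, 0.310, 0.580]                 # seconds, from the recognizer
manual_bounds = [0.113, 0.322, 0.605]               # seconds, hand-labeled
print(boundary_agreement(auto_bounds, manual_bounds, tol=0.020))
print(boundary_agreement(auto_bounds, manual_bounds, tol=0.040))
```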

Comparison of Adult and Child's Speech Recognition of Korean (한국어에서의 성인과 유아의 음성 인식 비교)

  • Yoo, Jae-Kwon;Lee, Kyoung-Mi
    • The Journal of the Korea Contents Association / v.11 no.5 / pp.138-147 / 2011
  • While most Korean speech databases have been developed for adults' speech rather than children's, various children's speech databases exist for other languages. Because children's and adults' speech differ widely in acoustic and linguistic characteristics, a children's speech database needs to be developed. In this paper, to examine these differences in Korean, we built HMM-based speech recognizers and tested them according to gender, age, and the presence of vocal tract length normalization (VTLN). This paper shows that the recognizer trained on children's speech achieves a much higher recognition rate than the one trained on adults' speech, and that using VTLN helps improve the recognition rate in Korean.
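To illustrate the idea behind VTLN mentioned in this abstract, the sketch below applies a piecewise-linear warp to the frequency axis before filterbank analysis, compensating for differences in vocal tract length. The warp shape, cutoff, and warp factor are common textbook choices, not details taken from the paper.

```python
import numpy as np

def vtln_warp(freqs, alpha, f_max=8000.0, f_cut=0.85):
    """Piecewise-linear VTLN warp of frequencies `freqs` (Hz) by factor `alpha`.

    Below the knee the axis is scaled by alpha; above it, a linear segment
    maps the remainder back onto [knee, f_max] so the Nyquist edge is preserved.
    """
    freqs = np.asarray(freqs, dtype=float)
    knee = f_cut * f_max
    return np.where(
        freqs <= knee,
        alpha * freqs,
        alpha * knee + (f_max - alpha * knee) * (freqs - knee) / (f_max - knee),
    )

# Hypothetical warp factor for a child speaker relative to adult-trained models.
print(vtln_warp([500, 1500, 4000, 7500], alpha=1.1))
```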

A Low-Cost Speech to Sign Language Converter

  • Le, Minh;Le, Thanh Minh;Bui, Vu Duc;Truong, Son Ngoc
    • International Journal of Computer Science & Network Security / v.21 no.3 / pp.37-40 / 2021
  • This paper presents a design of a speech to sign language converter for deaf and hard of hearing people. The device is low-cost, has low power consumption, and is able to work entirely offline. Speech recognition is implemented using an open-source library, Pocketsphinx. In this work, we propose a context-oriented language model that measures the similarity between the recognized speech and predefined speech to decide the output. The output is selected from the recommended speech stored in the database as the best match to the recognized speech. The proposed context-oriented language model improves the speech recognition rate by 21% while working entirely offline. A decision module determines the output sign language by measuring the similarity between the two texts using the Levenshtein distance. The output sign language corresponding to the recognized speech is generated as a set of sequential images. The speech to sign language converter is deployed on a Raspberry Pi Zero board as a low-cost assistive device for the deaf.
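A minimal sketch of the decision step described in this abstract: pick the predefined sentence whose text is closest, by Levenshtein distance, to the recognizer output. The command list is purely illustrative; the paper's database is not reproduced here.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between strings a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def best_match(recognized, predefined):
    """Return the predefined sentence with the smallest edit distance."""
    return min(predefined, key=lambda s: levenshtein(recognized, s))

predefined = ["turn on the light", "open the door", "call for help"]
print(best_match("turn on the lite", predefined))  # -> "turn on the light"
```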

On Effective Dual-Channel Noise Reduction for Speech Recognition in Car Environment

  • Ahn, Sung-Joo;Kang, Sun-Mee;Ko, Han-Seok
    • Speech Sciences / v.11 no.1 / pp.43-52 / 2004
  • This paper concerns an effective dual-channel noise reduction method for increasing speech recognition performance in a car environment. While various single-channel methods have already been developed and dual-channel methods have been studied to some extent, their effectiveness in real environments such as cars has not yet been shown to reach an acceptable performance level. Our aim is to remedy the low performance of single- and dual-channel noise reduction methods. This paper proposes an effective dual-channel noise reduction method based on a high-pass filter and eigendecomposition-based front-end processing. We experimented with a real multi-channel car database and compared the results with respect to microphone arrangement. The analysis and results show that the enhanced eigendecomposition method combined with the high-pass filter significantly improves speech recognition performance in a dual-channel environment.
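As a generic illustration of the two ingredients named in this abstract, the sketch below applies a simple first-order high-pass pre-filter and a signal-subspace projection obtained from an eigendecomposition of the frame covariance. The frame size, noise-floor estimate, and use of random single-channel data are assumptions; this is not the authors' front end.

```python
import numpy as np

def high_pass(x, coeff=0.97):
    """First-order high-pass (pre-emphasis style) filter: y[n] = x[n] - coeff * x[n-1]."""
    y = np.copy(x)
    y[1:] -= coeff * x[:-1]
    return y

def subspace_enhance(frames, noise_var):
    """Project frames onto eigenvectors whose eigenvalues exceed an assumed noise floor."""
    cov = np.cov(frames, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    signal_basis = eigvecs[:, eigvals > noise_var]   # retained signal subspace
    return frames @ signal_basis @ signal_basis.T    # project and reconstruct

# Hypothetical usage on random data framed into 32-sample vectors.
channel = np.random.randn(3200)
frames = high_pass(channel).reshape(-1, 32)
enhanced = subspace_enhance(frames, noise_var=0.5)
print(enhanced.shape)
```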

A Study on Vocabulary-Independent Continuous Speech Recognition System for Intelligent Home Network System (지능형 홈네트워크 시스템을 위한 가변어휘 연속음성인식시스템에 관한 연구)

  • Lee, Ho-Woong;Jeong, Hee-Suk
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.7 no.2 / pp.37-42 / 2008
  • In this paper, a vocabulary-independent continuous speech recognition system for speech control of an intelligent home network is presented. This study proposes a keyword-based conversational scenario over a continuous natural vocabulary for recognizing natural speech commands, and a way of optimizing the recognizer by constructing the recognition system and its database around those keywords.
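A toy sketch of keyword-driven command interpretation in this spirit: scan the recognizer's word sequence for device and action keywords and map them to a home-network command. The keyword tables and command format are purely illustrative, not the system described in the paper.

```python
# Hypothetical keyword tables for a home-network controller.
DEVICES = {"light": "LIGHT", "television": "TV", "heater": "HEATER"}
ACTIONS = {"on": "ON", "off": "OFF", "up": "UP", "down": "DOWN"}

def to_command(recognized_words):
    """Map a recognized word sequence to a (device, action) command, if possible."""
    device = next((DEVICES[w] for w in recognized_words if w in DEVICES), None)
    action = next((ACTIONS[w] for w in recognized_words if w in ACTIONS), None)
    return (device, action) if device and action else None

print(to_command("please turn the light on".split()))  # ('LIGHT', 'ON')
```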

Korean Prosody Generation Based on Stem-ML (Stem-ML에 기반한 한국어 억양 생성)

  • Han, Young-Ho;Kim, Hyung-Soon
    • MALSORI / no.54 / pp.45-61 / 2005
  • In this paper, we present a method of generating intonation contours for a Korean text-to-speech (TTS) system and a method of synthesizing emotional speech, both based on Soft template mark-up language (Stem-ML), a novel prosody generation model combining mark-up tags and pitch generation in one. The evaluation shows that the intonation contour generated with Stem-ML is better than that of our previous work. It is also found that Stem-ML is a useful tool for generating emotional speech by controlling a limited number of tags. A large emotional speech database is crucial for more extensive evaluation.
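The sketch below only illustrates the general tag-to-contour idea: each mark-up tag contributes a pitch target with a strength, and the final F0 contour is a smoothed interpolation of those targets. It is not the Stem-ML model itself; the baseline, target values, and smoothing are invented for illustration.

```python
import numpy as np

def contour_from_tags(tags, n_frames=100, baseline_hz=120.0, smooth=5):
    """tags: list of (frame_index, target_hz, strength in [0, 1])."""
    f0 = np.full(n_frames, baseline_hz)            # hypothetical speaker baseline
    for idx, target, strength in tags:
        # Pull the contour toward the target in proportion to the tag's strength.
        f0[idx] = (1 - strength) * f0[idx] + strength * target
    kernel = np.ones(smooth) / smooth              # moving-average smoothing
    return np.convolve(f0, kernel, mode="same")

tags = [(10, 180.0, 0.9), (50, 140.0, 0.6), (90, 100.0, 0.8)]
print(contour_from_tags(tags)[:12])
```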

Noise Reduction Using MMSE Estimator-based Adaptive Comb Filtering (MMSE Estimator 기반의 적응 콤 필터링을 이용한 잡음 제거)

  • Park, Jeong-Sik;Oh, Yung-Hwan
    • MALSORI / no.60 / pp.181-190 / 2006
  • This paper describes a speech enhancement scheme that leads to significant improvements in recognition performance when used in the ASR front end. The proposed approach is based on adaptive comb filtering and an MMSE-related parameter estimator. While adaptive comb filtering reduces noise components remarkably well, it is rarely effective against non-stationary noise. Furthermore, because of the comb filter's uniformly distributed frequency response, it can cause serious distortion to clean speech signals. This paper proposes an improved comb filter that adjusts its spectral magnitude to the original speech, based on the speech absence probability and a gain modification function. In addition, we introduce a modified comb-filtering-based speech enhancement scheme for ASR in mobile environments. Evaluation experiments carried out on the Aurora 2 database demonstrate that the proposed method outperforms conventional adaptive comb filtering techniques in both clean and noisy environments.
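A schematic sketch of the gain-adjustment idea in this abstract: a harmonic comb gain is attenuated where the speech absence probability is high, and the modified gain is applied to the noisy magnitude spectrum. The pitch bin, gain floor, and per-bin probabilities are placeholders, not the paper's MMSE-based estimator.

```python
import numpy as np

def comb_gain(n_bins, pitch_bin, width=1, floor=0.1):
    """Gain close to 1 near harmonics of `pitch_bin`, a small floor elsewhere."""
    gain = np.full(n_bins, floor)
    for h in range(pitch_bin, n_bins, pitch_bin):
        lo, hi = max(0, h - width), min(n_bins, h + width + 1)
        gain[lo:hi] = 1.0
    return gain

def enhance_frame(noisy_magnitude, pitch_bin, speech_absence_prob):
    """Apply the comb gain, scaled down in bins where speech is probably absent."""
    gain = comb_gain(len(noisy_magnitude), pitch_bin)
    gain *= (1.0 - speech_absence_prob)
    return gain * noisy_magnitude

magnitude = np.abs(np.fft.rfft(np.random.randn(256)))     # toy noisy frame
sap = np.random.rand(len(magnitude))                      # hypothetical per-bin SAP
print(enhance_frame(magnitude, pitch_bin=16, speech_absence_prob=sap)[:8])
```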

The Speech Database for Large Scale Word Recognizer (Large scale word recognizer를 위한 음성 database - POW)

  • 임연자
    • Proceedings of the Acoustical Society of Korea Conference / 1995.06a / pp.291-294 / 1995
  • This paper describes the POW algorithm and the POW set for a large-scale word recognizer obtained by running the algorithm. To build a speech database for a large-scale word recognizer, every possible phonological phenomenon must be included in the POW set. In addition, the distribution of phonological phenomena in the POW set should be similar to that of the target population. To this end, we propose a new algorithm for extracting a POW set with the following three properties: (1) it includes every phonological phenomenon occurring in the population; (2) it consists of a minimal set of words; (3) the distribution of phonological phenomena in the POW set is similar to that of the population. We extracted the 5,000 most frequent word forms from a Korean text corpus of about three million word forms (eojeol) and derived the Korean POW set from them.
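A greedy sketch of the selection idea described in this abstract: repeatedly add the candidate word that covers the most not-yet-covered phonological phenomena, stopping once everything observed in the population is covered. The toy phenomenon labels are placeholders, and matching the population's frequency distribution (the third requirement) would need an additional weighting term not shown here.

```python
def select_pow_set(word_phenomena):
    """Greedy cover: word_phenomena maps each candidate word to its set of phenomena."""
    uncovered = set().union(*word_phenomena.values())
    selected = []
    while uncovered:
        # Pick the word covering the largest number of still-uncovered phenomena.
        word = max(word_phenomena, key=lambda w: len(word_phenomena[w] & uncovered))
        selected.append(word)
        uncovered -= word_phenomena[word]
    return selected

words = {
    "w1": {"tensification", "nasalization"},
    "w2": {"nasalization", "liaison"},
    "w3": {"liaison"},
}
print(select_pow_set(words))  # e.g. ['w1', 'w2']
```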

A Study of Speech Control Tags Based on Semantic Information of a Text (텍스트의 의미 정보에 기반을 둔 음성컨트롤 태그에 관한 연구)

  • Chang, Moon-Soo;Chung, Kyeong-Chae;Kang, Sun-Mee
    • Speech Sciences / v.13 no.4 / pp.187-200 / 2006
  • Speech synthesis technology is widely used, and its application area is broadening to automatic response services, learning systems for people with disabilities, and so on. However, the sound quality of speech synthesizers has not yet reached a level that satisfies users. Existing synthesizers generate rhythm only from interval information such as spaces and commas, or from a few punctuation marks such as question marks and exclamation marks, so it is difficult to produce natural human rhythm even with a large speech database. One way to address this problem is to select rhythm after higher-level language processing. This paper proposes a method for generating tags that control rhythm by analyzing the meaning of a sentence together with speech-situation information. We use Systemic Functional Grammar (SFG) [4], which analyzes the meaning of a sentence with speech-situation information, considering the preceding sentence, the conversational situation, the relationships among the participants, and so on. In this study, we generate Semantic Speech Control Tags (SSCT) from the results of SFG meaning analysis and speech waveform analysis.
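A toy sketch of the tag-generation step in this spirit: map a (hypothetical) semantic analysis of an utterance to prosody-control attributes and emit a mark-up tag around the text. The feature names, attribute values, and tag syntax are illustrative only and are not the paper's SSCT definition.

```python
def to_control_tag(text, semantics):
    """Wrap `text` in a prosody-control tag derived from a semantic feature dict."""
    attrs = []
    if semantics.get("speech_function") == "question":
        attrs.append('pitch="rising"')
    if semantics.get("affect") == "urgent":
        attrs.append('rate="fast"')
    attr_str = (" " + " ".join(attrs)) if attrs else ""
    return f"<speech-control{attr_str}>{text}</speech-control>"

print(to_control_tag("Could you close the door", {"speech_function": "question"}))
```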
