• Title/Abstract/Keyword: Korean speech

Search results: 5,286

VoIP 환경에서의 잡음제거를 위한 최적화된 위너 필터 (Optimized Wiener Filter for Noise Reduction in VoIP Environments)

  • 정상배; 이성독; 한민수
    • 대한음성학회지: 말소리, No. 64, pp. 105-119, 2007
  • Noise reduction technologies are indispensable for achieving acceptable speech quality in VoIP systems. This paper proposes a Wiener filter optimized for the estimated SNR of noisy speech, used for noise reduction in VoIP environments. The proposed noise canceller is applied as a pre-processor before speech encoding. Its performance is evaluated with PESQ under various noise conditions, with the algorithm applied to the VoIP speech codecs G.711, G.723.1, and G.729A. The PESQ results show that the proposed noise reduction scheme outperforms the noise suppression of the IS-127 EVRC and of the ETSI standard for the advanced distributed speech recognition front-end.

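The Wiener filter in the entry above is driven by an estimated SNR of the noisy speech. A minimal sketch of that general idea (not the authors' exact optimization), using the common decision-directed a-priori SNR estimate with an assumed smoothing constant:

```python
import numpy as np

def wiener_gains(noisy_power, noise_power, prev_clean_power, alpha=0.98):
    """Per-bin Wiener gains from a decision-directed a-priori SNR estimate.

    noisy_power, noise_power, prev_clean_power: per-frequency power arrays for
    the current frame.  alpha is an assumed smoothing constant, not a value
    taken from the paper.
    """
    post_snr = np.maximum(noisy_power / (noise_power + 1e-12) - 1.0, 0.0)
    prio_snr = alpha * prev_clean_power / (noise_power + 1e-12) + (1.0 - alpha) * post_snr
    return prio_snr / (1.0 + prio_snr)   # classical Wiener gain SNR / (SNR + 1)

# Usage: multiply the gains onto the noisy spectrum before handing the frame
# to the VoIP codec (G.711, G.723.1, G.729A, ...).
```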

뇌성마비 성인 발화의 운율특성 (Prosodic Properties in the Speech of Adults with Cerebral Palsy)

  • 이숙향; 고현주; 김수진
    • 대한음성학회지: 말소리, No. 64, pp. 39-51, 2007
  • The purpose of this study is to investigate prosodic characteristics in the speech of adults with cerebral palsy through a comparison with the speech of normal speakers. Ten speakers with cerebral palsy (6 males, 4 females) and 6 normal speakers (3 males, 3 females) served as subjects. The results revealed that, compared to normal speakers, speakers with cerebral palsy showed a slower speech rate, a larger number of intonational phrases (IPs) and pauses, a larger number of accentual phrases (APs) per IP, a longer duration of pauses, and more gradual slopes of [L+H] in APs. However, the two groups showed similar tone patterns in their APs. The results also showed mild to moderate correlations between speech intelligibility and the prosodic properties that differed significantly between the two groups, suggesting that they could be important prosodic factors for predicting speech intelligibility in the speech of adults with cerebral palsy.

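The "mild to moderate correlations" reported above come from a standard correlation analysis between intelligibility scores and prosodic measures. A small illustrative computation with scipy; the values below are placeholders, not the study's data:

```python
from scipy.stats import pearsonr

# Placeholder per-speaker measurements (not the study's data).
intelligibility = [62, 71, 55, 80, 68, 74, 59, 66, 77, 70]            # percent correct
speech_rate_spm = [180, 210, 150, 250, 200, 230, 160, 190, 240, 215]  # syllables/min

r, p = pearsonr(intelligibility, speech_rate_spm)
print(f"r = {r:.2f}, p = {p:.3f}")   # "mild to moderate" corresponds roughly to |r| ~ 0.3-0.6
```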

켑스트럼 거리 기반의 음성/음악 판별 성능 향상 (Performance Improvement of Speech/Music Discrimination Based on Cepstral Distance)

  • 박슬한; 최무열; 김형순
    • 대한음성학회지: 말소리, No. 56, pp. 195-206, 2005
  • Discrimination between speech and music is important in many multimedia applications. In this paper, focusing on the spectral-change characteristics of speech and music, we propose a new method of speech/music discrimination based on cepstral distance. Instead of using the cepstral distance between frames at a fixed interval, the minimum of the cepstral distances among neighboring frames is employed to increase the discriminability between fast-changing music and speech. In addition, to prevent speech segments containing short pauses from being misclassified as music, short-pause segments are excluded from the cepstral distance computation. The experimental results show that the proposed method yields an error rate reduction of 68% in comparison with the conventional approach using cepstral distance.

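Two points in the entry above lend themselves to a short sketch: taking the minimum cepstral distance over several neighboring frames instead of a fixed-interval distance, and excluding short-pause frames from the computation. The window of candidate lags and the energy-based pause threshold below are assumptions:

```python
import numpy as np

def min_neighbor_cepstral_distance(cep, log_energy, lags=range(2, 9), pause_db=-50.0):
    """cep: (n_frames, n_coef) cepstra; log_energy: (n_frames,) frame energy in dB.

    For each non-pause frame, return the minimum Euclidean cepstral distance to
    the frames `lags` ahead, skipping frames flagged as short pauses.  The lag
    window and pause threshold are illustrative assumptions.
    """
    voiced = log_energy > pause_db
    n = len(cep)
    dists = np.full(n, np.nan)
    for t in range(n):
        if not voiced[t]:
            continue
        cands = [np.linalg.norm(cep[t] - cep[t + k])
                 for k in lags if t + k < n and voiced[t + k]]
        if cands:
            dists[t] = min(cands)
    return dists   # frame-level minima are then pooled into a speech/music score
```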

음성 질의 기반 디지털 사진 검색 기법 (A Query-by-Speech Scheme for Photo Albuming)

  • 김태성; 서영주; 이용주; 김회린
    • 대한음성학회지: 말소리, No. 57, pp. 99-112, 2006
  • In this paper, we introduce two retrieval methods for photos annotated with speech documents. We compare the pattern of a speech query with those of the speech documents recorded by digital cameras, measure the similarities, and retrieve the photos whose speech documents have high similarity scores. In the first approach, a phoneme recognizer is used as the pre-processor for pattern matching; in the second, vector quantization (VQ) and dynamic time warping (DTW) are applied to match the speech query against the documents in the signal domain itself. Experimental results show that the performance of the first approach depends heavily on phoneme recognition accuracy, although its processing time is short. The second method provides a large performance improvement; its processing time is longer than that of the first method because of DTW, but it can be reduced by using approximation methods.

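The second retrieval method above matches the spoken query against the spoken photo annotations with VQ and DTW in the signal domain. A bare-bones DTW over feature sequences might look like the following; feature extraction and the VQ codebook are omitted and left as assumptions:

```python
import numpy as np

def dtw_distance(query, doc):
    """Classic dynamic time warping between two feature sequences.

    query: (n, d) array, doc: (m, d) array (e.g., frame-level spectral features).
    Returns the accumulated alignment cost; smaller means more similar.
    """
    n, m = len(query), len(doc)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(query[i - 1] - doc[j - 1])
            acc[i, j] = cost + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return acc[n, m]

# Usage: rank the photo album by dtw_distance(query_feats, doc_feats)
# and return the photos with the lowest costs (highest similarity).
```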

정상 성인의 말속도 및 유창성 연구 (A Study of Speech Rate and Fluency in Normal Speakers)

  • 신문자; 한숙자
    • 음성과학, Vol. 10, No. 2, pp. 159-168, 2003
  • The purpose of this study was to assess the speech rate, fluency, and disfluency types of normal adults in order to provide baseline data on normal speech. Thirty subjects (14 females and 16 males), aged 17 to 36, participated. Rate was measured in syllables per minute (SPM). Speech rates in reading ranged from 273 to 426 SPM with a mean of 348 SPM, and in speaking from 118 to 409 SPM with a mean of 265 SPM. Average fluency was 99.1% in reading and 96.9% in speaking. Rater reliability for speech rate, assessed from video recordings, was very high (r=0.98), and rater reliability for fluency was moderately high (r=0.67). Disfluency types were also analyzed from 150 disfluency episodes; syllable repetition and word interjection were the most common types.

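The speech-rate and fluency figures above reduce to simple arithmetic once syllables and disfluencies have been counted. A sketch; the sample values are illustrative only, not taken from the study:

```python
def syllables_per_minute(n_syllables, duration_sec):
    """Speech rate in syllables per minute (SPM)."""
    return n_syllables * 60.0 / duration_sec

def fluency_percent(n_syllables, n_disfluent):
    """Percentage of fluently produced syllables."""
    return 100.0 * (n_syllables - n_disfluent) / n_syllables

# Illustrative values only:
print(syllables_per_minute(348, 60.0))   # 348.0 SPM, near the mean reading rate above
print(fluency_percent(1000, 31))         # 96.9, comparable to the speaking condition
```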

TMS320C2000계열 DSP를 이용한 단일칩 음성인식기 구현 (Implementation of a Single-chip Speech Recognizer Using the TMS320C2000 DSPs)

  • 정익주
    • 음성과학, Vol. 14, No. 4, pp. 157-167, 2007
  • In this paper, we implemented a single-chip speech recognizer using the TMS320C2000 DSPs. For this implementation, we developed a very small speaker-dependent recognition engine based on dynamic time warping, which is especially suited to embedded systems whose resources are severely limited. We carried out several optimizations, including speed optimization by programming time-critical functions in assembly language, code size reduction, and effective memory allocation. On the TMS320F2801 DSP, which has 12 Kbytes of SRAM and 32 Kbytes of flash ROM, the recognizer can recognize 10 commands. On the TMS320F2808 DSP, which has 36 Kbytes of SRAM and 128 Kbytes of flash ROM, it can additionally output the speech sound corresponding to the recognition result. The response sounds, captured when the user trains the commands, are encoded with ADPCM and stored in flash ROM. The single-chip recognizer needs few parts other than the DSP itself and an op amp for amplifying the microphone output and anti-aliasing, so it can play a role similar to that of dedicated speech recognition chips.

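A useful back-of-the-envelope check for a recognizer like the one above is whether the reference templates for 10 commands fit in the stated memory. The frame rate, feature dimension, and word length below are assumptions for illustration, not figures from the paper:

```python
# Rough template-memory estimate for a small DTW recognizer (all values assumed).
N_COMMANDS     = 10    # as stated in the entry above
AVG_FRAMES     = 80    # ~0.8 s per command at a 10 ms frame shift (assumption)
FEATURE_DIM    = 12    # e.g., 12 cepstral coefficients per frame (assumption)
BYTES_PER_COEF = 2     # 16-bit fixed point (assumption)

template_bytes = N_COMMANDS * AVG_FRAMES * FEATURE_DIM * BYTES_PER_COEF
print(template_bytes)  # 19200 bytes: more than 12 KB of SRAM, so templates would
                       # be kept in flash ROM and streamed through SRAM frame by frame.
```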

청각 장애자용 발음 훈련 기기의 개발 (Speech Training Aids for the Deaf)

  • 김동준; 윤태성; 박상희
    • 제어로봇시스템학회 학술대회논문집: 제어로봇시스템학회 1991년도 한국자동제어학술회의논문집(국내학술편), KOEX, Seoul, 22-24 Oct. 1991, pp. 746-751
  • Deaf people train articulation by observing a tutor's mouth, by tactually sensing the motions of the vocal organs, or by using speech training aids. Existing speech training aids for the deaf can measure only a single speech parameter, or display only frequency spectra as histograms or in pseudo-color. This study aims to develop a speech training aid that displays the subject's articulation as a cross section of the vocal organs together with other speech parameters in a single system, so that the subject knows what to correct. To this end, the speech production mechanism is first assumed to follow an AR model in order to estimate the articulatory motions of the vocal tract from the speech signal. Next, a vocal tract profile model is built using LPC analysis. Using this model, articulatory motions for Korean vowels are estimated and displayed as vocal tract profile graphics.

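The entry above estimates articulatory configuration from an AR (LPC) model of the speech signal. One textbook route from LPC analysis to a vocal-tract profile goes through reflection coefficients and relative tube areas under lossless-tube assumptions; the sketch below follows that route and is not necessarily the authors' exact procedure:

```python
import numpy as np

def reflection_coeffs(frame, order=12):
    """Levinson-Durbin recursion on the autocorrelation; returns reflection coefficients."""
    frame = np.asarray(frame, dtype=float)
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:len(frame) + order]
    a = np.zeros(order + 1); a[0] = 1.0
    k = np.zeros(order)
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[1:i][::-1])
        k[i - 1] = -acc / err
        a[1:i + 1] = a[1:i + 1] + k[i - 1] * np.concatenate((a[1:i][::-1], [1.0]))
        err *= 1.0 - k[i - 1] ** 2
    return k

def area_ratios(k):
    """Relative cross-sectional areas of a lossless-tube vocal tract model."""
    areas = [1.0]
    for ki in k:
        areas.append(areas[-1] * (1.0 - ki) / (1.0 + ki))
    return np.array(areas)   # plot these sections as a crude vocal-tract profile
```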

다양한 음성코퍼스의 통합관리시스템의 설계 및 구현에 관한 검토 (An Investigation for Design and Implementation of an Integrated Data Management System of Various Speech Corpora)

  • 황경훈; 정창원; 김영일; 김봉완; 이용주
    • 대한음성학회 학술대회논문집: 대한음성학회 2003년도 10월 학술대회지, pp. 69-72, 2003
  • In this paper, we investigate the factors relevant to the design and implementation of an integrated management system for various speech corpora. The purpose is to manage, in a single system, the various kinds of speech corpora needed for speech research, even when they are constructed in different data formats. We also consider how to let users search effectively for speech corpora that meet the conditions they want, and how to let them easily add newly constructed corpora. To achieve this, we design a global schema that integrates newly added information without changing existing speech corpora, and we construct a web-based integrated management system on this schema that can be accessed without temporal or spatial restrictions. We describe the steps by which the system can be implemented, examine the system, and discuss related future research topics.

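A "global schema" over heterogeneous corpora, as described above, typically maps each corpus's native metadata onto a common set of fields while leaving room for corpus-specific extras. The table layout and field names below are illustrative assumptions only, not the system's actual design:

```python
import sqlite3

# Illustrative global schema: common fields live in fixed tables, while
# corpus-specific metadata goes into a key-value table so that newly added
# corpora do not require changes to existing ones.  All names are assumptions.
conn = sqlite3.connect("speech_corpora.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS corpus (
    corpus_id   INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    data_format TEXT
);
CREATE TABLE IF NOT EXISTS utterance (
    utt_id      INTEGER PRIMARY KEY,
    corpus_id   INTEGER REFERENCES corpus(corpus_id),
    speaker     TEXT,
    transcript  TEXT,
    audio_path  TEXT
);
CREATE TABLE IF NOT EXISTS extra_meta (
    utt_id      INTEGER REFERENCES utterance(utt_id),
    key         TEXT,
    value       TEXT
);
""")
conn.commit()
```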

시간-주파수 스무딩이 적용된 소프트 마스크 필터를 이용한 단일 채널 음성 분리 (Single-Channel Speech Separation Using the Time-Frequency Smoothed Soft Mask Filter)

  • 이윤경; 권오욱
    • 대한음성학회지: 말소리, No. 67, pp. 195-216, 2008
  • This paper addresses the problem of single-channel speech separation, that is, extracting the speech signal uttered by the speaker of interest from a mixture of speech signals. We propose applying time-frequency smoothing to two existing statistical single-channel speech separation algorithms: the soft mask and the minimum mean square error (MMSE) algorithms. The proposed method uses two smoothing filters: a uniform mask filter, whose length is uniform over the time-frequency domain, and a mel-scale filter, whose length varies according to the mel scale. In our speech separation experiments, the uniform mask filter improves the speaker-to-interference ratio (SIR) by 2.1 dB and 1 dB for the soft mask and MMSE algorithms, respectively, whereas the mel-scale filter achieves improvements of 1.1 dB and 0.8 dB for the same algorithms.

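The core operation above is a soft (ratio) mask smoothed over the time-frequency plane. A minimal sketch with a uniform smoothing window; the window size and the way the per-speaker power spectra are estimated are assumptions, and the mel-scaled variant is omitted:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def smoothed_soft_mask(target_power, interferer_power, size=(3, 5)):
    """Soft mask for the target speaker, smoothed over (frequency, time).

    target_power, interferer_power: (n_freq, n_frames) power estimates for the
    two speakers (e.g., from speaker-dependent models).  `size` is an assumed
    uniform smoothing window, not the paper's setting.
    """
    mask = target_power / (target_power + interferer_power + 1e-12)
    return uniform_filter(mask, size=size)

# Usage: target_estimate = smoothed_soft_mask(tgt_pow, intf_pow) * mixture_spectrogram
```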

언어 및 인지 과제 동시수행이 발화속도에 미치는 영향 (Effects of Concurrent Linguistic or Cognitive Tasks on Speech Rate)

  • 한지연; 김효정; 김문정
    • 대한음성학회 학술대회논문집: 대한음성학회 2007년도 한국음성과학회 공동학술대회 발표논문집, pp. 102-105, 2007
  • This study was designed to examine the effects of concurrent linguistic or cognitive tasks on speech rate. Eight normal speakers repeated sentences with or without simultaneously performing a linguistic task or a cognitive task. The linguistic task consisted of generating verbs from nouns, and the cognitive task consisted of mental arithmetic. Speech rate was measured from the acoustic data. A one-way ANOVA was conducted to test for speech rate differences among the three task types. The results showed no significant difference between the sentence-repetition and linguistic-task conditions, but significant differences were found in the comparisons involving the cognitive task.

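The one-way ANOVA above compares speech rate across the three task conditions. With scipy this is a single call; the numbers below are placeholders, not the study's data:

```python
from scipy.stats import f_oneway

# Placeholder speech rates (syllables per second) for the three conditions.
repeat_only     = [5.1, 4.9, 5.3, 5.0, 5.2, 4.8, 5.1, 5.0]
with_linguistic = [4.9, 4.8, 5.1, 4.9, 5.0, 4.7, 4.9, 4.8]
with_cognitive  = [4.2, 4.0, 4.5, 4.1, 4.3, 3.9, 4.2, 4.1]

f_stat, p_value = f_oneway(repeat_only, with_linguistic, with_cognitive)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```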