• Title/Abstract/Keywords: speech features

647 search results (processing time: 0.026 s)

음성 특징의 효율성 (EFFICIENCY OF SPEECH FEATURES)

  • 황규웅
    • 한국음향학회:학술대회논문집 / 한국음향학회 1995년도 제12회 음성통신 및 신호처리 워크샵 논문집 (SCAS 12권 1호) / pp.225-227 / 1995
  • This paper compares waveform, cepstrum, and spline wavelet features using nonlinear discriminant analysis. This measure reflects the efficiency of a speech parametrization better than older linear separability criteria and can also be used to measure the efficiency of each layer of a given system. The spline wavelet transform yields larger gaps between classes, while the cepstrum clusters better than the spline wavelet feature. Neither feature has good properties for classification; in future work we will compare the Gabor wavelet transform, Mel cepstrum, delta cepstrum, etc.
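For context, a minimal sketch of the kind of "old" linear separability criterion the abstract contrasts against is the Fisher ratio: between-class scatter divided by within-class scatter. The toy 1-D feature values below are illustrative, not data from the paper.

```python
# Fisher ratio for two classes of 1-D features: high when class means are far
# apart and each class is tightly clustered, low when the classes overlap.

def mean(xs):
    return sum(xs) / len(xs)

def fisher_ratio(class_a, class_b):
    ma, mb = mean(class_a), mean(class_b)
    # within-class scatter: summed squared deviations inside each class
    sw = sum((x - ma) ** 2 for x in class_a) + sum((x - mb) ** 2 for x in class_b)
    # between-class scatter: squared distance between the class means
    sb = (ma - mb) ** 2
    return sb / sw

# Well-separated, tight clusters -> high ratio; overlapping clusters -> low ratio.
tight = fisher_ratio([1.0, 1.1, 0.9], [5.0, 5.1, 4.9])
loose = fisher_ratio([1.0, 3.0, 2.0], [2.5, 4.5, 3.5])
```

A nonlinear discriminant measure, as used in the paper, generalizes this idea beyond linear separability.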


한국어 화제구문의 운율적 고찰 (The Study of Prosodic Features in Korean Topic Constructions)

  • 황손문
    • 음성과학 / Vol. 9, No. 2 / pp.59-68 / 2002
  • This paper analyzes the prosodic features distinctively associated with Korean topic constructions (marked by nun or its variant un) and subject constructions (marked by ka or its variant i) as a way of explicating the role that prosody plays in differentially constituting their discourse messages. Using both spoken data elicited in controlled settings and spontaneous conversational data, an attempt is made to identify differentiating prosodic features and intonation contours associated with distinct meanings and functions of nun- and ka-constructions evoked in a variety of discourse contexts.


청크 기반 시계열 음성의 감정 인식 연구 (A Study on Emotion Recognition of Chunk-Based Time Series Speech)

  • 신현삼;홍준기;홍성찬
    • 인터넷정보학회논문지 / Vol. 24, No. 2 / pp.11-18 / 2023
  • In the field of speech emotion recognition (SER), much recent research has aimed at improving recognition rates using speech features and modeling. In addition to modeling studies aimed at improving the accuracy of existing speech emotion recognition, studies that exploit speech features in various ways are also under way. Observing that vocal emotion is related to the passage of time, this paper splits speech files into time segments in a time-series fashion. After splitting, the speech features Mel, Chroma, zero-crossing rate (ZCR), root mean square (RMS), and mel-frequency cepstral coefficients (MFCC) are extracted and applied to recurrent neural network models, which are used for sequential data processing, to propose a model that classifies emotions in speech data. The proposed model extracts the speech features from every file using librosa and applies them to the neural network models. In simulations on the English dataset Interactive Emotional Dyadic Motion Capture (IEMOCAP), the performance of recurrent neural network (RNN), long short-term memory (LSTM), and gated recurrent unit (GRU) models is compared and analyzed.
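A minimal sketch of the chunk-based front end described above: split a waveform into fixed-length time chunks, then compute per-chunk features. Only ZCR and RMS are implemented here (Mel, Chroma, and MFCC would normally come from librosa, as in the paper); the signal and chunk length are made-up illustrations.

```python
import math

def split_chunks(signal, chunk_len):
    # non-overlapping chunks; the last one may be shorter
    return [signal[i:i + chunk_len] for i in range(0, len(signal), chunk_len)]

def zero_crossing_rate(chunk):
    # fraction of adjacent sample pairs whose signs differ
    crossings = sum(1 for a, b in zip(chunk, chunk[1:]) if a * b < 0)
    return crossings / (len(chunk) - 1)

def rms(chunk):
    return math.sqrt(sum(x * x for x in chunk) / len(chunk))

# 100 samples of a slow sine, split into 2 chunks of 50 samples each.
signal = [math.sin(2 * math.pi * 3 * n / 100) for n in range(100)]
features = [(zero_crossing_rate(c), rms(c)) for c in split_chunks(signal, 50)]
# Each chunk yields one (ZCR, RMS) pair; the resulting sequence of feature
# vectors is what would be fed to an RNN/LSTM/GRU as a time series.
```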

음성인식 성능 개선을 위한 다중작업 오토인코더와 와설스타인식 생성적 적대 신경망의 결합 (Combining multi-task autoencoder with Wasserstein generative adversarial networks for improving speech recognition performance)

  • 고조원;고한석
    • 한국음향학회지 / Vol. 38, No. 6 / pp.670-677 / 2019
  • Background noise in speech or acoustic event signals degrades recognizer performance, and finding noise-robust features requires considerable effort. This paper proposes a deep-learning network that combines the strengths of a multi-task autoencoder (MTAE) and a Wasserstein generative adversarial network (WGAN) to estimate the noise and the speech signal from a noisy acoustic signal. The proposed MTAE-WGAN structure estimates the speech and noise components using a gradient penalty and parameter initialization based on the Leaky Rectified Linear Unit (LReLU) and the Parametric ReLU (PReLU). The MTAE-WGAN structure with an orthogonal gradient penalty and this parameter initialization method generates noise-robust speech features and shows a large reduction in phoneme error rate (PER) compared with existing methods.
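A minimal sketch of the two activations named above. Leaky ReLU uses a fixed small slope for negative inputs, while Parametric ReLU makes that slope a learned parameter; the slope values below are common illustrative defaults, not the paper's settings.

```python
def leaky_relu(x, slope=0.01):
    # fixed small negative-side slope
    return x if x > 0 else slope * x

def prelu(x, alpha):
    # alpha is learned during training (often per channel); here it is passed in
    return x if x > 0 else alpha * x
```

In the paper's setting these activations also inform the parameter initialization of the MTAE-WGAN layers; the functions here show only the forward behaviour.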

Emotion recognition from speech using Gammatone auditory filterbank

  • 레바부이;이영구;이승룡
    • 한국정보과학회:학술대회논문집 / 한국정보과학회 2011년도 한국컴퓨터종합학술대회논문집 Vol.38 No.1(A) / pp.255-258 / 2011
  • An application of a Gammatone auditory filterbank to emotion recognition from speech is described in this paper. The Gammatone filterbank is a bank of Gammatone filters used as a preprocessing stage, before feature extraction, to obtain the features most relevant to emotion recognition from speech. In the feature extraction step, the energy of each filter's output signal is computed and combined with those of all the other filters to produce a feature vector for the learning step. A feature vector is estimated over a short time window of the input speech signal to exploit its time-domain dependence. Finally, in the learning step, a Hidden Markov Model (HMM) is used to create a model for each emotion class and to recognize a particular input emotional speech. In the experiments, feature extraction based on the Gammatone filterbank (GTF) shows better outcomes than features based on Mel-Frequency Cepstral Coefficients (MFCC), a well-known feature extraction method for speech recognition as well as for emotion recognition from speech.
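The per-band energy feature described above can be sketched as follows: build a 4th-order gammatone impulse response, filter by direct convolution, and take the output energy as the feature for that band. The sample rate, centre frequencies, and the standard ERB bandwidth formula used here are common choices, not necessarily the paper's exact settings.

```python
import math

SR = 8000  # sample rate in Hz (assumed for this sketch)

def gammatone_ir(fc, n_taps=256, order=4):
    # equivalent rectangular bandwidth (standard Glasberg-Moore formula)
    erb = 24.7 * (4.37 * fc / 1000.0 + 1.0)
    b = 1.019 * erb
    ir = []
    for n in range(n_taps):
        t = n / SR
        # gammatone: t^(n-1) * exp(-2*pi*b*t) * cos(2*pi*fc*t)
        ir.append(t ** (order - 1) * math.exp(-2 * math.pi * b * t)
                  * math.cos(2 * math.pi * fc * t))
    return ir

def band_energy(signal, ir):
    # direct FIR convolution, then energy of the filtered signal
    out = [sum(ir[k] * signal[n - k] for k in range(len(ir)) if 0 <= n - k < len(signal))
           for n in range(len(signal))]
    return sum(y * y for y in out)

# A 500 Hz tone: the filter centred on the tone captures far more energy
# than a filter centred far away, which is what makes the energies useful
# as a feature vector.
tone = [math.sin(2 * math.pi * 500 * n / SR) for n in range(1024)]
on_band = band_energy(tone, gammatone_ir(500.0))
off_band = band_energy(tone, gammatone_ir(2000.0))
```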

Proposed Efficient Architectures and Design Choices in SoPC System for Speech Recognition

  • Trang, Hoang;Hoang, Tran Van
    • 전기전자학회논문지 / Vol. 17, No. 3 / pp.241-247 / 2013
  • This paper presents the design of a System on Programmable Chip (SoPC) based on a Field Programmable Gate Array (FPGA) for speech recognition, in which Mel-Frequency Cepstral Coefficients (MFCC) are used for speech feature extraction and Vector Quantization (VQ) for recognition. The speech recognition system is implemented in the following steps: feature extraction, codebook training, and recognition. In the feature extraction step, the input voice data are transformed into spectral components and the main features are extracted using the MFCC algorithm. In the recognition step, the spectral features obtained in the first step are processed and compared with the trained components; Vector Quantization is applied in this step. In our experiment, Altera's DE2 board with a Cyclone II FPGA is used to implement the recognition system, which can recognize 64 words. The execution speed of the blocks in the speech recognition system is surveyed by counting the clock cycles spent executing each block. The recognition accuracy is also measured under different system parameters. These results on execution speed and recognition accuracy can help the designer choose the best configuration for speech recognition on an SoPC.
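The VQ recognition step described above can be sketched in software terms: each word has a trained codebook, an input's feature vectors are quantized against every codebook, and the word whose codebook gives the lowest average distortion wins. The codebooks and feature vectors below are toy 2-D illustrations, not real MFCC data.

```python
def distortion(vec, codebook):
    # squared Euclidean distance to the nearest codeword
    return min(sum((v - c) ** 2 for v, c in zip(vec, code)) for code in codebook)

def recognize(features, codebooks):
    # pick the word whose codebook best explains the input frames
    def avg_distortion(word):
        return sum(distortion(f, codebooks[word]) for f in features) / len(features)
    return min(codebooks, key=avg_distortion)

codebooks = {
    "yes": [(0.0, 0.0), (1.0, 0.0)],
    "no":  [(5.0, 5.0), (6.0, 5.0)],
}
features = [(0.2, 0.1), (0.9, -0.1)]  # frames lying close to the "yes" codewords
```

On the FPGA, this nearest-codeword search is exactly the block whose clock-cycle count the paper surveys.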

가부키 증후군 환자의 구개인두부전증의 치료: 증례보고 (Treatment of Velopharyngeal Insufficiency in Kabuki Syndrome: Case Report)

  • 이산하;왕재권;박미경;백롱민
    • Archives of Plastic Surgery / Vol. 38, No. 2 / pp.203-206 / 2011
  • Purpose: Kabuki syndrome is a multiple malformation syndrome first reported in Japan. It is characterized by distinctive Kabuki-like facial features, skeletal anomalies, dermatoglyphic abnormalities, short stature, and mental retardation. We report two cases of Kabuki syndrome with surgical intervention and speech evaluation. Methods: Both patients had velopharyngeal insufficiency and underwent a superiorly based pharyngeal flap operation. Preoperative and postoperative speech evaluations were performed by a speech-language pathologist. Results: In case 1, hypernasality was reduced in spontaneous speech, and the nasalance scores in syllable repetitions were reduced to within normal ranges. In case 2, hypernasality in spontaneous speech was reduced from a severe to a moderate level, and the nasalance scores in syllable repetitions were also reduced to within normal ranges. Conclusion: The goal of this article is to raise awareness among plastic surgeons who may encounter patients with these unique facial features. This study shows that a pharyngeal flap operation can successfully correct velopharyngeal insufficiency in Kabuki syndrome, and postoperative speech therapy plays a role in reinforcing the surgical result.

Differentiation of Aphasic Patients from the Normal Control Via a Computational Analysis of Korean Utterances

  • Kim, HyangHee;Choi, Ji-Myoung;Kim, Hansaem;Baek, Ginju;Kim, Bo Seon;Seo, Sang Kyu
    • International Journal of Contents / Vol. 15, No. 1 / pp.39-51 / 2019
  • Spontaneous speech provides rich information defining the linguistic characteristics of individuals. As such, computational analysis of speech would enhance the efficiency of evaluating patients' speech. This study aims to provide a method to differentiate persons with and without aphasia based on language usage. Ten aphasic patients and matched normal controls participated, and all were tasked with describing a set of given words. Their utterances were linguistically processed and compared to each other. Computational analyses, from PCA (Principal Component Analysis) to machine learning, were conducted to select the relevant linguistic features and, consequently, to classify the two groups based on the selected features. It was found that function words, not content words, were the main differentiator of the two groups. The most viable discriminators were demonstratives, function words, sentence-final endings, and postpositions. The machine learning classification model was found to be quite accurate (90%) and impressively stable. This study is noteworthy as the first attempt to use computational analysis to characterize the word usage patterns of Korean aphasic patients, thereby discriminating them from the normal group.
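One simple form the discriminating features above could take is the proportion of each function-word category in an utterance. The tokens and POS tags below are hypothetical stand-ins; the study worked on real morphologically analysed Korean utterances.

```python
def category_ratios(tagged_tokens, categories):
    # fraction of tokens falling in each category of interest
    total = len(tagged_tokens)
    counts = {c: 0 for c in categories}
    for _, tag in tagged_tokens:
        if tag in counts:
            counts[tag] += 1
    return {c: counts[c] / total for c in categories}

# (word, tag) pairs; tags here: DEM = demonstrative, JOS = postposition,
# EF = sentence-final ending (illustrative tag set, not the study's).
utterance = [("이", "DEM"), ("사람", "NOUN"), ("이", "JOS"),
             ("웃", "VERB"), ("는다", "EF")]
ratios = category_ratios(utterance, ["DEM", "JOS", "EF"])
```

Vectors of such ratios, one per speaker, are the kind of input a PCA-plus-classifier pipeline could then separate into aphasic and control groups.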

강인한 음성 인식을 위한 탠덤 구조와 분절 특징의 결합 (Combination Tandem Architecture with Segmental Features for Robust Speech Recognition)

  • 윤영선;이윤근
    • 대한음성학회지:말소리 / No. 62 / pp.113-131 / 2007
  • Previous studies report that segmental-feature-based recognition systems show better results than conventional feature-based systems. At the same time, various studies have combined neural networks and hidden Markov models within a single system, in the expectation of combining the advantages of both. Influenced by these studies, the tandem approach was proposed, using the neural network as the classifier and the hidden Markov models as the decoder. In this paper, we apply the trend information of segmental features to the tandem architecture and use the posterior probabilities output by the neural network as inputs to the recognition system. Experiments are performed on the Aurora database to examine the potential of the trend-feature-based tandem architecture. The results show that the proposed system excels in very low SNR environments. Consequently, we argue that trend information in the tandem architecture can be used in addition to traditional MFCC features.
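The tandem idea above can be sketched as follows: the neural network's output layer produces per-class posterior probabilities via a softmax, and those posteriors (often log-transformed) become the feature vector handed to the HMM decoder. The activation values below are made-up numbers for one frame.

```python
import math

def softmax(activations):
    m = max(activations)
    exps = [math.exp(a - m) for a in activations]  # subtract max for stability
    s = sum(exps)
    return [e / s for e in exps]

def log_posterior_features(activations):
    # log posteriors are a common transform before handing features to the HMM
    return [math.log(p) for p in softmax(activations)]

nn_outputs = [2.0, 0.5, -1.0]      # one frame, three phone classes (illustrative)
posteriors = softmax(nn_outputs)    # sums to 1; the network's class posteriors
features = log_posterior_features(nn_outputs)
```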


The Role of Prosody in Dialect Synthesis and Authentication

  • Yoon, Kyu-Chul
    • 말소리와 음성과학 / Vol. 1, No. 1 / pp.25-31 / 2009
  • The purpose of this paper is to examine the viability of synthesizing Masan dialect from Seoul dialect and to examine the role of prosody in the authentication of the synthesized Masan dialect. The synthesis was performed by transferring one or more of the prosodic features of a Masan utterance onto the corresponding Seoul utterance. The hypothesis is that, given an utterance composed of phonemes shared by both dialects, the more prosodic features of the Masan utterance are transferred onto the Seoul utterance, the more it will be identified as an authentic Masan utterance. The prosodic features involved were the fundamental frequency contour, the segmental durations, and the intensity contour. The synthesized Masan utterances were evaluated by thirteen native speakers of Masan dialect. The results showed that the fundamental frequency contour and the segmental durations had main effects on the perceptual shift from Seoul to Masan dialect.
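The F0-contour transfer step above can be sketched as resampling the source (Masan) pitch contour onto the frame count of the target (Seoul) utterance by linear interpolation over normalized time. The contour values are toy numbers; a real transfer would work on pitch tracks extracted with a tool such as Praat.

```python
def transfer_contour(source_f0, target_len):
    # linearly interpolate the source contour at target_len evenly spaced
    # positions in normalized time [0, 1]
    out = []
    for i in range(target_len):
        pos = i * (len(source_f0) - 1) / (target_len - 1)
        lo = int(pos)
        hi = min(lo + 1, len(source_f0) - 1)
        frac = pos - lo
        out.append(source_f0[lo] * (1 - frac) + source_f0[hi] * frac)
    return out

masan_f0 = [120.0, 180.0, 140.0]  # short source contour in Hz (illustrative)
seoul_frames = 5                   # target utterance length in frames
new_f0 = transfer_contour(masan_f0, seoul_frames)
```

The same time-normalized mapping applies to segmental durations and the intensity contour, the other two prosodic features the paper transfers.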
