• Title/Abstract/Keywords: cepstral

Search results: 297

GMM 기반의 문맥독립 화자 검증 시스템의 성능 향상 (Performance Improvement in GMM-based Text-Independent Speaker Verification System)

  • 함성준;신광호;김민정;김주곤;정호열;정현열
    • 한국음향학회:학술대회논문집
    • /
    • 한국음향학회 2004년도 추계학술발표대회논문집 제23권 2호
    • /
    • pp.131-134
    • /
    • 2004
  • In this paper, we implement a text-independent speaker verification system based on GMMs (Gaussian Mixture Models) and carry out speaker verification experiments using a score normalization method based on the arctan function. As feature parameters, cepstral coefficients obtained by linear prediction and their regression coefficients are used, and CMN (Cepstral Mean Normalization) is applied to account for variation in the speaker's utterances. In the training stage, speaker models are built with GMMs, which represent the acoustic characteristics of a speaker's utterances well; in the verification stage, likelihoods are computed by ML (Maximum Likelihood), and the scores normalized by the conventional methods and by the arctan-based method are compared against a predetermined threshold. The speaker verification experiments confirm that the method incorporating the arctan function consistently yields a better EER than the conventional methods.

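The CMN step and the arctan-based score normalization described in this entry can be sketched as follows. The paper's exact normalization formula is not given in the abstract, so the squashing form and the scaling constant `alpha` below are hypothetical:

```python
import numpy as np

def cepstral_mean_normalization(cepstra):
    """Subtract the per-utterance mean from each cepstral dimension (CMN)."""
    return cepstra - cepstra.mean(axis=0, keepdims=True)

def arctan_normalized_score(log_likelihood_ratio, alpha=1.0):
    """Squash a log-likelihood-ratio score into (-1, 1) with arctan.
    One plausible form; `alpha` is a hypothetical scaling constant."""
    return (2.0 / np.pi) * np.arctan(alpha * log_likelihood_ratio)

# Toy usage: 100 frames of 12-dimensional cepstra with a channel offset.
frames = np.random.randn(100, 12) + 5.0
normed = cepstral_mean_normalization(frames)
print(np.abs(normed.mean(axis=0)).max())  # per-dimension means are ~0
```

After CMN the constant channel bias is gone, and the arctan mapping bounds the verification score so a single global threshold can be applied.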

Bi-Level HMM을 이용한 효율적인 음성구간 검출 방법 (An Efficient Voice Activity Detection Method using Bi-Level HMM)

  • 장광우;정문호
    • 한국전자통신학회논문지
    • /
    • 제10권8호
    • /
    • pp.901-906
    • /
    • 2015
  • In this paper, we propose a voice activity detection (VAD) method using a Bi-Level HMM. Conventional VAD methods must either run a separate post-processing step or set rule-based delay frames to remove short state-change errors (burst clipping). To address this problem, we use a Bi-Level HMM, which adds a state layer to the conventional HMM, and decide speech regions from the posterior probability ratio of the speech state. Using MFCCs, which reflect human auditory characteristics, as features, experiments on speech data at various SNRs with standard evaluation metrics yielded better results than conventional speech-state classification methods.
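A minimal sketch of a posterior-ratio VAD decision of the kind this entry describes, using a plain two-state HMM forward pass. The self-transition probability and decision threshold here are hypothetical values, not taken from the paper:

```python
import numpy as np

def vad_by_posterior_ratio(loglik_speech, loglik_noise, stay=0.95, thresh=1.0):
    """Run a two-state HMM forward pass over per-frame log-likelihoods and
    flag frames whose speech/non-speech posterior ratio exceeds a threshold."""
    A = np.array([[stay, 1.0 - stay], [1.0 - stay, stay]])  # [speech, noise]
    alpha = np.array([0.5, 0.5])                            # uniform prior
    flags = []
    for ls, ln in zip(loglik_speech, loglik_noise):
        alpha = (alpha @ A) * np.exp([ls, ln])
        alpha = alpha / alpha.sum()        # normalize -> state posteriors
        flags.append(alpha[0] / (alpha[1] + 1e-12) > thresh)
    return np.array(flags)

# Toy likelihoods: 10 noise frames, 10 speech frames, 10 noise frames.
speech_ll = np.array([-5.0] * 10 + [-1.0] * 10 + [-5.0] * 10)
noise_ll = np.array([-1.0] * 10 + [-5.0] * 10 + [-1.0] * 10)
flags = vad_by_posterior_ratio(speech_ll, noise_ll)
print(flags.astype(int))  # middle frames are detected as speech
```

The sticky transition matrix is what suppresses burst clipping: a single anomalous frame cannot flip the posterior ratio on its own, so no separate post-processing pass is needed.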

채널보상기법을 사용한 전화 음성 연속숫자음의 인식 성능향상 (Performance Improvement of Connected Digit Recognition with Channel Compensation Method for Telephone speech)

  • 김민성;정성윤;손종목;배건성
    • 대한음성학회지:말소리
    • /
    • 제44호
    • /
    • pp.73-82
    • /
    • 2002
  • Channel distortion degrades the performance of speech recognizers in telephone environments. It results mainly from the bandwidth limitation and variation of the transmission channel. Variation of channel characteristics usually appears as a baseline shift in the cepstrum domain, so the undesirable effect of channel variation can be removed by subtracting the mean from the cepstrum. In this paper, to improve the recognition performance for Korean connected-digit telephone speech, channel compensation methods such as CMN (Cepstral Mean Normalization), RTCN (Real-Time Cepstral Normalization), MCMN (Modified CMN), and MRTCN (Modified RTCN) are applied to the static MFCCs. MCMN and MRTCN are obtained from CMN and RTCN, respectively, by adding variance normalization in the cepstrum domain. Using the HTK v3.1 system, recognition experiments are performed on the Korean connected-digit telephone speech database released by SITEC (Speech Information Technology & Industry Promotion Center). The experiments show that MRTCN gives the best result, with a connected-digit recognition rate of 90.11%. This corresponds to an improvement of 1.72% over MFCC alone, i.e., an error reduction rate of 14.82%.

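The mean-plus-variance normalization that distinguishes MCMN from plain CMN in this entry can be sketched in a few lines (a minimal sketch of the cepstrum-domain operation, not the paper's full real-time variant):

```python
import numpy as np

def cmn(c):
    """Cepstral mean normalization: subtract the per-utterance mean."""
    return c - c.mean(axis=0, keepdims=True)

def mcmn(c, eps=1e-8):
    """Modified CMN as described above: mean subtraction followed by
    variance normalization of each cepstral dimension."""
    centered = cmn(c)
    return centered / (centered.std(axis=0, keepdims=True) + eps)

# Toy check: 200 frames of 13 "cepstral" coefficients with channel offset.
c = 3.0 * np.random.randn(200, 13) + 2.0
z = mcmn(c)
print(z.mean(axis=0).max(), z.std(axis=0).min())  # ~0 and ~1
```

Scaling each dimension to unit variance, on top of removing the channel mean, makes the features insensitive to per-call gain differences as well as to baseline shifts.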

재생 정보 기반 우연성 지향적 음악 추천에 관한 연구 (A Study on Serendipity-Oriented Music Recommendation Based on Play Information)

  • 하태현;이상원
    • 대한산업공학회지
    • /
    • 제41권2호
    • /
    • pp.128-136
    • /
    • 2015
  • With recent interest in culture technologies, many studies on recommendation systems have been conducted, and various music recommendation systems have been developed. However, they have often focused on technical aspects such as feature extraction and similarity comparison and have not sufficiently addressed user-centered perspectives. For users to be highly satisfied with recommended music items, it is necessary to study how the items connect to users' actual desires. To this end, our study proposes a novel music recommendation method based on serendipity, the freshness users feel for familiar items. Serendipity is measured by comparing users' past and recent listening tendencies. We use neural networks to apply these tendencies to the recommendation process and to extract the features of music items as MFCCs (Mel-frequency cepstral coefficients). Since the recommendation method is built on the characteristics of user behavior, user satisfaction with the recommended items can be expected to increase.

한국어 유아 음성인식을 위한 수정된 Mel 주파수 캡스트럼 (Modified Mel Frequency Cepstral Coefficient for Korean Children's Speech Recognition)

  • 유재권;이경미
    • 한국콘텐츠학회논문지
    • /
    • 제13권3호
    • /
    • pp.1-8
    • /
    • 2013
  • In this paper, we propose a new feature extraction algorithm to improve speech recognition for young children in Korean. The proposed algorithm integrates three methods. First, vocal tract length normalization compensates for the acoustic characteristics of children, whose vocal tracts are shorter than those of adults. Second, filters of uniform bandwidth compensate for children's acoustic energy being concentrated in higher spectral regions than adults'. Finally, a smoothing filter is applied to build a recognizer robust to noise in real-time environments. Experiments confirm that the feature extraction technique combining these three methods helps improve children's speech recognition performance.
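The vocal tract length normalization step in this entry is typically realized as a frequency warp. Below is a sketch of one common piecewise-linear formulation (the paper's exact warping function is not given in the abstract, so the break-point fraction is a conventional choice, not the authors' value):

```python
import numpy as np

def piecewise_linear_vtln(f, alpha, f_nyq=8000.0, f_break=0.875):
    """Piecewise-linear VTLN warp: scale frequencies by `alpha` below a
    break frequency, then interpolate linearly up to Nyquist so the warp
    maps [0, f_nyq] onto itself."""
    f = np.asarray(f, dtype=float)
    fb = f_break * f_nyq * min(1.0, 1.0 / alpha)
    return np.where(
        f <= fb,
        alpha * f,
        alpha * fb + (f_nyq - alpha * fb) * (f - fb) / (f_nyq - fb),
    )

freqs = np.array([0.0, 1000.0, 4000.0, 8000.0])
warped = piecewise_linear_vtln(freqs, alpha=1.2)
print(warped)  # endpoints still map to 0 and 8000
```

An `alpha` above 1 stretches children's high-frequency energy down toward the adult range while the tail segment keeps the warp within the analysis bandwidth.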

Emotion recognition from speech using Gammatone auditory filterbank

  • 레바부이;이영구;이승룡
    • 한국정보과학회:학술대회논문집
    • /
    • 한국정보과학회 2011년도 한국컴퓨터종합학술대회논문집 Vol.38 No.1(A)
    • /
    • pp.255-258
    • /
    • 2011
  • An application of a Gammatone auditory filterbank to emotion recognition from speech is described in this paper. The Gammatone filterbank is a bank of Gammatone filters used as a preprocessing stage, before feature extraction, to obtain the most relevant features for emotion recognition from speech. In the feature extraction step, the energy of each filter's output signal is computed and combined with those of all the other filters to produce a feature vector for the learning step. A feature vector is estimated over a short time period of the input speech signal to take advantage of its time-domain dependence. Finally, in the learning step, a Hidden Markov Model (HMM) is used to create a model for each emotion class and to recognize a particular input emotional speech. In the experiments, feature extraction based on the Gammatone filterbank (GTF) shows better outcomes than features based on Mel-Frequency Cepstral Coefficients (MFCC), a well-known feature extraction method for speech recognition as well as emotion recognition from speech.
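The per-filter energy feature this entry describes can be sketched with a small FIR gammatone filterbank. The centre frequencies and filter length below are illustrative choices, not the paper's configuration:

```python
import numpy as np

def erb(fc):
    """Equivalent rectangular bandwidth (Glasberg & Moore) in Hz."""
    return 24.7 * (4.37 * fc / 1000.0 + 1.0)

def gammatone_ir(fc, sr, duration=0.05, order=4):
    """Finite-length 4th-order gammatone impulse response at centre freq fc."""
    t = np.arange(int(duration * sr)) / sr
    b = 1.019 * erb(fc)
    return t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)

def gammatone_energies(signal, sr, centre_freqs):
    """Energy of the signal filtered by each gammatone channel --
    the per-filter feature combined into the vector described above."""
    feats = []
    for fc in centre_freqs:
        out = np.convolve(signal, gammatone_ir(fc, sr), mode="same")
        feats.append(np.sum(out ** 2))
    return np.array(feats)

sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440.0 * t)   # 1 s test tone at 440 Hz
cfs = [100.0, 440.0, 2000.0]           # hypothetical centre frequencies
e = gammatone_energies(tone, sr, cfs)
print(np.argmax(e))                    # channel nearest 440 Hz dominates
```

Stacking these channel energies frame by frame yields the time-dependent feature vectors that feed the HMM training step.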

한국어 음성을 이용한 연령 분류 딥러닝 알고리즘 기술 개발 (Development of Age Classification Deep Learning Algorithm Using Korean Speech)

  • 소순원;유승민;김주영;안현준;조백환;육순현;김인영
    • 대한의용생체공학회:의공학회지
    • /
    • 제39권2호
    • /
    • pp.63-68
    • /
    • 2018
  • In modern society, speech recognition technology is emerging as an important identification technology in electronic commerce, forensics, law enforcement, and other systems. In this study, we develop an age classification algorithm that extracts MFCCs (Mel Frequency Cepstral Coefficients), which express the characteristics of Korean speech, and applies deep learning to them. We extract 13th-order MFCCs from Korean speech data to construct a data set and use a deep artificial neural network to classify males in their 20s, 30s, and 50s and females in their 20s, 40s, and 50s. Our model achieved classification accuracies of 78.6% and 71.9% for males and females, respectively.
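The 13th-order MFCC front end used in this entry can be sketched from scratch with numpy. This is a minimal textbook pipeline (framing, mel filterbank, log, DCT) with no pre-emphasis or liftering; frame and filterbank sizes are common defaults, not the study's settings:

```python
import numpy as np

def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr, n_fft=512, hop=160, n_mels=26, n_ceps=13):
    """Minimal 13th-order MFCC sketch."""
    # Frame and window the signal.
    n_frames = 1 + (len(signal) - n_fft) // hop
    idx = np.arange(n_fft)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hanning(n_fft)
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    # Triangular mel filterbank.
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    logmel = np.log(power @ fbank.T + 1e-10)
    # Type-II DCT, keeping the first 13 coefficients.
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_mels))
    return logmel @ dct.T

sr = 16000
sig = np.sin(2 * np.pi * 220.0 * np.arange(sr) / sr)  # 1 s test tone
feats = mfcc(sig, sr)
print(feats.shape)  # (frames, 13)
```

Each row of `feats` is one 13-dimensional frame vector; sequences of such rows form the data set fed to the deep neural network classifier.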

생체기반 GMM Supervector Kernel을 이용한 운전자검증 기술 (Driver Verification System Using Biometrical GMM Supervector Kernel)

  • 김형국
    • 한국ITS학회 논문지
    • /
    • 제9권3호
    • /
    • pp.67-72
    • /
    • 2010
  • This paper introduces a technique for verifying the driver in an automotive environment by analyzing voice and face information. For speaker verification from voice, the well-known Mel-scale Frequency Cepstral Coefficients (MFCCs) are used as speech features; for face verification from video, principal component analysis is applied to face regions detected with AdaBoost to extract feature vectors of sharply reduced dimensionality. In contrast to conventional speaker verification methods, we propose applying the extracted voice and face features to a Gaussian Mixture Model (GMM)-supervector-based Support Vector Machine (SVM) kernel to verify the driver's voice and face effectively. Experimental results show that the proposed method improves driver verification performance over plain GMM or SVM approaches.
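The GMM-supervector construction behind this entry's SVM kernel can be sketched as follows: compute frame posteriors under a diagonal-covariance universal background model (UBM), relevance-MAP adapt the component means, and stack them into one vector. The relevance factor r=16 is a common default, not a value taken from the paper:

```python
import numpy as np

def gmm_supervector(frames, means, var, weights, r=16.0):
    """MAP-adapt UBM means to an utterance and stack them (SVM kernel input)."""
    # log N(x | mu_k, diag(var_k)) for every frame/component pair.
    d2 = (((frames[:, None, :] - means[None]) ** 2) / var[None]).sum(-1)
    logp = np.log(weights) - 0.5 * (d2 + np.log(2.0 * np.pi * var).sum(-1))
    post = np.exp(logp - logp.max(axis=1, keepdims=True))
    post /= post.sum(axis=1, keepdims=True)       # responsibilities
    n_k = post.sum(axis=0)                        # soft frame counts
    first = post.T @ frames                       # first-order statistics
    adapted = (first + r * means) / (n_k[:, None] + r)   # relevance-MAP update
    return adapted.reshape(-1)

# Toy UBM with two components; the utterance sits near the second one.
rng = np.random.default_rng(0)
means = np.array([[0.0, 0.0], [5.0, 5.0]])
var = np.ones((2, 2))
weights = np.array([0.5, 0.5])
frames = rng.normal([5.0, 5.0], 1.0, size=(50, 2))
sv = gmm_supervector(frames, means, var, weights)
print(sv.shape)  # (4,) -- 2 components x 2 dimensions
```

Components that receive little data stay at the UBM means, so supervectors from different utterances remain comparable and can be fed to a linear or specialized SVM kernel.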

갑상선 수술범위에 따른 음성의 음향적 분석 (Acoustic Analysis of Voice Change According to Extent of Thyroidectomy)

  • 강영애;구본석
    • 말소리와 음성과학
    • /
    • 제7권4호
    • /
    • pp.77-83
    • /
    • 2015
  • Voice complications can occur after thyroidectomy even without laryngeal nerve injury. The purpose of this study is to investigate voice changes according to the extent of thyroidectomy with acoustic analysis. Thirty-five female patients with papillary thyroid carcinoma underwent voice evaluation before and 1 month and 3 months after thyroidectomy. The acoustic parameters were speaking fundamental frequency (SFF), min $F_0$, max $F_0$, dynamic range of $F_0$, jitter, shimmer, noise-to-harmonic ratio (NHR), and cepstral peak prominence (CPP). Repeated-measures analysis of variance was applied. Time-related voice changes showed significant differences in all parameters except NHR. At 1 month after surgery, voice quality was worse and pitch had decreased, but both were improving at the 3-month follow-up. Voice changes according to the extent of surgery appeared in SFF, max $F_0$, and dynamic range of $F_0$; a time-by-surgery interaction existed only in min $F_0$. The results show that the severity of voice complications depended on the extent of thyroidectomy, which had a negative impact on $F_0$-related parameters. The deterioration of voice quality at 1 month after thyroidectomy may be affected by the loss of thyroid hormone in the blood, and the descent of $F_0$-related parameters by laryngeal fixation from surgical-site adhesion.
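The CPP measure used in this entry quantifies how far the cepstral peak in the pitch range rises above the overall cepstral trend. Below is a simplified sketch of one common formulation (the study's exact analysis settings are not given in the abstract):

```python
import numpy as np

def cepstral_peak_prominence(frame, sr, f0_range=(60.0, 500.0)):
    """Simplified CPP: height (dB) of the cepstral peak in the expected
    pitch quefrency range above a straight line fitted to the cepstrum."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    cepstrum = np.abs(np.fft.irfft(np.log(spectrum + 1e-10)))
    q = np.arange(len(cepstrum)) / sr                 # quefrency in seconds
    lo, hi = int(sr / f0_range[1]), int(sr / f0_range[0])
    peak = lo + int(np.argmax(cepstrum[lo:hi]))
    slope, intercept = np.polyfit(q[lo:hi], cepstrum[lo:hi], 1)
    baseline = slope * q[peak] + intercept
    return 20.0 * np.log10(cepstrum[peak] / max(baseline, 1e-10))

sr = 16000
t = np.arange(int(0.04 * sr)) / sr                    # one 40 ms frame
voiced = np.sign(np.sin(2 * np.pi * 200.0 * t))       # strongly periodic
noise = np.random.default_rng(1).standard_normal(len(t))
cpp_voiced = cepstral_peak_prominence(voiced, sr)
cpp_noise = cepstral_peak_prominence(noise, sr)
print(cpp_voiced > cpp_noise)  # periodic voicing gives the higher CPP
```

A strongly periodic voice produces a sharp rahmonic peak and thus a high CPP; breathier, noisier voices (as after surgery) flatten the peak and lower the measure.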

오디오 신호를 이용한 음란 동영상 판별 (Classification of Pornographic Videos Using Audio Information)

  • 김봉완;최대림;방만원;이용주
    • 대한음성학회:학술대회논문집
    • /
    • 대한음성학회 2007년도 한국음성과학회 공동학술대회 발표논문집
    • /
    • pp.207-210
    • /
    • 2007
  • As the Internet has become prevalent in our lives, harmful content on the Internet has been increasing, which has become a very serious problem; among such content, pornographic video is especially harmful to children. To prevent exposure, there are many filtering systems based on keyword- or image-based methods. The main purpose of this paper is to devise a system that classifies pornographic videos based on audio information. We use MFCCs together with Mel-Cepstrum Modulation Energy (MCME), a modulation energy calculated on the time trajectories of the Mel-Frequency Cepstral Coefficients (MFCC), as the feature vector, and a Gaussian Mixture Model (GMM) as the classifier. In the experiments, the proposed system correctly classified 97.5% of the pornographic data and 99.5% of the non-pornographic data. We expect the proposed method can serve as a component of a more accurate classification system that uses video and audio information simultaneously.

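The MCME feature in this entry measures energy along the time trajectory of each cepstral coefficient. A minimal sketch, assuming a frame rate of 100 Hz and a hypothetical 2-16 Hz modulation band (the paper's band edges are not given in the abstract):

```python
import numpy as np

def mel_cepstrum_modulation_energy(mfcc_traj, frame_rate=100.0, band=(2.0, 16.0)):
    """Energy of each MFCC coefficient's time trajectory inside a
    modulation-frequency band."""
    n_frames, _ = mfcc_traj.shape
    spec = np.abs(np.fft.rfft(mfcc_traj - mfcc_traj.mean(axis=0), axis=0)) ** 2
    freqs = np.fft.rfftfreq(n_frames, d=1.0 / frame_rate)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return spec[mask].sum(axis=0)          # one energy per cepstral dimension

# Toy trajectories: coefficient 0 modulated at 4 Hz, coefficient 1 at 40 Hz.
frame_rate, n_frames = 100.0, 200
t = np.arange(n_frames) / frame_rate
traj = np.zeros((n_frames, 2))
traj[:, 0] = np.sin(2 * np.pi * 4.0 * t)
traj[:, 1] = np.sin(2 * np.pi * 40.0 * t)
e = mel_cepstrum_modulation_energy(traj, frame_rate)
print(e[0] > e[1])   # only the 4 Hz trajectory carries in-band energy
```

Speech-like syllabic modulation concentrates around a few hertz, so band-limited trajectory energy separates speech-driven audio from other sound classes before the GMM stage.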