• Title/Summary/Keyword: Speech Feature Extraction (음성 특징 추출)

Semantic Ontology Speech Recognition Performance Improvement using ERB Filter (ERB 필터를 이용한 시맨틱 온톨로지 음성 인식 성능 향상)

  • Lee, Jong-Sub
    • Journal of Digital Convergence / v.12 no.10 / pp.265-270 / 2014
  • Existing speech recognition algorithms have trouble distinguishing vocabulary order, voice detection becomes inaccurate as the noise environment changes, and retrieval systems return results that mismatch the user's request because keywords carry multiple meanings. In this article, we propose an event-based semantic ontology inference model in which speech recognition features are extracted using an ERB filter. The proposed model was evaluated with train-station and train noise: noise removal was performed on signals at SNRs of -10 dB and -5 dB, and distortion measurements confirmed improvements of 2.17 dB and 1.31 dB, respectively.
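
The abstract does not spell out the ERB filter design, so the following is only a generic sketch of the ERB-rate scale (Glasberg & Moore) commonly used to place auditory filter center frequencies; the filter count and band edges below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def hz_to_erbs(f):
    """Hz -> ERB-rate scale (Glasberg & Moore, 1990)."""
    return 21.4 * np.log10(1.0 + 0.00437 * f)

def erbs_to_hz(e):
    """ERB-rate -> Hz (inverse of hz_to_erbs)."""
    return (10.0 ** (e / 21.4) - 1.0) / 0.00437

def erb_center_freqs(low_hz, high_hz, n_filters):
    """Center frequencies equally spaced on the ERB-rate scale."""
    return erbs_to_hz(np.linspace(hz_to_erbs(low_hz),
                                  hz_to_erbs(high_hz), n_filters))

# 32 auditory-scale bands between 100 Hz and 8 kHz (assumed values)
cfs = erb_center_freqs(100.0, 8000.0, 32)
```

Spacing the bands on the ERB-rate scale gives finer resolution at low frequencies, mimicking the cochlea, which is the usual motivation for ERB filterbanks in noisy-speech front ends.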

Comparison of Feature Extraction Methods for the Telephone Speech Recognition (전화 음성 인식을 위한 특징 추출 방법 비교)

  • 전원석;신원호;김원구;이충용;윤대희
    • The Journal of the Acoustical Society of Korea / v.17 no.7 / pp.42-49 / 1998
  • This paper studies processing methods at the feature-vector extraction stage for improving speech recognition performance over telephone networks. First, channel-distortion compensation methods were tested in an isolated-word recognition system with both word models and context-independent phoneme models: cepstral mean subtraction, RASTA processing, and the cepstrum-time matrix were evaluated, and the performance of each algorithm was compared across recognition models. Second, to improve a recognizer based on context-independent phoneme models, linear transformations such as principal component analysis (PCA) and linear discriminant analysis (LDA) were applied to the static feature vectors, mapping them into a more discriminative space and improving recognition performance. Combining the linear transformations with cepstral mean subtraction yielded even better results.
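
Cepstral mean subtraction, the simplest of the compensation methods above, removes the constant cepstral offset that a stationary channel adds. A minimal sketch (the synthetic data here is only for illustration):

```python
import numpy as np

def cepstral_mean_subtraction(cepstra):
    """Subtract the per-utterance mean from each cepstral dimension.

    cepstra: (n_frames, n_coeffs) array. A stationary channel is
    multiplicative in the spectrum, hence additive in the cepstrum,
    so removing the mean compensates for convolutional distortion.
    """
    return cepstra - cepstra.mean(axis=0, keepdims=True)

# Synthetic example: a channel adds a constant cepstral offset
rng = np.random.default_rng(0)
clean = rng.standard_normal((100, 13))
channel_offset = rng.standard_normal(13)
distorted = clean + channel_offset
compensated = cepstral_mean_subtraction(distorted)
```

After CMS the compensated features match the mean-normalized clean features exactly, which is why the method combines well with the linear transforms (PCA, LDA) the paper applies afterward.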

The Research on Emotion Recognition through Multimodal Feature Combination (멀티모달 특징 결합을 통한 감정인식 연구)

  • Sung-Sik Kim;Jin-Hwan Yang;Hyuk-Soon Choi;Jun-Heok Go;Nammee Moon
    • Proceedings of the Korea Information Processing Society Conference / 2024.05a / pp.739-740 / 2024
  • This study proposes a new multimodal model training method that improves emotion classification accuracy by effectively combining data from two modalities, speech and text. Feature vectors extracted from speech data using HuBERT and MFCC (Mel-Frequency Cepstral Coefficients) are combined with feature vectors extracted from text data using RoBERTa, and the combined features are used to classify emotions. In experiments, the proposed multimodal model achieved an F1-score of 92.30, outperforming unimodal approaches.
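
The combination step can be sketched as simple early fusion by concatenation; the vectors below are random stand-ins for real HuBERT, MFCC, and RoBERTa outputs, and the dimensions (768 for the base transformer models, 13 MFCCs) are assumptions rather than the paper's stated setup.

```python
import numpy as np

rng = np.random.default_rng(0)
hubert_vec = rng.standard_normal(768)   # stand-in: pooled HuBERT features
mfcc_vec = rng.standard_normal(13)      # stand-in: 13 MFCCs averaged over frames
roberta_vec = rng.standard_normal(768)  # stand-in: RoBERTa sentence embedding

# Early fusion: concatenate the modality vectors before the classifier head
fused = np.concatenate([hubert_vec, mfcc_vec, roberta_vec])
```

The fused vector would then feed a shared classification head; concatenation is only one fusion option (attention-based or gated fusion are common alternatives).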

A Phase-related Feature Extraction Method for Robust Speaker Verification (열악한 환경에 강인한 화자인증을 위한 위상 기반 특징 추출 기법)

  • Kwon, Chul-Hong
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.3 / pp.613-620 / 2010
  • Additive noise and channel distortion strongly degrade the performance of speaker verification systems because they distort the features of speech. This distortion causes a mismatch between the training and recognition conditions, such that acoustic models trained with clean speech do not accurately model noisy, channel-distorted speech. This paper presents a phase-related feature extraction method to improve the robustness of speaker verification systems. The instantaneous frequency is computed from the phase of the speech signal, and features are obtained from the histogram of the instantaneous frequency. Experimental results show that the proposed technique offers significant improvements over standard techniques in both clean and adverse testing environments.
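
A minimal numpy sketch of the phase-based idea: build the analytic signal, differentiate its unwrapped phase to get instantaneous frequency, and histogram the result. The histogram binning here is an assumption, as the paper's exact settings are not given in the abstract.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (a minimal Hilbert-transform construction)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0          # double positive frequencies
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def instantaneous_frequency(x, fs):
    """Instantaneous frequency (Hz) from the unwrapped analytic phase."""
    phase = np.unwrap(np.angle(analytic_signal(x)))
    return np.diff(phase) * fs / (2.0 * np.pi)

fs = 8000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.sin(2.0 * np.pi * 440.0 * t)        # pure 440 Hz tone
f_inst = instantaneous_frequency(x, fs)
# Histogram of instantaneous frequency -> feature vector (bin count assumed)
hist, edges = np.histogram(f_inst, bins=50, range=(0.0, fs / 2.0))
```

For a pure tone the estimate sits at the tone frequency; for speech, the histogram summarizes how the phase-derived frequency content is distributed, which is the feature the paper exploits.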

A Study on Speech Recognition System Using Continuous HMM (연속분포 HMM을 이용한 음성인식 시스템에 관한 연구)

  • Kim, Sang-Duck;Lee, Geuk
    • Proceedings of the Korea Multimedia Society Conference / 1998.10a / pp.221-225 / 1998
  • In this paper, a Korean isolated-word recognition system based on continuous-density HMMs (hidden Markov models) is designed and implemented. For training and evaluation, a speech database was built from ten isolated words drawn from a car-navigation voice-command domain. MFCCs (Mel-Frequency Cepstral Coefficients), delta MFCCs, and energy were used as the speech feature parameters. HMMs were built over 18 phoneme-like units (PLUs) extracted from the training data as the recognition units, and extended to triphone models to capture co-articulation effects. The recognizer was evaluated on speech from speakers included in training and from speakers not seen in training, achieving an average recognition accuracy of 97.5%.
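
The delta MFCCs mentioned above are conventionally computed with the HTK-style regression formula over a few neighboring frames; this is a generic sketch, not the paper's exact configuration.

```python
import numpy as np

def delta(features, N=2):
    """HTK-style delta (regression) coefficients.

    features: (n_frames, n_coeffs). Edges are padded by repeating the
    first/last frame, the usual convention.
    """
    denom = 2.0 * sum(n * n for n in range(1, N + 1))
    padded = np.pad(features, ((N, N), (0, 0)), mode="edge")
    out = np.zeros(features.shape, dtype=float)
    for t in range(features.shape[0]):
        for n in range(1, N + 1):
            out[t] += n * (padded[t + N + n] - padded[t + N - n])
        out[t] /= denom
    return out

# A feature that rises by exactly 1 per frame has delta == 1 away from edges
ramp = np.arange(10.0).reshape(-1, 1)
d = delta(ramp)
```

Static MFCCs, their deltas, and frame energy are typically stacked into one observation vector per frame before HMM training.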

Design and Implementation of Speech-Training System for Voice Disorders (발성장애아동을 위한 발성훈련시스템 설계 및 구현)

  • 정은순;김봉완;양옥렬;이용주
    • Journal of Internet Computing and Services / v.2 no.1 / pp.97-106 / 2001
  • In this paper, we design and implement a speech-training system for children with voice disorders. The system consists of three training levels: precedent training, training for speech apprehension, and training for speech enhancement. To analyze the speech of children with voice disorders, we extract speech features such as loudness, amplitude, and pitch using digital signal processing techniques. The extracted features are rendered in a graphical interface that gives visual feedback on the speech.
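
The paper does not describe its exact estimators, so here is one conventional sketch of the two features named above: RMS energy as a loudness proxy and autocorrelation peak picking for pitch. The frame length and pitch search range are assumptions.

```python
import numpy as np

def autocorr_pitch(frame, fs, fmin=80.0, fmax=400.0):
    """Estimate F0 of one frame by picking the autocorrelation peak."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min, lag_max = int(fs / fmax), int(fs / fmin)
    lag = lag_min + int(np.argmax(ac[lag_min:lag_max + 1]))
    return fs / lag

fs = 16000.0
t = np.arange(int(0.04 * fs)) / fs          # one 40 ms frame
tone = np.sin(2.0 * np.pi * 200.0 * t)      # synthetic 200 Hz "voice"
f0 = autocorr_pitch(tone, fs)
loudness = np.sqrt(np.mean(tone ** 2))      # RMS energy as loudness proxy
```

In a training system these per-frame values would drive the visual feedback display, e.g. a pitch contour and an energy bar.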

Dialect classification based on the speed and the pause of speech utterances (발화 속도와 휴지 구간 길이를 사용한 방언 분류)

  • Jonghwan Na;Bowon Lee
    • Phonetics and Speech Sciences / v.15 no.2 / pp.43-51 / 2023
  • In this paper, we propose an approach to dialect classification based on the speed and pauses of speech utterances, together with the age and gender of the speakers. Dialect classification is an important technique for speech analysis; an accurate dialect classification model can potentially improve the performance of speaker or speech recognition. According to previous studies, deep learning approaches using Mel-Frequency Cepstral Coefficient (MFCC) features have been dominant. We instead focus on the acoustic differences between regions and classify dialects using features derived from those differences: the underexplored speech-rate and pause-length features, along with metadata including speaker age and gender. Experimental results show that incorporating all the proposed features improves accuracy from 91.02% to 97.02% over a method using only MFCC features, with the speech-rate feature contributing the largest gain.
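
A crude sketch of a pause-length feature under assumed parameters (20 ms frames, fixed RMS threshold); the paper's actual segmentation is certainly more elaborate, e.g. with adaptive thresholds and minimum-duration smoothing.

```python
import numpy as np

def pause_ratio(x, fs, frame_ms=20, threshold=0.01):
    """Fraction of frames whose RMS energy falls below a fixed threshold."""
    hop = int(fs * frame_ms / 1000)
    n_frames = len(x) // hop
    rms = np.array([np.sqrt(np.mean(x[i * hop:(i + 1) * hop] ** 2))
                    for i in range(n_frames)])
    return float(np.mean(rms < threshold))

# Synthetic utterance: 0.5 s of voiced signal followed by 0.5 s of pause
fs = 8000
tone = 0.5 * np.sin(2.0 * np.pi * 300.0 * np.arange(fs // 2) / fs)
utterance = np.concatenate([tone, np.zeros(fs // 2)])
ratio = pause_ratio(utterance, fs)
```

A speech-rate feature could similarly be derived by counting syllable-like energy peaks per second in the non-pause regions.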

Robust Speech Endpoint Detection in Noisy Environments for HRI (Human-Robot Interface) (인간로봇 상호작용을 위한 잡음환경에 강인한 음성 끝점 검출 기법)

  • Park, Jin-Soo;Ko, Han-Seok
    • The Journal of the Acoustical Society of Korea / v.32 no.2 / pp.147-156 / 2013
  • In this paper, a new speech endpoint detection method for moving robot platforms in noisy environments is proposed. In the conventional method, the endpoint of speech is obtained by applying an edge detection filter that finds abrupt changes in the feature domain. However, since the frame-energy feature is unstable in such noisy environments, it is difficult to locate the endpoint of speech accurately. Therefore, a novel feature extraction method based on the twice-iterated fast Fourier transform (TIFFT) and statistical models of speech is proposed, and the resulting feature is fed to an edge detection filter for effective detection of the speech endpoint. Experiments show a substantial improvement over the conventional method.
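
The abstract does not reproduce the exact TIFFT formulation, but the generic twice-iterated idea, taking a second magnitude FFT of the magnitude spectrum so that the regular harmonic structure of voiced speech concentrates into a few strong bins, can be sketched as:

```python
import numpy as np

def tifft_feature(frame):
    """Twice-iterated FFT: |FFT(|FFT(frame)|)| (cepstrum-like feature).

    This is the generic double-transform idea only; the paper's actual
    TIFFT feature and its statistical modeling are not detailed here.
    """
    spec = np.abs(np.fft.rfft(frame))   # first FFT: magnitude spectrum
    return np.abs(np.fft.rfft(spec))    # second FFT over the spectrum

frame = np.sin(2.0 * np.pi * 100.0 * np.arange(400) / 8000.0)
feat = tifft_feature(frame)
```

Because the second transform responds to periodicity in the spectrum rather than raw energy, it is less sensitive to broadband noise than frame energy, which matches the paper's motivation.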

Feature Term Based Retrieval Method for Image Retrieval (이미지 검색을 위한 특징용어 기반 검색 기법)

  • Park, Sung-Hee;Hur, Jeung;Kim, Hyun-Jin;Jang, Myung-Gil
    • Proceedings of the Korean Information Science Society Conference / 2003.04c / pp.576-578 / 2003
  • This paper presents a new retrieval method for image retrieval. In conventional feature-based or annotation-based retrieval, the index form and the query form are the same (features or annotations, respectively). The proposed method is a hybrid of these two typical approaches: given a text query, query processing extracts the feature terms contained in the text, converts those feature terms into the intrinsic image features (color, shape, texture), and then performs feature-based retrieval using the converted features as the query. This method preserves the text-query interface that users are familiar with, and should become even more useful once a spoken query interface based on speech recognition is adopted.
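
The term-to-feature conversion step can be sketched as a lookup table; the terms and feature values below are invented for illustration and are not taken from the paper.

```python
# Hypothetical feature-term table mapping query words to low-level
# image features (color / shape / texture); all entries are examples.
FEATURE_TERM_MAP = {
    "red": ("color", (255, 0, 0)),
    "round": ("shape", "circle"),
    "striped": ("texture", "stripes"),
}

def terms_to_features(query_terms):
    """Map feature terms found in a text query to low-level image features."""
    return [FEATURE_TERM_MAP[t] for t in query_terms if t in FEATURE_TERM_MAP]

features = terms_to_features(["red", "round", "unknown"])
```

The resulting feature list would then be handed to a standard content-based retrieval backend in place of an example image.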

Auto Frame Extraction Method for Video Cartooning System (동영상 카투닝 시스템을 위한 자동 프레임 추출 기법)

  • Kim, Dae-Jin;Koo, Ddeo-Ol-Ra
    • The Journal of the Korea Contents Association / v.11 no.12 / pp.28-39 / 2011
  • While broadband multimedia technologies have developed, the commercial market for digital content has also spread widely. In particular, the digital cartoon market, such as internet cartoons, has grown rapidly, and video cartooning has been continuously researched to address the shortage and limited variety of cartoons. Until now, video cartooning systems have focused on non-photorealistic rendering and word balloons, but meaningful frame extraction must take priority when a cartooning system is applied in practice. In this paper, we propose a new automatic frame extraction method for video cartooning systems. First, we separate the video and audio tracks of a movie and extract feature parameters such as MFCC and ZCR from the audio data. The audio signal is classified into speech, music, and speech+music by comparing it against already-trained audio data with a GMM classifier, which lets us locate speech regions. On the video side, we extract candidate frames using a general scene-change detection method such as the histogram method, and select meaningful frames by applying face detection to the extracted frames. Scene-transition frames that contain a face and fall within a speech region are then extracted automatically over continuous time intervals, yielding frames suitable for movie cartooning.
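
The GMM-based audio classification step can be approximated with a single diagonal-covariance Gaussian per class (a 1-component GMM); a full GMM would add mixture weights and EM training. All data below is synthetic, and the class names mirror the paper's speech/music categories.

```python
import numpy as np

class DiagGaussian:
    """One diagonal-covariance Gaussian per class (a 1-component GMM)."""
    def fit(self, X):
        self.mean = X.mean(axis=0)
        self.var = X.var(axis=0) + 1e-6   # floor to avoid division by zero
        return self

    def log_likelihood(self, x):
        return float(-0.5 * np.sum(np.log(2.0 * np.pi * self.var)
                                   + (x - self.mean) ** 2 / self.var))

def classify(x, models):
    """Pick the class whose model assigns the highest log-likelihood."""
    return max(models, key=lambda name: models[name].log_likelihood(x))

# Synthetic 4-dimensional "MFCC/ZCR" features with separated class means
rng = np.random.default_rng(1)
models = {
    "speech": DiagGaussian().fit(rng.normal(0.0, 1.0, (200, 4))),
    "music": DiagGaussian().fit(rng.normal(5.0, 1.0, (200, 4))),
}
```

Classifying each audio frame this way and smoothing the per-frame decisions over time yields the speech regions that the frame-selection stage intersects with face-bearing scene-transition frames.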