• Title/Summary/Keyword: Speech Recognition Algorithm (음성인식알고리즘)

Search results: 449 (processing time 0.023 seconds)

On a Pitch Point Detection by Preserving the Phase Component of the Autocorrelation Function (자기상관함수에서 위상 성분의 보존에 의한 피치 시점 검출에 관한 연구)

  • 함명규;최성영;박종철;배명진
    • Proceedings of the IEEK Conference
    • /
    • 2000.09a
    • /
    • pp.799-802
    • /
    • 2000
  • In speech signal processing, accurately detecting the fundamental frequency of a speech signal reduces speaker-dependent effects in speech recognition, improving recognition accuracy, and makes it easy to change or preserve naturalness and individuality in speech synthesis. In addition, pitch-synchronous analysis yields accurate vocal-tract parameters free of glottal influence. Because pitch detection is this important, various detection methods have been proposed [1]. This paper studies a method of detecting pitch points in unstable intervals of a speech signal. In speech analysis, the conventional autocorrelation function has the advantage of emphasizing periodicity, but it has the drawback of not preserving the phase component. We therefore propose an algorithm that uses the autocorrelation function while preserving the phase component. In experiments, the method achieved about 98% accuracy compared with manually located pitch points. As these results show, an autocorrelation function with preserved phase can be useful in speech synthesis, coding, and recognition.

  • PDF
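The periodicity-emphasizing autocorrelation idea this abstract builds on can be sketched in a few lines of Python. This is a plain autocorrelation pitch estimator, not the authors' phase-preserving variant, and the 60-400 Hz search range is an assumption, not taken from the paper:

```python
import math

def autocorr(x, max_lag):
    """Short-time autocorrelation r[k] = sum_n x[n] * x[n+k]."""
    n = len(x)
    return [sum(x[i] * x[i + k] for i in range(n - k)) for k in range(max_lag + 1)]

def pitch_period(x, fs, fmin=60.0, fmax=400.0):
    """Estimate the pitch period (in samples) as the autocorrelation
    peak inside a plausible lag range for human voice."""
    lo, hi = int(fs / fmax), int(fs / fmin)
    r = autocorr(x, hi)
    return max(range(lo, hi + 1), key=lambda k: r[k])

# Synthetic 200 Hz "voiced" frame sampled at 8 kHz: true period is 40 samples.
fs, f0 = 8000, 200
frame = [math.sin(2 * math.pi * f0 * n / fs) for n in range(400)]
print(fs / pitch_period(frame, fs))  # 200.0
```

Because plain autocorrelation discards phase, the peak locates the period but not the pitch onset point, which is exactly the gap the paper addresses.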

A Real-Time Embedded Speech Recognition System (실시간 임베디드 음성 인식 시스템)

  • 남상엽;전은희;박인정
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.40 no.1
    • /
    • pp.74-81
    • /
    • 2003
  • In this study, we implemented a real-time embedded speech recognition system that requires minimal memory for the recognition engine and its DB. The vocabulary consists of 40 commands used on a PCS phone plus 10 digits. Speech data spoken by 15 male and 15 female speakers were recorded and analyzed by short-time analysis with a window size of 256. The LPC parameters of each frame were computed with the Levinson-Durbin algorithm and transformed into cepstrum parameters. Before analysis, the speech data were pre-emphasized to remove the DC component and emphasize the high-frequency band. The Baum-Welch re-estimation algorithm was used to train the HMMs, and in the test phase recognition was performed by the likelihood method. We built the embedded system by porting the recognition engine to an ARM core evaluation board. The overall recognition rate was 95%: 96% on the 40 commands and 94% on the 10 digits.
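Two of the preprocessing steps named above, pre-emphasis and the Levinson-Durbin recursion for LPC, can be sketched as follows. This is a minimal pure-Python illustration: windowing and the cepstrum conversion are omitted, and the 0.97 pre-emphasis coefficient is a common default, not a value from the paper.

```python
def preemphasis(x, alpha=0.97):
    """y[n] = x[n] - alpha * x[n-1]: suppresses DC, boosts high frequencies."""
    return [x[0]] + [x[n] - alpha * x[n - 1] for n in range(1, len(x))]

def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for LPC coefficients a[1..order]
    from autocorrelation values r[0..order]."""
    a = [0.0] * (order + 1)
    e = r[0]  # prediction error energy
    for i in range(1, order + 1):
        k = (r[i] - sum(a[j] * r[i - j] for j in range(1, i))) / e
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] - k * a[i - j]
        a, e = new_a, e * (1 - k * k)
    return a[1:], e

# An exponentially decaying signal x[n] = 0.9**n is almost perfectly
# predicted by a first-order model x[n] ~ 0.9 * x[n-1].
x = [0.9 ** n for n in range(200)]
r = [sum(x[n] * x[n + k] for n in range(len(x) - k)) for k in range(3)]
a, err = levinson_durbin(r, 1)
print(round(a[0], 3))  # 0.9
```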

A Basic Performance Evaluation of the Speech Recognition APP of Standard Language and Dialect using Google, Naver, and Daum KAKAO APIs (구글, 네이버, 다음 카카오 API 활용앱의 표준어 및 방언 음성인식 기초 성능평가)

  • Roh, Hee-Kyung;Lee, Kang-Hee
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.7 no.12
    • /
    • pp.819-829
    • /
    • 2017
  • In this paper, we describe the current state of speech recognition technology, identify basic speech recognition techniques and algorithms, and explain the API code flow needed for speech recognition. We use the application programming interfaces (APIs) of Google, Naver, and Daum Kakao, the operators of the most popular search engines among speech recognition API providers, to build a speech recognition app in Android Studio. We then run speech recognition experiments on people's standard language and dialects according to gender, age, and region, and tabulate the recognition rates. Experiments covered the Gyeongsang-do, Chungcheong-do, and Jeolla-do provinces, where dialects are strong, with comparative experiments on the standard language as well. Based on the resulting sentences, accuracy is checked with respect to word spacing, final consonants, postpositions, and word choice, and the number of each kind of error is counted. From these results, we aim to present the strengths of each API according to its recognition rate and to establish a basic framework for the most efficient use.

A Comparative Study on the phoneme recognition rate with regard to HMM training algorithms (HMM 훈련 알고리즘에 따른 음소인식률 비교 연구)

  • 구명완
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1998.08a
    • /
    • pp.298-301
    • /
    • 1998
  • We describe how phoneme recognition rates vary with the HMM training method. The acoustic models were HMMs with discrete or continuous probability densities, and the training algorithms were forward-backward and segmental K-means. The continuous densities consist of N mixture components; when expanding from a single mixture, a binary-tree scheme and a one-by-one scheme were used. Phoneme recognition experiments over various combinations showed that the forward-backward algorithm with continuous densities and one-by-one mixture splitting gave the best results.

  • PDF
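Forward-backward training rests on the forward pass, which computes the likelihood an HMM assigns to an observation sequence. A toy discrete-output version (illustrative numbers only, not the paper's phoneme models):

```python
def forward(pi, A, B, obs):
    """Forward algorithm: P(obs | model) for a discrete-output HMM.
    pi[i]: initial probs, A[i][j]: transition probs, B[i][o]: emission probs."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)

# A toy 2-state HMM over binary observations.
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
print(forward(pi, A, B, [0, 1, 0]))
```

Forward-backward (Baum-Welch) re-estimation combines this pass with a symmetric backward pass to update A and B; segmental K-means instead re-estimates from the single best state alignment.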

A Parallel Speech Recognition Model on Distributed Memory Multiprocessors (분산 메모리 다중프로세서 환경에서의 병렬 음성인식 모델)

  • 정상화;김형순;박민욱;황병한
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.5
    • /
    • pp.44-51
    • /
    • 1999
  • This paper presents a massively parallel computational model for the efficient integration of speech and natural language understanding. The phoneme model is based on continuous Hidden Markov Models with context-dependent phonemes, and the language model is based on a knowledge-base approach. To construct the knowledge base, we adopt a hierarchically structured semantic network and a memory-based parsing technique that employs parallel marker-passing as an inference mechanism. Our parallel speech recognition algorithm is implemented on a multi-Transputer system using distributed-memory MIMD multiprocessors. Experimental results show that the parallel speech recognition system achieves better recognition accuracy than a word network-based speech recognition system, and the accuracy is further improved by applying code-phoneme statistics. In addition, speedup experiments demonstrate the feasibility of constructing a real-time parallel speech recognition system.

  • PDF

A Study of Speech Recognition in a High Speed Automobile (고속 주행중인 자동차 환경에서의 음성인식 연구)

  • 유봉근
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1998.08a
    • /
    • pp.65-69
    • /
    • 1998
  • To improve driver safety and convenience in a vehicle traveling at high speed, this system controls various in-vehicle convenience devices by speech recognition, forming a man-machine interface between the driver and the car. It allows speech input and output at all times while driving, without auxiliary switch operation, and uses a band-pass filter to select a model robust to noise. The speech feature parameters and recognition algorithm are 13th-order perceptual linear prediction (PLP) and one-stage dynamic programming. In off-line experiments on 33 frequently used vehicle-control commands in a car at high speed, recognition rates of 82.47% speaker-independent (on the Jungbu Expressway) and 94.44% speaker-dependent were obtained. A voice dialing function was also implemented so that phone calls can be placed by voice, to reduce accidents caused by mobile phone use while driving at high speed.

  • PDF
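One-stage dynamic programming extends dynamic time warping (DTW) to connected-word recognition; the single-template DTW core it generalizes can be sketched as follows (a simplification, not the paper's one-stage variant, and on scalar features rather than 13th-order PLP vectors):

```python
def dtw(a, b, dist=lambda x, y: abs(x - y)):
    """Classic dynamic-programming alignment cost between two sequences:
    D[i][j] = local distance + cheapest of the three predecessor cells."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = dist(a[i - 1], b[j - 1]) + min(
                D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Same shape at a different tempo aligns at zero cost.
print(dtw([1, 2, 3], [1, 2, 2, 3]))  # 0.0
```

The one-stage variant runs this recursion over all reference templates at once, allowing transitions from the end of one word template into the start of the next.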

A Study on a Non-Voice Section Detection Model among Speech Signals using CNN Algorithm (CNN(Convolutional Neural Network) 알고리즘을 활용한 음성신호 중 비음성 구간 탐지 모델 연구)

  • Lee, Hoo-Young
    • Journal of Convergence for Information Technology
    • /
    • v.11 no.6
    • /
    • pp.33-39
    • /
    • 2021
  • Speech recognition technology, combined with deep learning, is developing at a rapid pace. In particular, speech recognition services are connected to devices such as artificial intelligence speakers, in-vehicle voice control, and smartphones, so the technology is used in many places rather than in specific areas of industry. Research to meet the high expectations for the technology is correspondingly active. Within natural language processing (NLP), one pressing need is removing ambient noise and unnecessary speech signals, which strongly affect the recognition rate; many domestic and foreign companies already apply the latest AI techniques to this problem, and work using convolutional neural networks (CNNs) is especially active. The purpose of this study is to identify non-speech sections within a user's utterance using a convolutional neural network. Voice files (wav) from five speakers were collected as training data, and a CNN classification model was built to discriminate speech sections from non-speech sections. In an experiment detecting non-speech sections with the resulting model, an accuracy of 94% was obtained.
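The CNN's basic mechanism, convolution followed by a nonlinearity and pooling, can be miniaturized into a hand-set one-dimensional detector. This is purely illustrative: a real model like the paper's learns its kernels from the wav training data instead of using the fixed averaging kernel below.

```python
def conv1d(x, kernel):
    """Valid-mode 1-D convolution (cross-correlation, as CNN layers compute it)."""
    k = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k))
            for i in range(len(x) - k + 1)]

def relu(v):
    return [max(0.0, a) for a in v]

def is_speech(frame, threshold=0.1):
    """Toy detector: smooth the rectified frame with an averaging kernel,
    global-max-pool, and threshold the result."""
    energy = conv1d([abs(s) for s in frame], [1 / 3, 1 / 3, 1 / 3])
    return max(relu(energy)) > threshold

frame_voiced = [0.9, -0.8, 0.7, -0.9, 0.8, -0.7]
frame_silent = [0.01, -0.02, 0.01, 0.0, -0.01, 0.02]
print(is_speech(frame_voiced), is_speech(frame_silent))  # True False
```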

Speech/Mixed Content Signal Classification Based on GMM Using MFCC (MFCC를 이용한 GMM 기반의 음성/혼합 신호 분류)

  • Kim, Ji-Eun;Lee, In-Sung
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.2
    • /
    • pp.185-192
    • /
    • 2013
  • In this paper, we propose an improved classifier for speech and mixed-content signals using MFCCs and the GMM-based probability model used in the MPEG USAC (Unified Speech and Audio Coding) standard. For effective pattern recognition, a Gaussian mixture model (GMM) is used, with the optimal GMM parameters extracted by the expectation-maximization (EM) algorithm. The proposed classification algorithm has two main parts: the first extracts the optimal parameters for the GMM, and the second distinguishes speech from mixed-content signals using MFCC feature parameters. The proposed classifier shows better performance than the scheme conventionally implemented in USAC.
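The likelihood-based decision described above can be sketched with single Gaussians standing in for the paper's EM-trained GMMs. The means and variances below are hypothetical, and real MFCC features are multidimensional rather than the scalars used here:

```python
import math

def gauss_logpdf(x, mean, var):
    """Log density of a 1-D Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def classify(frame_feats, models):
    """Pick the class model with the highest total log-likelihood
    over a frame's feature values."""
    def score(name):
        mean, var = models[name]
        return sum(gauss_logpdf(f, mean, var) for f in frame_feats)
    return max(models, key=score)

# Hypothetical single-Gaussian stand-ins for "speech" and "mixed" GMMs.
models = {"speech": (0.0, 1.0), "mixed": (3.0, 1.0)}
print(classify([0.2, -0.1, 0.4], models))  # speech
print(classify([2.8, 3.3, 2.9], models))   # mixed
```

EM fitting would estimate the mixture weights, means, and variances of each class model from labeled training frames before this decision step.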

Design and Implementation of Speech Music Discrimination System per Block Unit on FM Radio Broadcast (FM 방송 중 블록 단위 음성 음악 판별 시스템의 설계 및 구현)

  • Jang, Hyeon-Jong;Eom, Jeong-Gwon;Im, Jun-Sik
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2007.11a
    • /
    • pp.25-28
    • /
    • 2007
  • This paper proposes a system that discriminates speech from music, block by block, in the audio signal of FM radio broadcasts. To build the speech/music discrimination system, we propose various feature parameters and classification algorithms. The feature parameters are drawn from signal processing (centroid, rolloff, flux, ZCR, low energy), speech recognition (LPC, MFCC), and music analysis (MPitch, beat); the classification algorithms are drawn from pattern recognition (GMM, KNN, BP) and a fuzzy neural network (ANFIS), and the Mahalanobis distance was used as the distance measure.

  • PDF
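The Mahalanobis distance mentioned above scales each feature deviation by that feature's spread, so features with very different ranges (say, ZCR versus beat strength) contribute comparably. A sketch for the diagonal-covariance case, with illustrative numbers:

```python
import math

def mahalanobis_diag(x, mean, var):
    """Mahalanobis distance under a diagonal covariance: each squared
    deviation is divided by that feature's variance before summing."""
    return math.sqrt(sum((xi - mi) ** 2 / vi
                         for xi, mi, vi in zip(x, mean, var)))

# Two features with very different spreads: an equal raw deviation of 2.0
# counts for much less on the high-variance feature.
mean, var = [0.0, 0.0], [1.0, 100.0]
print(mahalanobis_diag([2.0, 0.0], mean, var))  # 2.0
print(mahalanobis_diag([0.0, 2.0], mean, var))  # 0.2
```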

Multi-Modal Biometrics System for Ubiquitous Sensor Network Environment (유비쿼터스 센서 네트워크 환경을 위한 다중 생체인식 시스템)

  • Noh, Jin-Soo;Rhee, Kang-Hyeon
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.44 no.4 s.316
    • /
    • pp.36-44
    • /
    • 2007
  • In this paper, we implement a speech and face recognition system that supports various ubiquitous sensor network application services, such as switch control and authentication, over wireless audio and image interfaces. The proposed system consists of hardware with audio and image sensors and software comprising a speech recognition algorithm using a psychoacoustic model and a face recognition algorithm using PCA (Principal Components Analysis) and LDPC (Low-Density Parity-Check) codes. The speech and face recognizers run on a host PC to use the sensor energy effectively. To improve recognition accuracy, we implement an FEC (Forward Error Correction) system, and we optimized the simulation coefficients and test environment to remove wireless channel noise and correct wireless channel errors effectively. As a result, when the distance between the audio sensor and the voice source is under 1.5 m, the FAR and FRR are 0.126% and 7.5%, respectively. With the face recognition step limited to two attempts, the GAR and FAR are 98.5% and 0.036%.
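The FAR/FRR figures reported above are threshold trade-offs over matcher scores; a minimal sketch of how the two rates are computed from genuine and impostor score lists (the scores below are hypothetical, not the paper's data):

```python
def far_frr(genuine, impostor, threshold):
    """FAR: fraction of impostor scores accepted; FRR: fraction of
    genuine scores rejected. Accept when score >= threshold."""
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

genuine = [0.9, 0.8, 0.85, 0.4]
impostor = [0.3, 0.2, 0.6, 0.1]
print(far_frr(genuine, impostor, 0.5))  # (0.25, 0.25)
```

Raising the threshold lowers FAR at the cost of FRR, which is why the paper reports both rates together.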