• Title/Summary/Keyword: voice extract

Search Result: 68 entries

A study on the vowel extraction from the word using the neural network (신경망을 이용한 단어에서 모음추출에 관한 연구)

  • 이택준;김윤중
    • Proceedings of the Korea Society for Industrial Systems Conference / 2003.11a / pp.721-727 / 2003
  • This study designed and implemented a system that extracts vowels from spoken words. The system comprises a voice feature extraction module and a neural network module. The feature extraction module uses an LPC (Linear Prediction Coefficient) model to extract voice features from a word. The neural network module consists of a learning module and a voice recognition module: the learning module sets up learning patterns and trains the network, and the recognition module uses the trained network to extract vowels from a word. To test the performance of the implemented vowel extraction system, the network was trained on selected vowels (a, eo, o, e, i). The experiment confirmed that the recognition module could extract vowels from four test words.

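The LPC feature-extraction step described in the abstract is conventionally computed from frame autocorrelations with the Levinson-Durbin recursion. A minimal sketch follows; this is a generic illustration, not the authors' implementation, and the function names are illustrative:

```python
def autocorr(frame, order):
    """Autocorrelation r[0..order] of one speech frame."""
    n = len(frame)
    return [sum(frame[i] * frame[i + k] for i in range(n - k))
            for k in range(order + 1)]

def lpc(frame, order):
    """Linear prediction coefficients a1..ap via Levinson-Durbin."""
    r = autocorr(frame, order)
    a = [0.0] * (order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err                 # reflection coefficient
        new_a = a[:]
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        new_a[i] = k
        a = new_a
        err *= (1.0 - k * k)           # residual prediction error
    return a[1:]
```

For a noiseless first-order autoregressive signal x[n] = 0.9·x[n-1], an order-1 fit recovers a coefficient near -0.9, i.e. the predictor x̂[n] = 0.9·x[n-1].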

The Extraction of Effective Index Database from Voice Database and Information Retrieval (음성 데이터베이스로부터의 효율적인 색인데이터베이스 구축과 정보검색)

  • Park Mi-Sung
    • Journal of Korean Library and Information Science Society / v.35 no.3 / pp.271-291 / 2004
  • Information service sources such as digital libraries are increasingly asked to provide services over atypical multimedia databases of images, voice, and VOD/AOD. This study proposes components for voice processing: a word-phrase generator, a syllable recoverer, a morphological analyzer, and a corrector. The proposed technique transforms a voice database into a text database and then extracts an index database from the text database. On top of this, the study suggests an information retrieval model, voice full-text information retrieval, that uses the extracted index database.

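The index-database step the abstract outlines can be illustrated with a minimal inverted index over transcribed documents; the document schema and function names below are assumptions for the sketch, not the paper's design:

```python
from collections import defaultdict

def build_index(docs):
    """Map each term to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """Conjunctive full-text retrieval: ids of docs containing every query term."""
    term_sets = [index.get(t, set()) for t in query.lower().split()]
    return set.intersection(*term_sets) if term_sets else set()
```

A query for several terms returns only the documents containing all of them, which is the simplest form of the full-text retrieval model the abstract refers to.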

Voice Recognition Based on Adaptive MFCC and Neural Network (적응 MFCC와 Neural Network 기반의 음성인식법)

  • Bae, Hyun-Soo;Lee, Suk-Gyu
    • IEMEK Journal of Embedded Systems and Applications / v.5 no.2 / pp.57-66 / 2010
  • In this paper, we propose an enhanced voice recognition algorithm using adaptive MFCC (Mel Frequency Cepstral Coefficients) and a neural network. Although extracting voice features from the raw data is critical to raising the recognition ratio, conventional algorithms tend to deteriorate the voice data when they eliminate noise within specific frequency bands. Unlike conventional MFCC, the proposed algorithm assigns larger weights to specified frequency regions and uses a non-overlapping filterbank, enhancing the recognition ratio without degrading the voice data. Simulation results show that the proposed algorithm outperforms standard MFCC because it is robust to variations in the environment.
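The mel-scale filterbank underlying MFCC can be sketched with the standard mel formula; the paper's adaptive weighting and non-overlapping filterbank are not reproduced here, only the conventional baseline it modifies:

```python
import math

def hz_to_mel(f):
    """Standard mel-scale mapping (Hz to mel)."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    """Inverse mapping (mel to Hz)."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def filterbank_centers(n_filters, f_low, f_high):
    """Filter center frequencies (Hz), spaced uniformly on the mel scale."""
    m_low, m_high = hz_to_mel(f_low), hz_to_mel(f_high)
    step = (m_high - m_low) / (n_filters + 1)
    return [mel_to_hz(m_low + step * (i + 1)) for i in range(n_filters)]
```

Because the spacing is uniform in mel rather than Hz, the centers crowd together at low frequencies and spread out at high frequencies, which is the nonlinearity the abstract alludes to.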

Voice Recognition Performance Improvement using the Convergence of Bayesian method and Selective Speech Feature (베이시안 기법과 선택적 음성특징 추출을 융합한 음성 인식 성능 향상)

  • Hwang, Jae-Chun
    • Journal of the Korea Convergence Society / v.7 no.6 / pp.7-11 / 2016
  • Voice recognition systems operating in white-noise environments do not recognize speech correctly when variable voices are mixed in. In this paper, we therefore propose a method that fuses a Bayesian technique with selective speech feature extraction for effective voice recognition. Filter-bank frequency response coefficients are used for selective voice extraction: for every possible pair of observed variables, the noise information in the voice signal is used to selectively extract speech features from the ratio of output energies. Combined with Bayesian voice recognition, this provides noise elimination and improves the recognition rate. The results confirm a vocabulary recognition rate 2.3% higher than the HMM and CHMM methods, respectively.

A Study on Stable Motion Control of Humanoid Robot with 24 Joints Based on Voice Command

  • Lee, Woo-Song;Kim, Min-Seong;Bae, Ho-Young;Jung, Yang-Keun;Jung, Young-Hwa;Shin, Gi-Soo;Park, In-Man;Han, Sung-Hyun
    • Journal of the Korean Society of Industry Convergence / v.21 no.1 / pp.17-27 / 2018
  • We propose a new approach to controlling biped robot motion based on iterative learning of voice commands for the implementation of a smart factory. Real-time processing of the speech signal is very important for high-speed, precise automatic voice recognition. Recently, voice recognition has been used for intelligent robot control, artificial life, wireless communication, and IoT applications. To extract valuable information from the speech signal, make decisions, and obtain results, the data must be manipulated and analyzed. The basic method for extracting features of the voice signal is to find the Mel frequency cepstral coefficients (MFCCs): coefficients that collectively represent the short-term power spectrum of a sound, based on a linear cosine transform of a log power spectrum on a nonlinear mel scale of frequency. The reliability of voice commands for controlling the biped robot's motion is illustrated by computer simulation and experiments with a biped walking robot with 24 joints.
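The MFCC definition quoted in the abstract (a linear cosine transform of log mel-band energies) can be written down directly. In this sketch `mel_energies` is assumed to come from an upstream mel filterbank stage; it is an illustration of the definition, not the paper's pipeline:

```python
import math

def dct_ii(x):
    """Type-II DCT: the 'linear cosine transform' in the MFCC definition."""
    n = len(x)
    return [sum(x[j] * math.cos(math.pi * k * (j + 0.5) / n) for j in range(n))
            for k in range(n)]

def mfcc_from_mel_energies(mel_energies, n_coeffs):
    """Cepstral coefficients: DCT of the log of the mel-band energies."""
    log_e = [math.log(e) for e in mel_energies]
    return dct_ii(log_e)[:n_coeffs]
```

A flat spectrum (equal energy in every mel band) has zero log-variation across bands, so every cepstral coefficient beyond the mean term vanishes; the coefficients only respond to spectral shape.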

Correlation between the Content and Pharmacokinetics of Ginsenosides from Four Different Preparation of Panax Ginseng C.A. Meyer in Rats

  • Jeon, Ji-Hyeon;Lee, Jaehyeok;Lee, Chul Haeng;Choi, Min-Koo;Song, Im-Sook
    • Mass Spectrometry Letters / v.12 no.1 / pp.16-20 / 2021
  • We aimed to compare the ginsenoside content and pharmacokinetics of four different ginseng products after oral administration at a dose of 1 g/kg in rats. The four products were fresh ginseng extract, red ginseng extract, white ginseng extract, and saponin-enriched white ginseng extract, all prepared from the radix of Panax ginseng C.A. Meyer. Ginsenoside concentrations in the ginseng products and in rat plasma samples were determined using liquid chromatography-tandem mass spectrometry (LC-MS/MS). Eight or nine of the 15 tested ginsenosides were detected; however, the content and total ginsenosides varied with the preparation method. Moreover, the content of triglycosylated ginsenosides was higher than that of diglycosylated ginsenosides, and deglycosylated ginsenosides were not present in any preparation. After single oral administrations of the four products in rats, only four of the 15 tested ginsenosides, 20(S)-ginsenosides Rb1 (GRb1), GRb2, GRc, and GRd, were detected in the rat plasma samples. The plasma concentrations of GRb1, GRb2, GRc, and GRd differed depending on the preparation method, but the pharmacokinetic features of the four ginseng products were similar. In conclusion, a good correlation was identified between the area under the concentration curve and the content of GRb1, GRb2, and GRc, but not GRd, in the ginseng products, which may result from their higher content and intestinal biotransformation.
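The area under the concentration curve used in the conclusion is conventionally computed from sampled plasma concentrations with the linear trapezoidal rule. A minimal sketch with made-up sample points (not the study's data):

```python
def auc_trapezoid(times, concs):
    """Area under the concentration-time curve, linear trapezoidal rule.

    times: sampling times (e.g. hours), strictly increasing
    concs: plasma concentrations at those times
    """
    return sum((times[i + 1] - times[i]) * (concs[i] + concs[i + 1]) / 2.0
               for i in range(len(times) - 1))
```

For a profile that rises from 0 to 10 at t=1 and falls back to 0 at t=2, the two trapezoids contribute 5 + 5, giving an AUC of 10 concentration-hours.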

Implementation of Extracting Specific Information by Sniffing Voice Packet in VoIP

  • Lee, Dong-Geon;Choi, WoongChul
    • International journal of advanced smart convergence / v.9 no.4 / pp.209-214 / 2020
  • VoIP technology has been widely used for exchanging voice or image data over IP networks. VoIP, often called Internet telephony, sends and receives voice data over the RTP protocol during a session. However, voice data in VoIP carried over RTP is at risk of exposure, because the RTP protocol does not specify encryption of the original data. We implement programs that can extract meaningful information, that is, the information the program user wants to obtain, from the user's dialogue. The implementation has two parts: a client part, which takes as input the keyword of the information the user wants to obtain, and a server part, which sniffs the traffic and performs speech recognition. We use the Google Speech API from Google Cloud, which applies machine learning in the speech recognition process. Finally, we discuss the usability and limitations of the implementation with an example.
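The sniffing step relies on the fixed 12-byte RTP header (RFC 3550) being unencrypted, so the payload can be located and decoded. A minimal parser for that fixed header; this is illustrative, not the authors' code:

```python
import struct

def parse_rtp_header(packet):
    """Parse the 12-byte fixed RTP header (RFC 3550); payload follows it.

    Ignores CSRC entries and header extensions for simplicity.
    """
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,             # always 2 for RTP
        "payload_type": b1 & 0x7F,      # e.g. 0 = PCMU (G.711 mu-law)
        "marker": (b1 >> 7) & 1,
        "sequence": seq,
        "timestamp": ts,
        "ssrc": ssrc,
        "payload": packet[12:],         # the encoded voice samples
    }
```

Once the payload type is known, the raw payload bytes can be decoded with the matching codec and fed to a speech recognizer, which is exactly the exposure risk the abstract describes.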

An acoustic study of feeling information extracting method (음성을 이용한 감정 정보 추출 방법)

  • Lee, Yeon-Soo;Park, Young-B.
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.10 no.1 / pp.51-55 / 2010
  • Telemarketing services are provided through voice media in places such as modern call centers. These call centers try to measure their service quality, and one measuring method is to extract the speaker's feeling information from their voice. This study proposes analyzing the speaker's voice to extract that feeling information. For this purpose, a person's feeling is categorized by analyzing several types of signal parameters in the voice signal into four states: joy, sorrow, excitement, and normality. Since an excited or angry state can be a major factor in service quality, this paper proposes selecting conversations with problems by extracting the speaker's feeling information from the pitches and amplitudes of the voice.
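Pitch extraction of the kind this abstract relies on is often done with a short-time autocorrelation peak search. A rough frame-based sketch; the plausible-pitch bounds and function name are assumptions, not the authors' method:

```python
def estimate_pitch(frame, sample_rate, f_min=80.0, f_max=400.0):
    """Pitch (Hz) from the autocorrelation peak within a plausible lag range."""
    lag_min = int(sample_rate / f_max)     # shortest period to consider
    lag_max = int(sample_rate / f_min)     # longest period to consider
    best_lag, best_val = lag_min, float("-inf")
    for lag in range(lag_min, min(lag_max, len(frame) - 1) + 1):
        val = sum(frame[i] * frame[i + lag] for i in range(len(frame) - lag))
        if val > best_val:
            best_val, best_lag = val, lag
    return sample_rate / best_lag
```

Amplitude (frame energy) is even simpler: the sum of squared samples per frame. Trends in pitch and energy across a call are the raw material for the four-state classification the abstract describes.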

Anatomy of Delay for Voice Service in NGN

  • Lee, Hoon;Baek, Yong-Chang
    • Proceedings of the IEEK Conference / 2003.11c / pp.172-175 / 2003
  • In this paper we propose a method for evaluating the quality of service of VoIP services in NGN. Specifically, we anatomize the elements of delay of a voice connection across the network in an end-to-end manner and investigate the expected value at each point. We extract the delay time of each element in the network, such as the gateway, network node, and terminal equipment, and estimate an upper bound for the tolerable delay in each element.

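The end-to-end anatomy described above amounts to summing per-element one-way delays and checking the total against a tolerable bound (ITU-T G.114 recommends keeping one-way delay at or below 150 ms for good interactive quality). The element names and figures below are illustrative assumptions, not the paper's measurements:

```python
def end_to_end_delay(components):
    """Total one-way delay (ms): the sum of per-element delays on the path."""
    return sum(components.values())

# Hypothetical one-way delay budget, in milliseconds
budget = {
    "encoder_and_packetization": 25.0,        # codec frame plus look-ahead
    "sender_gateway": 5.0,
    "network_propagation_and_queueing": 40.0, # per-node delays aggregated
    "jitter_buffer": 40.0,
    "decoder_and_terminal": 5.0,
}
total = end_to_end_delay(budget)
```

With these numbers the connection stays inside the 150 ms bound with 35 ms of headroom; tightening any one element's upper bound frees budget for the others, which is the trade-off the paper's element-by-element analysis exposes.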

Voice Synthesis Detection Using Language Model-Based Speech Feature Extraction (언어 모델 기반 음성 특징 추출을 활용한 생성 음성 탐지)

  • Seung-min Kim;So-hee Park;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology / v.34 no.3 / pp.439-449 / 2024
  • Recent rapid advances in voice generation technology have enabled the natural synthesis of voices from text alone. However, this progress has led to an increase in malicious activities such as voice phishing (vishing), where generated voices are exploited for criminal purposes. Numerous models have been developed to detect synthesized voices, typically by extracting features from the voice and using them to estimate the likelihood that the voice was generated. This paper proposes a new voice feature extraction model to address misuse cases arising from generated voices: it uses a deep-learning-based audio codec model and the pre-trained natural language processing model BERT to extract novel voice features. To assess the suitability of the proposed features for generated-voice detection, four detection models were built on the extracted features and their performance was evaluated. For comparison, three detection models based on the Deepfeature approach proposed in previous studies were evaluated against the others in terms of accuracy and EER. The model proposed in this paper achieved an accuracy of 88.08% and a low EER of 11.79%, outperforming the existing models. These results confirm that the proposed voice feature extraction method can be an effective tool for distinguishing generated from real voices.
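The EER metric used in the evaluation is the operating point where the false-reject rate on genuine voices equals the false-accept rate on generated ones. A simple threshold-sweep sketch over hypothetical detector scores (higher score = more likely genuine); this is a generic approximation, not the paper's evaluation code:

```python
def equal_error_rate(genuine_scores, spoof_scores):
    """Approximate EER by sweeping thresholds over the observed scores."""
    thresholds = sorted(set(genuine_scores + spoof_scores))
    best = 1.0
    for t in thresholds:
        frr = sum(s < t for s in genuine_scores) / len(genuine_scores)
        far = sum(s >= t for s in spoof_scores) / len(spoof_scores)
        best = min(best, max(frr, far))  # point where the two rates cross
    return best
```

Perfectly separated score distributions give an EER of 0; an EER of 11.79%, as reported above, means that at the crossover threshold about 12% of each class is misclassified.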