• Title/Abstract/Keywords: Voice Features Extraction


Voice Synthesis Detection Using Language Model-Based Speech Feature Extraction

  • 김승민;박소희;최대선
    • 정보보호학회논문지 / Vol. 34, No. 3 / pp. 439-449 / 2024
  • Recent rapid advances in speech generation technology have made it possible to synthesize natural-sounding speech from text alone. This progress has also increased abuse, such as voice phishing, in which another person's voice is generated and used for crime. Many models have been developed to detect whether speech is generated; in general, they extract features from the speech and detect generated speech based on those features. To counter the abuse of generated speech, this paper proposes a new speech feature extraction model. The proposed model combines a deep-learning-based audio codec model that takes audio as input with BERT, a pre-trained natural language processing model. To verify that the proposed feature extraction model is suitable for voice synthesis detection, four generated-voice detection models were built with the extracted features and evaluated. For comparison, accuracy and EER were measured against three Deepfeature-based detection models proposed in previous work and against other models. The proposed model achieved a higher accuracy of 88.08% and a lower EER of 11.79% than the existing models. These results confirm that the proposed feature extraction method can serve as an effective tool for distinguishing generated speech from real speech.
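The comparison above is reported in accuracy and equal error rate (EER). As a reference for the metric only, here is a minimal pure-Python sketch of computing EER from detector scores by a threshold sweep; the score lists in the usage example are hypothetical, not from the paper:

```python
def equal_error_rate(genuine, spoof):
    """EER is the operating point where the false-accept rate on spoofed
    speech equals the false-reject rate on genuine speech. Higher score
    is assumed to mean 'more likely genuine'."""
    best_gap, eer = float("inf"), 1.0
    for thr in sorted(set(genuine + spoof)):
        far = sum(s >= thr for s in spoof) / len(spoof)     # spoof accepted
        frr = sum(s < thr for s in genuine) / len(genuine)  # genuine rejected
        if abs(far - frr) < best_gap:                       # closest FAR=FRR point
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Perfectly separated scores give EER = 0; overlapping scores give EER > 0.
print(equal_error_rate([0.9, 0.8, 0.7, 0.6], [0.1, 0.2, 0.3, 0.4]))
```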

Robust Feature Extraction for Voice Activity Detection in Nonstationary Noisy Environments

  • 홍정표;박상준;정상배;한민수
    • 말소리와 음성과학 / Vol. 5, No. 1 / pp. 11-16 / 2013
  • This paper proposes a robust feature extraction method for accurate voice activity detection (VAD). VAD is one of the principal modules in speech signal processing systems such as speech codecs, speech enhancement, and speech recognition. Noisy environments contain nonstationary noises that drastically degrade VAD accuracy, because feature fluctuation in noise intervals increases the false alarm rate. To improve VAD performance, this paper proposes a harmonic-weighted energy feature. The method focuses on voiced speech intervals and weights frame energy by the harmonic-to-noise ratio, so that the degree of harmonicity determines each frame's contribution. For performance evaluation, receiver operating characteristic curves and the equal error rate are measured.
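The weighting idea can be sketched as follows. This is an illustrative reading, not the paper's exact formulation: it assumes per-frame HNR estimates (in dB) are already available and maps them to a [0, 1] weight so that harmonic frames dominate the energy contour:

```python
def harmonic_weighted_energy(frames, hnr_db):
    """Weight each frame's energy by a sigmoid-like function of its
    harmonic-to-noise ratio, suppressing noise-only frames."""
    out = []
    for frame, hnr in zip(frames, hnr_db):
        energy = sum(x * x for x in frame)           # raw frame energy
        weight = 1.0 / (1.0 + 10 ** (-hnr / 10.0))   # HNR (dB) -> weight in (0, 1)
        out.append(energy * weight)
    return out
```

A VAD decision would then threshold this weighted contour instead of raw energy, which stays low in noise intervals even when the noise is loud but aperiodic.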

Emotion Recognition of Facial Expression Using Hybrid Feature Extraction

  • 변광섭;박창현;심귀보
    • 대한전기학회:학술대회논문집 / Proceedings of the 2004 KIEE Symposium, Information and Control Section / pp. 132-134 / 2004
  • Humans recognize emotion in one another compositely, using various cues such as the face, voice, and gestures. Among these, the face reveals emotional expression most clearly. Humans express and recognize emotion through the complex and varied features of the face. This paper proposes a hybrid feature extraction method for emotion recognition from facial expressions. The method imitates the human emotion recognition system by combining geometry-based feature extraction with a color distribution histogram. By extracting many features of a facial expression, it can perform emotion recognition robustly.


Study for the Extraction of Stable Vocal Features and Definition of the Features

  • 김근호;김상길;강남식;김종열
    • 한국한의학연구원논문집 / Vol. 17, No. 3 / pp. 97-104 / 2011
  • Objectives : In this paper, we propose a method for selecting reliable variables from various vocal features, such as frequency-derivative features, frequency band ratios, the intensities of five vowels, and the intensity of a sentence, since some features are sensitive to variation in a subject's utterance. Methods : To obtain reliable voice variables, the coefficient of variation (CV) was used as the index of reliability. Since the distributions of a few features are not Gaussian but skewed to the right or left, we transformed those features by taking the log or square root. Moreover, the definitions of the variables suitable for representing vocal properties were explained and analyzed. Results : We first recorded the vowels and the sentence five times in the morning and five times in the afternoon of the same day, ten recordings in total from each of six subjects (three males and three females). We then analyzed the CVs of each subject's voice to find stable features with sufficient repeatability. Features with a CV below 20% for all six subjects were selected. As a result, 92 stable variables were extracted from the 222 features, including all the transformed variables. Conclusions : Voice can be widely used to classify the four constitution types and to assess health conditions by extracting meaningful features as physical quantities in traditional Korean medicine or Western medicine. Therefore, stable voice variables can be useful in personalized u-Healthcare systems and for improving diagnostic accuracy.
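The CV-based selection procedure described above can be sketched in a few lines of Python; the feature names and measurement layout in the example are hypothetical, not the paper's data:

```python
def coefficient_of_variation(xs):
    """CV = standard deviation / mean, a unitless repeatability index."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return (var ** 0.5) / mean

def stable_features(measurements, threshold=0.20):
    """measurements: {feature_name: {subject: [repeated recordings]}}.
    Keep only features whose CV stays below the threshold for every subject."""
    return [
        name
        for name, per_subject in measurements.items()
        if all(coefficient_of_variation(v) < threshold for v in per_subject.values())
    ]

# Hypothetical repeated measurements for two features and two subjects:
data = {
    "f0_mean": {"s1": [100.0, 102.0, 98.0], "s2": [120.0, 121.0, 119.0]},
    "band_ratio": {"s1": [1.0, 3.0, 5.0], "s2": [2.0, 2.1, 2.0]},
}
print(stable_features(data))  # only the low-CV feature survives
```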

Voice Feature Extraction for Lung Diseases Based on the Analysis of Speech Rate and Intensity

  • 김봉현;조동욱
    • 정보처리학회논문지B / Vol. 16B, No. 6 / pp. 471-478 / 2009
  • Lung diseases, classified among the six major intractable diseases of modern society, are caused mostly by smoking and air pollution. When lung function is damaged, the exchange of carbon dioxide and oxygen in the alveoli no longer takes place normally, so these life-threatening diseases are drawing increasing attention. This paper proposes a method for diagnosing lung disease that applies speech analysis elements, with the goal of extracting the vocal characteristics of lung disease. First, a subject group was formed of patients with lung disease and healthy people of the same age and sex, and their voices were collected. Various speech analysis elements were then applied to the collected voices, and significant differences between the patient group and the normal group were found in speech rate and intensity. In conclusion, the patient group showed a slower speech rate and a higher intensity than the normal group, and based on this, a method for extracting the vocal features of lung disease was presented.
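The two measures on which the groups differed can be computed as below. This is a generic sketch (syllable counts and waveform samples are assumed inputs), not the paper's exact analysis pipeline:

```python
import math

def speech_rate(num_syllables, duration_s):
    """Speaking rate as syllables per second over the utterance."""
    return num_syllables / duration_s

def mean_intensity_db(samples, ref=1.0):
    """Mean intensity of a waveform in dB relative to `ref` amplitude,
    computed from the RMS of the samples."""
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    return 20.0 * math.log10(rms / ref)

# A 2.5 s utterance with 10 syllables -> 4 syllables/s:
print(speech_rate(10, 2.5))
```

Group comparison would then reduce to comparing the distributions of these two numbers between patients and controls.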

A Comparison of Effective Feature Vectors for Speech Emotion Recognition

  • 신보라;이석필
    • 전기학회논문지 / Vol. 67, No. 10 / pp. 1364-1369 / 2018
  • Speech emotion recognition, which aims to classify a speaker's emotional state from speech signals, is one of the essential tasks for making human-machine interaction (HMI) more natural and realistic. Voice is one of the main information channels in interpersonal communication. However, existing speech emotion recognition technology has not achieved satisfactory performance, probably because of the lack of effective emotion-related features. This paper surveys the various features used for speech emotion recognition and discusses which features, or which combinations of features, are valuable and meaningful for emotion classification. The main aim is to compare the approaches used for feature extraction and to propose a basis for extracting useful features in order to improve SER performance.
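One common way to build such feature vectors is to collapse frame-level descriptors (MFCC, F0, energy, etc.) into utterance-level statistics. The sketch below is illustrative of that general practice, not a method taken from the survey itself:

```python
def utterance_vector(frame_features):
    """Collapse a sequence of per-frame feature vectors into one fixed-length
    utterance-level vector of per-dimension means and standard deviations."""
    n = len(frame_features)
    dim = len(frame_features[0])
    means = [sum(f[d] for f in frame_features) / n for d in range(dim)]
    stds = [
        (sum((f[d] - means[d]) ** 2 for f in frame_features) / n) ** 0.5
        for d in range(dim)
    ]
    return means + stds  # e.g. 13 MFCCs per frame -> 26-dim utterance vector

# Two 2-dim frames -> a 4-dim utterance vector [mean0, mean1, std0, std1]:
print(utterance_vector([[1.0, 2.0], [3.0, 4.0]]))
```

Fixed-length vectors like this are what a downstream classifier (SVM, neural network, etc.) consumes, regardless of how long each utterance is.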

A Study of the Extraction of a Hand Vein Pattern

  • 김종석;백한욱;정진현
    • 대한전기학회:학술대회논문집 / Proceedings of the 2000 KIEE Summer Conference, Part D / pp. 3022-3024 / 2000
  • Biometrics is the electronic recognition of individuals, achieved by extracting and then verifying features unique to each individual. This rapidly evolving technology is expected to be widely adopted in a broad range of applications. Many methods have been studied, such as extraction of facial features, the voice, veins, and even a person's signature. Among biometrics, hand veins provide large, robust, stable, hidden features. Hand vein patterns have been shown to be unique by Cambridge Consultants Ltd. Because of these advantages, hand vein recognition is a rapidly developing area of security.


A Voice-Activated Dialing System with Distributed Speech Recognition in WiFi Environments

  • 박성준;구명완
    • 대한음성학회지:말소리 / No. 56 / pp. 135-145 / 2005
  • In this paper, a WiFi phone system with distributed speech recognition is implemented. The WiFi phone's voice-activated dialing and related functions are explained. Features of the input speech are extracted and sent to the interactive voice response (IVR) server over the real-time transport protocol (RTP). Feature extraction is based on the European Telecommunications Standards Institute (ETSI) standard front-end, modified to reduce processing time. The front-end processing time on a WiFi phone is compared with that on a PC.


Extraction of Speech Features for Emotion Recognition

  • 권철홍;송승규;김종열;김근호;장준수
    • 말소리와 음성과학 / Vol. 4, No. 2 / pp. 73-78 / 2012
  • Emotion recognition is an important technology in the field of human-machine interfaces. To apply speech technology to emotion recognition, this study aims to establish a relationship between emotional groups and their corresponding voice characteristics by investigating various speech features. Features related to both the speech source and the vocal tract filter are included. Experimental results show that the statistically significant parameters for classifying the emotional groups are mainly related to the speech source, such as jitter, shimmer, F0 statistics (F0_min, F0_max, F0_mean, F0_std), harmonic parameters (H1, H2, HNR05, HNR15, HNR25, HNR35), and SPI.
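The source-related parameters listed above, such as jitter and shimmer, have simple standard definitions. A sketch of the local (cycle-to-cycle) variants, assuming pitch periods and per-cycle peak amplitudes have already been extracted from the waveform:

```python
def jitter(periods):
    """Local jitter: mean absolute difference of consecutive pitch periods,
    normalized by the mean period."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def shimmer(amplitudes):
    """Local shimmer: the same measure applied to per-cycle peak amplitudes."""
    diffs = [abs(a - b) for a, b in zip(amplitudes, amplitudes[1:])]
    return (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))

# A perfectly periodic voice has zero jitter; irregular periods raise it:
print(jitter([10.0, 10.0, 10.0]), jitter([9.0, 11.0]))
```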

Proposed Efficient Architectures and Design Choices in SoPC System for Speech Recognition

  • Trang, Hoang;Hoang, Tran Van
    • 전기전자학회논문지 / Vol. 17, No. 3 / pp. 241-247 / 2013
  • This paper presents the design of a System on Programmable Chip (SoPC) based on a Field Programmable Gate Array (FPGA) for speech recognition, in which Mel-Frequency Cepstral Coefficients (MFCC) are used for speech feature extraction and Vector Quantization for recognition. The recognition system is implemented in the following steps: feature extraction, codebook training, and recognition. In the feature extraction step, the input voice data is transformed into spectral components and the main features are extracted using the MFCC algorithm. In the recognition step, the spectral features obtained in the first step are processed and compared with the trained components; Vector Quantization (VQ) is applied here. In our experiment, Altera's DE2 board with a Cyclone II FPGA is used to implement a recognition system that can recognize 64 words. The execution speed of the blocks in the speech recognition system is surveyed by counting the clock cycles spent executing each block. Recognition accuracy is also measured under different system parameters. These results on execution speed and recognition accuracy can help a designer choose the best configuration for speech recognition on an SoPC.
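The VQ recognition step, choosing the word whose trained codebook best fits the input frames, can be sketched as below. This is a software sketch of the idea only, not the paper's FPGA implementation, and the tiny codebooks in the example are hypothetical:

```python
def distortion(frames, codebook):
    """Average squared distance from each MFCC frame to its nearest codeword."""
    total = 0.0
    for f in frames:
        total += min(sum((a - b) ** 2 for a, b in zip(f, c)) for c in codebook)
    return total / len(frames)

def recognize(frames, codebooks):
    """Pick the word whose trained codebook yields the lowest distortion."""
    return min(codebooks, key=lambda word: distortion(frames, codebooks[word]))

# Two one-codeword codebooks; the input frames lie near the "yes" codeword:
books = {"yes": [[0.0, 0.0]], "no": [[5.0, 5.0]]}
print(recognize([[0.1, 0.1], [0.2, 0.0]], books))
```

In hardware, the same nearest-codeword search is what dominates the clock-cycle count that the paper measures per block.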