• Title/Summary/Keyword: speaker gender (화자의 성별)


Acoustics of Young People's Voices in Busan: Developmental Changes of Spectral Parameters (부산 지역 청소년 음성의 연령별 특징 변화 분석)

  • Back Sung-Kwan;Ro Yong-Ju;Yoon Jong-Rak
    • Proceedings of the Acoustical Society of Korea Conference / autumn / pp.49-52 / 2001
  • The duration, pitch-frequency, and formant-frequency characteristics of adolescent voices in the Busan area were analyzed by age and gender. In real utterance environments, speech patterns vary widely from speaker to speaker. To model this, a statistical, factor-by-factor parameter analysis over a large amount of speech data must come first. The data used in the experiments are part of a fable and the monophthongs /아/, /이/, /우/, /에/, /오/, each produced three times by adolescents (elementary, middle, and high school students) living in the Busan area. The duration and frequency-characteristic variation patterns obtained from the experiments were statistically analyzed and quantified by age and gender. As expected, the duration and frequency characteristics of Busan adolescent voices differed considerably from previously studied adult voices; these findings can serve as baseline data for designers to consider when building a speech database of the Busan dialect.
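
As a rough illustration of how the parameters analyzed above (duration, pitch frequency, formant frequencies) can be measured, here is a minimal Python sketch assuming librosa; the file name, sampling rate, and LPC settings are illustrative choices, not the paper's.

```python
# Minimal sketch: duration, F0, and rough formant estimates for a vowel
# recording. Illustrative only; the paper's exact pipeline is not given.
import numpy as np
import librosa

y, sr = librosa.load("vowel_a.wav", sr=16000)        # hypothetical file
duration = len(y) / sr

# F0 contour via probabilistic YIN (NaN on unvoiced frames)
f0, _, _ = librosa.pyin(y, fmin=75, fmax=500, sr=sr)
mean_f0 = np.nanmean(f0)

# Rough formants from LPC roots; order ~= sr/1000 + 2 is a common rule
a = librosa.lpc(y * np.hamming(len(y)), order=int(sr / 1000) + 2)
roots = [r for r in np.roots(a) if np.imag(r) > 0]
freqs = sorted(np.angle(r) * sr / (2 * np.pi) for r in roots)
formants = [f for f in freqs if f > 90][:3]          # F1-F3, roughly
print(f"duration={duration:.2f}s  F0={mean_f0:.0f}Hz  formants={formants}")
```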

A Development of Wireless Sensor Networks for Collaborative Sensor Fusion Based Speaker Gender Classification (협동 센서 융합 기반 화자 성별 분류를 위한 무선 센서네트워크 개발)

  • Kwon, Ho-Min
    • Journal of the Institute of Convergence Signal Processing / v.12 no.2 / pp.113-118 / 2011
  • In this paper, we develop a speaker gender classification technique using collaborative sensor fusion for use in a wireless sensor network. The distributed sensor nodes remove unwanted input data using BER (Band Energy Ratio) based voice activity detection, process only the relevant data, and transmit hard-labeled decisions to the fusion center, where a global decision fusion is carried out. This offers advantages in power consumption and network resource management. Bayesian sensor fusion and global weighted decision fusion methods are proposed to achieve the gender classification. As the number of sensor nodes varies, the Bayesian sensor fusion yields the best classification accuracy when using the optimal operating points of the ROC (Receiver Operating Characteristic) curves. For the weights used in the global decision fusion, the BER and MCL (Mutual Confidence Level) are employed to combine the decisions effectively at the fusion center. The simulation results show that as the number of sensor nodes increases, the classification accuracy improves even further under low-SNR (Signal-to-Noise Ratio) conditions.
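
The hard-decision fusion described above has a standard Bayesian form (the Chair-Varshney rule), in which each node's vote is weighted by its ROC operating point. A minimal sketch, with the operating points and prior as illustrative assumptions:

```python
# Minimal sketch: Bayesian (Chair-Varshney style) fusion of hard binary
# decisions u_i in {0, 1} from N sensor nodes, each characterized by its
# ROC operating point (pd_i, pfa_i). Values below are illustrative.
import numpy as np

def bayesian_fusion(u, pd, pfa, prior1=0.5):
    """Fused decision via a log-likelihood ratio test over node votes."""
    u, pd, pfa = map(np.asarray, (u, pd, pfa))
    # sum_i [ log P(u_i | H1) - log P(u_i | H0) ]
    llr = np.where(u == 1,
                   np.log(pd / pfa),
                   np.log((1 - pd) / (1 - pfa))).sum()
    threshold = np.log((1 - prior1) / prior1)
    return int(llr > threshold)

# Three nodes voting 1, 1, 0 with differing reliabilities:
print(bayesian_fusion(u=[1, 1, 0], pd=[0.9, 0.8, 0.7], pfa=[0.1, 0.2, 0.3]))
```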

Speaker-Adaptive Speech Synthesis based on Fuzzy Vector Quantizer Mapping and Neural Networks (퍼지 벡터 양자화기 사상화와 신경망에 의한 화자적응 음성합성)

  • Lee, Jin-Yi;Lee, Gwang-Hyeong
    • The Transactions of the Korea Information Processing Society / v.4 no.1 / pp.149-160 / 1997
  • This paper is concerned with a speaker-adaptive speech synthesis method using a mapped codebook designed by fuzzy mapping on FLVQ (Fuzzy Learning Vector Quantization). The FLVQ is used to design both the input and reference speakers' codebooks. This algorithm incorporates a fuzzy membership function into the LVQ (learning vector quantization) network. Unlike the LVQ algorithm, it minimizes the network output errors, which are the differences between the class-membership targets and the actual membership values, and consequently minimizes the distances between training patterns and competing neurons. Speaker adaptation in speech synthesis is performed as follows: the input speaker's codebook is mapped to a reference speaker's codebook using fuzzy concepts. The fuzzy VQ mapping replaces a codevector while preserving its fuzzy membership function. The codevector correspondence histogram is obtained by accumulating the vector correspondences along the DTW optimal path. We use the fuzzy VQ mapping to design a mapped codebook, defined as a linear combination of the reference speaker's vectors using each fuzzy histogram as a weighting function with membership values. In the adaptive-synthesis stage, input speech is fuzzy vector-quantized by the mapped codebook, and FCM arithmetic is then used to synthesize speech adapted to the input speaker. The speaker adaptation experiments are carried out using speech of males in their thirties as the input speakers' speech and speech of a female in her twenties as the reference speaker's speech. The speech materials used in the experiments are the sentences /anyoung hasim nika/ and /good morning/. As a result of the experiments, we obtained synthesized speech adapted to the input speaker.
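
The two core operations the abstract describes, FCM-style fuzzy memberships and a mapped codebook formed as a membership-weighted combination of reference codevectors, can be sketched as follows; the array shapes, the fuzzifier m, and the stand-in histogram are assumptions, not the paper's settings.

```python
# Minimal sketch: FCM-style fuzzy memberships of a vector against a
# codebook, and a mapped codebook built as a correspondence-weighted
# linear combination of reference codevectors. Shapes are assumptions.
import numpy as np

def fuzzy_memberships(x, codebook, m=2.0):
    """u_j = 1 / sum_k (d_j / d_k)^(2/(m-1)) for codevector j."""
    d = np.linalg.norm(codebook - x, axis=1) + 1e-12
    ratio = (d[:, None] / d[None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=1)

def mapped_codebook(hist, ref_codebook):
    """Each mapped codevector is a weighted mean of reference codevectors,
    weighted by its row of the correspondence histogram."""
    w = hist / (hist.sum(axis=1, keepdims=True) + 1e-12)
    return w @ ref_codebook

# hist[i, j]: fuzzy correspondence of input codevector i to reference j,
# accumulated along the DTW optimal path (random stand-in values here).
rng = np.random.default_rng(0)
hist = rng.random((64, 64))
ref = rng.standard_normal((64, 12))     # reference speaker's codebook
mapped = mapped_codebook(hist, ref)     # input -> reference mapping
print(mapped.shape, fuzzy_memberships(ref[0], mapped).shape)
```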

A Design and Implementation of Speech Recognition Preprocessing System using Formant Frequency (포만트 주파수를 이용한 음성인식 전처리 시스템의 설계 및 구현)

  • 김태욱;한승진;김민성;이정현
    • Proceedings of the Korean Information Science Society Conference / 1999.10b / pp.198-200 / 1999
  • Human speech carries not only information about meaning but also characteristics specific to the speaker's gender; that is, voices can be divided into female voices, which are stronger in the high frequencies, and male voices. However, conventional HMM-based speech recognition systems use a single HMM without taking this male/female distinction into account. Experiments with the algorithm presented in this paper show that male and female formant frequencies differ by about 100 Hz to 300 Hz, and based on this property we propose a method for discriminating male from female voices. Furthermore, gender-dependent HMMs are trained on male and female speech separately, and in the recognition stage the input is decoded with the male HMM if its formant characteristics indicate a male voice and with the female HMM if they indicate a female voice; this yields recognition improvements of 5.2% for male speech and 4.4% for female speech over the conventional method.
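
The routing idea described above can be sketched as follows; the reference formant means, the stand-in recognizer class, and its decode interface are hypothetical, not taken from the paper.

```python
# Minimal sketch: formant-based gender preprocessing that routes the
# input to a gender-dependent recognizer. Reference means and the
# DummyHMM interface are hypothetical placeholders.
class DummyHMM:
    """Stand-in for a gender-dependent HMM recognizer (hypothetical)."""
    def __init__(self, name):
        self.name = name
    def decode(self, feats):
        return f"decoded with {self.name} model"

def classify_gender(mean_f1_hz, male_mean=700.0, female_mean=850.0):
    """Nearest-mean decision on the utterance's average F1 (illustrative);
    female vowels typically show higher formant frequencies."""
    if abs(mean_f1_hz - female_mean) < abs(mean_f1_hz - male_mean):
        return "female"
    return "male"

models = {"male": DummyHMM("male"), "female": DummyHMM("female")}
mean_f1 = 820.0                 # would come from formant analysis
gender = classify_gender(mean_f1)
print(gender, models[gender].decode([]))
```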

A realization of pauses in utterance across speech style, gender, and generation (과제, 성별, 세대에 따른 휴지의 실현 양상 연구)

  • Yoo, Doyoung;Shin, Jiyoung
    • Phonetics and Speech Sciences / v.11 no.2 / pp.33-44 / 2019
  • This paper examines how the realization of pauses in utterances is affected by speech style, gender, and generation. For this purpose, we analyzed the frequency and duration of pauses. Pauses were categorized into four types: pause with breath, pause without breath, utterance-medial pause, and utterance-final pause. Forty-eight subjects living in Seoul were chosen from the Korean Standard Speech Database. All subjects engaged in both read and spontaneous speech, which also allowed us to compare realization across the two speech styles. The results showed that utterance-final pauses had longer durations than utterance-medial pauses, suggesting that the utterance-final pause functions to signal the end of an utterance to the listener. Regarding task differences, spontaneous speech had longer and more frequent pauses, for cognitive reasons. With regard to gender, women produced shorter and less frequent pauses, while for male speakers the duration of pauses with breath was significantly longer. Finally, regarding generation, older speakers produced more frequent pauses. The results also showed several interaction effects: male speakers produced longer pauses, and this gender effect was more prominent at the utterance-final position.
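
As a rough illustration, pause counts and durations of the kind analyzed above can be derived from recordings with a simple energy-based silence detector; the thresholds below are illustrative, and the study's own criteria (breath vs. no breath, utterance position) require annotation beyond this sketch.

```python
# Minimal sketch: deriving pause durations from a recording with an
# energy-based silence detector. Thresholds are illustrative only.
import numpy as np
import librosa

y, sr = librosa.load("speech.wav", sr=16000)    # hypothetical file
speech = librosa.effects.split(y, top_db=30)    # non-silent intervals

# Pauses = gaps between consecutive speech intervals, in seconds
gaps = (speech[1:, 0] - speech[:-1, 1]) / sr
pauses = gaps[gaps >= 0.1]                      # ignore micro-gaps
if len(pauses):
    print(f"count={len(pauses)}, mean duration={pauses.mean():.3f}s")
```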

Study on the realization of pause groups and breath groups (휴지 단위와 호흡 단위의 실현 양상 연구)

  • Yoo, Doyoung;Shin, Jiyoung
    • Phonetics and Speech Sciences / v.12 no.1 / pp.19-31 / 2020
  • The purpose of this study is to observe the realization of pause groups and breath groups in adult speakers and to examine how gender, generation, and task affect this realization. For this purpose, we analyzed forty-eight male and female speakers, divided into two generation groups: young and old. Task and gender affected the realization of both pause groups and breath groups. Pause groups were longer in read speech than in spontaneous speech, and longer in female speech; breath groups, on the other hand, were longer in spontaneous speech and in male speech. In spontaneous speech, which requires planning, speakers produced shorter pause groups, while the shorter breath groups in read speech reflect the short sentences of the reading material. The gender difference stems from differing pause patterns between genders: within breath groups, male speakers produced longer pauses than female speakers did, which may be due to differences in lung capacity. Generation affected neither the pause groups nor the breath groups; it influenced only the number of syllables and eojeols, which can be interpreted as the result of differences in speech rate between generations.
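
Group comparisons like those reported above are typically tested with a factorial analysis; a minimal sketch assuming statsmodels, with a synthetic balanced table standing in for the study's data (all values and effect sizes are invented):

```python
# Minimal sketch: three-way ANOVA of pause-group length by task, gender,
# and generation. The data are synthetic placeholders, not the study's.
from itertools import product
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rows = [dict(task=t, gender=g, generation=gen)
        for t, g, gen in product(["read", "spontaneous"],
                                 ["male", "female"],
                                 ["young", "old"])] * 6   # 48 observations
df = pd.DataFrame(rows)
rng = np.random.default_rng(0)
df["length"] = 1.0 + 0.3 * (df.task == "read") + rng.normal(0, 0.2, len(df))

model = smf.ols("length ~ C(task) * C(gender) * C(generation)", data=df).fit()
print(anova_lm(model, typ=2))
```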

Analysis of Error Characteristics and Usabilities for Korean Consonant Perception Test (한국자음지각검사의 오류특성 및 유용성 분석)

  • Kim, Dong Chang;Kim, Jin Sook;Lee, Kyoung Won
    • 재활복지 / v.18 no.4 / pp.295-314 / 2014
  • The purpose of this study was to provide baseline data for auditory rehabilitation by examining the error types and rates for phonemes that hearing-impaired listeners find difficult to discriminate. Thirty participants with sensorineural hearing loss listened to Korean Consonant Perception Test (KCPT) lists recorded by a male and a female talker, yielding data on error types and KCPT scores according to the talker's gender. In the initial-consonant test list, /ㄷ/, /ㅂ/, /ㅃ/, /ㅉ/, and /ㅌ/ showed error rates above 30%, while /ㄱ/ and /ㄷ/ did so in the final-consonant test list. The most common error type was initial-consonant or final-consonant substitution in the initial- and final-consonant test lists, respectively. The talker's gender effect was not significant, with no statistical difference between the scores obtained with the male and female voices. This means that the KCPT can be used in clinics regardless of the talker's gender.
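
Error types and rates of the kind reported above can be tabulated from (stimulus, response) pairs as a confusion analysis; the example pairs below are invented placeholders, not KCPT data.

```python
# Minimal sketch: per-consonant error rates and substitution patterns
# from (stimulus, response) pairs. The pairs are invented placeholders.
import pandas as pd

pairs = [("ㄷ", "ㅌ"), ("ㄷ", "ㄷ"), ("ㅂ", "ㅃ"), ("ㅂ", "ㅂ"), ("ㄱ", "ㄷ")]
df = pd.DataFrame(pairs, columns=["stimulus", "response"])

confusion = pd.crosstab(df["stimulus"], df["response"])   # counts matrix
errors = df[df["stimulus"] != df["response"]]
error_rate = (errors.groupby("stimulus").size()
              / df.groupby("stimulus").size()).fillna(0.0)
print(confusion, error_rate, sep="\n")
```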

Impact of face masks on spectral and cepstral measures of speech: A case study of two Korean voice actors (한국어 스펙트럼과 캡스트럼 측정시 안면마스크의 영향: 남녀 성우 2인 사례 연구)

  • Wonyoung Yang;Miji Kwon
    • The Journal of the Acoustical Society of Korea / v.43 no.4 / pp.422-435 / 2024
  • This study verified the effects of face masks on Korean speech in terms of acoustic, aerodynamic, and formant parameters. We chose all types of face masks available in Korea, categorized by filter performance and folding type. Two professional voice actors (a male and a female), native speakers of standard Korean with more than 20 years of experience, provided the voice data. Face masks attenuated the high-frequency range, resulting in decreased Vowel Space Area (VSA) and Vowel Articulation Index (VAI) scores and an increased low-to-high spectral ratio (L/H ratio) in all voice samples, which can lower speech intelligibility. However, the degree of increase or decrease depended on the voice characteristics. For the female speaker, Speech Level (SL) and Cepstral Peak Prominence (CPP) increased with increasing face mask thickness. In this study, the presence and filter performance of a face mask were found to affect speech acoustic parameters according to the speech characteristics. Face masks provoked vocal effort when vocal intensity was not sufficiently strong or the environment was less reverberant. Further research is needed on the vocal effort induced by face masks to overcome the acoustic modifications caused by wearing them.
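
Two of the measures named above can be stated concretely: the L/H spectral ratio (energy below versus above a cutoff, commonly 4 kHz) and the vowel space area (the polygon area spanned by corner vowels in the F1-F2 plane). A minimal sketch, with the cutoff and the example formant values as assumptions:

```python
# Minimal sketch: L/H spectral ratio and vowel space area (shoelace
# formula over corner vowels). Cutoff and formant values are assumptions.
import numpy as np

def lh_ratio_db(y, sr, cutoff_hz=4000.0):
    """Energy below vs. above the cutoff, in dB."""
    spec = np.abs(np.fft.rfft(y)) ** 2
    freqs = np.fft.rfftfreq(len(y), d=1.0 / sr)
    low = spec[freqs < cutoff_hz].sum()
    high = spec[freqs >= cutoff_hz].sum()
    return 10.0 * np.log10(low / high)

def vowel_space_area(f1, f2):
    """Shoelace polygon area over corner vowels in the F1-F2 plane."""
    x, y = np.asarray(f2, float), np.asarray(f1, float)
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

sr = 16000
t = np.arange(sr) / sr
y = np.sin(2 * np.pi * 220 * t) + 0.1 * np.sin(2 * np.pi * 6000 * t)
print(f"L/H ratio: {lh_ratio_db(y, sr):.1f} dB")
# Illustrative corner-vowel formants (Hz) for /a/, /i/, /u/:
print(f"VSA: {vowel_space_area([850, 300, 350], [1200, 2300, 800]):.0f} Hz^2")
```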

On-Site Performance Evaluation and Environment Analysis for Commercialized Audio-Visual HRI Components (시청각 기반 HRI 컴포넌트 상용화 서비스 현장 성능 평가 및 환경분석)

  • Ji, Su-Yeong;Kim, Hye-Jin;Kim, Do-Hyeong;Yun, Ho-Seop
    • Information and Communications Magazine / v.25 no.4 / pp.16-21 / 2008
  • This article presents on-site performance evaluation results for the representative HRI technologies (face detection, speaker gender classification, and sound source localization) that are most realistically applicable at the commercialization stage of intelligent service robots. It analyzes the deployment sites to provide guidelines to users, and proposes an environment analysis based on the performance evaluation, with the aims of establishing user-robot HRI criteria for optimal commercial services, identifying robot-service needs through deployment on public robot platforms, and maximizing product planning capability.

Extending StarGAN-VC to Unseen Speakers Using RawNet3 Speaker Representation (RawNet3 화자 표현을 활용한 임의의 화자 간 음성 변환을 위한 StarGAN의 확장)

  • Bogyung Park;Somin Park;Hyunki Hong
    • KIPS Transactions on Software and Data Engineering / v.12 no.7 / pp.303-314 / 2023
  • Voice conversion, a technology that regenerates an individual's speech with the acoustic properties (tone, cadence, gender) of another, has countless applications in education, communication, and entertainment. This paper proposes an approach based on the StarGAN-VC model that generates realistic-sounding speech without requiring parallel utterances. To overcome the constraints of the existing StarGAN-VC model, which uses one-hot vectors of the source and target speaker identities, this paper extracts feature vectors of target speakers using a pre-trained RawNet3. This yields a latent space in which voice conversion can be performed without direct speaker-to-speaker mappings, enabling an any-to-any structure. In addition to the loss terms of the original StarGAN-VC model, the Wasserstein distance is used as a loss term to ensure that generated voice segments match the acoustic properties of the target voice, and the Two Time-Scale Update Rule (TTUR) is used to facilitate stable training. Experimental results show that the proposed method outperforms previous methods, including the StarGAN-VC network on which it is based.
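
The two training ingredients named above, TTUR and a Wasserstein-style loss, can be made concrete in a short PyTorch sketch; the models, batch shapes, and learning rates are placeholders, not the paper's configuration.

```python
# Minimal sketch: TTUR (separate, unequal learning rates for critic and
# generator) with a Wasserstein-style critic loss. Models, shapes, and
# rates are placeholders, not the paper's configuration.
import torch

G = torch.nn.Linear(80, 80)     # stand-in generator
D = torch.nn.Linear(80, 1)      # stand-in critic/discriminator

# TTUR: the critic gets a larger learning rate than the generator
opt_d = torch.optim.Adam(D.parameters(), lr=4e-4, betas=(0.0, 0.9))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.0, 0.9))

real = torch.randn(16, 80)      # stand-in target-speaker features
fake = G(torch.randn(16, 80))

# Critic step: maximize D(real) - D(fake), i.e. minimize the negative
loss_d = D(fake.detach()).mean() - D(real).mean()
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: minimize -D(fake)
loss_g = -D(fake).mean()
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```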