• Title/Abstract/Keywords: speech features

음성신호기반의 감정인식의 특징 벡터 비교 (A Comparison of Effective Feature Vectors for Speech Emotion Recognition)

  • 신보라;이석필, 전기학회논문지 (The Transactions of the Korean Institute of Electrical Engineers), Vol. 67, No. 10, pp. 1364-1369, 2018
  • Speech emotion recognition (SER), which aims to classify a speaker's emotional state from speech signals, is one of the essential tasks for making human-machine interaction (HMI) more natural and realistic. Vocal expression is one of the main information channels in interpersonal communication. However, existing speech emotion recognition technology has not achieved satisfactory performance, probably because of the lack of effective emotion-related features. This paper surveys the various features used for speech emotion recognition and discusses which features, or which combinations of features, are valuable and meaningful for emotion classification. The main aim of this paper is to compare the various approaches used for feature extraction and to propose a basis for extracting useful features in order to improve SER performance.
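
To make the feature families such surveys compare concrete, below is a minimal Python sketch of extracting three commonly used emotion-related feature groups (MFCCs, F0, and energy) with the librosa library; the feature set and summary statistics are illustrative choices, not the paper's exact recipe.

```python
# Extract commonly used emotion-related features and summarize them into
# one fixed-size utterance-level vector for a downstream classifier.
import numpy as np
import librosa

def emotion_features(path):
    y, sr = librosa.load(path, sr=16000)

    # Spectral envelope: 13 MFCCs per frame.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

    # Prosody: F0 contour estimated with the pYIN algorithm (NaN when unvoiced).
    f0, voiced, _ = librosa.pyin(y, fmin=50, fmax=400, sr=sr)

    # Intensity: per-frame RMS energy.
    rms = librosa.feature.rms(y=y)[0]

    # Utterance-level statistics are a common way to pool frame-level features.
    return np.concatenate([
        mfcc.mean(axis=1), mfcc.std(axis=1),
        [np.nanmean(f0), np.nanstd(f0), np.nanmax(f0) - np.nanmin(f0)],
        [rms.mean(), rms.std()],
    ])
```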

영어 동시발화의 자동 억양궤적 추출을 통한 음향 분석 (An acoustical analysis of synchronous English speech using automatic intonation contour extraction)

  • 이서배, 말소리와 음성과학 (Phonetics and Speech Sciences), Vol. 7, No. 1, pp. 97-105, 2015
  • This research focuses on the intonational characteristics of synchronous English speech. Intonation contours were extracted from 1,848 utterances produced in two speaking modes (solo vs. synchronous) by 28 native speakers of English (12 women and 16 men). Synchronous speech is found to be slower than solo speech, and women are found to speak more slowly than men. The effect of speaking mode on speech rate is larger than that of gender, and there is no interaction between the two factors (speaking mode vs. gender) with respect to speech rate. Analysis of pitch point features shows that synchronous speech has smaller Pt (pitch point movement time), Pr (pitch point pitch range), Ps (pitch point slope), and Pd (pitch point distance) than solo speech, again with no interaction between the two factors. Analysis of sentence-level features reveals that synchronous speech has smaller Sr (sentence-level pitch range), Ss (sentence slope), MaxNr (normalized maximum pitch), and MinNr (normalized minimum pitch), but greater Min (minimum pitch) and Sd (sentence duration), than solo speech. It is also shown that the higher the Mid (median pitch), MaxNr, and MinNr in the solo speaking mode, the more they are reduced in the synchronous speaking mode. Max, Min, and Mid show greater speaker discriminability than the other features.
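
For illustration, a few sentence-level measures analogous to those named in the abstract (Sd, Max, Min, Mid, Sr, MaxNr, MinNr) can be computed from an F0 contour as follows; the normalization by median pitch is an assumption made for this sketch, and the paper's exact definitions may differ.

```python
# Compute sentence-level intonation measures from an F0 contour in Hz,
# where unvoiced frames are marked NaN and frames are 10 ms apart.
import numpy as np

def sentence_pitch_features(f0_hz, frame_s=0.01):
    f0 = f0_hz[~np.isnan(f0_hz)]           # keep voiced frames only
    max_p, min_p = f0.max(), f0.min()
    mid = np.median(f0)
    return {
        "Sd": len(f0_hz) * frame_s,        # sentence duration (s)
        "Max": max_p, "Min": min_p, "Mid": mid,
        "Sr": max_p - min_p,               # sentence-level pitch range
        "MaxNr": max_p / mid,              # max normalized by median pitch
        "MinNr": min_p / mid,              # min normalized by median pitch
    }

contour = np.array([np.nan, 180.0, 190.0, 210.0, 205.0, np.nan, 170.0])
print(sentence_pitch_features(contour))
```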

조음자질을 이용한 한국인 학습자의 영어 발화 자동 발음 평가 (Automatic pronunciation assessment of English produced by Korean learners using articulatory features)

  • 류혁수;정민화, 말소리와 음성과학 (Phonetics and Speech Sciences), Vol. 8, No. 4, pp. 103-113, 2016
  • This paper proposes articulatory features as novel predictors for the automatic pronunciation assessment of English produced by Korean learners. Based on distinctive feature theory, in which phonemes are represented as sets of articulatory/phonetic properties, we propose articulatory Goodness-Of-Pronunciation (aGOP) features for the corresponding articulatory attributes, such as nasal, sonorant, and anterior. An English speech corpus spoken by Korean learners is used in the assessment modeling. In our system, learners' speech is force-aligned and recognized using acoustic and pronunciation models derived from the WSJ corpus (native North American speech) and the CMU pronouncing dictionary, respectively. To compute the aGOP features, articulatory models are trained for the corresponding articulatory attributes. In addition to the proposed features, various features divided into four categories (RATE, SEGMENT, SILENCE, and GOP) are applied as a baseline. To enhance the assessment modeling performance and investigate the weights of the salient features, relevant features are selected using Best Subset Selection (BSS). The results show that the proposed model using aGOP features outperforms the baseline. In addition, analysis of the features selected by BSS reveals that the chosen aGOP features capture the salient variations of Korean learners of English. The results are expected to be effective for automatic pronunciation error detection as well.
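
The aGOP features extend the standard Goodness-Of-Pronunciation score (Witt & Young, 2000) from phones to articulatory attributes. Below is a minimal sketch of the standard GOP computation, assuming per-frame log-likelihoods from forced alignment are already available; it is not the paper's exact implementation.

```python
# GOP(p) = (log P(O|p) - max_q log P(O|q)) / n_frames over the frames
# aligned to phone p: the normalized log-ratio of the target phone's
# likelihood to the best competing phone's likelihood.
import numpy as np

def gop(loglik_by_phone, target):
    """loglik_by_phone: dict mapping phone -> np.ndarray of per-frame
    log-likelihoods over the frames force-aligned to the target phone."""
    target_ll = loglik_by_phone[target].sum()
    best_ll = max(ll.sum() for ll in loglik_by_phone.values())
    n_frames = len(loglik_by_phone[target])
    return (target_ll - best_ll) / n_frames   # <= 0; closer to 0 = better

# For an articulatory attribute (e.g., 'nasal'), the same ratio would be
# computed with attribute models in place of phone models.
```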

정상 음성의 목소리 특성의 정성적 분류와 음성 특징과의 상관관계 도출 (Qualitative Classification of Voice Quality of Normal Speech and Derivation of its Correlation with Speech Features)

  • 김정민;권철홍, 말소리와 음성과학 (Phonetics and Speech Sciences), Vol. 6, No. 1, pp. 71-76, 2014
  • In this paper, the voice quality of normal speech is qualitatively classified along five components: breathy, creaky, rough, nasal, and thin/thick voice. To determine whether a correlation exists between subjective and objective measures of voice, each voice is perceptually evaluated on a 1/2/3 scale by speech processing specialists and acoustically analyzed using speech analysis tools such as Praat, MDVP, and VoiceSauce. The speech parameters include features related to the speech source and the vocal tract filter. For the statistical analysis, a two-independent-samples non-parametric test is used. Experimental results show a significant correlation between the speech feature parameters and the components of voice quality.
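
A two-independent-samples non-parametric test of this kind is commonly the Mann-Whitney U test; the abstract does not name the exact test, so the sketch below, with invented numbers standing in for the paper's measurements, is only illustrative.

```python
# Test whether an acoustic feature differs between two perceptual groups.
from scipy.stats import mannwhitneyu

# e.g., a breathiness-related source feature for voices rated breathy
# vs. non-breathy on the perceptual scale (illustrative values only).
breathy     = [12.1, 10.8, 13.4, 11.9, 12.7]
non_breathy = [7.2, 8.9, 6.5, 9.1, 7.8]

stat, p = mannwhitneyu(breathy, non_breathy, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")  # small p -> rating and feature are related
```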

실험에 의한 음성·음악 분류 특징의 비교 분석 (Comparison & Analysis of Speech/Music Discrimination Features through Experiments)

  • 이경록;류시우;곽재영, 한국콘텐츠학회 2004년도 추계 종합학술대회 논문집 (Proceedings of the 2004 Fall Conference of the Korea Contents Association), pp. 308-313, 2004
  • In this paper, we compare and analyze the speech/music discrimination performance of combinations of feature parameters. Audio signals were classified into three classes (speech, music, and speech+music). Three kinds of features were used for discrimination: mel-cepstrum, energy, and zero-crossing rate. We compared and analyzed the mutual combinations of features with the best speech/music discrimination performance. The experiments confirmed that the combination of mel-cepstrum and zero-crossing rate gave the best results (speech: 95.1%, music: 61.9%, speech+music: 55.5%).
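
A minimal sketch of computing the winning feature combination (mel-cepstrum plus zero-crossing rate), assuming librosa; the summary statistics and the 3-class classifier on top are illustrative choices, not the paper's setup.

```python
# Build a fixed-size feature vector from mel-cepstrum and zero-crossing
# rate for a 3-class (speech / music / speech+music) classifier.
import numpy as np
import librosa

def speech_music_features(path):
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # mel-cepstral coefficients
    zcr = librosa.feature.zero_crossing_rate(y)[0]      # per-frame zero-crossing rate
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                           [zcr.mean(), zcr.std()]])
```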

PROSODY IN SPEECH TECHNOLOGY - National project and some of our related works -

  • Hirose Keikichi, 한국음향학회 2002년도 하계학술발표대회 논문집 (Proceedings of the 2002 Summer Conference of the Acoustical Society of Korea), Vol. 21, No. 1, pp. 15-18, 2002
  • Prosodic features of speech are known to play an important role in the transmission of linguistic information in human conversation, and their role in the transmission of para- and non-linguistic information is even greater. In spite of this importance, engineering research has focused mainly on segmental features rather than prosodic ones. With the aim of promoting research on prosody, a research project, 'Prosody and Speech Processing,' is now under way. The paper first gives a rough sketch of the project and then introduces several prosody-related research works ongoing in our laboratory, including corpus-based fundamental frequency contour generation, speech rate control for dialogue-like speech synthesis, analysis of the prosodic features of emotional speech, reply speech generation in spoken dialogue systems, and language modeling with prosodic boundaries.

청각 장애자를 위한 시각 음성 처리 시스템에 관한 연구 (A study on the Visible Speech Processing System for the Hearing Impaired)

  • 김원기;김남현, 대한의용생체공학회 의공학회지 (Journal of Biomedical Engineering Research), Vol. 11, No. 1, pp. 75-82, 1990
  • The purpose of this study is to support speech training for the hearing impaired with a visible speech processing system. In brief, the system converts features of the speech signal into graphics on a monitor so that the features of a hearing-impaired speaker's speech can be adjusted toward those of normal speech. The features used in this system are formants and pitch, extracted with digital signal processing techniques such as linear prediction and the AMDF (Average Magnitude Difference Function). To train the abnormal speech of the hearing impaired effectively, feature displays that are easy to read visually are being studied.
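
The AMDF mentioned in the abstract is simple enough to sketch directly: AMDF(τ) = mean |x[n] − x[n+τ]| dips at lags equal to the pitch period, so the deepest valley gives F0. The parameter ranges below are illustrative.

```python
# Estimate pitch by finding the lag that minimizes the Average Magnitude
# Difference Function over a plausible range of pitch periods.
import numpy as np

def amdf_pitch(frame, sr, f_lo=60, f_hi=400):
    lags = np.arange(int(sr / f_hi), int(sr / f_lo))
    amdf = np.array([np.mean(np.abs(frame[:-tau] - frame[tau:])) for tau in lags])
    return sr / lags[np.argmin(amdf)]      # lag of the deepest valley -> F0 in Hz

sr = 16000
t = np.arange(1024) / sr
frame = np.sin(2 * np.pi * 120 * t)        # synthetic 120 Hz "voiced" frame
print(round(amdf_pitch(frame, sr)))        # ~120
```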

Annotation of a Non-native English Speech Database by Korean Speakers

  • Kim, Jong-Mi, 음성과학 (Speech Sciences), Vol. 9, No. 1, pp. 111-135, 2002
  • An annotation model for a non-native speech database has been devised, with English as the target language and Korean as the native language. The proposed annotation model features overt transcription of predictable linguistic information in native speech via dictionary entries, together with several predefined types of error specification found in native-language transfer. The proposed model is, in that sense, different from previously explored annotation models in the literature, most of which are based on native speech. The validity of the newly proposed model is shown by its consistent annotation of 1) salient linguistic features of English, 2) contrastive linguistic features of English and Korean, 3) actual errors reported in the literature, and 4) the data newly collected in this study. The annotation method adopts two widely accepted conventions: the Speech Assessment Methods Phonetic Alphabet (SAMPA), employed exclusively for segmental transcription, and the TOnes and Break Indices (ToBI) system, employed for prosodic transcription. The annotation of non-native speech is used to assess the speaking ability of learners of English as a Foreign Language (EFL).
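
A hypothetical record showing the division of labor the abstract describes (SAMPA for segments, ToBI for prosody, plus a predefined transfer-error tag); all field names and the example error are invented for illustration and are not taken from the database.

```python
# One illustrative annotation entry for a non-native production of "speech".
annotation = {
    "word": "speech",
    "target_sampa": "s p i: tS",      # dictionary (target-language) form
    "realized_sampa": "s M p i: tS",  # e.g., epenthetic vowel breaking /sp/
    "error_type": "epenthesis",       # predefined transfer-error category
    "tobi": {"tone": "H*", "break_index": 4},
}
```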

영어 강세 교정을 위한 주변 음 특징 차를 고려한 강조점 검출 (Prominence Detection Using Feature Differences of Neighboring Syllables for English Speech Clinics)

  • 심성건;유기선;성원용, 말소리와 음성과학 (Phonetics and Speech Sciences), Vol. 1, No. 2, pp. 15-22, 2009
  • Prominence of speech, often called 'accent,' greatly affects the fluency of spoken American English. In this paper, we present an accurate prominence detection method that can be used in computer-aided language learning (CALL) systems. We employ pitch movement, overall syllable energy, 300-2200 Hz band energy, syllable duration, and spectral and temporal correlation as features to model the prominence of speech. After the features for vowel syllables are extracted, prominent syllables are classified by a support vector machine (SVM). To further improve accuracy, the differences between the features of neighboring syllables are added as additional features, and a speech recognizer is applied to extract more precise syllable boundaries. The performance of our prominence detector was measured on the Intonational Variation in English (IViE) speech corpus, where we obtained 84.9% accuracy, about 10% higher than previous research.
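
Below is a sketch of the classification setup as described: per-syllable prosodic features augmented with differences from the neighboring syllables, then classified with an SVM. The data is a random placeholder and the feature layout is an assumption, not the paper's.

```python
# Augment syllable features with neighbor differences and train an SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, d = 200, 6                      # syllables x base features (pitch movement,
X = rng.normal(size=(n, d))        # energies, duration, correlations, ...)
y = rng.integers(0, 2, size=n)     # 1 = prominent syllable (placeholder labels)

# Difference to the left and right neighbor, mirroring the paper's
# neighboring-syllable feature differences (zero-padded at the edges).
left = np.vstack([np.zeros(d), X[:-1]])
right = np.vstack([X[1:], np.zeros(d)])
X_aug = np.hstack([X, X - left, X - right])

clf = SVC(kernel="rbf").fit(X_aug, y)
print("train accuracy:", clf.score(X_aug, y))
```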

말지각의 기초표상: 음소 또는 변별자질 (The Primitive Representation in Speech Perception: Phoneme or Distinctive Features)

  • 배문정, 말소리와 음성과학 (Phonetics and Speech Sciences), Vol. 5, No. 4, pp. 157-169, 2013
  • Using a target detection task, this study compared the processing automaticity of phonemes and distinctive features in spoken syllable stimuli to determine the primitive representation in speech perception: phoneme or distinctive feature. For this, we adapted for auditory stimuli the visual search task (Treisman et al., 1992) developed to investigate the processing of visual features (e.g., color, shape, or their conjunction). In our task, distinctive features (e.g., aspiration or coronal) corresponded to primitive visual features (e.g., color and shape), and phonemes (e.g., /tʰ/) to conjunctive visual features (e.g., colored shapes). Automaticity was measured by the set-size effect, i.e., the increase in reaction time as the number of distractors increased. Three experiments were conducted, comparing phonemes with the laryngeal features (Experiment 1), the manner features (Experiment 2), and the place features (Experiment 3). The results showed that distinctive features are consistently processed faster and more automatically than phonemes. There were also differences in processing automaticity among the classes of distinctive features: the laryngeal features are the most automatic, the manner features moderately automatic, and the place features the least automatic. These results are consistent with previous studies (Bae et al., 2002; Bae, 2010) showing a perceptual hierarchy of distinctive features.
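
The set-size effect used here is simply the slope of reaction time over the number of distractors, with a flat slope indicating automatic, pop-out-like processing; a minimal sketch with invented reaction times:

```python
# Estimate the set-size slope (ms per added distractor) for two conditions.
import numpy as np

set_sizes = np.array([2, 4, 8, 16])            # distractors per trial
rt_feature = np.array([420, 425, 431, 438])    # ms, e.g., feature target
rt_phoneme = np.array([450, 492, 570, 733])    # ms, e.g., phoneme target

for name, rt in [("feature", rt_feature), ("phoneme", rt_phoneme)]:
    slope, _ = np.polyfit(set_sizes, rt, 1)    # linear fit: RT vs. set size
    print(f"{name}: {slope:.1f} ms/item")      # smaller slope -> more automatic
```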