• Title / Abstract / Keywords: speech analysis

1,592 results found (processing time: 0.027 s)

운율경계에 위치한 어두 모음의 성문 특성: 음향적 상관성을 중심으로 (Glottal Characteristics of Word-initial Vowels in the Prosodic Boundary: Acoustic Correlates)

  • 손형숙
    • 말소리와 음성과학
    • /
    • Vol. 2, No. 3
    • /
    • pp.47-63
    • /
    • 2010
  • This study provides a description of the glottal characteristics of the word-initial low vowels /a, æ/ in terms of a set of acoustic parameters and discusses glottal configuration as their acoustic correlates. Furthermore, it examines the effect of prosodic boundary on the glottal properties of the vowels, seeking an account of the possible role of prosodic structure based on prosodic theory. Acoustic parameters reported to indicate glottal characteristics were obtained from the measurements made directly from the speech spectrum on recordings of Korean and English collected from 45 speakers. They consist of two separate groups of native Korean and native English speakers, each including both male and female speakers. Based on the three acoustic parameters of open quotient (OQ), first-formant bandwidth (B1), and spectral tilt (ST), comparisons were made between the speech of males and females, between the speech of native Korean and native English speakers, and between Korean and English produced by native Korean speakers. Acoustic analysis of the experimental data indicates that some or all glottal parameters play a crucial role in differentiating the speech groups, despite substantial interspeaker variations. Statistical analysis of the Korean data indicates prosodic strengthening with respect to the acoustic parameters B1 and OQ, suggesting acoustic enhancement in terms of the degree of glottal abduction and the glottal closure during a vibratory cycle.

  • PDF
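Spectral-tilt measures of the kind used above are typically read off harmonic amplitudes in the magnitude spectrum. As a generic illustration (not the paper's procedure), the H1-H2 difference, a common acoustic correlate of the glottal open quotient, can be measured like this; the 20 Hz search band and the synthetic two-harmonic signal are assumptions for the demo:

```python
import numpy as np

def h1_h2(signal, fs, f0):
    """H1-H2 in dB: amplitude of the first harmonic minus the second,
    read from the magnitude spectrum; a common correlate of open quotient."""
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)

    def harmonic_db(f):
        band = (freqs > f - 20) & (freqs < f + 20)  # search near the harmonic
        return 20 * np.log10(spec[band].max())

    return harmonic_db(f0) - harmonic_db(2 * f0)

# synthetic vowel-like signal: H1 has twice the amplitude of H2
fs, f0 = 16000, 100
t = np.arange(0, 0.5, 1 / fs)
x = np.sin(2 * np.pi * f0 * t) + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)
tilt = h1_h2(x, fs, f0)   # an amplitude ratio of 2 is about 6 dB
```

In practice the harmonic amplitudes are usually corrected for formant influence before interpreting them as glottal measures.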

Emotion Recognition Method Based on Multimodal Sensor Fusion Algorithm

  • Moon, Byung-Hyun;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • Vol. 8, No. 2
    • /
    • pp.105-110
    • /
    • 2008
  • Humans recognize emotion by fusing information from speech, facial expression, gesture, and bio-signals, and computers need technologies that do the same with combined information. In this paper, we recognize five emotions (normal, happiness, anger, surprise, sadness) from the speech signal and the facial image, and propose a multimodal method that fuses the two recognition results. Emotion recognition on both the speech signal and the facial image uses the Principal Component Analysis (PCA) method, and the multimodal step fuses the two results by applying a fuzzy membership function. In our experiments, the average emotion recognition rate was 63% using speech signals and 53.4% using facial images; that is, the speech signal offers a better recognition rate than the facial image. To raise the recognition rate further, we propose a decision-fusion method using an S-type membership function. With the proposed fusion method, the average recognition rate is 70.4%, showing that decision fusion offers a better emotion recognition rate than either the facial image or the speech signal alone.
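The decision-fusion step described can be sketched as follows. This is a minimal illustration of passing two modalities' per-class scores through an S-type (Zadeh) membership function and averaging; the thresholds a=0.2, b=0.8 and all scores are invented for the example, not taken from the paper:

```python
def s_membership(x, a, b):
    """Zadeh-style S-type membership: 0 below a, 1 above b, smooth in between."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    m = (a + b) / 2.0
    if x <= m:
        return 2.0 * ((x - a) / (b - a)) ** 2
    return 1.0 - 2.0 * ((x - b) / (b - a)) ** 2

def fuse(speech_scores, face_scores, a=0.2, b=0.8):
    """Decision-level fusion: map each modality's per-class score through
    the S-function, average the memberships, and pick the argmax class."""
    fused = {c: (s_membership(speech_scores[c], a, b)
                 + s_membership(face_scores[c], a, b)) / 2.0
             for c in speech_scores}
    return max(fused, key=fused.get)

# invented per-class scores for two modalities (hypothetical classifier outputs)
speech = {"happiness": 0.7, "anger": 0.4, "sadness": 0.2}
face = {"happiness": 0.5, "anger": 0.6, "sadness": 0.3}
best = fuse(speech, face)
```

Because the S-function is steep around its midpoint, it sharpens confident scores and suppresses borderline ones before the modalities are combined, which is the usual motivation for fuzzy decision fusion.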

위너필터에 의한 음성 중의 잡음제거 알고리즘 (Noise Reduction Algorithm in Speech by Wiener Filter)

  • 최재승
    • 한국전자통신학회논문지
    • /
    • Vol. 8, No. 9
    • /
    • pp.1293-1298
    • /
    • 2013
  • This paper proposes a noise reduction algorithm using a Wiener filter to remove noise components from a noise-corrupted speech signal, with the aim of improving the speech signal. The proposed algorithm first removes the white-noise spectrum in each frame of the corrupted signal, based on a noise estimation and removal method. The algorithm then enhances the speech signal using a Wiener filter based on linear predictive analysis. The experiments report results of the algorithm using speech and noise data from a Japanese male speaker. The effectiveness of the algorithm for speech corrupted by white noise is confirmed using a spectral distortion measure. The experiments show a maximum improvement of 4.94 dB in output spectral distortion over the conventional Wiener filter for white noise.
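A frame-wise spectral Wiener filter of the general kind described can be sketched as follows. The frame size, 50% overlap, and the oracle noise-PSD estimate are assumptions for the demo; the paper's LPC-based variant differs:

```python
import numpy as np

def wiener_denoise(noisy, noise_psd, frame=256):
    """Frame-wise Wiener filtering: gain G = max(Py - Pn, 0) / Py on each
    frame's spectrum, with overlap-add reconstruction (noisy phase kept)."""
    out = np.zeros(len(noisy))
    win = np.hanning(frame)
    hop = frame // 2
    for start in range(0, len(noisy) - frame + 1, hop):
        spec = np.fft.rfft(noisy[start:start + frame] * win)
        py = np.abs(spec) ** 2
        gain = np.maximum(py - noise_psd, 0.0) / np.maximum(py, 1e-12)
        out[start:start + frame] += np.fft.irfft(gain * spec, n=frame)
    return out

rng = np.random.default_rng(0)
fs = 8000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 440 * t)           # stand-in for voiced speech
noise = 0.3 * rng.standard_normal(fs)
# oracle noise PSD from a noise-only frame; a real system must estimate this
noise_psd = np.abs(np.fft.rfft(noise[:256] * np.hanning(256))) ** 2
denoised = wiener_denoise(clean + noise, noise_psd)
```

The gain formula keeps bins where signal power dominates the noise estimate and attenuates the rest, which is why the residual error (away from the un-overlapped edges) drops below the input noise power.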

독립성분분석을 이용한 DSP 기반의 화자 독립 음성 인식 시스템의 구현 (Implementation of Speaker Independent Speech Recognition System Using Independent Component Analysis based on DSP)

  • 김창근;박진영;박정원;이광석;허강인
    • 한국정보통신학회논문지
    • /
    • Vol. 8, No. 2
    • /
    • pp.359-364
    • /
    • 2004
  • In this paper, we implement a real-time speaker-independent speech recognition system robust to noisy environments using a general-purpose digital signal processor. The implemented system uses TI's general-purpose floating-point DSP, the TMS320C32, together with a speech CODEC for real-time speech input and an extended external interface for outputting recognition results. Instead of the commonly used MFCC (Mel Frequency Cepstral Coefficient), the real-time recognizer uses feature parameters obtained by transforming the MFCC feature space through independent component analysis, making it robust to external noise. Recognition experiments in noisy environments with the two kinds of feature parameters confirm that the ICA-based features outperform MFCC.
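The ICA transform at the heart of this approach is commonly computed with the FastICA fixed-point algorithm. As a self-contained numpy sketch (not the paper's DSP implementation, and demonstrated on a toy source-separation problem rather than MFCC features):

```python
import numpy as np

def fastica(X, n_comp, iters=200, seed=0):
    """Minimal FastICA (tanh nonlinearity, deflation): finds an unmixing
    matrix W so the rows of W @ X_centered are maximally non-Gaussian."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=1, keepdims=True)
    # whiten via eigendecomposition of the covariance matrix
    d, E = np.linalg.eigh(np.cov(X))
    K = E @ np.diag(d ** -0.5) @ E.T
    Z = K @ X
    W = np.zeros((n_comp, Z.shape[0]))
    for i in range(n_comp):
        w = rng.standard_normal(Z.shape[0])
        for _ in range(iters):
            g = np.tanh(Z.T @ w)
            w_new = Z @ g / Z.shape[1] - (1 - g ** 2).mean() * w
            w_new -= W[:i].T @ (W[:i] @ w_new)  # deflate against found rows
            w = w_new / np.linalg.norm(w_new)
        W[i] = w
    return W @ K  # unmixing matrix for the centered input

# demo: unmix two linearly mixed non-Gaussian (Laplacian) sources
rng = np.random.default_rng(1)
S = rng.laplace(size=(2, 2000))
A = np.array([[1.0, 0.6], [0.4, 1.0]])
W = fastica(A @ S, 2)
```

Applied to MFCCs, the same idea rotates the feature space so the resulting components are statistically independent, which is the property the paper exploits for noise robustness.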

음향 파라미터에 의한 정서적 음성의 음질 분석 (Analysis of the Voice Quality in Emotional Speech Using Acoustical Parameters)

  • 조철우;리타오
    • 대한음성학회지:말소리
    • /
    • Vol. 55
    • /
    • pp.119-130
    • /
    • 2005
  • The aim of this paper is to investigate some acoustical characteristics of voice quality features in an emotional speech database. Six different parameters are measured and compared across six emotions (normal, happiness, sadness, fear, anger, boredom) and six speakers, and both inter-speaker and intra-speaker variability are measured. Some intra-speaker consistency in how the parameters change across emotions is observed, but no inter-speaker consistency is observed.

  • PDF
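The abstract does not name its six parameters, but voice-quality studies of this kind commonly include cycle-to-cycle perturbation measures such as jitter and shimmer. A generic illustration, with the period and amplitude values invented for the example:

```python
import numpy as np

def jitter_shimmer(periods, amps):
    """Local jitter (%) and shimmer (%): mean absolute difference between
    consecutive pitch periods / peak amplitudes, normalized by the mean."""
    periods = np.asarray(periods, float)
    amps = np.asarray(amps, float)
    jitter = np.abs(np.diff(periods)).mean() / periods.mean() * 100.0
    shimmer = np.abs(np.diff(amps)).mean() / amps.mean() * 100.0
    return jitter, shimmer

# hypothetical per-cycle measurements: periods in ms, peak amplitudes
j, sh = jitter_shimmer([10.0, 10.2, 9.9, 10.1], [1.0, 0.95, 1.05, 1.0])
```

Elevated jitter and shimmer are the sort of per-speaker quantities whose changes across emotions such studies compare.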

청각 장애인용 통합형 발음 훈련 기기의 개발 (Development of Integrated Speech Training Aids for Hearing Impaired)

  • 박상희;김동준
    • 대한의용생체공학회:의공학회지
    • /
    • Vol. 13, No. 4
    • /
    • pp.275-284
    • /
    • 1992
  • In this study, a speech training aid that can display the vocal tract shape and other speech parameters together in real time in a single system is implemented, and a self-training program for the system is developed. To estimate the vocal tract shape, the speech production process is assumed to follow an AR model. Through LPC analysis, the vocal tract shape, intensity, and log spectrum are calculated, and the fundamental frequency and nasality are measured using vibration sensors.

  • PDF
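The AR-model pipeline described (LPC analysis followed by a vocal tract shape estimate) rests on the Levinson-Durbin recursion, whose reflection (PARCOR) coefficients map onto a lossless-tube area function. A compact sketch; the AR(2) demo signal and the tube sign convention are assumptions:

```python
import numpy as np

def levinson(r, order):
    """Levinson-Durbin recursion on autocorrelations r[0..order]; returns
    prediction coefficients a (with a[0] = 1) and PARCOR coefficients k."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    e = r[0]
    ks = []
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[1:i][::-1])
        k = -acc / e
        a[1:i] = a[1:i] + k * a[1:i][::-1]
        a[i] = k
        e *= 1.0 - k * k
        ks.append(k)
    return a, ks

def tube_areas(ks, a0=1.0):
    """Lossless-tube cross-sections from PARCOR coefficients
    (one common sign convention; direction and scale are model-dependent)."""
    areas = [a0]
    for k in ks:
        areas.append(areas[-1] * (1.0 - k) / (1.0 + k))
    return areas

# demo: recover a known AR(2) model x[n] = 1.5 x[n-1] - 0.7 x[n-2] + e[n]
rng = np.random.default_rng(2)
n = 20000
x = np.zeros(n)
for i in range(2, n):
    x[i] = 1.5 * x[i - 1] - 0.7 * x[i - 2] + rng.standard_normal()
r = np.array([x[: n - lag] @ x[lag:] for lag in range(3)]) / n
a, ks = levinson(r, 2)   # a should be close to [1, -1.5, 0.7]
```

Displaying `tube_areas(ks)` per frame is, in essence, the real-time vocal tract display the system provides.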

음성신호의 실시간 처리기법에 관한 연구 (A Study on the Real Time Processing Technique of speech Signal)

  • 이택수;안창;김성락;이상범
    • 대한전기학회:학술대회논문집
    • /
    • 대한전기학회 1987년도 전기.전자공학 학술대회 논문집(II)
    • /
    • pp.1094-1096
    • /
    • 1987
  • Zero-crossing analysis techniques have been applied to speech recognition. The zero-crossing rate, level-crossing rate, and differentiated zero-crossing rate in the time domain were used in analyzing speech signals. Speech samples could be stored in a memory buffer in real time.

  • PDF
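The zero-crossing rate at the core of this approach is simple to state: the fraction of adjacent sample pairs that change sign. A minimal sketch, with the two test tones as assumptions standing in for voiced and fricative-like speech:

```python
import numpy as np

def zcr(frame):
    """Zero-crossing rate: fraction of adjacent sample pairs changing sign."""
    s = np.sign(frame)
    s[s == 0] = 1           # treat exact zeros as positive
    return np.mean(s[:-1] != s[1:])

fs = 8000
t = np.arange(fs) / fs
low = np.sin(2 * np.pi * 100 * t)    # voiced-like: few crossings
high = np.sin(2 * np.pi * 2000 * t)  # fricative-like: many crossings
```

A sine at f Hz crosses zero 2f times per second, so `zcr(high)` sits near 0.5 while `zcr(low)` stays near 0.025; that separation is what makes the measure useful for cheap real-time classification.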

삼각필터를 이용한 Spectral 포락변경에 관한 연구 (A Study on Spectral Envelope Modification using Triangular Filter)

  • 최성은;김동현;홍광석
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2003년도 하계종합학술대회 논문집 Ⅳ
    • /
    • pp.2415-2418
    • /
    • 2003
  • In this paper, we present a new filter for adjusting formant information. The spectral envelope in speech analysis carries information about the characteristics of the speech, and formant information determines speech timbre; thus, if the formant positions are adjusted, the speech timbre changes accordingly. The presented filter, which is composed of triangular filters, adjusts these formants. Using this filter, we could move the formant frequency to a target position.

  • PDF
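The triangular-filter idea can be sketched as a weighting applied to a magnitude spectrum; the multiplicative-gain form and all parameters here are assumptions for illustration, not the paper's exact scheme:

```python
import numpy as np

def triangular_filter(freqs, center, half_width):
    """Triangular weight: 1 at `center`, falling linearly to 0 at +/- half_width."""
    return np.maximum(0.0, 1.0 - np.abs(freqs - center) / half_width)

def emphasize_band(spec, freqs, center, half_width, gain=2.0):
    """Scale the magnitude spectrum by 1 + (gain-1)*triangle: boosts energy
    around `center`, nudging the envelope peak toward a target position."""
    return spec * (1.0 + (gain - 1.0) * triangular_filter(freqs, center, half_width))

freqs = np.linspace(0, 4000, 401)   # 10 Hz grid, assumed analysis band
spec = np.ones_like(freqs)          # flat envelope for the demo
boosted = emphasize_band(spec, freqs, center=1500, half_width=300)
```

After the boost, the envelope peak lies at the target frequency, which is the "formant at target position" effect the abstract describes.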

인지 선형 예측 분석에 의한 음성 인식 방법 (The Speech Recognition Method by Perceptual Linear Predictive Analysis)

  • 김현철
    • 한국음향학회:학술대회논문집
    • /
    • 한국음향학회 1995년도 제12회 음성통신 및 신호처리 워크샵 논문집 (SCAS 12권 1호)
    • /
    • pp.184-187
    • /
    • 1995
  • This paper proposes an algorithm for machine recognition of phonemes in continuous speech. The proposed algorithm is a static-strategy neural network. At the neuron training stage, the algorithm uses features such as PARCOR coefficients and auditory-like perceptual linear prediction (PLP). These features are extracted from speech samples selected by a sliding 25.6 msec window with a sliding gap of 3 msec, then interleaved and summed into 7 sets of parameters covering 171 msec of speech for use as neural network inputs. Performance is compared when either PARCOR or auditory-like PLP is included in the feature set.

  • PDF
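The 25.6 msec window / 3 msec gap scheme described above amounts to a standard sliding-window framing step; as a sketch, with the 10 kHz sampling rate in the demo an assumption:

```python
import numpy as np

def frame_signal(x, fs, win_ms=25.6, hop_ms=3.0):
    """Slice a signal into overlapping analysis frames: a 25.6 ms window
    advanced in 3 ms steps, as in the feature-extraction scheme described."""
    win = int(round(win_ms * fs / 1000.0))
    hop = int(round(hop_ms * fs / 1000.0))
    n = 1 + (len(x) - win) // hop
    return np.stack([x[i * hop: i * hop + win] for i in range(n)])

fs = 10000                       # assumed sampling rate for the demo
x = np.sin(2 * np.pi * 120 * np.arange(fs) / fs)
frames = frame_signal(x, fs)     # each row is one 256-sample analysis frame
```

At 10 kHz the window is exactly 256 samples, a power of two, which is presumably why the slightly unusual 25.6 msec figure was chosen.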

Information Dimensions of Speech Phonemes

  • Lee, Chang-Young
    • 음성과학
    • /
    • Vol. 3
    • /
    • pp.148-155
    • /
    • 1998
  • As an application of dimensional analysis in the theory of chaos and fractals, we studied and estimated the information dimension of various phonemes. By constructing phase-space vectors from the time-series speech signals, we calculated the natural measure and the Shannon information from the trajectories. The information dimension was then obtained as the slope of the plot of information versus space-division order. The information dimension proved quite sensitive to the waveform and the time delay. Averaging over frames for various phonemes, we found that the information dimension ranges from 1.2 to 1.4.

  • PDF
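The estimation pipeline described (delay embedding, natural measure, information-versus-resolution slope) can be sketched as follows. The grid orders and the white-noise demo are assumptions; for an i.i.d. signal the embedded points fill the plane, so the slope comes out near 2, whereas phoneme trajectories concentrate on lower-dimensional sets:

```python
import numpy as np

def information_dimension(series, delay=1, orders=range(2, 6)):
    """Estimate the information dimension D1 of a 2-D delay embedding:
    histogram the trajectory on grids of 2**order bins per axis and fit
    the slope of Shannon information I (bits) against log2(1/eps)."""
    n = len(series) - delay
    pts = np.stack([series[:n], series[delay:delay + n]], axis=1)
    pts = (pts - pts.min(axis=0)) / (np.ptp(pts, axis=0) + 1e-12)
    infos = []
    for order in orders:
        bins = 2 ** order
        idx = np.minimum((pts * bins).astype(int), bins - 1)
        # natural measure: visit frequency of each occupied grid cell
        _, counts = np.unique(idx[:, 0] * bins + idx[:, 1], return_counts=True)
        p = counts / counts.sum()
        infos.append(-(p * np.log2(p)).sum())
    return np.polyfit(list(orders), infos, 1)[0]  # bits per halving of eps

rng = np.random.default_rng(3)
d1 = information_dimension(rng.uniform(size=5000))  # near 2 for white noise
```

The paper's 1.2-1.4 range for phonemes sits between a curve (dimension 1) and a plane-filling set (dimension 2), consistent with nearly periodic but jittered trajectories.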