• Title/Abstract/Keywords: Sound spectrogram

69 search results

Sound event detection based on multi-channel multi-scale neural networks for home monitoring system used by the hard-of-hearing

  • 이기용;김형국
    • The Journal of the Acoustical Society of Korea, Vol. 39, No. 6, pp. 600-605, 2020
  • In this paper, we propose a sound event detection method based on multi-channel multi-scale neural networks for sound-sensing home monitoring for the hard-of-hearing. The proposed system selects two channels with high signal quality from the wireless microphone sensors placed around the home, and improves detection performance by feeding a classifier based on a bidirectional gated recurrent neural network with features extracted from those signals: the time delay of arrival, the pitch range, and features obtained by applying a multi-scale convolutional neural network to the log-mel spectrogram. The detected sound events are converted into text, together with the sensor location of the selected channel, and delivered to the hard-of-hearing user. Experimental results show that the proposed sound event detection method outperforms existing methods and can effectively deliver sound information to the hard-of-hearing.
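
The core pipeline described in this abstract, log-mel spectrogram features passed through parallel convolutions at several temporal scales and then a bidirectional GRU classifier, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the kernel sizes, layer widths, and number of event classes are assumptions, and the TDOA and pitch-range features of the full system are omitted.

```python
import numpy as np
import librosa
import torch
import torch.nn as nn

def logmel(wave, sr=16000, n_mels=64):
    """Log-mel spectrogram, returned as (frames, n_mels)."""
    mel = librosa.feature.melspectrogram(y=wave, sr=sr, n_fft=1024,
                                         hop_length=512, n_mels=n_mels)
    return librosa.power_to_db(mel).T.astype(np.float32)

class MultiScaleCRNN(nn.Module):
    """Parallel conv branches with different kernel sizes, then a BiGRU."""
    def __init__(self, n_mels=64, n_events=5):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv1d(n_mels, 32, kernel_size=k, padding=k // 2)
            for k in (3, 5, 7)           # three temporal scales (assumed)
        ])
        self.gru = nn.GRU(96, 64, batch_first=True, bidirectional=True)
        self.out = nn.Linear(128, n_events)

    def forward(self, x):                # x: (batch, frames, n_mels)
        x = x.transpose(1, 2)            # -> (batch, n_mels, frames)
        x = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
        x = x.transpose(1, 2)            # -> (batch, frames, 96)
        h, _ = self.gru(x)               # bidirectional: 2 x 64 = 128
        return self.out(h)               # per-frame event logits
```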

A Study on Speaker Identification Parameter Using Difference and Correlation Coefficient of Digit Sound Spectrum

  • 이후동;강선미;장문수;양병곤
    • Speech Sciences, Vol. 11, No. 3, pp. 131-142, 2004
  • Speaker identification systems basically function by comparing the spectral energy of an individual production model with that of an input signal. This study aimed to develop a new speaker identification system from two parameters derived from the spectral energy of numeric sounds: the difference sum and the correlation coefficient. A narrow-band spectrogram yielded more stable spectral energy across time than a wide-band one. In this paper, we collected empirical data from four male speakers and tested the speaker identification system. The subjects produced 18 combinations of three-digit numeric sounds ten times each. Five productions of each three-digit number were statistically averaged to make a model for each speaker. Then, the remaining five productions were tested on the system. Results showed that when the threshold for the absolute difference sum was set to 1200, none of the speakers could pass the system, while everybody could pass when it was set to 2800. The minimum correlation coefficient that allowed everyone to pass was 0.82, while a coefficient of 0.95 rejected everyone. Thus, both threshold levels can be adjusted to the needs of the speaker identification system, which is desirable for further study.
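
The two matching parameters named in the abstract reduce to a few lines of NumPy. A minimal sketch, assuming both spectra are same-length 1-D vectors of spectral energies sampled from a narrow-band spectrogram; the threshold values are the ones reported in the study:

```python
import numpy as np

def match_scores(model_spec, input_spec):
    """Absolute spectral difference sum and Pearson correlation."""
    diff_sum = np.sum(np.abs(model_spec - input_spec))
    corr = np.corrcoef(model_spec, input_spec)[0, 1]
    return diff_sum, corr

def accept(model_spec, input_spec, max_diff=2800.0, min_corr=0.82):
    # Per the abstract: a diff-sum threshold of 2800 and r = 0.82
    # admitted all four speakers; 1200 and 0.95 rejected all.
    d, r = match_scores(model_spec, input_spec)
    return d <= max_diff and r >= min_corr
```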


Development of Speech-Language Therapy Program kMIT for Aphasic Patients Following Brain Injury and Its Clinical Effects

  • 김현기;김연희;고명환;박종호;김선숙
    • Speech Sciences, Vol. 9, No. 4, pp. 237-252, 2002
  • MIT has been applied to nonfluent aphasic patients on the basis of the lateralization of the brain hemispheres. However, its application to other languages raises questions because of prosodic and rhythmic differences. The purpose of this study was to develop a Korean Melodic Intonation Therapy program running on a personal computer and to test its clinical effects on nonfluent aphasic patients. The algorithm comprised the analog voice signal, PCM, the AMDF, a short-time autocorrelation function, and center clipping. The main menu contains pitch, waveform, sound intensity, and speech files in a window. Aphasic patients' intonation patterns are overlaid on selected kMIT patterns. Three aphasic patients, with or without kMIT training, participated in this study. Four affirmative sentences and two interrogative sentences were uttered on CSL under the stimulus of a speech therapist (ST). VOT, VD, Hold, and TD were measured on the spectrogram. In addition, articulation disorders and intonation patterns were evaluated objectively on the spectrogram. The results indicated that the nonfluent aphasic patients in the kMIT training group showed clinical improvements in speech intelligibility based on VOT and TD values, articulation evaluation, and prosodic pattern changes.
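
Of the signal-processing blocks listed above, the AMDF (Average Magnitude Difference Function) is the heart of the pitch tracker: it dips toward zero at lags matching the pitch period. A minimal NumPy sketch for a single frame; the sample rate and search range are assumptions, not values from the paper:

```python
import numpy as np

def amdf_pitch(frame, sr=16000, f_lo=75, f_hi=400):
    """Estimate F0 of one speech frame with the AMDF.

    The frame should span at least sr // f_lo samples
    (about a 40 ms window at 16 kHz with f_lo = 75 Hz).
    """
    lag_min = sr // f_hi                       # shortest period considered
    lag_max = min(sr // f_lo, len(frame) - 1)  # longest period considered
    amdf = np.array([
        np.mean(np.abs(frame[lag:] - frame[:-lag]))
        for lag in range(lag_min, lag_max + 1)
    ])
    best_lag = lag_min + int(np.argmin(amdf))  # deepest AMDF valley
    return sr / best_lag                       # F0 in Hz
```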


An Experimental Study of Korean Dialectal Speech

  • 김현기;최영숙;김덕수
    • Speech Sciences, Vol. 13, No. 3, pp. 49-65, 2006
  • Recently, several theories of digital speech signal processing have drastically expanded the communication boundary between human beings and machines. The aim of this study was to collect dialectal speech in Korea on a large scale and to establish a digital speech database supporting further research on Korean dialects and the creation of value-added networks. 528 informants across the country participated in this study. The acoustic characteristics of vowels and consonants were analyzed with the power spectrum and spectrogram of CSL. Test words were presented on picture cards and letter cards containing each vowel and each consonant in word-initial position. Formants were plotted on a vowel chart, and the transitions of diphthongs were compared across dialects. Spectral times, VOT, VD, and TD were measured on the spectrogram for stop consonants, and fricative frequency, intensity, and lateral formants (LF1, LF2, LF3) for fricative consonants. Nasal formants (NF1, NF2, NF3) were analyzed for the different nasalities of nasal consonants. The acoustic characteristics of the dialectal speech showed that young-generation speakers did not distinguish close-mid /e/ from open-mid /ɛ/. The diphthongs /we/ and /wj/ were realized as simple vowels or as diphthongs depending on the dialect. The sibilant /s/ showed aspiration preceding the fricative noise. The lateral /l/ was realized as the variant /r/ in Kyungsang dialectal speech. The duration of nasal consonants was longest in Chungchong dialectal speech.


The Edge Computing System for the Detection of Water Usage Activities with Sound Classification

  • 현승호;지영준
    • Journal of Biomedical Engineering Research, Vol. 44, No. 2, pp. 147-156, 2023
  • Efforts have been made to employ smart home sensors to monitor the indoor activities of elderly single residents and assess the feasibility of a safe and healthy lifestyle. However, the bathroom remains a blind spot. In this study, we developed and evaluated a new edge computing device that can automatically detect water usage activities in the bathroom and record the activity log on a cloud server. Three kinds of sounds generated during water usage (flushing, showering, and washing at the wash basin) were recorded and cut into 1-second scenes. These sound clips were then converted into two-dimensional images using Mel spectrograms. Sound data augmentation techniques were adopted to obtain a better learning effect from a small data set. These techniques, some applied in the time domain and others in the frequency domain, increased the training set 30-fold. A deep learning model called a CRNN, combining a Convolutional Neural Network and a Recurrent Neural Network, was employed. The edge device was implemented on a Raspberry Pi 4 equipped with a condenser microphone and amplifier to run the pre-trained model in real time. The detected activities were recorded as text-based activity logs on a Firebase server. Performance was evaluated in two bathrooms for the three water usage activities, resulting in accuracies of 96.1% and 88.2% and F1 scores of 96.1% and 87.8%, respectively. Most of the classification errors were observed in the water sound from washing. In conclusion, this system demonstrates the potential to record the activities of elderly single residents as a long-term lifelog on a cloud server.
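
The abstract mentions both time-domain and frequency-domain augmentation without listing the exact transforms. A hedged sketch of two representative techniques (random gain/stretch/pitch-shift on the waveform, and SpecAugment-style frequency masking on the Mel spectrogram); the parameter ranges are assumptions, not the paper's values:

```python
import numpy as np
import librosa

def augment_wave(y, sr):
    """Time-domain augmentation: random gain, time stretch, pitch shift."""
    y = y * np.random.uniform(0.8, 1.2)                        # gain
    y = librosa.effects.time_stretch(y, rate=np.random.uniform(0.9, 1.1))
    steps = np.random.randint(-2, 3)                           # semitones
    return librosa.effects.pitch_shift(y, sr=sr, n_steps=steps)

def mask_spectrogram(mel_db, n_masks=2, max_width=8):
    """Frequency-domain augmentation: blank random bands of mel bins."""
    out = mel_db.copy()
    n_mels = out.shape[0]
    for _ in range(n_masks):
        w = np.random.randint(1, max_width + 1)
        f0 = np.random.randint(0, n_mels - w)
        out[f0:f0 + w, :] = out.min()
    return out
```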

Formant Measurements of Complex Waves and Vowels Produced by Students

  • 양병곤
    • Speech Sciences, Vol. 15, No. 3, pp. 39-51, 2008
  • Formant measurements are one of the most important tools for objectively testing cross-linguistic differences among vowels produced by speakers of any given language. However, many speech analysis software packages produce erroneous estimates, and some researchers use them without any verification procedure. The purposes of this paper are to examine formant measurements of complex waves synthesized from the average formant values of five Korean vowels using the three default methods in Praat, and to verify the measured values of the five vowels produced by 20 students using one of those methods. Variance along the time axis is discussed after determining the absolute difference sum from the 1/3 vowel duration point. Results show smaller measurement errors with the burg method. Greater errors were observed with the sl or lpc methods, mostly caused by inappropriate formant settings. Formant measurement deviations were greater in the vowels produced by the female students than in those of the male students, mostly attributable to the settings for the vowels /o, u/. Formant settings can best be corrected by changing the number of formants to match the number of visible dark bands on the spectrogram. These results suggest that researchers should check the validity of the estimates produced by speech analysis software. Further studies are recommended on perception tests comparing the original sound with sound synthesized from the estimated formant values.
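
The verification workflow described here, measuring formants with Praat's burg method and adjusting the formant settings, can be scripted with parselmouth, a third-party Python interface to Praat (not part of the study). A minimal sketch; the filename is hypothetical, and the 5500 Hz ceiling follows the common Praat convention for female voices:

```python
import parselmouth

def measure_formants(path, t, max_formant=5500.0, n_formants=5):
    """Measure F1-F3 at time t (seconds) with Praat's burg algorithm.

    Lowering n_formants (or the ceiling) is the usual fix when the
    tracker reports more formants than the spectrogram shows dark bands.
    """
    snd = parselmouth.Sound(path)
    formant = snd.to_formant_burg(max_number_of_formants=n_formants,
                                  maximum_formant=max_formant)
    return [formant.get_value_at_time(i, t) for i in (1, 2, 3)]

# e.g., F1-F3 at the 1/3 duration point used in the study:
# snd = parselmouth.Sound("vowel.wav")          # hypothetical file
# f1, f2, f3 = measure_formants("vowel.wav", snd.duration / 3)
```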


Human Laughter Generation using Hybrid Generative Models

  • Mansouri, Nadia;Lachiri, Zied
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 15, No. 5, pp. 1590-1609, 2021
  • Laughter is one of the most important nonverbal sounds that humans generate; it is a means of expressing emotion. The acoustic and contextual features of this specific sound differ from those of speech, and many difficulties arise in modeling it. In this work, we propose an audio laughter generation system based on unsupervised generative models: the autoencoder (AE) and its variants. The procedure combines three main sub-processes: (1) analysis, which consists of extracting the log-magnitude spectrogram from the laughter database; (2) training of the generative models; and (3) the synthesis stage, which involves an intermediate mechanism, the vocoder. To improve synthesis quality, we suggest three hybrid models (LSTM-VAE, GRU-VAE, and CNN-VAE) that combine the representation learning capacity of the variational autoencoder (VAE) with the temporal modeling ability of a long short-term memory RNN (LSTM) and the CNN's ability to learn invariant features. To assess the performance of the proposed audio laughter generation process, an objective evaluation (RMSE) and a perceptual audio quality test (listening test) were conducted. According to these evaluation metrics, the GRU-VAE outperforms the other VAE models.
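
A minimal sketch of the GRU-VAE idea: a GRU encoder compresses spectrogram frames into a latent Gaussian, and a GRU decoder reconstructs them. The dimensions are illustrative assumptions, not the paper's configuration; at synthesis time a vocoder stage (e.g., Griffin-Lim on the generated magnitude spectrogram) would convert frames back to audio.

```python
import torch
import torch.nn as nn

class GRUVAE(nn.Module):
    """Sequence VAE: GRU encoder -> (mu, logvar) -> GRU decoder."""
    def __init__(self, n_bins=513, hidden=256, z_dim=64):
        super().__init__()
        self.enc = nn.GRU(n_bins, hidden, batch_first=True)
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        self.z2h = nn.Linear(z_dim, hidden)
        self.dec = nn.GRU(n_bins, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_bins)

    def forward(self, x):                      # x: (batch, frames, n_bins)
        _, h = self.enc(x)                     # h: (1, batch, hidden)
        mu, logvar = self.mu(h[-1]), self.logvar(h[-1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        h0 = torch.tanh(self.z2h(z)).unsqueeze(0)
        y, _ = self.dec(x, h0)                 # teacher-forced decoding
        return self.out(y), mu, logvar

def vae_loss(recon, x, mu, logvar):
    """Reconstruction error plus KL divergence to the unit Gaussian."""
    mse = torch.mean((recon - x) ** 2)
    kld = -0.5 * torch.mean(1 + logvar - mu ** 2 - logvar.exp())
    return mse + kld
```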

Development of Realtime Phonetic Typewriter

  • 조우연;최두일
    • Proceedings of the KIEE 1999 Fall Conference (Headquarters B), pp. 727-729, 1999
  • We have developed a realtime phonetic typewriter implemented on an IBM PC with a sound card under Windows 95. In this system, analysis of the speech signal, learning of the neural network, labeling of the output neurons, and visualization of the recognition results are all performed in real time. A development environment for speech processing was established by adding various functions, such as editing, saving, and loading of speech data and 3-D or gray-level display of the spectrogram. Recognition experiments using Korean phones yielded an accuracy of 71.42% for 13 basic consonants and 90.01% for 7 basic vowels.
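
The gray-level spectrogram display mentioned in the abstract takes only a few lines with SciPy and matplotlib. A minimal sketch, assuming a mono 16-bit WAV file with a hypothetical name; the window and overlap values are assumptions:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

sr, y = wavfile.read("speech.wav")             # hypothetical mono input
f, t, sxx = spectrogram(y.astype(float), fs=sr,
                        nperseg=256, noverlap=192)
# Plot power in dB, darker = more energy (gray-level display).
plt.pcolormesh(t, f, 10 * np.log10(sxx + 1e-10), cmap="gray_r")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.show()
```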


An Acoustic Study of Prosodic Features of Korean Spoken Language and Korean Folk Song (Minyo)

  • 구희산
    • Speech Sciences, Vol. 10, No. 3, pp. 133-144, 2003
  • The purpose of this acoustic experimental study was to investigate the interrelation between the prosodic features of Korean spoken language and those of Korean folk songs. For the analysis of spoken language, the words of Changbutaryoung were spoken by three female graduate students; for the musical features, the song was sung by three Kyunggi Minyo singers. Pitch contours were analyzed from sound spectrograms made with Pitch Works. Results showed that the special musical voices (breaking, tinkling, vibrating, etc.) and tunes (rising, falling, level, etc.) of the folk song occurred at the same places as the accents of the spoken language. It appeared that, even though the patterns of the pitch contours differed from each other, there was a positive interrelation between the prosodic features of Korean spoken language and those of Korean folk songs.
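
Pitch contours of the kind extracted here with Pitch Works can be approximated with librosa's YIN implementation. A minimal sketch; the filename is hypothetical and the frequency bounds are assumptions suited to female voices:

```python
import librosa

y, sr = librosa.load("changbutaryoung.wav", sr=None)   # hypothetical file
f0 = librosa.yin(y, fmin=80, fmax=500, sr=sr)          # F0 per frame, Hz
times = librosa.times_like(f0, sr=sr)                  # frame timestamps
for t, hz in zip(times, f0):
    print(f"{t:6.3f}s  {hz:7.1f} Hz")
```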


Acoustic Features of Phonatory Offset-Onset in the Connected Speech between a Female Stutterer and Non-Stutterers

  • 한지연;이옥분
    • Speech Sciences, Vol. 13, No. 2, pp. 19-33, 2006
  • The purpose of this paper was to examine the acoustic characteristics of the phonatory offset-onset mechanism in the connected speech of female adults with stuttering and normal nonfluency. The phonatory offset-onset mechanism refers to the laryngeal articulatory gestures required to mark word boundaries in the phonetic contexts of connected speech. The mechanism comprises 7 patterns identifiable on the speech spectrogram. This study examined the acoustic features of connected speech produced by female adults with stuttering (n=1) and normal nonfluency (n=3). Speech tokens in V_V, V_H, and V_S contexts were selected for analysis. Speech samples were recorded with Sound Forge, and the spectrographic analysis was conducted using Praat. Results revealed that the female stutterer (with a block type of stuttering) exhibited more laryngealization gestures in the V_V context. The laryngealization gesture was characterized by a complete glottal stop or glottal fry in both the V_H and V_S contexts. The results are discussed from theoretical and clinical perspectives.
