• Title/Summary/Keyword: Speech Spectrogram

90 search results (processing time: 0.019 seconds)

A Study on Correcting Korean Pronunciation Error of Foreign Learners by Using Supporting Vector Machine Algorithm

  • Jang, Kyungnam;You, Kwang-Bock;Park, Hyungwoo
    • International Journal of Advanced Culture Technology
    • /
    • v.8 no.3
    • /
    • pp.316-324
    • /
    • 2020
  • People learning a foreign language know from experience how difficult it is to pronounce a new language that differs from their native one. The goal of foreigners who want to learn Korean is to speak Korean as fluently as their native language so that they can communicate smoothly. However, the vocal habits of each learner's native language carry over into their Korean pronunciation, which hinders accurate transmission of information. In this paper, the pronunciation of Chinese learners was compared with that of native Koreans. For the comparison, the fundamental frequency of the speech signal and its variation were examined, and the spectrogram was analyzed. The formant frequencies, known as the resonant frequencies of the vocal tract, were calculated. Based on these characteristic parameters, a Support Vector Machine classifier was trained to distinguish the pronunciation of Koreans from that of Chinese learners. In particular, the linguistic proposition that Chinese speakers have difficulty pronouncing the Korean /ㄹ/ was scientifically examined.

The Effect of Tonsillectomy and Adenoidectomy on Acoustic Factors (구개편도 및 아데노이드 절제술이 음향학적 자질에 미치는 영향)

  • 임성태;손진호;유정운;강지원;이현석;신승헌;박재율
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics
    • /
    • v.9 no.1
    • /
    • pp.38-42
    • /
    • 1998
  • It has been reported that tonsillectomy and adenoidectomy (T & A) result in voice changes through structural changes to the vocal tract. We studied the effect of T & A on patients' voices by comparing pre-operative and post-operative voice, using a Computerized Speech Lab (CSL50), which is currently used for voice analysis. Forty-five patients who underwent T & A, aged from 3 to 42 years, took part in the study and were evaluated for voice changes and the degree of formant change in four basic vowels, /a/, /i/, /o/, and /u/. They were evaluated pre-operatively and again one month post-operatively using the MDVP and CSL programs of the CSL50. The results were as follows. With MDVP, there were some differences between pre-operative and post-operative shimmer measures within the normal range, but the other acoustic measures (F0, jitter, NHR) showed no significant differences (p>0.05). On the CSL spectrogram, F3 of /a/ and /o/ were significantly decreased (p<0.05) and F2 and F3 of /i/ were increased (p>0.05) in patients who had only tonsillectomy. For the patients who had T & A, F1 and F3 of /a/, F3 of /i/, and F1, F2 and F3 of /o/ were decreased, with a significant increase in F1 and F2 of /i/ (p<0.05).


Human Laughter Generation using Hybrid Generative Models

  • Mansouri, Nadia;Lachiri, Zied
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.5
    • /
    • pp.1590-1609
    • /
    • 2021
  • Laughter is one of the most important nonverbal sounds that humans generate; it is a means of expressing emotion. The acoustic and contextual features of this particular sound differ from those of speech, and many difficulties arise when modeling it. In this work, we propose an audio laughter generation system based on unsupervised generative models: the autoencoder (AE) and its variants. The procedure combines three main sub-processes: (1) analysis, which consists of extracting the log-magnitude spectrogram from the laughter database; (2) training of the generative models; (3) the synthesis stage, which involves an intermediate mechanism, the vocoder. To improve synthesis quality, we suggest three hybrid models (LSTM-VAE, GRU-VAE and CNN-VAE) that combine the representation learning capacity of the variational autoencoder (VAE) with the temporal modelling ability of long short-term memory RNNs (LSTM/GRU) and the CNN's ability to learn invariant features. To assess the performance of the proposed audio laughter generation process, an objective evaluation (RMSE) and a perceptual audio quality test (listening test) were conducted. According to these evaluation metrics, the GRU-VAE outperforms the other VAE models.
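The analysis stage named in sub-process (1), extracting a log-magnitude spectrogram, can be sketched with plain NumPy: frame the signal, window each frame, and take the log magnitude of its FFT. Frame length and hop size here are illustrative choices, not the paper's settings.

```python
# Log-magnitude spectrogram extraction sketch (the VAE's training input).
import numpy as np

def log_magnitude_spectrogram(y, n_fft=512, hop=128):
    window = np.hanning(n_fft)
    frames = [y[i:i + n_fft] * window
              for i in range(0, len(y) - n_fft + 1, hop)]
    mag = np.abs(np.fft.rfft(np.array(frames), axis=1))
    return np.log(mag + 1e-8)  # small floor avoids log(0)

sr = 16000
t = np.arange(sr) / sr
y = np.sin(2 * np.pi * 440 * t)   # 1 s test tone standing in for a laughter clip
S = log_magnitude_spectrogram(y)
print(S.shape)                    # (frames, n_fft // 2 + 1)
```

Each row of `S` is one frame's log spectrum; a generative model is then trained on these rows (or short sequences of them), and a vocoder turns generated spectra back into a waveform.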

Clinical Acoustic Study of Acupuncture Therapy Effects on Post-Stroke Dysarthria (침치료가 뇌졸중으로 인한 구음장애에 미치는 음향적 특성에 대한 증례보고)

  • Lee, Min-Goo;Park, Sae-Wook;Lee, Sun-Woo;Ryu, Hyun-Hee;Lee, Seung-Eon;Kim, Yong-Jeong;Son, Ji-Woo;Rhim, Eun-Kyung;Kim, Sung-Nam;Lee, In;Moon, Byung-Soon;Yun, Jong-Min
    • The Journal of Internal Korean Medicine
    • /
    • v.26 no.3
    • /
    • pp.660-669
    • /
    • 2005
  • Objectives: The aim of this study is to identify the acoustic characteristics of acupuncture therapy effects on post-stroke dysarthria. Methods: Acupuncture therapy was applied for four to six weeks by inserting needles into eight acupuncture points: CV23, CV24, bilateral 'Sheyu', ipsilateral ST4 and ST6, and contralateral LI4 and ST36 relative to the facial palsy side. All speech samples were collected pre-treatment and post-treatment using a Computerized Speech Lab. The VOT and TD of each speech sample and the vowel formants (F1 and F2) were analyzed on the spectrogram. Results: VOT and TD decreased after treatment; F1 decreased and F2 increased after treatment. Conclusions: This suggests that acupuncture therapy improves symptoms of post-stroke dysarthria by stimulating articulation organs such as the tongue, lips, cheeks, larynx and pharynx.


Comparative study of data augmentation methods for fake audio detection (음성위조 탐지에 있어서 데이터 증강 기법의 성능에 관한 비교 연구)

  • KwanYeol Park;Il-Youp Kwak
    • The Korean Journal of Applied Statistics
    • /
    • v.36 no.2
    • /
    • pp.101-114
    • /
    • 2023
  • Data augmentation techniques are used effectively to mitigate model overfitting by letting the training dataset be viewed from various perspectives. In addition to image augmentation techniques such as rotation, cropping, horizontal flip, and vertical flip, occlusion-based methods such as Cutmix and Cutout have been proposed. For models based on speech data, an occlusion-based augmentation technique can be applied after converting the 1D speech signal into a 2D spectrogram; in particular, SpecAugment is an occlusion-based augmentation technique for speech spectrograms. In this study, we compare data augmentation techniques that can be used for fake audio detection. Using data from the ASVspoof2017 and ASVspoof2019 competitions, held to detect fake audio, datasets augmented with the occlusion-based methods Cutout, Cutmix, and SpecAugment were used to train an LCNN model. All three augmentation techniques generally improved model performance: Cutmix performed best on ASVspoof2017, Mixup on ASVspoof2019 LA, and SpecAugment on ASVspoof2019 PA. In addition, increasing the number of masks for SpecAugment helps improve performance. In conclusion, the appropriate augmentation technique differs depending on the situation and the data.
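The SpecAugment masking described above is simple to sketch: zero out a few random frequency bands and time stripes of the spectrogram. This is a minimal NumPy illustration; mask counts and maximum widths are illustrative defaults, not the study's settings.

```python
# SpecAugment-style masking sketch on a (freq, time) spectrogram.
import numpy as np

def spec_augment(spec, num_freq_masks=2, num_time_masks=2,
                 max_f=8, max_t=10, rng=None):
    rng = rng or np.random.default_rng()
    out = spec.copy()
    n_freq, n_time = out.shape
    for _ in range(num_freq_masks):        # frequency masking
        f = rng.integers(0, max_f + 1)     # mask height in bins
        f0 = rng.integers(0, n_freq - f + 1)
        out[f0:f0 + f, :] = 0.0
    for _ in range(num_time_masks):        # time masking
        t = rng.integers(0, max_t + 1)     # mask width in frames
        t0 = rng.integers(0, n_time - t + 1)
        out[:, t0:t0 + t] = 0.0
    return out

spec = np.random.default_rng(0).random((80, 100))   # stand-in mel spectrogram
aug = spec_augment(spec, rng=np.random.default_rng(1))
print((aug == 0).sum(), "cells masked")
```

Increasing `num_freq_masks`/`num_time_masks` corresponds to the study's observation that more masks can help; each training epoch sees a differently occluded view of the same example.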

Sound Enhancement of low Sample rate Audio Using LMS in DWT Domain (DWT영역에서 LMS를 이용한 저 샘플링 비율 오디오 신호의 음질 향상)

  • 백수진;윤원중;박규식
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.1
    • /
    • pp.54-60
    • /
    • 2004
  • To mitigate the storage and network bandwidth problems of full CD-quality audio, current digital audio is usually restricted in sampling rate and bandwidth. This restriction normally results in low sample rate audio or calls for a data compression scheme such as MP3. However, by the Nyquist sampling theorem, such audio can only reproduce a lower frequency range than regular CD quality, and it therefore loses the rich spatial information embedded in the high frequencies. The purpose of this paper is to propose an efficient high-frequency enhancement of low sample rate audio using adaptive filtering together with DWT analysis and synthesis. The proposed algorithm uses the LMS adaptive algorithm to estimate the missing high-frequency content in the DWT domain, and then reconstructs spectrally enhanced audio through the DWT synthesis procedure. Several experiments with real speech and audio were performed and compared with another algorithm. From spectrogram inspection and listening tests, we confirm that the proposed algorithm outperforms the other algorithm and works reasonably well for most audio material.
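The LMS core of the method above can be sketched in isolation: the filter adapts its weights so that its output tracks a desired signal, here identifying an unknown short impulse response. The paper applies this per subband in the DWT domain to predict missing high-band coefficients; this NumPy sketch shows only the adaptation loop, with illustrative filter order and step size.

```python
# Least-mean-squares (LMS) adaptive filter sketch: system identification.
import numpy as np

def lms(x, d, order=8, mu=0.05):
    w = np.zeros(order)
    y = np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # most recent samples first
        y[n] = w @ u                       # filter output
        e = d[n] - y[n]                    # estimation error drives the update
        w += mu * e * u
    return w, y

rng = np.random.default_rng(0)
x = rng.standard_normal(4000)              # white excitation
h = np.array([0.6, -0.3, 0.1])             # unknown response to be learned
d = np.convolve(x, h)[:len(x)]             # desired signal = x filtered by h
w, y = lms(x, d)
print(np.round(w[:3], 2))
```

Because the desired signal is exactly realizable by a short FIR filter, the weights converge to `h` and the error goes to zero; in the paper's setting the "desired" signal is the known high-band content during training.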

Design of the Noise Suppressor Using Wavelet Transform (웨이블릿 변환을 이용한 잡음제거기 설계)

  • 원호진;김종학;이인성
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.7
    • /
    • pp.37-46
    • /
    • 2001
  • This paper proposes a new noise suppression method based on wavelet transform analysis. A noise suppressor using the wavelet transform has clearer advantages for babble noise than one using the short-time Fourier transform. We designed a new channel structure based on spectral subtraction of wavelet transform coefficients, and used a wavelet mask pattern with higher time resolution in the high frequencies, which adapts well to the non-stationary character of babble noise. To evaluate the performance of the proposed noise canceller, informal subjective listening tests (MOS tests) were performed in the background noise environments of mobile communication (car noise, street noise, babble noise). The proposed noise suppression algorithm showed about a 0.2 MOS improvement over the suppression algorithm of EVRC in informal listening tests. The noise reduction achieved by the proposed method is also shown in spectrograms of the speech signal.
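The idea of subtracting a noise floor from wavelet coefficients can be illustrated with a one-level Haar transform and soft thresholding of the detail (high-frequency) band. This is a toy stand-in for the paper's channel-wise spectral subtraction, assuming a known noise level; a real suppressor would estimate it and use deeper, better wavelets.

```python
# One-level Haar DWT with soft thresholding of the detail band.
import numpy as np

def haar_dwt(x):
    # Approximation (low-pass) and detail (high-pass) coefficients.
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(x, thresh):
    # Subtract the noise floor from detail coefficients (soft thresholding).
    a, d = haar_dwt(x)
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)
    return haar_idwt(a, d)

rng = np.random.default_rng(0)
t = np.arange(1024) / 1024
clean = np.sin(2 * np.pi * 5 * t)              # slowly varying "speech"
noisy = clean + 0.2 * rng.standard_normal(1024)
out = denoise(noisy, thresh=0.2)
print(np.mean((noisy - clean) ** 2), np.mean((out - clean) ** 2))
```

The slow signal lives almost entirely in the approximation band, so shrinking the detail band removes much of the noise there with little distortion; the output MSE drops below the input MSE.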


Visual·Auditory·Acoustic Study on Singing Vowels of Korean Lyric Songs (시각과 청각 및 음향적 관점에서의 노랫말 모음 연구)

  • Lee Jai Kang
    • Proceedings of the KSPS conference
    • /
    • 1996.10a
    • /
    • pp.362-366
    • /
    • 1996
  • This paper is divided into two parts. The first is a study of the vowels in Korean singers' lyric songs in terms of Daniel Jones' Cardinal Vowels; the second is an acoustic study of the vowels in the author's own singing of Korean lyric songs. The analysis data are a KBS concert video tape and CSL .NSP files of the author's singing, and the informants are famous singers, i.e. 3 sopranos, 1 mezzo-soprano, 2 tenors, 1 baritone, and the author. The aim of the analysis is to determine the quality of the 8 Korean vowels ([equation omitted]) in singing. The vowels are described in terms of closed, half-closed, half-open and open vowels, rounded and unrounded vowels, and formants. The former study was carried out by pausing the monitor screen at each scene to be analyzed; the latter, by analyzing spectrograms converted from the CSL .NSP files. The results are as follows. Visually and auditorily, Korean vowel quality in singing shows three tendencies: vowels are more rounded than in ordinary spoken Korean, they are centralized toward the center of the Cardinal Vowel chart, and their quality is more diverse. The acoustic analysis examined four formants. F1 and F2 show values similar to speech, and the near-identical F1 values suggest that the vocal organs adapt to the singing situation. The range of F3 is the widest of all, so F3 may be a characteristic of singing. In conclusion, compared with ordinary Korean vowels, the vowels of Korean lyric songs tend toward rounding, centralization toward the center of the Cardinal Vowel chart, diversity of vowel quality, and the widest range in F3.


Method of Automatically Generating Metadata through Audio Analysis of Video Content (영상 콘텐츠의 오디오 분석을 통한 메타데이터 자동 생성 방법)

  • Sung-Jung Young;Hyo-Gyeong Park;Yeon-Hwi You;Il-Young Moon
    • Journal of Advanced Navigation Technology
    • /
    • v.25 no.6
    • /
    • pp.557-561
    • /
    • 2021
  • Metadata has become an essential element for recommending video content to users. However, it is generated passively by video content providers. This paper studies a method for automatically generating metadata, in place of the existing manual metadata input method. In addition to the emotion-tag extraction method of our previous study, we investigated a method for automatically generating genre and country-of-production metadata from movie audio. The genre was extracted from the audio spectrogram using a ResNet34 artificial neural network as a transfer learning model, and the language of the speakers in the movie was detected through speech recognition. This confirmed the feasibility of automatically generating metadata through artificial intelligence.

A STUDY ON THE INFLUENCE OF THE PALATAL PLATES UPON THE DURATION OF KOREAN SOUNDS (구개상 장착에 따른 한국어 어음의 조음시간 변화에 관한 연구)

  • Koh, Yeo-Joon;Kim, Chang-Whe;Kim, Yong-Soo
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.32 no.1
    • /
    • pp.77-102
    • /
    • 1994
  • Many studies have been made on the masticatory and esthetic effects of prosthodontic treatment, but few on the restoration of pronunciation, especially in complete denture wearers. The purpose of this study is to provide a basis that could help complete denture wearers' speech adaptation, by analyzing the influence of palatal coverage upon the duration of consonants and vowels with the methods of experimental phonetics. For this study, metal plates and resin plates were made for 3 male subjects in their twenties who have good occlusion and no speech or hearing disorders. Then 8 Korean consonants and 4 Korean vowels were selected, systematically considering phonetic variables such as the place and manner of articulation, lenis/fortis, and the mutual effect of each phoneme. These were combined into meaningless test words of the form /VCV/ and embedded in carrier sentences. Each informant uttered the sentences 1) without a plate, 2) with the metal plate, and 3) with the resin plate. The recorded data were analyzed through the waveform and spectrogram using the programs SoundEdit, Signalize, and Statview 512+ for the Macintosh computer. The duration of each segment was measured by locating the boundaries between the preceding vowels and consonants and between the consonants and the following vowels. The study led to the following conclusions. 1. With a palatal plate, the duration of all the test words increased, and it increased more with the resin plate than with the metal plate. 2. With a palatal plate, the durations of all the preceding vowels, consonants, and following vowels increased, but the temporal structure of the test words was maintained. 3. As for the manner of articulation, the fricative /s/(ㅅ) was greatly influenced by both kinds of palatal plate. 4. As for the place of articulation, the alveolar sounds /d/(ㄷ) and /n/(ㄴ) were greatly influenced by the kind of palatal plate; the velar sounds /ŋ/(ㅇ) and /g/(ㄱ) were influenced by the palatal plates, but the kind of plate showed no significant effect. 5. As for lenis/fortis, lenis was influenced more by the kind of palatal plate. 6. As for the influence of vowels upon each segment in the test words, the palatal vowel /i/(ㅣ) had greater influence than the pharyngeal vowel /a/(ㅏ), and the following vowels more than the preceding vowels.
