• Title/Summary/Keyword: Automatic Speech Analysis


Emergency dispatching based on automatic speech recognition (음성인식 기반 응급상황관제)

  • Lee, Kyuwhan; Chung, Jio; Shin, Daejin; Chung, Minhwa; Kang, Kyunghee; Jang, Yunhee; Jang, Kyungho
    • Phonetics and Speech Sciences / v.8 no.2 / pp.31-39 / 2016
  • In emergency dispatching at the 119 Command & Dispatch Center, inconsistencies between the 'standard emergency aid system' and the 'dispatch protocol,' both of which are mandatory to follow, reduce the dispatcher's efficiency. If an emergency dispatch system uses automatic speech recognition (ASR) to process the dispatcher's protocol speech during case registration, it can instantly extract and provide the information required by the 'standard emergency aid system,' making rescue commands more efficient. For this purpose, we developed a Korean large-vocabulary continuous speech recognition system with a 400,000-word vocabulary, drawn from news, SNS, blogs, and the emergency rescue domain, for use in the emergency dispatch system. The acoustic model was trained on 1,300 hours of telephone (8 kHz) speech, and the language model on a 13 GB text corpus. From a transcribed corpus of 6,600 real telephone calls, call logs with emergency rescue command classes and identified major symptoms were extracted and linked to rescue activity logs and the National Emergency Department Information System (NEDIS). ASR is applied to the dispatcher's repetition utterances about patient information, and the patient information is extracted based on the Levenshtein distance between the ASR result and the template information. Experimental results show a word error rate of 9.15% for speech recognition and a 95.8% emergency response detection rate for the emergency dispatch system.
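
The Levenshtein matching step lends itself to a short illustration. Below is a minimal Python sketch of edit distance and template selection; the match_template helper and its inputs are hypothetical, not the authors' code.

    # Minimal sketch of Levenshtein-based template matching (illustrative
    # only; not the system described in the paper).
    def levenshtein(a: str, b: str) -> int:
        # Classic dynamic-programming edit distance.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            curr = [i]
            for j, cb in enumerate(b, start=1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution
            prev = curr
        return prev[-1]

    def match_template(asr_result: str, templates: list[str]) -> str:
        # Pick the template closest to the ASR output.
        return min(templates, key=lambda t: levenshtein(asr_result, t))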

Analysis of Feature Extraction Methods for Distinguishing the Speech of Cleft Palate Patients (구개열 환자 발음 판별을 위한 특징 추출 방법 분석)

  • Kim, Sung Min; Kim, Wooil; Kwon, Tack-Kyun; Sung, Myung-Whun; Sung, Mee Young
    • Journal of KIISE / v.42 no.11 / pp.1372-1379 / 2015
  • This paper presents an analysis of feature extraction methods used for distinguishing the speech of patients with cleft palates from that of people with normal palates. This research is a basic study toward a software system for automatic recognition and restoration of speech disorders, in pursuit of improving the welfare of speech-disabled persons. Monosyllabic voice data were collected for three groups: normal speech, cleft palate speech, and simulated cleft palate speech. The data consist of 14 basic Korean consonants, 5 complex consonants, and 7 vowels. Feature extraction was performed using three well-known methods: LPC, MFCC, and PLP. Pattern recognition was carried out with a GMM acoustic model. From our experiments, we conclude that MFCC is generally the most effective feature for identifying speech distortions. These results may contribute to the automatic detection and correction of the distorted speech of cleft palate patients, along with the development of a tool for grading levels of speech distortion.
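
As a rough illustration of the pipeline described (MFCC features scored by per-class GMMs), here is a Python sketch; the library choices (librosa, scikit-learn) and parameters such as n_mfcc=13 and n_components=8 are assumptions, not the authors' setup.

    # Sketch of an MFCC + GMM pipeline in the spirit of the paper.
    import librosa
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def mfcc_features(path: str) -> np.ndarray:
        y, sr = librosa.load(path, sr=16000)
        # 13 MFCCs per frame, frames as rows.
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T

    def train_gmm(feature_list: list[np.ndarray]) -> GaussianMixture:
        # One GMM per class, trained on pooled frames.
        gmm = GaussianMixture(n_components=8, covariance_type="diag")
        return gmm.fit(np.vstack(feature_list))

    def classify(feats: np.ndarray, gmms: dict[str, GaussianMixture]) -> str:
        # Choose the class whose GMM gives the highest average log-likelihood.
        return max(gmms, key=lambda label: gmms[label].score(feats))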

Developing the speech screening test for 4-year-old children and application of Korean speech sound analysis tool (KSAT) (4세 말소리발달 선별검사 개발과 한국어말소리분석도구(Korean Speech Sound Analysis Tool, KSAT)의 활용)

  • Soo-Jin Kim; Ki-Wan Jang; Moon-Soo Chang
    • Phonetics and Speech Sciences / v.16 no.1 / pp.49-55 / 2024
  • This study aims to develop a three-sentence speech screening test to evaluate speech development in 4-year-old children and to provide standards for comparison with peers. The screening test was administered to 24 children in each of the first and second halves of their fourth year. The screening results correlated at .7 with an existing speech disorder assessment. We then examined whether the two age groups differed in the phonological development indicators and error patterns obtained from the screening test; the developmental indicators of the older group were higher, but the differences were not statistically significant. The Korean Speech Sound Analysis Tool (KSAT) was used for all analyses, and its automatic analysis results were compared with the clinician's manual analysis; agreement between the automatic and manual error-pattern analyses was 93.63%. The significance of this study is that it provides a normative standard for 4-year-olds' speech based on a three-sentence screening test at the elicited-sentence level, and that it reviews the applicability of the KSAT in both clinical and research settings.
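
The reported 93.63% agreement is presumably a simple percent-agreement figure; a minimal Python sketch of that computation, with hypothetical error-pattern labels, might look like this.

    # Percent agreement between automatic and manual error-pattern labels
    # (label names are hypothetical; not KSAT's actual categories).
    def percent_agreement(auto: list[str], manual: list[str]) -> float:
        assert len(auto) == len(manual)
        hits = sum(a == m for a, m in zip(auto, manual))
        return 100.0 * hits / len(auto)

    print(percent_agreement(["stopping", "fronting"],
                            ["stopping", "gliding"]))  # 50.0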

An Acoustic Analysis of Diadochokinesis in Patients with Parkinson's Disease (파킨슨병 환자 대상 조음교대운동의 음향적 분석)

  • Kang, Young Ae; Park, Hyun Young; Koo, Bon Seok
    • Phonetics and Speech Sciences / v.5 no.4 / pp.3-15 / 2013
  • Acoustic analysis of diadochokinesis (DDK) has been used to evaluate dysarthria, but no automatic method for this evaluation has been available. The aim of this study was to introduce a new automated program for measuring DDK tasks and to apply it to patients with idiopathic Parkinson's disease (IPD). Forty-seven patients with IPD and a healthy control group of twenty participants were selected, with every DDK task recorded three times. Twenty-five acoustic parameters were implemented in the program; the relevant parameters, covering DDK timing, pitch, and intensity, were analyzed by two-way ANOVA. Significant group differences were found in the DDK timing, pitch-related, and intensity parameters. The findings indicate that the pitch of the control group was more stable than that of the IPD group. Although the IPD patients showed higher intensity values, this was attributed to their weakness in controlling speech on a single breath.
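
One DDK timing measure could, for example, be derived from acoustic onsets. The following Python sketch estimates syllable rate from a /pa-ta-ka/ recording; the use of librosa onset detection is an assumption for illustration, not the authors' program.

    # Rough sketch of a DDK rate measure from onset detection.
    import librosa

    def ddk_rate(path: str) -> float:
        y, sr = librosa.load(path, sr=16000)
        onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")
        duration = len(y) / sr
        return len(onsets) / duration  # approx. syllables per second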

An acoustical analysis of synchronous English speech using automatic intonation contour extraction (영어 동시발화의 자동 억양궤적 추출을 통한 음향 분석)

  • Yi, So Pae
    • Phonetics and Speech Sciences / v.7 no.1 / pp.97-105 / 2015
  • This research focuses on the intonational characteristics of synchronous English speech. Intonation contours were extracted from 1,848 utterances produced in two speaking modes (solo vs. synchronous) by 28 native speakers of English (12 women and 16 men). Synchronous speech was found to be slower than solo speech, and women spoke more slowly than men; the effect of speaking mode on speech rate was larger than that of gender, with no interaction between the two factors. Analysis of pitch point features shows that synchronous speech has smaller Pt (pitch point movement time), Pr (pitch point pitch range), Ps (pitch point slope), and Pd (pitch point distance) than solo speech, again with no interaction between speaking mode and gender. Analysis of sentence-level features reveals that synchronous speech has smaller Sr (sentence-level pitch range), Ss (sentence slope), MaxNr (normalized maximum pitch), and MinNr (normalized minimum pitch), but greater Min (minimum pitch) and Sd (sentence duration), than solo speech. It is also shown that the higher the Mid (median pitch), MaxNr, and MinNr in the solo mode, the more they are reduced in the synchronous mode. Max, Min, and Mid show greater speaker discriminability than the other features.
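
As a sketch of automatic contour extraction and a sentence-level slope measure (an Ss-like value), the following Python fragment uses pYIN pitch tracking; the tooling is my assumption, since the paper does not name its extraction method here.

    # Extract an F0 contour and fit a sentence-level slope.
    import librosa
    import numpy as np

    def f0_contour(path: str):
        y, sr = librosa.load(path, sr=16000)
        f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                     fmax=librosa.note_to_hz("C6"), sr=sr)
        times = librosa.times_like(f0, sr=sr)
        return times[voiced], f0[voiced]  # voiced frames only

    def contour_slope(times, f0):
        # Linear-regression slope of the contour, in Hz per second.
        return np.polyfit(times, f0, 1)[0]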

Cyber Threats Analysis of AI Voice Recognition-based Services with Automatic Speaker Verification (화자식별 기반의 AI 음성인식 서비스에 대한 사이버 위협 분석)

  • Hong, Chunho; Cho, Youngho
    • Journal of Internet Computing and Services / v.22 no.6 / pp.33-40 / 2021
  • Automatic Speech Recognition (ASR) is a technology that analyzes human speech sounds as signals and automatically converts them into character strings that humans can understand. Speech recognition has evolved from recognizing single words to recognizing sentences of multiple words; in real-time voice conversation, high recognition rates improve the convenience of natural information delivery and expand the scope of voice-based applications. On the other hand, as speech recognition technology is actively deployed, concerns about related cyber attacks and threats are also increasing. Existing studies focus mainly on the technology itself, such as the design of Automatic Speaker Verification (ASV) techniques and improvements in accuracy; in-depth and varied analyses of attacks and threats remain scarce. In this study, we propose a cyber attack model that bypasses voice authentication by simply manipulating voice frequency and speed, targeting AI voice recognition services equipped with speaker identification, and we analyze the resulting cyber threats through extensive experiments on the speaker identification systems of commercial smartphones. We thereby aim to highlight the seriousness of these threats and to stimulate research on effective countermeasures.
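
The attack model rests on simple frequency and speed manipulation. Purely as an illustration of such manipulation (the filenames and parameter values are hypothetical, not the paper's experimental settings), a Python sketch might be:

    # Pitch-shift and time-stretch a recording with librosa.
    import librosa
    import soundfile as sf

    y, sr = librosa.load("enrolled_speaker.wav", sr=16000)  # hypothetical file
    y_shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=2)   # +2 semitones
    y_attack = librosa.effects.time_stretch(y_shifted, rate=1.1)   # 10% faster
    sf.write("manipulated.wav", y_attack, sr)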

A Study of Speech Control Tags Based on Semantic Information of a Text (텍스트의 의미 정보에 기반을 둔 음성컨트롤 태그에 관한 연구)

  • Chang, Moon-Soo; Chung, Kyeong-Chae; Kang, Sun-Mee
    • Speech Sciences / v.13 no.4 / pp.187-200 / 2006
  • Speech synthesis technology is widely used, and its application area is broadening to automatic response services, learning systems for disabled persons, and so on. However, the sound quality of speech synthesizers has not yet reached a satisfactory level for users. To produce synthesized speech, existing synthesizers generate prosody only from interval information such as spaces and commas, or from a few punctuation marks such as question marks and exclamation marks, so it is difficult to generate natural human prosody even with a large speech database. One remedy is to select prosody after higher-level language processing. This paper proposes a method for generating prosody control tags by analyzing the meaning of a sentence together with speech situation information. We use Systemic Functional Grammar (SFG) [4], which analyzes sentence meaning with speech situation information, considering the preceding sentence, the situation of the conversation, the relationships among the participants, etc. In this study, we generate Semantic Speech Control Tags (SSCT) from the results of the SFG meaning analysis and voice wave analysis.
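
The SSCT format itself is defined in the paper and not reproduced here; purely as a hypothetical illustration of what a semantic prosody-control tag could look like, consider:

    # Hypothetical prosody tagging (made-up tag names, not the paper's SSCT).
    def tag_utterance(text: str, emphasis: str, boundary_tone: str) -> str:
        # Wrap an utterance in illustrative prosody-control attributes.
        return f'<sct emphasis="{emphasis}" tone="{boundary_tone}">{text}</sct>'

    print(tag_utterance("Did you call me?", "high", "rising"))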


Shapes of Vowel F0 Contours Influenced by Preceding Obstruents of Different Types - Automatic Analyses Using Tilt Parameters-

  • Jang, Tae-Yeoub
    • Speech Sciences / v.11 no.1 / pp.105-116 / 2004
  • The fundamental frequency (F0) of a vowel is known to be affected by the identity of the preceding consonant; the general agreement is that strong consonants trigger higher F0 than weak consonants. However, there has been disagreement about the shape of these segmentally affected F0 contours: some studies report that contour shapes are differentiated by consonant type, while others regard this observation as misleading. This research attempts to resolve the controversy by investigating the shapes and slopes of F0 contours in Korean word-level speech data produced by four male speakers. Instead of relying entirely on traditional human intuition and judgment, I employed an automatic F0 contour analysis technique known as tilt parameterisation (Taylor 2000). After the necessary manipulation of the F0 contour of each data token, the various parameters are collapsed into a single tilt value that directly indicates the shape of the contour. The result, in terms of statistical inference, shows that it is not viable to conclude that consonant type is significantly related to F0 contour shape. A supplementary measurement was also made to see whether the slope of each contour carries meaningful information; unlike the shapes themselves, the slopes appear more useful for consonantal differentiation, although confirmation through further refined experiments is required.
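
Tilt parameterisation collapses the rise and fall amplitudes (A) and durations (D) of an F0 event into a single value in [-1, 1], where +1 is a pure rise, -1 a pure fall, and 0 a symmetric rise-fall. A minimal Python sketch following Taylor's (2000) formulation (the example values are invented):

    def tilt(a_rise: float, a_fall: float, d_rise: float, d_fall: float) -> float:
        # Average of amplitude tilt and duration tilt.
        amp = (abs(a_rise) - abs(a_fall)) / (abs(a_rise) + abs(a_fall))
        dur = (d_rise - d_fall) / (d_rise + d_fall)
        return 0.5 * (amp + dur)

    print(tilt(a_rise=30.0, a_fall=10.0, d_rise=0.12, d_fall=0.08))  # 0.35, rise-dominant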


Intra-and Inter-frame Features for Automatic Speech Recognition

  • Lee, Sung Joo; Kang, Byung Ok; Chung, Hoon; Lee, Yunkeun
    • ETRI Journal / v.36 no.3 / pp.514-517 / 2014
  • In this paper, alternative dynamic features for speech recognition are proposed. The goal is to improve recognition accuracy by deriving representations of distinctive dynamic characteristics from the speech spectrum. The work was inspired by two temporal dynamics of the speech signal: the highly non-stationary nature of speech, and the inter-frame change of the speech spectrum. We adopt a sub-frame spectrum analyzer to capture very rapid spectral changes within a speech analysis frame, and we attempt to measure spectral fluctuations in a more complex manner than traditional dynamic features such as delta or double-delta. To evaluate the proposed features, speech recognition tests were conducted in smartphone environments. The experimental results show that feature streams simply combined with the proposed features effectively improve the recognition accuracy of a hidden Markov model-based speech recognizer.
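
For reference, the traditional delta/double-delta baseline that the proposed features are contrasted with is a standard computation; a minimal librosa sketch follows (the paper's own intra- and inter-frame features are different and not reproduced here).

    # Standard static + delta + double-delta feature stream.
    import librosa
    import numpy as np

    def static_plus_dynamic(y: np.ndarray, sr: int) -> np.ndarray:
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
        delta = librosa.feature.delta(mfcc)             # first-order dynamics
        delta2 = librosa.feature.delta(mfcc, order=2)   # second-order dynamics
        return np.vstack([mfcc, delta, delta2])         # 39-dim per frame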

Differentiation of Aphasic Patients from the Normal Control Via a Computational Analysis of Korean Utterances

  • Kim, HyangHee; Choi, Ji-Myoung; Kim, Hansaem; Baek, Ginju; Kim, Bo Seon; Seo, Sang Kyu
    • International Journal of Contents / v.15 no.1 / pp.39-51 / 2019
  • Spontaneous speech provides rich information on the linguistic characteristics of individuals, so computational analysis of speech can make the evaluation of patients' speech more efficient. This study provides a method to differentiate persons with and without aphasia based on language usage. Ten aphasic patients and matched normal controls participated, and all were asked to describe a set of given words. Their utterances were linguistically processed and compared. Computational analyses, from PCA (Principal Component Analysis) to machine learning, were conducted to select the relevant linguistic features and then to classify the two groups based on the selected features. Function words, not content words, were found to be the main differentiator between the groups; the most viable discriminators were demonstratives, function words, sentence-final endings, and postpositions. The machine learning classification model was quite accurate (90%) and impressively stable. This study is noteworthy as the first attempt to use computational analysis to characterize word usage patterns in Korean aphasic patients, thereby discriminating them from the normal group.
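
As a sketch of the PCA-plus-classifier workflow described above, the following Python fragment uses a placeholder feature matrix and logistic regression; both are assumptions for illustration, since the paper does not specify this exact pipeline.

    # PCA feature reduction followed by a linear classifier.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Rows: speakers; columns: counts of linguistic features
    # (e.g., demonstratives, postpositions, sentence-final endings).
    X = np.random.rand(20, 12)          # placeholder feature matrix
    y = np.array([0] * 10 + [1] * 10)   # 0 = control, 1 = aphasic

    clf = make_pipeline(PCA(n_components=5), LogisticRegression())
    clf.fit(X, y)
    print(clf.score(X, y))  # training accuracy on the placeholder data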