Title/Summary/Keyword: Non-speech

Search results: 470

Non-Intrusive Speech Quality Estimation of G.729 Codec using a Packet Loss Effect Model (G.729 코덱의 패킷 손실 영향 모델을 이용한 비 침입적 음질 예측 기법)

  • Lee, Min-Ki;Kang, Hong-Goo
    • The Journal of the Acoustical Society of Korea / v.32 no.2 / pp.157-166 / 2013
  • This paper proposes a non-intrusive speech quality estimation method that considers the effect of packet loss on perceptual quality. Packet loss is a major cause of quality degradation in packet-based speech communication networks, and its effect varies with the input speech characteristics and with the performance of the embedded packet loss concealment (PLC) algorithm. To build a quality estimation system that accounts for packet loss effects, we first observe packet loss behavior in the G.729 codec, one of the narrowband codecs used in VoIP systems. To quantify the effect of lost packets, we design a classification algorithm that uses only the speech parameters of the G.729 decoder. The degradation value of each class is then selected iteratively so as to maximize the correlation with PESQ-LQ degradation scores, and the total quality degradation is modeled as a weighted sum. Analysis of the correlation measures yielded correlation values of 0.8950 for the intrusive model and 0.8911 for the non-intrusive method.
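
A rough sketch of the weighted-sum model this abstract describes: one degradation value per packet-loss class is chosen by a grid search that maximizes the Pearson correlation with PESQ-LQ degradation scores. The class count, search grid, and toy data are illustrative assumptions, not the authors' actual procedure.

```python
# Sketch of fitting per-class degradation weights (assumed procedure).
import numpy as np
from itertools import product

def total_degradation(loss_counts, weights):
    """Model total degradation as a weighted sum: D = sum_c w_c * n_c."""
    return loss_counts @ weights

def fit_class_weights(loss_counts, pesq_degradation, grid):
    """Choose one degradation value per loss class that maximizes the
    correlation between modeled and measured PESQ-LQ degradation."""
    best_w, best_r = None, -1.0
    for w in product(grid, repeat=loss_counts.shape[1]):
        d = total_degradation(loss_counts, np.array(w))
        if d.std() == 0:          # skip degenerate all-zero weightings
            continue
        r = np.corrcoef(d, pesq_degradation)[0, 1]
        if r > best_r:
            best_w, best_r = np.array(w), r
    return best_w, best_r

# Toy data: 100 utterances, 3 hypothetical loss classes.
rng = np.random.default_rng(0)
counts = rng.integers(0, 5, size=(100, 3)).astype(float)
pesq_deg = counts @ np.array([0.8, 0.4, 0.1]) + rng.normal(0, 0.05, 100)
weights, corr = fit_class_weights(counts, pesq_deg, np.linspace(0, 1, 11))
print(weights, round(corr, 4))
```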

Hybrid CTC-Attention Network-Based End-to-End Speech Recognition System for Korean Language

  • Hosung Park;Changmin Kim;Hyunsoo Son;Soonshin Seo;Ji-Hwan Kim
    • Journal of Web Engineering / v.21 no.2 / pp.265-284 / 2021
  • In this study, an automatic end-to-end speech recognition system based on a hybrid CTC-attention network is proposed for the Korean language. Deep neural network/hidden Markov model (DNN/HMM)-based speech recognition systems have driven dramatic improvement in this area; however, it is difficult for non-experts to develop such systems for new applications. End-to-end approaches simplify the speech recognition system into a single-network architecture, so systems can be developed without expert knowledge. In this paper, we propose a hybrid CTC-attention network as an end-to-end speech recognition model for Korean. The model utilizes a CTC objective function during attention model training, which improves both recognition accuracy and training speed. In most languages, end-to-end speech recognition uses characters as output labels. For Korean, however, character-based end-to-end speech recognition is inefficient because the language has 11,172 possible characters, a large number compared to other languages: English has 26 characters and Japanese has 50. To address this problem, we use the 49 Korean graphemes as output labels. Experimental results show a 10.02% character error rate (CER) when 740 hours of Korean training data are used.
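
The hybrid objective this abstract refers to is commonly written as L = λ·L_CTC + (1−λ)·L_attention. Below is a minimal PyTorch sketch; the tensor shapes and the λ value are assumptions, and the paper's exact configuration may differ.

```python
# Hybrid CTC-attention loss: weighted sum of a CTC loss on the encoder
# outputs and a cross-entropy loss on the attention decoder outputs.
import torch
import torch.nn as nn

ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)
att_loss = nn.CrossEntropyLoss(ignore_index=-1)  # -1 marks padding
lam = 0.3  # CTC weight; an assumed, commonly used value

def hybrid_loss(enc_logits, enc_lens, ctc_targets, tgt_lens,
                dec_logits, att_targets):
    # enc_logits: (T, B, V) encoder outputs; dec_logits: (B, L, V)
    # ctc_targets: (B, S) grapheme indices; att_targets: (B, L), -1 padded
    l_ctc = ctc_loss(enc_logits.log_softmax(-1), ctc_targets,
                     enc_lens, tgt_lens)
    l_att = att_loss(dec_logits.transpose(1, 2), att_targets)
    return lam * l_ctc + (1.0 - lam) * l_att
```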

Treatment of velopharyngeal insufficiency in a patient with a submucous cleft palate using a speech aid: the more treatment options, the better the treatment results

  • Park, Yun-Ha;Jo, Hyun-Jun;Hong, In-Seok;Leem, Dae-Ho;Baek, Jin-A;Ko, Seung-O
    • Maxillofacial Plastic and Reconstructive Surgery / v.41 / pp.19.1-19.6 / 2019
  • Background: Submucous cleft palate (SMCP) is a type of cleft palate that may result in velopharyngeal insufficiency (VPI). The palatal muscles completely separate the oral and nasal cavities by closing the velopharynx during functions such as speech and swallowing, and hypernasality may arise from anatomical or neurological abnormalities in these functions. Treatment typically involves a combination of surgical intervention, a speech aid, and speech therapy. This case report demonstrates VPI resulting from SMCP that was successfully treated without surgical intervention, solely with a speech aid appliance and speech therapy. Case presentation: A 13-year-old female patient with a speech disorder due to velopharyngeal insufficiency caused by a submucous cleft palate visited our OMFS clinic. On intraoral examination, the patient had a short soft palate and a bifid uvula, and the palatal muscles did not contract properly during speech. She had no surgical history such as primary palatoplasty or pharyngoplasty except for a tonsillectomy, and no other relevant medical history. Objective speech assessment using a nasometer was performed, and the patient was diagnosed with SMCP. The patient showed decreased speech intelligibility resulting from hypernasality. We decided to treat the patient with a speech aid (palatal lift) along with speech therapy. During the 7-month treatment, hypernasality measured by the nasometer decreased and speech intelligibility became normal. Conclusions: Surgery remains the first treatment option for patients with velopharyngeal insufficiency from a submucous cleft palate. However, few reports have included objective speech evaluation before and after treatment, and no recent study has reported non-surgical treatment. From this perspective, this report of objectively measured improvement in the speech intelligibility of a VPI patient with SMCP through non-surgical treatment is significant. A speech aid can be considered one of the treatment options for the management of SMCP.

Survey on the Status and Perceptions, Needs of Non-verbal Autism Spectrum Disorders Intervention of Speech-Language Pathologists (무발화 자폐스펙트럼장애 중재에 대한 언어재활사의 현황과 인식, 요구 조사)

  • Son, So-Yee
    • The Journal of the Korea Contents Association / v.22 no.9 / pp.520-530 / 2022
  • The purpose of this study is to examine, through a survey, the status, perceptions, and needs of speech-language pathologists (SLPs) regarding intervention for non-verbal autism spectrum disorder (ASD). Among the SLPs registered with the Korean Association of Speech-Language Pathologists (KSLP), 116 participated in this survey. The results are as follows. First, 96.6% of SLPs reported that they had received referrals for non-verbal ASD; SELSI was the most used evaluation tool, and communication ability and social interaction were the most frequent intervention goals. Second, 86.2% of SLPs said they had difficulties with speech therapy, and the main burden was a lack of therapy methods for non-verbal ASD. Their self-rated knowledge of speech therapy for non-verbal ASD was low in the treatment area, and they reported confidence only in communication ability and social interaction. Third, the education considered most necessary within the curriculum was education on treatment methods, and the suggested improvements for education outside regular courses were more expert courses and workshops and more active supervision. Based on these results, it is expected that the related curriculum will be expanded and improved in the future.

PROSODY IN SPEECH TECHNOLOGY - National project and some of our related works -

  • Hirose Keikichi
    • Proceedings of the Acoustical Society of Korea Conference / spring / pp.15-18 / 2002
  • Prosodic features of speech are known to play an important role in the transmission of linguistic information in human conversation, and their role in the transmission of para- and non-linguistic information is even greater. In spite of their importance in human conversation, from an engineering viewpoint research has focused mainly on segmental features and not so much on prosodic features. With the aim of promoting research on prosody, a research project 'Prosody and Speech Processing' is now underway. A rough sketch of the project is first given in the paper. The paper then introduces several prosody-related research works ongoing in our laboratory, including corpus-based fundamental frequency contour generation, speech rate control for dialogue-like speech synthesis, analysis of the prosodic features of emotional speech, reply speech generation in spoken dialogue systems, and language modeling with prosodic boundaries.

Noise Reduction Using MMSE Estimator-based Adaptive Comb Filtering (MMSE Estimator 기반의 적응 콤 필터링을 이용한 잡음 제거)

  • Park, Jeong-Sik;Oh, Yung-Hwan
    • MALSORI / no.60 / pp.181-190 / 2006
  • This paper describes a speech enhancement scheme that leads to significant improvements in recognition performance when used in the ASR front-end. The proposed approach is based on adaptive comb filtering and an MMSE-related parameter estimator. While adaptive comb filtering reduces noise components remarkably, it is rarely effective against non-stationary noise. Furthermore, due to the uniformly distributed frequency response of the comb filter, it can seriously distort clean speech signals. This paper proposes an improved comb filter that adjusts its spectral magnitude to the original speech based on the speech absence probability and a gain modification function. In addition, we introduce a modified comb filtering-based speech enhancement scheme for ASR in mobile environments. Evaluation experiments carried out on the Aurora 2 database demonstrate that the proposed method outperforms conventional adaptive comb filtering techniques in both clean and noisy environments.
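
A minimal sketch of the idea behind this abstract: a comb-filter gain with peaks at pitch harmonics, blended toward a floor according to a speech absence probability (SAP) so the filter attenuates more where speech is likely absent. The SAP value, gain floor, and mixing rule are simplified assumptions, not the paper's estimator.

```python
# Sketch of SAP-modulated comb filtering (assumed gain rule).
import numpy as np

def comb_gain(n_bins, f0_bin, width=1, floor=0.1):
    """Unit gain around harmonics of the pitch bin, small floor elsewhere."""
    g = np.full(n_bins, floor)
    for h in range(f0_bin, n_bins, f0_bin):
        g[max(h - width, 0):min(h + width + 1, n_bins)] = 1.0
    return g

def enhance_frame(spec, f0_bin, sap, floor=0.1):
    """Apply the comb shape where speech is likely present, and pull the
    gain toward the floor where speech is likely absent (assumed rule)."""
    g = comb_gain(len(spec), f0_bin, floor=floor)
    g = (1.0 - sap) * g + sap * floor
    return spec * g

frame = np.fft.rfft(np.random.randn(512))   # one noisy frame's spectrum
out = enhance_frame(frame, f0_bin=16, sap=0.2)
```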

Conveyed Message in YouTube Product Review Videos: The discrepancy between sponsored and non-sponsored product review videos

  • Kim, Do Hun;Suh, Ji Hae
    • The Journal of Information Systems / v.32 no.4 / pp.29-50 / 2023
  • Purpose: The impact of online reviews is widely acknowledged, with extensive research focused on text-based reviews; reviews in video format, however, remain under-studied. To address this gap, this study explores the connection between company-sponsored product review videos and the extent of directive speech within them, and analyzes viewer sentiment expressed in video comments according to the level of directive speech used by the presenter. Design/methodology/approach: This study analyzed speech acts in review videos by sponsorship status and examined consumer reactions through sentiment analysis of comments, drawing on Speech Act theory. Findings: YouTubers who receive company sponsorship for review videos tend to employ more directive speech, and this increased use of directive speech is associated with a higher occurrence of negative consumer comments. These outcomes are valuable for research on user-generated content and natural language processing, and offer practical insights for YouTube marketing strategies.
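
For illustration only, a tiny sketch of the comment-sentiment step: group comments by the video's directive-speech level and compare average polarity. VADER is used here as a stand-in; the paper's actual tooling, measures, and data are not specified in this abstract.

```python
# Sketch: mean comment sentiment per directive-speech level (assumed data).
# Requires: nltk.download('vader_lexicon')
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from statistics import mean

sia = SentimentIntensityAnalyzer()
videos = [  # (directive-speech ratio in the review, viewer comments)
    (0.6, ["Feels like an ad, not convinced", "Way too pushy"]),
    (0.1, ["Helpful review, thanks!", "Great comparison"]),
]
for ratio, comments in videos:
    score = mean(sia.polarity_scores(c)["compound"] for c in comments)
    print(f"directive ratio {ratio:.1f} -> mean sentiment {score:+.3f}")
```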

Estimation and Weighting of Sub-band Reliability for Multi-band Speech Recognition (다중대역 음성인식을 위한 부대역 신뢰도의 추정 및 가중)

  • 조훈영;지상문;오영환
    • The Journal of the Acoustical Society of Korea / v.21 no.6 / pp.552-558 / 2002
  • Recently, based on Fletcher's model of human speech recognition (HSR), multi-band speech recognition has been intensively studied. As a new automatic speech recognition (ASR) technique, multi-band speech recognition splits the frequency domain into several sub-bands and recognizes each sub-band independently. The likelihood scores of the sub-bands are weighted according to their reliabilities and recombined to make a final decision. This approach is known to be robust in noisy environments. When the noise is stationary, a sub-band SNR can be estimated from the noise information in non-speech intervals; when the noise is non-stationary, however, the sub-band SNR is not feasible to obtain. This paper proposes inverse sub-band distance (ISD) weighting, in which the distance of each sub-band is calculated by stochastic matching of the input feature vectors and hidden Markov models, and the inverse distance is used as the sub-band weight. Experiments on 1500∼1800 Hz band-limited white noise and classical guitar sound revealed that the proposed method represents sub-band reliability effectively and improves performance in both stationary and non-stationary band-limited noise environments.
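
A minimal sketch of the ISD weighting idea: each sub-band weight is the normalized inverse of that band's feature-to-model distance, and the weighted sub-band log-likelihoods are recombined into one score. The distance values below are placeholders for the paper's stochastic matching.

```python
# Sketch of inverse sub-band distance (ISD) weighting.
import numpy as np

def isd_weights(distances, eps=1e-8):
    """Inverse sub-band distance weights, normalized to sum to 1."""
    inv = 1.0 / (np.asarray(distances, dtype=float) + eps)
    return inv / inv.sum()

def combined_score(subband_loglik, distances):
    """Recombine per-sub-band log-likelihood scores for one hypothesis."""
    return float(np.dot(isd_weights(distances), subband_loglik))

# Toy example: 4 sub-bands; band 2 is noisy, so it gets a small weight.
loglik = np.array([-120.0, -115.0, -300.0, -118.0])
dist = np.array([1.0, 1.2, 9.0, 1.1])
print(isd_weights(dist).round(3), combined_score(loglik, dist))
```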

A Study on a Non-Voice Section Detection Model among Speech Signals using CNN Algorithm (CNN(Convolutional Neural Network) 알고리즘을 활용한 음성신호 중 비음성 구간 탐지 모델 연구)

  • Lee, Hoo-Young
    • Journal of Convergence for Information Technology / v.11 no.6 / pp.33-39 / 2021
  • Speech recognition technology, combined with deep learning, is developing at a rapid pace. In particular, voice recognition services are connected to various devices such as artificial intelligence speakers, in-vehicle voice recognition, and smartphones, and the technology is used widely rather than only in specific areas of industry. Research to meet the high expectations for the technology is also being actively conducted. In natural language processing (NLP) in particular, there is a need for research on removing ambient noise and unnecessary voice signals, which strongly influence the speech recognition rate. Many domestic and foreign companies already apply the latest AI technology to such research, and work using convolutional neural network (CNN) algorithms is especially active. The purpose of this study is to identify non-voice sections within a user's speech using a convolutional neural network. Voice files (wav) from 5 speakers were collected to generate training data, and a convolutional neural network was used to build a classification model that discriminates between speech and non-voice sections. An experiment was then conducted to detect non-speech sections with the generated model, yielding an accuracy of 94%.
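
A minimal sketch of a CNN frame classifier of the kind described: a small network over log-mel spectrogram patches with a two-class (speech vs. non-speech) output. The input shape and layer sizes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SpeechNonSpeechCNN(nn.Module):
    """Tiny CNN over (1 x 40 mel bands x 32 frames) patches (assumed shape)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 10 * 8, 2)  # speech / non-speech

    def forward(self, x):                  # x: (B, 1, 40, 32)
        return self.classifier(self.features(x).flatten(1))

model = SpeechNonSpeechCNN()
logits = model(torch.randn(4, 1, 40, 32))  # 4 random patches
print(logits.argmax(dim=1))                # 0 = speech, 1 = non-speech (assumed)
```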

Comparison of Korean Real-time Text-to-Speech Technology Based on Deep Learning (딥러닝 기반 한국어 실시간 TTS 기술 비교)

  • Kwon, Chul Hong
    • The Journal of the Convergence on Culture Technology / v.7 no.1 / pp.640-645 / 2021
  • A deep learning-based end-to-end TTS system consists of a Text2Mel module that generates a spectrogram from text and a vocoder module that synthesizes speech signals from the spectrogram. By applying deep learning technology to the TTS system, the intelligibility and naturalness of synthesized speech have improved to the level of human vocalization. However, the inference speed for synthesizing speech is very slow compared to conventional methods. Inference speed can be improved by applying non-autoregressive methods, which generate speech samples in parallel, independent of previously generated samples. In this paper, we introduce FastSpeech, FastSpeech 2, and FastPitch as Text2Mel technologies, and Parallel WaveGAN, Multi-band MelGAN, and WaveGlow as vocoder technologies applying the non-autoregressive method, and we implement them to verify whether they can run in real time. Experimental results show that, by the obtained RTF, all the presented methods are sufficiently capable of real-time processing. The size of each learned model is about tens to hundreds of megabytes, except for WaveGlow, so the models can be applied to embedded environments where memory is limited.
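
The real-time check in this abstract rests on the real-time factor, RTF = synthesis time / duration of the synthesized audio, with RTF < 1 meaning faster than real time. Below is a small sketch with a placeholder synthesizer; any Text2Mel + vocoder pipeline could be dropped in.

```python
import time

def real_time_factor(synthesize, text, sample_rate=22050):
    """RTF = wall-clock synthesis time / duration of generated audio."""
    t0 = time.perf_counter()
    waveform = synthesize(text)            # returns a 1-D sample sequence
    elapsed = time.perf_counter() - t0
    return elapsed / (len(waveform) / sample_rate)

# Dummy synthesizer emitting 1 s of silence, standing in for e.g.
# FastSpeech 2 + Parallel WaveGAN (hypothetical wiring).
rtf = real_time_factor(lambda text: [0.0] * 22050, "안녕하세요")
print(f"RTF = {rtf:.4f} ({'real-time capable' if rtf < 1 else 'too slow'})")
```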