Search results for keyword: speech task

Harmonics-based Spectral Subtraction and Feature Vector Normalization for Robust Speech Recognition

  • Beh, Joung-Hoon;Lee, Heung-Kyu;Kwon, Oh-Il;Ko, Han-Seok
    • Speech Sciences, v.11 no.1, pp.7-20, 2004
  • In this paper, we propose a two-step noise compensation algorithm in feature extraction for achieving robust speech recognition. The proposed method requires no a priori information on the noisy environment and is simple to implement. First, in the frequency domain, Harmonics-based Spectral Subtraction (HSS) is applied to reduce the additive background noise and make the shape of the harmonics in the speech spectrum more pronounced. We then apply a judiciously weighted variance Feature Vector Normalization (FVN) to compensate for both channel distortion and additive noise. The weighted-variance FVN compensates for the variance mismatch in the speech and non-speech regions respectively. Representative performance evaluation using the Aurora 2 database shows that the proposed method yields a 27.18% relative improvement in accuracy under a multi-noise training task and a 57.94% relative improvement under a clean training task.
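The two-step idea above, subtracting an estimated noise spectrum and then normalizing feature variance, can be sketched in NumPy. The plain subtraction rule and all parameter names here are illustrative assumptions; the paper's harmonics-based weighting and weighted-variance FVN are not reproduced.

```python
import numpy as np

def spectral_subtract(mag, noise_est, alpha=1.0, floor=0.01):
    # Subtract an estimated noise magnitude spectrum from each frame,
    # flooring the result to avoid negative magnitudes.
    cleaned = mag - alpha * noise_est
    return np.maximum(cleaned, floor * mag)

def variance_normalize(feats, eps=1e-8):
    # Per-utterance mean/variance normalization of feature vectors
    # (an unweighted stand-in for the paper's weighted-variance FVN).
    return (feats - feats.mean(axis=0)) / (feats.std(axis=0) + eps)

# Toy magnitude spectrogram: 5 frames x 4 frequency bins
rng = np.random.default_rng(0)
mag = np.abs(rng.normal(size=(5, 4))) + 1.0
noise_est = np.full(4, 0.2)
feats = variance_normalize(spectral_subtract(mag, noise_est))
```

After normalization each feature dimension has zero mean and unit variance over the utterance, which is what makes the features comparable across recording conditions.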

The Relationship Between Speech Intelligibility and Comprehensibility for Children with Cochlear Implants (조음중증도에 따른 인공와우이식 아동들의 말명료도와 이해가능도의 상관연구)

  • Heo, Hyun-Sook;Ha, Seung-Hee
    • Phonetics and Speech Sciences, v.2 no.3, pp.171-178, 2010
  • This study examined the relationship between speech intelligibility and comprehensibility for hearing-impaired children with cochlear implants. Speech intelligibility was measured by orthographic transcription of the acoustic signal at the word and sentence levels. Comprehensibility was evaluated by examining listeners' ability to answer questions about the contents of a narrative. Speech samples were collected from 12 speakers (ages 6~15 years) with cochlear implants. For each speaker, 4 different listeners (48 listeners in total) completed 2 tasks: one involved making orthographic transcriptions and the other involved answering comprehension questions. The results were as follows: (1) Speech intelligibility and comprehensibility scores tended to increase as severity decreased. (2) Across all speakers, the relationship between speech intelligibility and comprehensibility scores was significant without considering severity. Within severity groups, however, the relationship was significant only for the moderate-severe group. These results suggest that speech intelligibility scores measured by orthographic transcription may not accurately reflect how well listeners comprehend the speech of children with cochlear implants; therefore, measures of both speech intelligibility and listener comprehension should be considered when evaluating speech ability and information-bearing capability in speakers with cochlear implants.
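Orthographic-transcription intelligibility is commonly scored as the percentage of target words transcribed correctly. A minimal scoring sketch follows; the position-by-position comparison is an assumed convention, not necessarily the one used in the study.

```python
def intelligibility_score(target_words, transcribed_words):
    # Percentage of target words the listener transcribed correctly,
    # compared position by position (a simplified scoring convention).
    correct = sum(t == h for t, h in zip(target_words, transcribed_words))
    return 100.0 * correct / len(target_words)

score = intelligibility_score(
    ["the", "cat", "sat", "down"],   # target sentence
    ["the", "hat", "sat", "down"],   # listener's transcription
)
# score is 75.0: three of four words transcribed correctly
```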

The influence of task demands on the preparation of spoken word production: Evidence from Korean

  • Choi, Tae-Hwan;Oh, Sujin;Han, Jeong-Im
    • Phonetics and Speech Sciences, v.9 no.4, pp.1-7, 2017
  • Speech production studies have shown that the preparation unit of spoken word production is language particular: onset phonemes for English and Dutch, syllables for Mandarin Chinese, and morae for Japanese. However, results have been inconsistent on whether the onset phoneme is a planning unit of spoken word production in Korean. In this study, two sets of experiments investigated possible influences of task demands on phonological preparation in native Korean adults, namely implicit priming and word naming with the form preparation paradigm. Only the word naming task, but not the implicit priming task, showed a significant onset priming effect, even though there were significant syllable priming effects in both tasks. Following the attentional theory (O'Séaghdha & Frazer, 2014), these results suggest that task demands might play a role in the absence or presence of onset priming effects in Korean. Native Korean speakers can maintain their attention to the shared onset phonemes in word naming, which is not very demanding, while they have difficulty allocating their attention to such units in the more cognitively demanding implicit priming task, even though both tasks involve accessing phonological codes. These findings demonstrate that there are cross-linguistic differences in the first selectable unit in the preparation of spoken word production, but that within a single language, the preparation unit might not be immutable.
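A form-preparation priming effect of the kind measured above is usually quantified as the naming-latency difference between heterogeneous blocks (no shared unit) and homogeneous blocks (shared onset or syllable). A minimal sketch with hypothetical latencies:

```python
import numpy as np

def priming_effect(heterogeneous_rts, homogeneous_rts):
    # Mean naming-latency difference (ms) between block types;
    # a positive value indicates facilitation from the shared unit.
    return float(np.mean(heterogeneous_rts) - np.mean(homogeneous_rts))

# Toy latencies in ms, not the study's data
effect = priming_effect([620.0, 640.0, 600.0], [590.0, 610.0, 570.0])
# effect is 30.0 ms of facilitation
```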

BERT-Based Logits Ensemble Model for Gender Bias and Hate Speech Detection

  • Sanggeon Yun;Seungshik Kang;Hyeokman Kim
    • Journal of Information Processing Systems, v.19 no.5, pp.641-651, 2023
  • Malicious hate speech and gender-biased comments are common in online communities, causing social problems. Gender bias and hate speech detection has been investigated, but it remains difficult because there are diverse ways to express such content in words. To address this problem, we attempted to detect malicious comments in a Korean hate speech dataset constructed in 2020. We explored bidirectional encoder representations from transformers (BERT)-based deep learning models utilizing hyperparameter tuning, data sampling, and logits ensembles with a label distribution. We evaluated our models in Kaggle competitions for gender bias, general bias, and hate speech detection. For gender bias detection, an F1-score of 0.7711 was achieved using an ensemble of the Soongsil-BERT and KcELECTRA models. The general bias task included the gender bias task, and the ensemble model achieved the best F1-score of 0.7166.
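A logits ensemble averages the pre-softmax outputs of several classifiers before taking the argmax. The sketch below shows the combining step only; the model outputs and shapes are toy assumptions, not values from the paper.

```python
import numpy as np

def ensemble_logits(logit_sets):
    # Average raw logits from several models, then take the argmax label;
    # averaging before softmax is one common way to combine model scores.
    mean_logits = np.stack(logit_sets).mean(axis=0)
    return mean_logits.argmax(axis=1)

# Toy logits from two hypothetical models: 3 comments, 2 labels
model_a = np.array([[2.0, 0.5], [0.1, 1.2], [1.0, 1.1]])
model_b = np.array([[1.5, 0.2], [0.4, 0.9], [0.8, 2.0]])
labels = ensemble_logits([model_a, model_b])
# labels is [0, 1, 1]
```

Averaging logits rather than hard votes lets a confident model outvote an uncertain one, as in the third comment above.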

The Primitive Representation in Speech Perception: Phoneme or Distinctive Features (말지각의 기초표상: 음소 또는 변별자질)

  • Bae, Moon-Jung
    • Phonetics and Speech Sciences, v.5 no.4, pp.157-169, 2013
  • Using a target detection task, this study compared the processing automaticity of phonemes and features in spoken syllable stimuli to determine the primitive representation in speech perception: phoneme or distinctive feature. For this, we adapted for auditory stimuli the visual search task (Treisman et al., 1992) developed to investigate the processing of visual features (e.g., color, shape, or their conjunction). In our task, distinctive features (e.g., aspiration or coronal) corresponded to visual primitive features (e.g., color and shape), and phonemes (e.g., /tʰ/) to visual conjunctive features (e.g., colored shapes). Automaticity was measured by the set size effect, the increase in reaction time as the number of distracters increased. Three experiments compared the laryngeal features (experiment 1), the manner features (experiment 2), and the place features (experiment 3) with phonemes. The results showed that distinctive features are consistently processed faster and more automatically than phonemes. Additionally, there were differences in processing automaticity among the classes of distinctive features: the laryngeal features are the most automatic, the manner features moderately automatic, and the place features the least automatic. These results are consistent with previous studies (Bae et al., 2002; Bae, 2010) showing a perceptual hierarchy of distinctive features.
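The set size effect used here as the automaticity measure is essentially the slope of reaction time over distracter count. A minimal sketch with hypothetical reaction times:

```python
import numpy as np

def set_size_slope(set_sizes, reaction_times):
    # Fit RT = slope * set_size + intercept; a near-zero slope suggests
    # automatic (parallel) processing, a steep slope suggests serial search.
    slope, intercept = np.polyfit(set_sizes, reaction_times, 1)
    return slope

# Toy data: RT grows by about 20 ms per extra distracter
slope = set_size_slope([1, 3, 5, 7], [500.0, 540.0, 580.0, 620.0])
```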

Effects of number of letters on second language sound length

  • Jeong-Im Han
    • Phonetics and Speech Sciences, v.16 no.3, pp.25-31, 2024
  • The present study replicated and extended previous research investigating whether orthographic forms, such as a single letter or a digraph representing the same sound, affect sound duration in L2 production. Results of a previous study (Han et al., 2024) showed that Korean learners produced the same English vowel with a short duration when it was spelled with a single letter and a long duration when it was spelled with a digraph. This variation in duration did not appear when producing English consonants with various spellings. However, these results may be attributable to the task type, namely the delayed repetition task, which might have prevented direct imitation from sensory memory. To test whether the overt presentation of letters shows orthographic effects for consonants as well as vowels, this study employed a read-aloud task. It further examined whether individual differences in proficiency, measured by vocabulary size, influenced the magnitude of orthographic effects in the production of English vowels by Korean learners. The present results replicated those from the delayed repetition task, suggesting that the orthographic effects shown in previous research were not attributable to the task type employed to evaluate L2 production. We also found that individual differences in vocabulary size are not strongly related to the influence of orthography on vowel production.

Joint streaming model for backchannel prediction and automatic speech recognition

  • Yong-Seok Choi;Jeong-Uk Bang;Seung Hi Kim
    • ETRI Journal, v.46 no.1, pp.118-126, 2024
  • In human conversations, listeners often utilize brief backchannels such as "uh-huh" or "yeah." Timely backchannels are crucial to understanding and increasing trust among conversational partners. In human-machine conversation systems, users can engage in natural conversations when a conversational agent generates backchannels like a human listener. We propose a method that simultaneously predicts backchannels and recognizes speech in real time. We use a streaming transformer and adopt multitask learning for concurrent backchannel prediction and speech recognition. The experimental results demonstrate the superior performance of our method compared with previous works while maintaining a similar single-task speech recognition performance. Owing to the extremely imbalanced training data distribution, the single-task backchannel prediction model fails to predict any of the backchannel categories, and the proposed multitask approach substantially enhances the backchannel prediction performance. Notably, in the streaming prediction scenario, the performance of backchannel prediction improves by up to 18.7% compared with existing methods.
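Multitask learning of the kind described typically optimizes a weighted sum of the per-task losses. A minimal sketch follows; the weighting scheme and the value of lam are assumptions, not taken from the paper.

```python
def multitask_loss(asr_loss, backchannel_loss, lam=0.5):
    # Weighted sum of the ASR loss and the backchannel-prediction loss;
    # lam trades off the two objectives during joint training.
    return (1.0 - lam) * asr_loss + lam * backchannel_loss

# Toy per-batch loss values
total = multitask_loss(asr_loss=2.0, backchannel_loss=0.8, lam=0.3)
```

Sharing the streaming encoder while combining the losses this way is what lets the rare backchannel labels borrow representational strength from the much larger ASR signal.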

Combining multi-task autoencoder with Wasserstein generative adversarial networks for improving speech recognition performance (음성인식 성능 개선을 위한 다중작업 오토인코더와 와설스타인식 생성적 적대 신경망의 결합)

  • Kao, Chao Yuan;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea, v.38 no.6, pp.670-677, 2019
  • As the presence of background noise in an acoustic signal degrades the performance of speech or acoustic event recognition, it remains challenging to extract noise-robust acoustic features from a noisy signal. In this paper, we propose a combined structure of a Wasserstein Generative Adversarial Network (WGAN) and a MultiTask AutoEncoder (MTAE) as a deep learning architecture that integrates the strengths of the MTAE and the WGAN so that it estimates not only noise but also speech features from a noisy acoustic source. The proposed MTAE-WGAN structure estimates the speech signal and the residual noise by employing a gradient penalty and a weight initialization method for the Leaky Rectified Linear Unit (LReLU) and Parametric ReLU (PReLU). The MTAE-WGAN structure with the adopted gradient-penalty loss function enhances the speech features and subsequently achieves substantial Phoneme Error Rate (PER) improvements over stand-alone Deep Denoising Autoencoder (DDAE), MTAE, Redundant Convolutional Encoder-Decoder (R-CED), and Recurrent MTAE (RMTAE) models for robust speech recognition.
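The gradient penalty mentioned above is the standard WGAN-GP term, which pushes the norm of the critic's gradient at interpolated samples toward 1. A minimal NumPy sketch of the penalty term itself; the autodiff machinery that produces the gradients is omitted, and the toy gradients are assumptions.

```python
import numpy as np

def gradient_penalty(critic_grads, lam=10.0):
    # WGAN-GP term: penalize the critic's gradient norm for deviating
    # from 1 at interpolated samples; lam is the usual penalty weight.
    norms = np.linalg.norm(critic_grads, axis=1)
    return lam * np.mean((norms - 1.0) ** 2)

# Toy gradients for two interpolated samples (norms 1.0 and 5.0)
penalty = gradient_penalty(np.array([[0.6, 0.8], [3.0, 4.0]]))
# penalty is 10 * mean([0, 16]) = 80.0
```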

A longitudinal study on the development of English phonological awareness in preschool children (어린이집 유아의 영어 음운 인식 발달 종단 연구)

  • Chung, Hyunsong
    • Phonetics and Speech Sciences, v.10 no.4, pp.53-66, 2018
  • This study investigated the development of English phonological awareness in preschool children based on a longitudinal study. It carried out a phonological matching task, mispronunciation task, articulation test, explicit phoneme awareness task, rhyme matching task, and initial-phoneme matching task for three-, four- and five-year-old children. A letter knowledge test was also added to the tests for the 5-year-old children. The results revealed that the development of phonological awareness follows a progression of syllable, then onset and rhyme, then phoneme. It was also revealed that language skills such as vocabulary, detection of mispronunciations, and articulation were partially related to the development of phoneme awareness. Finally, we also found that letter knowledge partially affected the children's development of phonological awareness.

An Analysis of Science-gifted Elementary Students' Perception of Speech and the Relationship between Their Voluntary Speech and Scientific Creativity (초등과학영재학생의 발표에 대한 인식 및 발표의 자발성과 과학창의성의 관계 분석)

  • Kim, Minju;Lim, Chaeseong
    • Journal of Korean Elementary Science Education, v.38 no.3, pp.331-344, 2019
  • This study analysed science-gifted elementary students' perception of speech in general school classes, school science classes, and science-gifted classes, and the relationship between their voluntary speech and scientific creativity. For this, 39 fifth-graders at the Science-Gifted Education Center of the Seoul Metropolitan Office of Education in Korea were asked about their frequency of voluntary speech in each class situation, the reasons for such behavior, and their general opinions about speech. The researchers also collected the teachers' observations of students' speech in class. To obtain scores for students' scientific creativity, four tasks on different subjects were presented, and the resulting scores were used for correlation analysis with the students' frequency of speech. The main findings are as follows: First, science-gifted elementary students tended to be passive in science-gifted classes compared with general school and school science classes. Second, the main reason for the low frequency of students' speech in school classes is that they do not have many opportunities to make presentations. Third, a survey of students' general thoughts on speech showed that more students wanted to speak voluntarily in class than the opposite. Fourth, the four scientific creativity tasks showed little correlation with one another. Fifth, the correlations between the frequency of voluntary speech and scientific creativity scores were mostly low, with significant results only for the plant task. Sixth, the correlations between the frequency of voluntary speech and the two components of scientific creativity, originality and usefulness, were also mostly low, but both were significant for the plant task, with originality showing a higher correlation than usefulness. Based on these results, the study discusses the meanings and implications of students' voluntary speech for elementary science education and creativity education.
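The correlation analyses reported above are plain Pearson correlations between speech frequency and creativity scores. A minimal sketch with toy values, not the study's data:

```python
import numpy as np

def speech_creativity_correlation(speech_freq, creativity_scores):
    # Pearson correlation between voluntary-speech frequency and
    # scientific-creativity scores; values near 0 indicate a weak link.
    return float(np.corrcoef(speech_freq, creativity_scores)[0, 1])

# Toy values only
r = speech_creativity_correlation([1, 2, 3, 4, 5], [2.0, 1.5, 3.0, 3.5, 4.0])
```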