
A Speech Homomorphic Encryption Scheme with Less Data Expansion in Cloud Computing

  • Shi, Canghong;Wang, Hongxia;Hu, Yi;Qian, Qing;Zhao, Hong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • Vol.13 No.5 / pp.2588-2609 / 2019
  • Speech homomorphic encryption has become one of the key components of secure speech storage in public cloud computing. Its major problem is the huge data expansion of the speech ciphertext. To address this issue, this paper presents a speech homomorphic encryption scheme with less data expansion, which is a probabilistic, additively homomorphic cryptosystem. In the proposed scheme, the original digital speech, together with some selected random numbers, is first grouped to form a series of speech matrices. A proposed matrix encryption method is then employed to encrypt these speech matrices. After that, the mutual information among speech ciphertext samples is reduced to limit data expansion. Performance analysis and experimental results show that the proposed scheme is additively homomorphic, and that it not only resists statistical analysis attacks but also eliminates some signal characteristics of the original speech. In addition, compared with the Paillier homomorphic cryptosystem, the proposed scheme has less data expansion and lower computational complexity. Furthermore, its time consumption is almost the same on a smartphone as on a PC. Thus, the proposed scheme is well suited to secure speech storage in public cloud computing.
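The additive homomorphism that the abstract compares against can be illustrated with a toy Paillier cryptosystem (the paper's own matrix scheme is not reproduced here; the small fixed primes below are purely illustrative and far too small for real use):

```python
import math
import random

def keygen(p=1789, q=1867):
    """Toy Paillier key pair with g = n + 1, so mu = lam^-1 mod n."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)
    return (n,), (lam, mu, n)

def enc(pub, m):
    """Enc(m) = (n+1)^m * r^n mod n^2 with a fresh random r coprime to n."""
    (n,) = pub
    n2 = n * n
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def dec(priv, c):
    """Dec(c) = L(c^lam mod n^2) * mu mod n, with L(x) = (x - 1) // n."""
    lam, mu, n = priv
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

pub, priv = keygen()
c1, c2 = enc(pub, 120), enc(pub, 345)
c_sum = c1 * c2 % (pub[0] ** 2)
# multiplying ciphertexts decrypts to the sum of the plaintexts
assert dec(priv, c_sum) == 120 + 345
```

The scheme is probabilistic (each encryption draws a fresh `r`), which is what lets identical speech samples map to different ciphertexts and resist the statistical attacks the abstract mentions.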

Study of Boundary Tone according to Speech Rate in Korean (발화 속도에 따른 국어의 경계 성조 연구)

  • Park Mi Young
    • Proceedings of the KSPS conference
    • November 2002 / pp.73-76
  • The purpose of this paper is to investigate the Korean boundary tone of each sentence type, and listeners' perception of the speaker's attitude, at three different speech rates. According to previous studies, the meaning of Korean intonation is determined by the boundary tone. Our experimental results also show that each Korean sentence type has a preferred boundary tone. However, the boundary tone of a sentence type is not substantially affected by speech rate, whereas changes among the three speech-rate patterns do influence listeners' perceptual responses. The relationship between the pitch contour of the boundary tone and speech rate is not significant.

Development of Speech Training Aids Using Vocal Tract Profile (조음도를 이용한 발음훈련기기의 개발)

  • 박상희;김동준;이재혁;윤태성
    • The Transactions of the Korean Institute of Electrical Engineers
    • Vol.41 No.2 / pp.209-216 / 1992
  • Deaf people train articulation by observing a tutor's mouth, by tactually sensing the motions of the vocal organs, or by using speech training aids. Existing speech training aids for the deaf can measure only a single speech parameter, or display only frequency spectra as histograms or in pseudo-color. In this study, a speech training aid is developed that displays the subject's articulation as a cross section of the vocal organs, together with other speech parameters, in a single system, so that the subject knows what to correct. To this end, the speech production mechanism is first modeled as an AR process in order to estimate the articulatory motions of the vocal organs from the speech signal. Next, a vocal tract profile model based on LP analysis is constructed. Using this model, articulatory motions for Korean vowels are estimated and displayed as vocal tract profile graphics.
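The LP-analysis step behind such a vocal tract profile can be sketched as follows. This is a generic autocorrelation-method Levinson-Durbin implementation with a lossless-tube area conversion, not the authors' exact model; the AR(2) sanity check at the end uses synthetic data:

```python
import numpy as np

def lpc_reflection(frame, order):
    """Prediction-error filter A(z) = 1 + a1*z^-1 + ... and reflection
    coefficients, via the Levinson-Durbin recursion on autocorrelations."""
    n = len(frame)
    r = np.correlate(frame, frame, "full")[n - 1 : n + order]  # lags 0..order
    a = np.zeros(order + 1)
    a[0] = 1.0
    k = np.zeros(order)
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        ki = -acc / err
        prev = a.copy()
        for j in range(1, i):
            a[j] = prev[j] + ki * prev[i - j]
        a[i] = ki
        k[i - 1] = ki
        err *= 1.0 - ki * ki
    return a, k

def tube_areas(k, lips_area=1.0):
    """Lossless-tube cross-section areas from reflection coefficients,
    normalized at the lips (one common sign convention; texts differ)."""
    areas = [lips_area]
    for ki in k:
        areas.append(areas[-1] * (1.0 + ki) / (1.0 - ki))
    return np.array(areas)

# sanity check on a synthetic AR(2) "speech" signal
rng = np.random.default_rng(0)
e = rng.standard_normal(6000)
x = np.zeros(6000)
for n in range(2, 6000):
    x[n] = 0.5 * x[n - 1] - 0.25 * x[n - 2] + e[n]
a, k = lpc_reflection(x[1000:], order=2)   # a should approach [1, -0.5, 0.25]
areas = tube_areas(k)
```

The area profile from `tube_areas` is the kind of data a cross-sectional vocal-tract display can be driven from, one frame at a time.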

An Analysis of Science-gifted Elementary Students' Perception of Speech and the Relationship between Their Voluntary Speech and Scientific Creativity (초등과학영재학생의 발표에 대한 인식 및 발표의 자발성과 과학창의성의 관계 분석)

  • Kim, Minju;Lim, Chaeseong
    • Journal of Korean Elementary Science Education
    • Vol.38 No.3 / pp.331-344 / 2019
  • This study aims to analyse science-gifted elementary students' perception of speech in general school classes, school science classes, and science-gifted classes, and the relationship between their voluntary speech and scientific creativity. For this, 39 fifth-graders in the Science-Gifted Education Center at the Seoul Metropolitan Office of Education in Korea were asked about the frequency of their voluntary speech in each class situation, the reasons for that behavior, and their general opinions about speech. The researchers also collected the teachers' observations of students' speech in class. To obtain scores for students' scientific creativity, four tasks on different subjects were presented, and the resulting scores were used in a correlation analysis with the students' frequency of speech. The main findings are as follows: First, science-gifted elementary students tended to be passive in science-gifted classes compared with general school and school science classes. Second, the main reason for the low frequency of students' speech in school classes is that they do not have many opportunities to make presentations. Third, a survey of students' general thoughts on speech showed that more students wanted to speak voluntarily in class than the opposite. Fourth, the four scientific creativity tasks showed little correlation with one another. Fifth, the correlations between the frequency of voluntary speech and scientific creativity scores were mostly low, with significant results only for the plant task. Sixth, the correlations between the frequency of voluntary speech and the two components of scientific creativity, originality and usefulness, were also mostly low, but significant results for both were found in the plant task, with originality showing a higher correlation than usefulness. Based on these results, this study discusses the meaning and implications of students' voluntary speech for elementary science education and creativity education.

Relationship between Speech Perception in Noise and Phonemic Restoration of Speech in Noise in Individuals with Normal Hearing

  • Vijayasarathy, Srikar;Barman, Animesh
    • Journal of Audiology & Otology
    • Vol.24 No.4 / pp.167-173 / 2020
  • Background and Objectives: Top-down restoration of distorted speech, tapped as phonemic restoration of speech in noise, may be a useful tool for understanding the robustness of perception in adverse listening situations. However, the relationship between phonemic restoration and speech perception in noise is not empirically clear. Subjects and Methods: Twenty adults (40-55 years) with normal audiometric findings took part in the study. Sentence perception in noise was measured at various signal-to-noise ratios (SNRs) to estimate the SNR yielding a 50% score. Performance was also measured for sentences interrupted with silence and for sentences interrupted by speech noise at -10, -5, 0, and 5 dB SNR. The phonemic restoration magnitude was computed by subtracting the score in the silence-interruption condition from the score in the noise-interruption condition. Results: Fairly robust improvements in speech intelligibility were found when the sentences were interrupted with speech noise instead of silence. The improvement with increasing noise level was non-monotonic and reached a maximum at -10 dB SNR. A significant correlation was found between speech perception in noise and phonemic restoration of sentences interrupted with -10 dB SNR speech noise. Conclusions: It is possible that perception of speech in noise is associated with top-down processing of speech, tapped as phonemic restoration of interrupted speech. More research with a larger sample size is indicated, since restoration is affected by the type of speech material and noise used, age, working memory, and linguistic proficiency, and shows large individual variability.
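The restoration measure described above amounts to a simple subtraction per SNR condition; a sketch with hypothetical scores (not the study's data):

```python
# Hypothetical group-mean scores (proportion of words correct), illustrating
# how restoration magnitude is computed: noise-interrupted score minus the
# silence-interrupted baseline, at each SNR of the filler noise.
silence_interrupted = 0.42                                     # hypothetical
noise_interrupted = {-10: 0.68, -5: 0.63, 0: 0.57, 5: 0.51}    # hypothetical

restoration = {snr: round(score - silence_interrupted, 2)
               for snr, score in noise_interrupted.items()}
best_snr = max(restoration, key=restoration.get)  # SNR with maximal restoration
```

With these invented numbers, restoration peaks at -10 dB SNR, mirroring the non-monotonic pattern the abstract reports.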

English-Korean speech translation corpus (EnKoST-C): Construction procedure and evaluation results

  • Jeong-Uk Bang;Joon-Gyu Maeng;Jun Park;Seung Yun;Sang-Hun Kim
    • ETRI Journal
    • Vol.45 No.1 / pp.18-27 / 2023
  • We present an English-Korean speech translation corpus, named EnKoST-C. End-to-end model training for speech translation tasks often suffers from a lack of parallel data, such as speech data in the source language and equivalent text data in the target language. Most available public speech translation corpora were developed for European languages, and there is currently no public corpus for English-Korean end-to-end speech translation. We therefore created EnKoST-C, centered on TED Talks. In this process, we enhanced the sentence alignment approach using subtitle time information and bilingual sentence embedding information. As a result, we built a 559-h English-Korean speech translation corpus. The proposed sentence alignment approach achieved an excellent f-measure of 0.96. We also report the baseline performance of an English-Korean speech translation model trained on EnKoST-C. EnKoST-C is freely available on a Korean government open data hub site.
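A much-simplified sketch of subtitle-time-based alignment follows. The actual EnKoST-C pipeline also uses bilingual sentence-embedding similarity; the segment data, function names, and overlap threshold here are invented for illustration:

```python
def overlap(a, b):
    """Temporal overlap (seconds) of two (start, end) subtitle spans."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def align_by_time(src_segs, tgt_segs, min_overlap=0.5):
    """Greedily pair each source segment with the target segment whose
    subtitle timing overlaps it most, if the overlap is long enough."""
    pairs = []
    for span, text in src_segs:
        j = max(range(len(tgt_segs)), key=lambda j: overlap(span, tgt_segs[j][0]))
        if overlap(span, tgt_segs[j][0]) >= min_overlap:
            pairs.append((text, tgt_segs[j][1]))
    return pairs

# toy subtitle segments: ((start_sec, end_sec), text)
src = [((0.0, 2.0), "hello everyone"), ((2.0, 5.0), "thank you")]
tgt = [((0.0, 2.1), "여러분 안녕하세요"), ((2.2, 5.0), "감사합니다")]
pairs = align_by_time(src, tgt)
```

Time overlap alone is noisy when subtitles are re-segmented across languages, which is presumably why the paper combines it with embedding similarity.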

A Study on the Speech Recognition of Korean Phonemes Using Recurrent Neural Network Models (순환 신경망 모델을 이용한 한국어 음소의 음성인식에 대한 연구)

  • 김기석;황희영
    • The Transactions of the Korean Institute of Electrical Engineers
    • Vol.40 No.8 / pp.782-791 / 1991
  • In pattern-recognition fields such as speech recognition, several new techniques using artificial neural network models have been proposed and implemented. In particular, the multilayer perceptron model has been shown to be effective in static speech pattern recognition. Speech, however, has dynamic, temporal characteristics, and the most important issue in implementing continuous speech recognition systems with artificial neural network models is learning the dynamic characteristics, distributed cues, and contextual effects that result from those temporal characteristics. The recurrent multilayer perceptron model is known to be able to learn sequences of patterns. In this paper, we present the results of applying this recurrent model, which can learn the temporal characteristics of speech, to phoneme recognition. The test data consist of 144 vowel+consonant+vowel speech chains made up of 4 Korean monophthongs and 9 Korean plosive consonants. The input parameters of the artificial neural network model are FFT coefficients, residual error, and zero-crossing rates. The baseline model showed recognition rates of 91% for vowels and 71% for plosive consonants for one male speaker. Various further experiments yielded better recognition rates than the existing multilayer perceptron model, showing the recurrent model to be better suited to speech recognition. The possibility of using recurrent models for speech recognition was also examined by changing the configuration of this baseline model.
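As a rough illustration of the recurrent-model idea (not the paper's actual architecture, features, or training procedure), a minimal Elman-style recurrent layer over frame-wise feature vectors can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy recurrent layer: the hidden state carries temporal context from frame
# to frame; a softmax output scores phoneme classes at each frame.
n_in, n_hid, n_out = 16, 32, 13   # e.g. 4 vowels + 9 plosives = 13 classes
Wx = rng.normal(0, 0.1, (n_hid, n_in))    # input-to-hidden weights
Wh = rng.normal(0, 0.1, (n_hid, n_hid))   # hidden-to-hidden (recurrent) weights
Wo = rng.normal(0, 0.1, (n_out, n_hid))   # hidden-to-output weights

def forward(frames):
    """Run the recurrent layer over a sequence of frame feature vectors,
    returning per-frame class probabilities."""
    h = np.zeros(n_hid)
    out = []
    for x in frames:
        h = np.tanh(Wx @ x + Wh @ h)      # recurrent state update
        z = Wo @ h
        p = np.exp(z - z.max())           # numerically stable softmax
        out.append(p / p.sum())
    return np.array(out)

probs = forward(rng.normal(size=(20, n_in)))   # 20 frames of dummy features
```

Because `h` feeds back into itself, a class decision at one frame can depend on the preceding frames, which is exactly the temporal-context property the abstract argues static multilayer perceptrons lack.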

Gender difference in speech intelligibility using speech intelligibility tests and acoustic analyses

  • Kwon, Ho-Beom
    • The Journal of Advanced Prosthodontics
    • Vol.2 No.3 / pp.71-76 / 2010
  • PURPOSE. The purpose of this study was to compare men with women in terms of speech intelligibility, to investigate the validity of objective acoustic parameters related to speech intelligibility, and to establish reference data for future studies in various fields of prosthodontics. MATERIALS AND METHODS. Twenty men and women served as subjects in the present study. After the sample sounds were recorded, speech intelligibility tests by three speech pathologists and acoustic analyses were performed. The speech intelligibility test scores and acoustic parameters such as fundamental frequency, fundamental frequency range, formant frequencies, formant ranges, vowel working space area, and vowel dispersion were compared between men and women. In addition, the correlations between the speech intelligibility values and the acoustic variables were analyzed. RESULTS. Women showed significantly higher speech intelligibility scores than men, and there were significant differences between men and women in most of the acoustic parameters used in the present study. However, the correlations between the speech intelligibility scores and the acoustic parameters were low. CONCLUSION. The speech intelligibility test and acoustic parameters used in the present study were effective in differentiating male from female voices, and their values may be used in future studies related to patients involved with maxillofacial prosthodontics. However, further studies are needed on the correlation between speech intelligibility tests and objective acoustic parameters.
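One of the listed parameters, the vowel working space area, is conventionally computed as the area of the polygon spanned by corner vowels in (F1, F2) space; a sketch using the shoelace formula, with hypothetical formant values (not this study's measurements):

```python
def vowel_space_area(formants):
    """Area of the polygon spanned by corner vowels in (F1, F2) space,
    via the shoelace formula. `formants` is an ordered list of (F1, F2)
    in Hz, traversed around the polygon; result is in Hz^2."""
    area = 0.0
    n = len(formants)
    for i in range(n):
        x1, y1 = formants[i]
        x2, y2 = formants[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# hypothetical corner-vowel (F1, F2) values for one speaker,
# roughly /i/, /ae/, /a/, /u/ in order around the quadrilateral
area = vowel_space_area([(270, 2290), (660, 1720), (730, 1090), (300, 870)])
```

A larger area generally indicates more peripheral, better-separated vowels, which is one mechanism by which it could relate to intelligibility.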

The Relationship Between Speech Intelligibility and Comprehensibility for Children with Cochlear Implants (조음중증도에 따른 인공와우이식 아동들의 말명료도와 이해가능도의 상관연구)

  • Heo, Hyun-Sook;Ha, Seung-Hee
    • Phonetics and Speech Sciences
    • Vol.2 No.3 / pp.171-178 / 2010
  • This study examined the relationship between speech intelligibility and comprehensibility for hearing-impaired children with cochlear implants. Speech intelligibility was measured by orthographic transcription of the acoustic signal at the word and sentence levels. Comprehensibility was evaluated by examining listeners' ability to answer questions about the contents of a narrative. Speech samples were collected from 12 speakers (ages 6~15 years) with cochlear implants. For each speaker, 4 different listeners (48 listeners in total) completed 2 tasks: making orthographic transcriptions and answering comprehension questions. The results were as follows: (1) Speech intelligibility and comprehensibility scores tended to increase as severity decreased. (2) Across all speakers, without considering severity, the relationship between speech intelligibility and comprehensibility scores was significant. Within severity groups, however, the relationship between comprehensibility and speech intelligibility was significant only for the moderate-severe group. These results suggest that speech intelligibility scores measured by orthographic transcription may not accurately reflect how well listeners comprehend the speech of children with cochlear implants; therefore, measures of both speech intelligibility and listener comprehension should be considered in evaluating the speech ability and information-bearing capability of speakers with cochlear implants.

Characteristics of the auditory evaluation of good impression using speech manipulation scripts (말소리 변조 스크립트를 이용한 호감도 청취평가 특징)

  • Kwon, Soonbok
    • Phonetics and Speech Sciences
    • Vol.8 No.4 / pp.131-138 / 2016
  • This study analyzes the characteristics of good impression using speech manipulation scripts and investigates the characteristics of preferred speech voices. Forty male and female college students participated in this study; all had been exposed to the Gyeongsang dialect spoken by their friends and family for more than 15 years. Two sample voices (1 male and 1 female), considered to give a good impression, were selected for voice analysis. Two students were asked to read the sample paragraph 'Walking', and their voice samples were analyzed with Praat. The collected speech data were manipulated into 4 different sets by changing pitch level, loudness, and speech rate. First, both men and women received a better impression from the pitch-lowered sound than from the original. Second, men tended to receive a better impression from a slightly louder voice than from the natural one. Third, men were often more drawn to a voice at a slightly faster speech rate than at the original rate. Overall, both male and female listeners favored lower pitch over the original pitch. Men tended to prefer a louder voice, while women preferred a less loud one. Men received a better impression at a lower speech rate, but women at a faster speech rate.
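The study performed its manipulations with Praat scripts; purely to illustrate two of the manipulated dimensions, loudness and rate changes on a raw waveform might look like the naive numpy sketch below. This is not the study's method, and note that plain resampling shifts pitch and rate together, unlike Praat's PSOLA-based manipulation, which changes them independently:

```python
import numpy as np

def change_loudness(x, gain_db):
    """Scale a waveform by a gain in decibels (+6 dB roughly doubles amplitude)."""
    return x * 10 ** (gain_db / 20)

def change_rate(x, ratio):
    """Change playback rate by linear-interpolation resampling;
    ratio > 1 shortens the signal (faster speech, but also higher pitch)."""
    idx = np.arange(0, len(x) - 1, ratio)
    return np.interp(idx, np.arange(len(x)), x)

tone = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)  # 1 s of a 220 Hz tone
louder = change_loudness(tone, 6.0)
faster = change_rate(tone, 1.25)   # ~20% shorter
```

Keeping pitch fixed while changing rate (or vice versa), as the study's stimuli require, needs a time-scale modification algorithm such as PSOLA rather than resampling.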