• Title/Summary/Keyword: speech situation

Search Results: 122

A Study of Speech Control Tags Based on Semantic Information of a Text (텍스트의 의미 정보에 기반을 둔 음성컨트롤 태그에 관한 연구)

  • Chang, Moon-Soo;Chung, Kyeong-Chae;Kang, Sun-Mee
    • Speech Sciences
    • /
    • v.13 no.4
    • /
    • pp.187-200
    • /
    • 2006
  • Speech synthesis technology is widely used, and its application area is broadening to automatic response services, learning systems for people with disabilities, and so on. However, the sound quality of speech synthesizers has not yet reached a level that satisfies users. Existing synthesizers generate rhythm only from interval information such as spaces and commas, or from a few punctuation marks such as question marks and exclamation marks, so it is difficult to produce natural human rhythm even with a massive speech database. One remedy is to select rhythm after language processing that draws on higher-level information. This paper proposes a method for generating tags that control rhythm by analyzing the meaning of a sentence together with speech situation information. We use Systemic Functional Grammar (SFG) [4], which analyzes sentence meaning with speech situation information, considering the preceding sentence, the situation of the conversation, the relationships among the participants, etc. In this study, we generate Semantic Speech Control Tags (SSCT) from the results of the SFG meaning analysis and speech waveform analysis. (A hypothetical illustration of such tags follows this entry.)

  • PDF
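
A hypothetical illustration of what such semantic control tags could look like, in the spirit of SSML's <prosody> element. The tag name and attributes below are mine; the abstract does not reproduce the paper's actual SSCT format.

```python
# Hypothetical sketch: rendering semantic-analysis results as rhythm-control
# tags. The tag name "ssct" and its attributes are illustrative, not the
# paper's specification.
def tag_sentence(text: str, mood: str, pitch: str, rate: str) -> str:
    """Wrap a sentence in a control tag derived from semantic analysis
    (e.g. an SFG mood such as declarative or interrogative) and from
    waveform analysis (pitch contour, speaking rate)."""
    return f'<ssct mood="{mood}" pitch="{pitch}" rate="{rate}">{text}</ssct>'

# Example: an interrogative sentence receives a rising pitch contour.
print(tag_sentence("Are you coming?", mood="interrogative",
                   pitch="rising", rate="medium"))
```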

Current Status and Perspectives of Telepractice in Voice and Speech Therapy (비대면 음성언어치료의 현황과 전망)

  • Lee, Seung Jin
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics
    • /
    • v.33 no.3
    • /
    • pp.130-141
    • /
    • 2022
  • Voice and speech therapy can be performed in various ways depending on the situation, although it is generally performed face to face. Telepractice refers to the provision of specialized voice and speech therapy by speech-language pathologists, who apply telecommunication technology from a remote location for assessment, therapy, and counseling. Recently, owing to the pandemic and the active use of non-face-to-face platforms, interest in the telepractice of voice and speech therapy has increased. Moreover, a growing body of literature advocates its clinical usefulness and non-inferiority to traditional face-to-face intervention. This review summarizes the existing discussions, guidelines, and preliminary studies on non-face-to-face voice and speech therapy, and provides recommendations on tools for telepractice.

An Approach to Chinese Conversations in the Textbook based on Social Units of Communication (중국어 회화문에 대한 의사소통 분석단위에 기초한 접근)

  • Park, Chan-Wook
    • Cross-Cultural Studies
    • /
    • v.49
    • /
    • pp.127-150
    • /
    • 2017
  • The objective of this study is to classify the conversations in Chinese textbooks into the four social units (speech community, speech situation, speech event, speech act) proposed by Dell Hymes (1972), and to suggest how the results can be applied to the Chinese-language curriculum. To this end, the study treats every conversation in the textbooks as a coordination of specific speech events and speech acts under specific situations. It introduces Hymes's (1972) concept of the social unit and elucidates the role of each unit in conversation. The study thus reconsiders the conversations recorded in the textbooks not from a morphological or syntactic viewpoint but from a speech perspective, and finally suggests how the results can be used effectively in Chinese conversation classes.
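
As an aside for readers unfamiliar with Hymes's taxonomy, the four units nest inside one another. The encoding below is my own illustration of that nesting, not anything from the paper.

```python
# Illustrative encoding (mine, not the paper's) of Hymes's (1972) nested
# units: a speech community contains speech situations, which contain
# speech events, which are composed of speech acts.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SpeechAct:
    utterance: str           # e.g. a greeting, a request

@dataclass
class SpeechEvent:
    name: str                # e.g. "ordering food"
    acts: List[SpeechAct] = field(default_factory=list)

@dataclass
class SpeechSituation:
    name: str                # e.g. "a meal at a restaurant"
    events: List[SpeechEvent] = field(default_factory=list)

@dataclass
class SpeechCommunity:
    name: str                # e.g. "Mandarin speakers in Beijing"
    situations: List[SpeechSituation] = field(default_factory=list)

# A textbook dialogue can then be annotated as one situation containing
# specific events and acts, which is how the study treats each conversation.
```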

A Real-Time Implementation of Speech Recognition System Using Oak DSP core in the Car Noise Environment (자동차 환경에서 Oak DSP 코어 기반 음성 인식 시스템 실시간 구현)

  • Woo, K.H.;Yang, T.Y.;Lee, C.;Youn, D.H.;Cha, I.H.
    • Speech Sciences
    • /
    • v.6
    • /
    • pp.219-233
    • /
    • 1999
  • This paper presents a real-time implementation of a speaker-independent speech recognition system based on a discrete hidden Markov model (DHMM). The system is developed for a car navigation product, as a step toward an on-chip VLSI speech recognition system built on the fixed-point Oak DSP core of DSP GROUP Ltd. We analyze the recognition procedure in C in order to implement fixed-point real-time algorithms. Based on this analysis, we restructure the algorithms so that they operate in real time and deliver the recognition result the moment the speech ends, by processing all recognition routines within a single frame. Car noise is colored noise concentrated heavily in the low-frequency band below 400 Hz. For noise-robust processing, high-pass filtering and liftering of the distance measure between feature vectors are applied to the recognition system (a sketch of these two steps follows this entry). Recognition experiments were performed on twelve isolated command words. The recognition rates of the baseline recognizer were 98.68% with the car stopped and 80.7% with the car running; with the noise-processing methods, the rate in the running situation improved to 89.04%.

  • PDF
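
The two noise-robustness steps the abstract names, high-pass filtering below roughly 400 Hz and liftering applied to the cepstral distance measure, can be sketched as follows. This is a minimal floating-point illustration with common parameter choices, not the paper's fixed-point Oak DSP code.

```python
# Minimal sketch of the two noise-robustness steps described in the abstract.
import numpy as np
from scipy.signal import butter, lfilter

def highpass(signal, fs=8000, cutoff=400.0, order=4):
    """Suppress low-frequency car noise (the abstract reports it is
    concentrated below 400 Hz)."""
    b, a = butter(order, cutoff, btype="highpass", fs=fs)
    return lfilter(b, a, signal)

def lifter_weights(n_ceps, L=22):
    """Standard sinusoidal lifter; L=22 is a common default, not a value
    taken from the paper."""
    n = np.arange(n_ceps)
    return 1.0 + (L / 2.0) * np.sin(np.pi * n / L)

def liftered_distance(c1, c2, weights):
    """Weighted Euclidean distance between two cepstral vectors, so that
    noise-sensitive low-order coefficients contribute less."""
    d = weights * (np.asarray(c1) - np.asarray(c2))
    return float(np.sqrt(np.sum(d * d)))

# Example: compare two 13-dimensional cepstral vectors.
w = lifter_weights(13)
print(liftered_distance(np.ones(13), np.zeros(13), w))
```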

Intelligent Speech Recognition System based on Situation Awareness for u-Green City (u-Green City 구현을 위한 상황인지기반 지능형 음성인식 시스템)

  • Cho, Young-Im;Jang, Sung-Soon
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.12
    • /
    • pp.1203-1208
    • /
    • 2009
  • A Green-IT-based u-City is a ubiquitous city built around the Green IT concept. Adopting situation awareness can reduce the processing load of Green IT services. For example, recognizing every speech sound captured by CCTV in a u-City environment takes considerable processing time and cost, whereas recognizing only emergency sounds costs far less. For dynamically detecting emergency states through CCTV, we therefore propose an improved speech recognition system. To this end, we adopt an HMM (hidden Markov model) for feature extraction, and we apply a Wiener filter to eliminate noise from the audio arriving from CCTV in the u-City environment.
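
A minimal sketch of spectral-domain Wiener filtering of the kind the abstract invokes. The frame length, overlap, and noise-estimation details below are my assumptions, not the paper's.

```python
# Sketch (assumptions mine): the noise power spectrum is estimated from
# leading, assumed speech-free frames, then each frame is attenuated by the
# Wiener gain H = Psig / (Psig + Pnoise), with Psig estimated by subtraction.
import numpy as np

def wiener_denoise(x, frame_len=256, noise_frames=10, floor=0.05):
    hop = frame_len // 2
    win = np.hanning(frame_len)
    frames = [x[i:i + frame_len] * win
              for i in range(0, len(x) - frame_len, hop)]
    spectra = [np.fft.rfft(f) for f in frames]
    # Noise PSD from the first few (assumed noise-only) frames.
    noise_psd = np.mean([np.abs(s) ** 2 for s in spectra[:noise_frames]],
                        axis=0)
    out = np.zeros(len(x))
    for i, s in enumerate(spectra):
        psd = np.abs(s) ** 2
        gain = np.maximum((psd - noise_psd) / np.maximum(psd, 1e-12), floor)
        out[i * hop:i * hop + frame_len] += np.fft.irfft(gain * s)
    return out
```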

Disfluency Characteristics in 4-6 Age Bilingual Children (4-6세 이중언어아동의 비유창성 특성 연구)

  • Lee, Soo-Bok;Sim, Hyun-Sub;Shin, Moon-Ja
    • Proceedings of the KSPS conference
    • /
    • 2007.05a
    • /
    • pp.78-83
    • /
    • 2007
  • The purpose of the present study was to investigate the disfluency characteristics of Korean-English bilingual children compared with Korean monolingual children matched to them by chronological age. Twenty-eight children, 14 bilingual and 14 monolingual, participated in this study. The experimental tasks consisted of a play situation and a task situation. The conclusions are: (a) The total disfluency score of the bilingual children was significantly higher than that of the monolingual children, as was their normal disfluency score. The most frequent disfluency type was interjection in both groups, and all children scored higher in the task situation than in the play situation; the bilingual children thus differed from the monolingual children both quantitatively and qualitatively in disfluency scores and types. (b) The bilingual children comprised 6 Korean-dominant and 8 English-dominant children, and all showed more disfluency in their non-dominant language, with interjection again the most frequent type in both groups. (c) The higher the chronological age and the expressive language test score, the lower the disfluency score; the earlier the age of exposure to the second language, the higher the disfluency score. Months of residence in a foreign country showed no correlation with disfluency.

  • PDF

Normalization in Collection Procedures of Emotional Speech by Scriptual Context (대본 내용에 의한 정서음성 수집과정의 정규화에 대하여)

  • Jo, Cheol-Woo
    • Proceedings of the KSPS conference
    • /
    • 2006.05a
    • /
    • pp.123-125
    • /
    • 2006
  • One of the biggest unsolved problems in emotional speech acquisition is how to create or find a situation that elicits a natural or otherwise desired emotional state in human speakers. We propose a method for collecting emotional speech data by means of script context. Experts in the field chose several contexts from drama scripts, and the contexts were divided into six classes according to their content. Two actors, one male and one female, read the text after internalizing the emotional situations in the script.

  • PDF

The Role of Speech Factors in Speech Intelligibility: A Review (언어장애인의 명료도에 영향을 미치는 말요인: 문헌연구)

  • Kim, Soo-Jin
    • MALSORI
    • /
    • no.43
    • /
    • pp.25-44
    • /
    • 2002
  • The intelligibility of a spoken message is influenced by a number of factors. Intelligibility is a joint product of a speaker and a listener, and it also varies with the nature of the language context and the context of communication. Thus a single intelligibility score cannot be ascribed to a given individual apart from the listener and the listening situation. Nevertheless, there is a clinical and research need for assessment measures of intelligibility that are quantitative and analytic. Before such an index of intelligibility can be developed, the crucial factors need to be examined; among them, the most significant are the speech factors of the speaker. The following section reviews the literature on the contribution of segmental and suprasegmental factors to speech intelligibility in speakers with hearing impairment, alaryngeal speech, and motor speech disorders.

  • PDF

A Design of the Emergency-notification and Driver-response Confirmation System(EDCS) for an autonomous vehicle safety (자율차량 안전을 위한 긴급상황 알림 및 운전자 반응 확인 시스템 설계)

  • Son, Su-Rak;Jeong, Yi-Na
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.14 no.2
    • /
    • pp.134-139
    • /
    • 2021
  • The autonomous vehicle market is currently commercializing level 3 autonomous vehicles, which still require the driver's attention. Beyond level 3, the most notable requirement for level 4 autonomous vehicles is vehicle safety, because unlike level 3, vehicles at level 4 and above must carry out autonomous driving even when the driver is inattentive. In this paper we therefore propose the Emergency-notification and Driver-response Confirmation System (EDCS) for autonomous vehicle safety, which notifies the driver of an emergency and recognizes the driver's reaction in situations where the driver is careless. The EDCS uses an emergency-situation delivery module to convert the emergency into text and deliver it to the driver as speech, while a driver-response confirmation module recognizes the driver's reaction to the emergency and decides whether to hand control over to the driver. In our experiments, the HMM of the emergency delivery module learned speech 25% faster than an RNN and 42.86% faster than an LSTM, and the Tacotron2 of the driver-response confirmation module converted text to speech about 20 ms faster than Deep Voice and 50 ms faster than Deep Mind. The system can therefore train its neural network models efficiently and check the driver's response in real time.
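
The described flow can be sketched schematically as follows. All function and class names here are hypothetical; the paper's modules use an HMM for recognition and Tacotron2 for synthesis, both stubbed out in this sketch.

```python
# Schematic sketch (names hypothetical, not from the paper) of the EDCS flow:
# an emergency is rendered as text, spoken to the driver, and control is
# handed over only if the driver responds.
from dataclasses import dataclass

@dataclass
class Emergency:
    kind: str          # e.g. "obstacle_ahead"
    distance_m: float

def emergency_to_text(e: Emergency) -> str:
    """Emergency-situation delivery module, step 1: situation -> text."""
    return f"Warning: {e.kind.replace('_', ' ')} in {e.distance_m:.0f} meters."

def speak(text: str) -> None:
    """Step 2: text -> speech. The paper uses Tacotron2; any TTS engine
    could stand in here."""
    print(f"[TTS] {text}")

def driver_responded(timeout_s: float = 3.0) -> bool:
    """Driver-response confirmation module: the paper recognizes the
    driver's reaction in real time; stubbed here as always-responsive."""
    return True

def handle_emergency(e: Emergency) -> str:
    speak(emergency_to_text(e))
    # Hand control to the driver only on a confirmed response; otherwise
    # the vehicle retains (or escalates) autonomous control.
    return ("hand_over_to_driver" if driver_responded()
            else "vehicle_retains_control")

if __name__ == "__main__":
    print(handle_emergency(Emergency("obstacle_ahead", 120.0)))
```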

Preliminary study of the perceptual and acoustic analysis on the speech rate of normal adult: Focusing the differences of the speech rate according to the area (정상 성인 말속도의 청지각적/음향학적 평가에 관한 기초 연구: 지역에 따른 말속도 차이를 중심으로)

  • Lee, Hyun-Joung
    • Phonetics and Speech Sciences
    • /
    • v.6 no.3
    • /
    • pp.73-77
    • /
    • 2014
  • The purpose of this study is to investigate regional differences in speech rate through perceptual and acoustic analyses. The study examines regional variation in overall speech rate and articulation rate across three speaking situations (picture description, free conversation, and story retelling) with 14 normal adults (7 from the Gyeongnam area and 7 from the Honam area). Perceptually, speech rate differed significantly between the two regional varieties of Korean in the picture description task, with the Honam group rated as speaking significantly faster than the Gyeongnam group; acoustically, however, the speech rates of the two groups did not differ in that task. There were significant regional differences in overall speech rate and articulation rate in the other two speaking situations, free conversation and story retelling. These results suggest that future research should examine perceptual evaluation of free conversation and story retelling, and that speech rate should be studied under a wider variety of conditions, including more regions and SLP raters with broader backgrounds and experience. SLPs also need further training and experience to assess patients properly and reliably.
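
For reference, the two acoustic measures the study contrasts differ only in whether pause time counts toward the denominator. A minimal sketch with illustrative numbers of my own:

```python
# Overall speech rate includes pauses in the denominator; articulation rate
# excludes them. Both are expressed here in syllables per second.
def overall_speech_rate(n_syllables: int, total_dur_s: float) -> float:
    """Syllables per second over the whole sample, pauses included."""
    return n_syllables / total_dur_s

def articulation_rate(n_syllables: int, total_dur_s: float,
                      pause_dur_s: float) -> float:
    """Syllables per second over phonation time only (pauses removed)."""
    return n_syllables / (total_dur_s - pause_dur_s)

# Example: 120 syllables in a 40 s sample containing 8 s of pauses.
print(overall_speech_rate(120, 40.0))     # 3.0 syllables/s
print(articulation_rate(120, 40.0, 8.0))  # 3.75 syllables/s
```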