• Title/Summary/Keyword: Speech Texts

Search Results: 44

A Quantitative Linguistic Study for the Appreciation of Lexical Richness (Focusing on French Texts)

  • Bae, Hee-Sook
    • Speech Sciences
    • /
    • v.7 no.3
    • /
    • pp.139-149
    • /
    • 2000
  • Studying language by quantitative linguistic methods is not a recent development. Lately, however, interest in quantitative linguistics has grown with the demand for communication between humans and between humans and machines, which requires transferring the system of natural language onto machines. This in turn calls for quantitative study, because the characteristics of small linguistic units and their structure cannot be grasped intuitively. Quantitative linguistics treats the internal structure of language through the relation between linguistic units and their quantitative characteristics, so the growing interest is natural. Korean linguists have also taken an interest in the field, although quantitative linguistics in Korea has not yet advanced to the level of statistical analysis. This study therefore shows how statistics can be applied in linguistics, through two texts written in French: Lovers of the Subway and Our Life's A.B.C. (one standard lexical-richness statistic is sketched after this entry).

  • PDF
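The abstract does not name the statistics it applies; a standard lexical-richness measure in quantitative linguistics is the type-token ratio (TTR), often reported alongside hapax legomena counts. A minimal Python sketch, assuming plain whitespace tokenization (a real study of French would lemmatize):

```python
from collections import Counter

def lexical_richness(text: str) -> dict:
    # Naive whitespace tokenization; lemmatization would matter for an
    # inflection-rich language such as French.
    tokens = [t.lower().strip(".,;:!?\"'()") for t in text.split()]
    tokens = [t for t in tokens if t]
    counts = Counter(tokens)
    n_tokens, n_types = len(tokens), len(counts)
    return {
        "tokens": n_tokens,
        "types": n_types,
        "type_token_ratio": n_types / n_tokens if n_tokens else 0.0,
        # Hapax legomena: words occurring exactly once.
        "hapax": sum(1 for c in counts.values() if c == 1),
    }

# 9 tokens, 6 types -> TTR 0.67, 4 hapaxes
print(lexical_richness("le chat voit le chien et le chat dort"))
```

Raw TTR falls as text length grows, so comparisons across texts of different lengths need length-corrected variants.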

Rich Transcription Generation Using Automatic Insertion of Punctuation Marks

  • Kim, Ji-Hwan
    • MALSORI
    • /
    • no.61
    • /
    • pp.87-100
    • /
    • 2007
  • A punctuation generation system that combines prosodic information with acoustic and language-model information is presented. Experiments were conducted first on reference text transcriptions; in these experiments, prosodic information proved more useful than language-model information. When the information sources are combined, an F-measure of up to 0.7830 is obtained for adding punctuation to a reference transcription. The method can also be applied to the 1-best output of a speech recogniser: the 1-best output is first time-aligned, and prosodic features are generated from the alignment information. As with the reference transcriptions, the best sequence of punctuation marks for the 1-best output is found using the prosodic feature model and a language model trained on texts containing punctuation marks (the combination step is sketched after this entry).

  • PDF
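The abstract describes interpolating a prosodic feature model with a language model and searching for the best punctuation sequence. A minimal sketch of that combination, assuming hypothetical per-boundary prosodic log-probabilities and a toy language-model scorer (the paper's actual models are trained on aligned speech and punctuated text):

```python
PUNCT = ["", ",", "."]  # candidate marks at each word boundary

def insert_punctuation(words, prosody, lm_score, lam=0.5):
    """Find the best punctuation sequence by dynamic programming.

    prosody[i][p]    : log-prob of mark p after word i (pause/F0 features)
    lm_score(q, w, p): log-prob of mark p after word w, given previous mark q
    lam              : interpolation weight between the two models
    """
    best = {p: (lam * prosody[0][p] + (1 - lam) * lm_score("", words[0], p), [p])
            for p in PUNCT}
    for i in range(1, len(words)):
        best = {p: max((s + lam * prosody[i][p] + (1 - lam) * lm_score(q, words[i], p),
                        path + [p])
                       for q, (s, path) in best.items())
                for p in PUNCT}
    _, path = max(best.values())
    return " ".join(w + p for w, p in zip(words, path))

words = ["hello", "world", "how", "are", "you"]
prosody = [{"": -0.1, ",": -3.0, ".": -4.0}] * 4 + [{"": -5.0, ",": -4.0, ".": -0.1}]
lm = lambda q, w, p: -0.5          # uniform toy LM stand-in
print(insert_punctuation(words, prosody, lm))  # -> "hello world how are you."
```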

An Improved Coverless Text Steganography Algorithm Based on Pretreatment and POS

  • Liu, Yuling;Wu, Jiao;Chen, Xianyi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.4
    • /
    • pp.1553-1567
    • /
    • 2021
  • Steganography is currently a hot research topic in the area of information security and privacy protection. However, most previous steganography methods are vulnerable to steganalysis and attacks because they are carried out by modifying covers. In this paper, we propose an improved coverless text steganography algorithm based on pretreatment and part of speech (POS), in which Chinese character components are used as locating marks, POS is used to hide the number of keywords, and the retrieval of stego-texts is optimized by pretreatment. Experiments verify that the algorithm performs well in terms of embedding capacity, embedding success rate, and extraction accuracy, given appropriate locating-mark lengths and a large text database.
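Because the scheme is coverless, nothing is embedded by modification; the sender instead retrieves a natural text that already encodes the secret. A much-simplified sketch of that retrieval idea, using an English word as a stand-in for the paper's Chinese character-component locating marks and omitting the POS-based keyword count:

```python
DATABASE = [
    "the quick brown fox jumps over the lazy dog",
    "a signal like rain fell on the quiet harbour",
    "every signal secret passes unnoticed in plain text",
]

MARK = "signal"  # agreed locating mark (hypothetical)

def hide(keyword: str) -> str | None:
    """Retrieve a database text in which `keyword` already follows the
    mark; the text is sent unmodified, so there is nothing to steganalyze."""
    for text in DATABASE:
        words = text.split()
        for i, w in enumerate(words[:-1]):
            if w == MARK and words[i + 1] == keyword:
                return text
    return None  # no carrier found; a large database makes this rare

def extract(text: str) -> str | None:
    """Receiver side: read the word after the agreed mark."""
    words = text.split()
    for i, w in enumerate(words[:-1]):
        if w == MARK:
            return words[i + 1]
    return None

stego = hide("secret")
print(stego)           # natural carrier text
print(extract(stego))  # -> "secret"
```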

Vocabulary Analyzer Based on CEFR-J Wordlist for Self-Reflection (VACSR) Version 2

  • Yukiko Ohashi;Noriaki Katagiri;Takao Oshikiri
    • Asia Pacific Journal of Corpus Research
    • /
    • v.4 no.2
    • /
    • pp.75-87
    • /
    • 2023
  • This paper presents a revised version of the Vocabulary Analyzer for Self-Reflection (VACSR), called VACSR v.2.0. The initial version automatically analyzes the occurrences and levels of vocabulary items in transcribed texts, reporting their frequency, unused vocabulary items, and items not belonging to either scale. However, it overlooked words with multiple parts of speech, because identical headwords share a single representation, and its result tables for comparing different corpora were insufficiently clear. VACSR v.2.0 overcomes both limitations. First, unlike VACSR v.1, it distinguishes words with different parts of speech through syntactic parsing with Stanza, an open-source Python library, so that the same lexical item occurring as multiple parts of speech can be categorized separately. Second, it provides precise result output tables. The updated software compares the occurrence of vocabulary items in classroom corpora against each level of the Common European Framework of Reference-Japan (CEFR-J) wordlist. In a pilot study in which two English classes taught by a preservice English teacher were converted into corpora, the headwords used mostly corresponded to CEFR-J level A1. In practice, VACSR v.2.0 should promote users' reflection on their vocabulary use and can be applied to teacher training.
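The POS-aware counting step can be illustrated with Stanza's real pipeline API. A minimal sketch, assuming the English model has been downloaded and leaving out the CEFR-J lookup (the wordlist is not distributed with Stanza):

```python
import stanza
from collections import Counter

# One-time model download: stanza.download("en")
nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma")

def count_lemma_pos(text: str) -> Counter:
    """Tally (lemma, UPOS) pairs so that e.g. 'record' as NOUN and
    'record' as VERB are counted separately, as VACSR v.2.0 does."""
    counts = Counter()
    for sent in nlp(text).sentences:
        for word in sent.words:
            counts[((word.lemma or word.text).lower(), word.upos)] += 1
    return counts

for (lemma, pos), n in count_lemma_pos("I record the record.").items():
    print(lemma, pos, n)  # mapping each pair to a CEFR-J level is then a dict lookup
```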

Corpus-based Korean Text-to-Speech Conversion System

  • Kim, Sang-hun;Park, Jun;Lee, Young-jik
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.3
    • /
    • pp.24-33
    • /
    • 2001
  • This paper describes a baseline implementation of a corpus-based Korean TTS system. Conventional TTS systems using small speech corpora still generate machine-like synthetic speech; to overcome this, we introduce a corpus-based TTS system that generates natural synthetic speech without prosodic modification. The corpus should contain the natural prosody of the source speech and multiple instances of each synthesis unit. To obtain phone-level synthesis units, we train a speech recognizer on the target speech and then perform automatic phoneme segmentation. We also detect the fine pitch period using laryngograph signals, which are used for prosodic feature extraction. For break-strength allocation, four levels of break indices are defined according to pause length and attached to phones to reflect prosodic variation at phrase boundaries; to predict break strength from text, we use statistical information on part-of-speech (POS) sequences. The best triphone sequences are selected by Viterbi search, minimizing the accumulated Euclidean distance of the concatenation distortion (sketched after this entry). To reach synthesis quality suitable for commercial use, we introduce a domain-specific database; adding it to the general-domain database greatly improves the quality of synthetic speech in that domain. In subjective evaluation, the new corpus-based Korean TTS system shows better naturalness than the conventional demisyllable-based one.

  • PDF
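The selection step the abstract describes, a Viterbi search minimizing accumulated Euclidean concatenation distortion, can be sketched compactly. Assumed here: each candidate unit is reduced to join-feature vectors at its two edges, and target costs and break-index constraints are omitted:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_units(candidates):
    """candidates[i]: list of units for target triphone i, each unit a
    pair (left_join_features, right_join_features). Returns one index
    per position, minimizing the accumulated Euclidean distance across
    all concatenation points."""
    cost = [0.0] * len(candidates[0])
    back = []
    for prev, cur in zip(candidates, candidates[1:]):
        # step[u][j]: cost of reaching unit u via previous unit j
        step = [[c + euclidean(p[1], u[0]) for c, p in zip(cost, prev)] for u in cur]
        back.append([min(range(len(prev)), key=row.__getitem__) for row in step])
        cost = [row[j] for row, j in zip(step, back[-1])]
    path = [min(range(len(cost)), key=cost.__getitem__)]
    for ptr in reversed(back):  # trace the cheapest path back to the start
        path.append(ptr[path[-1]])
    return path[::-1]

# Two target positions, two candidates each (1-D join features for brevity):
cands = [[((0.0,), (1.0,)), ((0.0,), (2.0,))],
         [((1.1,), (3.0,)), ((5.0,), (3.0,))]]
print(select_units(cands))  # -> [0, 0]: joining 1.0 to 1.1 costs only 0.1
```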

A Collaborative Framework for Discovering the Organizational Structure of Social Networks Using NER Based on NLP

  • Elijorde, Frank I.;Yang, Hyun-Ho;Lee, Jae-Wan
    • Journal of Internet Computing and Services
    • /
    • v.13 no.2
    • /
    • pp.99-108
    • /
    • 2012
  • Many methods have been developed to improve the accuracy of extracting information from vast amounts of data. This paper combines natural language processing methods such as named entity recognition (NER), sentence extraction, and part-of-speech tagging to carry out text analysis. The data source comprises texts obtained from the web by a domain-specific data extraction agent. Using these methods, we developed a framework for extracting information from unstructured data and simulated its performance in extracting and analyzing texts for the detection of organizational structures. The simulation shows that our approach outperformed other NER classifiers, such as those from MUC and CoNLL, on information extraction.
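A minimal sketch of the kind of pipeline the paper combines (NER, sentence extraction, POS tagging), with spaCy as a stand-in since the abstract does not name a toolkit; the co-mention heuristic and the example sentence are illustrative only:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def person_org_pairs(text: str):
    """Per sentence, pair PERSON entities with ORG entities as a crude
    signal of organizational affiliation; POS tags remain available on
    each token (token.pos_) for further filtering."""
    doc = nlp(text)
    pairs = []
    for sent in doc.sents:  # sentence extraction
        people = [e.text for e in sent.ents if e.label_ == "PERSON"]
        orgs = [e.text for e in sent.ents if e.label_ == "ORG"]
        pairs += [(p, o) for p in people for o in orgs]
    return pairs

print(person_org_pairs("Alice Smith chairs the budget panel at Acme Corp."))
```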

An Algorithm for Classifying Emotion of Sentences and a Method to Divide a Text into Some Scenes Based on the Emotion of Sentences

  • Fukoshi, Hirotaka;Sugimoto, Futoshi;Yoneyama, Masahide
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.773-777
    • /
    • 2009
  • In recent years the field of speech synthesis has developed rapidly, and technologies such as reading e-mail aloud or the voice guidance of car navigation systems are used in many scenes of daily life. The resulting speech, however, is monotonous, like a newsreader's; a text such as a novel is better read by a voice that expresses emotions richly. We have therefore been developing a system that automatically reads aloud novels in which emotions are expressed comparatively clearly, such as juvenile literature. To make a computer read a text with an emotionally expressive voice, it is first necessary to identify the emotions expressed in each sentence. A method based on semantic interpretation using artificial-intelligence technology is conceivable, but it is very difficult with current technology. We therefore propose a simpler method that determines one emotion for each sentence of a novel. The method assigns a sentence the emotion carried by its key words: the verb in a Japanese verbal sentence, or the adjective or adverb in an adjectival sentence. The emotional characteristics of these words are prepared beforehand in an emotion word dictionary (the lookup is sketched after this entry). Seven emotion types are used: "joy," "sorrow," "anger," "surprise," "terror," "aversion," and "neutral."

  • PDF
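A minimal sketch of the per-sentence dictionary lookup, with a tiny hand-made English stand-in for the authors' Japanese emotion word dictionary; the verb/adjective targeting is simplified to a scan over all words:

```python
# Hypothetical miniature emotion word dictionary (the authors build a
# full one for Japanese verbs, adjectives, and adverbs).
EMOTION_DICT = {
    "laugh": "joy", "weep": "sorrow", "shout": "anger",
    "gasp": "surprise", "tremble": "terror", "shudder": "aversion",
}

def sentence_emotion(words: list[str]) -> str:
    """Return the first dictionary emotion found among the sentence's
    words, else 'neutral'. The paper keys on the verb of a verbal
    sentence or the adjective/adverb of an adjectival one; this sketch
    simply scans every word."""
    for w in words:
        if w in EMOTION_DICT:
            return EMOTION_DICT[w]
    return "neutral"

print(sentence_emotion("she began to weep quietly".split()))  # -> sorrow
```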

Design and Implementation of the Server-Based Web Reader kWebAnywhere

  • Yun, Young-Sun
    • Phonetics and Speech Sciences
    • /
    • v.5 no.4
    • /
    • pp.217-225
    • /
    • 2013
  • This paper describes the design and implementation of the kWebAnywhere system, based on WebAnywhere, which helps people with severely diminished eyesight and blind users access Internet information through web interfaces. WebAnywhere is a server-based web reader that reads web content aloud using TTS (text-to-speech) technology without requiring any software installation on the client system. It can be used from an ordinary web browser with a built-in audio function, both by blind users who cannot afford a screen reader and by web developers checking web accessibility. However, WebAnywhere supports only a single language and cannot be applied directly to Korean web content. We therefore modified WebAnywhere to serve content written in both English and Korean; the modified system is called kWebAnywhere to differentiate it from the original. kWebAnywhere supports the Korean TTS system VoiceText™ and adds a user interface for controlling the TTS parameters. Because VoiceText™ does not support the Festival API used by WebAnywhere, we developed a Festival Wrapper that translates VoiceText™'s proprietary API into the Festival API so that it can communicate with the WebAnywhere engine. We expect the developed system to help people with severely diminished eyesight and blind users access Internet content easily.
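The Festival Wrapper is essentially an adapter: it exposes the interface the WebAnywhere engine expects while delegating synthesis to a different backend. A minimal sketch of that pattern; every class and method name here is hypothetical, since neither the Festival API nor VoiceText™'s proprietary API is reproduced in the abstract:

```python
class VoiceTextEngine:
    """Stand-in for the proprietary Korean TTS backend."""
    def synthesize(self, text: str, speed: int = 100) -> bytes:
        return f"<pcm:{text}:{speed}>".encode()  # dummy audio payload

class FestivalWrapper:
    """Presents a Festival-style text-to-wave call, as the engine
    expects, but routes synthesis to VoiceTextEngine."""
    def __init__(self, engine: VoiceTextEngine):
        self.engine = engine
        self.rate = 100  # user-controllable TTS parameter, as in kWebAnywhere

    def text_to_wave(self, text: str) -> bytes:
        # Map the Festival-style request onto the proprietary call.
        return self.engine.synthesize(text, speed=self.rate)

wrapper = FestivalWrapper(VoiceTextEngine())
print(wrapper.text_to_wave("안녕하세요"))
```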

A Study of Creole Languages' Pronunciation in the West Indies: Centering on Central American Garífuna and Cuban Patois

  • Kim, Woo-Joong
    • Speech Sciences
    • /
    • v.5 no.2
    • /
    • pp.93-107
    • /
    • 1999
  • This study offers a general review of Garífuna and Patois, creole languages which developed out of the sociohistorical situation of the last centuries and are mainly spoken in the West Indies and on the Caribbean coast. I present notes and ideas on the linguistic development and features of these languages, describing in particular their function in a variety of social circumstances and their phonetic/phonological changes from the base languages. The paper results from fieldwork conducted in Honduras, Belize, Cuba, and Mexico from January 1996 to February 1998, using surveys and words collected from various materials and texts. I hope it will contribute to research on 'mixed' languages as well as to historical linguistics. I am very grateful to Mr. Mauricio Tomás, the only university student in Travesía, a small town in northern Honduras, and to Mr. Carlos Marcos, a medical student from a Haitian family in Santiago de Cuba. Without their cooperation, I could not have conducted this research.

  • PDF

Acoustical Analysis of Phonological Reduction in Conversational Japanese

  • Choi, Young-Sook
    • Speech Sciences
    • /
    • v.8 no.4
    • /
    • pp.229-241
    • /
    • 2001
  • Using eighteen texts from various genres of present-day Japanese, I collected phonologically reduced forms frequently observed in conversational Japanese and classified them in search of a unified explanation of the phonological phenomena. I found 7,516 cases of reduced forms, which I divided into 43 categories according to the types of phonological change they have undergone. The general tendency is that deletion and fusion of a phoneme or an entire syllable take place frequently, decreasing the number of syllables. From a morphosyntactic point of view, phonological reduction often occurs at NP and VP morpheme boundaries. The following findings are drawn from phonetic observation of the reductions. (1) Vowels are more easily deleted than consonants. (2) Bilabials ([m], [b], and [w]) are the most likely candidates for deletion. (3) In a concatenation of vowels, closed vowels are absorbed into open vowels, or two adjacent vowels merge into another vowel, in which case reconstruction of the original sequence is not always predictable. (4) Alveolars are palatalized under the influence of front vowels. (5) Regressive assimilation takes place in a syllable starting with [r], changing the entire syllable into a phonological choked sound or a syllabic nasal, depending on the voicing of the following phoneme.

  • PDF