Title/Summary/Keyword: Word Corpus

Error Correction and Praat Script Tools for the Buckeye Corpus of Conversational Speech (벅아이 코퍼스 오류 수정과 코퍼스 활용을 위한 프랏 스크립트 툴)

  • Yoon, Kyu-Chul
    • Phonetics and Speech Sciences / v.4 no.1 / pp.29-47 / 2012
  • The purpose of this paper is to show how to convert the label files of the Buckeye Corpus of Spontaneous Speech [1] into Praat format and to introduce some of the Praat scripts that enable linguists to study various aspects of the spoken American English contained in the corpus. During the conversion process, several types of errors were identified and corrected either manually or automatically with scripts. The Praat script tools that have been developed can extract large amounts of phonetic measures from the corpus, such as the VOT of plosives, vowel formants, word frequency information, and speech rates spanning several consecutive words. The script tools can also extract additional information about the phonetic environment of the target words or allophones.
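
The conversion step described above can be illustrated with a short sketch. The following Python snippet is not the authors' Praat scripts; the file name and the (start, end, label) input format are assumptions purely for illustration. It writes a list of Buckeye-style word intervals as a Praat long-format TextGrid with a single interval tier:

    # Minimal sketch: write Buckeye-style (xmin, xmax, label) word intervals
    # to a Praat long-format TextGrid (file name and input format assumed).
    def write_textgrid(intervals, total_duration, path, tier_name="word"):
        """intervals: list of (xmin, xmax, label) tuples in seconds."""
        lines = [
            'File type = "ooTextFile"',
            'Object class = "TextGrid"',
            "",
            "xmin = 0",
            f"xmax = {total_duration}",
            "tiers? <exists>",
            "size = 1",
            "item []:",
            "    item [1]:",
            '        class = "IntervalTier"',
            f'        name = "{tier_name}"',
            "        xmin = 0",
            f"        xmax = {total_duration}",
            f"        intervals: size = {len(intervals)}",
        ]
        for i, (xmin, xmax, label) in enumerate(intervals, start=1):
            lines += [
                f"        intervals [{i}]:",
                f"            xmin = {xmin}",
                f"            xmax = {xmax}",
                f'            text = "{label}"',
            ]
        with open(path, "w", encoding="utf-8") as f:
            f.write("\n".join(lines) + "\n")

    # Example: three word intervals covering the first second of speech.
    write_textgrid([(0.00, 0.35, "okay"), (0.35, 0.80, "so"), (0.80, 1.00, "um")],
                   total_duration=1.0, path="s0101a.TextGrid")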

Document Classification Model Using Web Documents for Balancing Training Corpus Size per Category

  • Park, So-Young;Chang, Juno;Kihl, Taesuk
    • Journal of information and communication convergence engineering / v.11 no.4 / pp.268-273 / 2013
  • In this paper, we propose a document classification model that uses Web documents as part of the training corpus in order to resolve the imbalance in training corpus size per category. To retrieve Web documents closely related to each category, the model calculates a matching score between word features and each category and generates a Web search query by combining the higher-ranked word features with the category title. It then sends each combined query to the open application programming interface of a Web search engine and receives the retrieved snippet results. Finally, these snippets are added to the training corpus as Web documents. Experimental results show that balancing the training corpus size per category in this way yields better performance in some categories with small training sets.
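
A rough sketch of the query-generation step might look as follows; the frequency-based scoring, the number of features combined, and the search_web() call are illustrative assumptions rather than the paper's exact formulation:

    # Minimal sketch of generating Web search queries for an under-represented
    # category (scoring, k, and search_web() are illustrative assumptions).
    from collections import Counter

    def top_features(docs_in_category, k=3):
        """Rank word features for a category by simple frequency as a
        stand-in for the paper's matching score."""
        counts = Counter(w for doc in docs_in_category for w in doc.split())
        return [w for w, _ in counts.most_common(k)]

    def build_queries(category_title, docs_in_category, k=3):
        """Combine each higher-ranked word feature with the category title."""
        return [f"{category_title} {feature}"
                for feature in top_features(docs_in_category, k)]

    # Snippets returned by a (hypothetical) web search API would then be added
    # to the training corpus of the under-represented category:
    # for query in build_queries("economy", small_training_docs):
    #     training_corpus["economy"].extend(search_web(query))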

Automatic Extraction of Alternative Words using Parallel Corpus (병렬말뭉치를 이용한 대체어 자동 추출 방법)

  • Baik, Jong-Bum;Lee, Soo-Won
    • Journal of KIISE:Computing Practices and Letters / v.16 no.12 / pp.1254-1258 / 2010
  • In information retrieval, different surface forms of the same object can degrade system performance. In this paper, we propose a method for extracting alternative words that uses translation words as features of each word, extracted from a parallel corpus of Korean/English title pairs from patent information. We also propose an association-word filtering method that removes association words from the alternative word list. Evaluation results show that the proposed method outperforms other alternative word extraction methods.
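
The idea of using translation words as features can be sketched roughly as below; the crude title-level alignment, the cosine measure, and the threshold are assumptions for illustration, not the authors' exact procedure:

    # Minimal sketch: represent each Korean word by the English words it
    # co-occurs with across title pairs, and propose words with similar
    # translation profiles as alternative forms (thresholds are assumed).
    from collections import Counter

    def translation_profile(word, aligned_pairs):
        """Count English words co-occurring with a Korean word across
        Korean/English title pairs (a crude stand-in for word alignment)."""
        counts = Counter()
        for ko_title, en_title in aligned_pairs:
            if word in ko_title.split():
                counts.update(en_title.lower().split())
        return counts

    def cosine(a, b):
        num = sum(a[k] * b[k] for k in a if k in b)
        den = (sum(v * v for v in a.values()) ** 0.5) * (sum(v * v for v in b.values()) ** 0.5)
        return num / den if den else 0.0

    def alternative_candidates(word, vocabulary, aligned_pairs, threshold=0.7):
        """Words whose translation-feature vectors are similar are proposed
        as alternative (variant) forms of each other."""
        base = translation_profile(word, aligned_pairs)
        return [w for w in vocabulary if w != word
                and cosine(base, translation_profile(w, aligned_pairs)) >= threshold]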

A Method of Intonation Modeling for Corpus-Based Korean Speech Synthesizer (코퍼스 기반 한국어 합성기의 억양 구현 방안)

  • Kim, Jin-Young;Park, Sang-Eon;Eom, Ki-Wan;Choi, Seung-Ho
    • Speech Sciences / v.7 no.2 / pp.193-208 / 2000
  • This paper describes a multi-step method of intonation modeling for a corpus-based Korean speech synthesizer. We selected 1833 sentences covering various syntactic structures and built a corresponding speech corpus uttered by a female announcer. We detected the pitch using laryngograph signals, manually marked the prosodic boundaries on the recorded speech, and performed part-of-speech tagging and syntactic analysis on the text. The detected pitch was separated into three frequency bands (low, mid, and high components), which correspond to the baseline, the word tone, and the syllable tone. We predicted them using the CART method and a Viterbi search algorithm with a word tone dictionary. Of the collected sentences, 1500 were used for training and 333 for testing. In the word tone layer, we compared two methods: one predicts the word tone corresponding to the mid-frequency component directly, and the other predicts it by multiplying the baseline by the ratio of the word tone to the baseline. The former yielded a mean error of 12.37 Hz and the latter 12.41 Hz, showing similar performance. In the syllable tone layer, the mean error rate was less than 8.3% relative to the announcer's mean pitch of 193.56 Hz, indicating relatively good performance.
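
The two word-tone strategies compared above can be sketched as follows; the pitch values are placeholders, and in the paper the actual predictions come from CART models and a Viterbi search over a word tone dictionary:

    # Minimal sketch comparing direct word-tone prediction with predicting a
    # ratio and multiplying it back by the baseline (all values illustrative).
    def mean_abs_error(predicted, observed):
        return sum(abs(p - o) for p, o in zip(predicted, observed)) / len(observed)

    def word_tone_from_ratio(baseline_hz, predicted_ratio):
        """Strategy 2: predict the word-tone/baseline ratio, then multiply it
        back by the baseline to recover the word-tone component in Hz."""
        return [b * r for b, r in zip(baseline_hz, predicted_ratio)]

    observed_word_tone = [210.0, 198.5, 205.2]   # mid-band pitch targets (Hz)
    direct_prediction  = [205.0, 190.0, 212.0]   # strategy 1: predict Hz directly
    baseline           = [193.6, 193.6, 193.6]
    ratio_prediction   = [1.06, 1.00, 1.08]      # strategy 2: predict ratios

    print(mean_abs_error(direct_prediction, observed_word_tone))
    print(mean_abs_error(word_tone_from_ratio(baseline, ratio_prediction), observed_word_tone))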

Word Sense Disambiguation using Korean Word Space Model (한국어 단어 공간 모델을 이용한 단어 의미 중의성 해소)

  • Park, Yong-Min;Lee, Jae-Sung
    • The Journal of the Korea Contents Association / v.12 no.6 / pp.41-47 / 2012
  • Various Korean word sense disambiguation methods have been proposed that use small-scale sense-tagged corpora and dictionary definitions to calculate entropy, conditional probability, mutual information, and the like. This paper proposes a method based on the Korean Word Space model, which builds word vectors from a large-scale sense-tagged corpus and disambiguates word senses by computing the similarity between word vectors. An experiment with the Sejong morph sense-tagged corpus showed 94% precision for 200 sentences (583 word types), which is much higher than that of other known methods.
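
A minimal sketch of similarity-based sense selection in a word space model is given below; the toy co-occurrence vectors and example contexts are assumptions, whereas the paper builds its vectors from the Sejong morph sense-tagged corpus:

    # Minimal sketch: pick the sense whose vector is most similar to the
    # context of the ambiguous occurrence (toy co-occurrence counts).
    from collections import Counter

    def cosine(a, b):
        num = sum(a[k] * b[k] for k in a if k in b)
        den = (sum(v * v for v in a.values()) ** 0.5) * (sum(v * v for v in b.values()) ** 0.5)
        return num / den if den else 0.0

    def disambiguate(context_words, sense_vectors):
        """Pick the sense whose training-time vector is closest to the context."""
        target = Counter(context_words)
        return max(sense_vectors, key=lambda sense: cosine(target, sense_vectors[sense]))

    # Toy example: two senses of Korean "배" (pear vs. boat), hypothetical contexts.
    sense_vectors = {
        "배/fruit": Counter({"과일": 5, "먹다": 4, "달다": 2}),
        "배/boat":  Counter({"바다": 6, "타다": 3, "항구": 2}),
    }
    print(disambiguate(["바다", "에서", "타다"], sense_vectors))   # -> 배/boat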

A Study on Word Vector Models for Representing Korean Semantic Information

  • Yang, Hejung;Lee, Young-In;Lee, Hyun-jung;Cho, Sook Whan;Koo, Myoung-Wan
    • Phonetics and Speech Sciences / v.7 no.4 / pp.41-47 / 2015
  • This paper examines whether the Global Vector model is applicable to Korean data as a universal learning algorithm. The main purpose of this study is to compare the global vector model (GloVe) with the word2vec models, namely the continuous bag-of-words (CBOW) model and the skip-gram (SG) model. For this purpose, we conducted an experiment employing an evaluation corpus consisting of 70 target words and 819 pairs of Korean words for the word similarity and analogy tasks, respectively. The word similarity task yielded Pearson correlation coefficients with human judgements of 0.3133 for GloVe, 0.2637 for CBOW, and 0.2177 for SG. In the word analogy task, the overall accuracy over semantic and syntactic relations was 67% for GloVe, 66% for CBOW, and 57% for SG.
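
The two evaluation tasks can be sketched roughly as follows; the embeddings and test items are placeholders, and in practice the GloVe, CBOW, and SG vectors would be loaded from trained models:

    # Minimal sketch of the similarity and analogy evaluations (embeddings
    # and test items are placeholders, not the paper's data).
    import numpy as np
    from scipy.stats import pearsonr

    def cos(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def similarity_correlation(embeddings, pairs_with_scores):
        """Pearson correlation between model similarity and human judgement."""
        model = [cos(embeddings[a], embeddings[b]) for a, b, _ in pairs_with_scores]
        human = [s for _, _, s in pairs_with_scores]
        return pearsonr(model, human)[0]

    def analogy_accuracy(embeddings, questions):
        """a : b :: c : ? answered by the nearest neighbour of b - a + c."""
        words = list(embeddings)
        correct = 0
        for a, b, c, gold in questions:
            target = embeddings[b] - embeddings[a] + embeddings[c]
            pred = max((w for w in words if w not in (a, b, c)),
                       key=lambda w: cos(target, embeddings[w]))
            correct += (pred == gold)
        return correct / len(questions)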

Lexical Bundles in Computer Science Research Articles: A Corpus-Based Study

  • Lee, Je-Young;Lee, Hye Jin
    • International Journal of Contents / v.14 no.4 / pp.70-75 / 2018
  • The purpose of this corpus-based study was to identify 4-word lexical bundles in computer science research articles. As the demand for research articles (RAs) for international publication increases, the need to acquire field-specific writing conventions for this academic genre has become a pressing issue. In particular, one area of burgeoning interest in the examination of the rhetorical structures and linguistic features of RAs is the use of lexical bundles, the indispensable building blocks that make up academic discourse. Different academic discourses rely on distinctive repertoires of lexical bundles. Because lexical bundles are often acquired as wholes, these recurring multi-word sequences can be retrieved automatically to make written discourse more fluent and natural. The proper use of rhetorical devices specific to a particular discipline can therefore be a vital indicator of success within its discourse community. Hence, to identify the linguistic features that make up specific registers, this corpus-based study examines the types and usage frequency of lexical bundles in the discipline of CS, one of the most in-demand fields worldwide. Given that lexical bundles are empirically derived formulaic multi-word units, identifying the core lexical bundles used in RAs may provide insights into the specificity of particular CS text types. This in turn provides empirical evidence of register specificity and technicality within the academic discourse of computer science. Based on the results, pedagogical implications and suggestions for future research are discussed.
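
Bundle extraction of this kind can be sketched as below; the frequency and range cut-offs are illustrative assumptions, since bundle studies usually normalise counts per million words and set their own dispersion criteria:

    # Minimal sketch: extract recurring 4-word sequences from a set of
    # research articles, keeping those above frequency and range cut-offs.
    from collections import Counter

    def four_grams(tokens):
        return [" ".join(tokens[i:i + 4]) for i in range(len(tokens) - 3)]

    def lexical_bundles(articles, min_freq=10, min_range=3):
        """articles: list of token lists, one per research article."""
        freq = Counter()
        doc_range = Counter()
        for tokens in articles:
            grams = four_grams(tokens)
            freq.update(grams)
            doc_range.update(set(grams))   # count each bundle once per article
        return {g: f for g, f in freq.items()
                if f >= min_freq and doc_range[g] >= min_range}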

Fillers in the Hong Kong Corpus of Spoken English (HKCSE)

  • Seto, Andy
    • Asia Pacific Journal of Corpus Research / v.2 no.1 / pp.13-22 / 2021
  • The present study employed an analytical framework characterised by a synthesis of quantitative and qualitative analyses, together with specially designed computer software, SpeechActConc, to examine speech acts in business communication. The naturally occurring data from the audio recordings and prosodic transcriptions of the business sub-corpora of the HKCSE (prosodic) are manually annotated with a speech act taxonomy to establish the frequency of fillers, the co-occurring patterns of fillers with other speech acts, and the linguistic realisations of fillers. The discoursal function of fillers to sustain the discourse or to hold the floor has diverse linguistic realisations, ranging from a sound (e.g. 'uhuh') and a word (e.g. 'well') to sounds (e.g. 'um er') and words, namely a phrase ('sort of') and a clause (e.g. 'you know'). Some are even combinations of sound(s) and word(s) (e.g. 'and um', 'yes er um', 'sort of erm'). Among the top five frequent linguistic realisations of fillers, 'er' and 'um' are the most common, found in all six genres with relatively high percentages of occurrence. The remaining more frequent realisations consist of a clause ('you know'), a word ('yeah') and a sound ('erm'). These common forms are syntactically simpler than the less frequent realisations found in the genres. The co-occurring patterns of fillers and other speech acts are diverse; the speech acts that co-occur most commonly with fillers include informing and answering. The findings show that fillers are not only frequently used by speakers in spontaneous conversation but are also mostly realised as sounds or non-linguistic forms.
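
A minimal counting sketch of the frequency and co-occurrence analysis described above is given below; the reduced (form, speech acts) annotation format is an assumption, not the HKCSE (prosodic) mark-up:

    # Minimal sketch: count filler realisations and the speech acts they
    # co-occur with, given simplified (form, speech_acts) annotations.
    from collections import Counter

    annotated_fillers = [
        ("er", ["informing"]),
        ("um", ["answering"]),
        ("you know", ["informing"]),
        ("er", ["answering"]),
    ]

    form_counts = Counter(form for form, _ in annotated_fillers)
    co_occurrence = Counter(act for _, acts in annotated_fillers for act in acts)

    print(form_counts.most_common(5))   # most frequent linguistic realisations
    print(co_occurrence.most_common())  # speech acts co-occurring with fillers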

Corpus-Based Ontology Learning for Semantic Analysis (의미 분석을 위한 말뭉치 기반의 온톨로지 학습)

  • Kang, Sin-Jae (강신재)
    • Journal of Korea Society of Industrial Information Systems / v.9 no.1 / pp.17-23 / 2004
  • This paper proposes determining word senses in Korean language processing through corpus-based ontology learning. Our approach is a hybrid method. First, we apply previously secured dictionary information to select the correct senses of some ambiguous words with high precision, and then use the ontology to disambiguate the remaining ambiguous words. The mutual information between concepts in the ontology is calculated before the ontology is used as knowledge for disambiguating word senses. If mutual information is regarded as a weight between ontology concepts, the ontology can be treated as a graph with weighted edges, and we can then locate the least-weighted path from one concept to another. In our practical machine translation system, this word sense disambiguation method achieved a 9% improvement in Korean translation over methods that do not use an ontology.
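
The least-weighted path search over the weighted ontology graph can be sketched with a standard Dijkstra-style search; the tiny graph, its edge weights, and their derivation from mutual information are illustrative assumptions:

    # Minimal sketch: find the least-weighted path between two ontology
    # concepts in a graph whose edge weights (assumed here) could be derived
    # from mutual information; lower total weight means a cheaper path.
    import heapq

    def least_weighted_path(graph, source, target):
        """graph: {concept: [(neighbour, weight), ...]}; returns (cost, path)."""
        queue = [(0.0, source, [source])]
        visited = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == target:
                return cost, path
            if node in visited:
                continue
            visited.add(node)
            for neighbour, weight in graph.get(node, []):
                if neighbour not in visited:
                    heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
        return float("inf"), []

    ontology = {
        "animal": [("dog", 0.2), ("plant", 1.5)],
        "dog":    [("animal", 0.2), ("pet", 0.3)],
        "plant":  [("animal", 1.5)],
        "pet":    [("dog", 0.3)],
    }
    print(least_weighted_path(ontology, "pet", "animal"))  # (0.5, ['pet', 'dog', 'animal'])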

Long Short Term Memory based Political Polarity Analysis in Cyber Public Sphere

  • Kang, Hyeon;Kang, Dae-Ki
    • International Journal of Advanced Culture Technology / v.5 no.4 / pp.57-62 / 2017
  • In this paper, we applied long short-term memory (LSTM) networks to classify political polarity in the cyber public sphere. The data collected from the cyber public sphere are transformed into word corpus data through word embedding. Based on this word corpus data, we train a recurrent neural network (RNN) composed of LSTM units, with a softmax function applied at the output of the RNN. We ran the proposed system to obtain experimental results and plan to enhance it by refining the LSTM component.
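
A minimal sketch of such an LSTM classifier with a softmax output is shown below; the layer sizes, vocabulary size, and the use of tf.keras are illustrative assumptions rather than the paper's configuration:

    # Minimal sketch: embedding -> LSTM -> softmax classifier for polarity
    # labels (all sizes are illustrative assumptions).
    import tensorflow as tf

    vocab_size, embed_dim, num_classes = 20000, 128, 2

    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, embed_dim),        # word embedding layer
        tf.keras.layers.LSTM(64),                                 # recurrent layer of LSTM units
        tf.keras.layers.Dense(num_classes, activation="softmax"), # polarity probabilities
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(padded_token_ids, polarity_labels, epochs=5, validation_split=0.1)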