• Title/Summary/Keyword: Word Corpus

Search Results: 284

Ordering a Left-branching Language: Heaviness vs. Givenness

  • Choi, Hye-Won
    • 한국언어정보학회지:언어와정보 / Vol. 13, No. 1 / pp.39-56 / 2009
  • This paper investigates ordering alternation phenomena in Korean using the dative construction data from the Sejong Corpus of Modern Korean (Kim, 2000). The paper first shows that syntactic weight and information structure are distinct and independent factors that influence word order in Korean. Moreover, it reveals that heaviness and givenness compete with each other and exert diverging effects on word order, which contrasts with the converging effects of these factors shown in the word orders of right-branching languages like English. The typological variation of the syntactic weight effect poses interesting theoretical and empirical questions, which are discussed in relation to processing efficiency in ordering.


A Novel Text to Image Conversion Method Using Word2Vec and Generative Adversarial Networks

  • LIU, XINRUI;Joe, Inwhee
    • 한국정보처리학회:학술대회논문집 / 한국정보처리학회 2019년도 춘계학술발표대회 / pp.401-403 / 2019
  • In this paper, we propose a generative adversarial network (GAN) based text-to-image generation method. In many natural language processing tasks, word representations are weighted by their term frequency-inverse document frequency (TF-IDF) scores. Word2Vec is a type of neural network model that, given an unlabeled corpus, produces vectors expressing the semantics of the words in that corpus; an image is then generated by GAN training according to the obtained vectors. Thanks to this understanding of the words, we can generate higher-quality and more realistic images. Our GAN structure is based on deep convolutional neural networks and pixel recurrent neural networks. Comparing the generated images with the real images, we obtain about 88% similarity on the Oxford-102 flowers dataset.
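The abstract above mentions TF-IDF weighting of words. A minimal sketch of how such scores are computed, on a toy tokenized corpus (this is the standard definition, not the paper's exact pipeline; smoothing variants are omitted):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute TF-IDF scores for each word of each tokenized document."""
    n = len(docs)
    # document frequency: in how many documents does each word appear?
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        # tf(w) * log(N / df(w)) for every word in the document
        scores.append({w: (c / total) * math.log(n / df[w])
                       for w, c in tf.items()})
    return scores

docs = [["flower", "red", "petal"], ["flower", "stem"], ["red", "car"]]
scores = tf_idf(docs)
```

Words that occur in fewer documents (like "stem") receive higher weights than words shared across documents (like "flower"), which is the property the weighting is used for.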

한국어 자연발화 음성코퍼스의 남성 모음 포먼트 연구 (A Study on the Male Vowel Formants of the Korean Corpus of Spontaneous Speech)

  • 김순옥;윤규철
    • 말소리와 음성과학 / Vol. 7, No. 2 / pp.95-102 / 2015
  • The purpose of this paper is to extract the vowel formants of ten adult male speakers in their twenties and thirties from the Korean Corpus of Spontaneous Speech [4], also known as the Seoul corpus, and to analyze them by comparing them to earlier work on the Buckeye Corpus of Conversational Speech [1] in terms of the various linguistic factors expected to affect the formant distribution. The vowels extracted from the Korean corpus were also compared to those of read Korean. The results showed that the distribution of the vowel formants from the Korean corpus was very different from that of read Korean speech. The comparison between the English corpus and read English speech showed similar patterns. The factors affecting the Korean vowel formants were the interviewer's sex, the location of the target vowel (or the syllable containing it) within the phrasal word or utterance, and the speech rate of the surrounding words.

코퍼스 빈도 정보 활용을 위한 적정 통계 모형 연구: 코퍼스 규모에 따른 타입/토큰의 함수관계 중심으로 (The Statistical Relationship between Linguistic Items and Corpus Size)

  • 양경숙;박병선
    • 한국언어정보학회지:언어와정보 / Vol. 7, No. 2 / pp.103-115 / 2003
  • In recent years, many organizations have been constructing their own large corpora to achieve corpus representativeness. However, there is no reliable guideline as to how large a corpus should be, especially for Korean corpora. In this study, we have devised a new statistical model, ARIMA (Autoregressive Integrated Moving Average), for predicting the relationship between linguistic items (the number of types) and corpus size (the number of tokens), overcoming major flaws of several previous studies on this issue. Finally, we illustrate that the ARIMA model presented is valid, accurate, and reliable. We are confident that this study can contribute to solving some inherent problems of corpus linguistics, such as corpus predictability, corpus representativeness, and linguistic comprehensiveness.
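The paper above models the type/token relationship with ARIMA; the sketch below only produces the empirical type-vs-token series such a model would be fit to (the tokenizer and step size are illustrative, and the actual ARIMA fitting is not shown):

```python
def type_token_curve(tokens, step):
    """Record the number of distinct types seen as the corpus grows token by token."""
    seen = set()
    curve = []
    for i, tok in enumerate(tokens, 1):
        seen.add(tok)
        if i % step == 0:
            # (corpus size in tokens, vocabulary size in types) so far
            curve.append((i, len(seen)))
    return curve

tokens = "a b c a b a d e a b c f a a b".split()
curve = type_token_curve(tokens, 5)
```

The resulting series grows ever more slowly, which is why extrapolating vocabulary size for a planned corpus needs a statistical model rather than a linear estimate.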


주택디자인에서 건축가들의 어휘 사용행태 및 기본어휘에 관한 연구 (A Study on the Lexicon-Use Behaviour of Architects & the Basic Lexicons in House Design)

  • 윤대한
    • 한국주거학회논문집 / Vol. 17, No. 5 / pp.27-37 / 2006
  • This paper statistically analyzed two corpora constructed from texts about house designs written by Korean architects and PA Awards architects. The main results are as follows. (1) The Korean house-design corpus contained 9,352 words and the PA Awards house-design corpus contained 2,379, i.e., 18.7% and 4.8%, respectively, of the roughly 50,000 words regarded as the vocabulary used in everyday life. (2) When the architects described their house designs, a lexicon-concentration phenomenon was pervasive in both groups; we can therefore infer that the high-frequency lexicons are very important in house design. (3) The architects' patterns of using house-design lexicons followed regular trends according to word frequency order, and the trend formulas had $R^{2}$ values above 90%. (4) In the Korean house-design corpus, the high-frequency lexicons were '공간', '층', '주택', '집', '대지', '거실', and '실'; in the PA Awards house-design corpus, they were 'house', 'room', 'space', 'living', 'wall', 'level', and 'area'. From these results, we can tell that 'space' is the highest-frequency word in house design for both groups, and that '대지' and 'wall' are the words that best reveal the differences between the two groups.
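The abstract reports trend formulas over word-frequency ranks with high $R^{2}$ values. One plausible form of such a fit, a least-squares power law on log rank vs. log frequency, can be sketched as follows (the toy word list and the power-law form are assumptions, not the paper's actual formulas):

```python
import math
from collections import Counter

def rank_frequency_fit(words):
    """Fit log(freq) = a + b*log(rank) by least squares; return (a, b, r_squared)."""
    freqs = sorted(Counter(words).values(), reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx                      # slope: how fast frequency falls with rank
    a = my - b * mx                    # intercept
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1 - ss_res / ss_tot   # r_squared: goodness of fit

a, b, r2 = rank_frequency_fit("space space space space wall wall house room".split())
```

A strongly negative slope with high $R^{2}$ is exactly the lexicon-concentration pattern the paper describes: a few words carry most of the usage.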

한국어 형태소 분석을 위한 효율적 기분석 사전의 구성 방법 (Construction of an Efficient Pre-analyzed Dictionary for Korean Morphological Analysis)

  • 곽수정;김보겸;이재성
    • 정보처리학회논문지:소프트웨어 및 데이터공학 / Vol. 2, No. 12 / pp.881-888 / 2013
  • A pre-analyzed dictionary is used to improve the speed and accuracy of a morphological analyzer and to reduce over-analysis. However, if the dictionary contains eojeols whose stored morphological analysis results are incomplete, i.e., insufficiently analyzed eojeols, it can actually degrade the analyzer's accuracy. In this paper, using the Sejong morphologically annotated corpus (written texts, 2011), we measured how the dictionary's correct-answer rate changes with corpus size and eojeol frequency. We then built an integrated system combining the statistics-based morphological analyzer SMA with a pre-analyzed dictionary, and confirmed that overall system performance improves when the dictionary's sufficient-analysis rate is at least 99.82%. In addition, the integrated system performed best when the dictionary was built from eojeols occurring 32 or more times in a 1.6-million-eojeol corpus, or 64 or more times in a 6.3-million-eojeol corpus.
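The frequency cut-off and the dictionary-first lookup described above can be sketched as follows; the eojeols, POS tags, and the 32-occurrence threshold are illustrative toy data, and the fallback function merely stands in for the SMA analyzer:

```python
from collections import Counter

def build_pre_dict(corpus_eojeols, analyses, min_freq):
    """Keep only eojeols that occur at least min_freq times in the corpus."""
    freq = Counter(corpus_eojeols)
    return {e: a for e, a in analyses.items() if freq[e] >= min_freq}

def analyze(eojeol, pre_dict, fallback):
    """Return the stored analysis if present; otherwise run the slower analyzer."""
    return pre_dict.get(eojeol) or fallback(eojeol)

# toy data: eojeol -> list of (morpheme, POS-tag) pairs (hypothetical analyses)
analyses = {"하늘이": [("하늘", "NNG"), ("이", "JKS")],
            "파랗다": [("파랗", "VA"), ("다", "EF")]}
corpus = ["하늘이"] * 40 + ["파랗다"] * 3
pre = build_pre_dict(corpus, analyses, min_freq=32)
```

Raising `min_freq` shrinks the dictionary but keeps only well-attested entries, which is the trade-off the paper tunes per corpus size.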

정보검색 기법과 동적 보간 계수를 이용한 N-gram 언어모델의 적응 (N- gram Adaptation Using Information Retrieval and Dynamic Interpolation Coefficient)

  • 최준기;오영환
    • 대한음성학회지:말소리 / No. 56 / pp.207-223 / 2005
  • The goal of language model adaptation is to improve the background language model with a relatively small adaptation corpus. This study presents a language model adaptation technique for the case where no additional text data for adaptation exist. We propose an information retrieval (IR) technique with N-gram language modeling to collect the adaptation corpus from the baseline text data. We also propose a dynamic language model interpolation coefficient to combine the background language model and the adapted language model. The interpolation coefficient is estimated from the word hypotheses obtained by segmenting the input speech data reserved as held-out validation data. This allows the final adapted model to improve the performance of the background model consistently. The proposed approach reduces the word error rate by $13.6\%$ relative to the baseline 4-gram model on two hours of broadcast news speech recognition.
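The interpolation step described above can be sketched for a toy bigram case; the probability tables, the grid of candidate coefficients, and the held-out selection by log-likelihood are illustrative assumptions, not the paper's exact estimator:

```python
import math

def interp_prob(prev, word, p_bg, p_ad, lam):
    """P(w | h) = lam * P_adapted(w | h) + (1 - lam) * P_background(w | h)."""
    return lam * p_ad.get((prev, word), 0.0) + (1 - lam) * p_bg.get((prev, word), 0.0)

def choose_lambda(heldout_bigrams, p_bg, p_ad, grid=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Pick the interpolation coefficient that maximizes held-out log-likelihood."""
    def loglik(lam):
        return sum(math.log(max(interp_prob(h, w, p_bg, p_ad, lam), 1e-12))
                   for h, w in heldout_bigrams)
    return max(grid, key=loglik)

# toy bigram probabilities: background vs. adapted model
p_bg = {("of", "the"): 0.5, ("the", "news"): 0.2}
p_ad = {("the", "news"): 0.8}
lam = choose_lambda([("the", "news")], p_bg, p_ad)
```

Because the coefficient is chosen per input (from its own word hypotheses), the combined model can lean toward the adapted model only when the held-out evidence supports it.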


AutoCor: A Query Based Automatic Acquisition of Corpora of Closely-related Languages

  • Dimalen, Davis Muhajereen D.;Roxas, Rachel Edita O.
    • 한국언어정보학회:학술대회논문집 / 한국언어정보학회 2007년도 정기학술대회 / pp.146-154 / 2007
  • AutoCor is a method for the automatic acquisition and classification of corpora of documents in closely-related languages. It is an extension and enhancement of CorpusBuilder, a system that automatically builds specific minority-language corpora from a closed corpus, since some Tagalog documents retrieved by CorpusBuilder are actually documents in other closely-related Philippine languages. AutoCor uses the odds ratio query generation method, and introduces the concept of common word pruning to differentiate between documents of closely-related Philippine languages and Tagalog. The performance of the system with and without pruning was compared, and common word pruning was found to improve the precision of the system.
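The two ideas named above, odds-ratio scoring and common word pruning, can be sketched as follows; the document-proportion inputs, the smoothing constant, and the toy Tagalog/Cebuano word lists are illustrative assumptions rather than AutoCor's actual implementation:

```python
def odds_ratio(p_target, p_other, eps=1e-6):
    """Odds ratio of a word occurring in target- vs. other-language documents."""
    p_t = min(max(p_target, eps), 1 - eps)   # clamp to avoid division by zero
    p_o = min(max(p_other, eps), 1 - eps)
    return (p_t / (1 - p_t)) / (p_o / (1 - p_o))

def prune_common(words_by_lang):
    """Drop words shared by every language's list (common word pruning)."""
    common = set.intersection(*map(set, words_by_lang.values()))
    return {lang: [w for w in ws if w not in common]
            for lang, ws in words_by_lang.items()}

pruned = prune_common({"tl": ["ang", "ng", "bahay"],
                       "ceb": ["ang", "ng", "balay"]})
```

Words with a high odds ratio are good query terms for the target language; pruning removes function words that closely-related languages share and that would otherwise retrieve documents from the wrong language.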


Environment for Translation Domain Adaptation and Continuous Improvement of English-Korean Machine Translation System

  • Kim, Sung-Dong;Kim, Namyun
    • International Journal of Internet, Broadcasting and Communication / Vol. 12, No. 2 / pp.127-136 / 2020
  • This paper presents an environment for a rule-based English-Korean machine translation system that supports translation domain adaptation and continuous translation quality improvement. For these purposes, a corpus is essential, from which the necessary information for translation will be acquired. The environment consists of a corpus construction part and a translation knowledge extraction part. The corpus construction part crawls news articles from some newspaper sites. The extraction part builds translation knowledge such as newly-created words, compound words, collocation information, distributional word representations, and so on. For translation domain adaptation, the corpus for the domain should be built and the translation knowledge constructed from that corpus. For continuous improvement, the corpus needs to be continuously expanded and the translation knowledge enhanced from the expanded corpus. The proposed web-based environment is expected to facilitate the tasks of domain adaptation and translation system improvement.

음절 bigram를 이용한 띄어쓰기 오류의 자동 교정 (Automatic Correction of Word-spacing Errors Using Syllable Bigrams)

  • 강승식
    • 음성과학 / Vol. 8, No. 2 / pp.83-90 / 2001
  • We propose a probabilistic approach using syllable bigrams for the word-spacing problem. Syllable bigrams were extracted and their frequencies calculated from a large corpus of 12 million words. Based on the syllable bigrams, we performed three experiments: (1) automatic word-spacing, (2) detection and correction of word-spacing errors for a spelling checker, and (3) automatic insertion of a space at the end of a line in a character recognition system. Experimental results show accuracy ratios of 97.7%, 82.1%, and 90.5%, respectively.
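The core decision in such a bigram approach, whether to insert a space between two adjacent syllables, can be sketched as follows; the probability table, the syllables, and the 0.5 threshold are illustrative assumptions, not the paper's trained model:

```python
def insert_spaces(syllables, bigram_space_prob, threshold=0.5):
    """Insert a space between syllables s_i, s_{i+1} when the estimated
    probability of a space at that boundary exceeds the threshold."""
    out = []
    for i, s in enumerate(syllables):
        out.append(s)
        if i + 1 < len(syllables):
            if bigram_space_prob.get((s, syllables[i + 1]), 0.0) > threshold:
                out.append(" ")
    return "".join(out)

# toy probabilities: P(space | left syllable, right syllable)
probs = {("은", "지"): 0.9, ("지", "금"): 0.1}
spaced = insert_spaces(list("은지금"), probs)
```

In a real system the probabilities would be estimated from the bigram frequencies in the 12-million-word corpus, counting how often each syllable pair straddles a space.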
