• Title/Summary/Keyword: parsed corpus

Search results: 10

Opinion: Strategy of Semi-Automatically Annotating a Full-Text Corpus of Genomics & Informatics

  • Park, Hyun-Seok
    • Genomics & Informatics
    • /
    • v.16 no.4
    • /
    • pp.40.1-40.3
    • /
    • 2018
  • There is a communal need for an annotated corpus consisting of the full texts of biomedical journal articles. In response to community needs, a prototype version of the full-text corpus of Genomics & Informatics, called GNI version 1.0, has recently been published, with 499 annotated full-text articles available as a corpus resource. However, GNI needs to be updated, as the texts were shallow-parsed and annotated with several existing parsers. I list issues associated with upgrading annotations and give an opinion on the methodology for developing the next version of the GNI corpus, based on a semi-automatic strategy for more linguistically rich corpus annotation.
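
A minimal sketch of the kind of shallow parsing and chunk-level annotation the opinion refers to, using NLTK as an assumed stand-in for the "existing parsers" mentioned in the abstract; the chunk grammar and the sample sentence are illustrative, not part of the GNI pipeline:

    import nltk

    # One-time downloads assumed available:
    # nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

    def shallow_annotate(text):
        """Shallow-parse raw article text into POS-tagged, NP-chunked sentences."""
        grammar = "NP: {<DT>?<JJ>*<NN.*>+}"        # simple noun-phrase chunk rule
        chunker = nltk.RegexpParser(grammar)
        annotated = []
        for sent in nltk.sent_tokenize(text):
            tagged = nltk.pos_tag(nltk.word_tokenize(sent))
            annotated.append(chunker.parse(tagged))  # shallow (partial) parse tree
        return annotated

    for tree in shallow_annotate("The annotated corpus contains 499 full-text articles."):
        print(tree)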

Word Order and Cliticization in Sakizaya: A Corpus-based Approach

  • Lin, Chihkai
    • Asia Pacific Journal of Corpus Research
    • /
    • v.1 no.2
    • /
    • pp.41-56
    • /
    • 2020
  • This paper aims to investigate how word order interacts with cliticization in Sakizaya, a Formosan language. It examines nominative and genitive case markers from a corpus-based approach. The data are collected from an online dictionary of Sakizaya and classified into two word orders: nominative case marker preceding genitive case marker, and vice versa. The data are also divided into three categories according to the demarcation of the case markers: right, left, or no demarcation. The corpus includes 700 sentences in the construction predicate + noun phrase + noun phrase. The results suggest that the two case markers tend to be parsed into the preceding word and show right demarcation. The results also reveal a type difference and a distance effect of the case markers on cliticization. Nominative case markers show more right demarcation than genitive case markers do in the corpus. Also, the closer the case markers are to the predicate, the more likely they are to undergo cliticization.
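
A toy sketch of the corpus-based counting such a study involves: classifying each sentence by which case marker precedes the other. The marker forms (ku for nominative, nu for genitive) and the example sentences are placeholders, not data from the paper:

    from collections import Counter

    # Hypothetical markers: "ku" as nominative, "nu" as genitive (placeholders).
    NOM, GEN = "ku", "nu"

    def order_of(sentence):
        """Return which case marker precedes the other in a tokenized sentence."""
        tokens = sentence.split()
        try:
            i, j = tokens.index(NOM), tokens.index(GEN)
        except ValueError:
            return None                      # one of the markers is absent
        return "NOM<GEN" if i < j else "GEN<NOM"

    corpus = [
        "mukan ku wawa nu mama",             # invented example sentences
        "mukan nu mama ku wawa",
    ]
    print(Counter(order_of(s) for s in corpus))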

Korean Nominal Bank, Using Language Resources of Sejong Project (세종계획 언어자원 기반 한국어 명사은행)

  • Kim, Dong-Sung
    • Language and Information
    • /
    • v.17 no.2
    • /
    • pp.67-91
    • /
    • 2013
  • This paper describes the Korean Nominal Bank, a project that provides argument structures for instances of predicative nouns in the Sejong parsed corpus. We use the language resources of the Sejong project so that the same set of data is annotated with additional levels of annotation, since a new language-resource building project carried out in isolation would yield separate, disconnected information. Our annotation scheme is based on the Sejong electronic dictionary, the semantically tagged corpus, and the syntactically analyzed corpus. The work also draws on deep linguistic knowledge of the syntax-semantics interface in general. We consider semantic theories including the Frame Semantics of Fillmore (1976), the argument structure of Grimshaw (1990), and the argument alternations of Levin (1993) and Levin and Rappaport Hovav (2005). Various syntactic theories are needed to explain various sentence types, including empty categories, raising, and left (or right) dislocation. We also need to account for idiosyncratic lexical features such as collocation.
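
A small sketch of a data structure that could hold one Nominal Bank instance, i.e. a predicative noun linked to its arguments in a Sejong corpus sentence; the role labels, case fields, and sentence identifier below are hypothetical, not the project's actual annotation scheme:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Argument:
        role: str          # e.g. "ARG0", "ARG1" (hypothetical label set)
        surface: str       # the noun phrase realizing the argument
        case: str          # surface case marker, e.g. "이/가", "을/를"

    @dataclass
    class NominalInstance:
        predicate: str                 # the predicative noun, e.g. "연구"
        sentence_id: str               # pointer into the parsed Sejong corpus
        arguments: List[Argument] = field(default_factory=list)

    inst = NominalInstance(
        predicate="연구",
        sentence_id="BSAA0001-00042",  # made-up identifier
        arguments=[Argument("ARG0", "학자들", "이/가"),
                   Argument("ARG1", "방언", "을/를")],
    )
    print(inst)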

Aspects of Language Use in Newspaper Articles: A Corpus Linguistic Perspective (신문 기사의 언어 사용 양상: 코퍼스언어학적 접근)

  • Song, Kyung-Hwa;Kang, Beom-Mo
    • Korean Journal of Cognitive Science
    • /
    • v.17 no.4
    • /
    • pp.255-269
    • /
    • 2006
  • The purpose of this study is to analyze newspaper articles from a corpus-linguistic point of view. We used a large corpus of newspaper articles built from the <21st Century Sejong Project> and counted occurrences of certain expressions. A newspaper article is divided into the headline, the lead, and the body. We tried to work out how to measure the characteristics of indication and compression that are typical of headlines. Then we focused on the differences between the headline and the lead. Finally, we analyzed the sentence structure and measured the ratio of the frequency of common nouns in the body. This study verifies existing stylistic theories of newspapers and shows new aspects of language use in newspaper articles. Texts like newspaper articles are the results of human language processing, and they in turn affect the development of language-related cognitive ability.
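
A minimal sketch of one of the measurements described above, the share of common nouns in a section of an article, assuming Sejong-style POS tags; the tiny hand-tagged example is invented:

    from collections import Counter

    def common_noun_ratio(tagged_tokens):
        """Share of common-noun tokens (tag 'NNG' in Sejong-style tagsets)."""
        tags = Counter(tag for _, tag in tagged_tokens)
        total = sum(tags.values())
        return tags["NNG"] / total if total else 0.0

    # Tiny hand-tagged example standing in for one article section.
    body = [("정부", "NNG"), ("는", "JX"), ("내년", "NNG"),
            ("예산", "NNG"), ("을", "JKO"), ("늘린다", "VV")]
    print(f"common-noun ratio: {common_noun_ratio(body):.2f}")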

Dependency Grammar and the Parsing of Chinese Sentences

  • Lai, Bong-Yeung Tom;Huang, Changning
    • Proceedings of the Korean Society for Language and Information Conference
    • /
    • 1994.02a
    • /
    • pp.63-72
    • /
    • 1994
  • Dependency Grammar has been used by linguists as the basis of the syntactic components of their grammar formalisms. It has also been used in natural language parsing. In China, attempts have been made to use this grammar formalism to parse Chinese sentences using corpus-based techniques. This paper reviews the properties of Dependency Grammar as embodied in four axioms for the well-formedness conditions of dependency structures. It is shown that allowing multiple governors, as done by some followers of this formalism, is unnecessary. The practice of augmenting Dependency Grammar with functional labels is discussed in the light of building functional structures when the sentence is parsed. This will also facilitate semantic interpretation.
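
A short sketch of checking some of the well-formedness conditions discussed above on a dependency structure, under the usual single-governor representation where each word stores the index of its head; the exact formulation of the paper's four axioms may differ:

    def is_well_formed(heads):
        """Check core well-formedness conditions on a dependency structure.

        `heads[i]` is the index of the governor of word i, or None for the root.
        Checked here: exactly one root, a single governor per word (guaranteed
        by the list representation), and no cycles, so every word reaches the root.
        """
        roots = [i for i, h in enumerate(heads) if h is None]
        if len(roots) != 1:
            return False
        for i in range(len(heads)):
            seen, node = set(), i
            while heads[node] is not None:
                if node in seen:
                    return False             # cycle detected
                seen.add(node)
                node = heads[node]
        return True

    # "She reads books": reads (index 1) governs She (0) and books (2).
    print(is_well_formed([1, None, 1]))      # True
    print(is_well_formed([1, 0, 1]))         # False: cycle, no root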

Bracketing Input for Accurate Parsing

  • No, Yong-Kyoon
    • Proceedings of the Korean Society for Language and Information Conference
    • /
    • 2007.11a
    • /
    • pp.358-364
    • /
    • 2007
  • Syntax parsers can benefit from speakers' intuition about constituent structures, indicated in the input string in the form of parentheses. Focusing on languages like Korean, whose orthographic convention requires more than one word to be written without intervening spaces, we describe an algorithm for passing the bracketing information across the tagger to the probabilistic CFG parser, together with one for raising (or penalizing, as the case may be) the probabilities of putative constituents as they are suggested by the parser. It is shown that two or three constituents marked in the input suffice to guide the parser to the correct parse as the most likely one, even for sentences that are considered long.
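
A rough sketch of the rescoring idea: boost the probability of a putative constituent whose span matches a speaker-supplied bracket and penalize one that crosses it. The boost and penalty constants are illustrative assumptions, not values from the paper:

    def rescore(span, base_prob, brackets, boost=10.0, penalty=0.1):
        """Adjust a putative constituent's probability using input brackets.

        `span` and each bracket are (start, end) word offsets.  A span that
        matches a bracket is boosted; one that crosses a bracket is penalized.
        """
        s, e = span
        for bs, be in brackets:
            if (s, e) == (bs, be):
                return base_prob * boost
            if s < bs < e < be or bs < s < be < e:   # crossing brackets
                return base_prob * penalty
        return base_prob

    brackets = [(2, 5)]                      # speaker-marked constituent
    print(rescore((2, 5), 1e-4, brackets))   # boosted
    print(rescore((3, 6), 1e-4, brackets))   # penalized (crosses the bracket)
    print(rescore((0, 2), 1e-4, brackets))   # unchanged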

Syntactic Category Prediction for Improving Parsing Accuracy in English-Korean Machine Translation (영한 기계번역에서 구문 분석 정확성 향상을 위한 구문 범주 예측)

  • Kim Sung-Dong
    • The KIPS Transactions:PartB
    • /
    • v.13B no.3 s.106
    • /
    • pp.345-352
    • /
    • 2006
  • A practical English-Korean machine translation system should be able to translate long sentences quickly and accurately. The intra-sentence segmentation method has previously been proposed and shown to speed up syntactic analysis. This paper proposes a syntactic category prediction method using decision trees to obtain accurate parsing results. In parsing with segmentation, each segment is parsed separately, and the results are combined to generate the sentence structure. Syntactic category prediction makes it easier to select more accurate analysis structures after the partial parsing, and thus improves parsing accuracy. We construct features for predicting syntactic categories from the parsed Wall Street Journal corpus and generate decision trees. In the experiments, we compare the performance with predictions made by human-built rules, trigram probabilities, and neural networks. We also present how much the category prediction contributes to improving translation quality.
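
A minimal sketch of syntactic category prediction with a decision tree, using scikit-learn; the features (neighboring POS tags) and labels are invented stand-ins for the ones extracted from the Wall Street Journal corpus in the paper:

    from sklearn.feature_extraction import DictVectorizer
    from sklearn.tree import DecisionTreeClassifier

    # Toy training data: features around a segmentation point -> syntactic
    # category of the following segment (all values invented for illustration).
    X_dicts = [
        {"prev_tag": "IN",  "next_tag": "DT"},
        {"prev_tag": "VBD", "next_tag": "DT"},
        {"prev_tag": ",",   "next_tag": "WDT"},
        {"prev_tag": "VBZ", "next_tag": "TO"},
    ]
    y = ["NP", "NP", "SBAR", "VP"]

    vec = DictVectorizer(sparse=False)
    clf = DecisionTreeClassifier(random_state=0)
    clf.fit(vec.fit_transform(X_dicts), y)

    test = {"prev_tag": "IN", "next_tag": "DT"}
    print(clf.predict(vec.transform([test])))    # -> ['NP']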

Cascaded Parsing Korean Sentences Using Grammatical Relations (문법관계 정보를 이용한 단계적 한국어 구문 분석)

  • Lee, Song-Wook
    • The KIPS Transactions:PartB
    • /
    • v.15B no.1
    • /
    • pp.69-72
    • /
    • 2008
  • This study aims to identify dependency structures in Korean sentences through cascaded chunking. In the first stage of the cascade, we find NP chunks and guess grammatical relations (GRs) using Support Vector Machine (SVM) classifiers for all possible modifier-head pairs of chunks, with GR categories such as subject, object, complement, and adverbial. In the subsequent stages, we filter out incorrect modifier-head relations in each cascade for its corresponding GR, using the SVM classifiers and characteristics of Korean such as the distance between relations, no-crossing, and case properties. In an experiment with a parsed and GR-tagged corpus used for training the proposed parser, we achieved an overall accuracy of 85.7%.
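
A toy sketch of the first-stage idea, classifying candidate modifier-head chunk pairs into grammatical-relation categories with an SVM; the features (case particle, head POS, chunk distance) and training pairs are invented for illustration:

    from sklearn.feature_extraction import DictVectorizer
    from sklearn.svm import LinearSVC

    # Toy candidate modifier-head chunk pairs with invented features.
    pairs = [
        {"mod_josa": "가", "head_pos": "VV", "dist": 1},
        {"mod_josa": "를", "head_pos": "VV", "dist": 1},
        {"mod_josa": "에", "head_pos": "VV", "dist": 2},
        {"mod_josa": "가", "head_pos": "VA", "dist": 3},
    ]
    labels = ["subject", "object", "adverbial", "subject"]

    vec = DictVectorizer()
    clf = LinearSVC()
    clf.fit(vec.fit_transform(pairs), labels)

    candidate = {"mod_josa": "를", "head_pos": "VV", "dist": 2}
    print(clf.predict(vec.transform([candidate])))   # e.g. ['object']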

Sentiment Analysis System Using Stanford Sentiment Treebank (스탠포드 감성 트리 말뭉치를 이용한 감성 분류 시스템)

  • Lee, Songwook
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.39 no.3
    • /
    • pp.274-279
    • /
    • 2015
  • The main goal of this research is to build a sentiment analysis system that automatically classifies user opinions in the Stanford Sentiment Treebank into three sentiment classes: positive, negative, and neutral. First, sentiment sentences are POS-tagged and parsed into dependency structures. All nodes of the Treebank and their polarities are automatically extracted from the Treebank. We train two Support Vector Machine models, one for node-level classification and the other for sentence-level classification. We tried various types of features, such as word lexicons, POS tags, sentiment lexicons, head-modifier relations, and sibling relations. Although we obtained 74.2% accuracy on the test set for 3-class node-level classification and 67.0% for 3-class sentence-level classification, our experimental results for 2-class classification are comparable to those of the state-of-the-art system using the same corpus.
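
A minimal sketch of the sentence-level classifier described above, using a bag-of-words SVM from scikit-learn; the tiny example sentences stand in for the Stanford Sentiment Treebank, and the feature set is far simpler than the one used in the paper:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline

    # Tiny stand-in for the treebank's sentence-level portion.
    sentences = ["a masterful , moving film", "dull and painfully slow",
                 "it is a movie", "funny and touching", "a waste of time",
                 "the film opens today"]
    labels = ["positive", "negative", "neutral",
              "positive", "negative", "neutral"]

    model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LinearSVC())
    model.fit(sentences, labels)
    print(model.predict(["a touching and funny film"]))   # likely ['positive']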

Restoring Omitted Sentence Constituents in Encyclopedia Documents Using Structural SVM (Structural SVM을 이용한 백과사전 문서 내 생략 문장성분 복원)

  • Hwang, Min-Kook;Kim, Youngtae;Ra, Dongyul;Lim, Soojong;Kim, Hyunki
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.131-150
    • /
    • 2015
  • Omission of noun phrases in obligatory cases is a common phenomenon in Korean and Japanese sentences that is not observed in English. In encyclopedia texts, when an argument of a predicate can be filled with a noun phrase co-referential with the title, the argument is especially likely to be omitted. The omitted noun phrase is called a zero anaphor or zero pronoun. Encyclopedias like Wikipedia are a major source for information extraction by intelligent application systems such as information retrieval and question answering systems, but the omission of noun phrases degrades the quality of information extraction. This paper deals with the problem of developing a system that can restore omitted noun phrases in encyclopedia documents. The problem our system addresses is very similar to zero anaphora resolution, one of the important problems in natural language processing. A noun phrase existing in the text that can be used for restoration is called an antecedent; an antecedent must be co-referential with the zero anaphor. While the candidates for the antecedent are only noun phrases in the same text in zero anaphora resolution, the title is also a candidate in our problem. In our system, the first stage detects the zero anaphor. In the second stage, antecedent search is carried out over the candidates. If the antecedent search fails, an attempt is made in the third stage to use the title as the antecedent. The main characteristic of our system is its use of a structural SVM for finding the antecedent. The noun phrases in the text that appear before the position of the zero anaphor comprise the search space. The main technique in previously proposed methods is to perform binary classification for all the noun phrases in the search space and select the noun phrase classified as an antecedent with the highest confidence. In this paper, however, we propose that antecedent search be viewed as the problem of assigning antecedent-indicator labels to a sequence of noun phrases; in other words, sequence labeling is employed in antecedent search. We are the first to suggest this idea. To perform sequence labeling, we use a structural SVM that receives a sequence of noun phrases as input and returns a sequence of labels as output, where each label indicates whether the corresponding noun phrase is the antecedent or not. The structural SVM we used is based on the modified Pegasos algorithm, which exploits a subgradient descent methodology for optimization problems. To train and test our system, we selected a set of Wikipedia texts and constructed an annotated corpus in which gold-standard answers, such as zero anaphors and their possible antecedents, are provided. Training examples were prepared from the annotated corpus and used to train the SVMs and test the system. For zero anaphor detection, sentences are parsed by a syntactic analyzer and omitted subject or object cases are identified; thus the performance of our system depends on that of the syntactic analyzer, which is a limitation of our system. When an antecedent is not found in the text, our system tries to use the title to restore the zero anaphor, based on binary classification with a regular SVM. The experiment showed that our system achieves F1 = 68.58%, which means a state-of-the-art system can be developed with our technique. It is expected that future work enabling the system to utilize semantic information could lead to a significant performance improvement.
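
A simplified sketch of antecedent search: scoring each candidate noun phrase that precedes the zero anaphor with an ordinary binary SVM and picking the best one. The paper instead labels the whole candidate sequence with a structural SVM based on the modified Pegasos algorithm; the features and examples below are invented:

    from sklearn.feature_extraction import DictVectorizer
    from sklearn.svm import LinearSVC

    # Invented training features for candidate noun phrases:
    # 1 = is the antecedent, 0 = is not.
    train_feats = [
        {"is_subject": 1, "dist": 1, "same_case": 1},
        {"is_subject": 0, "dist": 4, "same_case": 0},
        {"is_subject": 1, "dist": 2, "same_case": 1},
        {"is_subject": 0, "dist": 6, "same_case": 0},
    ]
    train_labels = [1, 0, 1, 0]

    vec = DictVectorizer()
    clf = LinearSVC()
    clf.fit(vec.fit_transform(train_feats), train_labels)

    # Candidates preceding a zero anaphor in one document (invented).
    candidates = [("위키백과", {"is_subject": 1, "dist": 2, "same_case": 1}),
                  ("문서",     {"is_subject": 0, "dist": 5, "same_case": 0})]
    scores = clf.decision_function(vec.transform([f for _, f in candidates]))
    best = max(zip(candidates, scores), key=lambda p: p[1])
    print("restored antecedent:", best[0][0])   # likely 위키백과 with these toy features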