• Title/Summary/Keyword: lexical resource

13 search results

A Structural Analysis of Dictionary Text for the Construction of Lexical Data Base (어휘정보구축을 위한 사전텍스트의 구조분석 및 변환)

  • 최병진
    • Language and Information
    • /
    • v.6 no.2
    • /
    • pp.33-55
    • /
    • 2002
  • This research aims at transforming the definition text of an English-English-Korean Dictionary (EEKD), which is encoded in text files for publishing purposes, into a structured format for a Lexical Data Base (LDB). The construction of an LDB is very time-consuming and expensive work. In order to save time and effort in building new lexical information, the present study tries to extract useful linguistic information from an existing printed dictionary. In this paper, the process of extracting and structuring lexical information from a printed dictionary (EEKD) as a lexical resource is described. The extracted information is represented in XML format, which can be transformed into other representations for different application requirements.
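
The extraction step described above, from flat publishing files to structured XML, can be sketched as follows. The entry fields and XML layout here are hypothetical, a minimal stand-in for the EEKD's actual structure.

```python
import xml.etree.ElementTree as ET

def entry_to_xml(headword, pos, senses):
    """Convert one flat dictionary entry into a structured XML element.

    `headword`, `pos`, and the list of sense strings stand in for fields
    that a real EEKD extraction step would recover from the publishing files.
    """
    entry = ET.Element("entry")
    ET.SubElement(entry, "headword").text = headword
    ET.SubElement(entry, "pos").text = pos
    senses_el = ET.SubElement(entry, "senses")
    for i, gloss in enumerate(senses, start=1):
        sense = ET.SubElement(senses_el, "sense", n=str(i))
        sense.text = gloss
    return entry

# Toy entry: an English headword with two numbered senses.
xml_bytes = ET.tostring(entry_to_xml("bank", "noun",
                                     ["a financial institution",
                                      "the side of a river"]))
```

Because the output is plain XML, the same tree can later be transformed (e.g. via XSLT) into whatever representation a downstream application needs, as the abstract notes.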


A Corpus-based Lexical Analysis of the Speech Texts: A Collocational Approach

  • Kim, Nahk-Bohk
    • English Language & Literature Teaching
    • /
    • v.15 no.3
    • /
    • pp.151-170
    • /
    • 2009
  • Recently, speech texts have been increasingly used for English education because of their various advantages as language teaching and learning materials. The purpose of this paper is to analyze speech texts with a corpus-based lexical approach and to suggest some productive methods that utilize them as the main resource for English speaking or writing courses, along with introducing actual classroom adaptations. First, this study shows that a speech corpus has some unique features, such as different selections of pronouns, nouns, and lexical chunks, in comparison to a general corpus. Next, from a collocational perspective, the study demonstrates that the speech corpus consists of a wide variety of collocations and lexical chunks of the kind a number of linguists describe (Lewis, 1997; McCarthy, 1990; Willis, 1990). In other words, the speech corpus suggests not only that speech texts have considerable lexical potential that could be exploited to facilitate chunk-learning, but also that learners are not very likely to unlock this potential autonomously. Based on this result, teachers can develop a learners' corpus and use it by chunking the speech text. This new approach of adapting speech samples as important materials for college students' speaking or writing ability should be implemented as shown in the samplers. Finally, to foster learners' productive skills more communicatively, a few practical suggestions are made, such as chunking and windowing chunks of speech and presentation, and the pedagogical implications are discussed.
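
A minimal corpus-based collocation pass of the kind the study describes can be sketched with plain frequency counts. The toy "speech corpus" below is invented for illustration; real work would add association measures such as mutual information.

```python
from collections import Counter

def bigram_collocations(tokens, min_count=2):
    """Count adjacent word pairs; pairs that recur are collocation candidates."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return {pair: n for pair, n in pairs.items() if n >= min_count}

# Invented speech-like sample, standing in for a real speech corpus.
speech = ("thank you very much thank you very much "
          "ladies and gentlemen thank you").split()
collocs = bigram_collocations(speech)
```

Recurrent chunks surfaced this way ("thank you", "very much") are exactly the units a teacher could extract for the chunk-learning activities the paper proposes.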


Automatic Acquisition of Lexical-Functional Grammar Resources from a Japanese Dependency Corpus

  • Oya, Masanori;Genabith, Josef Van
    • Proceedings of the Korean Society for Language and Information Conference
    • /
    • 2007.11a
    • /
    • pp.375-384
    • /
    • 2007
  • This paper describes a method for the automatic acquisition of wide-coverage treebank-based deep linguistic resources for Japanese, as part of a project on treebank-based induction of multilingual resources in the framework of Lexical-Functional Grammar (LFG). We automatically annotate LFG f-structure functional equations (i.e. labelled dependencies) onto the Kyoto Text Corpus version 4.0 (KTC4) (Kurohashi and Nagao 1997) and onto the output of the Kurohashi-Nagao Parser (KNP) (Kurohashi and Nagao 1998), a dependency parser for Japanese. The original KTC4 and KNP provide unlabelled dependencies. Our method also includes zero pronoun identification. The performance of the f-structure annotation algorithm with zero-pronoun identification on KTC4 is evaluated against a manually corrected gold standard of 500 sentences randomly chosen from KTC4, and results in a pred-only dependency f-score of 94.72%. Parsing experiments on KNP output yield a pred-only dependency f-score of 82.08%.
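
A pred-only dependency f-score of the kind reported above amounts to a set comparison over labelled dependency triples. The triples and helper below are illustrative, not the paper's actual evaluation code.

```python
def dependency_fscore(gold, predicted):
    """F1 over sets of labelled dependency triples (head, label, dependent)."""
    gold, predicted = set(gold), set(predicted)
    correct = len(gold & predicted)
    if not correct:
        return 0.0
    precision = correct / len(predicted)
    recall = correct / len(gold)
    return 2 * precision * recall / (precision + recall)

# Toy gold vs. system dependencies for one sentence (invented).
gold = {("eat", "SUBJ", "cat"), ("eat", "OBJ", "fish"), ("fish", "MOD", "raw")}
pred = {("eat", "SUBJ", "cat"), ("eat", "OBJ", "fish"), ("eat", "MOD", "raw")}
f1 = dependency_fscore(gold, pred)
```

Here two of three dependencies match, so precision and recall are both 2/3, giving an F1 of 2/3; the paper's 94.72% and 82.08% figures are this measure computed over whole corpora.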


A Study on the Utilization Plan of Lexical Resources for Disaster and Safety Information Management Based on Current Status Analysis (재난안전정보 관리를 위한 어휘자원 현황분석 및 활용방안)

  • Jeong, Him-Chan;Kim, Tae-Young;Kim, Yong;Oh, Hyo-Jung
    • Journal of the Korean Society for Information Management
    • /
    • v.34 no.2
    • /
    • pp.137-158
    • /
    • 2017
  • Disaster has a direct influence on people's lives, bodies, and property. For effective and rapid disaster response, a coordination process based on sharing and utilizing disaster information is an essential requirement. Disaster and safety control agencies produce and manage heterogeneous information, and they also develop and use word dictionaries individually. This is a major obstacle for practitioners retrieving and accessing disaster and safety information. To solve this problem, standardization of lexical resources related to disaster and safety is essential. In this paper, we conducted a current-status analysis of lexical resources in the disaster and safety domain. Consequently, we identified the characteristics of each lexical group, and we then proposed a utilization plan of lexical resources for disaster and safety information management.

Alignment of Hypernym-Hyponym Noun Pairs between Korean and English, Based on the EuroWordNet Approach (유로워드넷 방식에 기반한 한국어와 영어의 명사 상하위어 정렬)

  • Kim, Dong-Sung
    • Language and Information
    • /
    • v.12 no.1
    • /
    • pp.27-65
    • /
    • 2008
  • This paper presents a set of methodologies for aligning hypernym-hyponym noun pairs between Korean and English, based on the EuroWordNet approach. Following the methods used in EuroWordNet, our approach makes extensive use of WordNet in a four-step building process: 1) monolingual dictionaries are used to extract proper hypernym-hyponym noun pairs, 2) a bilingual dictionary is used to translate the extracted pairs, 3) WordNet is used as the backbone of the alignment criteria, and 4) WordNet is used to select the most similar pair among the candidates. The importance of this study lies not only in enriching semantic links between two languages, but also in integrating lexical resources based on a language-specific and language-dependent structure. Our approach aims at building an accurate and detailed lexical resource with proper measures rather than at the fast development of a generic one using NLP techniques.
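
The translate-then-verify core of the alignment can be sketched with toy stand-ins for the bilingual dictionary and the WordNet hypernym relation; all entries below are hypothetical.

```python
# Toy bilingual dictionary and toy WordNet-style hypernym table
# (invented entries, standing in for the real resources).
KO_EN = {"동물": ["animal"], "개": ["dog"], "과일": ["fruit"], "사과": ["apple"]}
EN_HYPERNYM = {"dog": "animal", "apple": "fruit", "car": "vehicle"}

def align_pair(ko_hyper, ko_hypo):
    """Translate a Korean hypernym-hyponym pair and keep the English
    candidate pair that the WordNet-style relation confirms."""
    for hyper in KO_EN.get(ko_hyper, []):
        for hypo in KO_EN.get(ko_hypo, []):
            if EN_HYPERNYM.get(hypo) == hyper:
                return (hyper, hypo)
    return None

aligned = align_pair("동물", "개")
```

Real translations are ambiguous (many English candidates per Korean noun), which is why the paper's step 4 uses WordNet similarity to choose among candidates rather than the first confirmed hit used here.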


Ontology-based models of legal knowledge

  • Sagri, Maria-Teresa;Tiscornia, Daniela
    • Proceedings of the Korea Digital Policy Society Conference
    • /
    • 2004.11a
    • /
    • pp.111-127
    • /
    • 2004
  • In this paper we describe an application of the lexical resource JurWordNet and of the Core Legal Ontology as a descriptive vocabulary for modeling legal domains. It can be viewed as the semantic component of a global standardisation framework for digital government. A content description model provides a repository of structured knowledge aimed at supporting semantic interoperability between sectors of Public Administration and the communication processes towards citizens. Specific conceptual models built from this base will act as a cognitive interface able to cope with specific digital government issues and to improve the interaction between citizens and Public Bodies. As a case study, the representation of the click-on licences for re-using Public Sector Information is presented.


Korean Nominal Bank, Using Language Resources of Sejong Project (세종계획 언어자원 기반 한국어 명사은행)

  • Kim, Dong-Sung
    • Language and Information
    • /
    • v.17 no.2
    • /
    • pp.67-91
    • /
    • 2013
  • This paper describes the Korean Nominal Bank, a project that provides argument structure for instances of predicative nouns in the Sejong parsed corpus. We use the language resources of the Sejong project so that the same set of data is annotated with more and more levels of annotation, since a new, separate resource-building project would otherwise produce isolated information. Our annotation scheme is based on the Sejong electronic dictionary, the semantically tagged corpus, and the syntactically analyzed corpus. Our work also involves deep linguistic knowledge of the syntax-semantics interface in general. We consider semantic theories including the Frame Semantics of Fillmore (1976), the argument structure of Grimshaw (1990), and the argument alternations of Levin (1993) and Levin and Rappaport Hovav (2005). Various syntactic theories are needed to explain various sentence types, including empty categories, raising, and left (or right) dislocation. We also need an account of idiosyncratic lexical features such as collocation.
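
An annotation record for one predicative-noun instance might look like the sketch below. The field names and role labels are illustrative, not the actual Sejong/Nominal Bank schema.

```python
from dataclasses import dataclass, field

@dataclass
class NominalInstance:
    """One annotated instance of a predicative noun in a parsed sentence.

    Hypothetical fields: a real entry would link into the Sejong parsed
    corpus and electronic dictionary rather than hold plain strings.
    """
    lemma: str                                # predicative noun, e.g. 연구 'research'
    sense_id: str                             # sense key from an electronic dictionary
    args: dict = field(default_factory=dict)  # role label -> surface phrase

inst = NominalInstance(lemma="연구", sense_id="연구_01",
                       args={"ARG0": "학생들이", "ARG1": "언어를"})
```

Keeping the roles as an open mapping makes it easy to layer such records on top of the existing syntactic and semantic annotation levels, which is the reuse the abstract emphasizes.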


Construction of Vietnamese SentiWordNet by using Vietnamese Dictionary (베트남어 사전을 사용한 베트남어 SentiWordNet 구축)

  • Vu, Xuan-Son;Park, Seong-Bae
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2014.04a
    • /
    • pp.745-748
    • /
    • 2014
  • SentiWordNet is an important lexical resource supporting sentiment analysis in opinion mining applications. In this paper, we propose a novel approach to constructing a Vietnamese SentiWordNet (VSWN). A SentiWordNet is typically generated from WordNet, in which each synset has numerical scores indicating its opinion polarities, and many previous studies obtained these scores by applying a machine learning method to WordNet. Unfortunately, however, no Vietnamese WordNet was available at the time of writing. Therefore, we propose a method to construct a VSWN from a Vietnamese dictionary rather than from WordNet. We show the effectiveness of the proposed method by automatically generating a VSWN with 39,561 synsets. The method is experimentally tested on 266 synsets with respect to positivity and negativity, and attains results competitive with the English SentiWordNet, with differences of only 0.066 and 0.052 for the positivity and negativity sets, respectively.
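
A dictionary-based construction in this spirit can be sketched by scoring a word from seed polarities found in its gloss. The seed list and toy glosses below are invented, and the real method iterates with machine learning over far more data.

```python
# Hand-picked polarity seeds (hypothetical).
SEEDS = {"good": 1.0, "bad": -1.0}

# Toy dictionary: word -> tokenized gloss (invented entries).
GLOSSES = {
    "excellent": ["very", "good"],
    "terrible": ["very", "bad"],
    "table": ["a", "piece", "of", "furniture"],
}

def gloss_polarity(word):
    """Average the seed polarities that appear in the word's gloss;
    a one-pass stand-in for iterative dictionary-based expansion."""
    hits = [SEEDS[t] for t in GLOSSES.get(word, []) if t in SEEDS]
    return sum(hits) / len(hits) if hits else 0.0

scores = {w: gloss_polarity(w) for w in GLOSSES}
```

Newly scored words can then join the seed set and the pass can repeat, propagating polarity through the dictionary's own definitions instead of through WordNet relations.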

Mapping between CoreNet and SUMO through WordNet (WordNet을 매개로 한 CoreNet-SUMO의 매핑)

  • Kang, Sin-Jae;Kang, In-Su;Nam, Se-Jin;Choi, Key-Sun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.21 no.2
    • /
    • pp.276-282
    • /
    • 2011
  • CoreNet is a valuable resource for natural language processing, including Korean-Chinese-Japanese multilingual text analysis and translation among natural languages. CoreNet is mapped to SUMO in order to encourage its application in broader fields and to enhance its international status as a multilingual lexical semantic network. To do this, indirect and direct mapping methodologies are used. Through the indirect mapping chain CoreNet-KorLex-PWN-SUMO, we alleviate the difficulty of translating CoreNet concept terms in Korean into SUMO concepts in English and maximize the recall of SUMO concepts corresponding to CoreNet concepts.
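
The indirect CoreNet-KorLex-PWN-SUMO route amounts to following a chain of lookup tables; the table fragments and identifiers below are hypothetical.

```python
# Hypothetical fragments of each mapping table in the chain.
CORENET_TO_KORLEX = {"사람": "korlex:person"}
KORLEX_TO_PWN = {"korlex:person": "pwn:person.n.01"}
PWN_TO_SUMO = {"pwn:person.n.01": "SUMO:Human"}

def indirect_map(corenet_concept):
    """Follow CoreNet -> KorLex -> PWN -> SUMO; None if any link is missing."""
    korlex = CORENET_TO_KORLEX.get(corenet_concept)
    pwn = KORLEX_TO_PWN.get(korlex)
    return PWN_TO_SUMO.get(pwn)

sumo = indirect_map("사람")
```

Because KorLex is itself aligned to Princeton WordNet (PWN), each hop stays within one language pair, which is how the chain sidesteps direct Korean-to-English concept translation.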

Part-of-speech Tagging for Hindi Corpus in Poor Resource Scenario

  • Modi, Deepa;Nain, Neeta;Nehra, Maninder
    • Journal of Multimedia Information System
    • /
    • v.5 no.3
    • /
    • pp.147-154
    • /
    • 2018
  • Natural language processing (NLP) is an emerging research area in which we study how machines can be used to perceive and alter text written in natural languages. We can perform different tasks on natural languages by analyzing them through various annotation tasks such as parsing, chunking, part-of-speech tagging, and lexical analysis. These annotation tasks depend on the morphological structure of the particular natural language. The focus of this work is part-of-speech (POS) tagging for Hindi. Part-of-speech tagging, also known as grammatical tagging, is the process of assigning a grammatical category to each word of a given text; these categories can be noun, verb, time, date, number, and so on. Hindi is the most widely used and official language of India, and it is also among the top five most spoken languages in the world. For English and other languages a diverse range of POS taggers is available, but these taggers cannot be applied to Hindi, as Hindi is one of the most morphologically rich languages and there is a significant difference between its morphological structure and theirs. Thus, in this work, a POS tagging system for Hindi is presented. It uses a hybrid approach that combines probability-based and rule-based methods: for known words, a unigram probability model is used, whereas unknown words are tagged using various lexical and contextual features. Finite-state automata are constructed to demonstrate the different rules, and regular expressions are then used to implement them. A tagset of 29 standard part-of-speech tags is also prepared for this task; it includes two unique tags, a date tag and a time tag, which support all common formats. Regular expressions implement all pattern-based tags, such as time, date, number, and special symbols. The aim of the presented approach is to increase the correctness of automatic Hindi POS tagging while bounding the requirement for a large human-made corpus: the probability-based model increases automatic tagging, and the rule-based model bounds the requirement for an already-trained corpus. The approach is based on a very small labeled training set (around 9,000 words) and yields a best precision of 96.54% and an average precision of 95.08%, with a best accuracy of 91.39% and an average accuracy of 88.15%.
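
The hybrid scheme, a unigram model for known words plus regex rules for dates, times, and numbers, can be sketched as follows. The tiny training sample and tag names are illustrative, not the paper's actual tagset or data.

```python
import re
from collections import Counter, defaultdict

# Tiny labelled sample standing in for the ~9,000-word training set.
TRAIN = [("main", "PRP"), ("ghar", "NN"), ("jata", "VB"),
         ("hoon", "VAUX"), ("ghar", "NN"), ("accha", "JJ")]

# Rule-based fallback: pattern tags for dates, times, and numbers.
RULES = [(re.compile(r"^\d{1,2}/\d{1,2}/\d{2,4}$"), "DATE"),
         (re.compile(r"^\d{1,2}:\d{2}$"), "TIME"),
         (re.compile(r"^\d+$"), "NUM")]

def build_unigram_model(pairs):
    """Map each training word to its most frequent tag."""
    counts = defaultdict(Counter)
    for word, tag in pairs:
        counts[word][tag] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

MODEL = build_unigram_model(TRAIN)

def tag(word, unknown_tag="NN"):
    """Known words: most frequent training tag; unknown words: regex
    rules first, then a default open-class guess."""
    if word in MODEL:
        return MODEL[word]
    for pattern, t in RULES:
        if pattern.match(word):
            return t
    return unknown_tag

tags = [tag(w) for w in ["ghar", "12/03/2018", "10:30", "42", "naya"]]
```

The division of labour mirrors the paper's design: the probability model covers whatever the small corpus has seen, while the rules catch the productive pattern-based categories that no corpus of that size could enumerate.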