• Title/Abstract/Keyword: lexical resource

Search results: 14 items

어휘정보구축을 위한 사전텍스트의 구조분석 및 변환 (A Structural Analysis of Dictionary Text for the Construction of Lexical Data Base)

  • 최병진
    • 한국언어정보학회지:언어와정보 / Vol. 6, No. 2 / pp. 33-55 / 2002
  • This research aims at transforming the definition text of an English-English-Korean Dictionary (EEKD), which is encoded in EST files for publishing purposes, into a structured format for a Lexical Data Base (LDB). The construction of an LDB is very time-consuming and expensive work. In order to save time and effort in building new lexical information, the present study extracts useful linguistic information from an existing printed dictionary. This paper describes the process of extracting and structuring lexical information from a printed dictionary (EEKD) as a lexical resource. The extracted information is represented in XML, which can be transformed into other representations to meet different application requirements.
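
As a rough illustration of the kind of transformation the paper describes, the sketch below turns already-extracted dictionary fields into a structured XML entry. The input fields and tag names are assumptions for illustration, not the actual EEKD encoding or the paper's document structure.

```python
# Minimal sketch: flat dictionary fields -> structured XML entry.
# Tag names (<entry>, <sense>, etc.) are illustrative assumptions.
import xml.etree.ElementTree as ET

def entry_to_xml(headword, pos, senses):
    """Build an XML <entry> from already-extracted fields."""
    entry = ET.Element("entry")
    ET.SubElement(entry, "headword").text = headword
    ET.SubElement(entry, "pos").text = pos
    for num, (definition, translation) in enumerate(senses, start=1):
        sense = ET.SubElement(entry, "sense", n=str(num))
        ET.SubElement(sense, "def").text = definition
        ET.SubElement(sense, "trans").text = translation
    return entry

entry = entry_to_xml("bank", "n",
                     [("land alongside a river", "둑"),
                      ("financial institution", "은행")])
print(ET.tostring(entry, encoding="unicode"))
```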

A Corpus-based Lexical Analysis of the Speech Texts: A Collocational Approach

  • Kim, Nahk-Bohk
    • 영어어문교육 / Vol. 15, No. 3 / pp. 151-170 / 2009
  • Recently, speech texts have been increasingly used in English education because of their various advantages as language teaching and learning materials. The purpose of this paper is to analyze speech texts with a corpus-based lexical approach and to suggest some productive methods that use them as the main resource for English speaking or writing courses, along with introducing actual classroom adaptations. First, this study shows that a speech corpus has some unique features, such as different selections of pronouns, nouns, and lexical chunks, in comparison to a general corpus. Next, from a collocational perspective, the study demonstrates that the speech corpus contains a wide variety of collocations and lexical chunks of the kind described by a number of linguists (Lewis, 1997; McCarthy, 1990; Willis, 1990). In other words, speech texts not only have considerable lexical potential that could be exploited to facilitate chunk-learning, but learners are also not very likely to unlock this potential autonomously. Based on this result, teachers can develop a learners' corpus and use it by chunking the speech text. This new approach of adapting speech samples as key materials for developing college students' speaking and writing ability should be implemented as shown in the samplers. Finally, to foster learners' productive skills more communicatively, a few practical suggestions are made, such as chunking and windowing chunks of speech and presentation, and the pedagogical implications are discussed.
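
As a minimal illustration of the collocational analysis the paper applies, the sketch below ranks recurring bigrams in a tiny invented speech sample. The sample text and the frequency-only ranking are assumptions, not the paper's corpus or statistics.

```python
# Toy collocational analysis: rank bigrams in a speech sample by
# frequency so recurrent chunks can be selected as teaching material.
from collections import Counter

speech = ("ladies and gentlemen thank you thank you very much "
          "it is a great honor to be here tonight").split()

bigrams = Counter(zip(speech, speech[1:]))
for (w1, w2), freq in bigrams.most_common(5):
    print(f"{w1} {w2}: {freq}")
```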

Automatic Acquisition of Lexical-Functional Grammar Resources from a Japanese Dependency Corpus

  • Oya, Masanori;Genabith, Josef Van
    • 한국언어정보학회:학술대회논문집 / 한국언어정보학회 2007 Annual Conference / pp. 375-384 / 2007
  • This paper describes a method for the automatic acquisition of wide-coverage treebank-based deep linguistic resources for Japanese, as part of a project on treebank-based induction of multilingual resources in the framework of Lexical-Functional Grammar (LFG). We automatically annotate LFG f-structure functional equations (i.e. labelled dependencies) onto the Kyoto Text Corpus version 4.0 (KTC4) (Kurohashi and Nagao 1997) and onto the output of the Kurohashi-Nagao Parser (KNP) (Kurohashi and Nagao 1998), a dependency parser for Japanese. The original KTC4 and KNP provide unlabelled dependencies. Our method also includes zero-pronoun identification. The performance of the f-structure annotation algorithm with zero-pronoun identification on KTC4 is evaluated against a manually corrected gold standard of 500 sentences randomly chosen from KTC4, and it achieves a pred-only dependency f-score of 94.72%. The parsing experiments on KNP output yield a pred-only dependency f-score of 82.08%.
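
The pred-only dependency f-score reported above compares gold and system analyses as sets of labelled dependencies. A minimal sketch, with invented triples rather than actual KTC4 annotations:

```python
# Dependency f-score over sets of (head, function, dependent) triples.
# The example triples are invented for illustration.
def f_score(gold, system):
    correct = len(gold & system)          # triples both analyses share
    precision = correct / len(system)
    recall = correct / len(gold)
    return 2 * precision * recall / (precision + recall)

gold   = {("yomu", "SUBJ", "kare"), ("yomu", "OBJ", "hon")}
system = {("yomu", "SUBJ", "kare"), ("yomu", "OBJ", "zasshi")}
print(f"f-score: {f_score(gold, system):.2f}")  # 0.50
```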

재난안전정보 관리를 위한 어휘자원 현황분석 및 활용방안 (A Study on the Utilization Plan of Lexical Resources for Disaster and Safety Information Management Based on Current Status Analysis)

  • 정힘찬;김태영;김용;오효정
    • 정보관리학회지 / Vol. 34, No. 2 / pp. 137-158 / 2017
  • A disaster is an event that directly affects the lives, bodies, and property of citizens, and when a disaster occurs, a cooperative process of efficiently sharing and utilizing related information is paramount for a rapid and effective response. Currently, various kinds of disaster and safety information are produced and managed by the individual organizations concerned with disaster and safety, but each organization defines and uses its own terms and meanings. This is a major obstacle for practitioners who need to search for and access disaster and safety information, and it is one of the factors that limits the utilization of information across organizations. To address this problem, as a preliminary study toward standardizing lexical resources for the integrated management of disaster and safety information, this study analyzes the current status of the lexical resources managed by disaster- and safety-related organizations. It also identifies the characteristics of each lexical group through a utilization analysis of the collected lexical resources from the perspectives of information providers and users, and on this basis proposes a utilization plan for disaster and safety information management.

유로워드넷 방식에 기반한 한국어와 영어의 명사 상하위어 정렬 (Alignment of Hypernym-Hyponym Noun Pairs between Korean and English, Based on the EuroWordNet Approach)

  • 김동성
    • 한국언어정보학회지:언어와정보 / Vol. 12, No. 1 / pp. 27-65 / 2008
  • This paper presents a set of methodologies for aligning hypernym-hyponym noun pairs between Korean and English, based on the EuroWordNet approach. Following the methods used in EuroWordNet, our approach makes extensive use of WordNet in four steps of the building process: 1) monolingual dictionaries are used to extract proper hypernym-hyponym noun pairs, 2) a bilingual dictionary converts the extracted pairs, 3) WordNet serves as the backbone of the alignment criteria, and 4) WordNet is used to select the most similar pair among the candidates. The importance of this study lies not only in enriching semantic links between the two languages, but also in integrating lexical resources on the basis of a language-specific and language-dependent structure. Our approach aims at building an accurate and detailed lexical resource with proper measures rather than at the fast development of a generic one using NLP techniques.
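
A condensed sketch of how steps 2-4 might fit together, with a toy bilingual dictionary and a one-pair stand-in for WordNet's hypernymy relation; all data below is invented for illustration, not taken from the actual resources.

```python
# Step 2: a bilingual dictionary proposes English candidates;
# steps 3-4: a (toy) WordNet hypernymy table filters/selects them.
bilingual = {"동물": ["animal", "beast"], "개": ["dog", "unit"]}
wordnet_hypernym_pairs = {("animal", "dog")}   # stand-in for WordNet

def align(ko_hyper, ko_hypo):
    """Return English pairs consistent with WordNet hypernymy."""
    return [(e1, e2)
            for e1 in bilingual[ko_hyper]
            for e2 in bilingual[ko_hypo]
            if (e1, e2) in wordnet_hypernym_pairs]

print(align("동물", "개"))  # [('animal', 'dog')]
```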

Ontology-based models of legal knowledge

  • Sagri, Maria-Teresa;Tiscornia, Daniela
    • 한국디지털정책학회:학술대회논문집 / 한국디지털정책학회 2004 International Conference on Digital Policy & Management / pp. 111-127 / 2004
  • In this paper we describe an application of the lexical resource JurWordNet and of the Core Legal Ontology as a descriptive vocabulary for modelling legal domains. It can be viewed as the semantic component of a global standardisation framework for digital government. A content description model provides a repository of structured knowledge aimed at supporting semantic interoperability between sectors of Public Administration and the communication processes towards citizens. Specific conceptual models built from this base will act as a cognitive interface able to cope with specific digital government issues and to improve the interaction between citizens and Public Bodies. As a case study, the representation of the click-on licences for re-using Public Sector Information is presented.

세종계획 언어자원 기반 한국어 명사은행 (Korean Nominal Bank, Using Language Resources of Sejong Project)

  • 김동성
    • 한국언어정보학회지:언어와정보 / Vol. 17, No. 2 / pp. 67-91 / 2013
  • This paper describes the Korean Nominal Bank, a project that provides argument structures for instances of predicative nouns in the Sejong parsed corpus. We use the language resources of the Sejong project so that the same set of data is annotated with more and more levels of annotation, whereas a newly started resource-building project would yield information from separate and isolated processing. Our annotation scheme is based on the Sejong electronic dictionary, the semantically tagged corpus, and the syntactically analyzed corpus. Our work also draws on deep linguistic knowledge of the syntax-semantics interface in general. We consider semantic theories including the Frame Semantics of Fillmore (1976), the argument structure of Grimshaw (1990), and the argument alternations of Levin (1993) and Levin and Rappaport Hovav (2005). Various syntactic theories are needed to explain various sentence types, including empty categories, raising, and left (or right) dislocation. We also need an account of idiosyncratic lexical features such as collocations.
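
As a loose illustration of what one annotation instance might look like, the sketch below pairs a predicative noun with labelled arguments. The field and role names (PropBank-style ARG0/ARG1) are assumptions for illustration, not the project's actual annotation scheme.

```python
# Hypothetical data structure for one Nominal Bank instance:
# a predicative noun plus its labelled arguments.
from dataclasses import dataclass

@dataclass
class Argument:
    role: str      # e.g. "ARG0" (agent), "ARG1" (theme) -- assumed labels
    text: str

@dataclass
class NominalInstance:
    predicate: str          # the predicative noun
    arguments: list

inst = NominalInstance(
    predicate="연구",                          # 'research'
    arguments=[Argument("ARG0", "정부가"),     # agent
               Argument("ARG1", "대책을")])    # theme
print(inst)
```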

베트남어 사전을 사용한 베트남어 SentiWordNet 구축 (Construction of Vietnamese SentiWordNet by using Vietnamese Dictionary)

  • 뷔쉬에손;박성배
    • 한국정보처리학회:학술대회논문집 / 한국정보처리학회 2014 Spring Conference / pp. 745-748 / 2014
  • SentiWordNet is an important lexical resource supporting sentiment analysis in opinion-mining applications. In this paper, we propose a novel approach to constructing a Vietnamese SentiWordNet (VSWN). A SentiWordNet is typically generated from WordNet, in which each synset has numerical scores indicating its opinion polarities, and many previous studies obtained these scores by applying a machine learning method to WordNet. However, a Vietnamese WordNet was unfortunately not available at the time of writing. We therefore propose a method to construct a VSWN from a Vietnamese dictionary rather than from WordNet. We show the effectiveness of the proposed method by automatically generating a VSWN with 39,561 synsets. The method is experimentally evaluated on 266 synsets with respect to positivity and negativity, and it attains results competitive with the English SentiWordNet, with differences of 0.066 and 0.052 for the positivity and negativity sets, respectively.
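
A much-simplified sketch of the dictionary-based idea: seed words carry known polarity, and an entry inherits the average polarity of the seed words appearing in its gloss. The tiny dictionary, the seeds, and the averaging rule are illustrative assumptions, not the paper's actual scoring model.

```python
# Toy dictionary-based polarity scoring: an entry's score is the
# mean polarity of known seed words found in its gloss.
seeds = {"good": 1.0, "bad": -1.0}

dictionary = {
    "excellent": "extremely good",
    "awful": "very bad",
    "table": "a piece of furniture",   # neutral: no seed words in gloss
}

def gloss_polarity(gloss):
    hits = [seeds[w] for w in gloss.split() if w in seeds]
    return sum(hits) / len(hits) if hits else 0.0

for word, gloss in dictionary.items():
    print(word, gloss_polarity(gloss))
```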

WordNet을 매개로 한 CoreNet-SUMO의 매핑 (Mapping between CoreNet and SUMO through WordNet)

  • 강신재;강인수;남세진;최기선
    • 한국지능시스템학회논문지 / Vol. 21, No. 2 / pp. 276-282 / 2011
  • CoreNet is a resource useful for natural language processing, including the analysis of Korean-Chinese-Japanese multilingual text and cross-language conversion. To encourage the use of CoreNet in a wider range of fields and applications, and to raise its international standing as a multilingual lexical-semantic network, we linked it to SUMO. Both indirect and direct mapping methods were used to map CoreNet to SUMO; the indirect mapping through the CoreNet-KorLex-PWN-SUMO chain mitigates the difficulty of converting between the Korean-centered CoreNet and the English-described SUMO, and maximizes the recall of SUMO classes corresponding to CoreNet concepts.
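
A minimal sketch of the indirect mapping as a composition of per-resource tables; all identifiers below are invented placeholders, not actual CoreNet, KorLex, PWN, or SUMO IDs.

```python
# Compose mapping tables CoreNet -> KorLex -> PWN -> SUMO into one
# end-to-end lookup. Keys and values are illustrative placeholders.
corenet_to_korlex = {"CN:사람": "KL:01234"}
korlex_to_pwn     = {"KL:01234": "PWN:person.n.01"}
pwn_to_sumo       = {"PWN:person.n.01": "SUMO:Human"}

def compose(*tables):
    """Chain mapping tables; return None if any link is missing."""
    def lookup(key):
        for table in tables:
            key = table.get(key)
            if key is None:
                return None
        return key
    return lookup

corenet_to_sumo = compose(corenet_to_korlex, korlex_to_pwn, pwn_to_sumo)
print(corenet_to_sumo("CN:사람"))  # SUMO:Human
```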

Part-of-speech Tagging for Hindi Corpus in Poor Resource Scenario

  • Modi, Deepa;Nain, Neeta;Nehra, Maninder
    • Journal of Multimedia Information System / Vol. 5, No. 3 / pp. 147-154 / 2018
  • Natural language processing (NLP) is an emerging research area that studies how machines can be used to perceive and alter text written in natural languages. We can perform different tasks on natural languages by analyzing them through various annotation tasks such as parsing, chunking, part-of-speech tagging, and lexical analysis. These annotation tasks depend on the morphological structure of a particular natural language. The focus of this work is part-of-speech (POS) tagging for the Hindi language. Part-of-speech tagging, also known as grammatical tagging, is the process of assigning a grammatical category to each word of a given text; these categories can be noun, verb, time, date, number, and so on. Hindi is the most widely used and official language of India, and it is among the top five most spoken languages of the world. For English and other languages, a diverse range of POS taggers is available, but these taggers cannot be applied to Hindi, as Hindi is one of the most morphologically rich languages and there is a significant difference between its morphological structure and theirs. Thus, this work presents a POS tagger system for the Hindi language. For Hindi POS tagging, a hybrid approach is presented that combines probability-based and rule-based methods. For tagging known words, a unigram probability model is used, whereas for tagging unknown words, various lexical and contextual features are used. Finite-state automata are constructed to express the different rules, and regular expressions are then used to implement them. A tagset containing 29 standard part-of-speech tags was also prepared for this task; it includes two unique tags, a date tag and a time tag, which support all possible formats. Regular expressions implement all pattern-based tags such as time, date, number, and special symbols. The aim of the presented approach is to increase the correctness of automatic Hindi POS tagging while bounding the requirement for a large human-made corpus: the probability-based model increases automatic tagging accuracy, while the rule-based model bounds the need for an already-trained corpus. The approach is based on a very small labeled training set (around 9,000 words) and yields a best precision of 96.54% and an average precision of 95.08%, with a best accuracy of 91.39% and an average accuracy of 88.15%.
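
A minimal sketch of the hybrid idea described above: a unigram lookup handles known words, regular-expression rules handle pattern-based tags such as dates, times, and numbers, and a default covers the rest. The lexicon, patterns, and tag names are illustrative assumptions, not the paper's tagset or rules.

```python
# Hybrid tagging sketch: probability path (unigram lookup) for known
# words, rule path (regexes) for pattern-based tags, default otherwise.
import re

unigram_tags = {"राम": "NN", "खाता": "VB"}   # most frequent tag per word

rules = [
    (re.compile(r"^\d{1,2}/\d{1,2}/\d{2,4}$"), "DATE"),
    (re.compile(r"^\d{1,2}:\d{2}$"), "TIME"),
    (re.compile(r"^\d+$"), "NUM"),
]

def tag(word):
    if word in unigram_tags:               # probability-based path
        return unigram_tags[word]
    for pattern, pos in rules:             # rule-based path
        if pattern.match(word):
            return pos
    return "UNK"                           # unknown-word fallback

print([(w, tag(w)) for w in ["राम", "12/05/2018", "10:30", "42"]])
```

In the paper's actual system the rules are expressed as finite-state automata and the known-word model is trained on the labeled corpus; here both are collapsed to a lookup table and a regex list for brevity.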