• Title/Summary/Keyword: Lexical Information

Ontology-lexicon-based question answering over linked data

  • Jabalameli, Mehdi; Nematbakhsh, Mohammadali; Zaeri, Ahmad
    • ETRI Journal / v.42 no.2 / pp.239-246 / 2020
  • Linked Open Data has recently grown into a large collection of knowledge bases. Therefore, the need to query Linked Data using question answering (QA) techniques has attracted the attention of many researchers. A QA system translates natural language questions into structured queries, such as SPARQL queries, to be executed over Linked Data. The two main challenges in such systems are the lexical and semantic gaps. A lexical gap refers to the difference between the vocabulary used in an input question and that used in the knowledge base. A semantic gap refers to the difference between the expressed information need and the representation of the knowledge base. In this paper, we present a novel method using an ontology lexicon and dependency parse trees to overcome lexical and semantic gaps. The proposed technique is evaluated on the QALD-5 benchmark and exhibits promising results.
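
As a rough illustration of how an ontology lexicon can bridge the lexical gap, the Python sketch below maps relation phrases from a question to ontology properties and emits a SPARQL query. The lexicon entries, property names, and entity URI are illustrative placeholders, not the paper's actual resources or method.

```python
# Minimal sketch of bridging the lexical gap: map question vocabulary to
# ontology terms via a hand-built lexicon, then emit a SPARQL query.
# The lexicon entries and URIs below are illustrative, not from the paper.

ONTOLOGY_LEXICON = {
    "wife": "dbo:spouse",        # natural-language verbalizations mapped
    "married to": "dbo:spouse",  # to ontology properties
    "mayor": "dbo:leader",
}

def build_sparql(entity_uri: str, relation_phrase: str) -> str:
    """Translate an (entity, relation phrase) pair into a SPARQL query."""
    prop = ONTOLOGY_LEXICON.get(relation_phrase)
    if prop is None:
        raise KeyError(f"lexical gap: no ontology term for {relation_phrase!r}")
    return ("PREFIX dbo: <http://dbpedia.org/ontology/>\n"
            f"SELECT ?x WHERE {{ <{entity_uri}> {prop} ?x . }}")

print(build_sparql("http://dbpedia.org/resource/Barack_Obama", "wife"))
```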

Alignment of Hypernym-Hyponym Noun Pairs between Korean and English, Based on the EuroWordNet Approach (유로워드넷 방식에 기반한 한국어와 영어의 명사 상하위어 정렬)

  • Kim, Dong-Sung
    • Language and Information / v.12 no.1 / pp.27-65 / 2008
  • This paper presents a set of methodologies for aligning hypernym-hyponym noun pairs between Korean and English, based on the EuroWordNet approach. Following the methods used in EuroWordNet, our approach makes extensive use of WordNet in the four steps of the building process: 1) monolingual dictionaries have been used to extract proper hypernym-hyponym noun pairs, 2) a bilingual dictionary has been used to translate the extracted pairs, 3) WordNet has been used as the backbone of the alignment criteria, and 4) WordNet has been used to select the most similar pair among the candidates. The importance of this study lies not only in enriching semantic links between the two languages, but also in integrating lexical resources based on a language-specific and language-dependent structure. Our approach aims at building an accurate and detailed lexical resource with proper measures, rather than at the fast development of a generic one using NLP techniques.
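
Step 4, selecting the most similar pair among candidates, can be illustrated with NLTK's WordNet interface. This is a hedged sketch of the general idea using path similarity; the paper's exact similarity measure may differ.

```python
# Illustrative sketch of step 4: given an English hypernym and several
# translation candidates for a hyponym, keep the candidate whose best
# WordNet synset is closest to the hypernym.
# Requires: pip install nltk; then nltk.download('wordnet').

from nltk.corpus import wordnet as wn

def best_candidate(hypernym: str, candidates: list[str]) -> str:
    """Pick the candidate noun most similar to the hypernym in WordNet."""
    def score(word: str) -> float:
        pairs = [(h, c) for h in wn.synsets(hypernym, pos=wn.NOUN)
                        for c in wn.synsets(word, pos=wn.NOUN)]
        return max((h.path_similarity(c) or 0.0) for h, c in pairs) if pairs else 0.0
    return max(candidates, key=score)

# e.g. choosing among English equivalents a bilingual dictionary offered
# for a Korean hyponym of 'animal':
print(best_candidate("animal", ["dog", "table", "fork"]))  # -> "dog"
```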

Some Issues on Causative Verbs in English

  • Cho, Sae-Youn
    • Language and Information / v.13 no.1 / pp.77-92 / 2009
  • Geis (1973) provided various properties of the subjects and the by + Gerund Phrase (GerP) in English causative constructions. Among them, the two main issues of Geis's analysis are as follows: unlike Lakoff (1965; 1966), the subject of English causative constructions, including causative-inchoative verbs such as liquefy, should first of all be acts or events, not persons, and the by + GerP in the construction is a complement of the causative verb. In addition to these issues, Geis provided various data exhibiting other idiosyncratic properties and proposed transformational rules, such as the Agent Creation Rule, and rule orderings to explain them. Against Geis's claim, I propose that English causative verbs require either proper-noun or GerP subjects, and that the by + GerP in these constructions, as a Verbal Modifier, takes Gerunds whose understood Affective-agent subject is identical to the subject of the causative verb with respect to the semantic index value. This enables us to solve the two main issues. At the same time, the other properties Geis mentioned can also be easily accounted for in Head-driven Phrase Structure Grammar (HPSG) by positing a few lexical constraints. On this basis, it is shown that, given the few lexical constraints and existing grammatical tools in HPSG, the constraint-based analysis proposed here gives a simpler explanation of the properties of English causative constructions provided by Geis, without transformational rules and rule orderings.

Extending the MARTIF and TEI for Korean Lexical Entities (한국어사전 인코딩체계의 확장에 관한 연구: MARTIF와 TEI를 중심으로)

  • 백지원; 최석두
    • Journal of the Korean Society for Information Management / v.18 no.2 / pp.295-322 / 2001
  • The purpose of this study is to present a scheme to encode all possible lexical entities in dictionaries, glossaries, encyclopedias, thesauri, etc. First, it discussed the nature and structure of dictionaries. Second, two current major terminological data encoding schemes, MARTIF and TEI, were analyzed in terms of their flexibility for extension to encompass all lexical entities. Third, an integrated microstructure of dictionaries was presented and compared with MARTIF and TEI for print dictionaries. Finally, the need for extended MARTIF and TEI formats was addressed through 17 specific suggestions, which were combined with proposals from two earlier studies on modifying the MARTIF and TEI DTDs for the markup of Korean dictionary entries.
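
To make the notion of an encoded dictionary microstructure concrete, here is a minimal sketch that builds a TEI-style entry (entry/form/sense) with Python's standard library. The element names follow the TEI dictionary module, but the structure and the Korean example content are illustrative, not the paper's extended DTD.

```python
# A minimal sketch of a TEI-style dictionary microstructure, built with
# the standard library. Element names (entry, form, orth, pron, sense,
# gramGrp, def) come from the TEI dictionary module; the content shown
# is an illustrative placeholder.

import xml.etree.ElementTree as ET

entry = ET.Element("entry")
form = ET.SubElement(entry, "form")
ET.SubElement(form, "orth").text = "사전"        # headword
ET.SubElement(form, "pron").text = "sajeon"      # romanized pronunciation
sense = ET.SubElement(entry, "sense")
ET.SubElement(sense, "gramGrp").text = "noun"
ET.SubElement(sense, "def").text = "dictionary; a reference work of words"

print(ET.tostring(entry, encoding="unicode"))
```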

Co-Event Conflation for Compound Verbs in Korean

  • Jun, Jong-Sup
    • Proceedings of the Korean Society for Language and Information Conference / 2007.11a / pp.202-209 / 2007
  • Compound verbs in Korean show properties of both syntactic phrases and lexical items. Earlier studies of compound verbs have either assumed two homonymous types, i.e. one as a syntactic phrase and the other as a lexical item, or posited some sort of transformation from a syntactic phrase into a lexical item. In this paper, I show empirical and conceptual problems for earlier studies, and present an alternative account in terms of Talmy's (2000) theory of lexicalization. Unlike Talmy, who proposed [Path] conflation into [MOVE] for Korean, I suggest several types of [Co-Event] conflation; e.g. [Co-Event Manner] conflation as in kwul-e-kata 'to go by rolling', [Co-Event Concomitance] conflation as in ttal-a-kata 'to follow', [Co-Event Concurrent Result] conflation as in cap-a-kata 'to catch somebody and go', etc. The present proposal not only places Korean compound verbs in a broader picture of cross-linguistic generalizations but, when viewed from Jackendoff's (1997) productive vs. semi-productive morphology, also provides a natural account for distinguishing the compounds that allow -se intervention from those that do not.
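
The proposed conflation typology can be summarized as a simple lookup, purely for illustration; the mapping below just restates the three examples from the abstract as data.

```python
# A toy encoding of the proposed [Co-Event] conflation types, mapping
# each compound verb to its conflated co-event. The entries restate the
# abstract's examples; the data-structure choice is illustrative only.

CO_EVENT_CONFLATION = {
    "kwul-e-kata": "Manner",             # 'to go by rolling'
    "ttal-a-kata": "Concomitance",       # 'to follow'
    "cap-a-kata":  "Concurrent Result",  # 'to catch somebody and go'
}

for verb, co_event in CO_EVENT_CONFLATION.items():
    print(f"{verb}: [Co-Event {co_event}] conflated into [MOVE]")
```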

Vocabulary Recognition Retrieval Optimized System using MLHF Model (MLHF 모델을 적용한 어휘 인식 탐색 최적화 시스템)

  • Ahn, Chan-Shik; Oh, Sang-Yeob
    • Journal of the Korea Society of Computer and Information / v.14 no.10 / pp.217-223 / 2009
  • Vocabulary recognition systems on mobile terminals perform recognition with statistical methods, typically a statistical grammar recognition system using N-grams. As the vocabulary grows, the limited memory and arithmetic capacity of the terminal make the recognition algorithm complicated, demanding a large search space and long processing times until processing becomes impossible. This study proposes an optimized vocabulary recognition system using the MLHF model. MLHF separates the acoustic search from the lexical search using FLaVoR: the acoustic search extracts feature vectors from the speech signal using an HMM, while the lexical search performs recognition using the Levenshtein distance algorithm. As a result, the system showed a vocabulary-dependent recognition rate of 98.63%, a vocabulary-independent recognition rate of 97.91%, and a recognition speed of 1.61 seconds.
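
The lexical-search stage relies on Levenshtein distance. A plain dynamic-programming implementation looks like the sketch below; the vocabulary and hypothesis strings are made up for illustration, and the paper's acoustic front end is not reproduced here.

```python
# Sketch of the lexical-search stage: standard Levenshtein distance,
# used to rank vocabulary entries against a hypothesis string produced
# by the acoustic stage. Plain dynamic programming, O(len(a)*len(b)).

def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Rank vocabulary entries by edit distance to the acoustic hypothesis.
vocab = ["recognize", "recognise", "reorganize"]
hypothesis = "recognize"
print(sorted(vocab, key=lambda w: levenshtein(hypothesis, w)))
```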

Web Image Caption Extraction using Positional Relation and Lexical Similarity (위치적 연관성과 어휘적 유사성을 이용한 웹 이미지 캡션 추출)

  • Lee, Hyoung-Gyu; Kim, Min-Jeong; Hong, Gum-Won; Rim, Hae-Chang
    • Journal of KIISE: Software and Applications / v.36 no.4 / pp.335-345 / 2009
  • In this paper, we propose a new web image caption extraction method that considers the positional relation between a caption and an image, and the lexical similarity between a caption and the main text containing it. The positional relation between a caption and an image represents how the caption is located with respect to the distance and direction of the corresponding image. The lexical similarity between a caption and the main text indicates how likely it is that the main text generates the caption of the image. Compared with previous image caption extraction approaches, which utilize only the independent features of images and captions, the proposed approach improves caption extraction recall and precision, with a 28% improvement in F-measure, by including the additional features of positional relation and lexical similarity.
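
Here is a hedged sketch of how the two feature families might be combined into a single caption score. The linear weighting, the "captions tend to sit below images" direction prior, and the distance cutoff are all assumptions for illustration, not the paper's trained model.

```python
# Sketch of the two proposed feature families: positional relation
# (distance/direction of a text block relative to the image) and
# lexical similarity (cosine overlap between a candidate caption and
# the main text). Weights and thresholds are illustrative.

import math
from collections import Counter

def positional_score(img_y: int, block_y: int, max_dist: int = 300) -> float:
    """Prefer blocks near the image; blocks below it score higher (assumption)."""
    dist = abs(block_y - img_y)
    direction = 1.0 if block_y > img_y else 0.5
    return max(0.0, 1.0 - dist / max_dist) * direction

def lexical_similarity(caption: str, main_text: str) -> float:
    """Cosine similarity between bag-of-words vectors."""
    c, t = Counter(caption.lower().split()), Counter(main_text.lower().split())
    dot = sum(c[w] * t[w] for w in c)
    norm = (math.sqrt(sum(v * v for v in c.values()))
            * math.sqrt(sum(v * v for v in t.values())))
    return dot / norm if norm else 0.0

def caption_score(img_y, block_y, caption, main_text, alpha=0.6):
    """Combine both features; alpha is an illustrative mixing weight."""
    return (alpha * positional_score(img_y, block_y)
            + (1 - alpha) * lexical_similarity(caption, main_text))
```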

A Hidden Markov Model Imbedding Multiword Units for Part-of-Speech Tagging

  • Kim, Jae-Hoon; Seo, Jungyun
    • Journal of Electrical Engineering and Information Science / v.2 no.6 / pp.7-13 / 1997
  • Morphological analysis of Korean is known to be a very complicated problem. In particular, the degree of part-of-speech (POS) ambiguity is much higher than in English. Many researchers have tried to use a hidden Markov model (HMM) to solve the POS tagging problem and have shown around 95% accuracy. However, the lack of lexical information makes it difficult for an HMM-based POS tagger to improve its performance further. To alleviate this burden, this paper proposes a method for incorporating multiword units, which are a type of lexical information, into a hidden Markov model for POS tagging. This paper also proposes a method for extracting multiword units from a POS-tagged corpus. Here, a multiword unit is defined as a unit consisting of more than one word. We found that these multiword units are a major source of POS tagging errors. Our experiment shows that the error reduction rate of the proposed method is about 13%.
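
A minimal sketch of extracting multiword-unit candidates from a POS-tagged corpus by counting adjacent (word, tag) bigrams follows. The frequency threshold and toy corpus are assumptions; the abstract does not detail the paper's actual extraction criteria.

```python
# Sketch of multiword-unit extraction from a POS-tagged corpus: count
# adjacent (word, tag) bigrams and keep the frequent ones as candidate
# units to be added to the HMM lexicon. Threshold is illustrative.

from collections import Counter

def extract_mwus(tagged_sents, min_count=3):
    """tagged_sents: list of sentences, each a list of (word, tag) pairs."""
    bigrams = Counter()
    for sent in tagged_sents:
        for (w1, t1), (w2, t2) in zip(sent, sent[1:]):
            bigrams[((w1, t1), (w2, t2))] += 1
    return [pair for pair, n in bigrams.items() if n >= min_count]

corpus = [[("in", "IN"), ("spite", "NN"), ("of", "IN")]] * 3  # toy corpus
for (w1, t1), (w2, t2) in extract_mwus(corpus):
    print(f"{w1}_{w2} / {t1}+{t2}")  # e.g. in_spite / IN+NN
```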

The Method of Using the Automatic Word Clustering System for the Evaluation of Verbal Lexical-Semantic Network (동사 어휘의미망 평가를 위한 단어클러스터링 시스템의 활용 방안)

  • Kim, Hae-Gyung; Yoon, Ae-Sun
    • Journal of the Korean Society for Library and Information Science / v.40 no.3 / pp.175-190 / 2006
  • For the past several years, there has been much interest in lexical semantic networks. However, it is very difficult to evaluate their effectiveness and correctness, and to devise methods for applying them to various problem domains. In order to offer fundamental ideas about how to evaluate and utilize lexical semantic networks, we developed two automatic word clustering systems, called system A and system B. 68,455,856 words were used to train both systems. We compared the clustering results of system A with those of system B, which is extended with the lexical-semantic network by reconstructing the feature vectors from the elements of the lexical-semantic network of 3,656 '-ha' verbs. The target data is the multilingual WordNet-CoreNet. When we compared the accuracy of system A and system B, we found that system B showed an accuracy of 46.6%, better than system A's 45.3%.
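
The evaluation idea, clustering the same words under a baseline feature representation and a network-extended one and comparing accuracy against gold classes, can be sketched as follows. The synthetic data and the adjusted Rand index stand in for the paper's real corpus and accuracy measure; everything below is a placeholder setup.

```python
# Sketch of the evaluation: cluster the same items under two feature
# representations (plain features vs. vectors extended with a
# lexical-semantic-network feature) and compare against gold classes.
# Data, labels, and the extra feature are synthetic placeholders.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
gold = np.repeat([0, 1, 2], 20)                   # gold semantic classes
base = rng.normal(size=(60, 10))                  # "system A" features
semantic = gold.reshape(-1, 1) + rng.normal(scale=0.3, size=(60, 1))
extended = np.hstack([base, semantic])            # "system B": + network feature

for name, X in [("system A", base), ("system B", extended)]:
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    print(name, round(adjusted_rand_score(gold, labels), 3))
```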