• Title/Summary/Keyword: English sentence processing

Search results: 45

An Algorithm for Finding a Relationship Between Entities: Semi-Automated Schema Integration Approach (엔티티 간의 관계명을 생성하는 알고리즘: 반자동화된 스키마 통합)

  • Kim, Yongchan;Park, Jinsoo;Suh, Jihae
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.243-262 / 2018
  • Database schema integration is a significant issue in information systems. Because schema integration is a time-consuming and labor-intensive task, many studies have attempted to automate it. Researchers typically use XML as the source schema and leave much of the work to DBA intervention; for example, the various naming conflicts that arise over relationship names during schema integration have traditionally required a DBA to step in and resolve them. In this paper, we introduce an algorithm that automatically generates relationship names to resolve the relationship-name conflicts that occur during schema integration. The algorithm draws on an online collocation dictionary and an English example-sentence dictionary: the relationship name between two entities is generated by applying natural language processing to example sentences extracted from the dictionary data. By building a semi-automated schema integration system and testing this algorithm, we found that it achieves about 90% accuracy. Using this algorithm, the naming conflicts that occur during schema integration can be resolved automatically, without DBA intervention.
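
As a rough illustration of the idea, the sketch below (a hypothetical Python rendering, not the paper's implementation) POS-tags example sentences that mention both entity names and picks the most frequent connecting verb as the relationship name; the entity names and example sentences stand in for the dictionary lookup.

```python
from collections import Counter

import nltk
from nltk.stem import WordNetLemmatizer

# Requires the NLTK data packages 'punkt', 'averaged_perceptron_tagger',
# and 'wordnet' to be downloaded beforehand.
_lemmatize = WordNetLemmatizer().lemmatize

def relationship_name(entity_a, entity_b, example_sentences):
    """Pick the verb that most often links the two entity words."""
    verbs = Counter()
    for sentence in example_sentences:
        tokens = nltk.word_tokenize(sentence.lower())
        if entity_a in tokens and entity_b in tokens:
            for word, tag in nltk.pos_tag(tokens):
                if tag.startswith("VB"):          # any verb form
                    verbs[_lemmatize(word, "v")] += 1
    return verbs.most_common(1)[0][0] if verbs else None

# Hypothetical example sentences standing in for the dictionary lookup.
examples = [
    "The professor teaches a course every semester.",
    "Each professor teaches several courses.",
    "A course is taught by one professor.",
]
print(relationship_name("professor", "course", examples))  # -> 'teach'
```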

Pivot Discrimination Approach for Paraphrase Extraction from Bilingual Corpus (이중 언어 기반 패러프레이즈 추출을 위한 피봇 차별화 방법)

  • Park, Esther;Lee, Hyoung-Gyu;Kim, Min-Jeong;Rim, Hae-Chang
    • Korean Journal of Cognitive Science / v.22 no.1 / pp.57-78 / 2011
  • Paraphrasing is the act of writing a text using other words without altering its meaning. Paraphrases can be used in many fields of natural language processing. In particular, paraphrases can be incorporated into machine translation to improve the coverage and quality of translation. Recent approaches to paraphrase extraction utilize bilingual parallel corpora, which consist of aligned sentence pairs. In these approaches, paraphrases are identified from the word alignment result by pivot phrases: phrases in one language to which two or more phrases in the other language are aligned. However, word alignment is itself a difficult task, so there can be many alignment errors, and these errors can lead to the selection of incorrect pivot phrases. In this study, we propose a paraphrase extraction method that discriminates good pivot phrases from bad ones. Each pivot phrase is weighted according to its reliability, which is scored from lexical and part-of-speech information. The experimental results show that the proposed method achieves higher precision and recall in paraphrase extraction than the baseline. We also show that the extracted paraphrases can increase the coverage of Korean-English machine translation.
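
A toy sketch of the pivot discrimination idea, under the assumption that each pivot already carries a precomputed reliability weight (the paper derives this weight from lexical and part-of-speech information): phrases sharing a reliable pivot are paired as paraphrases, while unreliable pivots are discarded. The alignment table, weights, and threshold below are all illustrative.

```python
from itertools import combinations

# pivot phrase (language A) -> phrases aligned to it (language B)
alignments = {
    "해결하다": ["solve", "resolve", "sort out"],
    "있다": ["be", "exist", "solve"],  # noisy pivot caused by alignment errors
}

# stand-in reliability weights (derived from lexical/POS info in the paper)
pivot_weight = {"해결하다": 0.9, "있다": 0.2}

def extract_paraphrases(alignments, weights, threshold=0.5):
    """Pair up phrases sharing a pivot, keeping only reliable pivots."""
    pairs = set()
    for pivot, phrases in alignments.items():
        if weights.get(pivot, 0.0) < threshold:
            continue                      # discriminate against bad pivots
        for p1, p2 in combinations(sorted(set(phrases)), 2):
            pairs.add((p1, p2))
    return pairs

print(sorted(extract_paraphrases(alignments, pivot_weight)))
# [('resolve', 'solve'), ('resolve', 'sort out'), ('solve', 'sort out')]
```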


Processing of the Syntactic Ambiguity Resolution in English as a Foreign Language (외국어로서의 영어 구문 중의성 해결 과정)

  • Jung, Yujin;Lee, Yoonhyoung;Hwang, Yumi;Nam, Kichun
    • Proceedings of the Korean Society for Cognitive Science Conference / 2000.05a / pp.261-266 / 2000
  • Understanding a text requires knowing the words, the links between them, and the overall structure. This holds not only for Korean but also for English and other foreign languages. This study had two aims. The first was to examine how syntactic ambiguity arising in the processing of English as a foreign language is resolved across three sentence types: Garden Path Sentences (GPS), Late Closure (LC), and PP (prepositional phrase attachment). The second was to determine, from responses at the ambiguous word and at the disambiguating word of each sentence type, whether a syntactic module is at work. For example, given the garden path sentence "The boat floated down the streams sank," the reader takes "floated" as the main verb until "sank" appears, and is then thrown into confusion about the interpretation of the sentence; the role that "floated" plays can be determined only once "sank" has been read. To investigate how such syntactic ambiguity is resolved, we used a self-paced reading task in which the stimuli were presented word by word. Measuring the reading time for each word showed that for GPS, PP, and LC alike, reading times in the ambiguous region did not differ substantially by sentence type from those after the ambiguity was resolved. However, for GPS, CGPS, PP, and CPP, response times became shorter toward the later words. This suggests that, for Korean speakers, resolving syntactic ambiguity in English as a foreign language is not automatic processing by a syntactic module, but rather parsing through inference that draws on grammatical knowledge while taking meaning into account.
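
For illustration only, a minimal sketch of the kind of comparison the self-paced reading analysis involves, using fabricated reading times rather than the study's data: per-word reading times are averaged over the ambiguous region and the disambiguating region.

```python
from statistics import mean

# Fabricated reading times (ms) per word for the garden-path item
# "The boat floated down the streams sank".
trial = {
    "The": 310, "boat": 335, "floated": 420, "down": 380,
    "the": 300, "streams": 360, "sank": 610,
}
ambiguous_region = ["floated", "down", "the", "streams"]
disambiguating_region = ["sank"]

rt_ambiguous = mean(trial[w] for w in ambiguous_region)
rt_disambig = mean(trial[w] for w in disambiguating_region)
print(f"ambiguous: {rt_ambiguous:.0f} ms, disambiguating: {rt_disambig:.0f} ms")
```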


A Study of 'Emotion Trigger' by Text Mining Techniques (텍스트 마이닝을 이용한 감정 유발 요인 'Emotion Trigger'에 관한 연구)

  • An, Juyoung;Bae, Junghwan;Han, Namgi;Song, Min
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.69-92 / 2015
  • The explosion of social media data has led to the application of text-mining techniques to analyze big social media data more rigorously. Even as social media text analysis algorithms have improved, previous approaches retain some limitations. In sentiment analysis of social media written in Korean, there are two typical approaches. One is the linguistic approach using machine learning, which is the most common; some studies add grammatical factors to the feature sets used to train the classification model. The other adopts semantic analysis for sentiment analysis, but it has mainly been applied to English texts. To overcome these limitations, this study applies the Word2Vec algorithm, an extension of neural network algorithms, to capture the more extensive semantic features that existing sentiment analysis has underestimated. The result of adopting the Word2Vec algorithm is compared with the result of co-occurrence analysis to identify the difference between the two approaches. The results show that the Word2Vec algorithm extracts roughly three times as many related words expressing emotion about the keyword as co-occurrence analysis does. This difference comes from Word2Vec's vectorization of semantic features; the Word2Vec algorithm is thus able to catch hidden related words that traditional analysis does not find. In addition, part-of-speech (POS) tagging for Korean is used to detect adjectives as "emotion words." The emotion words extracted from the text are converted into word vectors by the Word2Vec algorithm to find related words, and among these related words the nouns are selected, since each of them may have a causal relationship with the emotion word in the sentence. This process of extracting the trigger factors of an emotion word is named "Emotion Trigger" in this study. As a case study, the datasets were collected by searching with three keywords rich in public emotion and opinion: professor, prosecutor, and doctor. Preliminary data collection was conducted to select secondary keywords for gathering the data used in the actual analysis: Professor (sexual assault, misappropriation of research money, recruitment irregularities, polifessor), Doctor (Shin Hae-chul Sky Hospital, drinking and plastic surgery, rebate), Prosecutor (lewd behavior, sponsor). The text data comprise about 100,000 documents (Professor: 25,720; Doctor: 35,110; Prosecutor: 43,225), gathered from news, blogs, and Twitter to reflect various levels of public emotion. Gephi (http://gephi.github.io) was used for visualization, and all programs used for text processing and analysis were written in Java. The contributions of this study are as follows. First, different approaches to sentiment analysis are integrated to overcome the limitations of existing approaches. Second, finding Emotion Triggers can detect hidden connections to public emotion that existing methods cannot. Finally, the approach used in this study can be generalized regardless of the type of text data.
The limitation of this study is that it is hard to claim that a word extracted by Emotion Trigger processing has a significant causal relationship with the emotion word in a sentence. Future work will clarify the causal relationship between emotion words and the words extracted by Emotion Trigger by comparing against manually tagged relationships. Furthermore, much of the text data used for Emotion Trigger comes from Twitter, which has a number of distinct features not dealt with in this study; these will be considered in further work.
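
A minimal sketch of the Emotion Trigger pipeline described above, assuming a toy English corpus and the NLTK POS tagger in place of the Korean corpus and Korean tagger the paper uses: Word2Vec finds words related to an emotion word, and the nouns among them are kept as candidate triggers. With a corpus this small the vectors are essentially noise; the point is only the shape of the pipeline.

```python
import nltk  # assumes 'averaged_perceptron_tagger' data is installed
from gensim.models import Word2Vec

# Toy tokenized corpus; the paper trains on ~100,000 Korean documents.
corpus = [
    ["the", "professor", "faced", "angry", "criticism", "over", "the", "scandal"],
    ["readers", "were", "angry", "about", "the", "research", "scandal"],
    ["the", "doctor", "drew", "angry", "comments", "after", "the", "rebate", "case"],
]

model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, seed=1)

def emotion_triggers(emotion_word, topn=10):
    """Return the nouns among the words most similar to the emotion word."""
    related = [w for w, _ in model.wv.most_similar(emotion_word, topn=topn)]
    return [w for w, tag in nltk.pos_tag(related) if tag.startswith("NN")]

print(emotion_triggers("angry"))  # candidate trigger nouns for "angry"
```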

Restoring Omitted Sentence Constituents in Encyclopedia Documents Using Structural SVM (Structural SVM을 이용한 백과사전 문서 내 생략 문장성분 복원)

  • Hwang, Min-Kook;Kim, Youngtae;Ra, Dongyul;Lim, Soojong;Kim, Hyunki
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.131-150 / 2015
  • Omission of noun phrases for obligatory cases is a common phenomenon in sentences of Korean and Japanese, which is not observed in English. In encyclopedia texts, an argument of a predicate is more easily omitted when it can be filled by a noun phrase co-referential with the title. The omitted noun phrase is called a zero anaphor or zero pronoun. Encyclopedias like Wikipedia are a major source for information extraction by intelligent application systems such as information retrieval and question answering systems. However, omission of noun phrases degrades the quality of information extraction. This paper deals with the problem of developing a system that can restore omitted noun phrases in encyclopedia documents. The problem our system deals with is closely related to zero anaphora resolution, one of the important problems in natural language processing. A noun phrase existing in the text that can be used for restoration is called an antecedent; an antecedent must be co-referential with the zero anaphor. While in zero anaphora resolution the antecedent candidates are only the noun phrases in the same text, in our problem the title is also a candidate. In our system, the first stage is in charge of detecting the zero anaphor. In the second stage, antecedent search is carried out over the candidates. If the antecedent search fails, an attempt is made in the third stage to use the title as the antecedent. The main characteristic of our system is its use of a structural SVM for finding the antecedent. The noun phrases in the text that appear before the position of the zero anaphor comprise the search space. The main technique in previously proposed methods is to perform binary classification on all the noun phrases in the search space and select as antecedent the noun phrase classified as such with the highest confidence. However, in this paper we propose viewing antecedent search as the problem of assigning antecedent-indicator labels to a sequence of noun phrases. In other words, sequence labeling is employed in antecedent search; we are the first to suggest this idea. To perform sequence labeling, we use a structural SVM that receives a sequence of noun phrases as input and returns a sequence of labels as output. An output label takes one of two values: one indicating that the corresponding noun phrase is the antecedent, the other that it is not. The structural SVM we used is based on a modified Pegasos algorithm, which uses a subgradient descent methodology for optimization. To train and test our system we selected a set of Wikipedia texts and constructed an annotated corpus that provides gold-standard answers such as zero anaphors and their possible antecedents. Training examples were prepared from the annotated corpus and used to train the SVMs and test the system. For zero anaphor detection, sentences are parsed by a syntactic analyzer and omitted subject or object cases are identified; the performance of our system is thus dependent on that of the syntactic analyzer, which is a limitation. When an antecedent is not found in the text, our system tries to use the title to restore the zero anaphor, based on binary classification using a regular SVM. The experiments showed that our system achieves F1 = 68.58%, which means a state-of-the-art system can be developed with our technique.
It is expected that future work enabling the system to utilize semantic information can lead to a significant performance improvement.
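
As a rough sketch of the optimization machinery the paper's structural SVM builds on, the code below implements the Pegasos stochastic subgradient update for a plain binary SVM (the antecedent / non-antecedent decision). The data, features, and hyperparameters are illustrative, not the paper's, and the structural (sequence-level) extension is omitted.

```python
import numpy as np

def pegasos(X, y, lam=0.1, epochs=20, seed=0):
    """Train a linear SVM with the Pegasos stochastic subgradient method."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)            # step size 1/(lambda * t)
            if y[i] * X[i].dot(w) < 1:       # margin violated: hinge subgradient
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                            # only shrink (regularizer term)
                w = (1 - eta * lam) * w
    return w

# Toy linearly separable data: label is the sign of the first feature.
X = np.array([[2.0, 1.0], [1.5, -0.5], [-1.0, 0.3], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w = pegasos(X, y)
print(np.sign(X.dot(w)))  # expected signs: [ 1.  1. -1. -1.]
```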