• Title/Summary/Keyword: sentence-form information (문장형태 정보)


Character Identification on Multiparty Dialogues using Multimodal Features (멀티모달 자질을 활용한 다중 화자 대화 속 인물 식별)

  • Han, Kijong;Choi, Seong-Ho;Shin, Giyeon;Zhang, Byoung-Tak;Choi, Key-Sun
    • Annual Conference on Human and Language Technology
    • /
    • 2018.10a
    • /
    • pp.215-219
    • /
    • 2018
  • Character identification in multiparty dialogue is the problem of determining which character a noun or noun phrase that refers to a person, such as 'she' or 'father', actually denotes in a script featuring multiple characters. For the setting in which only the natural-language text of the script is given as input, a dataset has been built from drama scripts and several studies have been conducted on it. However, even humans sometimes cannot tell which character such a noun or noun phrase refers to from the dialogue sentences alone. This paper therefore presents a method that additionally exploits information from the video scene at the time of utterance to improve character identification performance. In addition, previous studies on character identification have approached the task as classification over a predefined set of characters, so their models cannot be applied directly to arbitrary scripts or dialogues containing characters unseen during training. We therefore also present a coreference-resolution-based character identification method that uses video information but no prior character information, so that once trained it can be applied to arbitrary scripts.

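The coreference-based formulation above can be pictured as encoding each mention from its textual context plus the video scene at utterance time, then scoring mention pairs for coreference so that no predefined character inventory is needed. The PyTorch sketch below is purely illustrative and is not the authors' model; the module names, feature dimensions, and the simple concatenation fusion are all assumptions.

```python
# A minimal, hypothetical sketch of multimodal mention encoding and pairwise
# coreference scoring. Module names, feature dimensions, and the concatenation
# fusion are assumptions for illustration, not the authors' architecture.
import torch
import torch.nn as nn

class MultimodalMentionEncoder(nn.Module):
    """Fuses a textual mention representation with the visual feature of the
    scene shown at utterance time."""
    def __init__(self, text_dim=300, visual_dim=512, hidden_dim=256):
        super().__init__()
        self.fuse = nn.Sequential(nn.Linear(text_dim + visual_dim, hidden_dim),
                                  nn.ReLU())

    def forward(self, text_feat, visual_feat):
        return self.fuse(torch.cat([text_feat, visual_feat], dim=-1))

class CoreferenceScorer(nn.Module):
    """Scores whether two mentions denote the same character, so no predefined
    character inventory is required."""
    def __init__(self, hidden_dim=256):
        super().__init__()
        self.score = nn.Linear(hidden_dim * 2, 1)

    def forward(self, mention_a, mention_b):
        return torch.sigmoid(self.score(torch.cat([mention_a, mention_b], dim=-1)))

# Toy usage with random features standing in for real text/scene encoders.
encoder, scorer = MultimodalMentionEncoder(), CoreferenceScorer()
m1 = encoder(torch.randn(1, 300), torch.randn(1, 512))
m2 = encoder(torch.randn(1, 300), torch.randn(1, 512))
print(scorer(m1, m2))  # probability that the two mentions corefer
```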

Dependency Relation Analysis using Case Frame for Encyclopedia Question-Answering System (백과사전 질의응답을 위한 격틀 기반 의존관계 분석)

  • Lim, Soo-Jong;Jung, Eui-Suk;Jang, Myoung-Gil
    • Annual Conference on Human and Language Technology
    • /
    • 2004.10d
    • /
    • pp.167-172
    • /
    • 2004
  • To exploit structural analysis as one source of information for finding answers in an encyclopedia, this work studies accurate structural analysis through dependency relation analysis. To identify the relation between a predicate and its arguments, which is the target for answer finding, sentences are first divided into chunks to reduce the ambiguity of dependency analysis, and the head word and its semantic code are extracted from each chunk. Dependency relations between the resulting chunks are then determined by resolving ambiguity with semantic features based mainly on case frames and semantic codes, together with distance features, case-relation features, and clause-form features. Considering the characteristics of encyclopedia text, omitted constituents and serial verbs are also handled so that the encyclopedia QA system is given more accurate information for finding answers. Experimental results show 89.43% accuracy for dependency relations between verb phrases and noun phrases, 78.40% accuracy when case labels are assigned to the dependency relations, and 68.23% for restoration, the post-processing step for the encyclopedia.

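The abstract above describes resolving dependency ambiguity between chunks by combining case-frame-based semantic features with distance and case-relation features. The sketch below illustrates that idea in a hypothetical form; the case-frame dictionary, feature weights, and chunk representation are invented for illustration and do not come from the paper.

```python
# A minimal, hypothetical sketch of case-frame-based dependency disambiguation
# between chunks. The case-frame dictionary, feature weights, and chunk fields
# are invented for illustration, not the paper's resources.
CASE_FRAMES = {
    # predicate head -> {case particle: acceptable semantic codes of the argument}
    "먹다": {"이/가": {"ANIMATE"}, "을/를": {"FOOD"}},
}

def head_score(chunk, predicate, distance):
    """Score how well `predicate` governs `chunk` (higher is better)."""
    frame = CASE_FRAMES.get(predicate["head"], {})
    allowed = frame.get(chunk["case"], set())
    semantic = 1.0 if chunk["sem_code"] in allowed else 0.0  # semantic feature
    case_rel = 0.5 if chunk["case"] in frame else 0.0        # case-relation feature
    return semantic + case_rel - 0.1 * distance              # distance feature

def attach(chunk, predicates):
    """Attach the chunk to the predicate chunk with the highest score."""
    return max(predicates,
               key=lambda p: head_score(chunk, p, abs(p["idx"] - chunk["idx"])))

# Toy usage: '사과를' (semantic code FOOD) attaches to '먹다' rather than '가다'.
noun = {"idx": 1, "head": "사과", "case": "을/를", "sem_code": "FOOD"}
preds = [{"idx": 2, "head": "먹다"}, {"idx": 4, "head": "가다"}]
print(attach(noun, preds)["head"])
```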

The Processing of Irregular Verbals in Korean : Shown in Aphasics (한국어 불규칙 용언의 형태 정보 : 실어증 환자를 중심으로)

  • 김윤정;김수정;김희정;남기춘
    • Proceedings of the Korean Society for Cognitive Science Conference
    • /
    • 2000.05a
    • /
    • pp.303-308
    • /
    • 2000
  • Predicates can be classified as regular or irregular according to whether the stem undergoes form changes, other than automatic phonological alternations, when it combines with various grammatical morphemes. The purpose of this study is to investigate in what form such irregular predicates are stored in the mental lexicon, how they relate to regular predicates, and further, what behavioral impairments aphasic patients show compared with normal speakers and, if impaired, which processing route is damaged. The participants were one patient with comprehension (Wernicke's) aphasia and one patient with mild aphasia accompanied by apraxia of speech. A word completion task was used: the given base form of a predicate had to be inflected to fit the context of a test sentence. According to the results, the patients made almost no errors on regular inflection (e.g., 먹다/먹는) or on irregular predicates whose form is preserved in the inflected form (e.g., 줍다/줍고), but made errors on most items in which an irregular predicate must undergo a form change (e.g., 줍다/주워). In these errors, a directional tendency to preserve the base form was observed. This suggests that processing is influenced more by whether the form is preserved or changed than by the traditional grammatical distinction between regular and irregular predicates. The patient with comprehension aphasia in particular showed a very high overall error rate, making errors even on regular predicates; in those cases the errors did not occur in the stem but in the grammatical function morphemes, that is, the connective endings whose choice depends on the following content, a result consistent with previous studies reporting problems with maintaining and integrating information.


Intrusion Detection System based on Packet Payload Analysis using Transformer

  • Woo-Seung Park;Gun-Nam Kim;Soo-Jin Lee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.11
    • /
    • pp.81-87
    • /
    • 2023
  • Intrusion detection systems that learn metadata of network packets have been proposed recently. However, these approaches require time to analyze packets and generate metadata for model training, and additional time to pre-process the metadata before training. In addition, models trained on specific metadata cannot detect intrusions directly from the raw packets flowing into the network. To address this problem, this paper proposes a natural language processing-based intrusion detection system that detects intrusions by learning the packet payload as a single sentence, without an additional conversion process. To verify the performance of our approach, we used the UNSW-NB15 dataset and Transformer models. First, the PCAP files of the dataset were labeled, and then two Transformer models (BERT, DistilBERT) were trained directly on the payloads in sentence form to analyze the detection performance. The experimental results showed binary classification accuracies of 99.03% and 99.05%, respectively, which is similar or superior to the detection performance of techniques proposed in previous studies. Multi-class classification also performed better than previous studies, with 86.63% and 86.36% accuracy, respectively.
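
Treating each packet payload as a single "sentence" and fine-tuning a Transformer classifier on it can be sketched with the Hugging Face transformers library as below. This is a minimal, assumption-laden illustration rather than the paper's code: the payload strings, label scheme, and hyperparameters are placeholders, and in practice the labeled payloads would be extracted from the UNSW-NB15 PCAP files.

```python
# A minimal, hypothetical sketch of fine-tuning DistilBERT on packet payloads
# treated as sentences. The payload strings, labels, and hyperparameters are
# placeholders; in practice they would come from the labeled UNSW-NB15 PCAPs.
import torch
from torch.utils.data import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)  # binary: normal vs. attack

class PayloadDataset(Dataset):
    """Wraps payload strings (e.g., hex or printable bytes) and their labels."""
    def __init__(self, payloads, labels):
        self.enc = tokenizer(payloads, truncation=True, padding="max_length",
                             max_length=256)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

# Toy payloads standing in for payload text extracted from labeled PCAP files.
train_ds = PayloadDataset(["4500 003c 1c46 4000 4006", "4745 5420 2f61 646d 696e"],
                          [0, 1])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ids-distilbert", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=train_ds,
)
trainer.train()
```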

The Systemic Functional Linguistics Analysis of Texts in Elementary Science Textbooks by Curriculum Revision (교육과정 변천에 따른 초등 과학 교과서 텍스트에 대한 체계기능언어학적 분석)

  • Maeng, Seung-Ho;Kim, Hye-Ree;Kim, Chan-Jong;Lee, Jeong-A
    • Journal of The Korean Association For Science Education
    • /
    • v.27 no.3
    • /
    • pp.242-252
    • /
    • 2007
  • This study analyzed the science texts covering 'air pressure' and 'wind', topics common to every curriculum from the syllabus period to the 7th curriculum, in terms of Systemic Functional Linguistics. The important findings were as follows. In terms of the ideational metafunction, the amount of scientific information in the texts decreased with successive curriculum revisions, and most of the information took the form of 'definition' and 'fact' rather than 'principle'. In terms of the interpersonal metafunction, the gap between students and the texts narrowed and the social position of students was increasingly taken into account across curriculum revisions. In terms of the textual metafunction, the proportions of technical terminology and notation decreased, but the amount of text in the science textbooks decreased as well. While the grammatical subject was expressed in the early texts, it came to be omitted over time, and the consistency of subject and theme was markedly reduced in the 7th curriculum.

Science Popularizing Mechanism of a Science Magazine in terms of the Linguistic Features of Earth Science Articles in 'Science Donga' ('과학동아' 지구과학 기사의 언어적 특성으로 본 과학 잡지의 과학 대중화 기제)

  • Ham, Seok-Jin;Maeng, Seung-Ho;Kim, Chan-Jong
    • Journal of the Korean earth science society
    • /
    • v.31 no.1
    • /
    • pp.51-62
    • /
    • 2010
  • The purpose of this study was to investigate how a science magazine plays a role in filling the gap between scientists and the general public and how it contributes to science popularization, by analyzing the linguistic features of the texts used in the magazine. We used 12 articles (six written by journalists and six written by scientists) from Science Donga. Register analysis was conducted to characterize the linguistic features of the texts in terms of ideational meaning, interpersonal meaning, and textual meaning. The results are as follows: (1) The articles written by journalists used a higher proportion of mental and verbal processes, through which the conversations and thoughts of scientists were expressed. (2) Human agents were relatively explicit in the journalists' articles, whereas they were implicit or omitted in the scientists' articles. (3) Interrogative sentences, inclusive imperative sentences, and even omissions were frequently found in the journalists' articles, whereas the scientists' articles mainly used declarative statements. (4) The clause density of both the journalists' and the scientists' articles was similarly lower than that of science textbooks. (5) The information structure revealed by the patterns of Theme and Rheme was simpler in the journalists' articles than in science textbooks, while that of the scientists' articles was more complex than that of the journalists'. Based on these linguistic features, we found that a science magazine contributes to science popularization in two ways: the journalists' articles present science content in a way that readers can follow with ease and feel well acquainted with, and the modified articles of scientists help the general public become familiar with the culture of science through its use of scientific language.

Statistical Word Sense Disambiguation based on using Variant Window Size (가변길이 윈도우를 이용한 통계 기반 동형이의어의 중의성 해소)

  • Park, Gi-Tae;Lee, Tae-Hoon;Hwang, So-Hyun;Lee, Hyun Ah
    • Annual Conference on Human and Language Technology
    • /
    • 2012.10a
    • /
    • pp.40-44
    • /
    • 2012
  • Lexical semantic ambiguity is a characteristic of natural language that degrades the accuracy of natural language processing, and research using linguistic rules and various machine learning models has continued in order to resolve it. For sense disambiguation of homographs, the surrounding context is the most important feature, and the size of the context window used to extract feature information is closely related to disambiguation performance and must therefore be chosen carefully. This paper proposes a variable-length window method that uses a context of variable size in the sense disambiguation process. Trained on the morpho-semantically annotated Sejong corpus and tested on 32,735 sentences for 12 target words, the method achieved an average accuracy of 92.2% for predicates, an improvement over using a fixed window.

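The variable-length window idea can be illustrated as growing the context window around the ambiguous word until one sense is scored clearly above the others. The sketch below is a hypothetical reading of that approach; the co-occurrence scoring, smoothing constant, and window-growing stopping rule are assumptions, not the paper's exact method.

```python
# A minimal, hypothetical sketch of statistical homograph disambiguation with a
# variable-length context window. The scoring, smoothing, and stopping rule are
# assumptions for illustration, not the paper's exact formulation.
import math
from collections import defaultdict

cooc = defaultdict(lambda: defaultdict(int))  # cooc[sense][context word] = count
sense_count = defaultdict(int)

def train(tagged_sentences, target, max_window=5):
    """Collect co-occurrence counts from a sense-tagged corpus (e.g., Sejong)."""
    for words, senses in tagged_sentences:             # parallel lists per sentence
        for i, (word, sense) in enumerate(zip(words, senses)):
            if word != target:
                continue
            sense_count[sense] += 1
            lo, hi = max(0, i - max_window), min(len(words), i + max_window + 1)
            for j in range(lo, hi):
                if j != i:
                    cooc[sense][words[j]] += 1

def disambiguate(words, i, senses, min_window=1, max_window=5, margin=2.0):
    """Grow the window until the best sense clearly outscores the runner-up."""
    for win in range(min_window, max_window + 1):
        lo, hi = max(0, i - win), min(len(words), i + win + 1)
        context = [words[j] for j in range(lo, hi) if j != i]
        scores = {}
        for s in senses:
            score = math.log(sense_count[s] + 1)
            for c in context:
                score += math.log(cooc[s][c] + 0.1)     # add-0.1 smoothing (assumed)
            scores[s] = score
        ranked = sorted(scores.values(), reverse=True)
        if len(ranked) < 2 or ranked[0] - ranked[1] >= margin:
            break                                       # confident enough; stop growing
    return max(scores, key=scores.get)
```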

A Study on Checklist Development of Articulating Reading Appreciation (독서감상 표현을 위한 체크리스트 개발에 관한 연구)

  • Lee, Susang;Lim, Yeojoo;Joo, So-Hyun
    • Journal of Korean Library and Information Science Society
    • /
    • v.52 no.4
    • /
    • pp.205-228
    • /
    • 2021
  • This study focuses on the development of a checklist for articulating reading appreciation, to be used as initial data for book recommendation to library users. As reading comprehension is a prerequisite for reading appreciation, the researchers analyzed research articles on reading comprehension to identify and categorize its core factors. Studies on reader-response theory and literacy education were also examined: key words and phrases that stimulate readers' responses to reading were extracted and formulated as questions. These questions were reviewed by experts on reading education. The final checklist consists of 14 questions: 4 on literal/inferential comprehension, 3 on evaluative comprehension, and 3 on appreciative comprehension.

A Processing of Progressive Aspect "te-iru" in Japanese-Korean Machine Translation (일한기계번역에서 진행형 "ている"의 번역처리)

  • Kim, Jeong-In;Mun, Gyeong-Hui;Lee, Jong-Hyeok
    • The KIPS Transactions:PartB
    • /
    • v.8B no.6
    • /
    • pp.685-692
    • /
    • 2001
  • This paper describes how to disambiguate the aspectual meaning of the Japanese expression "-te iru" in Japanese-Korean machine translation. Due to the grammatical similarities of the two languages, almost all Japanese-Korean MT systems have been developed under the direct MT strategy, in which lexical disambiguation is essential to high-quality translation. Japanese has a progressive aspectual marker "-te iru" which is difficult to translate into its Korean equivalents because Korean has two different progressive aspectual markers: "-ko issta" for the action progressive and "-e issta" for the state progressive. Moreover, the aspectual systems of the two languages do not quite coincide, so the Korean progressive aspect cannot be determined from the Japanese meaning of "-te iru" alone. The progressive aspectual meaning may be partially determined by the meaning of the predicate, and the semantic meaning of the predicate may in turn be partially restricted by adverbials, so all Japanese predicates are classified into five classes: the 1st class is used only for the action progressive; the 2nd generally for the action progressive but occasionally for the state progressive; the 3rd only for the state progressive; the 4th generally for the state progressive but occasionally for the action progressive; and the 5th for the others. Heuristic rules are defined to disambiguate the 2nd and 4th classes on the basis of adverbs and adverbial phrases. In an experimental evaluation using more than 15,000 sentences from the Asahi newspaper, the proposed method improved translation quality by about 5%, which proves that it is effective in disambiguating "-te iru" for Japanese-Korean machine translation.

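The five-class treatment of "-te iru" with adverb-based heuristics for the mixed classes can be pictured as a small rule table. The verb-class assignments, adverb cues, and fallback choice in the sketch below are invented examples, not the lexicon or rules used in the paper.

```python
# A minimal, hypothetical sketch of the verb-class plus adverb heuristics for
# translating "-te iru". The class assignments, adverb cues, and fallback are
# invented examples, not the lexicon or rules used in the paper.

VERB_CLASS = {          # classes 1-5 as described in the abstract (examples assumed)
    "走る": 1,          # action progressive only       -> "-ko issta"
    "着る": 2,          # mostly action, sometimes state
    "そびえる": 3,      # state progressive only        -> "-e issta"
    "座る": 4,          # mostly state, sometimes action
}

DURATIVE_CUES = {"ずっと", "今"}        # adverbs favoring the action reading
RESULTATIVE_CUES = {"すでに", "もう"}   # adverbs favoring the state reading

def translate_te_iru(verb, adverbs):
    """Return the Korean progressive marker chosen for `verb` + '-te iru'."""
    cls = VERB_CLASS.get(verb, 5)
    if cls == 1:
        return "-ko issta"
    if cls == 3:
        return "-e issta"
    if cls == 2:  # default action, flip to state on a resultative cue
        return "-e issta" if RESULTATIVE_CUES & set(adverbs) else "-ko issta"
    if cls == 4:  # default state, flip to action on a durative cue
        return "-ko issta" if DURATIVE_CUES & set(adverbs) else "-e issta"
    return "-ko issta"  # class 5 fallback (assumed default)

print(translate_te_iru("着る", ["もう"]))  # -> "-e issta"
```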

Query-based Answer Extraction using Korean Dependency Parsing (의존 구문 분석을 이용한 질의 기반 정답 추출)

  • Lee, Dokyoung;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.161-177
    • /
    • 2019
  • In this paper, we study how to improve answer extraction in a question-answering system by using sentence dependency parsing results. A question-answering (QA) system consists of query analysis, which analyzes the user's query, and answer extraction, which extracts appropriate answers from documents; various studies have been conducted on both. To improve the performance of answer extraction, the grammatical information of sentences must be reflected accurately. Because Korean has free word order and frequently omits sentence components, dependency parsing is a good way to analyze Korean syntax. Therefore, in this study, we improve answer extraction performance by adding features generated from dependency parsing results to the inputs of the answer extraction model (bidirectional LSTM-CRF). We compare the performance of the model when it receives only basic word features generated without dependency parsing with its performance when an Eojeol tag feature and a dependency graph embedding feature are added. Since dependency parsing is performed on the Eojeol, the basic unit of Korean sentences delimited by spaces, the tag of each Eojeol is obtained as a result of parsing, and the Eojeol tag feature encodes this tag information. Generating the dependency graph embedding consists of building the dependency graph from the parsing result and learning an embedding of the graph. From the parsing result, a graph is built with Eojeols as nodes, dependencies between Eojeols as edges, and Eojeol tags as node labels; depending on whether the direction of the dependency relation is considered, either an undirected or a directed graph is generated. To obtain the embedding of the graph we used Graph2Vec, which finds the embedding of a graph from the subgraphs that constitute it. The maximum path length between nodes can be specified when finding subgraphs: if it is 1, the graph embedding is generated only from direct dependencies between Eojeols, and as the maximum path length grows, indirect dependencies are included as well. In the experiments, the maximum path length between nodes was varied from 1 to 3, with and without considering the direction of dependencies, and answer extraction performance was measured. The results show that both the Eojeol tag feature and the dependency graph embedding feature improve answer extraction performance. In particular, the highest performance was obtained when the direction of the dependency relation was considered and the dependency graph was embedded with a maximum path length of 1 in the subgraph extraction step of Graph2Vec. From these experiments we conclude that it is better to take the direction of dependency into account and to consider only direct connections rather than indirect dependencies between words.
The significance of this study is as follows. First, we improved answer extraction performance by adding features based on dependency parsing results, taking into account the characteristics of Korean, which has free word order and frequently omits sentence components. Second, we generated the dependency-parsing feature with a learning-based graph embedding method, without manually defining patterns of dependency between Eojeols. Future research directions are as follows. In this study, the features generated from dependency parsing were applied only to the answer extraction model. In the future, if their effect is confirmed when they are applied to other natural language processing models, such as sentiment analysis or named entity recognition, their validity can be verified more thoroughly.
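
The dependency-graph-embedding step described above, building a graph with Eojeols as nodes, dependencies as edges, and Eojeol tags as node labels, and then embedding whole graphs with Graph2Vec, might be sketched as follows. The parse format, tag values, and the use of the karateclub Graph2Vec implementation (shown here for undirected graphs only) are assumptions for illustration, not the authors' pipeline.

```python
# A minimal, hypothetical sketch of building Eojeol-level dependency graphs and
# embedding them with Graph2Vec. The parse format, tag values, and the use of
# the karateclub implementation (undirected graphs only) are assumptions; the
# paper also evaluates a directed variant not shown here.
import networkx as nx
from karateclub import Graph2Vec

def build_dependency_graph(parse):
    """parse: list of (eojeol_index, head_index, eojeol_tag); head -1 = root."""
    g = nx.Graph()
    for idx, head, tag in parse:
        g.add_node(idx, feature=tag)      # Eojeol tag as the node label
        if head >= 0:
            g.add_edge(idx, head)         # dependency between Eojeols as an edge
    return g

# One graph per sentence (toy parses with hypothetical Eojeol tags).
parses = [
    [(0, 2, "NP_SBJ"), (1, 2, "NP_OBJ"), (2, -1, "VP")],
    [(0, 1, "NP_SBJ"), (1, -1, "VP"), (2, 1, "NP_AJT")],
]
graphs = [build_dependency_graph(p) for p in parses]

# wl_iterations=1 roughly mirrors the best-performing setting above, where only
# direct dependencies (maximum path length 1) contribute to the subgraphs.
model = Graph2Vec(wl_iterations=1, dimensions=64, attributed=True)
model.fit(graphs)
embeddings = model.get_embedding()        # one vector per sentence graph
print(embeddings.shape)
```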