• Title/Summary/Keyword: 한국어 말뭉치 (Korean corpus)

Search Results: 524

Research on the Utilization of Recurrent Neural Networks for Automatic Generation of Korean Definitional Sentences of Technical Terms (기술 용어에 대한 한국어 정의 문장 자동 생성을 위한 순환 신경망 모델 활용 연구)

  • Choi, Garam;Kim, Han-Gook;Kim, Kwang-Hoon;Kim, You-eil;Choi, Sung-Pil
    • Journal of the Korean Society for Library and Information Science / v.51 no.4 / pp.99-120 / 2017
  • This paper introduces a pair of Korean sentence generation models that can automatically generate definitional statements and descriptions of technical terms and concepts, with the aim of developing a semiautomatic support system that allows researchers to efficiently analyze technical trends in ever-growing industries and markets. The proposed models are based on LSTM (Long Short-Term Memory), a deep learning model capable of effectively labeling textual sequences by taking the contextual relations of each item in a sequence into account. Our models take technical terms as input and can generate a broad range of heterogeneous textual descriptions that explain the concepts behind the terms. In experiments using large-scale training collections, we confirmed that more accurate and reasonable sentences are generated by the CHAR-CNN-LSTM model, a word-based LSTM that exploits character embeddings built with convolutional neural networks (CNN). The results of this study can serve as a basis for an extended model that generates a set of sentences covering the same subject and, furthermore, for an artificial intelligence model that automatically creates technical literature.
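
    The generation loop such a model relies on can be sketched as follows. This is a minimal toy illustration with a hypothetical bigram table standing in for the trained CHAR-CNN-LSTM; in the paper, the next-token distribution comes from the neural model, not a lookup table.

    ```python
    # Toy illustration of term-conditioned greedy decoding. The bigram table
    # below stands in for the trained CHAR-CNN-LSTM: in the paper, the
    # next-token distribution comes from the neural model, not a lookup table.

    BIGRAM = {  # hypothetical next-word lists, most likely candidate first
        "corpus": ["is"],
        "is": ["a", "collection"],
        "a": ["collection"],
        "collection": ["of"],
        "of": ["texts"],
        "texts": ["<eos>"],
    }

    def generate_definition(term, table, max_len=10):
        """Greedy decoding: start from the input term and repeatedly emit the
        most likely next token until <eos> or the length limit is reached."""
        tokens = [term]
        current = term
        for _ in range(max_len):
            candidates = table.get(current)
            if not candidates:
                break
            nxt = candidates[0]  # greedy: take the top-ranked candidate
            if nxt == "<eos>":
                break
            tokens.append(nxt)
            current = nxt
        return " ".join(tokens)

    print(generate_definition("corpus", BIGRAM))  # "corpus is a collection of texts"
    ```

    The same loop works unchanged whether the next-token scores come from a bigram table or an LSTM softmax; only the lookup inside the loop changes.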

Transfer Dictionary for A Token Based Transfer Driven Korean-Japanese Machine Translation (토큰기반 변환중심 한일 기계번역을 위한 변환사전)

  • Yang Seungweon
    • Journal of Korea Society of Industrial Information Systems / v.9 no.3 / pp.64-70 / 2004
  • Korean and Japanese share the same sentence structure because they belong to the same language family, so transfer-driven machine translation is the most efficient approach for translating between them. This paper introduces a method for creating a transfer dictionary for Token-Based Transfer-Driven Korean-Japanese Machine Translation (TB-TDMT). If the transfer dictionaries are well constructed, the wasted effort of traditional full parsing can be avoided by performing shallow parsing instead. The semi-parser builds a dependency tree carrying the minimum information needed by the output-generation module. We constructed the transfer dictionaries using a corpus obtained from the ETRI spoken-language database. Our system was tested on 900 utterances collected from the travel-planning domain. Its success ratio is 92% in a restricted testing environment and 81% in an unrestricted testing environment.

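
    Because the two languages share word order, the token-level transfer step can be sketched as a direct dictionary lookup. The entries below are hypothetical and romanized for readability; real TB-TDMT entries are built from the ETRI corpus and carry richer structure.

    ```python
    # Minimal sketch of a token-based transfer dictionary (hypothetical,
    # romanized entries; real entries come from the ETRI spoken-language corpus).
    TRANSFER_DICT = {
        "hotel": "hoteru",   # 호텔 -> ホテル
        "yeyak": "yoyaku",   # 예약 -> 予約
        "hada": "suru",      # 하다 -> する
    }

    def transfer(tokens, dictionary):
        """Shallow token-by-token transfer: since Korean and Japanese share
        word order, each token is mapped directly without full parsing.
        Unknown tokens pass through unchanged."""
        return [dictionary.get(tok, tok) for tok in tokens]

    print(transfer(["hotel", "yeyak", "hada"], TRANSFER_DICT))
    ```

    The point of the sketch is the absence of a reordering step: the dependency tree from the semi-parser only needs enough structure for the output generator, not a full parse.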

Korean Noun Extractor using Occurrence Patterns of Nouns and Post-noun Morpheme Sequences (한국어 명사 출현 특성과 후절어를 이용한 명사추출기)

  • Park, Yong-Hyun;Hwang, Jae-Won;Ko, Young-Joong
    • Journal of KIISE:Software and Applications / v.37 no.12 / pp.919-927 / 2010
  • Since the performance of mobile devices has recently improved, the demand for information retrieval has increased on mobile devices as well as PCs. If a mobile device with small memory uses a traditional language analysis tool to extract nouns from Korean texts, language analysis imposes a heavy burden, so the need for language analysis tools suited to mobile devices is growing. This paper therefore proposes a new method for noun extraction that uses post-noun morpheme sequences and noun occurrence patterns learned from a large corpus. The proposed noun extractor requires a dictionary of only 146KB and achieves an $F_1$-measure of 0.86; its noun dictionary amounts to only 4% of the capacity of an existing noun extractor with a POS tagger. In addition, it easily extracts nouns for unknown words because of its low dependence on noun dictionaries.
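
    The core idea, recognizing a noun by the post-noun morpheme (josa) that follows it rather than by a large noun dictionary, can be sketched as a suffix-stripping rule. The josa list below is a tiny romanized subset for illustration, not the paper's actual morpheme inventory.

    ```python
    # Sketch of suffix-pattern noun extraction: an eojeol ending in a known
    # post-noun morpheme (josa) is assumed to contain a noun before it.
    # The josa list is a tiny illustrative subset, romanized.
    JOSA = ["eun", "neun", "i", "ga", "eul", "reul", "e", "eseo"]

    def extract_noun(eojeol, josa_list=JOSA):
        """Return the noun candidate obtained by stripping the longest
        matching josa suffix, or None if no josa matches."""
        for josa in sorted(josa_list, key=len, reverse=True):
            if eojeol.endswith(josa) and len(eojeol) > len(josa):
                return eojeol[: -len(josa)]
        return None

    print(extract_noun("hakkyoeseo"))  # "at school" -> noun "hakkyo"
    print(extract_noun("chaekeul"))    # "book (object)" -> noun "chaek"
    ```

    Matching the longest josa first avoids stripping "e" when the actual suffix is "eseo", which is why the list is sorted by length before matching.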

Advanced detection of sentence boundaries based on hybrid method (하이브리드 방법을 이용한 개선된 문장경계인식)

  • Lee, Chung-Hee;Jang, Myung-Gil;Seo, Young-Hoon
    • Annual Conference on Human and Language Technology / 2009.10a / pp.61-66 / 2009
  • This paper proposes an improved sentence boundary detection technique based on linguistic statistics and post-processing rules, designed to be applicable to web documents of various forms. To handle web documents in which punctuation omission and spacing errors are frequent, the proposed method performs boundary detection by training on every syllable that can serve as a sentence boundary. To maximize detection performance, we selected the optimal features and training data through various experiments, compared a range of machine-learning classification models to choose the best one, and corrected the errors of the statistical model, which is dependent on its training data, with rules. Performance was measured on three document types: newspaper articles and blog posts mixing written and spoken style (test set 1), the Sejong corpus and encyclopedia articles consisting mainly of written style (test set 2), and website bulletin-board posts with frequent punctuation omission and spacing errors (test set 3). Using F-measure as the metric and evaluating boundary detection on punctuation marks only, the method achieved 96.5% on test set 1 and 99.4% on test set 2, showing that sentence boundaries in spoken-style text are harder to detect. On test set 1, rule-based post-processing raised precision from 92.1% to 99.4%, demonstrating the necessity of the post-processing rules. In the final evaluation on test set 3, the baseline engine trained only on punctuation marks was compared with the improved engine that recognizes all sentence-boundary candidates; the improved engine outperformed the baseline (61.1%) by 32.0%, proving that the proposed method is effective on web documents.

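
    The statistics-then-rules pipeline can be sketched as follows. The per-character scores here are invented stand-ins for the trained classifier's output; the decimal-point rule is one example of the kind of post-processing rule the paper uses to correct statistical errors.

    ```python
    # Sketch of the hybrid idea: a (toy) statistical score proposes boundary
    # candidates, then hand-written rules veto obvious errors such as a
    # period inside a decimal number. Scores are invented for illustration.

    def is_boundary(text, i, score_table, threshold=0.5):
        """Combine a statistical score for position i with rule-based
        post-processing, mirroring the statistics-then-rules pipeline."""
        ch = text[i]
        if score_table.get(ch, 0.0) < threshold:
            return False
        # Rule: a period between two digits is a decimal point, not a boundary.
        if ch == "." and 0 < i < len(text) - 1:
            if text[i - 1].isdigit() and text[i + 1].isdigit():
                return False
        return True

    SCORES = {".": 0.9, "?": 0.95, ",": 0.1}  # hypothetical model outputs
    text = "Pi is 3.14. Right?"
    boundaries = [i for i in range(len(text)) if is_boundary(text, i, SCORES)]
    print(boundaries)  # the '.' inside 3.14 is correctly rejected
    ```

    The full system scores every candidate syllable, not just punctuation, which is what lets it handle web text where sentence-final periods are often omitted.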

Word Sense Disambiguation Based on Local Syntactic Relations and Sense Co-occurrence Information (국소 구문 관계 및 의미 공기 정보에 기반한 명사 의미 모호성 해소)

  • Kim, Young-Kil;Hong, Mun-Pyo;Kim, Chang-Hyun;Seo, Young-Ae;Yang, Seong-Il;Ryu, Chul;Huang, Yin-Xia;Choi, Sung-Kwon;Park, Sang-Kyu
    • Annual Conference on Human and Language Technology / 2002.10e / pp.184-188 / 2002
  • Unlike approaches that simply use the contextual co-occurrence of neighboring words, this paper proposes a method for resolving noun sense ambiguity based on local syntactic relations and sense co-occurrence information. Because of the difficulty of structural analysis, existing WSD methods could not fully consider the syntactic relations within a sentence and tried to determine senses from co-occurrence with surrounding words alone. In contrast, we propose a noun sense disambiguation method that considers local syntactic relations, covering not only the argument relations of verb phrases but also the semantic relations inside noun phrases. Noun senses were classified according to their co-occurring verbs, to suit the purpose of a machine translation system. We used the sense co-occurrence information of 74,880 semantic case frames annotated with the noun sense codes used as Korean-Chinese machine translation knowledge, and automatically extracted sense co-occurrence information and noun-phrase sense co-occurrence information from a morphologically tagged corpus in a way that introduces no sense ambiguity. Experimental results show a sense disambiguation accuracy of 83.9% for ambiguous nouns.

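
    The verb-conditioned sense selection step can be sketched with toy counts. The senses, verbs, and frequencies below are hypothetical; in the paper these counts come from the 74,880 annotated case frames and the tagged corpus.

    ```python
    # Sketch of sense selection by verb co-occurrence: each noun sense keeps
    # counts of verbs it appeared with, and the sense with the highest count
    # for the current governing verb wins. All counts are hypothetical.

    SENSE_COOC = {
        "bank": {
            "bank/finance": {"deposit": 12, "borrow": 7},
            "bank/river": {"flood": 9, "fish": 4},
        }
    }

    def disambiguate(noun, verb, cooc=SENSE_COOC):
        """Return the sense of `noun` that most frequently co-occurs with
        the governing `verb` in the sense co-occurrence table."""
        senses = cooc[noun]
        return max(senses, key=lambda s: senses[s].get(verb, 0))

    print(disambiguate("bank", "deposit"))  # -> "bank/finance"
    print(disambiguate("bank", "flood"))    # -> "bank/river"
    ```

    The paper's method additionally consults noun-phrase-internal semantic relations when the governing verb alone is not decisive; this sketch shows only the verb-argument case.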

Document Classification Methodology Using Autoencoder-based Keywords Embedding

  • Seobin Yoon;Namgyu Kim
    • Journal of the Korea Society of Computer and Information / v.28 no.9 / pp.35-46 / 2023
  • In this study, we propose a Dual Approach methodology that enhances the accuracy of document classifiers by utilizing both contextual and keyword information. First, contextual information is extracted using Google's BERT, a pre-trained language model known for its outstanding performance on various natural language understanding tasks. Specifically, we employ KoBERT, a model pre-trained on a Korean corpus, and take the CLS token as the contextual representation. Second, keyword information is generated for each document by encoding its set of keywords into a single vector with an Autoencoder. We applied the proposed approach to 40,130 documents related to healthcare and medicine from the National R&D Projects database of the National Science and Technology Information Service (NTIS). The experimental results demonstrate that the proposed methodology outperforms existing methods that rely solely on document or word information in terms of document classification accuracy.
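
    The Dual Approach input construction can be sketched as concatenating the two representations. The vocabulary, dimensions, and values below are illustrative only; in the paper the contextual part is KoBERT's CLS embedding and the keyword part is the Autoencoder's compressed code rather than a raw multi-hot vector.

    ```python
    # Sketch of the dual input: a contextual vector (stand-in for the KoBERT
    # CLS embedding) concatenated with a keyword vector (stand-in for the
    # Autoencoder code, shown here pre-compression as a multi-hot vector).

    VOCAB = ["cancer", "vaccine", "genome", "therapy"]  # hypothetical keywords

    def multi_hot(keywords, vocab=VOCAB):
        """Encode a keyword set over a fixed vocabulary; the paper compresses
        this vector with an Autoencoder before use."""
        return [1.0 if w in keywords else 0.0 for w in vocab]

    def build_document_vector(cls_vector, keywords):
        """Concatenate contextual and keyword representations into the single
        feature vector fed to the document classifier."""
        return list(cls_vector) + multi_hot(keywords)

    doc_vec = build_document_vector([0.2, -0.1, 0.7], {"vaccine", "therapy"})
    print(doc_vec)
    ```

    Concatenation keeps the two information sources independent up to the classifier, which is what lets the ablation compare context-only, keyword-only, and combined inputs.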

A Korean Document Sentiment Classification System based on Semantic Properties of Sentiment Words (감정 단어의 의미적 특성을 반영한 한국어 문서 감정분류 시스템)

  • Hwang, Jae-Won;Ko, Young-Joong
    • Journal of KIISE:Software and Applications / v.37 no.4 / pp.317-322 / 2010
  • This paper proposes a method to improve the performance of a Korean document sentiment classification system by exploiting semantic properties of sentiment words. A sentiment word is a word that carries sentiment, and sentiment features are defined by a set of sentiment words, an important lexical resource for sentiment classification. A sentiment feature can have different sentiment intensities in the general domain and in a specific domain. In the general domain, sentiment intensity can be estimated from snippets returned by a search engine, while in a specific domain, training data can be used for the estimation. The estimated sentiment intensity of a sentiment feature is called its semantic orientation and is used to estimate the sentiment intensity of the sentences in text documents. After estimating the sentiment intensity of the sentences, we apply it to the weights of the sentiment features. We evaluate our system with a support vector machine in three settings: general, domain-specific, and combined general/domain-specific semantic orientation. Our experimental results show improved performance in all settings; in particular, with the combined general/domain-specific semantic orientation, the proposed method performs 3.1% better than a baseline system indexed only by content words.
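
    A semantic orientation score of the kind described can be sketched as a PMI-style association: co-occurrence with a positive seed word versus a negative one. The counts and seed words below are hypothetical; the paper estimates the counts from search-engine snippets (general domain) or training data (specific domain).

    ```python
    import math

    # PMI-style semantic orientation over toy co-occurrence counts:
    # association with a positive seed minus association with a negative
    # seed. All counts are hypothetical stand-ins for snippet statistics.
    COUNTS = {
        ("excellent", "good"): 40, ("excellent", "bad"): 5,
        ("terrible", "good"): 3,  ("terrible", "bad"): 30,
    }

    def semantic_orientation(word, counts=COUNTS):
        """Positive score -> positive orientation; negative -> negative.
        Add-one smoothing avoids division by zero for unseen pairs."""
        pos = counts.get((word, "good"), 0) + 1
        neg = counts.get((word, "bad"), 0) + 1
        return math.log2(pos / neg)

    print(semantic_orientation("excellent") > 0)  # True: positive orientation
    print(semantic_orientation("terrible") > 0)   # False: negative orientation
    ```

    The resulting scores are then usable directly as feature weights, which is how the paper injects sentiment intensity into the SVM index.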

A Deep Learning-based Depression Trend Analysis of Korean on Social Media (딥러닝 기반 소셜미디어 한글 텍스트 우울 경향 분석)

  • Park, Seojeong;Lee, Soobin;Kim, Woo Jung;Song, Min
    • Journal of the Korean Society for information Management / v.39 no.1 / pp.91-117 / 2022
  • The number of depression patients in Korea and around the world is rapidly increasing every year. However, most mentally ill patients are not aware that they are suffering from the disease, so adequate treatment is not performed. If depressive symptoms are neglected, they can lead to suicide, anxiety, and other psychological problems, so early detection and treatment of depression are very important for improving mental health. To address this problem, this study presents a deep learning-based depression tendency model using Korean social media text. After collecting data from Naver KnowledgeiN, Naver Blog, Hidoc, and Twitter, we used the DSM-5 major depressive disorder diagnostic criteria to classify and annotate classes according to the number of depressive symptoms. Afterwards, TF-IDF analysis and word co-occurrence analysis were performed to examine the characteristics of each class of the constructed corpus. In addition, word embedding, dictionary-based sentiment analysis, and LDA topic modeling were performed to build depression tendency classification models from various text features; the embedded text, sentiment score, and topic number of each document were computed and used as features. As a result, the highest accuracy of 83.28% was achieved when depression tendency was classified with the KorBERT-based model combining the embedded text with both the sentiment score and the topic of the document. This study is significant in that it establishes a classification model for Korean depression tendency with improved performance using various text features, and in that it can help detect potential depression patients early among Korean online community users, enabling rapid treatment and prevention and thereby promoting the mental health of Korean society.
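
    The TF-IDF statistic used for the per-class keyword analysis can be sketched as follows. The toy corpus and the smoothed-idf variant are illustrative assumptions, not the study's exact formula or data.

    ```python
    import math

    # Minimal TF-IDF sketch over a toy tokenized corpus: the kind of
    # per-class keyword statistic computed before moving on to embeddings,
    # sentiment scores, and LDA topics. Smoothed idf is one common variant.
    def tf_idf(term, doc, corpus):
        """Term frequency in `doc` weighted by (smoothed) inverse document
        frequency across `corpus`."""
        tf = doc.count(term) / len(doc)
        df = sum(1 for d in corpus if term in d)
        idf = math.log((1 + len(corpus)) / (1 + df)) + 1
        return tf * idf

    corpus = [["sad", "tired", "sleep"], ["happy", "travel"], ["sad", "alone"]]
    print(round(tf_idf("sad", corpus[0], corpus), 3))
    ```

    Terms frequent in one symptom class but rare across the whole corpus score highest, which is what makes TF-IDF useful for characterizing each annotated class.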

The Method of Using the Automatic Word Clustering System for the Evaluation of Verbal Lexical-Semantic Network (동사 어휘의미망 평가를 위한 단어클러스터링 시스템의 활용 방안)

  • Kim Hae-Gyung;Yoon Ae-Sun
    • Journal of the Korean Society for Library and Information Science / v.40 no.3 / pp.175-190 / 2006
  • In recent years there has been much interest in lexical semantic networks. However, it seems very difficult to evaluate their effectiveness and correctness and to devise methods for applying them to various problem domains. To offer fundamental ideas on how to evaluate and utilize lexical semantic networks, we developed two automatic word clustering systems, called system A and system B. 68,455,856 words were used to train both systems. We compared the clustering results of system A with those of system B, which is extended by the lexical semantic network: system B reconstructs its feature vectors using the elements of the lexical semantic network of 3,656 '-ha' verbs. The target data is the multilingual wordnet 'CoroNet'. Comparing the accuracy of the two systems, we found that system B achieved 46.6%, better than system A's 45.3%.

Pivot Discrimination Approach for Paraphrase Extraction from Bilingual Corpus (이중 언어 기반 패러프레이즈 추출을 위한 피봇 차별화 방법)

  • Park, Esther;Lee, Hyoung-Gyu;Kim, Min-Jeong;Rim, Hae-Chang
    • Korean Journal of Cognitive Science / v.22 no.1 / pp.57-78 / 2011
  • Paraphrasing is the act of rewriting a text using other words without altering its meaning. Paraphrases can be used in many fields of natural language processing; in particular, they can be incorporated into machine translation to improve the coverage and the quality of translation. Recently, approaches to paraphrase extraction have utilized bilingual parallel corpora, which consist of aligned sentence pairs. In these approaches, paraphrases are identified from the word alignment result via pivot phrases, the phrases in one language to which two or more phrases are connected in the other language. However, word alignment is itself a very difficult task, so there can be many alignment errors, and these errors can lead to the selection of incorrect pivot phrases. In this study, we propose a paraphrase extraction method that discriminates good pivot phrases from bad ones. Each pivot phrase is weighted according to its reliability, scored using lexical and part-of-speech information. The experimental results show that the proposed method achieves higher precision and recall of paraphrase extraction than the baseline. We also show that the extracted paraphrases can increase the coverage of Korean-English machine translation.

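
    The basic pivot step the paper builds on can be sketched as follows. The alignments are toy data with placeholder pivot names; the paper's contribution, weighting each pivot by a lexical and POS-based reliability score, is noted in comments but omitted from the sketch.

    ```python
    # Sketch of pivot-based paraphrase extraction: English phrases aligned to
    # the same foreign (pivot) phrase become paraphrase candidates. The
    # alignments are toy data; the paper additionally weights each pivot by
    # its reliability to discriminate good pivots from bad ones.

    ALIGNMENTS = [  # (english_phrase, pivot_phrase)
        ("passed away", "pivot_1"), ("died", "pivot_1"),
        ("in the end", "pivot_2"), ("finally", "pivot_2"),
        ("bank", "pivot_3"),
    ]

    def extract_paraphrases(alignments):
        """Group English phrases by pivot; every pivot linked to two or
        more phrases yields paraphrase pairs."""
        by_pivot = {}
        for eng, piv in alignments:
            by_pivot.setdefault(piv, set()).add(eng)
        pairs = set()
        for phrases in by_pivot.values():
            ordered = sorted(phrases)
            for i in range(len(ordered)):
                for j in range(i + 1, len(ordered)):
                    pairs.add((ordered[i], ordered[j]))
        return pairs

    print(sorted(extract_paraphrases(ALIGNMENTS)))
    ```

    A pivot aligned to only one phrase ("pivot_3" above) produces no pairs; a noisy pivot caused by alignment errors would produce spurious pairs, which is exactly what the reliability weighting is designed to suppress.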