• Title/Abstract/Keyword: word Embedding


Bootstrapping-based Bilingual Lexicon Induction by Learning Projection of Word Embedding (부트스트래핑 기반의 단어-임베딩 투영 학습에 의한 대역어 사전 구축)

  • Lee, Jongseo; Wang, JiHyun; Lee, Seung Jin
    • Annual Conference on Human and Language Technology / 2020.10a / pp.462-467 / 2020
  • Building a bilingual lexicon is important for improving the quality of machine translation between low-resource language pairs. Among previously proposed approaches, it is well known that most word-embedding-based methods perform well for morphologically and syntactically similar language pairs such as English-French, but not for dissimilar pairs such as English-Chinese. This paper proposes a method that builds a bilingual lexicon through bootstrapping on top of word embeddings. Starting from a small seed dictionary, the proposed method constructs the bilingual lexicon automatically through an iterative process. We then run experiments on the Korean-English language pair and compare F1-score against the method used in Moses, a tool widely employed for bilingual lexicon construction. The experiments show an F1-score improvement of about 42 percentage points, and the resulting lexicon is seven times larger than the initial seed dictionary.

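A common way to realize this kind of embedding-based induction is to learn a linear projection between the two embedding spaces from the seed dictionary and then grow the dictionary iteratively. The sketch below illustrates that loop under standard assumptions (orthogonal Procrustes for the projection, mutual nearest neighbours for mining new pairs); `src_vecs` and `tgt_vecs` are placeholder dictionaries mapping words to vectors, and this is not the authors' implementation.

```python
import numpy as np

def learn_projection(src_vecs, tgt_vecs, pairs):
    """Orthogonal Procrustes: learn W so that W @ x_source is close to y_target."""
    X = np.stack([src_vecs[s] for s, t in pairs])      # source side of the dictionary
    Y = np.stack([tgt_vecs[t] for s, t in pairs])      # target side of the dictionary
    U, _, Vt = np.linalg.svd(Y.T @ X)
    return U @ Vt                                      # d x d orthogonal mapping

def bootstrap_lexicon(src_vecs, tgt_vecs, seed_pairs, iterations=5):
    """Alternate between learning the projection and mining new translation pairs."""
    src_words, tgt_words = list(src_vecs), list(tgt_vecs)
    S = np.stack([src_vecs[w] for w in src_words])
    T = np.stack([tgt_vecs[w] for w in tgt_words])
    S = S / np.linalg.norm(S, axis=1, keepdims=True)   # unit length for cosine similarity
    T = T / np.linalg.norm(T, axis=1, keepdims=True)
    lexicon = set(seed_pairs)
    for _ in range(iterations):
        W = learn_projection(src_vecs, tgt_vecs, lexicon)
        sims = (S @ W.T) @ T.T                         # source x target cosine similarities
        fwd, bwd = sims.argmax(axis=1), sims.argmax(axis=0)
        for i, j in enumerate(fwd):                    # keep mutual nearest neighbours only
            if bwd[j] == i:
                lexicon.add((src_words[i], tgt_words[j]))
    return lexicon
```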

Automatic Text Summarization based on Selective Copy mechanism against for Addressing OOV (미등록 어휘에 대한 선택적 복사를 적용한 문서 자동요약)

  • Lee, Tae-Seok; Seon, Choong-Nyoung; Jung, Youngim; Kang, Seung-Shik
    • Smart Media Journal / v.8 no.2 / pp.58-65 / 2019
  • Automatic text summarization is the process of shortening a text document by either extraction or abstraction. Recent work applies abstraction approaches inspired by deep learning methods that scale to large document collections. Abstractive text summarization relies on pre-trained word embedding information, but low-frequency yet salient words such as terminology are seldom included in the vocabulary, giving rise to the so-called out-of-vocabulary (OOV) problem. OOV words deteriorate the performance of neural encoder-decoder models. To address OOV words in abstractive text summarization, we propose a copy mechanism that copies new words from the input document while generating summary sentences. Unlike previous studies, the proposed approach combines accurate pointing information with a selective copy mechanism based on a bidirectional RNN and a bidirectional LSTM. In addition, a neural gate that estimates the generation probability and a loss function that optimizes the entire abstraction model are applied. The dataset was constructed from a collection of journal article abstracts and titles. Experimental results demonstrate that both ROUGE-1 (word recall) and ROUGE-L (longest common subsequence) of the proposed encoder-decoder model improve to 47.01 and 29.55, respectively.
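
The selective copy idea described here is close in spirit to a pointer-generator decoder: a learned gate estimates a generation probability and mixes the decoder's vocabulary distribution with the attention (copy) distribution over source tokens, so that OOV words can be copied verbatim. Below is a minimal PyTorch sketch of that gating step only, with assumed tensor shapes and names; it is not the paper's exact model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CopyGate(nn.Module):
    """Mixes generation and copy distributions with a learned probability p_gen."""
    def __init__(self, hidden_size, vocab_size):
        super().__init__()
        self.p_gen_layer = nn.Linear(3 * hidden_size, 1)   # gate from context, state, input
        self.out_layer = nn.Linear(2 * hidden_size, vocab_size)

    def forward(self, context, dec_state, dec_input, attn_weights, src_ids, extended_vocab_size):
        # context: (B, H) attention-weighted encoder summary
        # dec_state, dec_input: (B, H) decoder hidden state and embedded input token
        # attn_weights: (B, src_len) attention over source tokens (the copy distribution)
        # src_ids: (B, src_len) source token ids in an extended vocabulary incl. OOV words
        p_gen = torch.sigmoid(self.p_gen_layer(torch.cat([context, dec_state, dec_input], dim=-1)))
        vocab_dist = F.softmax(self.out_layer(torch.cat([context, dec_state], dim=-1)), dim=-1)
        # Pad the vocabulary distribution so OOV source words get probability slots of their own.
        extra = extended_vocab_size - vocab_dist.size(-1)
        vocab_dist = F.pad(vocab_dist, (0, extra))
        # Final distribution: generate with prob p_gen, copy from the source with prob 1 - p_gen.
        final_dist = p_gen * vocab_dist
        final_dist = final_dist.scatter_add(1, src_ids, (1.0 - p_gen) * attn_weights)
        return final_dist
```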

Monetary policy synchronization of Korea and United States reflected in the statements (통화정책 결정문에 나타난 한미 통화정책 동조화 현상 분석)

  • Chang, Youngjae
    • The Korean Journal of Applied Statistics / v.34 no.1 / pp.115-126 / 2021
  • Central banks communicate with the market through statements on the direction of monetary policy while implementing that policy. The rapid contraction of the global economy due to the recent Covid-19 pandemic can be compared to the crisis during the 2008 global financial crisis. In this paper, we analyze the text of the monetary policy statements of the Bank of Korea and the US Fed, focusing on how their monetary policy directions were affected in the face of a global crisis. For the analysis, we collected the text of the two countries' monetary policy statements published from October 1999 to September 2020. We examined semantic features using word clouds and word embeddings, and analyzed the trend of similarity between the two countries' documents with a piecewise regression tree model. The visualization shows that both the Bank of Korea and the US Fed publish statements with carefully chosen words of clear meaning for transparent and effective communication with the market. The dissimilarity trend between the two countries' documents also shows a degree of synchronization, as rapid changes in the global economic environment affect monetary policy.
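
As a rough illustration of the dissimilarity-trend part of this analysis, each statement can be embedded (for instance by averaging word vectors), a cosine dissimilarity computed per meeting date, and a decision-tree regressor fitted to obtain piecewise-constant segments. The sketch below makes those assumptions; `word_vectors`, the document lists, and `max_leaf_nodes` are placeholders, not the author's data or configuration.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def doc_vector(tokens, word_vectors):
    """Average the word vectors of the tokens that have an embedding."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    return np.mean(vecs, axis=0)

def dissimilarity_trend(bok_docs, fed_docs, dates, word_vectors, max_leaf_nodes=5):
    """Cosine dissimilarity per date, segmented by a piecewise-constant regression tree."""
    diss = []
    for bok, fed in zip(bok_docs, fed_docs):
        u, v = doc_vector(bok, word_vectors), doc_vector(fed, word_vectors)
        cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
        diss.append(1.0 - cos)
    t = np.arange(len(dates)).reshape(-1, 1)        # time index as the only regressor
    tree = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes).fit(t, diss)
    return np.array(diss), tree.predict(t)          # raw series and its piecewise fit
```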

Open Domain Machine Reading Comprehension using InferSent (InferSent를 활용한 오픈 도메인 기계독해)

  • Jeong-Hoon, Kim; Jun-Yeong, Kim; Jun, Park; Sung-Wook, Park; Se-Hoon, Jung; Chun-Bo, Sim
    • Smart Media Journal / v.11 no.10 / pp.89-96 / 2022
  • Open-domain machine reading comprehension adds a retrieval step that searches for paragraphs, since no paragraph relevant to a given question is provided. Document retrieval suffers from lower performance as the number of documents grows, despite abundant research on word-frequency-based TF-IDF. Paragraph selection likewise struggles to capture paragraph context, including sentence-level characteristics, despite much research on word-based embeddings. Document reading comprehension suffers from slow training due to the growing number of parameters, despite much research on BERT. To address these three issues, this study uses BM25, which also accounts for sentence length, and InferSent to capture sentence context, and proposes an open-domain machine reading comprehension model with ALBERT to reduce the number of parameters. Experiments were conducted on the SQuAD 1.1 dataset. BM25 achieved 3.2% higher document retrieval performance than TF-IDF, and InferSent outperformed a Transformer baseline in paragraph selection by 0.9%. Finally, as the number of paragraphs increased in document comprehension, ALBERT was 0.4% higher in EM and 0.2% higher in F1.
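
The retrieval stage hinges on BM25's document-length normalization, which is the "sentence length" consideration mentioned above. A minimal, self-contained BM25 scorer is sketched below; it is an illustration of the scoring function, not the retrieval pipeline used in the paper.

```python
import math
from collections import Counter

def bm25_scores(query_tokens, docs_tokens, k1=1.5, b=0.75):
    """Score each tokenized document against the query with BM25 (non-negative IDF variant)."""
    N = len(docs_tokens)
    avgdl = sum(len(d) for d in docs_tokens) / N
    df = Counter()                                   # document frequency per term
    for d in docs_tokens:
        df.update(set(d))
    scores = []
    for d in docs_tokens:
        tf = Counter(d)
        score = 0.0
        for q in query_tokens:
            if q not in tf:
                continue
            idf = math.log((N - df[q] + 0.5) / (df[q] + 0.5) + 1.0)
            norm = tf[q] * (k1 + 1) / (tf[q] + k1 * (1 - b + b * len(d) / avgdl))
            score += idf * norm                      # longer documents are penalized via b
        scores.append(score)
    return scores
```

The top-scoring documents would then be re-ranked at the paragraph level (InferSent in the paper) before being passed to the ALBERT reader.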

A Deep Learning-based Depression Trend Analysis of Korean on Social Media (딥러닝 기반 소셜미디어 한글 텍스트 우울 경향 분석)

  • Park, Seojeong; Lee, Soobin; Kim, Woo Jung; Song, Min
    • Journal of the Korean Society for Information Management / v.39 no.1 / pp.91-117 / 2022
  • The number of patients with depression in Korea and around the world is increasing rapidly every year. However, most mentally ill patients are not aware that they are suffering from the disease, so adequate treatment is not provided. If depressive symptoms are neglected, they can lead to suicide, anxiety, and other psychological problems. Therefore, early detection and treatment of depression are very important for improving mental health. To address this problem, this study presents a deep learning-based depression tendency classification model using Korean social media text. After collecting data from Naver KnowledgeiN, Naver Blog, Hidoc, and Twitter, the DSM-5 major depressive disorder diagnostic criteria were used to classify and annotate classes according to the number of depressive symptoms. TF-IDF analysis and word co-occurrence analysis were then performed to examine the characteristics of each class in the constructed corpus. In addition, word embedding, dictionary-based sentiment analysis, and LDA topic modeling were used to build a depression tendency classification model from various text features; the embedded text, sentiment score, and topic number of each document were computed and used as features. As a result, the highest accuracy of 83.28% was achieved when depression tendency was classified with the KorBERT algorithm by combining the embedded text with both the document's sentiment score and its topic. This study is significant in that it builds a classification model for Korean depression tendency with improved performance using various text features and can detect potentially depressed users early among Korean online community users, enabling rapid treatment and prevention and thereby helping to promote the mental health of Korean society.
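
The feature-combination step (text embedding plus sentiment score plus topic number feeding a single classifier) can be sketched as follows. This assumes a PyTorch/Hugging Face setup with a publicly available Korean BERT checkpoint (`klue/bert-base`) standing in for KorBERT, and hypothetical class and topic counts; it is not the authors' implementation.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class DepressionClassifier(nn.Module):
    """BERT [CLS] embedding concatenated with a sentiment score and a topic id."""
    def __init__(self, model_name="klue/bert-base", num_topics=20, num_classes=4):
        super().__init__()                                      # counts are assumptions
        self.encoder = AutoModel.from_pretrained(model_name)
        self.topic_emb = nn.Embedding(num_topics, 16)           # learned topic-number embedding
        hidden = self.encoder.config.hidden_size
        self.head = nn.Linear(hidden + 16 + 1, num_classes)     # +1 for the sentiment score

    def forward(self, input_ids, attention_mask, sentiment, topic_id):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]                       # [CLS] token representation
        feats = torch.cat([cls, self.topic_emb(topic_id), sentiment.unsqueeze(-1)], dim=-1)
        return self.head(feats)

tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")
model = DepressionClassifier()
batch = tokenizer(["요즘 잠이 안 오고 무기력해요"], return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"],
               sentiment=torch.tensor([-0.7]), topic_id=torch.tensor([3]))
```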

A Study on the Extraction of Psychological Distance Embedded in Company's SNS Messages Using Machine Learning (머신 러닝을 활용한 회사 SNS 메시지에 내포된 심리적 거리 추출 연구)

  • Seongwon Lee; Jin Hyuk Kim
    • Information Systems Review / v.21 no.1 / pp.23-38 / 2019
  • Social network services (SNS) are an important marketing channel, so many companies actively exploit them by posting SNS messages with content and style appropriate for their customers. In this paper, we focus on the psychological distance embedded in SNS messages and develop a method to measure it by combining traditional content analysis, natural language processing (NLP), and machine learning. Through traditional content analysis by human coding, the psychological distance was extracted from each SNS message, and these coding results were used as input data for NLP and machine learning. With NLP, word embedding was performed and a bag-of-words representation was created. A support vector machine (SVM), one of the machine learning techniques, was trained and tested to predict the psychological distance of SNS messages. Initially, the sensitivity and precision of the SVM predictions were significantly low because of the extreme skewness of the dataset. We improved the performance of the SVM by balancing the class ratio through upsampling and by using only data coded with the same value in the first content analysis. All performance indices then exceeded 70%, which showed that psychological distance can be measured well.
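
The measurement pipeline described here (bag-of-words features, upsampling of minority classes, SVM training) can be approximated with scikit-learn as below. The data frame, its column names, and the balancing scheme are placeholders, not the authors' dataset or exact procedure.

```python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.utils import resample

def train_distance_classifier(df, text_col="message", label_col="distance", seed=0):
    """Upsample minority classes to balance the data, then train a bag-of-words linear SVM."""
    groups = [g for _, g in df.groupby(label_col)]
    n_max = max(len(g) for g in groups)
    balanced = pd.concat(
        g if len(g) == n_max else resample(g, replace=True, n_samples=n_max, random_state=seed)
        for g in groups
    )
    model = make_pipeline(CountVectorizer(), SVC(kernel="linear"))  # BoW features -> SVM
    model.fit(balanced[text_col], balanced[label_col])
    return model

# Hypothetical usage: clf = train_distance_classifier(coded_messages); clf.predict(new_texts)
```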

Question Answering Optimization via Temporal Representation and Data Augmentation of Dynamic Memory Networks (동적 메모리 네트워크의 시간 표현과 데이터 확장을 통한 질의응답 최적화)

  • Han, Dong-Sig; Lee, Chung-Yeon; Zhang, Byoung-Tak
    • Journal of KIISE / v.44 no.1 / pp.51-56 / 2017
  • Research on solving question answering (QA) problems with artificial intelligence models is in a methodological transition period, and one such architecture, the dynamic memory network (DMN), is drawing attention for two key attributes: an attention mechanism defined by neural network operations and a modular architecture that imitates human cognitive processes during QA. In this paper, we increased the accuracy of the inferred answers by adapting an automatic data augmentation method to compensate for the limited amount of training data and by improving the model's ability to perceive time. The experimental results show that on the 1K bAbI tasks, the modified DMN achieves 89.21% accuracy and passes twelve tasks, which is 13.58% higher accuracy and four more tasks passed compared with a reference DMN implementation. Additionally, the DMN's word embedding vectors form strong clusters after training. Moreover, the number of episodic passes and the number of supporting facts show a direct correlation, which affects performance significantly.
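
The "attention mechanism defined by neural network operations" in a DMN is an episodic gate computed from similarity features between each fact, the question, and the current memory. Below is a rough PyTorch sketch of that gate in a simplified form (softmax over facts, as in later DMN variants); it illustrates the standard formulation, not the modified model evaluated in this paper.

```python
import torch
import torch.nn as nn

class EpisodicAttentionGate(nn.Module):
    """Simplified DMN attention gate: scores each fact given the question and current memory."""
    def __init__(self, hidden_size):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4 * hidden_size, hidden_size),
            nn.Tanh(),
            nn.Linear(hidden_size, 1),
        )

    def forward(self, facts, question, memory):
        # facts: (B, num_facts, H); question, memory: (B, H)
        q = question.unsqueeze(1).expand_as(facts)
        m = memory.unsqueeze(1).expand_as(facts)
        features = torch.cat([facts * q, facts * m,
                              torch.abs(facts - q), torch.abs(facts - m)], dim=-1)
        gates = torch.softmax(self.mlp(features).squeeze(-1), dim=1)   # attention over facts
        # The episode is the gate-weighted sum of facts; it is then used to update the memory.
        episode = torch.bmm(gates.unsqueeze(1), facts).squeeze(1)
        return gates, episode
```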

Research on Identifying Mutation-Drug Relationship in Biomedical Literature Using Biomedical Context based pre-trained word embedding (의생명과학 기반 기학습된 워드 임베딩을 이용한 의생명과학 논문 속의 돌연변이-약물 관계 추출 연구)

  • Kim, Hojun; Won, Seongyeon; Gang, Seungwoo; Lee, Kyubum; Kim, Byounggun; Kim, Sunkyu; Kang, Jaewoo
    • Proceedings of the Korea Information Processing Society Conference / 2017.04a / pp.774-777 / 2017
  • As the biomedical field continues to advance, a vast volume of biomedical literature is published, averaging about three thousand papers per day. As more research is conducted, acquiring and organizing newly identified relationships becomes increasingly important for researchers and medical practitioners. At present, however, someone with a certain level of biomedical knowledge must read each paper and organize the information it reports, which cannot keep pace with the exponentially growing volume of information. To address this, this paper aims to systematize biomedical knowledge through machine learning-based automatic extraction of biomedical entity relations. We extracted papers in which mutations and drugs co-occur and split the text into natural-language sentences. The extracted mutation-drug relations were manually labeled as true or false, and the resulting dataset was used to train a machine learning model for mutation-drug relation classification. Finally, we compared the performance of models trained with word embeddings pre-trained on GoogleNews articles and with word embeddings pre-trained on biomedical literature, and we report the advantages of biomedical context-specific word embeddings. Through this work, we hope to contribute to building an automated system that extracts the key content of biomedical papers without requiring researchers to read them, and to serve as a stepping stone for biomedical research.
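
One simple way to reproduce the kind of embedding comparison described here is to represent each mutation-drug sentence as an average of pretrained word vectors and train the same classifier with each embedding set. The sketch below makes those assumptions; the file paths, the linear classifier, and the variable names are placeholders, not the authors' setup.

```python
import numpy as np
from gensim.models import KeyedVectors
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def sentence_vector(tokens, kv):
    """Average pretrained vectors of in-vocabulary tokens; zero vector if none match."""
    vecs = [kv[t] for t in tokens if t in kv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(kv.vector_size)

def evaluate_embedding(path, sentences, labels, binary=True):
    """Cross-validated accuracy of a linear classifier on top of one embedding set."""
    kv = KeyedVectors.load_word2vec_format(path, binary=binary)
    X = np.stack([sentence_vector(s, kv) for s in sentences])
    return cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5).mean()

# Hypothetical comparison: general-domain vs. biomedical-domain vectors on the same data.
# acc_news = evaluate_embedding("GoogleNews-vectors-negative300.bin", sents, y)
# acc_bio  = evaluate_embedding("biomedical-word2vec.bin", sents, y)
```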

A Study of Research on Methods of Automated Biomedical Document Classification using Topic Modeling and Deep Learning (토픽모델링과 딥 러닝을 활용한 생의학 문헌 자동 분류 기법 연구)

  • Yuk, JeeHee; Song, Min
    • Journal of the Korean Society for Information Management / v.35 no.2 / pp.63-88 / 2018
  • This research evaluated differences in classification performance across feature selection methods based on the LDA topic model and on Doc2Vec (a deep-learning-based word-embedding approach), across feature corpus sizes, and across classification algorithms. In addition, to find the feature corpus with the highest classification performance, experiments were conducted with feature corpora composed differently according to the location of text within the document and with varying corpus sizes. The deep learning experiments also evaluated the training frequency and the information considered for context inference. This study constructed a biomedical document dataset, Disease-35083, consisting of biomedical scholarly documents provided by PMC and categorized by disease. Throughout the study, this research verifies which type and size of feature corpus produces the highest performance, and it also suggests feature corpora that extend well to specific features while remaining efficient during training. Additionally, this research compares the differences between deep learning and existing methods and suggests an appropriate method for each classification environment.
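
The two feature-construction routes compared in this study, LDA topic distributions and Doc2Vec document embeddings, can be sketched with gensim as follows. The corpus, topic count, and vector size are placeholders, not the paper's configuration.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

def lda_features(tokenized_docs, num_topics=50):
    """Represent each document by its LDA topic distribution."""
    dictionary = Dictionary(tokenized_docs)
    bow = [dictionary.doc2bow(doc) for doc in tokenized_docs]
    lda = LdaModel(bow, id2word=dictionary, num_topics=num_topics)
    # Lists of (topic id, probability) pairs; densify before feeding a classifier.
    return [lda.get_document_topics(b, minimum_probability=0.0) for b in bow]

def doc2vec_features(tokenized_docs, vector_size=300):
    """Represent each document by a Doc2Vec embedding learned from the corpus."""
    tagged = [TaggedDocument(doc, [i]) for i, doc in enumerate(tokenized_docs)]
    model = Doc2Vec(tagged, vector_size=vector_size, min_count=2, epochs=20)
    return [model.dv[i] for i in range(len(tokenized_docs))]

# Either feature matrix would then be fed to the same classifier to compare the two routes.
```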

Inverse Document Frequency-Based Word Embedding of Unseen Words for Question Answering Systems (질의응답 시스템에서 처음 보는 단어의 역문헌빈도 기반 단어 임베딩 기법)

  • Lee, Wooin; Song, Gwangho; Shim, Kyuseok
    • Journal of KIISE / v.43 no.8 / pp.902-909 / 2016
  • A question answering (QA) system finds an actual answer to a question posed by a user, whereas a typical search engine only returns links to relevant documents. Recent work on open-domain QA systems has received much attention in natural language processing, artificial intelligence, and data mining. However, prior QA systems simply replace every word that does not appear in the training data with a single token, even though such unseen words are likely to play a crucial role in differentiating candidate answers from actual answers. In this paper, we propose a method to compute vectors for such unseen words by taking into account the contexts in which they occur. We also propose a model that utilizes inverse document frequencies (IDF) to efficiently process unseen words by expanding the system's vocabulary. Finally, we validate through experiments that the proposed method and model improve the performance of a QA system.
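
One way to realize the idea of building a vector for an unseen word from the contexts in which it occurs, weighted by inverse document frequency, is sketched below. The exact weighting in the paper may differ; the corpus, window size, and embedding objects are placeholders.

```python
import math
from collections import defaultdict
import numpy as np

def idf_table(docs_tokens):
    """Inverse document frequency for every term in the corpus."""
    N = len(docs_tokens)
    df = defaultdict(int)
    for doc in docs_tokens:
        for term in set(doc):
            df[term] += 1
    return {t: math.log(N / (1 + f)) for t, f in df.items()}

def unseen_word_vector(word, docs_tokens, word_vectors, idf, window=5):
    """Embed an unseen word as the IDF-weighted average of known words around its occurrences."""
    acc, total = None, 0.0
    for doc in docs_tokens:
        for i, token in enumerate(doc):
            if token != word:
                continue
            for ctx in doc[max(0, i - window): i + window + 1]:
                if ctx == word or ctx not in word_vectors:
                    continue
                w = idf.get(ctx, 0.0)
                vec = w * np.asarray(word_vectors[ctx])
                acc = vec if acc is None else acc + vec
                total += w
    return acc / total if total > 0 else None   # None if the word never appears in context
```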