• Title/Summary/Keyword: Sentence Extraction

Pivot Discrimination Approach for Paraphrase Extraction from Bilingual Corpus (이중 언어 기반 패러프레이즈 추출을 위한 피봇 차별화 방법)

  • Park, Esther;Lee, Hyoung-Gyu;Kim, Min-Jeong;Rim, Hae-Chang
    • Korean Journal of Cognitive Science
    • /
    • v.22 no.1
    • /
    • pp.57-78
    • /
    • 2011
  • Paraphrasing is the act of writing a text using other words without altering the meaning. Paraphrases can be used in many fields of natural language processing. In particular, paraphrases can be incorporated in machine translation in order to improve the coverage and the quality of translation. Recent approaches to paraphrase extraction utilize bilingual parallel corpora, which consist of aligned sentence pairs. In these approaches, paraphrases are identified from the word alignment result by pivot phrases, i.e., phrases in one language to which two or more phrases are connected in the other language. However, word alignment is itself a very difficult task, so there can be many alignment errors, and these errors can lead to the selection of incorrect pivot phrases. In this study, we propose a paraphrase extraction method that discriminates good pivot phrases from bad ones. Each pivot phrase is weighted according to its reliability, which is scored by considering lexical and part-of-speech information. Experimental results show that the proposed method achieves higher precision and recall in paraphrase extraction than the baseline. We also show that the extracted paraphrases can increase the coverage of Korean-English machine translation.
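
The pivot idea above lends itself to a short sketch. Below is a minimal Python illustration, assuming a word-aligned phrase table is already available; the `pivot_reliability` callable stands in for the paper's lexical and part-of-speech weighting and is purely hypothetical.

```python
from collections import defaultdict
from itertools import combinations

def extract_paraphrases(phrase_table, pivot_reliability):
    """Group source phrases by their aligned pivot phrase and emit
    weighted paraphrase candidates (a minimal sketch of the pivot approach).

    phrase_table     : iterable of (source_phrase, pivot_phrase) alignments
    pivot_reliability: callable scoring a pivot in [0, 1]; a hypothetical
                       stand-in for the paper's lexical/POS-based weighting.
    """
    by_pivot = defaultdict(set)
    for src, pivot in phrase_table:
        by_pivot[pivot].add(src)

    scores = defaultdict(float)
    for pivot, sources in by_pivot.items():
        w = pivot_reliability(pivot)  # discriminate good pivots from bad ones
        for a, b in combinations(sorted(sources), 2):
            scores[(a, b)] += w       # accumulate evidence over all pivots
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy usage: two English phrases sharing the pivot "해결하다" become candidates.
table = [("solve", "해결하다"), ("resolve", "해결하다"), ("solve", "풀다")]
print(extract_paraphrases(table, pivot_reliability=lambda p: 1.0))
```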

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.105-122
    • /
    • 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. For dimensionality reduction, we should consider the density of the data, which has a significant influence on the performance of sentence classification. Higher-dimensional data require more computation, which can lead to high computational cost and overfitting in the model, so a dimension reduction process is necessary to improve performance. Diverse methods have been proposed, ranging from merely reducing noise in the data, such as misspellings and informal text, to incorporating semantic and syntactic information. In addition, how text features are represented and selected affects classifier performance in sentence classification, one of the fields of natural language processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data from the observation space. Existing methods utilize various algorithms for dimensionality reduction, such as feature extraction and feature selection. Besides these algorithms, word embeddings, which learn low-dimensional vector-space representations of words that capture semantic and syntactic information, are also utilized. To improve performance, recent studies have suggested methods in which the word dictionary is modified according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations: once a feature selection algorithm identifies unimportant words, we assume that words similar to them also have little impact on sentence classification. This study proposes two ways to achieve more accurate classification, which eliminate words selectively under specific rules and construct word embeddings based on Word2Vec. To select words of low importance from the text, we use information gain to measure importance and cosine similarity to search for similar words. First, we eliminate words with comparatively low information gain values from the raw text and form the word embedding. Second, we additionally eliminate words that are similar to those with low information gain values and build the word embedding. Finally, the filtered text and word embeddings are applied to two deep learning models: a Convolutional Neural Network and an Attention-Based Bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets and classifies each dataset with the deep learning models. Reviews that received more than five helpful votes, with a helpful-vote ratio over 70%, were classified as helpful reviews. Since Yelp shows only the number of helpful votes, we extracted 100,000 reviews with more than five helpful votes by random sampling from 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters from the text, was applied to each dataset. To evaluate the proposed methods, we compared their performance with Word2Vec and GloVe embeddings that used all the words, and showed that one of the proposed methods performs better: removing unimportant words improves performance, although removing too many words lowers it. Future research should consider diverse preprocessing strategies and an in-depth analysis of word co-occurrence for measuring similarity among words. We also applied the proposed method only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo could be combined with the proposed elimination methods, making it possible to explore combinations of word embedding and elimination methods.
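
The two elimination steps can be sketched compactly, assuming scikit-learn for information gain (via mutual information) and gensim for Word2Vec; the drop ratio and neighbour count below are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif
from gensim.models import Word2Vec

def words_to_remove(texts, labels, drop_ratio=0.2, topn=5, use_similarity=True):
    """Rank words by information gain (mutual information with the class
    label) and mark the lowest-ranked ones, optionally together with their
    Word2Vec nearest neighbours, for elimination. Thresholds are illustrative."""
    vec = CountVectorizer()
    X = vec.fit_transform(texts)
    ig = mutual_info_classif(X, labels, discrete_features=True)
    vocab = np.array(vec.get_feature_names_out())
    low_ig = set(vocab[np.argsort(ig)[: int(len(vocab) * drop_ratio)]])

    if not use_similarity:
        return low_ig  # first variant: information gain only

    # Second variant: also drop words whose vectors are close to low-IG words.
    w2v = Word2Vec([t.split() for t in texts], vector_size=50, min_count=1)
    expanded = set(low_ig)
    for w in low_ig:
        if w in w2v.wv:
            expanded.update(s for s, _ in w2v.wv.most_similar(w, topn=topn))
    return expanded
```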

Web Contents Mining System for Real-Time Monitoring of Opinion Information based on Web 2.0 (웹2.0에서 의견정보의 실시간 모니터링을 위한 웹 콘텐츠 마이닝 시스템)

  • Kim, Young-Choon;Joo, Hae-Jong;Choi, Hae-Gill;Cho, Moon-Taek;Kim, Young-Baek;Rhee, Sang-Yong
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.21 no.1
    • /
    • pp.68-79
    • /
    • 2011
  • This paper focuses on an opinion information extraction and analysis system based on Web mining, which builds on statistics collected from Web contents. That is, users' opinion information scattered across several websites can be automatically extracted and analyzed. The system provides an opinion information search service that enables users to search for positive and negative opinions in real time and check their statistics; users can also search for and monitor other opinion information in real time by entering keywords into the system. Comparison experiments with other techniques showed that the proposed technique performs excellently. We evaluated the extraction of positive/negative opinion information, the dynamic window and tokenizer techniques for multilingual information retrieval, and the extraction of exact multilingual phonetic transliterations. The experiments were carried out on typical movie review sentences and Wikipedia data, and the results are analyzed.
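
As a rough illustration of the keyword-based positive/negative monitoring the abstract describes, here is a deliberately simplified lexicon-based counter; the word lists and sentence splitting are stand-ins for the system's actual statistical extraction techniques.

```python
from collections import Counter

# Tiny illustrative lexicons; the actual system derives opinion statistics
# from Web contents rather than using fixed word lists.
POSITIVE = {"good", "great", "excellent", "love"}
NEGATIVE = {"bad", "poor", "terrible", "hate"}

def monitor_opinions(documents, keyword):
    """Count positive/negative opinion sentences that mention a keyword,
    mimicking the real-time keyword-based monitoring service."""
    stats = Counter()
    for doc in documents:
        for sentence in doc.split("."):
            tokens = set(sentence.lower().split())
            if keyword.lower() not in tokens:
                continue
            if tokens & POSITIVE:
                stats["positive"] += 1
            if tokens & NEGATIVE:
                stats["negative"] += 1
    return stats

print(monitor_opinions(["The movie was great. The ending was bad."], "movie"))
```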

Large Language Models-based Feature Extraction for Short-Term Load Forecasting (거대언어모델 기반 특징 추출을 이용한 단기 전력 수요량 예측 기법)

  • Jaeseung Lee;Jehyeok Rew
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.29 no.3
    • /
    • pp.51-65
    • /
    • 2024
  • Accurate electrical load forecasting is important for the effective operation of power systems in smart grids. With recent developments in machine learning, artificial intelligence-based models for predicting power demand are being actively researched. However, existing models take input variables as numerical features, so forecasting accuracy may suffer because the semantic relationships between these features are not reflected. In this paper, we propose a short-term load forecasting scheme that uses features extracted from the input data by a large language model. We first convert the input variables into a sentence-like prompt format. Then, we use the large language model with frozen weights to derive embedding vectors that represent the features of the prompt, and these vectors are used to train the forecasting model. Experimental results show that the proposed scheme outperformed models based on numerical data, and by visualizing the large language model's attention weights over the prompts, we identified the information that significantly influences predictions.
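
The pipeline can be sketched with a frozen Hugging Face encoder standing in for the paper's large language model; the model choice, prompt template, and field names below are assumptions made for illustration.

```python
import torch
from transformers import AutoModel, AutoTokenizer

MODEL = "bert-base-uncased"  # stand-in backbone; the paper uses an LLM
tokenizer = AutoTokenizer.from_pretrained(MODEL)
encoder = AutoModel.from_pretrained(MODEL).eval()

def to_prompt(record):
    # Hypothetical fields; turn numeric inputs into a sentence-like prompt.
    return (f"On {record['date']} at {record['hour']}:00, the temperature "
            f"was {record['temp']} degrees and the load was {record['load']} kW.")

@torch.no_grad()  # frozen weights: the encoder is only a feature extractor
def embed(record):
    inputs = tokenizer(to_prompt(record), return_tensors="pt")
    hidden = encoder(**inputs).last_hidden_state   # (1, tokens, dim)
    return hidden.mean(dim=1).squeeze(0)           # mean-pool to one vector

vec = embed({"date": "2024-07-01", "hour": 14, "temp": 29.5, "load": 812})
print(vec.shape)  # this vector would be fed to the downstream forecaster
```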

Proposal for the Utilization and Refinement Techniques of LLMs for Automated Research Generation (관련 연구 자동 생성을 위한 LLM의 활용 및 정제 기법 제안)

  • Seung-min Choi;Yu-chul Jung
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.17 no.4
    • /
    • pp.275-287
    • /
    • 2024
  • Research on the integration of Knowledge Graphs (KGs) and Language Models (LMs) has been consistently explored over the years. However, studies focusing on the automatic generation of text using the structured knowledge from KGs have not been as widely developed. In this study, we propose a methodology for automatically generating domain-specific related-work sections at a level comparable to existing papers. The methodology involves: 1) selecting optimal prompts, 2) extracting triples through a four-step refinement process, 3) constructing a knowledge graph, and 4) automatically generating the related-work text. The proposed approach utilizes GPT-4, one of the large language models (LLMs), and is designed to generate related work automatically by applying the four-step refinement process. The model demonstrated performance of 17.3, 14.1, and 4.2 in triple extraction for #Supp, #Cont, and Fluency, respectively. According to the GPT-4 automatic evaluation criteria, the model's performance improved from 88.5 points before refinement to 96.5 points after refinement out of 100, indicating a significant capability to generate related work automatically at a level similar to that of existing papers.
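
A schematic of the four-step pipeline might look as follows; `llm` is a placeholder for a GPT-4 call, and the prompt wording and the single cleanup pass are simplifications of the paper's four-step refinement.

```python
import networkx as nx

def llm(prompt: str) -> str:
    """Placeholder for a GPT-4 call; wire in an actual API client here."""
    raise NotImplementedError

def extract_triples(abstract: str) -> list[tuple[str, str, str]]:
    # Steps 1-2: prompt the model, then refine the raw triples. One cleanup
    # pass stands in here for the paper's four-step refinement process.
    raw = llm(f"Extract (subject | relation | object) triples:\n{abstract}")
    triples = [tuple(part.strip() for part in line.split("|"))
               for line in raw.splitlines() if line.count("|") == 2]
    return [t for t in triples if all(t)]  # drop incomplete triples

def build_graph(triples):
    # Step 3: assemble the knowledge graph from the refined triples.
    g = nx.DiGraph()
    for s, r, o in triples:
        g.add_edge(s, o, relation=r)
    return g

def generate_related_work(graph):
    # Step 4: verbalize the graph back into a related-work section.
    facts = "\n".join(f"{s} {d['relation']} {o}"
                      for s, o, d in graph.edges(data=True))
    return llm(f"Write a related-work section grounded in these facts:\n{facts}")
```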

Automatic Extraction of Paraphrases from a Parallel Bible Corpus (정렬된 성경 코퍼스로부터 바꿔쓰기표현(paraphrase)의 자동 추출)

  • Lee, Kong-Joo;Yun, Bo-Hyun
    • Korean Journal of Cognitive Science
    • /
    • v.17 no.4
    • /
    • pp.323-336
    • /
    • 2006
  • In this paper, we present a pilot system that can extract paraphrases from a parallel corpus using a co-training method. Paraphrases are useful for applications that should create varied and fluent text, such as machine translation, question-answering systems, and multi-document summarization systems. One of the difficulties in extracting paraphrases is finding a rich source from which to extract them. The Bible is a good source for extracting paraphrases, as it has several Korean versions in which every sentence can be easily aligned by chapter and verse. We can extract not only lexical-level paraphrases but also phrasal-level paraphrases from the parallel corpus of Bible versions using the co-training method.
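
Verse-level alignment, the property that makes the Bible convenient here, is easy to reproduce; the single-token substitution heuristic below is a much cruder stand-in for the paper's co-training method and is shown only to make the idea concrete.

```python
def align_verses(version_a, version_b):
    """Align two Bible versions keyed by (book, chapter, verse).
    Each version maps a key like ('John', 3, 16) to its sentence."""
    return [(version_a[k], version_b[k]) for k in version_a if k in version_b]

def lexical_paraphrases(aligned_pairs):
    """Collect word pairs from aligned sentences that differ in exactly one
    position; a crude heuristic, not the paper's co-training method."""
    candidates = set()
    for sent_a, sent_b in aligned_pairs:
        toks_a, toks_b = sent_a.split(), sent_b.split()
        if len(toks_a) != len(toks_b):
            continue
        diff = [(a, b) for a, b in zip(toks_a, toks_b) if a != b]
        if len(diff) == 1:
            candidates.add(diff[0])
    return candidates

pairs = align_verses({("John", 3, 16): "God loved the world"},
                     {("John", 3, 16): "God cherished the world"})
print(lexical_paraphrases(pairs))  # {('loved', 'cherished')}
```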

Embedded clause extraction and restoration for the performance enhancement in Korean-Vietnamese statistical machine translation (한베 통계기계번역의 성능 향상을 위한 내포문 추출 및 복원 기법)

  • Cho, Seung-Woo;Kim, Young-Gil;Kwon, Hong-Seok;Lee, Eui-Hyun;Lee, Won-Ki;Cho, Hyung-Mi;Lee, Jong-Hyeok
    • Annual Conference on Human and Language Technology
    • /
    • 2016.10a
    • /
    • pp.280-284
    • /
    • 2016
  • This paper proposes a method for improving the translation quality of sentences that contain embedded clauses enclosed in symbols. Embedded clauses are extracted from the input sentence, yielding several separate sentences, and each sentence is translated independently. The translated sentences are then combined into the final translation using the stored restoration information. This approach shortens the input sentences, which simplifies their structure and thereby improves translation quality. We applied the proposed method to a Korean-Vietnamese statistical machine translation system and confirmed that it improved the BLEU score by about 1.5 points.
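
The extract-translate-restore cycle can be sketched with regular expressions; the bracket inventory and placeholder format below are illustrative assumptions.

```python
import re

# Illustrative inventory of enclosing symbols: parentheses, ASCII quotes,
# and Korean corner brackets.
EMBEDDED = re.compile(r'\([^()]*\)|"[^"]*"|「[^」]*」')

def extract_embedded(sentence):
    """Replace each symbol-enclosed embedded clause with an indexed
    placeholder so the main clause and the embedded clauses can be
    translated as separate, shorter sentences."""
    clauses = []
    def repl(match):
        clauses.append(match.group(0))
        return f"__EMB{len(clauses) - 1}__"
    return EMBEDDED.sub(repl, sentence), clauses

def restore(translated_main, translated_clauses):
    """Merge the independently translated clauses back into the main
    translation using the stored restoration information."""
    for i, clause in enumerate(translated_clauses):
        translated_main = translated_main.replace(f"__EMB{i}__", clause)
    return translated_main

main, clauses = extract_embedded('그는 "시간이 없다"고 말했다.')
# translate `main` and `clauses` separately, then restore(main_vi, clauses_vi)
```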

Sentence Extraction Using Adapting Method in Multi-Document Summarization (다중문서 요약에서 적응 기법을 이용한 문장 추출)

  • Lim, Jung-Min;Kang, In-Su;Bae, Jae-Hak J.;Lee, Jong-Hyeok
    • Annual Conference on Human and Language Technology
    • /
    • 2004.10d
    • /
    • pp.12-19
    • /
    • 2004
  • Conventional multi-document summarization produces a summary from the entire document set at once. This paper instead proposes an adaptive method: a document carrying the core content is chosen from the target set as the base document and a provisional summary is extracted from it; the remaining documents are then processed sequentially, important sentences are extracted from each, and each newly extracted sentence is compared against the summary built so far to decide whether it should be added. A system implementing the proposed method was evaluated on the 29 multi-document sets used in NTCIR TSC 3. The adaptive system outperformed the Lead method, the TSC 3 baseline, but did not show a clear advantage over the systems that participated in TSC 3.
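
The adaptive selection loop can be sketched with TF-IDF cosine similarity as the redundancy test; the similarity threshold is illustrative, and the paper's importance scoring is omitted for brevity.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def adaptive_summary(base_sentences, other_docs, threshold=0.5):
    """Start from a provisional summary taken from the base document, then
    scan the remaining documents in order, adding a sentence only when it is
    not too similar to the summary built so far. Threshold is illustrative."""
    summary = list(base_sentences)
    vectorizer = TfidfVectorizer().fit(
        summary + [s for doc in other_docs for s in doc])
    for doc in other_docs:
        for sentence in doc:
            sims = cosine_similarity(vectorizer.transform([sentence]),
                                     vectorizer.transform(summary))
            if sims.max() < threshold:   # novel enough to keep
                summary.append(sentence)
    return summary
```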

Multi-Topic Meeting Summarization using Lexical Co-occurrence Frequency and Distribution (어휘의 동시 발생 빈도와 분포를 이용한 다중 주제 회의록 요약)

  • Lee, Byung-Soo;Lee, Jee-Hyong
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2015.07a
    • /
    • pp.13-16
    • /
    • 2015
  • This paper proposes a meeting-minutes summarization method that uses lexical co-occurrence frequency and distribution. Unlike ordinary documents, meeting minutes cover several sub-topics and contain malformed sentences and irrelevant chatter, so these characteristics must be considered during summarization. Conventional summarization methods assume a single topic and select the most important sentences from the whole document, which makes them unsuitable for multi-topic minutes. The proposed method first segments the minutes using lexical co-occurrence frequency. It then selects important sentences by building a set of key words for each topic-segmented block and extracting the important sentences within that block. Finally, it produces the summary by considering the positions and dependency relations of the extracted sentences. Experiments on the AMI meeting corpus show that the proposed method outperforms baseline summarization methods both in evaluation across summary-length ratios and in per-subtopic evaluation of the summaries.
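
A minimal version of the segment-then-extract pipeline might use lexical overlap between adjacent sentence windows as the cohesion signal; the window size, boundary threshold, and per-block scoring below are illustrative rather than the paper's actual measures.

```python
from collections import Counter

def cohesion(window_a, window_b):
    """Lexical cohesion between two sentence windows: fraction of shared
    vocabulary, a simple proxy for co-occurrence frequency and distribution."""
    vocab_a = set(w for s in window_a for w in s.lower().split())
    vocab_b = set(w for s in window_b for w in s.lower().split())
    return len(vocab_a & vocab_b) / max(1, len(vocab_a | vocab_b))

def segment(sentences, window=3, boundary=0.1):
    """Place a topic boundary wherever cohesion between the windows before
    and after a position drops below a threshold (illustrative values)."""
    blocks, start = [], 0
    for i in range(window, len(sentences) - window):
        if cohesion(sentences[i - window:i], sentences[i:i + window]) < boundary:
            blocks.append(sentences[start:i])
            start = i
    blocks.append(sentences[start:])
    return blocks

def summarize_block(block, k=2):
    """Score each sentence in a block by the frequency of its words within
    the block and keep the top k, preserving original order."""
    freq = Counter(w for s in block for w in s.lower().split())
    ranked = sorted(block, key=lambda s: -sum(freq[w] for w in s.lower().split()))
    keep = set(ranked[:k])
    return [s for s in block if s in keep]
```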
