• Title/Summary/Keyword: Summary Generation

Hadoop Based Wavelet Histogram for Big Data in Cloud

  • Kim, Jeong-Joon
    • Journal of Information Processing Systems, v.13 no.4, pp.668-676, 2017
  • Recently, the importance of big data has been emphasized with the spread of smartphones and web/SNS services. As a result, MapReduce, which can process big data efficiently, has attracted worldwide attention for its excellent scalability and stability. Because big data is characterized by large volume, fast creation speed, and diverse properties, it is often more efficient to process summary information than the data itself. The wavelet histogram, a representative data summarization technique, can generate optimal summary information without losing the information of the original data. Accordingly, systems that apply MapReduce-based wavelet histogram generation have been actively studied. Existing approaches, however, are slow because the histogram is built through multiple MapReduce jobs, and the error of the data restored from the histogram can become large. The MapReduce-based wavelet histogram generation system developed in this paper builds the histogram in a single MapReduce job, so the generation speed can be greatly increased. In addition, because the histogram is generated subject to an error bound specified by the user, the error of the data restored from it can be controlled. Finally, we verified the efficiency of the developed system through performance evaluation. (A minimal sketch of the construction idea appears below.)
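
The abstract describes the mechanism only at a high level, so here is a minimal, single-process Python sketch of the idea: bucket counts are transformed into Haar wavelet coefficients, and small coefficients are dropped as long as the restored counts stay within a user-specified error bound. This is not the authors' MapReduce implementation; the function names, the greedy coefficient-dropping strategy, and the NumPy-based data flow are assumptions for illustration. In the paper's setting, the bucketing step would correspond to the map phase and the coefficient computation to the reduce phase of a single job.

```python
import numpy as np

def haar_decompose(freq):
    """Full Haar wavelet decomposition of a frequency vector (length must be a power of 2)."""
    coeffs, cur = [], np.asarray(freq, dtype=float)
    while len(cur) > 1:
        avg = (cur[0::2] + cur[1::2]) / 2.0
        det = (cur[0::2] - cur[1::2]) / 2.0
        coeffs.append(det)            # detail coefficients, finest level first
        cur = avg
    coeffs.append(cur)                # overall average
    return coeffs[::-1]               # coarsest level first

def haar_reconstruct(coeffs):
    """Invert haar_decompose."""
    cur = coeffs[0]
    for det in coeffs[1:]:
        nxt = np.empty(2 * len(det))
        nxt[0::2] = cur + det
        nxt[1::2] = cur - det
        cur = nxt
    return cur

def split_like(flat, template):
    """Split a flat coefficient array back into the per-level structure."""
    out, i = [], 0
    for part in template:
        out.append(flat[i:i + len(part)])
        i += len(part)
    return out

def wavelet_histogram(values, n_buckets=8, error_bound=50.0):
    """Bucket the data, take the Haar transform of the bucket counts, and greedily
    zero out small coefficients while the restored counts stay within error_bound."""
    freq, _ = np.histogram(values, bins=n_buckets)   # the "map" phase would emit these counts
    coeffs = haar_decompose(freq)
    kept = np.concatenate(coeffs)
    for idx in np.argsort(np.abs(kept)):             # try dropping the smallest coefficients first
        trial = kept.copy()
        trial[idx] = 0.0
        restored = haar_reconstruct(split_like(trial, coeffs))
        if np.max(np.abs(restored - freq)) <= error_bound:
            kept = trial                              # still within the user-specified error bound
    return split_like(kept, coeffs)

data = np.random.default_rng(0).integers(0, 100, size=10_000)
summary = wavelet_histogram(data, n_buckets=8, error_bound=100.0)
print([np.count_nonzero(level) for level in summary])   # surviving coefficients per level
```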

Multi-layered attentional peephole convolutional LSTM for abstractive text summarization

  • Rahman, Md. Motiur;Siddiqui, Fazlul Hasan
    • ETRI Journal, v.43 no.2, pp.288-298, 2021
  • Abstractive text summarization paraphrases the facts of a text to produce a summary while keeping the meaning intact; producing such summaries manually is laborious and time-consuming. We present a summary generation model based on a multi-layered attentional peephole convolutional long short-term memory (MAPCoL) network that extracts abstractive summaries of long texts automatically. We add attention to a peephole convolutional LSTM to improve overall summary quality by weighting the important parts of the source text during training. We evaluated the semantic coherence of MAPCoL on the popular CNN/Daily Mail dataset and found that it outperformed other traditional LSTM-based models. We also observed performance improvements of MAPCoL under different internal settings when compared with state-of-the-art abstractive text summarization models. (A minimal sketch of a peephole convolutional LSTM cell appears below.)
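
As a rough illustration of the building block named in the title, the sketch below implements a generic peephole convolutional LSTM cell. It is not the authors' multi-layered MAPCoL architecture, and the use of PyTorch, the class name, the kernel size, and the per-channel peephole weights are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PeepholeConvLSTMCell(nn.Module):
    """A convolutional LSTM cell with peephole connections (the cell state feeds the gates)."""

    def __init__(self, in_ch, hid_ch, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # one convolution produces the pre-activations of all four gates
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel_size, padding=pad)
        # per-channel peephole weights from the cell state to input/forget/output gates
        self.w_ci = nn.Parameter(torch.zeros(1, hid_ch, 1, 1))
        self.w_cf = nn.Parameter(torch.zeros(1, hid_ch, 1, 1))
        self.w_co = nn.Parameter(torch.zeros(1, hid_ch, 1, 1))

    def forward(self, x, h, c):
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, g, o = gates.chunk(4, dim=1)
        i = torch.sigmoid(i + self.w_ci * c)          # input gate peeks at the old cell state
        f = torch.sigmoid(f + self.w_cf * c)          # forget gate peeks at the old cell state
        c_new = f * c + i * torch.tanh(g)
        o = torch.sigmoid(o + self.w_co * c_new)      # output gate peeks at the new cell state
        h_new = o * torch.tanh(c_new)
        return h_new, c_new

# one step over a toy (batch, channels, height, width) input
cell = PeepholeConvLSTMCell(in_ch=16, hid_ch=32)
x = torch.randn(4, 16, 8, 8)
h = torch.zeros(4, 32, 8, 8)
c = torch.zeros(4, 32, 8, 8)
h, c = cell(x, h, c)
print(h.shape)        # torch.Size([4, 32, 8, 8])
```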

An Efficient ROLAP Cube Generation Scheme (효율적인 ROLAP 큐브 생성 방법)

  • Kim, Myung;Song, Ji-Sook
    • Journal of KIISE: Databases, v.29 no.2, pp.99-109, 2002
  • ROLAP (Relational Online Analytical Processing) is a process and methodology for multidimensional data analysis that is essential for extracting desired data and deriving value-added information from an enterprise data warehouse. To speed up query processing, most ROLAP systems pre-compute summary tables; this process is called cube generation and mostly involves intensive table sorting. Reference [1] showed that it is much faster to generate ROLAP summary tables indirectly using a MOLAP (multidimensional OLAP) cube generation algorithm. In this paper, we present such an indirect ROLAP cube generation algorithm that is fast and scalable. High memory utilization is achieved by slicing the input fact table along one or more dimensions before generating summary tables, and high speed is achieved by producing each summary table from its smallest parent. We show the efficiency of our algorithm through experiments. (A minimal cube-generation sketch appears below.)
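
The "smallest parent" idea can be sketched in plain Python: every group-by (cuboid) is computed from the already-computed parent cuboid that has one extra dimension and the fewest rows. This in-memory sketch ignores the paper's fact-table slicing and disk handling; the function and variable names are assumptions.

```python
from itertools import combinations

def aggregate(rows, dims, measure):
    """Group-by over the fact table itself (used only for the base cuboid)."""
    out = {}
    for r in rows:
        key = tuple(r[d] for d in dims)
        out[key] = out.get(key, 0) + r[measure]
    return out

def rollup(parent_table, parent_dims, child_dims):
    """Aggregate a parent cuboid down to a child cuboid with fewer dimensions."""
    idx = [parent_dims.index(d) for d in child_dims]
    out = {}
    for key, value in parent_table.items():
        child_key = tuple(key[i] for i in idx)
        out[child_key] = out.get(child_key, 0) + value
    return out

def cube(rows, dims, measure):
    """Compute every cuboid, always rolling up from the smallest available parent."""
    base = tuple(dims)
    cuboids = {base: aggregate(rows, base, measure)}
    for level in range(len(base) - 1, -1, -1):        # from |dims|-1 dimensions down to the grand total
        for subset in combinations(base, level):
            parents = [p for p in cuboids
                       if len(p) == level + 1 and set(subset) <= set(p)]
            parent = min(parents, key=lambda p: len(cuboids[p]))   # smallest parent wins
            cuboids[subset] = rollup(cuboids[parent], parent, subset)
    return cuboids

rows = [
    {"region": "EU", "year": 2001, "product": "A", "sales": 10},
    {"region": "EU", "year": 2002, "product": "A", "sales": 7},
    {"region": "US", "year": 2001, "product": "B", "sales": 5},
]
result = cube(rows, ["region", "year", "product"], "sales")
print(result[("region",)])   # {('EU',): 17, ('US',): 5}
print(result[()])            # the grand total: {(): 22}
```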

Improving the effectiveness of document extraction summary based on the amount of sentence information (문장 정보량 기반 문서 추출 요약의 효과성 제고)

  • Kim, Eun Hee;Lim, Myung Jin;Shin, Ju Hyun
    • Smart Media Journal, v.11 no.3, pp.31-38, 2022
  • In research on extractive document summarization, various methods have been proposed for selecting important sentences based on the relationships between sentences. In Korean document summarization using the summed similarity of sentences, that summed similarity was treated as the amount of information a sentence carries, and summary sentences were extracted by selecting important sentences on that basis. The problem, however, is that this does not account for the varying importance each sentence contributes to the whole document. In this study, we therefore propose an extractive summarization method that selects important sentences based on both the quantitative and the semantic information in each sentence. As a result, agreement with the reference extracted sentences was 58.56% and the ROUGE-L score was 34, outperforming the method that uses combined similarity alone. Compared with deep learning-based methods, the extractive method is lighter while achieving similar performance. This confirms that compressing information based on semantic similarity between sentences is an important approach to extractive document summarization. In addition, an abstractive summarization step can be performed effectively on top of the quickly extracted summary. (A minimal similarity-sum scoring sketch appears below.)
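
As a rough stand-in for the paper's sentence-information scoring, the sketch below ranks sentences by the sum of their TF-IDF cosine similarities to all other sentences and keeps the top ones in document order. TF-IDF similarity is an assumption here; the paper combines quantitative and semantic information measures.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def extract_summary(sentences, n_keep=2):
    """Score each sentence by the sum of its cosine similarities to every other
    sentence (a simple stand-in for 'amount of sentence information') and keep
    the top-scoring sentences in their original order."""
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)                # sentence-by-sentence similarity matrix
    scores = sim.sum(axis=1) - 1.0                # drop the self-similarity of 1.0
    top = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)[:n_keep]
    return [sentences[i] for i in sorted(top)]

doc = [
    "The city council approved the new transit budget on Monday.",
    "The budget allocates funds for bus and subway expansion.",
    "Local bakeries reported record croissant sales last week.",
    "Council members said transit expansion is the top priority.",
]
print(extract_summary(doc, n_keep=2))
```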

Summary program for protective relay setting result composition (보호계전기 정정 결과 요약표 작성 프로그램)

  • Lee, J.M.;Min, B.W.;Lee, S.J.;Choi, M.S.;Kang, S.H.;Cho, B.S.;Lee, O.H.;Choi, H.S.
    • Proceedings of the KIEE Conference, 2000.07a, pp.206-208, 2000
  • Manual preparation of relay setting summaries can introduce critical errors that result in serious damage to the power system. This paper reports an automatic generation system for relay setting summary reports. By analysing the manual reports of various relays, a nearly unified format was designed. The developed system uses graphics and a tabular format to enhance user understanding. (A minimal tabular report sketch appears below.)
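
A minimal sketch of the tabular part of such a report generator is shown below; the relay fields and values are hypothetical and not taken from the paper.

```python
def setting_summary(records, columns):
    """Render relay setting records as a fixed-width text table."""
    rows = [list(columns)] + [[str(r.get(c, "")) for c in columns] for r in records]
    widths = [max(len(row[i]) for row in rows) for i in range(len(columns))]
    lines = ["  ".join(cell.ljust(w) for cell, w in zip(row, widths)) for row in rows]
    lines.insert(1, "-" * len(lines[0]))           # rule under the header row
    return "\n".join(lines)

records = [
    {"relay": "OCR-1", "element": "51P", "pickup": "5.0 A", "time dial": "0.30"},
    {"relay": "OCR-2", "element": "51N", "pickup": "1.0 A", "time dial": "0.50"},
]
print(setting_summary(records, ["relay", "element", "pickup", "time dial"]))
```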

Video Summary Technique using Content Curve in MPEG Compressed Domain (MPEG 압축 영역에서 내용 곡선을 이용한 Video 요약 기법)

  • Kim, Tae-Hee;Lee, Woong-Hee;Jeong, Dong-Seok
    • The Journal of Korean Institute of Communications and Information Sciences, v.27 no.10A, pp.1021-1028, 2002
  • This paper proposes a method for extracting, directly in the MPEG compressed domain, a content curve that reflects changes in video content, and describes a video summary generation technique that allows a video to be browsed effectively and quickly from that curve. Existing video summary techniques take a significant amount of time to generate the summary because of the complex calculations required during decoding, and existing techniques that use a content curve require many calculations to process a high-dimensional curve. The proposed technique, by contrast, extracts a two-dimensional content curve directly from the compressed domain and then generates the video summary quickly via a linear approximation of that curve. In addition, the user can set the number of key frames and the amount of computation used to form the video summary. (A minimal content-curve key-frame sketch appears below.)
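
The sketch below illustrates the general content-curve idea: accumulate frame-to-frame feature change into a curve and place key frames at equal steps of accumulated content, a simple piecewise-linear treatment of the curve. The per-frame features here are generic vectors, not the paper's MPEG compressed-domain features, and the function name is an assumption.

```python
import numpy as np

def key_frames_from_content_curve(frame_features, n_key_frames):
    """Accumulate frame-to-frame feature change into a content curve and pick
    key frames at equal steps of accumulated change (a simple piecewise-linear
    treatment of the curve)."""
    feats = np.asarray(frame_features, dtype=float)
    change = np.linalg.norm(np.diff(feats, axis=0), axis=1)   # per-frame content change
    curve = np.concatenate([[0.0], np.cumsum(change)])        # the cumulative content curve
    targets = np.linspace(0.0, curve[-1], n_key_frames + 1)[1:]
    return [int(np.searchsorted(curve, t)) for t in targets]

# toy example: 100 frames of an 8-dimensional per-frame feature vector
rng = np.random.default_rng(1)
features = np.cumsum(rng.normal(size=(100, 8)), axis=0)
print(key_frames_from_content_curve(features, n_key_frames=5))
```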

Empirical Study for Automatic Evaluation of Abstractive Summarization by Error-Types (오류 유형에 따른 생성요약 모델의 본문-요약문 간 요약 성능평가 비교)

  • Seungsoo Lee;Sangwoo Kang
    • Korean Journal of Cognitive Science, v.34 no.3, pp.197-226, 2023
  • Generative (abstractive) text summarization is a natural language processing task that produces a short summary while preserving the content of a long text. ROUGE, a lexical-overlap metric, is widely used to evaluate text summarization models in generative summarization benchmarks. Although models score highly on it, studies report that about 30% of generated summaries are still inconsistent with the source text. This paper proposes a methodology for evaluating the performance of a summarization model without using a reference summary. AggreFact is a human-annotated dataset that classifies the types of errors made by neural text summarization models. Among all test candidates, the two cases of the generated summary and of errors occurring throughout the summary showed the highest correlation. We observed that the proposed evaluation score correlated highly with models fine-tuned from BART and PEGASUS, which are pretrained large-scale Transformer models. (A minimal ROUGE-L sketch appears below.)
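
Since ROUGE is central to the discussion, here is a minimal ROUGE-L (longest-common-subsequence F1) sketch; in a reference-free setting like the one proposed, the second argument could be the source text rather than a gold summary. This is a plain illustration, not the paper's proposed metric.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l(candidate, reference):
    """ROUGE-L F1 between a candidate summary and a reference (or source) text."""
    cand, ref = candidate.lower().split(), reference.lower().split()
    lcs = lcs_length(cand, ref)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(cand), lcs / len(ref)
    return 2 * precision * recall / (precision + recall)

print(rouge_l("the cat sat on the mat", "the cat lay on the red mat"))
```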

A Study on the Heat Generation and Thermal Conductivity of Crustal Rocks (지각 구성 암석의 열생산량과 열전도도에 관한 연구)

  • Han, Uk
    • Economic and Environmental Geology, v.26 no.3, pp.371-382, 1993
  • Compilations of thermal conductivity and radiogenic heat production for 20 typical rock types provide a convenient summary. These compilations allow estimates of the radiogenic heat generation of the continental crust to be made, and thus heat-flow anomalies of tectonic origin to be isolated.

Deep Learning-based Text Summarization Model for Explainable Personalized Movie Recommendation Service (설명 가능한 개인화 영화 추천 서비스를 위한 딥러닝 기반 텍스트 요약 모델)

  • Chen, Biyao;Kang, KyungMo;Kim, JaeKyeong
    • Journal of Information Technology Services, v.21 no.2, pp.109-126, 2022
  • The number and variety of products and services offered by companies have increased dramatically, giving customers more choices to meet their needs. As a solution to this information overload, tailoring services to individuals has become increasingly important, and personalized recommender systems have been widely studied and used in both academia and industry. Existing recommender systems, however, face an important practical problem: they cannot clearly explain why they recommend particular products. In recent years, researchers have found that explanations from recommender systems can be very useful; when users are told why particular items are recommended, conversion rates, satisfaction, and trust in the recommender system generally increase. This study therefore presents a methodology for providing an explanation function in a recommender system using review texts left by users. The basic idea is not to use all of a user's reviews, but to present, as the explanation accompanying a recommended item, a summary built only from reviews left by the similar users (neighbors) involved in recommending that item. To this end, the study produces a recommendation list using user-based collaborative filtering, collects the reviews that neighboring users left for each recommended item, and summarizes them with deep learning-based text summarization methods. Using the IMDb movie database, the reviews of the target user's neighbors are collected and summarized to present descriptions of the recommended movies. Among the several available text summarization methods, this study evaluates whether the review summaries are well produced by training a Sequence-to-sequence + attention model, a representative abstractive method, and the BertSum model, an extractive method. (A minimal collaborative-filtering sketch appears below.)
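
The recommendation side of the pipeline can be sketched with a tiny user-based collaborative filter: neighbors are found by cosine similarity over a rating matrix, unseen items are scored by similarity-weighted neighbor ratings, and the neighbors' reviews for each recommended item are returned as raw material for the explanation summary. The data, function names, and the omission of the deep-learning summarizer are all simplifications and assumptions.

```python
import numpy as np

def recommend(ratings, reviews, user, k_neighbors=2, n_items=2):
    """User-based collaborative filtering: find nearest neighbors by cosine
    similarity of rating vectors, score unseen items by similarity-weighted
    neighbor ratings, and collect the neighbors' reviews as explanation material."""
    users = list(ratings)
    items = sorted({i for r in ratings.values() for i in r})
    mat = np.array([[ratings[name].get(i, 0.0) for i in items] for name in users])
    u_idx = users.index(user)
    norms = np.linalg.norm(mat, axis=1)
    sims = mat @ mat[u_idx] / (norms * norms[u_idx] + 1e-9)
    sims[u_idx] = -1.0                                    # never pick the user as a neighbor
    neighbors = np.argsort(sims)[::-1][:k_neighbors]
    scores = {}
    for j, item in enumerate(items):
        if item in ratings[user]:
            continue                                      # only score unseen items
        num = sum(sims[n] * mat[n, j] for n in neighbors if mat[n, j] > 0)
        den = sum(sims[n] for n in neighbors if mat[n, j] > 0)
        if den > 0:
            scores[item] = num / den
    ranked = sorted(scores, key=scores.get, reverse=True)[:n_items]
    explanations = {item: [reviews[(users[n], item)]
                           for n in neighbors if (users[n], item) in reviews]
                    for item in ranked}
    return ranked, explanations       # the collected reviews would then be summarized

ratings = {
    "alice": {"Heat": 5, "Alien": 4},
    "bob":   {"Heat": 5, "Alien": 5, "Up": 4},
    "carol": {"Heat": 1, "Up": 5, "Brave": 4},
}
reviews = {
    ("bob", "Up"): "A touching adventure with gorgeous animation.",
    ("carol", "Up"): "Heartfelt story, great for all ages.",
}
print(recommend(ratings, reviews, "alice"))
```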

An Efficient Machine Learning-based Text Summarization in the Malayalam Language

  • P Haroon, Rosna;Gafur M, Abdul;Nisha U, Barakkath
    • KSII Transactions on Internet and Information Systems (TIIS), v.16 no.6, pp.1778-1799, 2022
  • Automatic text summarization compresses a large amount of content into a shorter text that retains the significant information. Malayalam, used most commonly in Kerala and in Lakshadweep, is one of the more complex languages of India, and natural language processing research for Malayalam is relatively limited because of the complexity of the language and the scarcity of available resources. In this paper, an approach to summarizing Malayalam documents is proposed by training a model based on the Support Vector Machine (SVM) classification algorithm. Different features of the text are used to train the model so that the system can output the most important information from the input text. The classifier assigns sentences to most important, important, average, and least significant classes, and on this basis the machine creates a summary of the input document. The user can select a compression ratio so that the system outputs that fraction of the text as the summary. Model performance is measured on different genres of Malayalam documents as well as on documents from the same domain, and is evaluated with the content evaluation measures precision, recall, F-score, and relative utility. The obtained precision and recall values show that the model is reliable and more relevant than the other summarizers compared. (A minimal SVM sentence-classification sketch appears below.)
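
The classification step can be sketched with scikit-learn's SVC: each sentence is mapped to a small feature vector, an SVM is trained on labelled sentences, and the top-scoring fraction of sentences (per the user's compression ratio) is kept. English toy sentences stand in for Malayalam text, the features, keyword list, and binary labels are illustrative assumptions, and the paper's four importance classes are collapsed to two here.

```python
import numpy as np
from sklearn.svm import SVC

KEYWORDS = {"budget", "council", "transit"}            # hypothetical keyword list

def sentence_features(sentences):
    """Toy per-sentence features: length, relative position, keyword count."""
    feats = []
    for pos, s in enumerate(sentences):
        tokens = s.lower().split()
        feats.append([len(tokens),
                      pos / max(len(sentences) - 1, 1),
                      sum(t in KEYWORDS for t in tokens)])
    return np.array(feats, dtype=float)

# hypothetical labelled training sentences: 1 = important, 0 = not important
train_sentences = [
    "The council approved the transit budget.",
    "Officials discussed the budget in detail.",
    "Lunch was served at noon.",
    "The weather was mild that day.",
]
clf = SVC(kernel="linear").fit(sentence_features(train_sentences), [1, 1, 0, 0])

def summarize(sentences, compression_ratio=0.5):
    """Rank sentences by the SVM decision score and keep the fraction selected
    by the user's compression ratio, in document order."""
    scores = clf.decision_function(sentence_features(sentences))
    n_keep = max(1, round(len(sentences) * compression_ratio))
    keep = sorted(np.argsort(scores)[::-1][:n_keep])
    return [sentences[i] for i in keep]

doc = [
    "The city council met to debate the transit budget.",
    "Snacks were available in the lobby.",
    "The budget passed with a large majority.",
    "Members then adjourned for the day.",
]
print(summarize(doc, compression_ratio=0.5))
```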