Title/Summary/Keyword: Length of Document

Implementation of Text Summarization Automation Using Document Length Normalization (문서 길이 정규화를 이용한 문서 요약 자동화 시스템 구현)

  • 이재훈;김영천;이성주
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2001.12a / pp.51-55 / 2001
  • With the rapid growth of the World Wide Web and electronic information services, information is becoming available online at an incredible rate. One result is the oft-decried information overload. No one has time to read everything, yet we often have to make critical decisions based on what we are able to assimilate. The technology of automatic text summarization is becoming indispensable for dealing with this problem. Text summarization is the process of distilling the most important information from a source to produce an abridged version for a particular user or task. Information retrieval (IR) is the task of searching a set of documents for query-relevant documents; text summarization, by analogy, can be regarded as the task of searching a single document, viewed as a set of sentences, for topic-relevant sentences. In this paper, we show that document length normalization, a technique drawn from information retrieval, yields document information that is more reliable and better suited to the query. Experimental results on newspaper articles show that the document length normalization method is superior to methods that use the query alone.
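
The abstract does not spell out the normalization formula. A minimal sketch in Python, assuming the pivoted length normalization commonly used in IR scoring; the names `pivoted_score` and `slope` are illustrative, not from the paper:

```python
import math
from collections import Counter

def pivoted_score(query_terms, doc_terms, avg_doc_len, slope=0.25):
    """Score a document (or a sentence treated as a mini-document)
    against a query, normalizing the raw term-frequency overlap by
    pivoted document length. Illustrative sketch, not the paper's code."""
    tf = Counter(doc_terms)
    raw = sum(1 + math.log(tf[t]) for t in query_terms if tf[t] > 0)
    # Pivoted normalization: documents near the average length are left
    # almost untouched, while long documents are penalized.
    norm = (1.0 - slope) + slope * (len(doc_terms) / avg_doc_len)
    return raw / norm
```

Sentences ranked by such a score, rather than by raw query overlap, are less biased toward long sentences, which matches the abstract's claim.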

An Efficient Block Segmentation and Classification of a Document Image Using Edge Information (문서영상의 에지 정보를 이용한 효과적인 블록분할 및 유형분류)

  • 박창준;전준형;최형문
    • Journal of the Korean Institute of Telematics and Electronics B / v.33B no.10 / pp.120-129 / 1996
  • This paper presents an efficient block segmentation and classification method that uses the edge information of the document image. We extract four prominent features from the edge gradient and orientation, all of which, and thereby the block classifications, are insensitive to background noise and to brightness variation in the image. Using these four features, we can efficiently classify a document image into seven categories of blocks: small-size letters, large-size letters, tables, equations, flow charts, graphs, and photographs. The first five are text blocks amenable to character recognition; the last two are non-character blocks. By introducing the column interval and the text line intervals of the document into the determination of the run length of the CRLA (constrained run length algorithm), we obtain an efficient block segmentation with reduced memory size. Simulation results show that the proposed algorithm reliably segments and classifies document blocks into the seven categories above, with classification performance high enough for all categories except graphs with large variation.
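
A minimal sketch of the CRLA smearing step the method builds on; the `constraint` argument stands in for the run length the paper derives from column and text-line intervals:

```python
import numpy as np

def crla_horizontal(binary, constraint):
    """Constrained run-length smearing: fill background (0) runs no
    longer than `constraint` with ink (1), merging nearby characters
    into candidate blocks. `binary` is a 2-D 0/1 array."""
    out = np.asarray(binary).copy()
    for row in out:
        run_start = None
        for j, v in enumerate(row):
            if v == 0 and run_start is None:
                run_start = j                  # a background run begins
            elif v == 1 and run_start is not None:
                if j - run_start <= constraint:
                    row[run_start:j] = 1       # fill the short gap
                run_start = None
    return out
```

Running the same pass vertically and intersecting the two results gives the classic block map on which the seven-way classification can then operate.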

Document Summarization via Convex-Concave Programming

  • Kim, Minyoung
    • International Journal of Fuzzy Logic and Intelligent Systems / v.16 no.4 / pp.293-298 / 2016
  • Document summarization is an important task in various areas; the goal is to select a few of the most descriptive sentences from a given document as a succinct summary. Even without training data of human-labeled summaries, several interesting existing approaches in the literature yield reasonable performance. In this paper, within the same unsupervised learning setup, we propose a more principled learning framework for the document summarization task. Specifically, we formulate an optimization problem that expresses the requirements of both faithful preservation of the document contents and the summary length constraint. We circumvent the difficult integer programming problem originating from binary sentence selection via continuous relaxation and a low-entropy penalization. We also suggest an efficient convex-concave optimization solver algorithm that is guaranteed to improve the original objective at every iteration. For several document datasets, we demonstrate that the proposed learning algorithm significantly outperforms the existing approaches.
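
A minimal sketch of the convex-concave idea as described: relax the binary selection vector to the box [0,1]^n, add a penalty that drives entries toward 0/1, and repeatedly maximize a linearized surrogate. The coverage vector `c` and budget `k` are stand-ins, not the paper's exact formulation:

```python
import numpy as np

def ccp_select(c, k, lam=0.5, iters=20, eps=1e-6):
    """Choose up to k sentences maximizing c.x - lam*H(x) over
    0 <= x <= 1, sum(x) <= k, where H is the coordinate-wise binary
    entropy (the low-entropy penalty). Illustrative sketch."""
    c = np.asarray(c, dtype=float)
    n = len(c)
    x = np.full(n, k / n)                  # feasible interior start
    for _ in range(iters):
        xc = np.clip(x, eps, 1 - eps)
        grad = np.log(xc / (1 - xc))       # gradient of the convex -H term
        w = c + lam * grad                 # linearized surrogate objective
        x_new = np.zeros(n)
        top = np.argsort(-w)[:k]
        x_new[top[w[top] > 0]] = 1.0       # LP over box + budget -> vertex
        if np.allclose(x_new, x):
            break
        x = x_new
    return np.flatnonzero(x > 0.5)         # indices of selected sentences
```

Because the convex term is replaced by its tangent, the surrogate lower-bounds the true objective and touches it at the current iterate, so each step can only improve the objective, consistent with the monotone-improvement guarantee the abstract mentions.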

Automatic Single Document Text Summarization Using Key Concepts in Documents

  • Sarkar, Kamal
    • Journal of Information Processing Systems / v.9 no.4 / pp.602-620 / 2013
  • Many previous studies on extractive text summarization consider a subset of words in a document as keywords and use a sentence ranking function that ranks sentences by their similarity to the list of extracted keywords. The use of key concepts in automatic text summarization, however, has received less attention in the summarization literature. The proposed work uses key concepts identified in a document to create its summary. We view single-word or multi-word keyphrases of a document as the important concepts that the document elaborates on. Our work is based on the hypothesis that an extract is an elaboration of the important concepts to some permissible extent, controlled by the given summary length restriction. In other words, our method chooses a subset of sentences from a document that maximizes the important concepts in the final summary. To allow diverse information in the summary, for each important concept we select the one sentence that best elaborates that concept; accordingly, the most important concept contributes to the summary first, then the second-best concept, and so on. To prove the effectiveness of our proposed summarization method, we compared it with some state-of-the-art summarization systems, and the results show that the proposed method outperforms the systems against which it is compared.
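
A minimal sketch of the selection scheme described above, assuming the keyphrases arrive already ranked by importance; `overlap` is a toy elaboration score, not the paper's:

```python
def summarize(sentences, ranked_concepts, max_words):
    """For each concept, in importance order, add the single sentence
    that best elaborates it, until the word budget is exhausted."""
    chosen, used, length = [], set(), 0
    for concept in ranked_concepts:
        best, best_score = None, 0.0
        for i, sent in enumerate(sentences):
            if i not in used:
                score = overlap(concept, sent)
                if score > best_score:
                    best, best_score = i, score
        if best is None:
            continue
        words = len(sentences[best].split())
        if length + words > max_words:
            break
        chosen.append(best)
        used.add(best)
        length += words
    return [sentences[i] for i in sorted(chosen)]

def overlap(concept, sentence):
    """Toy score: fraction of the concept's words found in the sentence."""
    cw = concept.lower().split()
    sw = set(sentence.lower().split())
    return sum(w in sw for w in cw) / len(cw)
```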

Effectiveness of Fuzzy Graph Based Document Model

  • Aswathy M R;P.C. Reghu Raj;Ajeesh Ramanujan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.8 / pp.2178-2198 / 2024
  • Graph-based document models are well suited to revealing inter-dependencies among unstructured text data, and natural language processing (NLP) systems that use such models as an intermediate representation have shown good performance. This paper proposes a novel fuzzy graph-based document model and demonstrates its effectiveness by applying fuzzy logic tools to text summarization. The proposed system accepts a text document as input and identifies several sentence-level features, namely sentence position, sentence length, numerical data, thematic words, proper nouns, the title feature, the upper-case feature, and sentence similarity. The fuzzy membership value of each feature is computed from the sentences. We also propose a novel algorithm to construct the fuzzy graph as an intermediate representation of the input document. The Recall-Oriented Understudy for Gisting Evaluation (ROUGE) metric is used to evaluate the model, and evaluation against further quality metrics was performed to verify its effectiveness. An ANOVA test confirms the hypothesis that the proposed model improves summarizer performance by 10% compared with state-of-the-art summarizers employing alternative intermediate representations for the input text.
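
The abstract lists the sentence-level features but not their membership functions. A minimal sketch of two of them, plus one possible edge weighting, assuming simple triangular/ratio shapes and a min t-norm for fuzzy AND; none of this is the paper's exact formulation:

```python
def position_membership(index, n_sentences):
    """Fuzzy 'important by position': sentences near the start or the
    end of the document get membership near 1, middle sentences near 0.
    (Assumed shape, not the paper's exact function.)"""
    rel = index / max(n_sentences - 1, 1)
    return max(1.0 - rel, rel)

def length_membership(n_words, ideal=20):
    """Triangular membership peaking at an assumed ideal sentence length."""
    return max(0.0, 1.0 - abs(n_words - ideal) / ideal)

def edge_weight(m_i, m_j, similarity):
    """One way to weight a fuzzy-graph edge: combine the two sentences'
    memberships with a min t-norm and scale by their similarity."""
    return min(m_i, m_j) * similarity
```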

Category Factor Based Feature Selection for Document Classification

  • Kang Yun-Hee
    • International Journal of Contents / v.1 no.2 / pp.26-30 / 2005
  • With the fast growth of information on the Internet, it is becoming increasingly difficult to find and organize useful information, and reducing this overload requires automatic text classification capable of handling enormous document collections. A Support Vector Machine (SVM) is a model computed as a weighted sum of kernel function outputs. This paper describes a classifier for web documents in the field of Information Technology that uses an SVM to learn a model constructed from the training sets and their representative terms. The basic idea is to exploit, with simple statistical methods, the distribution of representative terms across the coherent thematic texts of each category. The vector-space model is applied to represent documents in the categories, using a feature selection scheme based on TF-IDF, and we incorporate into the feature selection a category factor that represents a term's effect within a category. Experiments show the categorization results and the correlation with vector length.
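
The abstract does not define the category factor precisely. A minimal sketch, assuming it is the share of a term's total occurrences that falls inside the category, multiplied into a TF-IDF score; all names here are illustrative:

```python
import math
from collections import Counter

def select_features(docs_by_category, top_k=100):
    """Rank terms per category by TF-IDF weighted by an assumed
    category factor (the in-category share of the term's occurrences).
    `docs_by_category` maps a category name to a list of token lists."""
    all_docs = [d for docs in docs_by_category.values() for d in docs]
    n_docs = len(all_docs)
    df = Counter(t for d in all_docs for t in set(d))       # document freq
    total_tf = Counter(t for d in all_docs for t in d)      # corpus term freq
    features = {}
    for cat, docs in docs_by_category.items():
        cat_tf = Counter(t for d in docs for t in d)
        scores = {}
        for t, tf in cat_tf.items():
            idf = math.log(n_docs / df[t])
            category_factor = tf / total_tf[t]              # assumed definition
            scores[t] = tf * idf * category_factor
        features[cat] = [t for t, _ in sorted(scores.items(),
                                              key=lambda kv: -kv[1])[:top_k]]
    return features
```

The selected terms per category would then form the vector-space representation on which the SVM is trained.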

A Document Summarization System Using Dynamic Connection Graph (동적 연결 그래프를 이용한 자동 문서 요약 시스템)

  • Song, Won-Moon;Kim, Young-Jin;Kim, Eun-Ju;Kim, Myung-Won
    • Journal of KIISE:Software and Applications / v.36 no.1 / pp.62-69 / 2009
  • The purpose of document summarization is to provide easy and quick understanding of documents by extracting summarized information from documents produced by various application programs. In this paper, we propose a document summarization method that creates and analyzes a connection graph representing the similarity between the keyword lists of the sentences in a document, taking into account the mean length (number of keywords) of the document's sentences. We implemented a system that automatically generates a summary from a document using the proposed method. To evaluate its performance, we used a set of 20 documents paired with their correct summaries and measured precision, recall, and F-measure. The experimental results show that the proposed method is more efficient than existing methods.
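
A minimal sketch of the connection-graph construction, assuming sentences are given as keyword lists and the edge threshold is tied to the mean keyword-list length; the exact threshold rule is an assumption, not the paper's:

```python
from itertools import combinations

def connection_graph(keyword_lists):
    """Link sentences whose keyword overlap exceeds a threshold derived
    from the mean keyword-list length. Illustrative sketch."""
    mean_len = sum(len(k) for k in keyword_lists) / len(keyword_lists)
    threshold = mean_len / 4              # illustrative choice
    graph = {i: set() for i in range(len(keyword_lists))}
    for i, j in combinations(range(len(keyword_lists)), 2):
        shared = len(set(keyword_lists[i]) & set(keyword_lists[j]))
        if shared >= threshold:
            graph[i].add(j)
            graph[j].add(i)
    return graph

def rank_by_degree(graph):
    """Sentences with the most connections are summary candidates."""
    return sorted(graph, key=lambda i: len(graph[i]), reverse=True)
```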

Local Similarity based Document Layout Analysis using Improved ARLSA

  • Kim, Gwangbok;Kim, SooHyung;Na, InSeop
    • International Journal of Contents / v.11 no.2 / pp.15-19 / 2015
  • In this paper, we propose an efficient document layout analysis algorithm that includes table detection. Typical document layout analysis methods use the height of, and the gaps between, words or columns. To cope with the varied styles and sizes of documents, we propose an algorithm that uses the mean value of the distance transform, which represents stroke thickness, and compares it across components in a local area. We combine this with a table detection algorithm that uses the same features as the text classifier. Table candidates, separators, and large components are isolated from the image using connected component analysis (CCA) and the distance transform. The key idea of the text classification is that text consists of parallel components with similar thickness and height. To estimate local similarity, we detect a text region using an adaptively sized search window. An improved adaptive run-length smoothing algorithm (ARLSA) is proposed to create proper boundaries between text and non-text zones. Results of experiments on the ICDAR2009 page segmentation competition test set and on our own dataset demonstrate the superiority of our method in F-measure comparisons with other algorithms.
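
A minimal sketch of the thickness feature: the mean of the distance transform over each connected component approximates its stroke thickness, which can then be compared across neighboring components:

```python
import numpy as np
from scipy import ndimage

def component_thickness(binary):
    """Per-component mean of the Euclidean distance transform, a proxy
    for stroke thickness. `binary` holds 1 for ink, 0 for background.
    Illustrative sketch, not the paper's implementation."""
    dist = ndimage.distance_transform_edt(binary)
    labels, n = ndimage.label(binary)
    return {comp: float(dist[labels == comp].mean())
            for comp in range(1, n + 1)}
```

Components whose thickness and height match their local neighbors can then be grouped as text, per the local-similarity idea, while outliers become table-separator or big-component candidates.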

Application of slab and plate width measurement using laser distance meter (Laser를 이용한 후물 판재류 및 Slap폭 측정 적용)

  • 최철호
    • Proceedings of the Institute of Control, Robotics and Systems Conference (제어로봇시스템학회 학술대회논문집) / 1996.10b / pp.1076-1078 / 1996
  • In response to customers' growing demands for quality assurance of plate and for automated dimension measurement of slabs, a project is under way in the plate mill to install a measuring system that determines the width, length, and camber of slabs and plates using laser distance meters. In this document, I describe not the technical details but the design concept of the installation.
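
The abstract gives no measurement geometry. As a purely illustrative sketch of one common arrangement, with two opposed laser distance meters mounted a known baseline apart, the width follows by subtraction (assumed setup, not taken from the paper):

```python
def slab_width(baseline_mm, d_left_mm, d_right_mm):
    """Width of a slab lying between two opposed laser distance meters
    mounted `baseline_mm` apart, each measuring the gap to its edge."""
    return baseline_mm - d_left_mm - d_right_mm
```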

Tax Avoidance and the Readability of Financial Statements: Empirical Evidence from Indonesia

  • PRATAMA, Bima Yoga;NARSA, Niluh Putu Dian Rosalina Handayani;PRANANJAYA, Kadek Pranetha
    • The Journal of Asian Finance, Economics and Business / v.9 no.2 / pp.103-112 / 2022
  • This study aims to obtain empirical evidence on the link between tax avoidance (TA) and the readability of financial statements. It is quantitative research using Ordinary Least Squares regression analysis processed in STATA 14.0, with data from 278 companies listed on the Indonesia Stock Exchange over the period 2017-2019. To detect TA in a company, the study uses ETR and CashETR; to measure financial statement readability, it uses the Gunning Fog Index and the length of the document. The findings suggest that tax avoidance and clear financial statements are mutually exclusive, in the sense that companies that practice tax avoidance tend to conceal the information conveyed by their financial statements. In other words, the more a company engages in tax avoidance, the lower the readability of its financial statements. The study thus provides in-depth evidence that tax avoidance is indirectly related to the disclosure of information by the company: users of financial statements will realize that companies make disclosures in their own best interests, to avoid having their tax avoidance strategy detected.
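
For reference, the standard Gunning Fog Index used above as one of the two readability measures (a "complex word" is conventionally one of three or more syllables):

```latex
\mathrm{Fog} = 0.4\left(\frac{\#\text{words}}{\#\text{sentences}}
             + 100\cdot\frac{\#\text{complex words}}{\#\text{words}}\right)
```

A higher index indicates less readable text, so the paper's finding corresponds to tax-avoiding firms producing filings with higher Fog scores.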