• Title/Summary/Keyword: document image processing


DP-LinkNet: A convolutional network for historical document image binarization

  • Xiong, Wei;Jia, Xiuhong;Yang, Dichun;Ai, Meihui;Li, Lirong;Wang, Song
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.5 / pp.1778-1797 / 2021
  • Document image binarization is an important pre-processing step in document analysis and archiving. The state-of-the-art models for document image binarization are variants of encoder-decoder architectures, such as FCN (fully convolutional network) and U-Net. Despite their success, they still suffer from three limitations: (1) reduced feature map resolution due to consecutive strided pooling or convolutions, (2) multiple scales of target objects, and (3) reduced localization accuracy due to the built-in invariance of deep convolutional neural networks (DCNNs). To overcome these three challenges, we propose an improved semantic segmentation model, referred to as DP-LinkNet, which adopts the D-LinkNet architecture as its backbone, with the proposed hybrid dilated convolution (HDC) and spatial pyramid pooling (SPP) modules between the encoder and the decoder. Extensive experiments are conducted on recent document image binarization competition (DIBCO) and handwritten document image binarization competition (H-DIBCO) benchmark datasets. Results show that our proposed DP-LinkNet outperforms other state-of-the-art techniques by a large margin. Our implementation and the pre-trained models are available at https://github.com/beargolden/DP-LinkNet.
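
The hybrid dilated convolution (HDC) module mentioned above enlarges the receptive field without the strided pooling that causes limitation (1). A minimal 1-D sketch of the idea (not the DP-LinkNet implementation; kernel values and dilation rates are illustrative):

```python
import numpy as np

def dilated_conv1d(signal, kernel, dilation):
    """Valid-mode 1-D convolution with a gap of `dilation - 1` samples
    between kernel taps; resolution is preserved (no downsampling)."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # receptive field of this one layer
    out_len = len(signal) - span + 1
    out = np.empty(out_len)
    for i in range(out_len):
        taps = signal[i : i + span : dilation]  # every `dilation`-th input
        out[i] = np.dot(taps, kernel)
    return out

def receptive_field(dilations, kernel_size=3):
    """Receptive field of a stack of stride-1 dilated conv layers."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf
```

With mixed rates such as [1, 2, 5] (a common HDC pattern chosen to avoid gridding artifacts), three 3-tap layers already cover a 17-sample receptive field while keeping full feature-map resolution.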

History Document Image Background Noise and Removal Methods

  • Ganchimeg, Ganbold
    • International Journal of Knowledge Content Development & Technology / v.5 no.2 / pp.11-24 / 2015
  • Archive libraries commonly provide public access to collections of historical and ancient document images. Such images often require specialized processing to remove background noise and become more legible, since they may be contaminated with noise during transmission, scanning, or conversion to digital form. By identifying the features of these noises, we can categorize them and search for similar patterns in a document image to choose appropriate removal methods. In this paper, we propose a hybrid binarization approach for improving the quality of old documents using a combination of global and local thresholding. This article also reviews noise types that might appear in scanned document images and discusses some noise removal methods.
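
One way such a global+local combination can work is to blend a global Otsu threshold with per-block local means. The sketch below is illustrative, not the paper's method; the block size and blending weight `alpha` are assumptions:

```python
import numpy as np

def otsu_threshold(img):
    """Global Otsu threshold on an 8-bit grayscale image
    (maximizes between-class variance)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = img.size
    cum_w = np.cumsum(hist)                     # pixels with value <= t
    cum_m = np.cumsum(hist * np.arange(256))    # intensity mass <= t
    best_t, best_var = 0, -1.0
    for t in range(255):
        w0, w1 = cum_w[t], total - cum_w[t]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_m[t] / w0
        m1 = (cum_m[-1] - cum_m[t]) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def hybrid_binarize(img, block=8, alpha=0.5):
    """Blend the global Otsu threshold with each block's local mean;
    True marks ink (foreground)."""
    g = otsu_threshold(img)
    out = np.zeros_like(img, dtype=bool)
    h, w = img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = img[y:y + block, x:x + block]
            local_t = alpha * g + (1 - alpha) * patch.mean()
            out[y:y + block, x:x + block] = patch < local_t
    return out
```

Pulling the threshold toward the local mean lets faint strokes survive in dark regions where a purely global threshold would miss them.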

A Speed-up method of document image binarization using water flow model (Water flow model을 이용한 문서영상 이진화의 속도 개선)

  • 오현화;이재용;김두식;장승익;임길택;진성일
    • Proceedings of the IEEK Conference / 2003.11a / pp.393-396 / 2003
  • This paper proposes a method to speed up document image binarization using a water flow model. The proposed method extracts the region of interest (ROI) around characters from a document image and pours water onto the 3-dimensional terrain surface of the image only within the ROI. The amount of water to be filled into a local valley is determined automatically depending on its depth and slope. The proposed method then accumulates weighted water not only at the locally lowest position but also at its neighbors. Finally, the depth of each pond is adaptively thresholded for robust character segmentation. Experimental results on real document images show that the proposed method attains good binarization performance with remarkably reduced processing time compared with the existing method based on a water flow model.
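
The speed-up comes from restricting the expensive water-filling step to a small mask around the characters. A simple stand-in for that ROI extraction (the margin and darkness threshold are illustrative, not the paper's values):

```python
import numpy as np

def character_roi_mask(img, dark=128, margin=2):
    """Boolean mask padding a `margin` around dark (character) pixels;
    'water' would be poured only where the mask is True."""
    ink = img < dark
    mask = np.zeros_like(ink)
    ys, xs = np.nonzero(ink)
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - margin), min(img.shape[0], y + margin + 1)
        x0, x1 = max(0, x - margin), min(img.shape[1], x + margin + 1)
        mask[y0:y1, x0:x1] = True
    return mask
```

On typical documents the ink plus margin covers a small fraction of the page, so limiting the terrain simulation to this mask cuts the work roughly in proportion.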


Evaluation of Restoration Schemes for Bi-Level Digital Image Degraded by Impulse Noise (임펄스 잡음에 의해 훼손된 이진 디지탈 서류 영상의 복구 방법들의 비교 평가)

  • Shin Hyun-Kyung;Shin Joong-Sang
    • The KIPS Transactions: Part B / v.13B no.4 s.107 / pp.369-376 / 2006
  • Restoration of an image corrupted by scaled digitization and electronic transmission can be achieved by modeling the degradation and inverting it; de-speckling of noisy document (or SAR) images is one basic example, although the non-linearity of the speckle noise model may hinder the inverse process. In this paper, our study focuses on restoration methods for bi-level document images degraded by an impulse noise model. Our study shows that, on bi-level document images, the weighted median filter and the Lee filter are very effective among spatial filtering methods, whereas the wavelet filter method is impractical in terms of processing speed: approximately 100 times slower. Optimal values of the weights used in the weighted median filter are investigated and presented in this paper.
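
A weighted median filter repeats each neighbor's value according to its weight before taking the median; a larger center weight helps preserve thin strokes while still rejecting isolated impulses. A small sketch (the 3x3 window and the weight values are illustrative, not the paper's optimal weights):

```python
import numpy as np

def weighted_median_filter(img, weights):
    """3x3 weighted median: each neighbour's value is repeated
    `weights[dy][dx]` times before the median is taken.
    Border pixels are left unchanged."""
    h, w = img.shape
    out = img.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            samples = []
            for dy in range(3):
                for dx in range(3):
                    samples.extend([img[y + dy - 1, x + dx - 1]] * weights[dy][dx])
            out[y, x] = int(np.median(samples))
    return out
```

With a center weight of 5, a lone pepper pixel on a white page is removed (8 white samples outvote 5 black), yet a one-pixel-wide vertical stroke survives (5 + 2 black samples outvote 6 white), which is exactly why the choice of weights matters on bi-level text.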

Word Image Decomposition from Image Regions in Document Images using Statistical Analyses (문서 영상의 그림 영역에서 통계적 분석을 이용한 단어 영상 추출)

  • Jeong, Chang-Bu;Kim, Soo-Hyung
    • The KIPS Transactions: Part B / v.13B no.6 s.109 / pp.591-600 / 2006
  • This paper describes the development and implementation of an algorithm that decomposes word images from image regions containing mixed text and graphics in document images, using statistical analyses. To decompose word images from image regions, the character components must first be separated from the graphic components. For this process, we propose a separation method based on box-plot analysis of the statistics of the structural components; its accuracy is insensitive to changes in the images because the separation criterion is defined by component statistics. The character regions are then determined by analyzing the local crowdedness of the separated character components. Finally, we divide the character regions into text lines and word images using projection profile analysis, gap clustering, special symbol detection, etc. The proposed system reduces the influence of image variation because its criterion is based on the statistics of image regions. We also evaluated the proposed method in a document image processing system for keyword spotting and showed the need for further study of the proposed method.
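
The box-plot analysis above amounts to Tukey's fences: component statistics (e.g. bounding-box areas) far outside the interquartile range are flagged as graphics rather than characters. A minimal sketch, with the standard 1.5×IQR fences assumed:

```python
def boxplot_outliers(sizes):
    """Tukey box-plot rule: values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]
    are flagged (graphic-component candidates among character-sized ones)."""
    xs = sorted(sizes)

    def quantile(q):
        # linear interpolation between order statistics
        pos = q * (len(xs) - 1)
        lo = int(pos)
        frac = pos - lo
        hi = min(lo + 1, len(xs) - 1)
        return xs[lo] * (1 - frac) + xs[hi] * frac

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    lo_fence, hi_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [s for s in sizes if s < lo_fence or s > hi_fence]
```

Because the fences move with the data, the same rule works across pages with different fonts and resolutions, which is the robustness the abstract claims for a statistics-based criterion.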

Document Layout Analysis Based on Fuzzy Energy Matrix

  • Oh, KangHan;Kim, SooHyung
    • International Journal of Contents / v.11 no.2 / pp.1-8 / 2015
  • In this paper, we describe a novel method for document layout analysis that is based on a Fuzzy Energy Matrix (FEM). A FEM is a two-dimensional matrix that contains the likelihood of text and non-text and is generated through the use of Fuzzy theory. The key idea is to define an Energy map for the document to categorize text and non-text. The proposed mechanism is designed for execution with a low-resolution document image, and hence our method has a fast processing speed. The proposed method has been tested on public ICDAR 2009 datasets to conduct a comparison against other state-of-the-art methods, and it was also tested with Korean documents. The results of the experiment indicate that this scheme achieves superior segmentation accuracy, in terms of both precision and recall, and also requires less time for computation than other state-of-the-art document image analysis methods.
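
A fuzzy energy matrix of this kind can be pictured as a low-resolution grid where each cell holds a membership value for "text-like" ink density. The sketch below uses a triangular membership function; the block size, density breakpoints, and ink threshold are assumptions for illustration, not the paper's parameters:

```python
import numpy as np

def text_membership(density, low=0.05, peak=0.25, high=0.6):
    """Triangular fuzzy membership: blocks with ink density near `peak`
    look like text; near-empty or near-solid blocks look like non-text."""
    if density <= low or density >= high:
        return 0.0
    if density <= peak:
        return (density - low) / (peak - low)
    return (high - density) / (high - peak)

def fuzzy_energy_matrix(img, block=8, ink_threshold=128):
    """Per-block text-likelihood grid over a low-resolution view of the page."""
    h, w = img.shape
    fem = []
    for y in range(0, h, block):
        row = []
        for x in range(0, w, block):
            patch = img[y:y + block, x:x + block]
            density = float((patch < ink_threshold).mean())
            row.append(text_membership(density))
        fem.append(row)
    return fem
```

Working on block densities rather than pixels is what makes this kind of layout analysis fast at low resolution, as the abstract emphasizes.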

Machine Learning Based Automatic Categorization Model for Text Lines in Invoice Documents

  • Shin, Hyun-Kyung
    • Journal of Korea Multimedia Society / v.13 no.12 / pp.1786-1797 / 2010
  • Automatic understanding of the contents of a document image is a very hard problem, largely because the document segmentation process induces a mathematically over-determined system. In both academia and industry, there have been incessant and varied efforts to improve the core parts of content retrieval technology by separating out segmentation-related issues using semi-structured documents, e.g., invoices. In this paper we propose classification models for text lines in invoice documents, in which text lines are clustered into five categories according to their contents: purchase order header, invoice header, summary header, surcharge header, and purchase items. Our investigation concentrated on the performance of machine-learning models grouped into linear-discriminant-analysis (LDA) and non-LDA (logic-based) approaches. In the LDA group, naïve Bayesian, k-nearest neighbor, and SVM classifiers were used; in the non-LDA group, decision tree, random forest, and boosting were used. We describe the details of feature vector construction and the model and parameter selection processes, including training and validation. We also present experimental results comparing training/classification error levels for the models employed.
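
To make the pipeline concrete, here is a tiny k-nearest-neighbor classifier of the kind listed above, operating on hand-made text-line feature vectors. The features (digit ratio, token count) and the training points are invented for illustration; the paper's actual feature construction is more elaborate:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """k-NN over (feature_vector, label) pairs; features might be
    e.g. (ratio of digit characters, number of tokens) for a text line."""
    dists = sorted((math.dist(vec, query), label) for vec, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

A line dense with digits and several tokens would land near "purchase item" examples, while a short, mostly alphabetic line near the page top would land near the header categories.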

Distortion Corrected Black and White Document Image Generation Based on Camera (카메라기반의 왜곡이 보정된 흑백 문서 영상 생성)

  • Kim, Jin-Ho
    • The Journal of the Korea Contents Association / v.15 no.11 / pp.18-26 / 2015
  • Document images captured by a camera instead of a scanner may suffer geometric distortion and shadow effects caused by the capturing angle. In this paper, a camera-based algorithm for generating a clean black-and-white document image through distortion correction and shadow elimination is proposed. To correct geometric distortion, such as straightening boundary lines bent by camera lens radial distortion and removing outlying areas included because of the camera direction, a document boundary detection method based on a second-derivative filter is developed. Black-and-white images are then generated by an adaptive binarization method that eliminates the shadow effect. Experimental results on document images captured by a smartphone camera show that the algorithm recovers geometric distortion, eliminates shadows, and produces very good results.
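
Shadow elimination via adaptive binarization works because each pixel is compared against its local neighborhood rather than a page-wide threshold, so a smooth illumination gradient cancels out. A minimal mean-offset sketch (the window size and offset are illustrative; the paper's exact adaptive rule is not specified in the abstract):

```python
import numpy as np

def adaptive_binarize(img, win=7, offset=10):
    """Mark a pixel as ink (0) when it is `offset` darker than the mean
    of its win x win neighbourhood; edge-padded, naive implementation."""
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            local_mean = padded[y:y + win, x:x + win].mean()
            out[y, x] = 0 if img[y, x] < local_mean - offset else 255
    return out
```

On a page with a left-to-right shadow gradient, both the shadowed and the bright background stay white, while ink stands out in either region.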

Moire Noise Removal from Document Images on Electronic Monitor (모니터 문서 영상의 모아레 잡음 제거)

  • Simon, Christian;Williem;Park, In Kyu
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2013.11a / pp.237-238 / 2013
  • The quality of a document image captured from an electronic display can be worse than that of one captured from paper because of Moiré noise, which can yield inaccurate intermediate results for further image processing. This paper proposes a method to remove Moiré noise from document images captured from electronic displays. The proposed algorithm has two parts: first, it corrects the text region (foreground) with small-area smoothing; then, it corrects the background region with large-area smoothing.
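
The two-part scheme can be sketched as mask-selected smoothing at two scales; a box blur stands in for whatever smoothing kernel the paper actually uses, and the window sizes are illustrative:

```python
import numpy as np

def box_blur(img, win):
    """Naive edge-padded box blur (small helper, not optimized)."""
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + win, x:x + win].mean()
    return out

def demoire(img, text_mask, small_win=3, large_win=9):
    """Small-area smoothing where text_mask is True (foreground),
    large-area smoothing elsewhere (background)."""
    fg = box_blur(img, small_win)
    bg = box_blur(img, large_win)
    return np.where(text_mask, fg, bg)
```

The rationale is that aggressive smoothing kills the periodic Moiré pattern in flat background, while the gentler foreground blur avoids washing out character edges.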


Patent Document Similarity Based on Image Analysis Using the SIFT-Algorithm and OCR-Text

  • Park, Jeong Beom;Mandl, Thomas;Kim, Do Wan
    • International Journal of Contents / v.13 no.4 / pp.70-79 / 2017
  • Images are an important element in patents, and many experts use images to analyze a patent or to check differences between patents. However, there is little research on image analysis for patents, partly because image processing is an advanced technology and patent images typically consist of visual parts as well as text and numbers. This study suggests two methods based on image processing: the Scale-Invariant Feature Transform (SIFT) algorithm and Optical Character Recognition (OCR). The first method, based on SIFT, uses image feature points; through feature matching, it can be applied to calculate the similarity between documents containing these images. In the second method, OCR is used to extract text from the images; by using the numbers extracted from an image, the corresponding related text within the text passages can be retrieved, and document similarity can then be calculated from the extracted text. Comparing the suggested methods with an existing method that calculates similarity from text alone demonstrates their feasibility. Additionally, the correlation between the two similarity measures is low, which shows that they capture different aspects of the patent content.
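
SIFT-based similarity between two drawings typically reduces to matching descriptor vectors with a nearest-neighbor ratio test. The sketch below assumes the descriptors have already been extracted (e.g. by a SIFT implementation); the ratio of 0.75 is Lowe's conventional value, and the similarity score (share of unambiguously matched keypoints) is an illustrative choice, not necessarily the paper's exact measure:

```python
import numpy as np

def match_ratio(desc_a, desc_b, ratio=0.75):
    """Ratio test on precomputed descriptor arrays (rows = keypoints):
    the fraction of A's keypoints whose nearest neighbour in B is clearly
    closer than the second-nearest."""
    if len(desc_a) == 0 or len(desc_b) < 2:
        return 0.0
    matches = 0
    for d in desc_a:
        dists = np.sort(np.linalg.norm(desc_b - d, axis=1))
        if dists[0] < ratio * dists[1]:
            matches += 1
    return matches / len(desc_a)
```

A keypoint equidistant from two candidates fails the test, which is what suppresses the repetitive hatching and lead-line patterns common in patent drawings.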