• Title/Summary/Keyword: Degraded Document Image


Stroke Width-Based Contrast Feature for Document Image Binarization

  • Van, Le Thi Khue;Lee, Gueesang
    • Journal of Information Processing Systems
    • /
    • v.10 no.1
    • /
    • pp.55-68
    • /
    • 2014
  • Automatic segmentation of foreground text from the background in degraded document images is essential for the smooth reading of document content and for machine recognition tasks. In this paper, we present a novel approach to the binarization of degraded document images. The proposed method uses a new local contrast feature extracted based on the stroke width of the text. First, a pre-processing step removes noise. Text boundary detection is then performed on an image constructed from the contrast feature, followed by local estimation to extract text from the background. Finally, a refinement procedure is applied to the binarized image as a post-processing step to improve the quality of the final result. Experiments and comparisons on extracting text from degraded handwritten and machine-printed document images against several well-known binarization algorithms demonstrate the effectiveness of the proposed method.
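
The local contrast feature at the heart of this pipeline can be sketched in a few lines. This is an illustrative numpy approximation, not the authors' implementation: the window radius `w` stands in for the estimated stroke width, and the exact feature definition used here (a normalized max-min contrast) is an assumption.

```python
import numpy as np

def local_contrast(img, w=3, eps=1e-6):
    """Local contrast over a (2w+1)x(2w+1) window sized to the
    assumed stroke width: (max - min) / (max + min + eps)."""
    H, W = img.shape
    pad = np.pad(img.astype(float), w, mode="edge")
    mx = np.zeros((H, W))
    mn = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            win = pad[i:i + 2 * w + 1, j:j + 2 * w + 1]
            mx[i, j], mn[i, j] = win.max(), win.min()
    return (mx - mn) / (mx + mn + eps)

# Dark two-pixel-wide stroke on a bright background: the feature
# peaks near the stroke and stays near zero in flat regions.
img = np.full((10, 10), 200.0)
img[:, 4:6] = 30.0
c = local_contrast(img)
```

Thresholding `c` would give the text-boundary candidates that the described pipeline feeds into its local estimation step.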

An Adaptive Binarization Algorithm for Degraded Document Images (저화질 문서영상들을 위한 적응적 이진화 알고리즘)

  • Ju, Jae-Hyon;Oh, Jeong-Su
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.7A
    • /
    • pp.581-585
    • /
    • 2012
  • This paper proposes an adaptive binarization algorithm that is highly effective for degraded document images containing printed Hangul and Chinese characters. Because these characters are composed of thin horizontal strokes and thick vertical strokes, conventional algorithms cannot easily extract the horizontal strokes, which are weaker than the vertical ones in a degraded document image. The proposed algorithm solves this problem by adding a vertical-directional reference adaptive binarization step to an omni-directional reference one. Simulation results show that the proposed algorithm extracts characters well from various degraded document images.
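
The idea of combining a vertical-directional reference with an omni-directional one might look roughly like this. All window sizes and the threshold factor `k` are illustrative assumptions, not values from the paper: the tall, narrow window samples mostly background above and below a faint horizontal stroke, so its local mean stays high and the stroke still falls below threshold.

```python
import numpy as np

def local_mean(img, hh, hw):
    """Mean over a (2*hh+1) x (2*hw+1) neighborhood (edge-padded)."""
    pad = np.pad(img.astype(float), ((hh, hh), (hw, hw)), mode="edge")
    H, W = img.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = pad[i:i + 2 * hh + 1, j:j + 2 * hw + 1].mean()
    return out

def directional_binarize(img, k=0.9):
    """Foreground if darker than k * local mean under EITHER an
    omni-directional (square) or a vertical (tall, narrow) reference
    window, so weak horizontal strokes survive."""
    omni = img < k * local_mean(img, 4, 4)
    vert = img < k * local_mean(img, 6, 1)
    return omni | vert
```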

Skew Correction of Document Images using Edge (에지를 이용한 문서영상의 기울기 보정)

  • Ju, Jae-Hyon;Oh, Jeong-Su
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.16 no.7
    • /
    • pp.1487-1494
    • /
    • 2012
  • This paper proposes an algorithm that detects and corrects the skew of degraded as well as clean document images using edges. The proposed algorithm detects edges in a character region selected by image complexity and generates projection histograms by projecting the edges in various directions. It then detects the document skew by estimating the edge concentration in the histograms and corrects the skewed document image. For fast skew detection, the proposed algorithm uses downsampling and a three-step coarse-to-fine search. On both clean and degraded images, the maximum and average detection errors of the proposed algorithm are about 50% of those of a similar conventional algorithm, and the processing time is reduced to about 25%. On non-uniform-luminance images acquired by a mobile device, the conventional algorithm cannot detect skew because it cannot obtain valid binary images, whereas the proposed algorithm detects it with an average error of 0.1° or less.
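
A minimal sketch of projection-histogram skew detection with a coarse-to-fine search, under the assumption that histogram variance is a usable stand-in for the paper's edge-concentration measure (the actual measure, window selection, and downsampling are not reproduced here):

```python
import numpy as np

def profile_score(rows, cols, theta):
    """Variance of the projection histogram of ink pixels along
    direction theta (radians); sharper peaks mean the text lines
    are aligned with the projection direction."""
    bins = np.round(rows - cols * np.tan(theta)).astype(int)
    hist = np.bincount(bins - bins.min())
    return hist.var()

def detect_skew(binary, coarse_deg=10.0, steps=3):
    """Three-step coarse-to-fine search for the angle (degrees)
    maximizing the projection-profile variance."""
    rows, cols = np.nonzero(binary)
    center, span = 0.0, np.deg2rad(coarse_deg)
    for _ in range(steps):
        cands = np.linspace(center - span, center + span, 21)
        scores = [profile_score(rows, cols, t) for t in cands]
        center = cands[int(np.argmax(scores))]
        span /= 5.0
    return np.rad2deg(center)
```

Each step narrows the search window around the best candidate, which is what makes a fine angular resolution affordable.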

Deskewing Document Image using the Gradient of the Spaces Between Sentences (문장 사이의 공백 기울기를 이용한 문서 이미지 기울기 보정)

  • Heo, Woo-hyung;Gu, Eun-jin;Kim, Cheol-ki;Cha, Eui-young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2013.05a
    • /
    • pp.379-381
    • /
    • 2013
  • In this paper, we propose a method that detects the gradient of the spaces between sentences in a document image and deskews the image accordingly. First, the gradient is measured pixel-by-pixel over the inter-sentence spaces of an edge-extracted document image, and the skewed image is then corrected using the measured gradient. Since the document image is divided into several areas, the method handles margins, embedded images, and multi-column layouts robustly. Because the proposed method uses the blank areas rather than the pixels of the character regions, it corrects degraded document images as well as clean ones more effectively than conventional methods.
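
A crude approximation of the blank-space idea: locate where the blank gap after the first text line begins in each vertical strip, then fit a line through those points; its slope gives the skew. The strip count and the single-gap simplification are illustrative assumptions, not the paper's area-division scheme.

```python
import numpy as np

def blank_gap_slope(binary, strips=8):
    """Estimate skew (degrees) from the gradient of an inter-line
    blank space: per vertical strip, find the first blank row below
    the first text line, then least-squares fit a line through the
    (strip center, blank row) points."""
    H, W = binary.shape
    xs, ys = [], []
    for s in range(strips):
        c0, c1 = s * W // strips, (s + 1) * W // strips
        ink = binary[:, c0:c1].sum(axis=1)          # ink count per row
        top = int(np.argmax(ink > 0))               # first inked row
        gap = top + int(np.argmax(ink[top:] == 0))  # first blank row after it
        xs.append((c0 + c1) / 2.0)
        ys.append(gap)
    slope = np.polyfit(xs, ys, 1)[0]
    return np.rad2deg(np.arctan(slope))
```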


DP-LinkNet: A convolutional network for historical document image binarization

  • Xiong, Wei;Jia, Xiuhong;Yang, Dichun;Ai, Meihui;Li, Lirong;Wang, Song
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.5
    • /
    • pp.1778-1797
    • /
    • 2021
  • Document image binarization is an important pre-processing step in document analysis and archiving. The state-of-the-art models for document image binarization are variants of encoder-decoder architectures, such as FCN (fully convolutional network) and U-Net. Despite their success, they still suffer from three limitations: (1) reduced feature map resolution due to consecutive strided pooling or convolutions, (2) multiple scales of target objects, and (3) reduced localization accuracy due to the built-in invariance of deep convolutional neural networks (DCNNs). To overcome these three challenges, we propose an improved semantic segmentation model, referred to as DP-LinkNet, which adopts the D-LinkNet architecture as its backbone, with the proposed hybrid dilated convolution (HDC) and spatial pyramid pooling (SPP) modules between the encoder and the decoder. Extensive experiments are conducted on recent document image binarization competition (DIBCO) and handwritten document image binarization competition (H-DIBCO) benchmark datasets. Results show that our proposed DP-LinkNet outperforms other state-of-the-art techniques by a large margin. Our implementation and the pre-trained models are available at https://github.com/beargolden/DP-LinkNet.
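
Why hybrid dilated convolution (HDC) chooses dilation rates without a common factor can be seen in a one-dimensional analogue: stacking 3-tap dilated convolutions with rates (1, 2, 5) reaches every input offset in the receptive field, while equal rates (2, 2, 2) leave gaps, the "gridding" artifact. This is a toy calculation, not DP-LinkNet code.

```python
def touched_offsets(dilations):
    """Set of 1-D input offsets reachable by stacking 3-tap dilated
    convolutions with the given dilation rates."""
    offsets = {0}
    for d in dilations:
        offsets = {o + k * d for o in offsets for k in (-1, 0, 1)}
    return offsets

hdc = touched_offsets([1, 2, 5])   # coprime-style rates: no holes
grid = touched_offsets([2, 2, 2])  # equal rates: only even offsets
```

With rates (1, 2, 5) the stack densely covers offsets -8..8; with (2, 2, 2) all odd offsets are never sampled, so information between grid points is lost.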

Evaluation of Restoration Schemes for Bi-Level Digital Image Degraded by Impulse Noise (임펄스 잡음에 의해 훼손된 이진 디지탈 서류 영상의 복구 방법들의 비교 평가)

  • Shin Hyun-Kyung;Shin Joong-Sang
    • The KIPS Transactions:PartB
    • /
    • v.13B no.4 s.107
    • /
    • pp.369-376
    • /
    • 2006
  • Modeling the degradation caused by scaled digitization and electronic transmission, and inverting that model, can restore a corrupted image; de-speckling noisy document (or SAR) images is a basic example. The non-linearity of the speckle noise model can hinder the inverse process. In this paper, we focus on restoration methods for bi-level document images degraded by an impulse noise model. Our study shows that, on bi-level document images, the weighted median filter and the Lee filter are very effective among spatial filtering methods, whereas the wavelet filter method is impractical in terms of processing speed, being approximately 100 times slower. Optimal values of the weight used in the weighted median filter are investigated and presented in this paper.
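
A weighted median filter of the kind evaluated here is short to write down. The 3x3 window and the center weight of 5 are illustrative choices, not the optimal weights reported in the paper: a larger center weight removes isolated impulses while keeping thin strokes, which an unweighted median would erase.

```python
import numpy as np

def weighted_median_filter(img, weights):
    """3x3 weighted median: each neighbor is repeated `weights` times
    before taking the median, so the center pixel can be favored."""
    weights = np.asarray(weights, dtype=int)
    H, W = img.shape
    pad = np.pad(img, 1, mode="edge")
    out = np.empty_like(img)
    for i in range(H):
        for j in range(W):
            win = pad[i:i + 3, j:j + 3]
            out[i, j] = np.median(np.repeat(win.ravel(), weights.ravel()))
    return out

# Center weight 5: the 13-sample median keeps a 1-pixel-wide stroke
# (7 of 13 samples dark) but discards an isolated impulse (5 of 13).
center_weighted = [[1, 1, 1], [1, 5, 1], [1, 1, 1]]
```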

Adaptive Binarization for Camera-based Document Recognition (카메라 기반 문서 인식을 위한 적응적 이진화)

  • Kim, In-Jung
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.12 no.3
    • /
    • pp.132-140
    • /
    • 2007
  • The quality of a camera image is worse than that of a scanner image because of lighting variation and inaccurate focus. This paper proposes a binarization method for camera-based document recognition that is tolerant of low-quality camera images. Starting from an existing method reported to be effective in previous evaluations, we improve its adaptability to images with low contrast caused by low intensity and inaccurate focus. Furthermore, an additional small window applied during binarization effectively extracts the fine detail of character structure, which conventional methods often degrade. In experiments, we applied the proposed method as well as other methods to a document recognizer and compared their performance on many camera images. The results show that the proposed method is effective for recognizing document images captured by a camera.
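
The small-window idea might be sketched like this: a large window gives a coarse local threshold, and only pixels close to that threshold (the ambiguous fine detail) are re-decided with a small window. The window radii and the `margin` parameter are assumptions for illustration, not values from the paper.

```python
import numpy as np

def two_pass_binarize(img, big=7, small=1, margin=10):
    """Large-window local mean gives a coarse threshold; pixels
    within `margin` of it are re-decided with a small window,
    which follows thin strokes more closely."""
    def lmean(a, r):
        pad = np.pad(a.astype(float), r, mode="edge")
        H, W = a.shape
        out = np.zeros((H, W))
        for i in range(H):
            for j in range(W):
                out[i, j] = pad[i:i + 2 * r + 1, j:j + 2 * r + 1].mean()
        return out
    tb = lmean(img, big)
    ts = lmean(img, small)
    out = img < tb - margin                 # confident foreground
    ambiguous = np.abs(img - tb) <= margin  # fine detail near threshold
    out[ambiguous] = (img < ts)[ambiguous]
    return out
```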


Correction of Specular Region on Document Images (문서 영상의 전반사 영역 보정 기법)

  • Simon, Christian;Williem;Park, In Kyu
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2013.11a
    • /
    • pp.239-240
    • /
    • 2013
  • The quality of document images captured by a digital camera may be degraded by non-uniform illumination. High illumination (glare distortion) reduces the contrast of the document image, leading to poor contrast for the text, so an optical character recognition (OCR) system can hardly recognize text in highly illuminated areas. This paper proposes a method to increase the contrast between the text (foreground) and the background in highly illuminated areas.
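
One simple way to realize such a contrast increase, sketched under two assumptions that the paper's actual method may not share: that glare pixels can be found with a global brightness threshold, and that linearly stretching the contrast remaining inside the glare region is acceptable.

```python
import numpy as np

def correct_specular(img, glare_thresh=230.0, out_lo=0.0, out_hi=255.0):
    """Find the washed-out (glare) region, then stretch whatever
    contrast remains there back to full range so faint text becomes
    separable from the background again."""
    img = img.astype(float)
    glare = img > glare_thresh
    if glare.any():
        lo, hi = img[glare].min(), img[glare].max()
        if hi > lo:
            img[glare] = out_lo + (img[glare] - lo) * (out_hi - out_lo) / (hi - lo)
    return img
```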


Performance Improvement of Deep Clustering Networks for Multi Dimensional Data (다차원 데이터에 대한 심층 군집 네트워크의 성능향상 방법)

  • Lee, Hyunjin
    • Journal of Korea Multimedia Society
    • /
    • v.21 no.8
    • /
    • pp.952-959
    • /
    • 2018
  • Clustering is one of the most fundamental algorithms in machine learning. Its performance is affected by the distribution of the data and degrades as the number of samples or dimensions grows. For this reason, we use a stacked autoencoder, a deep learning model, to reduce the dimensionality of the data and generate a feature vector that best represents the input. We use the well-known k-means algorithm for clustering. Since the dimension-reduced feature vectors are still multi-dimensional, we use cosine similarity in addition to Euclidean distance when computing the similarity between a cluster center and a data vector, which improves performance. The deep clustering network combining the stacked autoencoder and k-means is re-trained whenever the k-means result changes; during re-training, the loss function of the stacked autoencoder and the loss function of k-means are combined to improve the performance and stability of the network. Experiments on benchmark image and document datasets empirically validate the power of the proposed algorithm.
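
The combined objective described above (reconstruction error plus a weighted k-means term) together with cosine-similarity assignment can be written compactly. Function names and the weight `lam` are illustrative; the paper's network architecture and training loop are not reproduced.

```python
import numpy as np

def combined_loss(x, x_rec, z, centers, assign, lam=0.1):
    """Joint objective: autoencoder reconstruction error plus the
    k-means distance of each embedding z to its assigned cluster
    center, weighted by lam."""
    rec = ((x - x_rec) ** 2).sum(axis=1).mean()
    km = ((z - centers[assign]) ** 2).sum(axis=1).mean()
    return rec + lam * km

def cosine_assign(z, centers):
    """Assign each embedding to the center with the highest cosine
    similarity (used alongside Euclidean distance in the abstract)."""
    zn = z / np.linalg.norm(z, axis=1, keepdims=True)
    cn = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    return (zn @ cn.T).argmax(axis=1)
```

During re-training, gradients of this combined loss would update the autoencoder, while the centers and assignments are refreshed by k-means.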