Title/Summary/Keyword: text enhancement


Adaptive Error Diffusion for Text Enhancement (문자 영역을 강조하기 위한 적응적 오차 확산법)

  • Kwon Jae-Hyun;Son Chang-Hwan;Park Tae-Yong;Cho Yang-Ho;Ha Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.1 s.307 / pp.9-16 / 2006
  • This paper proposes an adaptive error diffusion algorithm for text enhancement, combined with an efficient text segmentation that uses the maximum gradient difference (MGD). Gradients are calculated along scan lines, and the MGD values are filled within a local window to merge potential text segments. Isolated segments are then eliminated in a non-text region filtering process. After the text segmentation, a conventional error diffusion method is applied to the background, while edge-enhancement error diffusion is used for the text. Since visually objectionable artifacts are inevitable when two different halftoning algorithms are used, gradual dilation is proposed to minimize the boundary artifacts in the segmented text blocks before halftoning. Sharpening based on the gradually dilated text region (GDTR) prevents the printing of successive dots around the text region boundaries. The edge-enhancement error diffusion algorithm is extended to halftone color images to sharpen the text regions. The proposed adaptive error diffusion algorithm involves color halftoning that controls the amount of edge enhancement using a general error filter. The multiplicative edge enhancement parameters are selected based on the amount of edge sharpening and the color difference. In addition, an error factor is introduced to reduce the dot elimination artifact generated by edge-enhancement error diffusion. With the proposed algorithm, the text of a scanned image is sharper than with conventional error diffusion, without changing the background.
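
The segmentation step above can be illustrated concretely. Below is a minimal NumPy sketch of computing the maximum gradient difference along scan lines; the window size and threshold are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of MGD-based text segmentation (assumed parameters).
import numpy as np

def mgd_map(gray, win=11, thresh=60):
    """Per-pixel maximum gradient difference along horizontal scan lines.

    MGD is the difference between the maximum and minimum horizontal
    gradient inside a local window; text strokes produce alternating
    strong positive/negative gradients, so their MGD is large.
    """
    g = np.gradient(gray.astype(np.float32), axis=1)  # horizontal gradient
    half = win // 2
    pad = np.pad(g, ((0, 0), (half, half)), mode="edge")
    # Sliding-window max and min along each scan line.
    windows = np.lib.stride_tricks.sliding_window_view(pad, win, axis=1)
    mgd = windows.max(axis=2) - windows.min(axis=2)
    # Binary mask of potential text segments; isolated components would
    # then be removed in the paper's non-text filtering step, before the
    # two halftoners are applied to background and text respectively.
    return mgd > thresh
```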

Representative Batch Normalization for Scene Text Recognition

  • Sun, Yajie;Cao, Xiaoling;Sun, Yingying
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.7 / pp.2390-2406 / 2022
  • Scene text recognition has important application value and has attracted the interest of many researchers. Many existing methods achieve good results, but most attempt to improve scene text recognition at the image level and work well only on regular scene text. Recognizing text in low-quality images, such as curved, occluded, or blurred text, remains difficult because uneven image quality complicates feature extraction. In addition, model test results depend heavily on the training data, so there is still room for improvement in scene text recognition methods. In this work, we present a natural scene text recognizer that improves recognition performance at the feature level, comprising feature representation and feature enhancement. For feature representation, we propose an efficient feature extractor that combines Representative Batch Normalization with ResNet; it reduces the model's dependence on training data and improves the feature representation of different instances. For feature enhancement, we use a feature enhancement network to expand the receptive field of the feature maps so that they contain rich feature information. The enhanced representation capability helps improve the model's recognition performance. Experiments on 7 benchmarks show that this method is highly competitive in recognizing both regular and irregular text, achieving top-1 recognition accuracy on four benchmarks: IC03, IC13, IC15, and SVTP.
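
As a rough illustration of the feature-representation idea, the PyTorch sketch below wraps standard batch normalization with the instance-wise centering and scaling calibration that Representative Batch Normalization is generally described as adding. Module and parameter names are hypothetical; this is an assumption-laden sketch, not the authors' implementation.

```python
# Hedged sketch of Representative Batch Normalization (assumed details).
import torch
import torch.nn as nn

class RepresentativeBN(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels)
        # Learnable per-channel calibration weights (names hypothetical).
        self.w_center = nn.Parameter(torch.zeros(1, channels, 1, 1))
        self.w_scale = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.b_scale = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x):
        # Centering calibration: shift each instance by its own spatial
        # mean before batch normalization, reducing dependence on the
        # batch (and hence training-data) statistics.
        inst_mean = x.mean(dim=(2, 3), keepdim=True)
        x = x + self.w_center * inst_mean
        x = self.bn(x)
        # Scaling calibration: gate channels by their instance statistics
        # after normalization.
        inst_mean = x.mean(dim=(2, 3), keepdim=True)
        gate = torch.sigmoid(self.w_scale * inst_mean + self.b_scale)
        return x * gate
```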

A Method for Text Detection and Enhancement using Spatio-Temporal Information (시공간 정보를 이용한 자막 탐지 및 향상 기법)

  • Jeong, Jong-Myeon
    • Journal of the Korea Society of Computer and Information / v.14 no.8 / pp.43-50 / 2009
  • Text information in a digital video provides crucial cues for acquiring the video's semantic content. In the proposed method, text candidate regions are extracted from the input sequence using stroke characteristics and localized by projection to produce text bounding boxes. Bounding boxes containing text regions are verified geometrically, and each bounding box at the same location is tracked by computing a matching measure, defined as the mean of absolute differences between the bounding boxes in the current and previous frames. Finally, the text regions are enhanced using the temporal redundancy of the bounding boxes to produce the final results. Experimental results on various videos show the validity of the proposed method.
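
The matching measure is simple enough to state in code. The sketch below computes the mean of absolute differences (MAD) between a bounding box in the current frame and the same location in a previous frame; the threshold value is an illustrative assumption.

```python
# Sketch of the MAD matching measure for caption-box tracking.
import numpy as np

def mad(curr_frame, prev_frame, box):
    """box = (x, y, w, h); frames are grayscale NumPy arrays."""
    x, y, w, h = box
    a = curr_frame[y:y + h, x:x + w].astype(np.float32)
    b = prev_frame[y:y + h, x:x + w].astype(np.float32)
    return np.abs(a - b).mean()

def is_same_caption(curr_frame, prev_frame, box, thresh=10.0):
    # A small MAD means the caption at this location is unchanged, so the
    # box can be tracked and its pixels combined over time for enhancement.
    return mad(curr_frame, prev_frame, box) < thresh
```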

Development Status and Prospects of Graphical Password Authentication System in Korea

  • Yang, Gi-Chul
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.11 / pp.5755-5772 / 2019
  • Security is becoming more important as society changes rapidly, and today's ICT environment demands changes in existing security technologies. As a result, password authentication methods are also changing. The authentication method most often used for security is password authentication, and the most commonly used passwords are text-based. Stronger security requires longer, more complex passwords, but long, complex, text-based passwords are hard to remember and inconvenient to use. Authentication techniques that can replace text-based passwords are therefore needed. Graphical passwords are more difficult to steal than text-based passwords and are easier for users to remember. In recent years, research into graphical passwords as replacements for text-based passwords has been actively conducted around the world. This article surveys recent research and development directions of graphical password authentication systems in Korea. To this end, security authentication methods using graphical passwords are categorized into technical groups, and the research on graphical passwords performed in Korea is explored. In addition, the advantages, disadvantages, and characteristics of all investigated graphical password authentication methods are analyzed.

Overlay Text Graphic Region Extraction for Video Quality Enhancement Application (비디오 품질 향상 응용을 위한 오버레이 텍스트 그래픽 영역 검출)

  • Lee, Sanghee;Park, Hansung;Ahn, Jungil;On, Youngsang;Jo, Kanghyun
    • Journal of Broadcast Engineering / v.18 no.4 / pp.559-571 / 2013
  • This paper identifies several problems that arise when a 2D video with superimposed overlay text is converted to 3D stereoscopic video. To resolve them, it proposes a scenario in which the original video is divided into two parts, one containing only the overlay text graphic region and the other containing the video with holes, which are then processed separately. This paper focuses on detecting and extracting the overlay text graphic region, the first step of the proposed scenario. To decide whether a frame contains overlay text, a corner density map based on the Harris corner detector is used. The overlay text region is then extracted using a hybrid of the region's color and motion information. Experiments show the results of overlay text region detection and extraction on video sequences from several genres.
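
As a sketch of the frame-level test described above, the following uses OpenCV's Harris response to build a corner density map; the block size, response threshold, and window size are illustrative assumptions rather than the paper's settings.

```python
# Sketch of a Harris-corner density map for overlay-text detection.
import cv2
import numpy as np

def corner_density_map(gray, win=32, rel_thresh=0.01):
    # Harris corner response, then binarize relative to the peak response.
    resp = cv2.cornerHarris(np.float32(gray), 2, 3, 0.04)
    corners = (resp > rel_thresh * resp.max()).astype(np.float32)
    # Count corners inside each local window; overlay text regions are
    # corner-dense compared with natural image content.
    kernel = np.ones((win, win), np.float32)
    return cv2.filter2D(corners, -1, kernel)

# A frame would be flagged as containing overlay text when the peak
# density exceeds a preset count, e.g. corner_density_map(img).max() > 50.
```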

Research on Chinese Microblog Sentiment Classification Based on TextCNN-BiLSTM Model

  • Haiqin Tang;Ruirui Zhang
    • Journal of Information Processing Systems / v.19 no.6 / pp.842-857 / 2023
  • Currently, most sentiment classification models for microblogging platforms analyze sentence parts of speech and emoticons without comprehending users' emotional inclinations or grasping moral nuances. This study proposes a hybrid sentiment analysis model. Given the distinct nature of microblog comments, the model employs a combined stop-word list and word2vec for word vectorization. To mitigate local information loss, a TextCNN model without pooling layers is employed for local feature extraction, while BiLSTM is used for contextual feature extraction. Microblog comment sentiments are then categorized by a classification layer. Given the binary classification task at the output layer and the numerous hidden layers within the BiLSTM, the model adopts the tanh activation function. Experimental findings show that the enhanced TextCNN-BiLSTM model attains a precision of 94.75%, an improvement of 1.21%, 1.25%, and 1.25% in precision, recall, and F1 score, respectively, over the standalone TextCNN model, and of 0.78%, 0.9%, and 0.9% over BiLSTM.
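
A minimal PyTorch sketch of the hybrid architecture described above follows: a pooling-free TextCNN branch for local features concatenated with a BiLSTM branch for context, then a tanh classifier head. All dimensions are illustrative, and the temporal averaging used here to aggregate the CNN features is a simplification for the sketch, not the paper's design.

```python
# Hedged sketch of a TextCNN-BiLSTM hybrid classifier (assumed dimensions).
import torch
import torch.nn as nn

class TextCNNBiLSTM(nn.Module):
    def __init__(self, vocab=20000, emb=300, n_filters=100,
                 kernel_sizes=(3, 4, 5), hidden=128, classes=2):
        super().__init__()
        # Embedding weights could be initialized from word2vec vectors.
        self.embed = nn.Embedding(vocab, emb)
        # TextCNN branch without max-pooling layers, per the paper's aim
        # of avoiding local information loss.
        self.convs = nn.ModuleList(
            nn.Conv1d(emb, n_filters, k, padding=k // 2) for k in kernel_sizes
        )
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Sequential(
            nn.Linear(len(kernel_sizes) * n_filters + 2 * hidden, hidden),
            nn.Tanh(),  # activation choice noted in the abstract
            nn.Linear(hidden, classes),
        )

    def forward(self, ids):  # ids: (batch, seq_len) token indices
        e = self.embed(ids)                        # (batch, seq, emb)
        # Local features: temporal average stands in for aggregation.
        c = torch.cat([conv(e.transpose(1, 2)).mean(dim=2)
                       for conv in self.convs], dim=1)
        # Contextual features from the last BiLSTM step.
        h, _ = self.lstm(e)
        return self.fc(torch.cat([c, h[:, -1]], dim=1))
```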