• Title/Summary/Keyword: text image


Locating Text in Web Images Using Image Based Approaches (웹 이미지로부터 이미지기반 문자추출)

  • Chin, Seongah;Choo, Moonwon
    • Journal of Intelligence and Information Systems / v.8 no.1 / pp.27-39 / 2002
  • A technique for locating and extracting text blocks in various Web images is presented. This area of work has largely been ignored by researchers, even though such text can be meaningful to Internet users. The algorithms work without prior knowledge of the text orientation, size, or font. The extraction algorithm applies edge detection followed by histogram analysis of the characteristics of letters within text clustering regions, so that text regions can be extracted independently of font style and size. A number of experiments have shown acceptable results.

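As an illustration of the pipeline described in the abstract above, here is a minimal sketch (not the authors' algorithm): Canny edges are projected onto image rows, and rows with high edge density are grouped into candidate text bands. The input filename and the density threshold are assumptions for the example.

```python
# Minimal sketch: edge detection + row-projection histogram for text bands.
import cv2
import numpy as np

def locate_text_rows(image_path, density_thresh=0.05):
    """Return (row_start, row_end) bands whose edge density suggests text."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 100, 200)              # binary edge map
    row_density = edges.mean(axis=1) / 255.0       # per-row edge density in [0, 1]
    mask = row_density > density_thresh            # rows likely containing text
    bands, start = [], None
    for y, on in enumerate(mask):
        if on and start is None:
            start = y
        elif not on and start is not None:
            bands.append((start, y))
            start = None
    if start is not None:
        bands.append((start, len(mask)))
    return bands

if __name__ == "__main__":
    print(locate_text_rows("web_image.png"))       # hypothetical input file
```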

Arabic Words Extraction and Character Recognition from Picturesque Image Macros with Enhanced VGG-16 based Model Functionality Using Neural Networks

  • Ayed Ahmad Hamdan Al-Radaideh;Mohd Shafry bin Mohd Rahim;Wad Ghaban;Majdi Bsoul;Shahid Kamal;Naveed Abbas
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.7 / pp.1807-1822 / 2023
  • Innovation and rapidly increasing functionality in user-friendly smartphones have encouraged photographers to capture picturesque image macros in the work environment or during travel. Formal signboards are placed for marketing purposes and are enriched with text to attract people. Extracting and recognizing text from natural images is an emerging research issue that needs attention. Compared to conventional optical character recognition (OCR), the complex backgrounds, implicit noise, lighting, and orientation of these scenic text photos make the problem more difficult, and Arabic scene text extraction and recognition adds further complications. The method described in this paper uses a two-phase methodology to extract Arabic text and word boundaries from scene images with varying text orientations. The first phase uses a convolutional autoencoder, and the second uses Arabic Character Segmentation (ACS) followed by traditional two-layer neural networks for recognition. The study also presents how an Arabic synthetic training dataset can be created to exemplify text superimposed on different scene images. For this purpose, a dataset of 10K cropped images containing Arabic text was created for the detection phase, together with a 127K Arabic character dataset for the recognition phase. The phase-1 labels were generated from an Arabic corpus of 15K quotes and sentences. The Arabic Word Awareness Region Detection (AWARD) approach is used to detect these texts, offering high flexibility in identifying complex Arabic scene text such as arbitrarily oriented, curved, or deformed text. Experiments show that the system achieves 91.8% word segmentation accuracy and 94.2% character recognition accuracy. We believe that, by enhancing the functionality of the VGG-16 based model with neural networks, future researchers will be able to process scene images in any language and better handle noise in text images.
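
As a small illustration of the recognition stage ("traditional two-layer neural networks"), here is a minimal sketch in PyTorch. The 32x32 grayscale crop size and the class count are assumptions for the example; this is not the paper's trained model or dataset.

```python
# Minimal sketch: a two-layer fully connected classifier for character crops.
import torch
import torch.nn as nn

class TwoLayerCharNet(nn.Module):
    def __init__(self, num_classes=28):           # assumed class count
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32, 256),               # hidden layer
            nn.ReLU(),
            nn.Linear(256, num_classes),           # output layer
        )

    def forward(self, x):                          # x: (batch, 1, 32, 32)
        return self.net(x)

if __name__ == "__main__":
    model = TwoLayerCharNet()
    dummy = torch.randn(4, 1, 32, 32)              # four fake character crops
    print(model(dummy).shape)                      # -> torch.Size([4, 28])
```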

A Text Detection Method Using Wavelet Packet Analysis and Unsupervised Classifier

  • Lee, Geum-Boon;Odoyo Wilfred O.;Kim, Kuk-Se;Cho, Beom-Joon
    • Journal of information and communication convergence engineering / v.4 no.4 / pp.174-179 / 2006
  • In this paper we present a text detection method based on wavelet packet analysis and an improved fuzzy clustering algorithm (IAFC). The approach treats text and non-text regions as two different texture regions. Text detection is achieved by using wavelet packet analysis for feature extraction; wavelet packet analysis is a form of wavelet decomposition that offers a richer range of possibilities for document images. From these multiscale features, we apply the improved fuzzy clustering algorithm, which is based on an unsupervised learning rule. The results show that our text detection method is effective for document images scanned from newspapers and journals.
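
A minimal sketch of the feature-plus-clustering idea follows: block-wise wavelet-packet subband energies (via PyWavelets) are clustered into two classes. Plain k-means stands in for the paper's improved fuzzy clustering (IAFC), and the block size, wavelet, and input file are assumptions.

```python
# Minimal sketch: wavelet-packet texture features + unsupervised 2-class clustering.
import cv2
import numpy as np
import pywt
from sklearn.cluster import KMeans

def block_features(gray, block=32, level=2, wavelet="db1"):
    """Energy of each wavelet-packet subband, computed per image block."""
    feats, coords = [], []
    for y in range(0, gray.shape[0] - block + 1, block):
        for x in range(0, gray.shape[1] - block + 1, block):
            patch = gray[y:y + block, x:x + block].astype(float)
            wp = pywt.WaveletPacket2D(patch, wavelet=wavelet, maxlevel=level)
            energies = [np.mean(node.data ** 2)
                        for node in wp.get_level(level, order="natural")]
            feats.append(energies)
            coords.append((y, x))
    return np.array(feats), coords

if __name__ == "__main__":
    gray = cv2.imread("scanned_page.png", cv2.IMREAD_GRAYSCALE)   # hypothetical scan
    feats, coords = block_features(gray)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(feats)   # text vs. non-text
    print(list(zip(coords, labels))[:5])
```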

Improved Text Recognition using Analysis of Illumination Component in Color Images (컬러 영상의 조명성분 분석을 통한 문자인식 성능 향상)

  • Choi, Mi-Young;Kim, Gye-Young;Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information / v.12 no.3 / pp.131-136 / 2007
  • This paper proposes a new approach that eliminates the reflectance component for the detection of text in color images. Images produced by color printing normally contain an illumination component as well as a reflectance component. The reflectance component typically obstructs the detection and recognition of objects such as text in the scene, since it blurs the overall image. We have developed an approach that efficiently removes reflectance components while preserving illumination components. The lighting environment of an input image is classified as Normal or Polarized using a histogram of the red component. By removing the reflectance component introduced by illumination changes, the blurring of text caused by light is reduced and the text can be extracted. Experimental results show superior performance even when an image has a complex background. Text detection and recognition performance is affected by changes in illumination, and our method is robust to images with different illumination conditions.

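As an illustration of separating illumination from reflectance, here is a minimal sketch using a Retinex-style split in the log domain; it is only a stand-in for the paper's method, and the red-channel histogram test for Normal vs. Polarized lighting is not reproduced. The blur scale and file names are assumptions.

```python
# Minimal sketch: log-domain illumination/reflectance decomposition.
import cv2
import numpy as np

def split_illumination(image_path, sigma=30):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE).astype(np.float32) + 1.0
    log_img = np.log(gray)
    log_illum = cv2.GaussianBlur(log_img, (0, 0), sigma)   # smooth part = illumination estimate
    log_refl = log_img - log_illum                         # residual = reflectance estimate
    return np.exp(log_illum), np.exp(log_refl)

if __name__ == "__main__":
    illum, refl = split_illumination("color_print.png")    # hypothetical input
    out = cv2.normalize(illum, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    cv2.imwrite("illumination.png", out)                   # keep the illumination component
```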

Implementation and Evaluation of an Integrated Viewer for Displaying Text and TIFF Image Materials in Internet Environments (인터넷상에서 텍스트와 TIFF 이미지 자료 디스플레이를 위한 뷰어 구현 및 평가)

  • 최흥식
    • Journal of the Korean Society for information Management / v.17 no.1 / pp.67-87 / 2000
  • The purpose of this study is to develop an integrated viewer that can display both text and image files in an Internet environment. Until now, most viewers for full-text databases could display documents only through image or graphic viewers. The newly developed system converts document files from commercial word processors (e.g., 한글™, Word™, Excel™, Powerpoint™, HunminJungum™, Arirang™, CAD™), as well as conventional TIFF image files, into the smaller DVI (DeVice Independent) file format and displays them on the computer screen. The IDoc Viewer was evaluated by a user group consisting of 5 system developers, 5 librarians, and 10 end-users, and was rated good or excellent on 20 of 26 checklist items.


Transforming Text into Video: A Proposed Methodology for Video Production Using the VQGAN-CLIP Image Generative AI Model

  • SukChang Lee
    • International Journal of Advanced Culture Technology / v.11 no.3 / pp.225-230 / 2023
  • With the development of AI technology, there is growing discussion of Text-to-Image Generative AI. We present a Generative AI video production method and delineate a methodology for producing personalized AI-generated videos, with the objective of broadening the landscape of the video domain. We examine the procedural steps involved in AI-driven video production and directly implement a video creation approach using the VQGAN-CLIP model. The outputs produced by the VQGAN-CLIP model exhibited relatively moderate resolution and frame rate and predominantly manifested as abstract images. Such characteristics indicate potential applicability to OTT-based video content or the visual arts. We anticipate that AI-driven video production techniques will see increased use in future work.
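
As a small illustration of the final production step only, the sketch below stitches already-generated frames (e.g., saved VQGAN-CLIP outputs) into a video file with OpenCV; the generative step itself is not shown, and the paths and frame rate are assumptions rather than the paper's settings.

```python
# Minimal sketch: assemble generated image frames into a video file.
import glob
import cv2

def frames_to_video(frame_glob="frames/*.png", out_path="out.mp4", fps=12):
    paths = sorted(glob.glob(frame_glob))
    if not paths:
        raise SystemExit("no frames found")
    first = cv2.imread(paths[0])
    h, w = first.shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for p in paths:
        writer.write(cv2.resize(cv2.imread(p), (w, h)))    # enforce a consistent frame size
    writer.release()

if __name__ == "__main__":
    frames_to_video()
```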

A Comparative Study on OCR using Super-Resolution for Small Fonts

  • Cho, Wooyeong;Kwon, Juwon;Kwon, Soonchu;Yoo, Jisang
    • International journal of advanced smart convergence / v.8 no.3 / pp.95-101 / 2019
  • Recently, there have been many issues related to text recognition using Tesseract. One of them is that recognition accuracy drops significantly for smaller fonts. Tesseract extracts text by creating directional outlines in the image and, by searching its database, selects the character with the lowest error through template matching against characters with similar feature points. When text extraction is poor, recognition accuracy is lowered. In this paper, we compare text recognition accuracy after applying various super-resolution methods to small text images and examine how recognition accuracy varies with image size. To recognize small Korean text images, we used super-resolution algorithms based on deep learning models such as SRCNN, ESRCNN, DSRCNN, and DCSCN. The training and test dataset consisted of Korean scanned images with a 12pt font size, resized from 0.5 to 0.8 times. The experiment was performed on the x0.5 resized images, and the results showed that DCSCN super-resolution is the most efficient method, reducing the precision error rate by 7.8% and the recall error rate by 8.4%. These results demonstrate that text recognition accuracy for smaller Korean fonts can be improved by adding super-resolution methods to the OCR preprocessing module.
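
A minimal sketch of the preprocessing idea follows: upscale a small-font image before passing it to Tesseract. Bicubic interpolation stands in for the deep super-resolution models compared in the paper, and Korean language data for Tesseract is assumed to be installed.

```python
# Minimal sketch: upscale a small-font image before OCR with Tesseract.
import cv2
import pytesseract

def ocr_with_upscaling(image_path, scale=2.0, lang="kor"):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    up = cv2.resize(img, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
    return pytesseract.image_to_string(up, lang=lang)      # requires kor.traineddata

if __name__ == "__main__":
    print(ocr_with_upscaling("small_font_scan.png"))       # hypothetical scanned image
```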

Extraction of Text Regions from Spam-Mail Images Using Color Layers (색상레이어를 이용한 스팸메일 영상에서의 텍스트 영역 추출)

  • Kim Ji-Soo;Kim Soo-Hyung;Han Seung-Wan;Nam Taek-Yong;Son Hwa-Jeong;Oh Sung-Ryul
    • The KIPS Transactions: Part B / v.13B no.4 s.107 / pp.409-416 / 2006
  • In this paper, we propose an algorithm for extracting text regions from spam-mail images using color layers. The CLTE (color layer-based text extraction) algorithm divides the input image into eight color layers, extracts connected components from the eight layers, and classifies them into text and non-text regions based on component size. We also propose an algorithm for recovering damaged text strokes from the extracted text image. In the binary image there are two types of damaged strokes: (1) middle strokes such as 'ㅣ' or 'ㅡ' are deleted, and (2) first and/or last strokes such as 'ㅇ' or 'ㅁ' are filled with black pixels. An experiment with 200 spam-mail images shows that the proposed approach is more accurate than conventional methods by over 10%.
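
A minimal sketch of the color-layer idea follows: quantize the image into eight coarse color layers (one bit per BGR channel), take connected components per layer, and keep components by size. The size thresholds are illustrative, not the CLTE parameters from the paper.

```python
# Minimal sketch: 8 color layers + size-filtered connected components.
import cv2
import numpy as np

def text_components_by_color_layer(image_path, min_area=20, max_area=5000):
    img = cv2.imread(image_path)                                          # BGR image
    layer_index = ((img > 127).astype(np.uint8) * [1, 2, 4]).sum(axis=2)  # values 0..7
    boxes = []
    for layer in range(8):
        mask = (layer_index == layer).astype(np.uint8) * 255
        n, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
        for i in range(1, n):                                             # label 0 is background
            x, y, w, h, area = stats[i]
            if min_area <= area <= max_area:
                boxes.append((x, y, w, h, layer))
    return boxes

if __name__ == "__main__":
    print(text_components_by_color_layer("spam_mail.png")[:10])          # hypothetical image
```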

An Extracting Text Area Using Adaptive Edge Enhanced MSER in Real World Image (실세계 영상에서 적응적 에지 강화 기반의 MSER을 이용한 글자 영역 추출 기법)

  • Park, Youngmok;Park, Sunhwa;Seo, Yeong Geon
    • Journal of Digital Contents Society / v.17 no.4 / pp.219-226 / 2016
  • In everyday life, the information we recognize and use with our eyes is diverse and massive, yet even current technologies improved by artificial intelligence fall far short of human visual processing ability. Nevertheless, many researchers are trying to extract information from everyday scenes, concentrating in particular on recognizing information that consists of text. Extracting text from ordinary documents is already used in some information processing fields, but extracting and recognizing text from real-world images is still far from mature, because real images vary widely in color, size, orientation, and other properties. In this paper, we apply an adaptive edge-enhanced MSER (Maximally Stable Extremal Regions) method to extract text areas and scene text in these diverse environments, and show through experiments that the proposed method performs comparatively well.
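
A minimal sketch of MSER-based candidate text extraction follows, using OpenCV's stock detector; the paper's adaptive edge enhancement is only approximated here by histogram equalization, and the input file is an assumption.

```python
# Minimal sketch: MSER candidate regions for scene text.
import cv2

def mser_text_candidates(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.equalizeHist(gray)                 # crude contrast enhancement
    mser = cv2.MSER_create()
    regions, bboxes = mser.detectRegions(gray)
    return bboxes                                 # one (x, y, w, h) per stable region

if __name__ == "__main__":
    for x, y, w, h in mser_text_candidates("scene.jpg")[:10]:   # hypothetical photo
        print(x, y, w, h)
```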

Web Site Creation Method by Using Text2Image Technology (웹 기반 Text2Image 기술을 이용한 웹 사이트 제작 기법)

  • Ban, Tae-Hak;Kim, Kun-Sub;Min, Kyoung-Ju;Jung, Hoe-Kyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2011.05a / pp.227-229 / 2011
  • Various techniques such as Ajax and jQuery have been developed to build effective web sites. Administrator interfaces are powerful and allow an administrator to easily create a variety of functions from menus, but the results remain in the form of text or code, and the ability to modify them afterwards is limited. To address these problems, this paper combines open fonts stored in a database with CSS features so that various images are converted into text rendered with the open fonts, providing a web site production method differentiated from existing approaches.
