• Title/Summary/Keyword: Character Extraction

Extraction Method of Face Area in Movie Using MRCNN (MRCNN을 이용한 영화속 등장인물 면적추출 방법)

  • Kim, Yeonghuh;You, Eun Soon;Kang, SooHwan;Park, Seung-Bo
    • Proceedings of the Korean Society of Computer Information Conference / 2019.01a / pp.51-52 / 2019
  • For the quantitative analysis of films, this study detected the facial area of characters in a movie using MRCNN. MRCNN was chosen to overcome the limitations of existing face recognition systems (measurement errors for back views and lying poses) and to allow precise area calculation. From one film, 726 images were selected from scenes in which the protagonist and the opposing lead appear together; 496 of these were masked successfully, a performance of 68%, while problems were found in the remaining 230 image files, an error rate of 32%. To reduce this error, we will train the model on the main characters before applying the masks and test whether images can then be extracted correctly at a higher rate than at present.
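
A minimal sketch of the area-measurement idea, assuming a pretrained Mask R-CNN from torchvision rather than the authors' character-trained model; the COCO "person" class and the score threshold are illustrative assumptions.

```python
# Sketch: estimate how much of a frame the characters occupy using a
# pretrained Mask R-CNN (torchvision). The generic COCO "person" class
# stands in for the paper's character-specific training.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def character_area_ratio(image_path, score_thresh=0.7):
    image = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        pred = model([to_tensor(image)])[0]
    union = torch.zeros((image.height, image.width), dtype=torch.bool)
    for label, score, mask in zip(pred["labels"], pred["scores"], pred["masks"]):
        if label.item() == 1 and score.item() >= score_thresh:  # 1 = person in COCO
            union |= mask[0] > 0.5                              # union of instance masks
    return union.float().mean().item()  # fraction of the frame covered

# Example: ratio = character_area_ratio("scene_001.jpg")
```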

Development of RPA with Information Extraction Module (문서에서 정보 추출 기능을 갖는 RPA 개발)

  • Kim, Ki-Tae;Jeong, Su-Na;Lee, Se-Hoon
    • Proceedings of the Korean Society of Computer Information Conference / 2021.07a / pp.435-436 / 2021
  • This paper describes, as part of the development of an RPA (Robotic Process Automation) tool, an automated process that recognizes receipts using OCR and then generates a household account book. The developed RPA tool provides data preprocessing for data to be used in AI applications and also automates other repetitive tasks. Among these, the function that automatically creates a household account book from receipts replaces a repetitive and time-consuming job; using it shortens the time needed for the task and enables efficient management.
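
A minimal sketch of the receipt-to-ledger step, assuming Tesseract via pytesseract for the OCR; the total-amount heuristic and the CSV layout are illustrative assumptions, not the tool's actual format.

```python
# Sketch: OCR a receipt image and append one ledger row to a CSV file.
# Assumes Tesseract with Korean language data installed; the regex for
# the total amount and the CSV columns are illustrative only.
import csv
import re
from datetime import date

from PIL import Image
import pytesseract

def receipt_to_ledger(image_path, ledger_path="ledger.csv"):
    text = pytesseract.image_to_string(Image.open(image_path), lang="kor+eng")
    # Rough heuristic: the largest number that looks like a price is the total.
    amounts = [int(m.replace(",", "")) for m in re.findall(r"\d{1,3}(?:,\d{3})+|\d{4,}", text)]
    total = max(amounts) if amounts else 0
    vendor = text.strip().splitlines()[0] if text.strip() else "unknown"
    with open(ledger_path, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow([date.today().isoformat(), vendor, total])
    return vendor, total
```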

Text Region Extraction from Videos using the Harris Corner Detector (해리스 코너 검출기를 이용한 비디오 자막 영역 추출)

  • Kim, Won-Jun;Kim, Chang-Ick
    • Journal of KIISE: Software and Applications / v.34 no.7 / pp.646-654 / 2007
  • In recent years, the use of text inserted into TV content has grown to provide viewers with better visual understanding. In this paper, video text is defined as a superimposed text region located at the bottom of the video. Video text extraction is the first step for video information retrieval and video indexing. Most previous video text detection and extraction methods are based on text color, contrast between text and background, edges, character filters, and so on. However, video text extraction remains difficult due to the low resolution of video and complex backgrounds. To solve these problems, we propose a method to extract text from videos using the Harris corner detector. The proposed algorithm consists of four steps: corner map generation using the Harris corner detector, extraction of text candidates considering the density of corners, text region determination using labeling, and post-processing. The proposed algorithm is language independent and can be applied to text in various colors. Text region updating between frames is also exploited to reduce processing time. Experiments on diverse videos confirm the efficiency of the proposed method.
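
A minimal sketch of the four-step outline (corner map, corner-density candidates, labeling, post-processing) using OpenCV; the thresholds, window sizes, and the wide-blob post-processing rule are illustrative assumptions, not the paper's parameters.

```python
# Sketch: text candidate regions from Harris corner density, following
# the corner map -> density -> labeling -> post-processing outline.
import cv2
import numpy as np

def text_regions(frame_bgr, density_thresh=0.05, min_area=300):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    corners = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)
    corner_map = (corners > 0.01 * corners.max()).astype(np.float32)
    # Corner density: fraction of corner pixels in a local window.
    density = cv2.boxFilter(corner_map, -1, (15, 15))
    candidate = (density > density_thresh).astype(np.uint8)
    # Labeling: keep connected components that look like text lines.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(candidate)
    boxes = []
    for i in range(1, n):
        x, y, w, h, area = stats[i]
        if area >= min_area and w > h:   # post-processing: wide, dense blobs only
            boxes.append((x, y, w, h))
    return boxes
```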

Development of a Ship's Logbook Data Extraction Model Using OCR Program (OCR 프로그램을 활용한 선박 항해일지 데이터 추출 모델 개발)

  • Dain Lee;Sung-Cheol Kim;Ik-Hyun Youn
    • Journal of the Korean Society of Marine Environment &amp; Safety / v.30 no.1 / pp.97-107 / 2024
  • Despite rapid advances in image recognition technology, perfect digitization of tabular and handwritten documents remains a challenge. The purpose of this study is to improve the accuracy of digitizing a ship's logbook by correcting errors using association rules that apply when logbook entries are made, thereby enhancing the accuracy and reliability of the data extracted from the logbook with OCR programs. The model improves the accuracy of digitizing the logbook of the training ship "Saenuri" of Mokpo Maritime University by correcting errors identified after recognition by an Optical Character Recognition (OCR) program. To evaluate the effect of the model, the data before and after correction were divided by feature, and comparisons were made within the same sailing number and the same feature. Using this model, approximately 10.6% of errors out of a total estimated error rate of about 11.8% were identified, and 56 out of 123 errors were corrected. A limitation of this study is that it only covers the sections of the logbook from Dist.Run to Stand Course, which contain navigational information. Future research will aim to correct more information from the logbook, including weather information, to overcome this limitation.
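
A minimal sketch of rule-based correction after OCR, assuming a hypothetical consistency rule that the distance run should agree with average speed times elapsed hours; the field names and the tolerance are illustrative, not the logbook's actual layout or the authors' rules.

```python
# Sketch: flag and correct an OCR'd "distance run" value using a
# hypothetical association rule: distance ~= average speed * hours.
# Field names ("dist_run", "avg_speed", "hours") are illustrative only.
def correct_dist_run(row, tolerance=0.5):
    expected = row["avg_speed"] * row["hours"]
    if abs(row["dist_run"] - expected) > tolerance:
        corrected = dict(row)
        corrected["dist_run"] = round(expected, 1)  # replace the implausible OCR value
        return corrected, True   # (row, was_corrected)
    return row, False

# Example: {"dist_run": 712.0, "avg_speed": 11.2, "hours": 10} is corrected
# to dist_run = 112.0, while a consistent row is returned unchanged.
```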

Accuracy Analysis of Ortho Imagery with Different Topographic Characteristic (지역적 특성에 따른 정사영상의 정확도 분석)

  • Jo, Hyun-Wook;Park, Joon-Kyu
    • Journal of the Korean Association of Geographic Information Studies / v.11 no.1 / pp.80-89 / 2008
  • Mapping applications using satellite imagery have supported quantitative analysis since the SPOT satellite, which provides stereo images, was launched. In particular, high-resolution satellite imagery has been used efficiently for digital mapping of areas for which large-scale maps are difficult to produce by aerial photogrammetry, or where ground control point surveying cannot be carried out because the areas are inaccessible. This study extracted geospatial information, taking topographic characteristics into account, from ortho imagery of the National Geospatial-Intelligence Agency (NGA) of the United States of America and analyzed the accuracy of the plane coordinates of the ortho imagery. For this purpose, the accuracy for each topographic characteristic was evaluated by comparing the data extracted from the ortho imagery with the 1:5,000-scale digital topographic maps produced by the Korea National Geographic Information Institute (NGI). The results of this study are expected to serve as basic information for ground control point acquisition and digital mapping in inaccessible areas.
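
A minimal sketch of the plane-coordinate accuracy check, assuming paired check points from the ortho imagery and the 1:5,000 maps are already available as coordinate arrays; the data layout is an assumption.

```python
# Sketch: RMSE of plane coordinates between points extracted from the
# ortho imagery and the corresponding points on the reference maps.
import numpy as np

def plane_rmse(ortho_xy, reference_xy):
    """Each input is an (N, 2) array of easting/northing pairs in metres."""
    diff = np.asarray(ortho_xy, dtype=float) - np.asarray(reference_xy, dtype=float)
    rmse_x, rmse_y = np.sqrt((diff ** 2).mean(axis=0))       # per-axis RMSE
    rmse_total = float(np.sqrt((diff ** 2).sum(axis=1).mean()))  # planimetric RMSE
    return rmse_x, rmse_y, rmse_total
```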

Remote Drawing Technology Based on Motion Trajectories Analysis (움직임 궤적 분석 기반의 원거리 판서 기술)

  • Leem, Seung-min;Jeong, Hyeon-seok;Kim, Sung-young
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.9 no.2 / pp.229-236 / 2016
  • In this paper, we propose a new technique for drawing characters at a distance by tracking a hand and analyzing the trajectories of hand positions. It is difficult to recognize the shape of a character without discriminating effective strokes from all drawn strokes. We detect end points from the input trajectories of a syllable captured with a camera system and localize strokes using the detected end points. We then classify the patterns of the extracted strokes into eight classes and finally into two categories: strokes that are part of the syllable and strokes that are not. Only the strokes that are part of the syllable are drawn, so the character can be displayed. We obtain 88.3% accuracy in classifying stroke patterns and 91.1% in classifying stroke types.
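
A minimal sketch of splitting a hand trajectory into strokes at end points and quantizing each stroke's direction into eight classes; the low-speed end-point criterion and the threshold values are illustrative assumptions, not the paper's method.

```python
# Sketch: segment a hand trajectory at low-speed end points and assign
# each resulting stroke one of eight direction classes.
import numpy as np

def segment_strokes(points, speed_thresh=2.0):
    """points: (N, 2) array of hand positions, one row per frame."""
    pts = np.asarray(points, dtype=float)
    speed = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    # End points where the hand nearly stops, plus the first and last samples.
    ends = sorted({0, len(pts) - 1, *(i + 1 for i, s in enumerate(speed) if s < speed_thresh)})
    return [pts[a:b + 1] for a, b in zip(ends[:-1], ends[1:]) if b > a]

def direction_class(stroke):
    """Quantize the stroke's overall direction into 8 classes (0..7)."""
    dx, dy = stroke[-1] - stroke[0]
    angle = np.arctan2(dy, dx) % (2 * np.pi)
    return int(round(angle / (np.pi / 4))) % 8
```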

Recognition of Numeric Characters in License Plates using Eigennumber (고유 숫자를 이용한 번호판 숫자 인식)

  • Park, Kyung-Soo;Kang, Hyun-Chul;Lee, Wan-Joo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.3 / pp.1-7 / 2007
  • In order to recognize a vehicle license plate, the region of the license plate must first be extracted from a vehicle image. The character regions are then separated from the background, and the characters are recognized using neural networks with selected feature vectors. The choice of feature vectors, which serve as the basis of character recognition, has an important effect on the recognition result as well as on the reduction of the data volume. In this paper, we propose a novel feature extraction method in which number images are decomposed into a linear combination of eigennumbers, and we show the validity of this method by applying it to the recognition of numeric characters in license plates. The experimental results show a recognition rate of 95.3% for about 500 vehicle images using a multi-layer perceptron neural network in the eigennumber space, which is 5% better than the conventional mesh feature.
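
A minimal sketch of building an "eigennumber" space with principal component analysis and classifying the projection coefficients with a multi-layer perceptron, assuming scikit-learn; the number of components and the network size are illustrative assumptions.

```python
# Sketch: decompose digit images into a linear combination of
# eigennumbers (principal components) and classify the expansion
# coefficients with a multi-layer perceptron.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def train_eigennumber_classifier(images, labels, n_components=30):
    """images: (N, H, W) array of segmented digit images; labels: (N,)."""
    X = np.asarray(images, dtype=float).reshape(len(images), -1)
    pca = PCA(n_components=n_components)   # eigennumbers = principal axes
    coeffs = pca.fit_transform(X)          # expansion coefficients per digit
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
    clf.fit(coeffs, labels)
    return pca, clf

def predict_digit(pca, clf, image):
    coeff = pca.transform(np.asarray(image, dtype=float).reshape(1, -1))
    return clf.predict(coeff)[0]
```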

Feature extraction motivated by human information processing method and application to handwritten character recognition (인간의 정보처리 방법에 기반한 특징추출 및 필기체 문자인식에의 응용)

  • 윤성수;변혜란;이일병
    • Korean Journal of Cognitive Science / v.9 no.1 / pp.1-11 / 1998
  • In this paper, features that humans are thought to use, identified through psychological experiments on human information processing, are applied to the character recognition problem. Humans use information over somewhat larger areas as well as pixel-by-pixel information. We therefore define a feature that represents a somewhat wider region, called the region feature, and combine features derived from the region feature with the pixel-by-pixel features that have been used until now. The features used are the result of region-feature-based preanalysis, a mesh with region attributes, the cross distance difference, and the gradient. The training and test data in the experiment are handwritten Korean characters, digits, and English letters, trained on a neural network using the backpropagation algorithm; the recognition results are 90.27-93.25%, 98.00%, and 79.73-85.75%, respectively. The experimental results show that the proposed feature is 1-2% better than the UDLRH feature, which is similar in attribute to the region feature, and that the tendency of misrecognition is more easily accepted by humans.
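
A minimal sketch of one of the listed features, a mesh of local ink densities over a binarized character image, which could then feed a backpropagation-trained network; the grid size is an illustrative assumption, not the paper's setting.

```python
# Sketch: mesh feature (per-cell ink density) for a binarized character
# image, usable as the input vector of a backpropagation-trained network.
import numpy as np

def mesh_feature(binary_image, grid=(8, 8)):
    """binary_image: (H, W) array with 1 for ink pixels, 0 for background."""
    img = np.asarray(binary_image, dtype=float)
    feature = []
    for band in np.array_split(img, grid[0], axis=0):
        for cell in np.array_split(band, grid[1], axis=1):
            feature.append(cell.mean())   # ink density of this region
    return np.array(feature)              # length grid[0] * grid[1]
```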

A Prediction Model for TVOC and HCHO Emission of Paint Materials (페인트에서 방출되는 TVOC 및 HCHO 방출량 예측모델)

  • Kim, Hyung-Soo;Lee, Kyung-Hoi
    • KIEAE Journal / v.3 no.1 / pp.13-20 / 2003
  • The need for protection against indoor air pollution is increasingly recognized as environmental pollution grows; people spend more than 80 percent of their time inside buildings. Concern about indoor decoration materials is therefore growing, since they cause pollution in apartment rooms as well as in offices. As indoor decoration materials become more diverse and luxurious, the effect of VOCs (volatile organic compounds) and HCHO (formaldehyde) increases. Indoor decoration materials cause sick building syndrome, with symptoms such as headaches, dizziness, or lack of concentration, which in turn seriously deteriorate people's health. In this study, the status of indoor air pollution was surveyed and prevention techniques were investigated and analyzed. Experimental tests and an assessment of the indoor decoration materials of an apartment were performed, the emitted substances and their emission rates were examined, and the emission characteristics were examined while changing environmental conditions such as temperature, humidity, and ventilation. For the VOC tests, the solid-state adsorption method using adsorption tubes was applied, based on the US EPA TO-17 and ASTM 5116-97 measurement methods and the measurement method of the Japanese Wall Decoration Industrial Association. The samples were analyzed by high-performance liquid chromatography after solvent extraction. Paints were selected as the test subjects. The test proceeded as follows: first, the emission characteristics were determined by measuring the emitted concentrations of VOCs and HCHO from the indoor decoration materials of an apartment; second, a small-scale chamber was built and the test was carried out in the chamber in order to propose an environment-friendly emission prediction model.

Face and Its Components Extraction of Animation Characters Based on Dominant Colors (주색상 기반의 애니메이션 캐릭터 얼굴과 구성요소 검출)

  • Jang, Seok-Woo;Shin, Hyun-Min;Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information / v.16 no.10 / pp.93-100 / 2011
  • The necessity of research on extracting information about the face and facial components of animation characters has been increasing, since these can effectively express the emotion and personality of characters. In this paper, we introduce a method to extract the face and facial components of animation characters by defining a mesh model suitable for characters and by using dominant colors. The suggested algorithm first generates a mesh model for animation characters and extracts dominant colors for the face and facial components by fitting the mesh model to the face of a model character. Then, using the dominant colors, we extract candidate areas of the face and facial components from input images and verify whether the extracted areas are real faces or facial components by means of a color similarity measure. The experimental results show that our method can reliably detect the face and facial components of animation characters.
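
A minimal sketch of the dominant-color step, assuming k-means over the pixels of a labeled face region and a per-pixel color-distance test to mark candidate areas in new frames; the cluster count and the distance threshold are illustrative assumptions, not the paper's mesh-model procedure.

```python
# Sketch: learn the dominant colors of a character's face region with
# k-means, then mark candidate face pixels in a new frame by color distance.
import numpy as np
from sklearn.cluster import KMeans

def dominant_colors(face_region_bgr, k=3):
    pixels = np.asarray(face_region_bgr, dtype=float).reshape(-1, 3)
    return KMeans(n_clusters=k, n_init=10).fit(pixels).cluster_centers_

def candidate_mask(frame_bgr, colors, dist_thresh=40.0):
    pixels = np.asarray(frame_bgr, dtype=float).reshape(-1, 3)
    # Distance from each pixel to its nearest dominant color.
    dists = np.min(np.linalg.norm(pixels[:, None, :] - colors[None, :, :], axis=2), axis=1)
    return (dists < dist_thresh).reshape(frame_bgr.shape[:2])
```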