• Title/Abstract/Keyword: Word Segmentation

Search results: 135 items

딥러닝 신경망을 이용한 문자 및 단어 단위의 영문 차량 번호판 인식 (Character Level and Word Level English License Plate Recognition Using Deep-learning Neural Networks)

  • 김진호
    • 디지털산업정보학회논문지 / Vol.16 No.4 / pp.19-28 / 2020
  • Vehicle license plate recognition systems are not widespread in Malaysia because of the loose character-layout rules, the varying number of characters, and the mix of capital English characters and italic English words. Because the italic English words are hard to segment, a separate method is required to recognize them on Malaysian license plates. In this paper, we propose a mixed character-level and word-level English license plate recognition algorithm using deep-learning neural networks. A difference-of-Gaussians method is used to segment characters and words by generating a black-and-white image in which character strokes are emphasized and touching characters are separated. The proposed deep-learning neural networks were implemented in the LPR system at the gate of a building in Kuala Lumpur to collect a database and evaluate algorithm performance. The evaluation results show that the proposed Malaysian English LPR achieves 98.01% accuracy and can be used in the commercial market.
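
The entry above relies on a difference-of-Gaussians (DoG) step to isolate character strokes before recognition. The sketch below is a minimal, assumed reconstruction of such a step in Python with OpenCV; the parameter values and the connected-component grouping are my own choices, not the paper's exact pipeline:

```python
import cv2
import numpy as np

def dog_segment(gray, sigma_small=1.0, sigma_large=3.0, min_area=30):
    """Return left-to-right bounding boxes of stroke-like components in a grayscale plate image."""
    g = gray.astype(np.float32)
    # Band-pass filtering: subtracting a wide blur from a narrow blur emphasizes strokes.
    dog = cv2.GaussianBlur(g, (0, 0), sigma_small) - cv2.GaussianBlur(g, (0, 0), sigma_large)
    dog = cv2.normalize(dog, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Otsu thresholding yields the black-and-white stroke image.
    _, bw = cv2.threshold(dog, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Connected components approximate individual characters (or whole words).
    n, _, stats, _ = cv2.connectedComponentsWithStats(bw, connectivity=8)
    boxes = [tuple(stats[i, :4]) for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]
    return sorted(boxes, key=lambda b: b[0])  # sort by x so boxes follow reading order
```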

접사정보 및 선호패턴을 이용한 복합명사의 역방향 분해 알고리즘 (A Reverse Segmentation Algorithm of Compound Nouns Using Affix Information and Preference Pattern)

  • 류방;백현철;김상복
    • 한국멀티미디어학회논문지 / Vol.7 No.3 / pp.418-426 / 2004
  • This paper proposes a reverse segmentation algorithm for Korean compound nouns based on inter-syllable mutual information. Most Korean compound nouns are structurally derived from Sino-Korean words, and certain syllables preferentially co-occur, so this information, together with affix information, is used as the decomposition rules for compound nouns. To evaluate performance, the proposed algorithm was applied to 36,061 compound nouns and achieved a segmentation accuracy of 99.3%. It also outperformed existing algorithms in comparative experiments, and in particular produced correct decompositions for most four- and five-syllable compound nouns.

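The entry above decomposes compound nouns from right to left. The sketch below is a toy, assumed illustration of that direction of attack; the dictionaries and the longest-match fallback are mine, not the paper's mutual-information and preference-pattern rules:

```python
UNIT_NOUNS = {"정보", "검색", "시스템", "학회", "논문"}   # toy unit-noun dictionary (hypothetical)
SUFFIXES = {"들", "상"}                                   # toy affix list (hypothetical)

def reverse_decompose(compound, max_len=5):
    units = []
    i = len(compound)
    while i > 0:
        for size in range(min(max_len, i), 0, -1):        # prefer the longest tail match
            piece = compound[i - size:i]
            if piece in UNIT_NOUNS or piece in SUFFIXES:
                units.append(piece)
                i -= size
                break
        else:                                              # no match: back off one syllable
            units.append(compound[i - 1:i])
            i -= 1
    return list(reversed(units))

print(reverse_decompose("정보검색시스템"))                  # ['정보', '검색', '시스템']
```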

무제약 필기체 한글 분할을 위한 가상 네트워크 탐색 시스템의 설계 및 구현 (Design and Implementation of Virtual Network Search System for Segmentation of Unconstrained Handwritten Hangul)

  • 박성호;조범준
    • 한국멀티미디어학회논문지 / Vol.8 No.5 / pp.651-659 / 2005
  • In this paper, we design and implement a new method for segmenting unconstrained handwritten Hangul that searches a virtual network built over the white space between characters, an approach not presented in previous methods. The proposed method is designed to apply to all characters written without constraint by a variety of writers, and by following paths through the virtual network generated in the inter-character white space it can obtain diverse polyline-shaped segmentation paths. In implementing the search system, a search window that varies with the length of the target block prevents segmentation paths from being generated in unwanted regions. Experiments on about 800 samples collected from arbitrary writers show that the proposed virtual network search system achieves an overall segmentation accuracy of about 91.4%, including overlapping and touching characters.

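The paper's virtual-network formulation is not spelled out in the abstract; the sketch below is only an analogous, simplified idea under my own assumptions: a dynamic-programming search for a bending, top-to-bottom cut path that crosses as little ink as possible inside a restricted search window:

```python
import numpy as np

def cut_path(ink, window=None):
    """ink: 2-D array with 1 at stroke pixels and 0 at background.
    window: optional (left, right) column bounds restricting the search region."""
    h, w = ink.shape
    lo, hi = window if window else (0, w)
    acc = ink[:, lo:hi].astype(float)           # accumulated cost of the best path so far
    back = np.zeros_like(acc, dtype=int)
    for r in range(1, h):
        for c in range(hi - lo):
            # Allowed predecessors: up-left, straight up, up-right (a bending polyline).
            cands = [(acc[r - 1, pc], pc) for pc in (c - 1, c, c + 1) if 0 <= pc < hi - lo]
            best, back[r, c] = min(cands)
            acc[r, c] += best
    c = int(np.argmin(acc[-1]))
    path = [c + lo]
    for r in range(h - 1, 0, -1):
        c = back[r, c]
        path.append(c + lo)
    return list(reversed(path))                  # cut column for each image row, top to bottom
```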

Research on Keyword-Overlap Similarity Algorithm Optimization in Short English Text Based on Lexical Chunk Theory

  • Na Li;Cheng Li;Honglie Zhang
    • Journal of Information Processing Systems / Vol.19 No.5 / pp.631-640 / 2023
  • Short-text similarity calculation is one of the hot issues in natural language processing research. Conventional keyword-overlap similarity algorithms consider only lexical item information and neglect the effect of word order, and some optimized variants incorporate word order but their weights are hard to determine. In this paper, building on the keyword-overlap similarity algorithm, a short English text similarity algorithm based on lexical chunk theory (LC-SETSA) is proposed, which for the first time introduces lexical chunk theory from cognitive psychology into short English text similarity calculation. Lexical chunks are used to segment short English texts; the segmentation results preserve the semantic content and fixed word order of the chunks, and the overlap similarity of the lexical chunks is then calculated accordingly. Finally, comparative experiments show that the proposed algorithm is feasible, stable, and effective to a large extent.
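
A minimal sketch of chunk-based overlap similarity, under my own assumptions (a toy chunk inventory, greedy longest-match segmentation, and a Dice-style overlap score; the paper's actual LC-SETSA formulation may differ):

```python
CHUNKS = {"as a matter of fact", "in terms of", "take into account", "machine learning"}  # toy inventory

def chunk_segment(text, max_words=5):
    words, out, i = text.lower().split(), [], 0
    while i < len(words):
        for n in range(min(max_words, len(words) - i), 0, -1):
            cand = " ".join(words[i:i + n])
            if n > 1 and cand in CHUNKS:          # prefer a known multi-word chunk
                out.append(cand)
                i += n
                break
        else:                                     # fall back to the single word
            out.append(words[i])
            i += 1
    return out

def overlap_similarity(a, b):
    ca, cb = set(chunk_segment(a)), set(chunk_segment(b))
    return 2 * len(ca & cb) / (len(ca) + len(cb)) if ca or cb else 0.0

print(overlap_similarity("we take into account machine learning",
                         "machine learning is taken into account"))
```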

자동 음성분할 및 레이블링 시스템의 성능향상 (Performance Improvement of Automatic Speech Segmentation and Labeling System)

  • 홍성태;김제우;김형순
    • 대한음성학회지:말소리 / No.35-36 / pp.175-188 / 1998
  • A database segmented and labeled down to the phoneme level plays an important role in phonetic research and speech engineering. However, building it usually requires manual segmentation and labeling, which is time-consuming and may also lead to inconsistent results. Automatic segmentation and labeling can be introduced to solve these problems. In this paper, we investigate methods to improve the performance of an automatic segmentation and labeling system, using the spectral variation function (SVF), a modified silence model, and energy variation in a postprocessing stage. The SVF is applied in three ways: (1) as an addition to the feature parameters, (2) in postprocessing of phoneme boundaries, and (3) to restrict the Viterbi path so that the resulting phoneme boundaries are located in frames around SVF peaks. In the postprocessing stage, the position with the greatest energy variation during the transition between silence and other phonemes is used to modify boundaries. To evaluate the performance of the system, we used a 452 phonetically balanced word (PBW) database for training phoneme models and a phonetically balanced sentence (PBS) database for testing. In our experiments, 83.1% (a 6.2% improvement) and 95.8% (a 0.9% improvement) of phoneme boundaries were within 20 ms and 40 ms of the manually segmented boundaries, respectively.

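The abstract does not give the exact SVF definition; the sketch below is one common, assumed form of it: a frame-to-frame spectral distance whose peaks serve as candidate phoneme boundaries for post-correcting recognizer output:

```python
import numpy as np
from scipy.signal import find_peaks

def spectral_variation(features):
    """features: (n_frames, n_dims) array of per-frame spectral features (e.g. MFCCs)."""
    diff = np.diff(features, axis=0)                  # change between consecutive frames
    svf = np.linalg.norm(diff, axis=1)                # one distance value per frame pair
    return svf / (svf.max() + 1e-9)                   # normalize to [0, 1]

def boundary_candidates(features, min_gap_frames=3):
    svf = spectral_variation(features)
    # Peaks of the SVF mark rapid spectral change; nearby boundaries from the
    # recognizer can be snapped to the closest peak in a postprocessing step.
    peaks, _ = find_peaks(svf, distance=min_gap_frames)
    return peaks + 1                                   # +1 because diff shifts indices by one frame
```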

Arabic Words Extraction and Character Recognition from Picturesque Image Macros with Enhanced VGG-16 based Model Functionality Using Neural Networks

  • Ayed Ahmad Hamdan Al-Radaideh;Mohd Shafry bin Mohd Rahim;Wad Ghaban;Majdi Bsoul;Shahid Kamal;Naveed Abbas
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol.17 No.7 / pp.1807-1822 / 2023
  • Innovation and the rapidly increasing functionality of user-friendly smartphones have encouraged shutterbugs to capture picturesque image macros in the work environment or during travel. Formal signboards are placed with marketing objectives and are enriched with text to attract people. Extracting and recognizing text from natural images is an emerging research issue that needs consideration. Compared to conventional optical character recognition (OCR), the complex background, implicit noise, lighting, and orientation of these scene-text photos make the problem more difficult, and Arabic scene-text extraction and recognition adds further complications and difficulties. The method described in this paper uses a two-phase methodology to extract Arabic text, with word-boundary awareness, from scene images with varying text orientations. The first stage uses a convolutional autoencoder, and the second uses Arabic Character Segmentation (ACS) followed by a traditional two-layer neural network for recognition. The study also shows how an Arabic training and synthetic dataset can be created to exemplify superimposed text in different scene images: a dataset of 10K cropped images in which Arabic text was found was created for the detection phase, along with a 127K Arabic character dataset for the recognition phase, and the phase-1 labels were generated from an Arabic corpus of 15K quotes and sentences. The Arabic Word Awareness Region Detection (AWARD) approach, which is highly flexible in identifying complex Arabic scene-text images such as arbitrarily oriented, curved, or deformed text, is used to detect these texts. Our experiments show that the system achieves 91.8% word segmentation accuracy and 94.2% character recognition accuracy. We believe that in the future researchers will advance the field of image processing of text images in any language, improving robustness to noise by enhancing the functionality of the VGG-16 based model using neural networks.
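
The recognition stage above is described only as a "traditional two-layer neural network". The sketch below is a generic, assumed implementation of such a classifier in NumPy; the shapes, hyperparameters, and the ReLU/softmax choice are mine, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_two_layer(X, y, n_classes, hidden=128, lr=0.1, epochs=50):
    """X: (n_samples, n_pixels) flattened character glyphs in [0, 1]; y: integer class labels."""
    n, d = X.shape
    W1 = rng.normal(0, 0.01, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.01, (hidden, n_classes)); b2 = np.zeros(n_classes)
    onehot = np.eye(n_classes)[y]
    for _ in range(epochs):
        h = np.maximum(0, X @ W1 + b1)                      # ReLU hidden layer
        logits = h @ W2 + b2
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)                   # softmax probabilities
        g = (p - onehot) / n                                 # cross-entropy gradient at the output
        gh = (g @ W2.T) * (h > 0)                            # backprop through the ReLU
        W2 -= lr * (h.T @ g); b2 -= lr * g.sum(axis=0)
        W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)
    return W1, b1, W2, b2

def predict(X, W1, b1, W2, b2):
    h = np.maximum(0, X @ W1 + b1)
    return (h @ W2 + b2).argmax(axis=1)
```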

한국어 교재의 행 바꾸기 -띄어쓰기와 읽기 능력의 계발 - (Examining Line-breaks in Korean Language Textbooks: the Promotion of Word Spacing and Reading Skills)

  • 조인정;김단비
    • 한국어교육 / Vol.23 No.1 / pp.77-100 / 2012
  • This study investigates issues related to text segmenting, in particular line breaks, in Korean language textbooks. Research on L1 and L2 reading has shown that readers process texts by chunking (grouping words into phrases or meaningful syntactic units) and, therefore, phrase-cued texts are helpful for readers whose syntactic knowledge has not yet been fully developed. In other words, it is important for language textbooks, particularly those for beginner and intermediate level learners, to avoid awkward syntactic divisions at the end of a line. According to our analysis of a number of major Korean language textbooks for beginner-level learners, however, many textbooks display line breaks with awkward syntactic divisions. Moreover, some textbooks frequently split a single word (or eojeol in the case of Korean) across lines. This can hamper not only learners' acquisition of the rules for spacing between eojeols in Korean, but also their development of automatic word recognition, which is an essential part of the reading process. Based on the findings of our textbook analysis and of existing research on reading, this study suggests ways to overcome awkward line breaks in Korean language textbooks.

한의학 고문헌 텍스트 분석을 위한 비지도학습 기반 단어 추출 방법 비교 (Comparison of Word Extraction Methods Based on Unsupervised Learning for Analyzing East Asian Traditional Medicine Texts)

  • 오준호
    • 대한한의학원전학회지 / Vol.32 No.3 / pp.47-57 / 2019
  • Objectives: We aim to assist in choosing an appropriate word extraction method based on unsupervised learning when analyzing East Asian traditional medicine texts. Methods: In order to assign ranks to substrings, we conducted a test using one method (BE: branching entropy) for the exterior boundary value, three methods (CS: cohesion score, TS: t-score, SL: simple-ll) for the interior boundary value, and six methods (BExSL, BExTS, BExCS, CSxTS, CSxSL, TSxSL) obtained by combining them. Results: When the miss rate (MR) was used as the criterion, the error was minimal when TS and SL were used together, while the error was maximal when CS was used alone. When the number of segmented texts was applied as a weight, the results were best for SL alone and worst for BE alone. Conclusions: Unsupervised-learning-based word extraction can be used to analyze texts without a prepared vocabulary. When using this method, SL or the combination of SL and TS should be considered first.
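
A minimal sketch, using my own simplified formulas rather than the paper's exact definitions, of two of the scores compared above: an interior score (cohesion) and an exterior score (branching entropy), both computed from raw substring counts of an unsegmented corpus:

```python
import math
from collections import Counter, defaultdict

def build_counts(corpus, max_len=6):
    counts, nexts = Counter(), defaultdict(Counter)
    for line in corpus:
        for i in range(len(line)):
            for n in range(1, min(max_len, len(line) - i) + 1):
                sub = line[i:i + n]
                counts[sub] += 1
                if i + n < len(line):
                    nexts[sub][line[i + n]] += 1        # character that follows the substring
    return counts, nexts

def cohesion(word, counts):
    # Geometric mean of P(word | first character): high when the characters
    # co-occur far more often than the first character alone would predict.
    if len(word) < 2 or counts[word[0]] == 0:
        return 0.0
    return (counts[word] / counts[word[0]]) ** (1 / (len(word) - 1))

def branching_entropy(word, nexts):
    # Entropy of the next-character distribution: high at true word boundaries,
    # low in the middle of a word where the continuation is predictable.
    total = sum(nexts[word].values())
    if total == 0:
        return 0.0
    return -sum(c / total * math.log2(c / total) for c in nexts[word].values())
```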

인터넷 쇼핑가치에 따른 중국 패션제품 소비자 세분집단의 온라인 구전 및 구매행동 (Segmentation of Chinese Fashion Product Consumers according to Internet Shopping Values and Their Online Word-of-Mouth and Purchase Behavior)

  • 윤미;유혜경;황선아
    • 한국의류산업학회지 / Vol.18 No.3 / pp.317-326 / 2016
  • The main purposes of this study were to segment Chinese consumers who purchase fashion products through internet commerce according to their internet shopping values, to compare their online word-of-mouth acceptance and dissemination behavior, and to examine the demographic characteristics and purchase behavior of the segments. 715 questionnaires were collected through an internet survey from January 19 to March 16, 2015, and a total of 488 were used for the final data analysis. The respondents were men and women aged twenty to thirty-nine living throughout China. Hedonic and utilitarian shopping values were identified through factor analysis, and based on these shopping values the respondents were categorized into four groups: an ambivalent shopping value group, a hedonic shopping value group, a utilitarian shopping value group, and an indifferent group. Among these groups, there were significant differences in online word-of-mouth acceptance as well as in dissemination level and motivation. Overall, the ambivalent shopping value group showed high online word-of-mouth acceptance as well as dissemination motivation. The groups also showed significant differences in clothing selection criteria, frequently used internet shopping sites, online clothing shopping frequency, and information sources, and differed in age, residential area, education level, occupation, and income. However, there were no significant differences in gender or marital status among the groups.

미등록어의 의미 범주 분석을 이용한 복합명사 분해 (Segmentation of Korean Compound Nouns Using Semantic Category Analysis of Unregistered Nouns)

  • 강유환;서영훈
    • Journal of Information Technology Applications and Management / Vol.11 No.4 / pp.95-102 / 2004
  • This paper proposes a method of segmenting compound nouns that include unregistered nouns into a correct combination of unit nouns, using characteristics of person names, loanwords, and location names. A Korean person name is generally composed of three syllables, only a relatively small number of syllables are used as family names, and the combination of the second and third syllables is somewhat restricted. In addition, many person names appear with clue words in compound nouns. Most loanwords contain one or more syllables that cannot appear in native Korean words, or sequences of syllables different from those of usual Korean words. Location names are generally used with clue words designating districts in compound nouns. Using these characteristics to analyze compound nouns not only makes segmentation more accurate but also helps natural language systems use the semantic categories of those unregistered nouns. Experimental results show that the precision of our method is approximately 98% on average, and the precision of person-name and loanword recognition is about 94% and 92%, respectively.

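A minimal sketch, with toy data standing in for the paper's actual resources, of the kind of surface cues described above: a surname-plus-clue-word check for person names and a syllable test for likely loanwords:

```python
SURNAMES = {"김", "이", "박", "최", "정"}            # toy surname list (hypothetical)
NAME_CLUES = {"씨", "교수", "박사", "선수"}           # toy clue words following a person name
LOANWORD_SYLLABLES = {"컴", "퓨", "터", "넷", "쇼"}   # toy "loanword-like" syllables

def looks_like_person_name(chunk, following=""):
    # A 3-syllable chunk whose first syllable is a known surname, optionally
    # followed by a clue word, is treated as a person-name candidate.
    return (len(chunk) == 3
            and chunk[0] in SURNAMES
            and (following in NAME_CLUES or following == ""))

def looks_like_loanword(chunk, threshold=0.5):
    # A chunk dominated by syllables rare in native Korean words is flagged as a loanword.
    hits = sum(1 for syl in chunk if syl in LOANWORD_SYLLABLES)
    return len(chunk) > 0 and hits / len(chunk) >= threshold

print(looks_like_person_name("김유환", "교수"))   # True
print(looks_like_loanword("컴퓨터"))              # True
```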