• Title/Summary/Keyword: Web character

Search Results: 151

Development of Chinese Character Writing Recognition For Chinese Character Edutainment Contents (한자 에듀테인먼트 콘텐츠를 위한 한자쓰기인식기능개발)

  • Park, Hwa-Jin;Min, So-Young;Lee, Ha-Na;Park, Young-Ho
    • Journal of Digital Contents Society
    • /
    • v.10 no.4
    • /
    • pp.529-536
    • /
    • 2009
  • Interest in Chinese edutainment content products has been increasing along with the growing importance of Chinese-character education. Recently, hands-on education with fun activities has been in the limelight, breaking from traditional passive learning. Responding to this social need, we develop web-based Chinese-character edutainment content for children, utilizing multimedia functions. In particular, since writing education is very important in learning Chinese characters, we also developed a writing recognition function that checks the stroke order when writing a Chinese character. Unlike existing outline-area-based writing systems, it determines whether a character is well written by comparing it against prestored reference points for each Chinese character, after recognizing the stroke order and extracting feature points.

  • PDF
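The stroke-order check described in the abstract can be sketched as below. This is a minimal illustration, not the paper's implementation: the reference-point format, the sample character, and the distance tolerance are all assumptions.

```python
# Hedged sketch: verify stroke order by comparing drawn points against
# prestored per-stroke reference points for a character.
import math

# Hypothetical prestored reference points for each stroke of a character,
# listed in the expected writing order: {char: [stroke1_points, stroke2_points]}
REFERENCE = {
    "二": [
        [(10, 20), (90, 20)],   # first stroke: upper horizontal
        [(5, 60), (95, 60)],    # second stroke: lower horizontal
    ],
}

def close_enough(p, q, tol=15.0):
    """True when a drawn point lies within `tol` pixels of a reference point."""
    return math.dist(p, q) <= tol

def check_writing(char, drawn_strokes):
    """Return True when strokes are drawn in order and near the reference points."""
    ref_strokes = REFERENCE[char]
    if len(drawn_strokes) != len(ref_strokes):
        return False
    for drawn, ref in zip(drawn_strokes, ref_strokes):
        # Every reference point of this stroke must be matched by some drawn point.
        if not all(any(close_enough(d, r) for d in drawn) for r in ref):
            return False
    return True
```

Writing the two strokes in the correct order passes the check, while swapping them fails, which is the behavior the abstract's stroke-order recognition implies.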

GML-based Strategic Approach and Its Application for Geo-scientific Infrastructure Building

  • Moon, Sun-Hee;Lee, Ki-Won;Kwon, Byung-Doo
    • Proceedings of the KSRS Conference
    • /
    • 2003.11a
    • /
    • pp.235-237
    • /
    • 2003
  • GIS has become increasingly important in the information-oriented society as social overhead capital, and many GIS datasets have been developed. To use these data effectively, a standard format that makes them easy to transport and store is needed. For this purpose, the OGC developed GML, based on XML, as a web standard format for geographic information. In this study, web-based mapping of a digital geologic map and a gravity anomaly map was accomplished using GML. Styling methods were implemented in XSLT to make the visualization suit the character of each layer, so that dynamic maps can be produced in SVG. The GML-based map produced in this study can be transferred and represented on the web without loss of meaning or degradation.

  • PDF
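The abstract describes styling GML features into SVG with XSLT. As a rough illustration of the same transformation idea, the sketch below converts a GML point feature into a styled SVG circle in Python; the sample GML fragment and the styling values are assumptions, not the paper's data.

```python
# Hedged sketch: transform a gml:Point into an SVG <circle>, analogous to the
# layer-specific XSLT styling described in the abstract.
import xml.etree.ElementTree as ET

GML_NS = "http://www.opengis.net/gml"
SVG_NS = "http://www.w3.org/2000/svg"

sample_gml = """
<gml:Point xmlns:gml="http://www.opengis.net/gml">
  <gml:coordinates>127.5,36.2</gml:coordinates>
</gml:Point>
"""

def gml_point_to_svg(gml_text, radius=3, fill="red"):
    """Parse a gml:Point and return an SVG circle element styled for its layer."""
    point = ET.fromstring(gml_text)
    coords = point.find(f"{{{GML_NS}}}coordinates").text.strip()
    x, y = (float(v) for v in coords.split(","))
    return ET.Element(f"{{{SVG_NS}}}circle",
                      {"cx": str(x), "cy": str(y),
                       "r": str(radius), "fill": fill})

circle = gml_point_to_svg(sample_gml)
```

Varying `radius` and `fill` per layer corresponds to the per-layer styling the paper implements in XSLT.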

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.1-25
    • /
    • 2020
  • In this paper, we suggest an application system architecture that provides an accurate, fast, and efficient automatic gasometer reading function. The system captures a gasometer image with a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount by selective optical character recognition based on deep learning. In general, an image contains many types of characters, and optical character recognition extracts all of them; some applications, however, need to ignore not-of-interest character types and focus only on specific ones. For example, an automatic gasometer reading system only needs to extract the device ID and gas usage amount from gasometer images in order to bill users; not-of-interest character strings such as device type, manufacturer, manufacturing date, and specification are not valuable to the application. Thus, the application has to analyze only the region of interest and specific character types to extract valuable information. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the region of interest for selective character information extraction. We built three neural networks for the application system: the first is a convolutional neural network that detects the regions of interest containing the gas usage amount and device ID character strings; the second is another convolutional neural network that transforms the spatial information of a region of interest into sequential feature vectors; and the third is a bidirectional long short-term memory network that converts the sequential information into character strings by time-series mapping from feature vectors to characters. In this research, the character strings of interest are the device ID, which consists of 12 Arabic numerals, and the gas usage amount, which consists of 4-5 Arabic numerals. All system components are implemented on the Amazon Web Services cloud with an Intel Xeon E5-2686 v4 CPU and an NVIDIA Tesla V100 GPU. The architecture adopts a master-slave processing structure for efficient and fast parallel processing, coping with about 700,000 requests per day. A mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes each reading request into an input queue with a FIFO (First In, First Out) structure. The slave process consists of the three deep neural networks that conduct the character recognition and runs on the NVIDIA GPU. The slave process continuously polls the input queue for recognition requests; when requests are present, it converts the queued image into the device ID string, the gas usage amount string, and the position information of the strings, returns the information to an output queue, and switches back to idle polling. The master process gets the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three deep neural networks: 22,985 images for training and validation and 4,135 for testing. We randomly split the 22,985 images in an 8:2 ratio between training and validation for each training epoch. The 4,135 test images were categorized into five types (normal, noise, reflex, scale, and slant): normal denotes clean images, noise denotes images with noise, reflex denotes images with light reflection in the gasometer region, scale denotes images with a small object size due to long-distance capture, and slant denotes images that are not horizontally flat. The final character string recognition accuracies for the device ID and gas usage amount on normal data are 0.960 and 0.864, respectively.
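The master-slave structure with FIFO queues described above can be sketched as follows. This is a minimal single-machine illustration with a stand-in recognizer function; the queue contents and the result fields are assumptions, not the paper's actual interfaces.

```python
# Hedged sketch: master pushes requests into a FIFO input queue; a slave worker
# polls it, runs recognition, and returns results through an output queue.
import queue
import threading

input_queue = queue.Queue()    # master pushes reading requests (FIFO)
output_queue = queue.Queue()   # slave returns recognition results

def recognize(image):
    """Stand-in for the CNN detector + CRNN recognizer running on the GPU slave."""
    return {"device_id": "123456789012", "usage": "0421"}

def slave_worker():
    # The slave polls the input queue, recognizes, then returns to idle polling.
    while True:
        image = input_queue.get()
        if image is None:          # shutdown signal for this sketch
            break
        output_queue.put(recognize(image))

def master_submit(image):
    """Master process: push a mobile-device image onto the FIFO input queue."""
    input_queue.put(image)

worker = threading.Thread(target=slave_worker)
worker.start()
master_submit(b"fake-gasometer-image-bytes")
result = output_queue.get()    # master delivers this back to the mobile device
input_queue.put(None)
worker.join()
```

In the paper's deployment the slave side is a GPU process rather than a thread, and the queues decouple the roughly 700,000 daily requests from GPU throughput.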

Breaking character and natural image based CAPTCHA using feature classification (특징 분리를 통한 자연 배경을 지닌 글자 기반 CAPTCHA 공격)

  • Kim, Jaehwan;Kim, Suah;Kim, Hyoung Joong
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.25 no.5
    • /
    • pp.1011-1019
    • /
    • 2015
  • CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a test used in computing to distinguish whether the user is a computer or a human. Many websites use character-based CAPTCHAs consisting of digits and letters. Recently, with the development of OCR technology, simple character-based CAPTCHAs are broken quite easily. As an alternative, many websites add noise to make recognition harder. In this paper, we analyzed the most recent type of CAPTCHA, which incorporates natural images to obfuscate the characters. We proposed an efficient method that uses a support vector machine to separate the characters from the background image and a convolutional neural network to recognize each character. As a result, 368 out of 1,000 CAPTCHAs were correctly identified, demonstrating that the current CAPTCHA is not safe.
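The first stage of the attack, separating character pixels from the natural background, can be illustrated with a linear per-pixel decision function of the kind a trained SVM produces. The two features (darkness and local uniformity) and the weights below are assumptions for illustration only, not the paper's actual features or model.

```python
# Hedged sketch: classify a pixel as "character" vs "natural background" with a
# fixed linear decision function, standing in for a trained SVM boundary.

def pixel_features(intensity, neighborhood_variance):
    """Hypothetical features: rendered text tends to be dark and locally flat."""
    darkness = 1.0 - intensity / 255.0
    uniformity = 1.0 / (1.0 + neighborhood_variance)
    return darkness, uniformity

def is_character_pixel(intensity, neighborhood_variance,
                       w=(1.2, 0.8), bias=-1.0):
    """Linear decision w.x + b > 0; an SVM would learn w and b from examples."""
    darkness, uniformity = pixel_features(intensity, neighborhood_variance)
    score = w[0] * darkness + w[1] * uniformity + bias
    return score > 0
```

Pixels classified as characters would then be grouped into glyphs and passed to the CNN recognizer described in the abstract.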

Platform Independent Game Development Using HTML5 Canvas (HTML5 캔버스를 이용한 플랫폼 독립적인 게임의 구현)

  • Jang, Seok-Woo;Huh, Moon-Haeng
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.12
    • /
    • pp.3042-3048
    • /
    • 2014
  • Recently, HTML5 has drawn much attention since it is considered a next-generation web standard and can implement many graphics- and multimedia-related techniques in a web browser without separately installed programs. In this paper, we implement a game independent of platforms such as iOS and Android using the HTML5 canvas. In the game, the main character can move up, down, left, and right to avoid colliding with neighboring enemies. If the character collides with an enemy, the HP (hit point) gauge bar decreases; on the other hand, if the character obtains heart items, the gauge bar increases. In the future, we will add various items to the game and diversify its user interfaces by applying computer vision techniques such as gesture recognition.
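The collision and HP-gauge rules described above are platform-independent logic; the sketch below expresses them in Python for clarity (the paper implements them in JavaScript on the canvas), with rectangle sizes and HP deltas as assumptions.

```python
# Hedged sketch: axis-aligned bounding-box collision plus the HP gauge update
# rules from the abstract (enemy hit decreases HP, heart item increases it).

def rects_collide(a, b):
    """AABB overlap test; rectangles are (x, y, width, height) tuples."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def update_hp(hp, character, enemies, hearts, max_hp=100):
    """Apply one frame of collision effects to the HP gauge."""
    for enemy in enemies:
        if rects_collide(character, enemy):
            hp -= 10                       # enemy collision: gauge decreases
    for heart in hearts:
        if rects_collide(character, heart):
            hp = min(max_hp, hp + 5)       # heart item: gauge increases
    return hp

hp = update_hp(50, character=(0, 0, 16, 16),
               enemies=[(10, 10, 16, 16)],    # overlaps the character
               hearts=[(100, 100, 8, 8)])     # out of reach this frame
```

In the actual game this check would run once per animation frame, with the canvas redrawing the gauge bar from the returned value.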

Artificial intelligence wearable platform that supports the life cycle of the visually impaired (시각장애인의 라이프 사이클을 지원하는 인공지능 웨어러블 플랫폼)

  • Park, Siwoong;Kim, Jeung Eun;Kang, Hyun Seo;Park, Hyoung Jun
    • Journal of Platform Technology
    • /
    • v.8 no.4
    • /
    • pp.20-28
    • /
    • 2020
  • In this paper, a voice, object, and optical character recognition platform comprising a voice-recognition-based smart wearable device, a smart device, and a web AI server is proposed as an appropriate technology to help the visually impaired live independently, by learning the life cycle of the visually impaired in advance. The wearable device for the visually impaired was designed and manufactured with a reverse-neckband structure to increase wearing convenience and object recognition efficiency. The high-sensitivity small microphone and speaker attached to the wearable device were configured to support the voice recognition interface provided through the app of the smart device linked to the wearable device. Experimental results confirmed that the voice, object, and optical character recognition services, which use open-source software and Google APIs on the web AI server, achieve an average recognition accuracy of 90% or more.

  • PDF

Dialectics of Motherhood-based Existence - Focusing on Charlotte's Web -

  • Yun, Jeong-Mi;Lee, Soo-Kyung
    • Cartoon and Animation Studies
    • /
    • s.45
    • /
    • pp.345-366
    • /
    • 2016
  • In Charlotte's Web, each character motivates the others and strives for the new generation based upon motherhood. The intersection between life and death is directly and symbolically addressed as a component of the natural life cycle. Borrowing Kristeva's theory of the semiotic, the symbolic, and the chora, this study investigates the dialectical oscillation between the semiotic and the symbolic and the social circumstances of subjects in signification, and highlights the features of character growth. From a feminist perspective, motherhood is read here not only as a robust foundation for relations among characters but also as an impetus for developing into a good and influential individual who embraces all organisms with care and consideration. Charlotte's Web clearly shows how the semiotic and symbolic elements of each being, united by motherhood, interact and lead to positive change. Though the world appears to consist of incompatible elements, they are combined, and the novel awakens us to the fact that their harmony is committed to building a more wonderful place. It can be suggested that Charlotte's Web, in which the animal characters embody two tendencies of the human mind, depicts the process of human development.

A Study of consumer's behavior and classifications by advertising techniques of mobile character (모바일 캐릭터의 광고기법에 따른 타켓별 유형분류와 소비자 반응 연구)

  • 강대인;주효정
    • Archives of design research
    • /
    • v.17 no.2
    • /
    • pp.393-402
    • /
    • 2004
  • Mobile advertising has changed the lifestyle of people who live in an information-oriented society with portable equipment such as web phones and PDAs (Personal Digital Assistants), and it has expanded mobile techniques and their application fields unpredictably. The intrinsic characteristics of mobile advertising, such as reach and convenience, add the capabilities of localization, information delivery, and personalization. These market conditions have led basic, audio-focused mobile advertising into the new mobile Internet environment with the additional capability of data communication. Moreover, the formats of SMS (Short Message Service), graphics, web push, and rich media, which are delivered by mobile terminals based on music, basic graphics, voice, and text, mean that the mobile character inevitably correlates with pixel art and animation in 2D (two-dimensional) techniques. Thus, this research addresses the importance and role of mobile advertising, which is at the core of business marketing in the new media era. To activate the mobile market, mobile companies classify the characteristics of consumers using commercially developed mobile characters and study consumer behavior to find the optimal mobile character for business.

  • PDF

Construction of Printed Hangul Character Database PHD08 (한글 문자 데이터베이스 PHD08 구축)

  • Ham, Dae-Sung;Lee, Duk-Ryong;Jung, In-Suk;Oh, Il-Seok
    • The Journal of the Korea Contents Association
    • /
    • v.8 no.11
    • /
    • pp.33-40
    • /
    • 2008
  • The application of OCR is moving from traditional formatted documents to web documents and natural scene images. The new applications commonly use not only the standard Myungjo and Gothic fonts but also various other fonts. Conventional databases, which have mainly been constructed with standard fonts, have limitations for these new applications. In this paper, we generate 243 image samples for each of 2,350 Hangul character classes, differing in font, size, quality, and resolution. Additionally, each sample is varied by binarization threshold and rotational transformation, producing 2,187 samples per character class. In total, 5,139,450 samples constitute the printed Hangul character database, called PHD08. In addition, we present its characteristics and the recognition performance of a commercial OCR software.
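The sample counts reported above are consistent arithmetically, as the short check below shows. The interpretation of the 243-to-2187 expansion as a x9 variant factor (binarization thresholds x rotations) is an assumption inferred from the stated numbers.

```python
# Hedged sketch: reproduce the PHD08 sample counts from the abstract's figures.

CLASSES = 2350        # Hangul character classes
BASE_SAMPLES = 243    # per-class images varying in font, size, quality, resolution

# The abstract reports 2187 samples per class after threshold/rotation variation,
# implying 2187 / 243 = 9 variants generated from each base sample.
VARIANTS = 2187 // BASE_SAMPLES

per_class = BASE_SAMPLES * VARIANTS
total = CLASSES * per_class
```

This reproduces the paper's totals: 2,187 samples per class and 5,139,450 samples overall.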

Auto Braille Translator using Matlab (Matlab을 이용한 자동 점자 변환기)

  • Kim, Hyun-Jin;Kim, Ye-Chan;Park, Chang-Jin;Oh, Se-Jong;Lee, Boong-Joo
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.12 no.4
    • /
    • pp.691-700
    • /
    • 2017
  • This paper describes the design and implementation of an automatic braille converter based on image processing for people who are visually impaired. The conversion algorithm converts the input image obtained from a web-cam into a binary image, labels the character regions, calculates the cross-correlation with stored character pattern images, and converts each matched character pattern into the corresponding braille. Computer simulations showed that the proposed algorithm achieved 95% and 91% conversion success rates for numerals and alphabet letters printed on A5 paper, respectively. A prototype test, implemented with a servo motor driven by an Arduino, confirmed 89% conversion performance. Therefore, we confirmed the feasibility of the automatic braille converter.
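The correlation-based matching step can be sketched as below: a binarized character region is compared against stored template patterns, and the best match is mapped to braille dots. The tiny 3x3 patterns, the match score, and the braille table are toy assumptions for illustration, not the paper's Matlab implementation.

```python
# Hedged sketch: template matching by a simple correlation score over binary
# images, followed by a lookup from recognized character to braille dots.

def correlation(a, b):
    """Fraction of agreeing pixels between two equal-size binary images
    (flat lists of 0/1); a stand-in for normalized cross-correlation."""
    return sum(1 for x, y in zip(a, b) if x == y) / len(a)

# Hypothetical stored character pattern images (3x3, row-major).
TEMPLATES = {
    "1": [0, 1, 0,
          0, 1, 0,
          0, 1, 0],
    "7": [1, 1, 1,
          0, 0, 1,
          0, 1, 0],
}

# Toy braille mapping (dot numbers) for the matched characters.
BRAILLE = {"1": (1,), "7": (1, 2, 4, 5)}

def to_braille(region):
    """Pick the template with the highest correlation; return char and dots."""
    best = max(TEMPLATES, key=lambda ch: correlation(region, TEMPLATES[ch]))
    return best, BRAILLE[best]

# A slightly noisy vertical bar still matches the "1" template best.
char, dots = to_braille([0, 1, 0,
                         0, 1, 0,
                         0, 1, 1])
```

In the paper the dot pattern would then drive the servo motors on the Arduino to raise the corresponding braille pins.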