• Title/Summary/Keyword: text image


A Study on the Creation of Digital Self-portrait with Intertextuality (상호텍스트성을 활용한 디지털 자화상 창작)

  • Lim, Sooyeon
    • The Journal of the Convergence on Culture Technology / v.8 no.1 / pp.427-434 / 2022
  • The purpose of this study is to create a self-portrait that immerses the viewer in the problem of self-awareness. We propose a method for implementing an interactive self-portrait from audio and image information obtained from viewers: the viewer's voice is converted into text and visualized, and the viewer's face image supplies the pixel information composing that text (a minimal rendering sketch follows this entry). Text is a mixture of one's own emotions, imaginations, and intentions based on personal experiences and memories, and people interpret the same text in different ways. The proposed digital self-portrait not only reproduces the viewer's self-consciousness in its inner aspect by exploiting the intertextuality of the text, but also expands the meanings inherent in the text. Intertextuality in the broad sense refers to the totality of all knowledge that arises between text and text, and between subject and subject. The self-portrait expressed in text therefore derives and expands various relationships between viewer and text, viewer and viewer, and text and text. In addition, this study shows that the proposed self-portrait confirms the formative quality of text and re-creates spatiality and temporality in its external aspect. This dynamic self-portrait reflects viewers' interests in real time and is continuously updated and re-created.
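
A minimal sketch of the text-visualization idea above, assuming Pillow: the recognized text is rendered as a mask, and only the pixels under the glyphs are taken from the viewer's face image, so the text is literally composed of face pixels. The font, file paths, and recognized text are illustrative placeholders, not the authors' implementation.

```python
# Hypothetical sketch: render text whose pixels come from the viewer's face image.
from PIL import Image, ImageDraw, ImageFont

def text_from_face(face_path, text, font_path="DejaVuSans-Bold.ttf", size=96):
    face = Image.open(face_path).convert("RGB")
    font = ImageFont.truetype(font_path, size)
    # Render the text as a binary mask.
    mask = Image.new("L", (len(text) * size, int(size * 1.4)), 0)
    ImageDraw.Draw(mask).text((0, 0), text, fill=255, font=font)
    # Stretch the face image over the mask and keep only pixels under the glyphs.
    face = face.resize(mask.size)
    background = Image.new("RGB", mask.size, (0, 0, 0))
    return Image.composite(face, background, mask)

portrait = text_from_face("viewer_face.jpg", "WHO AM I")  # placeholder inputs
portrait.save("self_portrait.png")
```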

An Image-based CAPTCHA System with Correction of Sub-images (서브 이미지의 교정을 통한 이미지 기반의 CAPTCHA 시스템)

  • Chung, Woo-Keun;Ji, Seung-Hyun;Cho, Hwan-Gue
    • Journal of KIISE: Computing Practices and Letters / v.16 no.8 / pp.873-877 / 2010
  • CAPTCHA is a security tool that prevents automatic sign-ups by spam programs or robots, and it usually relies on perceptual skills that humans have and machines lack. However, common plain text-based CAPTCHAs are not difficult for intelligent web bots and machine-learning tools to solve. In this paper, we propose a new sub-image-based CAPTCHA system entirely different from text-based systems. Our system presents sub-images cropped from a whole digital picture and asks the user to identify the correct orientation (a minimal generation sketch follows this entry). Although machine-learning tools exist for estimating image orientation, our experiments clearly showed that they are useless on cropped sub-images. Experiments also showed that our sub-image-based CAPTCHA is easy for human solvers but very hard for all kinds of machine-learning and AI tools, and that challenges can be generated automatically without any human intervention.
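
A minimal sketch of the challenge-generation step, assuming Pillow: crop a random sub-image from a source picture and rotate it by a random multiple of 90 degrees; the solver must name the rotation that restores the upright view. This is an illustrative reconstruction, not the authors' implementation.

```python
# Hypothetical sketch: orientation-based CAPTCHA from a cropped sub-image.
import random
from PIL import Image

def make_challenge(picture_path, crop_size=160):
    img = Image.open(picture_path).convert("RGB")
    x = random.randint(0, img.width - crop_size)
    y = random.randint(0, img.height - crop_size)
    sub = img.crop((x, y, x + crop_size, y + crop_size))
    answer = random.choice([0, 90, 180, 270])   # rotation applied to the crop
    return sub.rotate(answer), answer

challenge, answer = make_challenge("scene.jpg")  # placeholder source picture
challenge.save("captcha.png")                    # correct response undoes `answer`
```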

Image Classification Approach for Improving CBIR System Performance (콘텐트 기반의 이미지검색을 위한 분류기 접근방법)

  • Han, Woo-Jin;Sohn, Kyung-Ah
    • The Journal of Korean Institute of Communications and Information Sciences / v.41 no.7 / pp.816-822 / 2016
  • Content-based image retrieval searches by image features such as local color, texture, and other image content information, unlike conventional searching based on tags or labeled text. In real-life data, few images carry tags or labels, so relevant images are hard to find with a text-based approach. Existing image search methods based only on image feature similarity have limited performance and do not ensure that the results are what the user expected. In this study, we propose and validate a machine-learning-based approach to improve the performance of the image search engine. We note that when users search with a query image, they expect the retrieved images to belong to the same category as the query. We therefore combine an image classification method with the traditional image-feature-similarity method (a reranking sketch follows this entry). The proposed method is extensively validated on the public PASCAL VOC dataset consisting of 11,530 images from 20 categories.
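
A minimal sketch of combining classifier agreement with feature similarity for reranking, in the spirit the abstract describes: the feature extractor, classifier labels, and the linear weighting are placeholder assumptions, not the paper's exact pipeline.

```python
# Hypothetical sketch: rerank gallery images by feature similarity plus a
# bonus when the classifier assigns them the same category as the query.
import numpy as np

def rerank(query_feat, query_label, gallery_feats, gallery_labels, alpha=0.5):
    # Cosine similarity between the query and every gallery image.
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sim = g @ q
    same_class = (gallery_labels == query_label).astype(float)
    scores = (1 - alpha) * sim + alpha * same_class
    return np.argsort(-scores)            # gallery indices, best match first

# Example with random stand-in features and labels:
rng = np.random.default_rng(0)
order = rerank(rng.normal(size=128), 3,
               rng.normal(size=(1000, 128)),
               rng.integers(0, 20, size=1000))
```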

A Camera Image Authentication Using Image Information Copyright Signature for Mobile Device without Distortion (무왜곡 휴대용 단말기 영상정보 권한서명을 이용한 카메라 영상 인증)

  • Han, Chan-Ho;Moon, Kwang-Seok
    • Journal of the Institute of Convergence Signal Processing / v.15 no.2 / pp.30-36 / 2014
  • The quality and resolution of mobile-device cameras have improved significantly. In this paper, we propose a block-based information-hiding technique for mobile devices that avoids the image degradation caused by conventional watermarking methods. The image information consists of text such as camera maker, model, date, and time. Each character is converted into an 8×8 pixel block, and the blocks are appended to the bottom of the image (an encoding sketch follows this entry). Images carrying this block-based authentication information are generally JPEG-compressed on the mobile device, and the vertical-size value in the JPEG header is reset to the original image-sensor size, so a general decoder displays the image while the appended authentication blocks stay hidden. In the experimental results, the JPEG file size increased by less than 0.1% with the proposed block-based authentication encoding. The proposed method can be adopted in various embedded systems, including medical imaging devices, smartphones, and DSLR cameras.
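
A minimal sketch of the appending step: the paper does not specify the exact bit layout here, so as a loudly labeled assumption, each character's 8-bit code is written as one 8×8 black-and-white block (one bit per column) and the row of blocks is stacked below the image.

```python
# Hypothetical sketch: encode metadata text as 8x8 blocks under the image.
import numpy as np
from PIL import Image

def append_info_blocks(image_path, text, out_path):
    img = np.array(Image.open(image_path).convert("L"))
    assert 8 * len(text) <= img.shape[1], "text row must fit the image width"
    blocks = []
    for ch in text:
        bits = [(ord(ch) >> (7 - i)) & 1 for i in range(8)]   # MSB first
        col = np.array(bits, dtype=np.uint8) * 255
        blocks.append(np.tile(col, (8, 1)))                   # one 8x8 block
    strip = np.hstack(blocks)
    pad = np.zeros((8, img.shape[1] - strip.shape[1]), dtype=np.uint8)
    out = np.vstack([img, np.hstack([strip, pad])])           # blocks at bottom
    Image.fromarray(out).save(out_path)

append_info_blocks("photo.jpg", "NIKON 2014-05-01 12:00", "photo_auth.png")
```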

Effects of Presentation Modalities of Television Moving Image and Print Text on Children's and Adult's Recall (TV동영상과 신문텍스트의 정보제시특성이 어린이와 성인의 정보기억에 미치는 영향)

  • Choi, E-Jung
    • The Journal of the Korea Contents Association / v.9 no.7 / pp.149-158 / 2009
  • The major purpose of this study is to explore the effect of the presentation modalities of television and print on children's and adults' recall. An experiment compared children's and adults' recall of stories presented in three modalities: "television moving image 1 (with auditory-visual redundancy)", "television moving image 2 (without auditory-visual redundancy)", and "print text". Results indicated that children remembered more information from the television moving images than from the print versions regardless of auditory-visual redundancy. For adults, however, the advantage of television was found only for information that had been accompanied by redundant pictures in the moving image, providing support for the dual-coding hypothesis.

Jointly Image Topic and Emotion Detection using Multi-Modal Hierarchical Latent Dirichlet Allocation

  • Ding, Wanying;Zhu, Junhuan;Guo, Lifan;Hu, Xiaohua;Luo, Jiebo;Wang, Haohong
    • Journal of Multimedia Information System / v.1 no.1 / pp.55-67 / 2014
  • Image topic and emotion analysis is an important component of online image retrieval, which has become very popular in the rapidly growing social media community. However, because of the gaps between images and texts, very little work in the literature detects an image's topics and emotions in a unified framework, although topics and emotions are two levels of semantics that often work together to describe an image comprehensively. In this work, we propose a unified model, Joint Topic/Emotion Multi-Modal Hierarchical Latent Dirichlet Allocation (JTE-MMHLDA), which extends the earlier LDA, mmLDA, and JST models to capture topic and emotion information from heterogeneous data at the same time (a simplified stand-in sketch follows this entry). Specifically, a two-level graphical model is built so that topics and emotions are shared across the whole document collection. Experimental results on a Flickr dataset indicate that the proposed model efficiently discovers images' topics and emotions; it outperforms the text-only system by 4.4% and the vision-only system by 18.1% in topic detection, and the text-only system by 7.1% and the vision-only system by 39.7% in emotion detection.
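
JTE-MMHLDA itself is a custom two-level graphical model; as a rough, single-level stand-in only, the sketch below fits an ordinary LDA over a shared vocabulary in which each image-document mixes caption words with quantized "visual word" IDs. This illustrates the multi-modal idea, not the authors' model; the toy corpus and token scheme are assumptions.

```python
# Simplified stand-in: one LDA over mixed textual and visual tokens per image.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "sunset beach calm warm v17 v32 v17 v88",   # caption words + visual-word IDs
    "storm ocean dark fear v54 v54 v90 v12",
    "puppy grass happy play v33 v17 v61 v61",
]
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
doc_topics = lda.transform(counts)              # per-image topic mixtures
```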


An Improved Method for Detecting Caption in image using DCT-coefficient and Transition-map Analysis (DCT계수와 천이지도 분석을 이용한 개선된 영상 내 자막영역 검출방법)

  • An, Kwon-Jae;Joo, Sung-Il;Kim, Gye-Young;Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information / v.16 no.4 / pp.61-71 / 2011
  • In this paper, we propose a method for detecting text regions in images using DCT-coefficient and transition-map analysis. The detection rate of the traditional DCT-coefficient method is high, but so is its false-positive rate, and the transition-map method often rejects true text regions in its verification step because of a strict threshold. To overcome these problems, we generate a PTRmap (Promising Text Region map) through DCT-coefficient analysis (a sketch of this step follows this entry) and apply it to the transition-map-based detection method. As a result, the false-positive rate decreases compared with DCT-coefficient analysis alone, and the detection rate increases compared with the transition-map method alone.
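
A minimal sketch of the DCT-analysis step behind a PTRmap-like mask: text-bearing 8×8 blocks tend to carry strong high-frequency energy, so the sketch thresholds that energy per block. The block size and the median threshold are illustrative assumptions, not the paper's parameters.

```python
# Hypothetical sketch: mark blocks with high AC-coefficient energy as promising.
import numpy as np
from scipy.fft import dctn

def promising_text_map(gray, block=8):
    h = (gray.shape[0] // block) * block
    w = (gray.shape[1] // block) * block
    energy = np.zeros((h // block, w // block))
    for by in range(0, h, block):
        for bx in range(0, w, block):
            c = dctn(gray[by:by + block, bx:bx + block].astype(float), norm="ortho")
            c[0, 0] = 0.0                        # discard the DC (average) term
            energy[by // block, bx // block] = np.abs(c).sum()
    return energy > np.quantile(energy, 0.5)     # True = promising text block
```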

Title Extraction from Book Cover Images Using Histogram of Oriented Gradients and Color Information

  • Do, Yen;Kim, Soo Hyung;Na, In Seop
    • International Journal of Contents / v.8 no.4 / pp.95-102 / 2012
  • In this paper, we present a technique to extract the title areas from book cover images. A typical book cover image may contain text, pictures, and diagrams as well as a complex and irregular background. In addition, the high variability of character features such as thickness, font, position, background, and tilt makes text extraction more complicated. We therefore propose an efficient two-step method that uses Histogram of Oriented Gradients (HOG) and color information to find the title areas (a HOG feature sketch follows this entry). First, text localization is carried out to find title candidates; then a refinement process retains the sufficient components of the title areas. To obtain the best result, we also apply constraints on the size of the title and on the ratio between its length and width. We achieve encouraging results in extracting title regions from book cover images, demonstrating the advantages and efficiency of the proposed method.
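
A minimal sketch of the HOG feature extraction such a localizer builds on, using scikit-image: stroke-rich areas such as title text produce strong gradient responses. The cell sizes, threshold, and file path are illustrative assumptions, not the paper's configuration.

```python
# Hypothetical sketch: HOG responses highlight gradient-dense (text-like) areas.
import numpy as np
from skimage import color, io
from skimage.feature import hog

cover = color.rgb2gray(io.imread("book_cover.jpg"))     # placeholder path
features, hog_image = hog(cover, orientations=9, pixels_per_cell=(8, 8),
                          cells_per_block=(2, 2), visualize=True)
# Candidate title boxes can be ranked by their mean HOG response.
ys, xs = np.nonzero(hog_image > np.quantile(hog_image, 0.99))
print("high-gradient pixels span rows", ys.min(), "to", ys.max())
```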

Research Trend Analysis by using Text-Mining Techniques on the Convergence Studies of AI and Healthcare Technologies (텍스트 마이닝 기법을 활용한 인공지능과 헬스케어 융·복합 분야 연구동향 분석)

  • Yoon, Jee-Eun;Suh, Chang-Jin
    • Journal of Information Technology Services / v.18 no.2 / pp.123-141 / 2019
  • The goal of this study is to review the major research trends in the convergence of AI and healthcare technologies. For the study, 15,260 English articles on AI- and healthcare-related topics, covering the 55 years from 1963, were collected from Scopus, and text-mining techniques were applied: text analysis, frequency analysis, topic modeling with LDA (Latent Dirichlet Allocation), word clouds, and ego network analysis (an LDA sketch follows this entry). As a result, seven key research topics were identified: "AI for Clinical Decision Support System (CDSS)", "AI for Medical Image", "Internet of Healthcare Things (IoHT)", "Big Data Analytics in Healthcare", "Medical Robotics", "Blockchain in Healthcare", and "Evidence Based Medicine (EBM)". The results can help researchers and government set up and develop appropriate healthcare R&D strategies.
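
A minimal sketch of the LDA topic-modeling step named above, applied to a few stand-in abstracts with scikit-learn: the toy corpus, topic count, and preprocessing are placeholder assumptions, whereas the study itself modeled 15,260 Scopus abstracts and identified seven topics.

```python
# Hypothetical sketch: fit LDA on abstracts and print top words per topic.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

abstracts = [
    "deep learning clinical decision support diagnosis",
    "wearable sensor internet of things patient monitoring",
    "convolutional network medical image segmentation tumor",
]
vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-5:][::-1]]
    print(f"topic {k}:", ", ".join(top))
```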

Sign2Gloss2Text-based Sign Language Translation with Enhanced Spatial-temporal Information Centered on Sign Language Movement Keypoints (수어 동작 키포인트 중심의 시공간적 정보를 강화한 Sign2Gloss2Text 기반의 수어 번역)

  • Kim, Minchae;Kim, Jungeun;Kim, Ha Young
    • Journal of Korea Multimedia Society / v.25 no.10 / pp.1535-1545 / 2022
  • In sign language, the same gesture can have a completely different meaning depending on the direction of the hand or a change of facial expression, so it is crucial to capture the spatial-temporal structure of each movement. However, sign language translation studies based on Sign2Gloss2Text convey only comprehensive spatial-temporal information about the entire sign language movement; the detailed information of each movement (facial expression, gestures, etc.) that is important for translation is not emphasized. Accordingly, in this paper we propose Spatial-temporal Keypoints Centered Sign2Gloss2Text Translation, named STKC-Sign2Gloss2Text, to supplement the sequential and semantic information of the keypoints that are central to recognizing and translating sign language. STKC-Sign2Gloss2Text consists of two steps: Spatial Keypoints Embedding, which extracts 121 major keypoints from each image, and Temporal Keypoints Embedding, which emphasizes sequential information by running a Bi-GRU over the extracted keypoints (a minimal sketch follows this entry). The proposed model outperformed the Sign2Gloss2Text baseline on all Bilingual Evaluation Understudy (BLEU) scores on the development (DEV) and test (TEST) sets; in particular, it achieved a TEST BLEU-4 of 23.19, an improvement of 1.87, demonstrating the effectiveness of the proposed methodology.
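
A minimal sketch of a temporal keypoints embedding along the lines described above, in PyTorch: a bidirectional GRU over per-frame keypoint vectors. The 121 keypoints follow the abstract, but the (x, y) coordinate format, hidden size, and projection layer are illustrative assumptions, not the paper's settings.

```python
# Hypothetical sketch: Bi-GRU embedding over sequences of frame keypoints.
import torch
import torch.nn as nn

class TemporalKeypointsEmbedding(nn.Module):
    def __init__(self, n_keypoints=121, coord_dim=2, hidden=256):
        super().__init__()
        self.proj = nn.Linear(n_keypoints * coord_dim, hidden)  # spatial embedding
        self.bigru = nn.GRU(hidden, hidden, batch_first=True,
                            bidirectional=True)                 # temporal embedding

    def forward(self, keypoints):             # (batch, frames, 121, 2)
        b, t = keypoints.shape[:2]
        x = self.proj(keypoints.reshape(b, t, -1))
        out, _ = self.bigru(x)                # (batch, frames, 2 * hidden)
        return out

emb = TemporalKeypointsEmbedding()
frames = torch.randn(4, 64, 121, 2)           # 4 clips, 64 frames each
print(emb(frames).shape)                      # torch.Size([4, 64, 512])
```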