• Title/Abstract/Keyword: text-to-image

Search results: 904 items (processing time: 0.026 seconds)

Membership Inference Attack against Text-to-Image Model Based on Generating Adversarial Prompt Using Textual Inversion

  • 오윤주;박소희;최대선
    • 정보보호학회논문지 / Vol. 33, No. 6 / pp.1111-1123 / 2023
  • As generative models have recently advanced, research on attacks that threaten them has also been actively conducted. This paper introduces a new method for membership inference attacks against text-to-image models. Existing membership inference attacks against text-to-image models inferred membership by generating a single image from the caption of the query image. In contrast, this paper proposes a membership inference attack that uses an embedding personalized to the query image via Textual Inversion and effectively generates multiple images through an adversarial prompt generation method. Furthermore, we conduct the first membership inference attack against Stable Diffusion, one of the most prominent text-to-image models, achieving an accuracy of up to 1.00.
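
The decision rule behind such an attack can be illustrated with a minimal sketch (not the paper's code: the diffusion model, the Textual Inversion step, and the adversarial prompts are omitted, and the function name and threshold are assumptions). The key difference from single-image attacks is that several images are generated per query and the best match decides membership.

```python
# Illustrative sketch only: the final membership decision when an attack
# generates several images per query instead of a single one.
# `similarities` holds similarity scores between the query image and each
# generated image (e.g. cosine similarity of feature embeddings).

def infer_membership(similarities, threshold=0.85):
    """Predict 'member' if the best generated image is close enough."""
    best = max(similarities)  # multiple generations: keep the best match
    return best >= threshold

# A training-set (member) image tends to be reconstructed closely by at
# least one generation; a non-member image is not.
print(infer_membership([0.62, 0.91, 0.70]))  # member candidate
print(infer_membership([0.41, 0.55, 0.48]))  # non-member candidate
```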

An End-to-End Sequence Learning Approach for Text Extraction and Recognition from Scene Image

  • Lalitha, G.;Lavanya, B.
    • International Journal of Computer Science & Network Security / Vol. 22, No. 7 / pp.220-228 / 2022
  • Images always carry useful information, so detecting text in scene images is imperative. The purpose of the proposed work is to recognize scene text images, for example signboard images placed along highways. Scene text detection on highway signboards plays a vital role in road safety measures. At the initial stage, preprocessing techniques are applied to the image to sharpen and improve the features present in it. Likewise, morphological operators are applied to the images to close the small gaps that exist between objects. We propose a two-phase algorithm for extracting and recognizing text from scene images. In phase I, text is extracted from the scene image by applying various image preprocessing techniques such as blurring, erosion, and top-hat filtering, followed by thresholding and the morphological gradient with fixed kernel sizes; a Canny edge detector is then applied to detect the text contained in the scene images. In phase II, the text is recognized using MSER (Maximally Stable Extremal Regions) and OCR. The proposed work aims to detect the text contained in scene images from the popular dataset repositories SVT, ICDAR 2003, and MSRA-TD500; these images were captured under various illumination conditions and angles. The proposed algorithm produces higher accuracy in minimal execution time compared with state-of-the-art methodologies.
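
One phase-I step, using a morphological operator to close small gaps between nearby strokes, can be sketched in pure Python. This is an illustrative 3x3 binary dilation, not the paper's actual pipeline:

```python
# Minimal sketch of binary dilation with a 3x3 structuring element:
# nearby strokes merge into connected text regions.

def dilate(img):
    """img: 2-D list of 0/1 pixels; returns the 3x3-dilated image."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # a pixel turns on if any neighbour in the 3x3 window is on
            out[y][x] = int(any(
                img[ny][nx]
                for ny in range(max(0, y - 1), min(h, y + 2))
                for nx in range(max(0, x - 1), min(w, x + 2))))
    return out

# Two strokes separated by a one-pixel gap become one region.
img = [[0, 1, 0, 1, 0]]
print(dilate(img))  # [[1, 1, 1, 1, 1]]
```

The paper's morphological gradient is the difference between a dilation and an erosion of the same image.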

Caption Extraction in News Video Sequence using Frequency Characteristic

  • Youglae Bae;Chun, Byung-Tae;Seyoon Jeong
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2000년도 ITC-CSCC -2 / pp.835-838 / 2000
  • Popular methods for extracting a text region from video images are generally based on analysis of the whole image, such as merge-and-split methods or the comparison of two frames, so they take a long computing time because the whole image is processed. This paper therefore suggests a faster method of extracting a text region without processing the whole image. The proposed method uses line sampling, the FFT, and neural networks in order to extract text in real time. In general, text areas lie in the higher-frequency domain and can thus be characterized using the FFT. Candidate text areas are found by feeding these high-frequency characteristics to a neural network, and the final text area is extracted by verifying the candidates. Experimental results show a perfect candidate extraction rate and a text extraction rate of about 92%. The strengths of the proposed algorithm are its simplicity, real-time processing achieved by not processing the entire image, and fast skipping of images that do not contain text.

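
The line-sampling idea above can be sketched as follows. This is a minimal illustration, not the paper's implementation: a fixed threshold on the high-frequency energy ratio stands in for the neural-network classifier, and the sample rows are made up.

```python
import cmath

def highfreq_ratio(line):
    """Fraction of spectral energy in the upper half of the DFT
    of one sampled scan line (DC term excluded)."""
    n = len(line)
    mags = []
    for k in range(1, n // 2 + 1):  # skip k=0 (DC component)
        coeff = sum(line[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
        mags.append(abs(coeff) ** 2)
    total = sum(mags)
    high = sum(mags[len(mags) // 2:])  # upper half of the band
    return high / total if total else 0.0

def is_text_candidate(line, threshold=0.5):
    # the paper feeds such spectral features to a neural network;
    # a fixed threshold stands in for that classifier here
    return highfreq_ratio(line) > threshold

text_like = [0, 255, 0, 255, 0, 255, 0, 255]     # rapid stroke transitions
smooth = [0, 30, 60, 90, 120, 150, 180, 210]     # slowly varying background
print(is_text_candidate(text_like), is_text_candidate(smooth))
```

Because only a few sampled lines are transformed rather than the whole frame, the per-image cost stays low, which is the source of the claimed real-time behaviour.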

Integration of Text and Image Information of Interior Design

  • 이현수;정선영;오수영;고경진
    • 한국실내디자인학회논문집 / No. 26 / pp.88-94 / 2001
  • This paper explores the idea of integrating text and image information in interior design. We designed a structure of text and image information: text information includes information about materials and projects, and image information includes images of interior design. Material information consists of items such as the name and price of materials. Image information comprises images of interior design that have been scanned and categorized into 15 groups according to the building regulation. Project information consists of the construction brief and the materials relevant to the image of the interior design. This case-based interior design information offers a variety of information to designers and customers. In addition, the connection between text information and image information improves the quality of interior design by reducing trial and error in the interior design process. Finally, we discuss the method that integrates text and image information of interior design.

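
The information structure described above can be sketched roughly as follows. All field names, categories, and values here are hypothetical illustrations, not the paper's actual schema:

```python
# Hypothetical sketch: text information (materials, projects) linked to
# categorized design images, so a project can be browsed by case.

materials = {
    "M001": {"name": "oak flooring", "price": 52.0},
}
images = {
    "IMG-0007": {"file": "living_room_07.jpg", "category": "residential"},
}
projects = {
    "P100": {
        "brief": "apartment living room renovation",
        "materials": ["M001"],   # link into the text information
        "images": ["IMG-0007"],  # link into the image information
    },
}

def describe(project_id):
    """Follow the text-image links and summarize one project case."""
    p = projects[project_id]
    names = [materials[m]["name"] for m in p["materials"]]
    files = [images[i]["file"] for i in p["images"]]
    return f"{p['brief']}: materials={names}, images={files}"

print(describe("P100"))
```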

A Study on Image Generation from Sentence Embedding Applying Self-Attention

  • 유경호;노주현;홍택은;김형주;김판구
    • 스마트미디어저널 / Vol. 10, No. 1 / pp.63-69 / 2021
  • When a person reads a sentence, they understand it by associating the key words in the sentence with images. Enabling a computer to perform this association process is called text-to-image. Existing deep-learning-based text-to-image models extract text features using a Convolutional Neural Network (CNN)-Long Short-Term Memory (LSTM) or a bi-directional LSTM, and feed them into a GAN to generate images. These models use basic embeddings for text feature extraction and generate images through several modules, so training takes a long time. This study therefore proposes a method that extracts features by applying the attention mechanism, which has improved performance in natural language processing, to sentence embeddings, and generates images by feeding the extracted features into a GAN. In experiments, the proposed method achieved a higher Inception Score than the models used in previous studies, and visual inspection confirmed that the generated images express the features of the input sentences well, even when long sentences are given as input.
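
The core of the proposed feature extraction, attention over word vectors in a sentence, can be sketched in pure Python. This is an illustration of scaled dot-product self-attention only; the paper's model additionally has learned projections and a GAN generator, and the toy vectors below are assumptions.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(word_vecs):
    """word_vecs: list of word vectors; returns attention-weighted vectors."""
    d = len(word_vecs[0])
    out = []
    for q in word_vecs:
        # similarity of this word to every word, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in word_vecs]
        weights = softmax(scores)
        # weighted sum of all word vectors
        out.append([sum(w * v[i] for w, v in zip(weights, word_vecs))
                    for i in range(d)])
    return out

sent = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]  # toy 2-d word vectors
attended = self_attention(sent)
print(attended[0])  # first word is pulled toward its similar neighbour
```

Each output vector mixes in the words most relevant to it, which is how salient words in the sentence come to dominate the embedding fed to the generator.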

Reliable Image-Text Fusion CAPTCHA to Improve User-Friendliness and Efficiency

  • 문광호;김유성
    • 정보처리학회논문지C / Vol. 17C, No. 1 / pp.27-36 / 2010
  • Text-based CAPTCHAs, used during web service sign-up to verify that the applicant is a real human user, are losing reliability because their distorted characters can be deciphered by optical character recognition technology. The image-based CAPTCHAs previously proposed to solve this problem also have several shortcomings: reliability can degrade when an artificial intelligence program identifies the content of the system's limited pool of images; a user who enters a different but similar word for the presented image is judged incorrect and must attempt the CAPTCHA repeatedly; and transmitting several image files to present a single challenge makes the transmission cost inefficient. To solve these problems of existing image-based CAPTCHAs, this paper proposes an image-text fusion CAPTCHA that presents an image fused with part of a related keyword. The proposed image-text fusion CAPTCHA improves user convenience by providing part of the word related to the image as a hint so that the correct answer can be entered easily, and improves efficiency by fusing the image and the text into a single image file, which saves transmission cost. To improve the reliability of the CAPTCHA system, CAPTCHA images are collected in bulk through internet search, and a filtering process is applied to maintain the accuracy of the collected images. Experiments demonstrate that the proposed image-text fusion CAPTCHA is more convenient for users and more reliable than existing image-based CAPTCHAs.
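
The hint scheme can be sketched as follows; the masking rule and function names are assumptions for illustration, not the paper's design. The partially masked keyword would be rendered into the same image file as the photo, so only one file is transmitted per challenge.

```python
# Hypothetical sketch: part of the image's keyword is shown as a hint,
# and the user's answer is accepted when it matches the keyword.

def make_hint(keyword, reveal=(0, -1)):
    """Reveal only selected positions of the keyword, masking the rest."""
    shown = {i % len(keyword) for i in reveal}
    return "".join(c if i in shown else "_" for i, c in enumerate(keyword))

def check_answer(keyword, answer):
    # tolerate case and surrounding whitespace in the user's input
    return answer.strip().lower() == keyword.lower()

print(make_hint("butterfly"))  # b_______y
print(check_answer("butterfly", "Butterfly"))
```

Revealing the first and last letters narrows the space of plausible answers for the user without handing the full word to an automated attacker.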

Patent Document Similarity Based on Image Analysis Using the SIFT-Algorithm and OCR-Text

  • Park, Jeong Beom;Mandl, Thomas;Kim, Do Wan
    • International Journal of Contents / Vol. 13, No. 4 / pp.70-79 / 2017
  • Images are an important element in patents, and many experts use images to analyze a patent or to check differences between patents. However, there is little research on image analysis for patents, partly because image processing is an advanced technology and patent images typically consist of visual parts as well as text and numbers. This study suggests two methods for using image processing: the Scale-Invariant Feature Transform (SIFT) algorithm and Optical Character Recognition (OCR). The first method, which works with SIFT, uses image feature points; through feature matching, it can be applied to calculate the similarity between documents containing these images. In the second method, OCR is used to extract text from the images. Using the numbers extracted from an image, it is possible to extract the corresponding related text within the text passages, and document similarity can subsequently be calculated based on the extracted text. The feasibility of the suggested methods is demonstrated by comparing them with an existing method that calculates similarity based only on text. Additionally, the correlation between the two similarity measures is low, which shows that they capture different aspects of the patent content.
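
The linking step of the second method, from OCR-extracted reference numbers in a drawing to the related description text, can be sketched roughly as follows. The regex, the sentence splitting, and the sample text are illustrative assumptions, not the paper's implementation.

```python
import re

def passages_for_numbers(description, ocr_numbers):
    """Return the sentences that mention any number found in the image."""
    sentences = re.split(r"(?<=\.)\s+", description)
    wanted = set(ocr_numbers)
    return [s for s in sentences
            if wanted & set(re.findall(r"\b\d+\b", s))]

# Patent descriptions refer to drawing elements by reference numerals,
# so the numbers OCR reads in the figure select the relevant passages.
description = ("The housing 10 encloses a rotor 20. "
               "A sensor 30 monitors the rotor 20. "
               "The lid is fastened by screws.")
print(passages_for_numbers(description, ["20"]))
```

Document similarity is then computed over the selected passages rather than over the full text, which is what makes the measure image-driven.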

Stroke Width-Based Contrast Feature for Document Image Binarization

  • Van, Le Thi Khue;Lee, Gueesang
    • Journal of Information Processing Systems / Vol. 10, No. 1 / pp.55-68 / 2014
  • Automatic segmentation of foreground text from the background in degraded document images is essential for the smooth reading of the document content and for machine recognition tasks. In this paper, we present a novel approach to the binarization of degraded document images. The proposed method uses a new local contrast feature extracted based on the stroke width of the text. First, a pre-processing step is carried out for noise removal. Text boundary detection is then performed on the image constructed from the contrast feature, and local estimation follows to extract the text from the background. Finally, a refinement procedure is applied to the binarized image as a post-processing step to improve the quality of the final results. Experiments on extracting text from degraded handwritten and machine-printed document images, and comparisons against several well-known binarization algorithms, demonstrate the effectiveness of the proposed method.
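
A local contrast feature of the kind described can be sketched in one dimension. The window size is fixed at 3 pixels here purely for illustration; in the paper's method it would follow the estimated stroke width of the text.

```python
# Sketch: contrast = (max - min) / (max + min) inside a sliding window
# along one image row. High values mark stroke boundaries.

def local_contrast(row, width=3):
    """Contrast of each pixel's neighbourhood along one image row."""
    out = []
    for x in range(len(row)):
        lo = max(0, x - width // 2)
        window = row[lo:x + width // 2 + 1]
        hi, lo_v = max(window), min(window)
        out.append((hi - lo_v) / (hi + lo_v) if hi + lo_v else 0.0)
    return out

# Strong contrast appears at the stroke, not in the flat background.
row = [200, 200, 200, 20, 200, 200]  # dark stroke on bright paper
c = local_contrast(row)
print([round(v, 2) for v in c])  # [0.0, 0.0, 0.82, 0.82, 0.82, 0.0]
```

Thresholding such a feature map yields the text boundaries on which the subsequent local estimation operates; the normalisation by (max + min) makes the feature tolerant of uneven illumination in degraded documents.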

Web Image Clustering with Text Features and Measuring its Efficiency

  • Cho, Soo-Sun
    • 한국멀티미디어학회논문지 / Vol. 10, No. 6 / pp.699-706 / 2007
  • This article presents an approach to improving the clustering of Web images by using high-level semantic features drawn from text information relevant to the Web images, as well as low-level visual features of the images themselves. The high-level text features can be obtained from image URLs and file names, page titles, hyperlinks, and surrounding text. As the clustering algorithm, the self-organizing map (SOM) proposed by Kohonen is used. To evaluate the clustering efficiency of SOMs, we propose simple but effective measures indicating how strongly same-class images accumulate and how perplexed the class distributions are. Our approach advances the existing measures by defining and using two new measures, accumulativeness on the most superior clustering node and concentricity, to evaluate the clustering efficiency of SOMs. The experimental results show that the high-level text features are more useful in SOM-based Web image clustering.

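
One plausible reading of the accumulativeness measure, the fraction of a class's images gathered on the single map node holding most of them, can be sketched as follows. The exact definition here is an assumption, not the paper's formula.

```python
from collections import Counter

def accumulativeness(node_assignments):
    """node_assignments: SOM node index assigned to each image of one class.
    Returns the share of the class held by its most populated node."""
    counts = Counter(node_assignments)
    return max(counts.values()) / len(node_assignments)

# A clustering that gathers a class on fewer nodes scores higher.
print(accumulativeness([3, 3, 3, 7]))  # 0.75
print(accumulativeness([1, 2, 3, 4]))  # 0.25
```

Comparing such scores for text-feature-based maps against visual-feature-based maps is the kind of evaluation the abstract describes.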

Real Scene Text Image Super-Resolution Based on Multi-Scale and Attention Fusion

  • Xinhua Lu;Haihai Wei;Li Ma;Qingji Xue;Yonghui Fu
    • Journal of Information Processing Systems / Vol. 19, No. 4 / pp.427-438 / 2023
  • Many works have indicated that single image super-resolution (SISR) models relying on synthetic datasets are difficult to apply to real scene text image super-resolution (STISR) because of its more complex degradation. The most up-to-date dataset for realistic STISR is TextZoom, but the current methods trained on this dataset have not considered the effect of the multi-scale features of text images. In this paper, a multi-scale and attention fusion model for realistic STISR is proposed. A multi-scale learning mechanism is introduced to acquire sophisticated feature representations of text images; spatial and channel attention are introduced to capture the local information and the inter-channel interaction information of text images; finally, a multi-scale residual attention module is designed by skillfully fusing the multi-scale learning and attention mechanisms. Experiments on TextZoom demonstrate that the proposed model increases the average scene text recognition (ASTER) accuracy by 1.2% compared to the text super-resolution network.
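
A channel attention gate of the kind fused into such a module can be sketched in pure Python. This is a deliberate simplification: the paper's module also includes spatial attention, multi-scale branches, and learned weights, none of which appear here.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(feature_maps):
    """feature_maps: list of channels, each a 2-D list of activations.
    Squeeze each channel by global average pooling, then reweight it."""
    out = []
    for ch in feature_maps:
        vals = [v for row in ch for v in row]
        pooled = sum(vals) / len(vals)  # squeeze: global average pool
        gate = sigmoid(pooled)          # excite: per-channel weight in (0, 1)
        out.append([[v * gate for v in row] for row in ch])
    return out

maps = [[[2.0, 2.0], [2.0, 2.0]],       # strong channel -> gate near 1
        [[-2.0, -2.0], [-2.0, -2.0]]]   # weak channel -> gate near 0
gated = channel_attention(maps)
print(round(gated[0][0][0], 3), round(gated[1][0][0], 3))
```

Gating channels this way lets the network emphasize the feature maps that carry stroke detail, which is the inter-channel interaction the abstract refers to.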