• Title/Summary/Keyword: Text-based Image Generation (텍스트 기반 이미지 생성)

Agricultural Applicability of AI based Image Generation (AI 기반 이미지 생성 기술의 농업 적용 가능성)

  • Seungri Yoon;Yeyeong Lee;Eunkyu Jung;Tae In Ahn
    • Journal of Bio-Environment Control, v.33 no.2, pp.120-128, 2024
  • Since ChatGPT was released in 2022, the generative artificial intelligence (AI) industry has seen massive growth and is expected to bring significant innovation to cognitive tasks. AI-based image generation, in particular, is driving major changes in the digital world. This study investigates the technical foundations of three notable AI image generation tools (Midjourney, Stable Diffusion, and Firefly) and compares their effectiveness by examining the images they produce. The results show that these tools can generate realistic images of tomatoes, strawberries, paprikas, and cucumbers, all typical greenhouse crops. Firefly in particular stood out for its ability to produce highly realistic images of greenhouse-grown crops. However, all of the tools struggled to fully capture the environmental context of the greenhouses in which these crops grow. Refining prompts and using reference images proved effective for accurately generating images of strawberry fruits and their cultivation systems. For cucumber images, the tools produced results very close to real photographs, with no significant differences in their evaluation scores. This study demonstrates how AI-based image generation technology can be applied in agriculture and suggests a promising future for its use in the field.
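
The prompt-refinement workflow described above can be illustrated with a short script. The sketch below is a minimal example of text-to-image generation with Stable Diffusion via the Hugging Face diffusers library; the checkpoint ID, prompt wording, and sampling parameters are illustrative assumptions, not the paper's actual setup.

```python
# A minimal sketch of text-to-image generation for greenhouse crops,
# assuming the Hugging Face diffusers library and a public Stable
# Diffusion checkpoint; prompt and settings are illustrative, not
# the configuration used in the paper.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed public checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Iteratively refined prompt: start from the crop, then add the
# environmental context the paper found hardest to capture.
prompt = (
    "ripe strawberries on a hanging gutter cultivation system "
    "inside a glass greenhouse, photorealistic, natural daylight"
)
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("strawberry_greenhouse.png")
```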

A Study on Image Generation from Sentence Embedding Applying Self-Attention (Self-Attention을 적용한 문장 임베딩으로부터 이미지 생성 연구)

  • Yu, Kyungho;No, Juhyeon;Hong, Taekeun;Kim, Hyeong-Ju;Kim, Pankoo
    • Smart Media Journal, v.10 no.1, pp.63-69, 2021
  • When a person reads and understands a sentence, they do so by associating the key words of the sentence with images. Text-to-image generation lets computers perform this associative process. Previous deep learning-based text-to-image models extract text features with a Convolutional Neural Network (CNN)-Long Short-Term Memory (LSTM) network or a bi-directional LSTM and feed them into a GAN to generate an image. These models use basic embeddings for text feature extraction and take a long time to train because images are generated through several modules. In this research, we therefore propose extracting features by applying the attention mechanism, which has improved performance in the natural language processing field, to sentence embedding, and generating an image by feeding the extracted features into a GAN. In our experiments, the Inception Score was higher than that of the model used in previous work, and visual inspection showed that the generated images express the features of the input sentence well, even for long input sentences.
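
A minimal sketch of the pipeline's shape, in PyTorch: a self-attention encoder pools token embeddings into a sentence embedding, which conditions a GAN generator together with a noise vector. The dimensions and layers are illustrative assumptions, not the paper's architecture.

```python
# Sketch: self-attention sentence encoder conditioning a GAN generator.
# All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    def __init__(self, vocab_size=10000, d_model=256, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, token_ids):
        x = self.embed(token_ids)      # (B, T, d_model)
        x, _ = self.attn(x, x, x)      # self-attention over tokens
        return x.mean(dim=1)           # (B, d_model) sentence embedding

class ConditionalGenerator(nn.Module):
    def __init__(self, d_model=256, z_dim=100, img_dim=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + d_model, 1024), nn.ReLU(),
            nn.Linear(1024, img_dim), nn.Tanh(),
        )

    def forward(self, z, sent_emb):
        return self.net(torch.cat([z, sent_emb], dim=1))

enc, gen = SentenceEncoder(), ConditionalGenerator()
tokens = torch.randint(0, 10000, (2, 12))     # dummy token ids
fake = gen(torch.randn(2, 100), enc(tokens))  # (2, 64*64*3) fake images
```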

Research on Generative AI for Korean Multi-Modal Montage App (한국형 멀티모달 몽타주 앱을 위한 생성형 AI 연구)

  • Lim, Jeounghyun;Cha, Kyung-Ae;Koh, Jaepil;Hong, Won-Kee
    • Journal of Service Research and Studies, v.14 no.1, pp.13-26, 2024
  • Multi-modal generation is the process of producing results from several kinds of information, such as text, images, and audio. With the rapid development of AI technology, a growing number of multi-modal systems synthesize different types of data to produce results. In this paper, we present an AI system that uses speech and text recognition to describe a person and generate a montage image. Whereas existing montage generation technology is based on the appearance of Westerners, the system developed in this paper trains a model on Korean facial features. It can therefore create more accurate and effective Korean montage images from Korean-specific multi-modal voice and text input. Since the developed app can serve as a montage draft, it can dramatically reduce the manual labor of montage production staff. For this purpose, we used the persona-based virtual-person montage data provided by AI-Hub of the National Information Society Agency. AI-Hub is an AI integration platform that aims to provide a one-stop service by building the training data needed for developing AI technologies and services. The image generation system was implemented using VQGAN, a deep learning model for generating high-resolution images, and KoDALLE, a Korean text-based image generation model. The trained model creates montage images of faces very similar to those described by voice and text. To verify the practicality of the app, 10 testers used it, and more than 70% responded that they were satisfied. The montage generator can be used in various fields, such as criminal investigation, where facial features must be described and visualized.
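
A minimal sketch of the multi-modal front end such a system needs is shown below: speech is transcribed to Korean text with an off-the-shelf ASR model, and the description is encoded for a downstream generator. Both checkpoints (openai/whisper-small, klue/bert-base) and the input file name are illustrative assumptions; KoDALLE's own interface is not reproduced here.

```python
# Sketch of the speech/text front end: transcribe a spoken description,
# then encode it for a text-to-image model. Checkpoints and the audio
# file are illustrative assumptions, not the paper's components.
from transformers import pipeline, AutoTokenizer, AutoModel

# 1) Speech input -> text (assumed Whisper checkpoint).
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
description = asr("suspect_description.wav")["text"]  # hypothetical file

# 2) Korean description -> embedding for the image generator
#    (assumed KLUE BERT; KoDALLE's encoder is not shown here).
tok = AutoTokenizer.from_pretrained("klue/bert-base")
enc = AutoModel.from_pretrained("klue/bert-base")
inputs = tok(description, return_tensors="pt")
text_emb = enc(**inputs).last_hidden_state.mean(dim=1)  # (1, hidden)
print(text_emb.shape)
```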

Data Augmentation of Shelf Product for Object Recognition in O2O Stores Based on Generative AI (O2O 상점의 객체 인식을 위한 생성 AI 기반의 진열대 상품 데이터 증강)

  • Jongwook Si;Sungyoung Kim
    • Proceedings of the Korean Society of Computer Information Conference, 2024.01a, pp.77-78, 2024
  • In this paper, we present a data augmentation method based on generative AI, aimed at improving the performance of the object recognition models that are essential to automating O2O stores. We show that the proposed method can use text prompts to generate a variety of high-quality images, including images of shelf products. We also propose optimized prompts for generating detailed images closer to reality, and compare the outputs of Stable Diffusion and DALL-E 2. This approach is expected to improve the performance of object recognition models.
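
A minimal sketch of this augmentation idea, assuming the Hugging Face diffusers library: one prompt template is varied over product types and random seeds to produce training images. The template, item list, and checkpoint are illustrative, not the optimized prompts the paper proposes.

```python
# Sketch: prompt-based data augmentation for shelf-product detection.
# Prompts, items, seeds, and the checkpoint are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

base = "supermarket shelf fully stocked with {item}, eye-level photo, sharp focus"
items = ["canned soda", "instant noodle cups", "snack bags"]

for item in items:
    for seed in range(3):  # several seeds per prompt for variety
        g = torch.Generator("cuda").manual_seed(seed)
        img = pipe(base.format(item=item), generator=g).images[0]
        img.save(f"aug_{item.replace(' ', '_')}_{seed}.png")
```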

Using similarity based image caption to aid visual question answering (유사도 기반 이미지 캡션을 이용한 시각질의응답 연구)

  • Kang, Joonseo;Lim, Changwon
    • The Korean Journal of Applied Statistics, v.34 no.2, pp.191-204, 2021
  • Visual question answering (VQA) and image captioning are tasks that require understanding the visual features of images and the linguistic features of text. Co-attention, which can connect image and text, may therefore be the key to both tasks. In this paper, we propose a model that achieves high VQA performance by using image captions generated with a standard transformer model pretrained on the MSCOCO dataset. Because captions unrelated to the question can interfere with answering, only captions similar to the question, as measured by a question-caption similarity, were selected. In addition, since stopwords in a caption cannot help answering and may interfere with it, the experiments were conducted after removing stopwords. Experiments on the VQA-v2 data compared the proposed model with the deep modular co-attention network (MCAN), which performs well by using co-attention between images and text. The proposed model outperformed MCAN.
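
The caption-selection step can be sketched as follows: candidate captions are scored against the question by TF-IDF cosine similarity after stopword removal, and the top-k are kept. The similarity measure here is an illustrative stand-in, not necessarily the paper's exact metric.

```python
# Sketch: select the captions most similar to the question, with
# stopwords removed. TF-IDF cosine similarity is an assumed metric.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

question = "what color is the umbrella the woman is holding"
captions = [
    "a woman holding a red umbrella on a rainy street",
    "a group of people sitting around a wooden table",
    "a woman walking with an umbrella past a shop",
]

vec = TfidfVectorizer(stop_words="english")  # stopword removal built in
mat = vec.fit_transform([question] + captions)
scores = cosine_similarity(mat[0], mat[1:]).ravel()

top_k = 2
selected = [captions[i] for i in scores.argsort()[::-1][:top_k]]
print(selected)  # captions most similar to the question
```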

Text Region Detection using Edge and Regional Minima/Maxima Transformation from Natural Scene Images (에지 및 국부적 최소/최대 변환을 이용한 자연 이미지로부터 텍스트 영역 검출)

  • Park, Jong-Cheon;Lee, Keun-Wang
    • Journal of the Korea Academia-Industrial cooperation Society, v.10 no.2, pp.358-363, 2009
  • Text region detection in natural scene images is used in a variety of applications, and much research is needed in this field. Recent methods detect text regions by combining edge-based and connected-component-based algorithms. This paper therefore proposes a text region detection method for natural scene images that uses edges and a regional minima/maxima transformation. Connected components of the edge map and of the regional minima/maxima are detected and labeled, the labeled regions are analyzed to find text candidate regions, and the detected candidates are combined into a single text-candidate image. Final text regions are validated by comparing the similarity and adjacency of individual characters. Experimental results show that, by combining edge and regional minima/maxima connected-component detection, the proposed algorithm improves the accuracy of text region detection.
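
A rough sketch of the edge-and-connected-component half of this approach is shown below using OpenCV; the regional minima/maxima transformation is omitted, and the size and aspect-ratio filters are illustrative assumptions rather than the paper's parameters.

```python
# Sketch: edge map -> morphological closing -> connected components ->
# filter by size/aspect ratio as text candidates. Only the edge-based
# half of the paper's method is shown; thresholds are assumptions.
import cv2

img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)  # assumed input image

# Edge map, then close gaps so characters form connected components.
edges = cv2.Canny(img, 100, 200)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 3))
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

n, labels, stats, _ = cv2.connectedComponentsWithStats(closed)
candidates = []
for i in range(1, n):  # label 0 is the background
    x, y, w, h, area = stats[i]
    aspect = w / float(h)
    if 50 < area < 5000 and 0.1 < aspect < 10:  # assumed text-like bounds
        candidates.append((x, y, w, h))

out = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
for x, y, w, h in candidates:
    cv2.rectangle(out, (x, y), (x + w, y + h), (0, 255, 0), 1)
cv2.imwrite("text_candidates.jpg", out)
```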

Template-based Auto Social Magazine and Video Creation Service (템플릿 기반의 자동 소셜 매거진 및 영상 합성 서비스)

  • Lee, Jae-Won;Jang, Dal-Won;Kim, Mi-Ji;Kim, Ji-Su;Kim, Seo-Yul;Lee, Jong-Seol
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2019.06a, pp.129-132, 2019
  • As natural language processing technology has grown in importance and advanced rapidly, demand for chatbots has increased across industries. This paper covers a system that uses a chatbot to generate and distribute a social magazine and then delivers it to users as a video by converting text to speech. One service has the chatbot collect and analyze user conversations, extract context-appropriate keywords, and generate and distribute a social magazine through a pipeline that includes duplicate content removal and text summarization; a second service composes a video from the image and text content of each magazine article using speech synthesis, subtitle generation, and video effects. The performance of the proposed system was verified through experiments.
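
Two steps of such a pipeline can be sketched briefly: keyword extraction from chat text with TF-IDF, and narration synthesis with text-to-speech. gTTS is used here as a stand-in synthesizer, and the chat log and summary strings are hypothetical; the paper does not specify these exact components.

```python
# Sketch: extract keywords from a chat log, then synthesize narration
# for the video. gTTS is an assumed stand-in for the paper's TTS engine.
from sklearn.feature_extraction.text import TfidfVectorizer
from gtts import gTTS

chat_log = [  # hypothetical user conversation
    "I want to plan a weekend trip to Busan",
    "Are there good seafood restaurants near Haeundae beach",
]

vec = TfidfVectorizer(stop_words="english", max_features=5)
vec.fit(chat_log)
keywords = vec.get_feature_names_out()  # top terms to drive content search
print(list(keywords))

summary = "A weekend guide to Busan: Haeundae beach and seafood spots."
gTTS(summary, lang="en").save("narration.mp3")  # narration track for the video
```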

Study on Generation of Children's Hand Drawing Learning Model for Text-to-Image (Text-to-Image를 위한 아동 손그림 학습 모델 생성 연구)

  • Lee, Eunchae;Moon, Mikyeong
    • Proceedings of the Korean Society of Computer Information Conference, 2022.07a, pp.505-506, 2022
  • Artificial intelligence technology is advancing rapidly and its applications are expanding, so its role in the creative industries is growing, influencing art, film, and other creative fields. Existing text-to-image technology can generate images in a variety of styles from a text description, but it cannot generate pictures in the style of children's hand drawings. This paper describes the process of creating a new model by training a text-to-image model on children's hand-drawing data. Through this research, we expect that the generated pixels can be combined to produce a single child-style hand drawing from text.
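
As an illustration of the data-preparation step such training requires, the sketch below builds an image-caption dataset with the Hugging Face datasets library. The file paths and captions are hypothetical, and the fine-tuning loop itself (e.g., a diffusers training script) is omitted.

```python
# Sketch: pair children's drawings with captions as a Hugging Face
# dataset ready for text-to-image fine-tuning. Paths and captions are
# hypothetical; the training loop is not shown.
from datasets import Dataset, Image

records = {
    "image": ["drawings/cat_001.png", "drawings/house_002.png"],
    "text": [
        "a child's crayon drawing of a cat",
        "a child's crayon drawing of a house with a sun",
    ],
}

ds = Dataset.from_dict(records).cast_column("image", Image())
print(ds)  # ready to feed a text-to-image fine-tuning script
```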

Automatic Target Recognition Study using Knowledge Graph and Deep Learning Models for Text and Image data (지식 그래프와 딥러닝 모델 기반 텍스트와 이미지 데이터를 활용한 자동 표적 인식 방법 연구)

  • Kim, Jongmo;Lee, Jeongbin;Jeon, Hocheol;Sohn, Mye
    • Journal of Internet Computing and Services, v.23 no.5, pp.145-154, 2022
  • Automatic target recognition (ATR) technology is emerging as a core technology of future combat systems (FCS). Conventional ATR is performed on IMINT (image intelligence) collected from SAR sensors, using various image-based deep learning models. However, although advances in IT and sensing technology have expanded the data available for ATR to HUMINT (human intelligence) and SIGINT (signal intelligence), ATR still uses only image-oriented IMINT data. In complex and diverse battlefield situations, it is difficult to guarantee high ATR accuracy and generalization performance with image data alone. We therefore propose a knowledge graph-based ATR method that can use image and text data simultaneously. The main idea is to convert ATR images and text into graphs according to the characteristics of each data type, align them to a knowledge graph, and connect the heterogeneous ATR data through the knowledge graph. To convert an ATR image into a graph, an object-tag graph whose nodes are object tags is generated from the image using a pretrained image object recognition model and the vocabulary of the knowledge graph. The ATR text, in turn, is converted into a word graph whose nodes are key ATR vocabulary, using a pretrained language model, TF-IDF, a co-occurrence word graph, and the vocabulary of the knowledge graph. The two generated graphs are connected to the knowledge graph with an entity alignment model to improve ATR performance from images and text. To demonstrate the superiority of the proposed method, 227 web documents and 61,714 RDF triples from DBpedia were collected, and comparison experiments were performed on precision, recall, and F1-score from an entity alignment perspective.
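
The text-side word graph described above can be sketched as follows: TF-IDF selects key terms, and within-sentence co-occurrence adds weighted edges, using networkx. The example sentences and limits are illustrative assumptions, and the knowledge graph alignment step is not shown.

```python
# Sketch: build a co-occurrence word graph over TF-IDF key terms.
# Sentences and max_features are illustrative; alignment to the
# knowledge graph is omitted.
import itertools
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "the radar detected an armored vehicle near the bridge",
    "an armored vehicle crossed the bridge at night",
]

vec = TfidfVectorizer(stop_words="english", max_features=6)
vec.fit(sentences)
key_terms = set(vec.get_feature_names_out())  # key vocabulary nodes

g = nx.Graph()
g.add_nodes_from(key_terms)
for sent in sentences:
    present = [w for w in sent.split() if w in key_terms]
    for u, v in itertools.combinations(set(present), 2):
        w = g.get_edge_data(u, v, {"weight": 0})["weight"]
        g.add_edge(u, v, weight=w + 1)  # co-occurrence count as weight

print(g.edges(data=True))
```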

Relationship between Images and Text in the Visual Paradox -Focusing on Case Studies of Volkswagen Ads- (시각적 패러독스에서 이미지와 텍스트의 상관관계 -폭스바겐 광고 사례의 분석을 중심으로-)

  • Kim, Jin-Gon;Park, Young-Won
    • The Journal of the Korea Contents Association, v.12 no.1, pp.176-184, 2012
  • People are exposed to various media, and since the digital revolution the media have expanded quantitatively at a rapid pace. Because of this expansion, advertising needs devices that induce a reaction from the audience, and rhetorical devices serve this purpose. This study focuses on the visual paradox among rhetorical devices because it is an effective representational device that induces audience reaction through deliberate contradiction and ambiguity. The study defines the visual paradox based on the definition and classification of paradox in logic. It also seeks to reveal the relationship between images and text in signification through metalanguage, which is important to visual paradox in advertising, and analyzes Volkswagen ads as cases to support the argument. Finally, it identifies that images and text interact to create new meaning.