• Title/Summary/Keyword: Image-text generation

Research on Core patent mining methods based on key components of Generative AI (생성형 인공지능 기술의 핵심 구성 요소 기반 주요 특허 발굴 방법에 관한 연구)

  • Gayun Kim;Beom-Seok Kim;Jinhong Yang
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.16 no.5
    • /
    • pp.292-300
    • /
    • 2023
  • This paper proposes a patent discovery method and strategy for Generative AI-related patents, using qualitative evaluation indicators established from the core components of the technology. Patent quality evaluation currently relies on quantitative indicators, but these cannot capture the characteristics of Generative AI technology, making accurate evaluation difficult. Additional qualitative indicators are therefore needed that consider technical characteristics based on patent claims, which reveal the actual strength of a patent. In this paper, we propose a new evaluation index that reflects the technical characteristics of Generative AI. Core patents were selected using the proposed index, and its appropriateness was verified by applying the existing quantitative evaluation method to the selected core patents.
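
The blending of quantitative and qualitative indicators that the abstract describes could look something like the sketch below. The indicator names, normalization caps, and weights are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of combining quantitative and qualitative patent
# indicators into one composite score, in the spirit of the paper's approach.
# All indicator names and weights below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PatentIndicators:
    forward_citations: int     # quantitative: how often the patent is cited
    family_size: int           # quantitative: number of jurisdictions filed in
    claim_coverage: float      # qualitative (0-1): claims cover core GenAI components
    claim_specificity: float   # qualitative (0-1): technical depth of the claims

def composite_score(p: PatentIndicators,
                    w_quant: float = 0.5, w_qual: float = 0.5) -> float:
    """Weighted blend of normalized quantitative and qualitative indicators."""
    quant = 0.6 * min(p.forward_citations / 100, 1.0) + 0.4 * min(p.family_size / 20, 1.0)
    qual = 0.5 * p.claim_coverage + 0.5 * p.claim_specificity
    return w_quant * quant + w_qual * qual

# Rank candidate patents by the blended score; the top entries are the
# candidate core patents to be validated against quantitative evaluation.
candidates = [PatentIndicators(120, 8, 0.9, 0.7), PatentIndicators(15, 3, 0.4, 0.5)]
print(sorted((composite_score(p) for p in candidates), reverse=True))
```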

Weaving the realities with video in multi-media theatre centering on Schaubuhne's Hamlet and Lenea de Sombra's Amarillo (멀티미디어 공연에서 비디오를 활용한 리얼리티 구축하기 - 샤우뷔네의 <햄릿>과 리니아 드 솜브라의 <아마릴로>를 중심으로 -)

  • Choi, Young-Joo
    • Journal of Korean Theatre Studies Association
    • /
    • no.53
    • /
    • pp.167-202
    • /
    • 2014
  • When video composes the mise-en-scène of a performance, it reflects contemporary image culture, in which individuals join as creators through the cell phones and computers that remediate earlier video technology. It is also closely related to a contemporary theatre culture in which the video art of the 1960s and 1970s was woven into performance theatre. Against this cultural background, theatre practitioners regarded media-friendly mise-en-scène as an alternative, facing a cultural landscape in which the linear representational narrative no longer corresponded to the present culture. Nonetheless, it cannot be ignored that video in performance theatre remediates its historical functions: to criticize social reality, and to enrich aesthetic or emotional reality. I focused on how video in performance theatre can feature the object through the image by realizing real-time relay, emphasizing the situation within the frame, and strengthening reality by alluding to the object as a gesture. I then explored its two historical functions. First, in its critical function, video recorded the spot, communicated information, and aroused the audience's recognition of the object. Second, in its aesthetic function, video in performance theatre could redistribute perception through editing methods such as close-up, slow motion, multiple perspective, montage and collage, and transformation of the image. Keeping these historical functions in mind, I analyzed two productions introduced to Korean audiences during the 2010 Seoul Theatre Olympics: Schaubuhne's Hamlet and Lenea de Sombra's Amarillo. It is known that Ostermeier took real social reality as a text and made the play its context; he used video as a vehicle to penetrate social reality through the hero's perspective. It is also noteworthy that Ostermeier understood Hamlet's dilemma as the propensity of today's young generation, who delay action while immersed in image culture. His use of video in the piece also revitalized video's aesthetic function through a hypermedial mode of perception. Amarillo combined documentary theatre methods with installation, physical theatre, and on-the-spot video relay, activating the aesthetic function through intermediality, the interacting co-relationship between the media. In this performance, video recorded and pursued the absent presence of the real people who died or were lost in the desert, while at the same time giving fantastic form to their emotional state at the moment of death, which would otherwise remain opaque or non-prominent. In conclusion, I found that video in contemporary performance theatre visualizes the rupture between media and performs their intermediality. It disturbs transparent immediacy in order to invoke the spectator's perception of the theatrical situation, to open its emotional and spiritual dimension, and to remind us of the realities, as in Schaubuhne's Hamlet and Lenea de Sombra's Amarillo.

Web-based Medical Image Presentation (웹기반 의료영상 프레젠테이션)

  • 김동현;송승헌;김응곤
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.7 no.5
    • /
    • pp.964-971
    • /
    • 2003
  • With the development of information processing technology and computer hardware, PACS systems have been installed in many hospitals. They remarkably increase the efficiency and convenience of handling medical images as digitized data. By comparing the generated images with other cases, physicians can read the images correctly and decide how to treat the patient. If the results, including the test method and the specialist's opinion, are presented dynamically on the hospital's homepage, visitors can gain direct experience and better understand the fields of examination and medical treatment. In this thesis, we present, in SMIL based on XML and with synchronization in mind, effective images such as MR images of abnormal cases organized by body part and disease, movie and still images such as Angio images, and other multimedia materials such as the sound and text of doctors' opinions.
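
As a rough illustration of the SMIL-based presentation described above, the following Python sketch generates a minimal SMIL file that plays an MR image, a recorded opinion, and a text caption in parallel. All file names and region ids are hypothetical.

```python
# Minimal sketch of generating a SMIL presentation of the kind the paper
# describes: an MR image shown in parallel with the doctor's recorded opinion
# and a text caption. File names and region ids are hypothetical.
import xml.etree.ElementTree as ET

smil = ET.Element("smil")
head = ET.SubElement(smil, "head")
layout = ET.SubElement(head, "layout")
ET.SubElement(layout, "region", id="image_area", width="512", height="512")
ET.SubElement(layout, "region", id="caption_area", top="512", height="40")

body = ET.SubElement(smil, "body")
# <par> plays its children concurrently, which is how SMIL expresses the
# synchronization ("concurrency") the abstract mentions.
par = ET.SubElement(body, "par", dur="30s")
ET.SubElement(par, "img", src="mr_abdomen_case01.jpg", region="image_area")
ET.SubElement(par, "audio", src="doctor_opinion_case01.wav")
ET.SubElement(par, "text", src="diagnosis_case01.txt", region="caption_area")

ET.ElementTree(smil).write("case01.smil", encoding="utf-8", xml_declaration=True)
```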

GPT-enabled SNS Sentence writing support system Based on Image Object and Meta Information (이미지 객체 및 메타정보 기반 GPT 활용 SNS 문장 작성 보조 시스템)

  • Dong-Hee Lee;Mikyeong Moon;Bong-Jun Choi
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.24 no.3
    • /
    • pp.160-165
    • /
    • 2023
  • In this study, we propose an SNS sentence-writing assistance system that uses YOLO and GPT to help users write texts accompanied by images, as on SNS. The YOLO model extracts objects from images inserted during writing; meta-information such as GPS data and creation time is also extracted, and both are used as prompt values for GPT. The YOLO model was trained on form image data, and its mAP score is about 0.25 on average. GPT was trained on 1,000 blog posts on the topic of 'restaurant reviews', and the trained model was used to generate sentences from the two types of keywords extracted from the images. A closed-ended survey was conducted to evaluate the practicality of the generated sentences and to allow a clear analysis of the results. The questionnaire presented the inserted image together with the keyword-based sentences and comprised three evaluation items. The results showed that the keywords extracted from the images produced meaningful sentences. Through this study, we found that the accuracy of image-based sentence generation depends on the relationship between the image keywords and the content GPT was trained on.
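
A minimal sketch of the described pipeline is given below, assuming the ultralytics YOLO package, Pillow for EXIF extraction, and the OpenAI client. The paper used its own trained YOLO and GPT models, so the model names and prompt wording here are illustrative only.

```python
# Sketch: extract object keywords (YOLO) and meta-information (EXIF), then
# combine them into a GPT prompt, as the paper's pipeline suggests.
from PIL import Image
from ultralytics import YOLO
from openai import OpenAI

def build_prompt(image_path: str) -> str:
    # 1) Extract object keywords from the image with YOLO.
    detector = YOLO("yolov8n.pt")  # stand-in for the paper's trained model
    result = detector(image_path)[0]
    objects = {result.names[int(c)] for c in result.boxes.cls}

    # 2) Extract meta-information (creation time) from the EXIF tags.
    exif = Image.open(image_path).getexif()
    created = exif.get(306, "unknown time")  # EXIF tag 306 = DateTime

    # 3) Combine both keyword types into a prompt for GPT.
    return (f"Write a short SNS post about a restaurant visit. "
            f"Objects in the photo: {', '.join(sorted(objects))}. "
            f"Taken at: {created}.")

client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; the paper fine-tuned its own GPT model
    messages=[{"role": "user", "content": build_prompt("lunch.jpg")}],
)
print(reply.choices[0].message.content)
```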

Implementation of Character and Object Metadata Generation System for Media Archive Construction (미디어 아카이브 구축을 위한 등장인물, 사물 메타데이터 생성 시스템 구현)

  • Cho, Sungman;Lee, Seungju;Lee, Jaehyeon;Park, Gooman
    • Journal of Broadcast Engineering
    • /
    • v.24 no.6
    • /
    • pp.1076-1084
    • /
    • 2019
  • In this paper, we introduce a system that extracts metadata by recognizing characters and objects in media using deep learning. In the broadcasting field, multimedia contents such as video, audio, image, and text have long been converted to digital form, but vast unconverted resources still remain. Building media archives requires a great deal of manual work, which is time-consuming and costly. By implementing a deep learning-based metadata generation system, both time and cost can be saved in constructing media archives. The whole system consists of four elements: a training data generation module, an object recognition module, a character recognition module, and an API server. The deep learning network module and the face recognition module recognize characters and objects in the media and describe them as metadata. The training data generation module was designed separately to facilitate building data for training the neural networks, and the face and object recognition functions were exposed through the API server. We trained the two neural networks on data for 1,500 persons and 80 kinds of objects, and confirmed accuracies of 98% on the character test data and 42% on the object data.
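
The API-server element might be organized as in the following Flask sketch. Here recognize_faces and recognize_objects are placeholders for the paper's trained recognition modules, and the route name and response shape are hypothetical.

```python
# Sketch of a metadata API server of the kind the paper describes, assuming
# Flask. The recognition functions are placeholders for the trained modules.
from flask import Flask, request, jsonify

app = Flask(__name__)

def recognize_faces(frame_bytes):    # placeholder: face recognition module
    return [{"name": "person_001", "confidence": 0.98}]

def recognize_objects(frame_bytes):  # placeholder: object recognition module
    return [{"label": "car", "confidence": 0.42}]

@app.post("/metadata")
def metadata():
    frame = request.files["frame"].read()
    # Metadata for the archive: who and what appears in this frame.
    return jsonify({
        "characters": recognize_faces(frame),
        "objects": recognize_objects(frame),
    })

if __name__ == "__main__":
    app.run(port=8080)
```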

Business Application of Convolutional Neural Networks for Apparel Classification Using Runway Image (합성곱 신경망의 비지니스 응용: 런웨이 이미지를 사용한 의류 분류를 중심으로)

  • Seo, Yian;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.1-19
    • /
    • 2018
  • Large amounts of data are now available for the research and business sectors to extract knowledge from. These data can take the form of unstructured data such as audio, text, and images, and can be analyzed by deep learning methods. Deep learning is now widely used for various estimation, classification, and prediction problems. In particular, the fashion business has adopted deep learning techniques for apparel recognition, apparel search and retrieval engines, and automatic product recommendation. The core model behind these applications is image classification using Convolutional Neural Networks (CNN). A CNN is made up of neurons that learn parameters such as weights as inputs pass through toward the outputs. Its layer structure is well suited to image classification: convolutional layers generate feature maps, pooling layers reduce the dimensionality of the feature maps, and fully connected layers classify the extracted features. However, most classification models have been trained on online product images, which are taken under controlled conditions, such as the apparel alone or a professional model wearing it. Such images may not train the classifier effectively when one wants to classify street fashion or walking images, which are taken in uncontrolled conditions and involve people's movement and unexpected poses. We therefore propose to train the model on a runway apparel image dataset that captures mobility. This allows the classification model to be trained on far more variable data and improves its adaptation to diverse query images. To achieve both convergence and generalization of the model, we apply transfer learning to the training network. Since transfer learning in CNNs consists of pre-training and fine-tuning stages, we divide training into two steps. First, we pre-train the architecture on a large-scale dataset, ImageNet, which consists of 1.2 million images in 1,000 categories including animals, plants, activities, materials, instruments, scenes, and foods. We use GoogLeNet as the main architecture, as it achieved high accuracy with efficiency in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Second, we fine-tune the network on our own runway image dataset. As we could not find any previously published runway image dataset, we collected one from Google Image Search, obtaining 2,426 images of 32 major fashion brands: Anna Molinari, Balenciaga, Balmain, Brioni, Burberry, Celine, Chanel, Chloe, Christian Dior, Cividini, Dolce and Gabbana, Emilio Pucci, Ermenegildo, Fendi, Giuliana Teso, Gucci, Issey Miyake, Kenzo, Leonard, Louis Vuitton, Marc Jacobs, Marni, Max Mara, Missoni, Moschino, Ralph Lauren, Roberto Cavalli, Sonia Rykiel, Stella McCartney, Valentino, Versace, and Yves Saint Laurent. We performed 10-fold experiments to account for the random generation of training data, and the proposed model achieved 67.2% accuracy on the final test. To the best of our knowledge, no previous study has trained a network for apparel image classification on a runway image dataset; we suggest training the model with images that capture all possible postures, which we denote as mobility, using our own runway apparel image dataset. Moreover, by applying transfer learning and using the checkpoints and parameters provided by TensorFlow-Slim, we could reduce the time spent training the classification model to about 6 minutes per experiment. The model can be used in many business applications where the query image may be a runway image, a product image, or a street fashion image. Specifically, runway query images can support brand search in a mobile application service during fashion week, street style query images can be classified and labeled by brand or style during fashion editorial work, and website query images can be processed by e-commerce multi-complex services that provide item information or recommend similar items.
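
The two-stage transfer learning could be sketched as follows, here in PyTorch/torchvision rather than the TensorFlow-Slim toolchain the authors used. The dataset path and hyperparameters are illustrative.

```python
# Transfer-learning sketch of the two-stage setup described above: start from
# GoogLeNet pre-trained on ImageNet (stage 1), then fine-tune a 32-way brand
# classifier on the runway dataset (stage 2).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_BRANDS = 32  # the paper's 32 fashion brands

# Stage 1 (pre-training) is inherited through the ImageNet weights.
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
# Stage 2 (fine-tuning): replace the 1000-way ImageNet head with a 32-way head.
model.fc = nn.Linear(model.fc.in_features, NUM_BRANDS)

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Hypothetical layout: one sub-folder of runway images per brand.
train_set = datasets.ImageFolder("runway_images/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

model.train()
for images, labels in loader:  # one pass; the paper ran 10-fold experiments
    optimizer.zero_grad()
    outputs = model(images)
    # GoogLeNet may return a namedtuple with auxiliary logits in train mode.
    logits = outputs.logits if hasattr(outputs, "logits") else outputs
    criterion(logits, labels).backward()
    optimizer.step()
```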

The way to make training data for deep learning model to recognize keywords in product catalog image at E-commerce (온라인 쇼핑몰에서 상품 설명 이미지 내의 키워드 인식을 위한 딥러닝 훈련 데이터 자동 생성 방안)

  • Kim, Kitae;Oh, Wonseok;Lim, Geunwon;Cha, Eunwoo;Shin, Minyoung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.1-23
    • /
    • 2018
  • Since the beginning of the 21st century, various high-quality services have emerged with the growth of the internet and information and communication technologies. In particular, the E-commerce industry, in which Amazon and eBay stand out, is growing explosively. As E-commerce grows, customers can easily compare products and find what they want to buy, because more products are registered at online shopping malls. However, this growth has created a problem: with so many products registered, it has become difficult for customers to find what they really need in the flood of products. When customers search with a generalized keyword, too many products come up as results; conversely, few products are found when customers type in product details, because concrete product attributes are rarely registered. In this situation, automatically recognizing the text in images can be a solution. Because the bulk of product details are written in catalogs in image format, most product information cannot be found by text input in the current text-based search systems. If the information in these images can be converted to text, customers can search by product details, which makes shopping more convenient. Various existing OCR (Optical Character Recognition) programs can recognize text in images, but they are hard to apply to catalogs because they fail in certain circumstances, for example when the text is too small or the fonts are inconsistent. This research therefore suggests a way to recognize keywords in catalogs with deep learning, the state of the art in image recognition since the 2010s. The Single Shot MultiBox Detector (SSD), a model with well-established object detection performance, can be used with its structure redesigned to account for the differences between text and ordinary objects. However, because deep learning models must be trained by supervised learning, the SSD model needs a large amount of labeled training data. Data could be collected by manually labeling the location and class of the text in catalogs, but manual collection raises many problems: some keywords would be missed through human error; collection would be too time-consuming given the scale of data needed, or too costly if many workers were hired to shorten the time; and finding images that contain specific keywords to be trained would also be difficult. To solve this data issue, this research developed a program that creates training data automatically. The program composes images containing various keywords and pictures, like a catalog, and saves the location information of the keywords at the same time. With this program, data can be collected efficiently and the performance of the SSD model improves: the model recorded a recognition rate of 81.99% with 20,000 data samples created by the program. Moreover, this research tested the efficiency of the SSD model across data variations to analyze which features of the data influence the performance of recognizing text in images. The results show that the number of labeled keywords, the addition of overlapping keyword labels, the presence of unlabeled keywords, the spacing between keywords, and differences in background images are all related to the performance of the SSD model. This test can guide performance improvement of the SSD model, or of other deep learning-based text recognizers, through high-quality data. The SSD model redesigned to recognize text in images and the program developed for creating training data are expected to contribute to improving search systems in E-commerce: suppliers can spend less time registering keywords for products, and customers can search for products using the details written in the catalog.
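
The automatic training data generator might work along the lines of the sketch below, which pastes keyword text onto a catalog-like image and records each keyword's bounding box as it is drawn. The fonts, keywords, and output paths are assumptions; the paper's program composes richer catalog-like pictures.

```python
# Sketch of automatic training data generation: render keywords onto an image
# and record their bounding boxes, so labels are correct by construction.
import json
import random
from PIL import Image, ImageDraw, ImageFont

KEYWORDS = ["cotton", "waterproof", "free shipping", "XL"]  # hypothetical
FONT = ImageFont.truetype("NanumGothic.ttf", 24)            # hypothetical font file

def make_sample(index: int, size=(600, 800)):
    img = Image.new("RGB", size, "white")  # or paste a product photo here
    draw = ImageDraw.Draw(img)
    labels = []
    y = 20
    for word in random.sample(KEYWORDS, k=3):
        x = random.randint(20, 200)
        draw.text((x, y), word, font=FONT, fill="black")
        # textbbox gives the exact box of the drawn text, so no manual
        # annotation is needed.
        x0, y0, x1, y1 = draw.textbbox((x, y), word, font=FONT)
        labels.append({"text": word, "bbox": [x0, y0, x1, y1]})
        y = y1 + random.randint(20, 60)
    img.save(f"train_{index:05d}.png")
    with open(f"train_{index:05d}.json", "w") as f:
        json.dump(labels, f)

for i in range(10):  # the paper generated 20,000 samples this way
    make_sample(i)
```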

Design and Implementation of an HTML Converter Supporting Frame for the Wireless Internet (무선 인터넷을 위한 프레임 지원 HTML 변환기의 설계 및 구현)

  • Han, Jin-Seop;Park, Byung-Joon
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.42 no.6
    • /
    • pp.1-10
    • /
    • 2005
  • This paper describes the implementation of an HTML converter for wireless internet access in a Wireless Application Protocol (WAP) environment. The implemented converter consists of a contents conversion module, a conversion rule set, a WML file generation module, and a frame contents reformatting module. Plain text contents are converted to WML through one-to-one mapping, with the contents conversion module referring to the conversion rule set. For frame contents, the frameset source is parsed first, the request messages are reconstructed with all the frame file names, and the converter reconnects to the web server once per file to receive each document and append it to the first. Finally, after processing in the frame contents reformatting module, frame contents are converted into WML table contents. For image map contents, the image-map-related tags are parsed, and the names of the HTML documents linked from the map are extracted, replaced with WML content data, and linked to those contents. The proposed conversion method for frame contents provides a better interface for user convenience and interaction than existing converters, and the conversion of image maps is a feature not currently supported by other converters.
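
The frame-handling step could be sketched as follows with Python's standard library: parse the frame sources out of a frameset page, fetch each frame document separately, and lay the converted results out as one WML table. Here convert_html_to_wml is a placeholder for the converter's rule-based module, and the WML markup is a minimal illustration.

```python
# Rough sketch of the frameset-to-WML-table idea described above.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class FrameCollector(HTMLParser):
    """Collect the src attribute of every <frame> tag in a frameset page."""
    def __init__(self):
        super().__init__()
        self.sources = []
    def handle_starttag(self, tag, attrs):
        if tag == "frame":
            src = dict(attrs).get("src")
            if src:
                self.sources.append(src)

def convert_html_to_wml(html: str) -> str:
    return html  # placeholder: the real module maps HTML to WML one-to-one

def frameset_to_wml_table(url: str) -> str:
    collector = FrameCollector()
    collector.feed(urlopen(url).read().decode("utf-8", "replace"))
    # One server round-trip per frame document, as the paper describes.
    cells = [convert_html_to_wml(
                 urlopen(urljoin(url, src)).read().decode("utf-8", "replace"))
             for src in collector.sources]
    rows = "".join(f"<tr><td>{c}</td></tr>" for c in cells)
    return f'<wml><card><p><table columns="1">{rows}</table></p></card></wml>'
```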

Proposal for License Plate Recognition Using Synthetic Data and Vehicle Type Recognition System (가상 데이터를 활용한 번호판 문자 인식 및 차종 인식 시스템 제안)

  • Lee, Seungju;Park, Gooman
    • Journal of Broadcast Engineering
    • /
    • v.25 no.5
    • /
    • pp.776-788
    • /
    • 2020
  • In this paper, a vehicle type recognition system and a license plate recognition system using deep learning are proposed. Existing systems extract the license plate area through image processing and recognize characters with a DNN, and their recognition rates decline as the environment changes. The proposed system therefore uses the one-stage object detection method YOLO v3, chosen for real-time detection and for robustness to the accuracy drop caused by environmental changes, enabling real-time vehicle type and license plate character recognition with a single RGB camera. The training data consist of real data for vehicle type recognition and license plate area detection, and synthetic data for license plate character recognition. The accuracy of each module was 96.39% for vehicle model detection, 99.94% for license plate detection, and 79.06% for license plate character recognition. In addition, accuracy was measured using YOLO v3-tiny, a lightweight version of YOLO v3.
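
The two-stage inference might be sketched as below with OpenCV's Darknet loader: one YOLOv3 network finds the vehicle and the plate, and a second reads characters from the cropped plate. The config and weight file names stand in for the authors' trained models, and class filtering and non-maximum suppression are omitted for brevity.

```python
# Sketch of two-stage YOLOv3 inference: detect the plate, crop it, then run
# a character-recognition network on the crop.
import cv2
import numpy as np

vehicle_net = cv2.dnn.readNetFromDarknet("vehicle_plate.cfg", "vehicle_plate.weights")
char_net = cv2.dnn.readNetFromDarknet("plate_chars.cfg", "plate_chars.weights")

def detect(net, image, size=(416, 416)):
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, size, swapRB=True, crop=False)
    net.setInput(blob)
    return net.forward(net.getUnconnectedOutLayersNames())

frame = cv2.imread("street.jpg")
outputs = detect(vehicle_net, frame)

# Keep the highest-objectness box (class filtering omitted), then crop it.
h, w = frame.shape[:2]
best = max((det for out in outputs for det in out), key=lambda d: d[4])
cx, cy, bw, bh = best[:4] * np.array([w, h, w, h])
x0, y0 = max(int(cx - bw / 2), 0), max(int(cy - bh / 2), 0)
plate = frame[y0:y0 + int(bh), x0:x0 + int(bw)]

# Second stage: character detection on the plate crop.
char_outputs = detect(char_net, plate)
print("character detections:", sum(len(o) for o in char_outputs))
```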

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.175-197
    • /
    • 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis has been active, with remarkable results in fields such as classification, summarization, and generation. Among text analysis tasks, text classification is the most widely used in academia and industry. Text classification includes binary classification with one label from two classes, multi-class classification with one label from several classes, and multi-label classification with multiple labels from several classes. Multi-label classification in particular requires a different training method because instances carry multiple labels; moreover, since the number of labels to predict grows with the number of labels and classes, prediction becomes harder and performance improvement is limited. To overcome these limitations, research on label embedding is being actively conducted: (i) compress the initially given high-dimensional label space into a low-dimensional latent label space, (ii) train a model to predict the compressed label, and (iii) restore the predicted label to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, because these techniques consider only linear relationships between labels or compress labels by random transformation, they cannot capture non-linear relationships between labels, and thus cannot create a latent label space that sufficiently preserves the information of the original labels. Recently, there have been increasing attempts to improve performance by applying deep learning to label embedding, most notably with an autoencoder, a deep learning model effective for data compression and restoration. However, traditional autoencoder-based label embedding suffers a large information loss when compressing a high-dimensional label space with a vast number of classes into a low-dimensional latent space; this is related to the vanishing gradient problem that occurs during backpropagation. Skip connections were devised to solve this problem: by adding a layer's input to its output, gradient loss during backpropagation is prevented, and efficient learning remains possible even in deep networks. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies using them in autoencoders or in the label embedding process are still scarce. In this study, we therefore propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder, forming a low-dimensional latent label space that reflects the information of the high-dimensional label space well. We applied the proposed methodology to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space. Using these, we conducted an experiment in which the compressed keyword vector in the latent label space was predicted from the paper abstract, restored to the original label space, and evaluated as a multi-label classification. The accuracy, precision, recall, and F1 score used as performance indicators showed far superior performance for multi-label classification based on the proposed methodology compared with traditional multi-label classification methods. This indicates that the low-dimensional latent label space derived through the proposed methodology reflected the information of the high-dimensional label space well, which ultimately improved the performance of the multi-label classification itself. In addition, we identified the utility of the proposed methodology by comparing its performance across domain characteristics and across the number of dimensions of the latent label space.
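
A minimal PyTorch sketch of the proposed label-embedding autoencoder is given below, with a skip connection added around a hidden layer in both the encoder and the decoder. All dimensions are illustrative, not the paper's.

```python
# Sketch: an autoencoder that compresses a high-dimensional multi-hot label
# vector into a low-dimensional latent label space, with skip connections.
import torch
import torch.nn as nn

class SkipBlock(nn.Module):
    """Linear layer whose input is added back to its output (skip connection)."""
    def __init__(self, dim):
        super().__init__()
        self.fc = nn.Linear(dim, dim)
        self.act = nn.ReLU()
    def forward(self, x):
        return self.act(self.fc(x) + x)  # add the input to the output

class LabelAutoencoder(nn.Module):
    def __init__(self, n_labels=1000, hidden=256, latent=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_labels, hidden), nn.ReLU(),
            SkipBlock(hidden),
            nn.Linear(hidden, latent),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent, hidden), nn.ReLU(),
            SkipBlock(hidden),
            nn.Linear(hidden, n_labels),  # logits over the original label space
        )
    def forward(self, y):
        z = self.encoder(y)  # low-dimensional latent label vector
        return self.decoder(z), z

model = LabelAutoencoder()
y = torch.randint(0, 2, (8, 1000)).float()  # batch of multi-hot label vectors
recon, latent = model(y)
loss = nn.BCEWithLogitsLoss()(recon, y)     # reconstruction objective
loss.backward()
print(latent.shape)  # torch.Size([8, 32])
```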