• Title/Summary/Keyword: 캡션 추출 (caption extraction)

Search Results: 21

Efficient Object Classification Scheme for Scanned Educational Book Image (교육용 도서 영상을 위한 효과적인 객체 자동 분류 기술)

  • Choi, Young-Ju;Kim, Ji-Hae;Lee, Young-Woon;Lee, Jong-Hyeok;Hong, Gwang-Soo;Kim, Byung-Gyu
    • Journal of Digital Contents Society / v.18 no.7 / pp.1323-1331 / 2017
  • Although copyright has grown into a large-scale business, persistent problems remain, especially with image copyright. In this study, we propose an automatic object extraction and classification system for scanned educational book images that combines document image processing with intelligent information technology such as deep learning. The proposed technique first removes noise components and then performs a visual-attention-assessment-based region separation. Next, it groups the extracted block areas and categorizes each block as a picture or a character area. Finally, the caption area is extracted by searching around the classified picture areas. In the performance evaluation, an average accuracy of 83% was observed for extraction of the image and caption areas, and accuracy of up to 97% was verified for image region detection alone.
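
The extraction pipeline sketched in this abstract (noise removal, region separation, block grouping, picture/character labeling, then a caption search around picture blocks) could look roughly like the following OpenCV outline. It is an illustrative sketch only, not the authors' code: the thresholds, kernel sizes, and the simple density rule standing in for the deep-learning classifier are all assumptions.

```python
import cv2
import numpy as np

def extract_blocks(page_bgr):
    """Denoise a scanned page and split it into candidate block regions."""
    gray = cv2.cvtColor(page_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 3)                                # noise-component removal
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 25, 15)
    grouped = cv2.dilate(binary, np.ones((5, 15), np.uint8))      # group nearby components
    n, _, stats, _ = cv2.connectedComponentsWithStats(grouped)
    boxes = [tuple(int(v) for v in stats[i][:4]) for i in range(1, n)]  # (x, y, w, h)
    return boxes, binary

def classify_block(box, binary):
    """Toy picture/character rule: character blocks are short and sparse."""
    x, y, w, h = box
    density = binary[y:y+h, x:x+w].mean() / 255.0
    return "character" if h < 60 and density < 0.4 else "picture"

def find_caption(picture_box, boxes, labels, max_gap=40):
    """Look just below a picture block for an overlapping character block."""
    px, py, pw, ph = picture_box
    for box, label in zip(boxes, labels):
        x, y, w, h = box
        below = 0 < y - (py + ph) < max_gap
        overlaps = x < px + pw and x + w > px
        if label == "character" and below and overlaps:
            return box
    return None
```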

A Study on the Indexing Method of Moving Image for TV News (TV뉴스 영상 색인에 관한 연구)

  • 장재화
    • Proceedings of the Korean Society for Information Management Conference / 1999.08a / pp.35-38 / 1999
  • TV news footage covers a wide range of subjects, so it conveys the topic of a news item while also containing other content. Because the footage can be reused, retrieval should be possible not only from the news content but also from what the footage itself depicts. This study discusses a method for extracting index terms for TV news video not only from the narration and captions used conventionally, but also from the footage itself. Using "KBS News 9" as an example, index terms were assigned in four categories: visual content information, narration and caption information, broadcast information, and shooting information.
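
As a purely hypothetical illustration of the four indexing categories named in the abstract, an index record for a single news shot might be organized like this (field names and sample values are invented):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class NewsShotIndex:
    visual_content: List[str] = field(default_factory=list)    # what the footage itself shows
    narration_caption: List[str] = field(default_factory=list) # terms from narration and captions
    broadcast_info: List[str] = field(default_factory=list)    # program, air date, segment
    shooting_info: List[str] = field(default_factory=list)     # location, camera work, source

shot = NewsShotIndex(
    visual_content=["crowd", "city hall"],
    narration_caption=["budget vote"],
    broadcast_info=["KBS News 9", "1999-08-15"],
    shooting_info=["aerial shot"],
)
```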


Determining intensity values of characters and backgrounds in captions (캡션 내 문자와 배경의 명암값 결정)

  • An, Kwon-Jae;Kim, Gye-Young
    • Proceedings of the Korean Society of Computer Information Conference / 2010.07a / pp.125-127 / 2010
  • This paper addresses determining the intensity values of characters and background, for character recognition, in video captions whose characters and background each have relatively uniform color. First, the caption is converted to gray scale and binarized with the Otsu method [1]. Then, the maximum inscribed square is computed for the white region and for the black region of the binary image. Next, the variances of the two maximum inscribed squares are compared to decide which region is the character region and which is the background. Finally, to remove global noise, the Otsu method is applied once more to the character region to determine the final character region. The proposed method determined the intensity value of the character region with an accuracy of about 99%, showing very good performance.
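
A rough OpenCV sketch of the binarize-and-compare steps described above follows. The maximum inscribed square is only approximated here via a distance transform, and the direction of the variance comparison is an assumption made for illustration; the abstract does not spell out either detail.

```python
import cv2
import numpy as np

def caption_polarity(caption_bgr):
    """Decide whether the caption text is the white or the black region
    of the Otsu-binarized caption patch."""
    gray = cv2.cvtColor(caption_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    def inscribed_square_variance(mask):
        # Distance-transform peak approximates the largest inscribed square.
        dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
        _, r, _, (cx, cy) = cv2.minMaxLoc(dist)
        r = int(r)
        patch = gray[max(cy - r, 0):cy + r + 1, max(cx - r, 0):cx + r + 1]
        return float(np.var(patch))

    var_white = inscribed_square_variance(binary)
    var_black = inscribed_square_variance(cv2.bitwise_not(binary))
    # Assumption for this sketch: the character region shows the higher variance.
    return "white_is_text" if var_white > var_black else "black_is_text"
```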


Image captioning and video captioning using Transformer (Transformer를 사용한 이미지 캡셔닝 및 비디오 캡셔닝)

  • Gi-Duk Kim;Geun-Hoo Lee
    • Proceedings of the Korean Society of Computer Information Conference / 2023.01a / pp.303-305 / 2023
  • This paper proposes image captioning and video captioning methods that use a Transformer. Features extracted by a pre-trained image classification model are fed to the Transformer, and captions for images and videos are generated through its encoder-decoder. For image captioning, the model was trained on a Korean dataset to produce Korean captions; for video captioning, it was trained on the MSVD dataset and the resulting captions were compared with those of other video captioning models. When GPT-2, a variant of the Transformer decoder, was used to improve video captioning performance, the BLEU-1 score rose from 0.62 with the plain Transformer to 0.80 with GPT-2.
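
A minimal sketch of the described setup, assuming PyTorch and torchvision: a frozen pre-trained classification backbone supplies the feature that a Transformer decoder attends to while generating caption tokens. Vocabulary size, dimensions, and layer counts are placeholders, and the GPT-2 decoder variant is omitted.

```python
import torch
import torch.nn as nn
from torchvision import models

class CaptionTransformer(nn.Module):
    def __init__(self, vocab_size=10000, d_model=512, n_layers=3, n_heads=8):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])   # pooled 2048-d feature
        self.feat_proj = nn.Linear(2048, d_model)
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, images, caption_tokens):
        with torch.no_grad():                       # frozen pre-trained classifier
            feats = self.cnn(images).flatten(1)     # (B, 2048)
        memory = self.feat_proj(feats).unsqueeze(1)              # (B, 1, d_model)
        tgt = self.embed(caption_tokens)                         # (B, T, d_model)
        mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
        hidden = self.decoder(tgt, memory, tgt_mask=mask)
        return self.out(hidden)                                  # (B, T, vocab)
```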


Design and Implementation of Multimedia Data Retrieval System using Image Caption Information (영상 캡션 정보를 이용한 멀티미디어 데이터 검색 시스템의 설계 및 구현)

  • 이현창;배상현
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.3 / pp.630-636 / 2004
  • With the growing use of audio and video data, presenting multimedia content and retrieving, storing, and manipulating multimedia data have become the focus of recent work. A multimedia display system should easily retrieve and access the content users want to see. This study covers the design and implementation of a system that retrieves multimedia data based on document content or on the caption information of multimedia data, so that documents containing multimedia can be searched. It develops a filtering step that retrieves keywords from both the caption information of multimedia data and the text of a document. The system is also designed to retrieve large amounts of data quickly using an inverted file structure built on a B+ tree.
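
A toy sketch of keyword filtering over caption text follows. The paper describes an inverted file built on a B+ tree; here an in-memory dictionary stands in for that on-disk structure, and the document IDs and captions are invented examples.

```python
from collections import defaultdict

class CaptionIndex:
    def __init__(self):
        self.postings = defaultdict(set)   # keyword -> set of document ids

    def add_document(self, doc_id, caption_text):
        for token in caption_text.lower().split():
            self.postings[token].add(doc_id)

    def search(self, query):
        """Return documents whose caption/text contains every query keyword."""
        results = None
        for token in query.lower().split():
            docs = self.postings.get(token, set())
            results = docs if results is None else results & docs
        return results or set()

index = CaptionIndex()
index.add_document("img_001", "sunset over the han river bridge")
index.add_document("img_002", "night view of a bridge")
print(index.search("bridge river"))   # {'img_001'}
```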

Design of a Deep Neural Network Model for Image Caption Generation (이미지 캡션 생성을 위한 심층 신경망 모델의 설계)

  • Kim, Dongha;Kim, Incheol
    • KIPS Transactions on Software and Data Engineering / v.6 no.4 / pp.203-210 / 2017
  • In this paper, we propose an effective neural network model for image caption generation and model transfer. The model is a multi-modal recurrent neural network. It consists of the following layers: a convolutional neural network layer that extracts visual information from images, an embedding layer that converts each word into a low-dimensional feature, a recurrent neural network layer that learns caption sentence structure, and a multi-modal layer that combines visual and language information. The recurrent neural network layer is built from LSTM units, which are well known to be effective for learning and transferring sequence patterns. Moreover, the model has a distinctive structure in which the output of the convolutional neural network layer is linked not only to the input of the initial state of the recurrent neural network layer but also to the input of the multi-modal layer, so that the visual information extracted from the image can be used at each recurrent step when generating the corresponding caption. Through comparative experiments on open datasets such as Flickr8k, Flickr30k, and MSCOCO, we demonstrate that the proposed multi-modal recurrent neural network model achieves high performance in terms of caption accuracy and model transfer effect.
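
A compact sketch of the multi-modal recurrent structure described above, assuming PyTorch: the CNN feature both initializes the LSTM state and is fused with every LSTM step in a multi-modal layer before the word scores. Dimensions and the fusion rule (additive with tanh) are assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn
from torchvision import models

class MultimodalCaptioner(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=256, hidden_dim=512):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])   # (B, 512, 1, 1)
        self.img_to_h0 = nn.Linear(512, hidden_dim)   # image initializes the LSTM state
        self.img_to_mm = nn.Linear(512, hidden_dim)   # image also feeds the multi-modal layer
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, tokens):
        feats = self.cnn(images).flatten(1)                     # (B, 512)
        h0 = torch.tanh(self.img_to_h0(feats)).unsqueeze(0)     # initial hidden state
        c0 = torch.zeros_like(h0)
        words = self.embed(tokens)                              # (B, T, embed_dim)
        hidden, _ = self.lstm(words, (h0, c0))                  # (B, T, hidden_dim)
        # Multi-modal layer: combine visual and language features at every step.
        fused = torch.tanh(hidden + self.img_to_mm(feats).unsqueeze(1))
        return self.out(fused)                                  # (B, T, vocab)
```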

A Method for Recovering Text Regions in Video using Extended Block Matching and Region Compensation (확장적 블록 정합 방법과 영역 보상법을 이용한 비디오 문자 영역 복원 방법)

  • 전병태;배영래
    • Journal of KIISE:Software and Applications / v.29 no.11 / pp.767-774 / 2002
  • Conventional research on image restoration, mainly in the signal processing field, has focused on restoring images degraded during image formation, storage, and communication. Related work on recovering the original image information of caption regions includes a method based on the block matching algorithm (BMA). That method suffers from frequent incorrect matches and from propagating the errors those matches introduce; moreover, it cannot recover the frames between two scene changes when more than two scene changes occur. In this paper, we propose a method for recovering original images using an extended block matching algorithm (EBMA) and a region compensation method. To support the recovery, the method first extracts a priori knowledge such as scene-change, camera-motion, and caption-region information. It then decides the direction of recovery from the extracted caption information (the start and end frames of a caption) and the scene-change information and, following that direction, performs recovery in units of character components using EBMA and the region compensation method. Experimental results show that EBMA recovers well regardless of the speed of moving objects or the complexity of the background, and that the region compensation method recovers the original image successfully when there is no reference information about the original image.
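
For reference, the baseline block matching that the paper extends can be sketched as below: for a caption block in the current frame, search a window in a reference frame for the block with the lowest sum of absolute differences and copy it back. Block and search-window sizes are illustrative; the EBMA extensions and the region compensation step are not shown.

```python
import numpy as np

def block_match(cur, ref, y, x, block=16, search=8):
    """Return the (dy, dx) displacement minimizing SAD for the block at (y, x)."""
    target = cur[y:y+block, x:x+block].astype(np.int32)
    best, best_dyx = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > ref.shape[0] or xx + block > ref.shape[1]:
                continue
            cand = ref[yy:yy+block, xx:xx+block].astype(np.int32)
            sad = np.abs(target - cand).sum()
            if best is None or sad < best:
                best, best_dyx = sad, (dy, dx)
    return best_dyx

def recover_block(cur, ref, y, x, block=16):
    """Overwrite a caption block with the best-matching block from the reference frame."""
    dy, dx = block_match(cur, ref, y, x, block)
    cur[y:y+block, x:x+block] = ref[y+dy:y+dy+block, x+dx:x+dx+block]
```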

Sports Highlight Abstraction (스포츠 하이라이트 생성)

  • Kim, Mi-Ho;Shin, Seong-Yoon;Jeon, Keun-Hwan;Rhee, Yang-Weon
    • Proceedings of the Korea Information Processing Society Conference / 2001.04b / pp.1233-1236 / 2001
  • Generating highlights from video data of diverse genres is important to producers and users of multimedia content who want short, summarized highlight scenes. This paper presents a new method of generating video highlights together with a content-based, that is, event-based, video indexing method. Soccer, basketball, and handball, sports in which points are scored by goals, were targeted, and event rules were used to extract goal-scoring highlight shots. For video indexing, both the visual information of the video data itself and the caption information were used.
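
Purely as a hypothetical illustration of combining caption and visual cues in an event rule (the abstract does not give the actual rules), a goal-shot check might look like the following; every field name and condition is invented.

```python
def is_goal_highlight(shot):
    """Hypothetical event rule: flag a shot as a goal highlight."""
    score_changed = shot["caption_score_after"] != shot["caption_score_before"]
    goal_area_visible = "goal_area" in shot["visual_objects"]
    replay_or_cheer = shot["has_replay"] or shot["crowd_cheer"]
    return score_changed and goal_area_visible and replay_or_cheer

shot = {"caption_score_before": "1:0", "caption_score_after": "2:0",
        "visual_objects": ["goal_area", "player"], "has_replay": True, "crowd_cheer": False}
print(is_goal_highlight(shot))   # True
```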


Design and Implementation of MPEG-2 Compressed Video Information Management System (MPEG-2 압축 동영상 정보 관리 시스템의 설계 및 구현)

  • Heo, Jin-Yong;Kim, In-Hong;Bae, Jong-Min;Kang, Hyun-Syug
    • The Transactions of the Korea Information Processing Society / v.5 no.6 / pp.1431-1440 / 1998
  • Video data are stored and retrieved in various compressed forms according to their characteristics. In this paper, we present a generic data model that captures the structure of a video document and provides a means for indexing a video stream. Using this model, we design and implement CVIMS (the MPEG-2 Compressed Video Information Management System) for storing and retrieving video documents. CVIMS extracts I-frames from MPEG-2 files, selects key-frames from the I-frames, and stores in a database index information such as thumbnails, captions, and picture descriptors of the key-frames. CVIMS then retrieves MPEG-2 video data using the thumbnails of key-frames and various query labels.
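
The first CVIMS step described above, extracting I-frames as thumbnail key-frame candidates, can be sketched with ffmpeg's pict_type select filter. The options and file names here are an assumed, illustrative setup rather than the system's actual implementation.

```python
import subprocess

def extract_iframes(video_path, out_pattern="keyframe_%04d.jpg", width=160):
    """Dump each I-frame of an MPEG-2 file as a small thumbnail image."""
    cmd = [
        "ffmpeg", "-i", video_path,
        "-vf", f"select=eq(pict_type\\,I),scale={width}:-1",  # keep only I-frames, shrink to thumbnails
        "-vsync", "vfr",                                      # one output image per selected frame
        out_pattern,
    ]
    subprocess.run(cmd, check=True)

# extract_iframes("news.mpg")   # writes keyframe_0001.jpg, keyframe_0002.jpg, ...
```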


Anomaly Detection Methodology Based on Multimodal Deep Learning (멀티모달 딥 러닝 기반 이상 상황 탐지 방법론)

  • Lee, DongHoon;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.28 no.2 / pp.101-125 / 2022
  • Recently, with advances in computing technology and improvements in the cloud environment, deep learning has developed, and attempts to apply it to various fields are increasing. A typical example is anomaly detection, a technique for identifying values or patterns that deviate from normal data. Among the representative types of anomaly, contextual anomalies, which require understanding of the overall situation, are especially difficult to detect. In general, anomaly detection in image data uses a model pre-trained on large datasets. However, because such pre-trained models are built for object classification, they are of limited use for anomaly detection that must understand the complex situations created by multiple objects. Therefore, in this study, we propose a new two-step pre-trained model for detecting abnormal situations. Our methodology performs additional learning through image captioning so that the model understands not only individual objects but also the complex situations they create. Specifically, the proposed methodology transfers the knowledge of a pre-trained model that has learned object classification on ImageNet data to an image captioning model, and uses captions that describe the situation represented by each image. Afterwards, the weights obtained by learning situational characteristics from images and captions are extracted, and fine-tuning is performed to build the anomaly detection model. To evaluate the proposed methodology, an anomaly detection experiment was performed on 400 situational images; the results show that the proposed methodology outperforms the conventional pre-trained model in terms of anomaly detection accuracy and F1-score.
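
A conceptual sketch of the two-step transfer, assuming PyTorch/torchvision: an ImageNet-pretrained backbone is (hypothetically) further trained inside a captioning model, and its resulting weights are then fine-tuned with a small head for normal/abnormal classification. The captioning training loop is omitted and all dimensions are illustrative.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_caption_encoder():
    """Step 1: start from an ImageNet-pretrained backbone and continue training it
    inside an image-captioning model so its features encode situations, not just objects."""
    backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    backbone.fc = nn.Identity()          # expose 2048-d features to the captioner
    # ... captioning training with image/caption pairs would go here ...
    return backbone

def build_anomaly_detector(caption_encoder, freeze_backbone=True):
    """Step 2: reuse the caption-trained weights and fine-tune a small head
    that scores normal vs. abnormal situations."""
    if freeze_backbone:
        for p in caption_encoder.parameters():
            p.requires_grad = False
    head = nn.Sequential(nn.Linear(2048, 256), nn.ReLU(), nn.Linear(256, 2))
    return nn.Sequential(caption_encoder, head)

detector = build_anomaly_detector(build_caption_encoder())
scores = detector(torch.randn(1, 3, 224, 224))   # logits: [normal, abnormal]
```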