• Title/Summary/Keyword: content-based image retrieval

Search Results: 120

An XML-based Multimedia News Management System (XML 기반 멀티미디어 뉴스 관리 시스템)

  • Kim Hyon Hee;Park Seung Soo
    • The KIPS Transactions:PartB / v.11B no.7 s.96 / pp.785-792 / 2004
  • With recent progress in multimedia computing technologies, it has become necessary to retrieve diverse types of multimedia data based on multimedia content and the relationships among them. Unlike alphanumeric data, however, it is difficult to provide relevant multimedia information, because multimedia content and its relationships are implicit in the data. As a result, in multimedia news services, a representative multimedia application, most services provide related news only for text articles, while retrieval of multimedia news such as video or image news is offered independently. In this paper, we present an XML-based multimedia news management system that provides integration, retrieval, and delivery of related multimedia news. Our data model, composed of media objects, relationship objects, and view objects, represents diverse types of multimedia news content and semantically related multimedia news. In addition, the proposed view mechanism makes it possible to customize multimedia news and therefore deliver it efficiently.
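
The data model above (media, relationship, and view objects) lends itself to a small XML sketch. The element and attribute names below are hypothetical, invented for illustration rather than taken from the paper's schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical schema: the paper defines media, relationship and view
# objects, but these exact element/attribute names are illustrative.
news = ET.Element("newsItem", id="n1")
ET.SubElement(news, "mediaObject", type="video", uri="news1.mpg")
ET.SubElement(news, "relationshipObject", relates="n1 n2",
              semantics="same-event")
view = ET.SubElement(news, "viewObject", name="sports-digest")
ET.SubElement(view, "member", ref="n1")

xml_text = ET.tostring(news, encoding="unicode")
```

A view object, in this sketch, simply groups references to semantically related news items, which is what enables customized delivery.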

The Character Recognition System of Mobile Camera Based Image (모바일 이미지 기반의 문자인식 시스템)

  • Park, Young-Hyun;Lee, Hyung-Jin;Baek, Joong-Hwan
    • Journal of the Korea Academia-Industrial cooperation Society / v.11 no.5 / pp.1677-1684 / 2010
  • Recently, with the development of mobile phones and the spread of smartphones, many kinds of content have been developed. In particular, since small cameras are equipped in mobile devices, interest in image-based content development has grown, and such content has become an important part of practical use. Among these applications, character recognition systems can be widely used in blind-guidance systems, autonomous robot navigation, automatic video retrieval and indexing, and automatic text translation. This paper therefore proposes a system that extracts text regions from natural images captured by a smartphone camera, recognizes the individual characters, and outputs the result as voice. Text regions are extracted using the Adaboost algorithm, and individual characters are recognized using an error-back-propagation neural network.
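
The recognition stage rests on a plain error-back-propagation network. As a rough illustration (the toy 3×3 "glyph" data and all layer sizes below are invented, not the paper's setup), a minimal two-layer network trained by backpropagation might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy stand-ins for segmented character images (3x3 binary "glyphs"):
# class 0 = vertical stroke, class 1 = horizontal stroke.
X = np.array([[0,1,0, 0,1,0, 0,1,0],
              [0,1,0, 0,1,0, 1,1,0],
              [1,1,1, 0,0,0, 0,0,0],
              [1,1,1, 0,0,1, 0,0,0]], dtype=float)
Y = np.array([[1,0], [1,0], [0,1], [0,1]], dtype=float)

W1 = rng.normal(0.0, 0.5, (9, 6))    # input -> hidden weights
W2 = rng.normal(0.0, 0.5, (6, 2))    # hidden -> output weights
for _ in range(5000):
    H = sigmoid(X @ W1)              # forward pass
    O = sigmoid(H @ W2)
    dO = (O - Y) * O * (1 - O)       # output-layer delta (squared error)
    dH = (dO @ W2.T) * H * (1 - H)   # error back-propagated to hidden layer
    W2 -= H.T @ dO                   # full-batch gradient steps, lr = 1
    W1 -= X.T @ dH

pred = sigmoid(sigmoid(X @ W1) @ W2).argmax(axis=1)
```

After training, `pred` gives the class of each training glyph; a real system would feed it Adaboost-extracted character patches instead.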

Content-Based Image Retrieval Using Visual Features and Fuzzy Integral (시각 특징과 퍼지 적분을 이용한 내용기반 영상 검색)

  • Song Young-Jun;Kim Nam;Kim Mi-Hye;Kim Dong-Woo
    • The Journal of the Korea Contents Association / v.6 no.5 / pp.20-28 / 2006
  • This paper proposes visual-feature extraction for each band in the wavelet domain, capturing both spatial-frequency and multi-resolution features, and the combination of these visual features using a fuzzy integral. In addition, it expresses color features by the frequency of identical colors after color quantization, reducing the quantization error that is a disadvantage of the existing color-histogram intersection method. It is also shown that, when the factors (Homogram, color, energy) are independent of one another, the final similarity can be represented as a linear combination of them; over the combination patterns a fuzzy measure is defined and the fuzzy integral is taken. Experiments are performed on a database containing 1,000 color images. The proposed method gives better performance than the conventional method in both objective and subjective evaluation.
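
The fuzzy-integral combination can be sketched with a discrete Choquet integral over the three factor similarities. The measure values below are invented for illustration; note that when the measure is additive, the integral reduces to exactly the linear combination the abstract mentions:

```python
def choquet_integral(scores, mu):
    """scores: {factor: similarity in [0,1]}; mu: {frozenset: measure},
    with mu(full set) = 1. Sort factor scores ascending and weight each
    score increment by the measure of the factors still 'active'."""
    order = sorted(scores, key=scores.get)   # ascending by score
    remaining = set(scores)
    total, prev = 0.0, 0.0
    for f in order:
        total += (scores[f] - prev) * mu[frozenset(remaining)]
        prev = scores[f]
        remaining.discard(f)
    return total

# Illustrative additive measure built from weights 0.5 / 0.3 / 0.2.
w = {"homogram": 0.5, "color": 0.3, "energy": 0.2}
mu = {frozenset(s): sum(w[f] for f in s)
      for s in [{"homogram"}, {"color"}, {"energy"},
                {"homogram", "color"}, {"homogram", "energy"},
                {"color", "energy"}, {"homogram", "color", "energy"}]}
sim = choquet_integral({"homogram": 0.8, "color": 0.6, "energy": 0.4}, mu)
# With an additive measure this equals 0.5*0.8 + 0.3*0.6 + 0.2*0.4 = 0.66
```

A non-additive `mu` (e.g. rewarding agreement between color and Homogram) is where the fuzzy integral departs from a plain weighted sum.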


Efficient Representation and Matching of Object Movement using Shape Sequence Descriptor (모양 시퀀스 기술자를 이용한 효과적인 동작 표현 및 검색 방법)

  • Choi, Min-Seok
    • The KIPS Transactions:PartB / v.15B no.5 / pp.391-396 / 2008
  • The motion of an object in a video clip often plays an important role in characterizing the content of the clip. A number of methods have been developed to analyze and retrieve video content using motion information, but most of them focus on the direction or trajectory of motion rather than on the movement of the object itself. In this paper, we propose the shape sequence descriptor to describe and compare movements based on the shape deformation caused by object motion over time. Movement information is first represented as a sequence of 2D object shapes extracted from the input image sequence; each 2D shape is then converted into a 1D shape feature using a shape descriptor. The shape sequence descriptor is obtained by applying a frequency transform to the resulting descriptor sequence along the time axis. Our experimental results show that the proposed method is simple and effective for describing object movement and is applicable to semantic applications such as content-based video retrieval and human movement recognition.
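
The final step, frequency-transforming the per-frame shape features along time, can be sketched as follows (feature dimensions and coefficient counts are illustrative, not the paper's settings). Keeping FFT magnitudes makes the signature insensitive to where in its cycle the movement starts:

```python
import numpy as np

def shape_sequence_descriptor(shape_feats, n_coeffs=4):
    """shape_feats: (T, D) array, one D-dim shape feature per frame.
    Apply an FFT along the time axis per feature dimension and keep the
    magnitudes of the first n_coeffs frequency components."""
    F = np.fft.rfft(shape_feats, axis=0)   # frequency transform over time
    return np.abs(F[:n_coeffs]).ravel()    # magnitudes: phase-shift invariant

# A toy periodic "movement": two shape features oscillating over 16 frames.
t = np.arange(16)
feats = np.stack([np.sin(2 * np.pi * t / 16),
                  np.cos(2 * np.pi * t / 16)], axis=1)
d1 = shape_sequence_descriptor(feats)
d2 = shape_sequence_descriptor(np.roll(feats, 5, axis=0))  # later start
```

Because FFT magnitudes are invariant to circular time shifts, `d1` and `d2` match even though the two sequences begin at different points of the motion.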

FE-CBIRS Using Color Distribution for Cut Retrieval in IPTV (IPTV에서 컷 검색을 위한 색 분포정보를 이용한 FE-CBIRS)

  • Koo, Gun-Seo
    • Journal of the Korea Society of Computer and Information / v.14 no.1 / pp.91-97 / 2009
  • This paper proposes a novel FE-CBIRS that finds the best position of a cut to be retrieved, based on color-feature distribution, in IPTV digital content. Conventional CBIR systems have used a method that combines color and shape information to classify images, as well as a method that searches using both feature information of the entire region and feature information of partial regions extracted by segmentation. In such algorithms, the mean, standard deviation, and skewness are used as color features for the hue, saturation, and intensity components respectively; for partial regions only a few dominant colors are used, and for shape features the invariant moments of the extracted partial regions are mainly used. For these reasons, conventional CBIR systems have suffered from problems in processing time and accuracy. To address them, this paper proposes FE-CBIRS, which speeds up retrieval by classifying and indexing the extracted color information per class and by using several range-restricted cuts as comparison images.
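
The nine color features named above (mean, standard deviation, skewness for each HSV channel) can be computed as below; the signed-cube-root form of skewness is a common convention in color-moment CBIR, assumed here rather than taken from the paper:

```python
import numpy as np

def color_moments(hsv):
    """hsv: (H, W, 3) float array. Returns 9 features: mean, standard
    deviation, and skewness for each of the H, S and V channels."""
    feats = []
    for c in range(3):
        ch = hsv[..., c].ravel()
        mean = ch.mean()
        std = ch.std()
        skew = np.cbrt(((ch - mean) ** 3).mean())  # signed cube root
        feats += [mean, std, skew]
    return np.array(feats)

# A flat mid-gray patch: every channel constant at 0.5.
f = color_moments(np.full((8, 8, 3), 0.5))
```

Indexing such 9-dimensional vectors per class is what allows the coarse-to-fine cut search the abstract describes.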

Comparison of Fine-Tuned Convolutional Neural Networks for Clipart Style Classification

  • Lee, Seungbin;Kim, Hyungon;Seok, Hyekyoung;Nang, Jongho
    • International Journal of Internet, Broadcasting and Communication / v.9 no.4 / pp.1-7 / 2017
  • Clipart is artificial visual content created with tools such as Illustrator to highlight information, and its style plays a critical role in determining how it looks. Previous studies on clipart, however, focused only on object recognition [16], segmentation, and retrieval of clipart images using hand-crafted image features. Recently, several clipart classification studies based on style similarity using CNNs have been proposed, but they used different CNN models and experimented on different benchmark datasets, making their performances very hard to compare. This paper presents an experimental analysis of clipart style classification with two well-known CNN models (Inception Resnet V2 [13] and VGG-16 [14]) and transfer learning on the same benchmark dataset (Microsoft Style Dataset 3.6K). From this experiment, we find that Inception Resnet V2 is more accurate than VGG for clipart style classification because of its depth and its parallel convolution maps of various sizes. We also find that end-to-end training improves accuracy by more than 20% in both CNN models.

Methods for Video Caption Extraction and Extracted Caption Image Enhancement (영화 비디오 자막 추출 및 추출된 자막 이미지 향상 방법)

  • Kim, So-Myung;Kwak, Sang-Shin;Choi, Yeong-Woo;Chung, Kyu-Sik
    • Journal of KIISE:Software and Applications / v.29 no.4 / pp.235-247 / 2002
  • For efficient indexing and retrieval of digital video data, research on video caption extraction and recognition is required. This paper proposes methods for extracting artificial captions from video data and enhancing their image quality for accurate Hangul and English character recognition. The proposed methods first find the beginning and ending frames of each caption and combine the multiple frames in each group with a logical operation to remove background noise; during this process, an evaluation step detects integrated results that mix different caption images. After the frames are integrated, four image-enhancement techniques are applied: resolution enhancement, contrast enhancement, stroke-based binarization, and morphological smoothing. These operations improve the image quality even for phonemes with complex strokes. Locating the beginning and ending frames of each caption can also be used effectively for digital video indexing and browsing. We tested the proposed methods on video caption images containing both Hangul and English characters from cinema and obtained improved character-recognition results.
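
The multi-frame integration step exploits the fact that caption pixels stay bright across every frame of a caption while the background keeps changing. A minimal sketch (the threshold value is illustrative) using a logical AND of binarized frames:

```python
import numpy as np

def integrate_caption_frames(frames, thresh=128):
    """frames: list of (H, W) uint8 grayscale frames sharing one caption.
    AND the binarized frames: caption strokes persist in every frame,
    while transient bright background pixels drop out."""
    masks = [f >= thresh for f in frames]
    combined = np.logical_and.reduce(masks)
    return combined.astype(np.uint8) * 255

# Caption pixel at (0, 0) in both frames; background flash at (0, 1) in one.
f1 = np.zeros((2, 2), np.uint8); f1[0, 0] = 200; f1[0, 1] = 200
f2 = np.zeros((2, 2), np.uint8); f2[0, 0] = 210
out = integrate_caption_frames([f1, f2])
```

The integrated mask is what the later binarization and smoothing steps refine for OCR.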

Detecting near-duplication Video Using Motion and Image Pattern Descriptor (움직임과 영상 패턴 서술자를 이용한 중복 동영상 검출)

  • Jin, Ju-Kyong;Na, Sang-Il;Jeong, Dong-Seok
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.4 / pp.107-115 / 2011
  • In this paper, we propose a fast and efficient algorithm for detecting near-duplicates, based on content-based retrieval, in a large-scale video database. To handle large amounts of video easily, we split each video into small segments using scene-change detection. For video services and copyright-related business models, a technology is needed that detects near-duplicates by matching longer stretches of video, rather than one that merely finds videos containing a short part or a single frame of the original. To detect near-duplicate videos, we propose a motion-distribution descriptor and a frame descriptor for each video segment. The motion-distribution descriptor is constructed from the motion vectors of macroblocks obtained during video decoding, and is used as a filter during matching to improve speed. Because the motion distribution alone has low discriminability, final identification uses frame descriptors extracted from representative frames selected within each segment. The proposed algorithm shows a high success rate and a low false-alarm rate, and its matching speed is very fast, so we confirm that it can be useful in practical applications.
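
The two-stage matching, a cheap motion-distribution filter followed by exact frame-descriptor comparison, might be sketched like this (the bin count and tolerance are invented for illustration):

```python
import numpy as np

def motion_histogram(mvs, bins=8):
    """mvs: (N, 2) macroblock motion vectors for one video segment.
    Quantize vector direction into bins and normalize: a cheap
    motion-distribution descriptor."""
    ang = np.arctan2(mvs[:, 1], mvs[:, 0]) % (2 * np.pi)
    h, _ = np.histogram(ang, bins=bins, range=(0.0, 2 * np.pi))
    return h / max(h.sum(), 1)

def candidate_filter(query_h, db_hists, tol=0.3):
    """First stage: keep only segments whose L1 histogram distance to the
    query is below tol; survivors go on to frame-descriptor matching."""
    return [i for i, h in enumerate(db_hists)
            if np.abs(query_h - h).sum() < tol]

rightward = motion_histogram(np.tile([1.0, 0.0], (10, 1)))   # motion ->
leftward = motion_histogram(np.tile([-1.0, 0.0], (10, 1)))   # motion <-
survivors = candidate_filter(rightward, [rightward, leftward])
```

Only the surviving candidates pay the cost of frame-descriptor comparison, which is where the speedup comes from.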

Similar Movie Contents Retrieval Using Peak Features from Audio (오디오의 Peak 특징을 이용한 동일 영화 콘텐츠 검색)

  • Chung, Myoung-Bum;Sung, Bo-Kyung;Ko, Il-Ju
    • Journal of Korea Multimedia Society / v.12 no.11 / pp.1572-1580 / 2009
  • Combing through entire video files to recognize and retrieve matching movies requires much time and memory. Instead, most current movie-matching methods analyze only part of each movie's video-image information; yet these methods share the critical problem of treating as different any matching videos that were merely altered in resolution or converted with a different codec. This paper proposes an audio-information-based search algorithm for identifying similar movies. The proposed method builds and searches a database of each movie's spectral-peak information, which remains relatively stable under changes in bit-rate, codec, or sample-rate. The method achieved a 92.1% search success rate on a set of 1,000 video files whose audio bit-rate had been altered or which had been deliberately re-encoded with a different codec.
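
The peak idea can be sketched as follows (the frame size and peak count are illustrative): strong spectral peaks survive re-encoding, while the fine spectral detail that codecs discard does not.

```python
import numpy as np

def peak_signature(audio, frame=1024, top_k=2):
    """Split audio into frames and keep the indices of the top_k spectral
    peaks per frame: a compact signature that stays comparatively stable
    under bit-rate, codec, or sample-rate changes."""
    sig = []
    for start in range(0, len(audio) - frame + 1, frame):
        spec = np.abs(np.fft.rfft(audio[start:start + frame]))
        sig.append(tuple(sorted(np.argsort(spec)[-top_k:])))
    return sig

# Two pure tones; tiny added noise stands in for re-encoding distortion.
n = np.arange(2048)
clean = (np.sin(2 * np.pi * 50 * n / 1024)
         + 0.5 * np.sin(2 * np.pi * 120 * n / 1024))
noisy = clean + np.random.default_rng(1).normal(0.0, 1e-3, n.size)
```

Matching then reduces to comparing sequences of peak tuples rather than raw audio, which is what keeps the database search fast.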


A Study on Frame of MSE Comparison for Scene Change Detection Retrieval (장면 전환점 검출을 위한 프레임의 평균오차 비교에 관한 연구)

  • 김단환;김형균;오무송
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2002.05a / pp.638-642 / 2002
  • Users of high-capacity video data need to grasp the whole of a video at a glance; to let them replay the video from any desired point, a frame list summarizing the video's information must be offered, which in turn requires an indexing process for effective video retrieval. This paper proposes an effective method for detecting scene change points in video, based on content-based indexing. So that the overall structure of the video can be grasped, the proposed method samples the color values of selected pixels along the diagonal of each frame; from the sampled data, scene change points can be identified at a glance. The pixel color values detected in frame i are stored in a matrix A of size i×j, where i is the frame number and j reflects the height of the frame. We then compute the mean squared error (MSE) between consecutive frames, and when the error exceeds a predetermined threshold, the frame is detected as a scene change point.
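
Under this description, detection reduces to: sample diagonal pixels per frame, compute the MSE between consecutive frames' samples, and flag a cut when it exceeds a threshold. A minimal sketch (the threshold value is illustrative):

```python
import numpy as np

def scene_changes(frames, thresh=500.0):
    """frames: (N, H, W) grayscale array. Sample the pixels along each
    frame's main diagonal, compute the MSE between consecutive frames'
    samples, and report frame indices where it exceeds thresh."""
    n = min(frames.shape[1], frames.shape[2])
    idx = np.arange(n)
    diag = frames[:, idx, idx].astype(float)         # (N, n) diagonal samples
    mse = ((diag[1:] - diag[:-1]) ** 2).mean(axis=1)
    return [i + 1 for i, e in enumerate(mse) if e > thresh]

# Six flat frames with one abrupt brightness jump at frame 3.
frames = np.concatenate([np.full((3, 8, 8), 10.0),
                         np.full((3, 8, 8), 200.0)])
cuts = scene_changes(frames)
```

Sampling only the diagonal keeps the per-frame cost linear in frame height, which is the point of the scheme.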
