• Title/Summary/Keyword: Semantic Image Annotation


KNN-based Image Annotation by Collectively Mining Visual and Semantic Similarities

  • Ji, Qian;Zhang, Liyan;Li, Zechao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.9 / pp.4476-4490 / 2017
  • The aim of image annotation is to determine labels that accurately describe the semantic information of images. Many approaches have been proposed to automate the image annotation task with good performance. In most cases, however, the semantic similarities of images are ignored. To this end, we propose a novel Visual-Semantic Nearest Neighbor (VS-KNN) method that collectively explores visual and semantic similarities for image annotation. First, for each label, visual nearest neighbors of a given test image are constructed from the training images associated with that label. Second, each neighboring subset is refined by mining both the semantic and the visual similarity. Finally, the relevance between images and labels is determined by maximum a posteriori estimation. Extensive experiments were conducted on three widely used image datasets. The experimental results show the effectiveness of the proposed method in comparison with state-of-the-art methods.
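The per-label nearest-neighbor scoring described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature vectors, toy dataset, `k` value, and exponential distance-to-score conversion are all assumptions, and the semantic-similarity mining step is omitted.

```python
import numpy as np

def vs_knn_scores(test_feat, train_feats, train_labels, k=3):
    """Score each label for a test image from the visual nearest
    neighbors among training images carrying that label (sketch of
    the per-label KNN idea only)."""
    labels = sorted({l for ls in train_labels for l in ls})
    scores = {}
    for label in labels:
        # training images associated with this label
        idx = [i for i, ls in enumerate(train_labels) if label in ls]
        feats = train_feats[idx]
        # visual distances from the test image to that subset
        dists = np.linalg.norm(feats - test_feat, axis=1)
        nearest = np.sort(dists)[:k]
        # convert distances to a relevance score (closer -> higher)
        scores[label] = float(np.exp(-nearest).mean())
    return scores

train_feats = np.array([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [0.9, 1.1]])
train_labels = [["sky"], ["sky"], ["grass"], ["grass"]]
print(vs_knn_scores(np.array([0.05, 0.05]), train_feats, train_labels, k=2))
```

A test image close to the "sky" training images receives a higher score for that label than for "grass".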

Semantic Image Annotation and Retrieval in Mobile Environments (모바일 환경에서 의미 기반 이미지 어노테이션 및 검색)

  • No, Hyun-Deok;Seo, Kwang-won;Im, Dong-Hyuk
    • Journal of Korea Multimedia Society / v.19 no.8 / pp.1498-1504 / 2016
  • The progress of mobile computing technology is bringing a large amount of multimedia content such as images. Thus, we need an image retrieval system that searches for semantically relevant images. In this paper, we propose semantic image annotation and retrieval in mobile environments. Previous mobile-based annotation approaches cannot fully express the semantics of an image due to the limitations of their current form (i.e., keyword tagging). Our approach allows mobile devices to annotate images automatically using context-aware information such as temporal and spatial data. In addition, since we annotate images using the RDF (Resource Description Framework) model, we can issue SPARQL queries for semantic image retrieval. Our system, implemented in the Android environment, shows that it can more fully represent the semantics of images and retrieve them semantically compared with other image annotation systems.
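The RDF-plus-SPARQL idea behind this system can be illustrated with plain triples. This is a toy sketch: the predicate names and identifiers are invented for illustration, and a real system would use an actual RDF store and SPARQL engine rather than the pattern matcher below.

```python
# Toy RDF-style store: a set of (subject, predicate, object) triples,
# annotating images with context-aware data (place and time taken).
triples = {
    ("img:001", "ex:takenAt", "loc:Seoul"),
    ("img:001", "ex:takenOn", "2016-08-15"),
    ("img:002", "ex:takenAt", "loc:Busan"),
}

def match(pattern):
    """Match a triple pattern; None plays the role of a SPARQL variable."""
    return [t for t in triples
            if all(p is None or p == v for p, v in zip(pattern, t))]

# Analogue of: SELECT ?img WHERE { ?img ex:takenAt loc:Seoul }
seoul_images = [s for s, _, _ in match((None, "ex:takenAt", "loc:Seoul"))]
print(seoul_images)  # ['img:001']
```

Because annotations are triples rather than flat keyword tags, queries can constrain any combination of subject, predicate, and object.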

Extending Semantic Image Annotation using User- Defined Rules and Inference in Mobile Environments (모바일 환경에서 사용자 정의 규칙과 추론을 이용한 의미 기반 이미지 어노테이션의 확장)

  • Seo, Kwang-won;Im, Dong-Hyuk
    • Journal of Korea Multimedia Society / v.21 no.2 / pp.158-165 / 2018
  • Since the amount of multimedia images has dramatically increased, it is important to search for semantically relevant images. Thus, several semantic image annotation methods using the RDF (Resource Description Framework) model in mobile environments have been introduced. Earlier studies on annotating images semantically focused on both image tags and context-aware information such as temporal and spatial data. However, in order to fully express the semantics of an image, we need more annotations described in the RDF model. In this paper, we propose an annotation method that performs inference with RDFS entailment rules and user-defined rules. Our approach, implemented in the Moment system, shows that it can more fully represent the semantics of an image with more annotation triples.
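The rule-based enrichment this abstract describes can be sketched as forward chaining to a fixed point. The concrete class names, the sample user-defined rule, and the toy triples are assumptions for illustration; only the rdfs:subClassOf entailment pattern is a standard RDFS rule.

```python
# Toy forward-chaining over triples: one RDFS-style entailment rule
# (type propagation through rdfs:subClassOf) plus a user-defined rule.
triples = {
    ("img:003", "rdf:type", "ex:Beach"),
    ("ex:Beach", "rdfs:subClassOf", "ex:Landscape"),
    ("img:003", "ex:takenAt", "loc:Haeundae"),
}

def infer(triples):
    """Apply the rules until no new triples are derived."""
    derived = set(triples)
    changed = True
    while changed:
        changed = False
        new = set()
        for s, p, o in derived:
            # RDFS entailment: x rdf:type C, C rdfs:subClassOf D => x rdf:type D
            if p == "rdf:type":
                for s2, p2, o2 in derived:
                    if s2 == o and p2 == "rdfs:subClassOf":
                        new.add((s, "rdf:type", o2))
            # user-defined rule (invented): photos taken at loc:Haeundae depict the sea
            if p == "ex:takenAt" and o == "loc:Haeundae":
                new.add((s, "ex:depicts", "ex:Sea"))
        if not new <= derived:
            derived |= new
            changed = True
    return derived

result = infer(triples)
print(("img:003", "rdf:type", "ex:Landscape") in result)  # True
```

The inferred triples (the image is a landscape, and it depicts the sea) extend the original annotation set, which is the paper's point: richer semantics from the same base annotations.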

Deep Image Annotation and Classification by Fusing Multi-Modal Semantic Topics

  • Chen, YongHeng;Zhang, Fuquan;Zuo, WanLi
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.1 / pp.392-412 / 2018
  • Due to the semantic gap across different modalities, automatic retrieval of multimedia information still faces a major challenge. It is desirable to provide an effective joint model that bridges the gap and organizes the relationships between modalities. In this work, we develop a deep image annotation and classification by fusing multi-modal semantic topics (DAC_mmst) model, which is capable of finding visual and non-visual topics by jointly modeling an image and its loosely related text for deep image annotation, while simultaneously learning and predicting the class label. More specifically, DAC_mmst relies on a non-parametric Bayesian model for estimating the number of visual topics that best explains the image. To evaluate the effectiveness of our proposed algorithm, we collected a real-world dataset and conducted various experiments. The experimental results show that DAC_mmst performs favorably in perplexity, image annotation and classification accuracy compared to several state-of-the-art methods.

A Semantic-based Video Retrieval System using Method of Automatic Annotation Update and Multi-Partition Color Histogram (자동 주석 갱신 및 멀티 분할 색상 히스토그램 기법을 이용한 의미기반 비디오 검색 시스템)

  • 이광형;전문석
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.8C / pp.1133-1141 / 2004
  • In order to process video data effectively, the content information of the video data must be loaded into a database, and a semantic-based retrieval method must be available for the various queries of users. In this paper, we propose a semantic-based video retrieval system that supports the semantic retrieval of various users through feature-based and annotation-based retrieval of massive video data. From the user's initial query and selection of an image as a key frame extracted from the query, the agent refines the annotation of the extracted key frame. Also, the key frame selected by the user becomes a query image, and the system searches for the most similar key frame through the proposed feature-based retrieval method. In experiments, the designed and implemented system showed a precision ratio of more than 90 percent.
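The multi-partition color histogram named in the title can be sketched as follows. The grid size, bin count, and histogram-intersection similarity are illustrative assumptions; the paper's exact partitioning and matching details are not given in the abstract.

```python
import numpy as np

def partition_histograms(image, grid=(2, 2), bins=8):
    """Split an RGB frame into grid cells and compute a per-cell
    color histogram, so spatial layout contributes to the match."""
    h, w, _ = image.shape
    gh, gw = h // grid[0], w // grid[1]
    hists = []
    for r in range(grid[0]):
        for c in range(grid[1]):
            cell = image[r * gh:(r + 1) * gh, c * gw:(c + 1) * gw]
            hist, _ = np.histogram(cell, bins=bins, range=(0, 256))
            hists.append(hist / hist.sum())  # normalize per cell
    return np.concatenate(hists)

def similarity(h1, h2):
    """Histogram intersection: 1.0 for identical distributions."""
    return float(np.minimum(h1, h2).sum() / h1.sum())

a = np.zeros((4, 4, 3), dtype=np.uint8)      # all-black key frame
b = np.full((4, 4, 3), 255, dtype=np.uint8)  # all-white key frame
print(similarity(partition_histograms(a), partition_histograms(a)))  # 1.0
```

A key frame matches itself perfectly, while frames with disjoint color distributions score zero; partitioning keeps two frames with the same global colors but different layouts from matching equally well.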

Multi-cue Integration for Automatic Annotation (자동 주석을 위한 멀티 큐 통합)

  • Shin, Seong-Yoon;Rhee, Yang-Won
    • Proceedings of the Korean Society of Computer Information Conference / 2010.07a / pp.151-152 / 2010
  • WWW images are located in structured, networked documents, so the importance of a word can be indicated by its location and frequency. There are two patterns for multi-cue integration annotation. The multi-cue integration algorithm shows initial promise as an indicator of the semantic keyphrases of web images. Latent semantic automatic keyphrase extraction, which improves with the use of multiple cues, is expected to be preferable.
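The location-plus-frequency idea can be sketched as weighted keyphrase scoring. The cue set and the weights below are assumptions for illustration; the abstract does not specify them.

```python
# Combine per-word cues (page location and frequency) into one score.
# Words in the title or alt text of a page count more than body text.
LOCATION_WEIGHT = {"title": 3.0, "alt": 2.0, "body": 1.0}

def score_keyphrases(occurrences):
    """occurrences: list of (word, location) pairs extracted from the
    page hosting the image. Returns words ranked by combined cue score;
    frequency enters through repeated occurrences of the same word."""
    scores = {}
    for word, loc in occurrences:
        scores[word] = scores.get(word, 0.0) + LOCATION_WEIGHT.get(loc, 1.0)
    return sorted(scores, key=scores.get, reverse=True)

occs = [("sunset", "title"), ("sunset", "body"),
        ("beach", "alt"), ("page", "body"), ("page", "body")]
print(score_keyphrases(occs))  # 'sunset' ranks first
```

Here "sunset" wins because a title occurrence outweighs two body occurrences of "page", which is the sense in which location and frequency are integrated rather than used alone.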


Collaborative Similarity Metric Learning for Semantic Image Annotation and Retrieval

  • Wang, Bin;Liu, Yuncai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.5 / pp.1252-1271 / 2013
  • Automatic image annotation has become an increasingly important research topic owing to its key role in image retrieval. At the same time, it is highly challenging when facing large-scale datasets with large variance. Practical approaches generally rely on similarity measures defined over images and on multi-label prediction methods. More specifically, those approaches usually 1) leverage similarity measures that are predefined or learned by optimizing for ranking or annotation, which might not be adaptive enough to the dataset; and 2) predict labels separately without taking the correlation of labels into account. In this paper, we propose a method for image annotation through collaborative similarity metric learning from the dataset and modeling of the dataset's label correlation. The similarity metric is learned by simultaneously optimizing 1) image ranking using a structural SVM (SSVM), and 2) image annotation using correlated label propagation, both with respect to the similarity metric. The learned similarity metric, fully exploiting the available information in the dataset, improves the two collaborative components, ranking and annotation, and in turn the retrieval system itself. We evaluated the proposed method on the Corel5k, Corel30k and EspGame databases. The results for annotation and retrieval show the competitive performance of the proposed method.
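The role a learned similarity metric plays in annotation can be illustrated with a diagonal Mahalanobis-style distance. This is a toy sketch: the weights are hand-set here, whereas the paper learns the metric with a structural SVM and propagates correlated labels, neither of which is reproduced below.

```python
import numpy as np

def weighted_dist(x, y, w):
    """Diagonal Mahalanobis-style distance: w re-weights feature
    dimensions, the quantity a metric-learning step would optimize."""
    return float(np.sqrt(np.sum(w * (x - y) ** 2)))

def propagate_labels(test, train, labels, w, k=2):
    """Transfer labels from the k nearest training images under the
    weighted metric (label-correlation modeling omitted)."""
    dists = [weighted_dist(test, t, w) for t in train]
    nearest = np.argsort(dists)[:k]
    votes = {}
    for i in nearest:
        for label in labels[i]:
            votes[label] = votes.get(label, 0) + 1
    return sorted(votes, key=votes.get, reverse=True)

train = np.array([[1.0, 9.0], [1.1, 0.0], [5.0, 9.1]])
labels = [["sea"], ["sea"], ["forest"]]
# weights that suppress the noisy second dimension
w = np.array([1.0, 0.01])
print(propagate_labels(np.array([1.0, 5.0]), train, labels, w))  # ['sea']
```

With uniform weights the noisy second dimension would dominate the distances; down-weighting it lets both "sea" images become the nearest neighbors, which is the adaptivity the learned metric is meant to provide.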

Using Context Information to Improve Retrieval Accuracy in Content-Based Image Retrieval Systems

  • Hejazi, Mahmoud R.;Woo, Woon-Tack;Ho, Yo-Sung
    • Proceedings of the HCI Society of Korea Conference / 2006.02a / pp.926-930 / 2006
  • Current image retrieval techniques have shortcomings that make it difficult to search for images based on a semantic understanding of what an image is about. Since an image is normally associated with multiple contexts (e.g., when and where a picture was taken), knowledge of these contexts can enhance the semantic understanding of the image. In this paper, we present a context-aware image retrieval system, which uses context information to infer a kind of metadata for captured images as well as for images in different collections and databases. Experimental results show that using these kinds of information can not only significantly increase the retrieval accuracy of conventional content-based image retrieval systems, but also reduce the problems arising from manual annotation in text-based image retrieval systems.
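One way to picture combining content features with context metadata in the ranking is a weighted blend. The scoring formula, the blending weight, and the context fields below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def rank(query_feat, query_ctx, images, alpha=0.5):
    """Rank images by a blend of content similarity and context match.
    Each image is (feature_vector, context_dict); alpha balances the
    content term against the context term."""
    scored = []
    for feat, ctx in images:
        content = 1.0 / (1.0 + np.linalg.norm(query_feat - feat))
        # fraction of query context fields (e.g. place, time) that match
        context = sum(ctx.get(k) == v for k, v in query_ctx.items()) / len(query_ctx)
        scored.append(alpha * content + (1 - alpha) * context)
    return [int(i) for i in np.argsort(scored)[::-1]]

images = [
    (np.array([0.2, 0.2]), {"place": "beach", "time": "day"}),
    (np.array([0.2, 0.2]), {"place": "city", "time": "night"}),
]
order = rank(np.array([0.0, 0.0]), {"place": "beach", "time": "day"}, images)
print(order)  # [0, 1]
```

The two images are visually identical here, so content-only retrieval cannot separate them; the context term breaks the tie in favor of the image whose capture metadata matches the query.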


Images Automatic Annotation: Multi-cues Integration (영상의 자동 주석: 멀티 큐 통합)

  • Shin, Seong-Yoon;Ahn, Eun-Mi;Rhee, Yang-Won
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2010.05a / pp.589-590 / 2010
  • All these images constitute a considerable database. Moreover, the semantic meanings of images are well represented by the surrounding text and links. But only a small minority of these images have precisely assigned keyphrases, and manually assigning keyphrases to existing images is very laborious. It is therefore highly desirable to automate the keyphrase extraction process. In this paper, we first introduce WWW image annotation methods based on low-level features, page tags, overall word frequency and local word frequency. Then we put forward our method of multi-cue integration image annotation. An experiment also shows that the multi-cue image annotation method is superior to the other methods.


A Semantic-based Video Retrieval System using Design of Automatic Annotation Update and Categorizing (자동 주석 갱신 및 카테고라이징 기법을 이용한 의미기반 동영상 검색 시스템)

  • 김정재;이창수;이종희;전문석
    • Journal of the Korea Computer Industry Society / v.5 no.2 / pp.203-216 / 2004
  • In order to process video data effectively, the content information of the video data must be loaded into a database, and a semantic-based retrieval method must be available for the various queries of users. Currently existing content-based video retrieval systems search by a single method, such as annotation-based or feature-based retrieval, show low search efficiency, and require much effort from a system administrator or annotator owing to imperfect automatic processing. In this paper, we propose a semantic-based video retrieval system that supports the semantic retrieval of various users through feature-based and annotation-based retrieval of massive video data. From the user's initial query and selection of an image as a key frame extracted from the query, the agent refines the annotation of the extracted key frame. Also, the key frame selected by the user becomes a query image, and the system searches for the most similar key frame through the proposed feature-based retrieval method. The designed system can therefore raise the retrieval efficiency of video data through semantic-based retrieval.
