• Title/Summary/Keyword: image content

A Service Framework to Digital Fulltext Image for Copyright Protection (저작권 보호를 위한 디지털 원문 서비스 프레임워크)

  • Kim Sang-Kuk;Shin Sung-Ho;Yoon Hee-Jun;Kim Tae-Jung
    • Journal of Korea Technology Innovation Society / v.8 no.spc1 / pp.323-336 / 2005
  • The digital content industry is growing rapidly because of high-speed networking and the increasing demand for digital full-text images. However, producing and supplying high-quality content remains difficult. We therefore propose a digital full-text image service framework for copyright protection. More concretely, we propose an integrated model and a reference model for securely serving digital full-text images, recomposing the core objects and restructuring the value chain of the digital content industry into a framework that covers the whole process from production (creators or copyright holders) to consumption (users or consumers). We also construct a digital full-text image service system based on the reference model and redesign the interfaces between the core subjects.

Deep Hashing for Semi-supervised Content Based Image Retrieval

  • Bashir, Muhammad Khawar;Saleem, Yasir
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.8 / pp.3790-3803 / 2018
  • Content-based image retrieval is an approach to querying images by their semantics, with applications in many fields, including medicine, space, and computing. Semantically generated binary hash codes can improve content-based image retrieval; such semantic labels / binary hash codes can be generated from unlabeled data using convolutional autoencoders. The proposed approach uses semi-supervised deep hashing with semantic learning and binary code generation by minimizing the objective function. Convolutional autoencoders are the basis for extracting semantic features because of their ability to generate images from low-level semantic representations; these representations are more effective than simple feature extraction and preserve semantic information better. The proposed activation and loss functions help minimize classification error and produce better hash codes. The approach was verified on the most widely used datasets and outperforms existing methods.
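The core retrieval step behind deep hashing can be sketched independently of the paper's network: latent features (here random toy vectors standing in for autoencoder bottleneck outputs) are thresholded into binary codes, and retrieval becomes a Hamming-distance nearest-neighbor search. Everything below is an invented illustration, not the authors' model:

```python
import numpy as np

def binary_hash(latent):
    """Threshold bottleneck activations at zero to get binary hash codes."""
    return (latent > 0).astype(np.uint8)

def hamming_distance(a, b):
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(0)
# Toy 32-dim latent vectors standing in for autoencoder bottleneck outputs.
db_latents = rng.standard_normal((5, 32))
query_latent = db_latents[2] + 0.05 * rng.standard_normal(32)  # near item 2

db_codes = binary_hash(db_latents)
query_code = binary_hash(query_latent)
distances = [hamming_distance(query_code, code) for code in db_codes]
best = int(np.argmin(distances))  # retrieval = nearest code in Hamming space
```

Since the query latent is a slightly perturbed copy of database item 2, the two codes differ in at most a few bits and item 2 is returned first; this bit-level comparison is what makes hash-based retrieval fast at scale.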

A Novel Image Classification Method for Content-based Image Retrieval via a Hybrid Genetic Algorithm and Support Vector Machine Approach

  • Seo, Kwang-Kyu
    • Journal of the Semiconductor & Display Technology / v.10 no.3 / pp.75-81 / 2011
  • This paper presents a novel method for image classification based on a hybrid genetic algorithm (GA) and support vector machine (SVM) approach, which can significantly improve classification performance for content-based image retrieval (CBIR). Although SVM has been widely applied to CBIR, it has problems such as kernel parameter setting and feature subset selection, which affect classification accuracy during learning. This study aims to simultaneously optimize the SVM parameters and the feature subset, without degrading SVM classification accuracy, using a GA for CBIR. With the hybrid GA-SVM model, we can classify more images in the database effectively. Experiments were carried out on a large database of images, and the results show that the classification accuracy of a conventional SVM can be improved significantly by the proposed model. We also found that the proposed model outperformed all the other models, such as neural network and typical SVM models.
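The GA side of such a hybrid can be sketched in a few lines. The sketch below evolves candidate (log2 C, log2 gamma) pairs; since the paper's actual fitness function (cross-validated SVM accuracy with feature selection) is not given here, a hypothetical smooth surrogate with a known peak stands in for it:

```python
import random

# Hypothetical stand-in for cross-validated SVM accuracy; a real run would
# train an SVM per candidate. This surrogate peaks at log2(C)=3, log2(gamma)=-5.
def fitness(log_c, log_gamma):
    return 1.0 - 0.01 * ((log_c - 3.0) ** 2 + (log_gamma + 5.0) ** 2)

def evolve(generations=60, pop_size=30, seed=1):
    rng = random.Random(seed)
    # Each individual encodes (log2 C, log2 gamma) over the usual SVM grid range.
    pop = [(rng.uniform(-5, 15), rng.uniform(-15, 3)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(*p), reverse=True)
        elite = pop[: pop_size // 2]              # selection: keep the fitter half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            children.append(((a[0] + b[0]) / 2 + rng.gauss(0, 0.5),  # crossover
                             (a[1] + b[1]) / 2 + rng.gauss(0, 0.5))) # + mutation
        pop = elite + children
    return max(pop, key=lambda p: fitness(*p))

best_log_c, best_log_gamma = evolve()
```

After a few dozen generations the population concentrates near the surrogate's optimum; in the paper's setting each fitness evaluation would instead be a full SVM training run, which is why GA-based tuning is computationally expensive but grid-free.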

Associative Interactive play Contents for Infant Imagination

  • Jang, Eun-Jung;Lee, Chankyu;Lim, Chan
    • International journal of advanced smart convergence / v.8 no.1 / pp.126-132 / 2019
  • Creative thinking appears even before it is expressed in language, and its existence is revealed through emotion, intuition, imagery, and bodily sensation before logical or linguistic rules come into play. In this study, we present experimental interactive play content for children that applies computer-vision-based image-processing techniques to Lego. For infants, the main purpose of this content is developing hand muscles and the ability to realize their imagination. The content uses the analysis algorithms of the OpenCV library together with image processing implemented as 'Node's in 'VVVV', against the background of perceptual changes in image-processing technology exemplified by object recognition: a webcam films and recognizes what the child has built, derives results matching the analysis, and produces interactive content completed by the participating user. The research shows what children have made with Lego; children can create things themselves and develop creativity. Furthermore, we expect to be able to infer diverse, individual ways of thinking based on more data.

An Approach for the Cross Modality Content-Based Image Retrieval between Different Image Modalities

  • Jeong, Inseong;Kim, Gihong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.31 no.6_2 / pp.585-592 / 2013
  • CBIR is an effective tool for searching and extracting image content from a large remote sensing image database queried by an operator or end user. However, since imaging principles differ by sensor, visual representation varies across image modality types. Given that images of various modalities are archived in a database, modality differences must be tackled for a successful CBIR implementation; this topic has seldom been dealt with and still poses a practical challenge. This study suggests a cross-modality CBIR method (termed CM-CBIR) that transforms a given query feature vector by a supervised procedure in order to link the modalities. The procedure leverages the analyst's skill in the training step, after which the transformed query vector is used to search target images of a different modality. Initial results show the potential of the proposed CM-CBIR method by retrieving the image content of interest from images of a different modality. Although its retrieval capability is outperformed by same-modality CBIR (abbreviated SM-CBIR), the performance gap can be compensated for by employing user relevance feedback, a conventional technique for retrieval enhancement.
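The idea of a supervised transform linking two feature spaces can be illustrated with the simplest possible choice, a linear map fitted by least squares on paired training features. The paper does not specify its transform; the linear relation, dimensions, and data below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
# Paired training features: the same scenes described in modality A and B.
A = rng.standard_normal((50, 4))
true_W = rng.standard_normal((4, 4))
B = A @ true_W  # assume a linear relation between the two feature spaces

# Supervised training step: learn the A-to-B transform by least squares.
W, *_ = np.linalg.lstsq(A, B, rcond=None)

query_a = rng.standard_normal(4)     # query described in modality A
query_b = query_a @ W                # transformed query for the modality-B archive
```

Once the transform is learned from analyst-labeled pairs, any modality-A query can be mapped into modality-B feature space and matched against the target archive with an ordinary same-modality similarity search.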

Content-based Image Retrieval System (내용기반 영상검색 시스템)

  • Yoo, Hun-Woo;Jang, Dong-Sik;Jung, She-Hwan;Park, Jin-Hyung;Song, Kwang-Seop
    • Journal of Korean Institute of Industrial Engineers / v.26 no.4 / pp.363-375 / 2000
  • In this paper we propose a content-based image retrieval method that can search large image databases efficiently by color, texture, and shape content. Quantized RGB histograms and the dominant triple (hue, saturation, and value), extracted from the quantized HSV joint histogram in a local image region, are used to represent global/local color information in the image. Entropy and the maximum entry of co-occurrence matrices are used for texture information, and an edge angle histogram is used to represent shape information. A relevance feedback approach that couples the proposed features is used to obtain better retrieval accuracy. Simulation results illustrate that the method provides a 77.5 percent precision rate without relevance feedback and an increased precision rate with relevance feedback for overall queries. We also present a new indexing method that supports fast retrieval in large image databases: tree structures constructed by the k-means algorithm, along with the triangle inequality, eliminate candidate images from the similarity calculation between the query image and each database image. We find that the proposed method eliminates, on average, up to 92.9 percent of the images from direct comparison.
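The quantized RGB histogram used as the global color feature can be sketched directly. The bin count and the toy image below are illustrative choices, not the paper's exact parameters:

```python
import numpy as np

def quantized_rgb_histogram(image, bins_per_channel=4):
    """Global color feature: joint histogram over quantized R, G, B channels."""
    step = 256 // bins_per_channel
    q = (image // step).astype(np.int32)           # each channel -> 0..bins-1
    idx = (q[..., 0] * bins_per_channel + q[..., 1]) * bins_per_channel + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins_per_channel ** 3)
    return hist / hist.sum()   # normalize so differently sized images compare

# Toy 2x2 RGB image: two pure-red and two pure-blue pixels.
img = np.array([[[255, 0, 0], [255, 0, 0]],
                [[0, 0, 255], [0, 0, 255]]], dtype=np.uint8)
h = quantized_rgb_histogram(img)
```

Here red maps to bin (3,0,0) and blue to bin (0,0,3) of the 4x4x4 grid, so the 64-bin feature vector has mass 0.5 in each; two images are then compared by a histogram distance, and the tree-plus-triangle-inequality index prunes database images whose histograms cannot be close to the query's.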

A Study on the Performance Analysis of Content-based Image & Video Retrieval Systems (내용기반 이미지 및 비디오 검색 시스템 성능분석에 관한 연구)

  • Kim, Seong-Hee
    • Journal of the Korean BIBLIA Society for library and Information Science / v.15 no.2 / pp.97-115 / 2004
  • The paper examined the concepts and features of content-based image and video retrieval systems. It then analyzed the retrieval performance of five content-based retrieval systems in terms of usability and retrieval features. The results showed that combining content-based retrieval techniques with metadata-based retrieval can improve retrieval effectiveness.

Content Description on a Mobile Image Sharing Service: Hashtags on Instagram

  • Dorsch, Isabelle
    • Journal of Information Science Theory and Practice / v.6 no.2 / pp.46-61 / 2018
  • The mobile social networking application Instagram is a well-known platform for sharing photos and videos. Since it is folksonomy-oriented, it provides the possibility of image indexing and knowledge representation through the assignment of hashtags to posted content. The purpose of this study is to analyze how Instagram users tag their pictures across different picture and hashtag categories. For the content analysis, a distinction is made between the Food, Pets, Selfies, Friends, Activity, Art, Fashion, Quotes (captioned photos), Landscape, and Architecture image categories, as well as Content-relatedness (ofness, aboutness, and iconology), Emotiveness, Isness, Performativeness, Fakeness, "Insta"-Tags, and Sentences as hashtag categories. Altogether, 14,649 hashtags of 1,000 Instagram images were intellectually analyzed (100 pictures for each image category). The research questions are as follows. RQ1: Are there any differences in the relative frequencies of hashtags across the picture categories? On average, the number of hashtags per picture is 15. The lowest average values were received by the categories Selfie (10.9 tags per picture) and Friends (11.7 tags per picture); the highest by the categories Pet (18.6 tags), Fashion (17.6 tags), and Landscape (16.8 tags). RQ2: Given a picture category, what is the distribution of hashtag categories, and given a hashtag category, what is the distribution of picture categories? 60.20% of all hashtags were classified into the category Content-relatedness; the categories Emotiveness (about 4.38%) and Sentences (0.99%) were far less frequent. RQ3: Is there any association between image categories and hashtag categories? A statistically significant association between hashtag categories and image categories on Instagram exists, as a chi-square test of independence shows. This study provides a first broad overview of the tagging behavior of Instagram users and, unlike previous studies, is not limited to a specific hashtag or picture motif.
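The chi-square test of independence behind RQ3 is easy to reproduce on a small contingency table. The counts below are invented for illustration and are not the study's data:

```python
# Hypothetical 2x2 contingency table: counts of one hashtag category (rows:
# present / absent) against two picture categories (columns).
table = [[30, 10],
         [20, 40]]

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
n = sum(row_totals)

# Chi-square statistic: sum over cells of (observed - expected)^2 / expected,
# where the expected count assumes independence of rows and columns.
chi2 = 0.0
for i, row in enumerate(table):
    for j, observed in enumerate(row):
        expected = row_totals[i] * col_totals[j] / n
        chi2 += (observed - expected) ** 2 / expected

significant = chi2 > 3.841  # critical value for df=1 at the 0.05 level
```

For this toy table the statistic is 50/3 (about 16.67), well above the df=1 critical value, so the null hypothesis of independence would be rejected, which is the form of conclusion the study draws for its hashtag and image categories.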

Using Context Information to Improve Retrieval Accuracy in Content-Based Image Retrieval Systems

  • Hejazi, Mahmoud R.;Woo, Woon-Tack;Ho, Yo-Sung
    • Proceedings of the HCI Society of Korea Conference / 2006.02a / pp.926-930 / 2006
  • Current image retrieval techniques have shortcomings that make it difficult to search for images based on a semantic understanding of what the image is about. Since an image is normally associated with multiple contexts (e.g., when and where a picture was taken), knowledge of these contexts can enrich the semantic understanding of an image. In this paper, we present a context-aware image retrieval system that uses context information to infer a kind of metadata for captured images as well as for images in different collections and databases. Experimental results show that using this kind of information can not only significantly increase retrieval accuracy in conventional content-based image retrieval systems, but also alleviate the problems arising from manual annotation in text-based image retrieval systems.

GAN-based Image-to-image Translation using Multi-scale Images (다중 스케일 영상을 이용한 GAN 기반 영상 간 변환 기법)

  • Chung, Soyoung;Chung, Min Gyo
    • The Journal of the Convergence on Culture Technology / v.6 no.4 / pp.767-776 / 2020
  • GcGAN is a deep learning model that translates styles between images under a geometric consistency constraint. However, GcGAN has the disadvantage that it does not properly preserve the detailed content of an image, since it preserves content only through limited geometric transformations such as rotation and flipping. In this study, we therefore propose a new image-to-image translation method, MSGcGAN (Multi-Scale GcGAN), which remedies this disadvantage. MSGcGAN, an extended model of GcGAN, performs style translation in a way that reduces the semantic distortion of images and maintains detailed content, by learning multi-scale images simultaneously and extracting scale-invariant features. The experimental results showed that MSGcGAN outperformed GcGAN in both quantitative and qualitative respects, translating style more naturally while maintaining the overall content of the image.