• Title/Summary/Keyword: Image Annotation


Implementation of a Video Retrieval System Using Annotation and Comparison Area Learning of Key-Frames (키 프레임의 주석과 비교 영역 학습을 이용한 비디오 검색 시스템의 구현)

  • Lee Keun-Wang;Kim Hee-Sook;Lee Jong-Hee
    • Journal of Korea Multimedia Society / v.8 no.2 / pp.269-278 / 2005
  • To process video data effectively, its content information must be loaded into a database so that semantics-based retrieval can serve the varied queries of users. In this paper, we propose a video retrieval system that supports semantics-based retrieval over massive video data using user keywords and comparison-area learning driven by an automatic agent. From the user's initial query and the selection of an image among the key frames extracted for that query, the agent derives the detailed shape used to annotate the extracted key frame. The key frame selected by the user then becomes a query image, and the system searches for the most similar key frame through color-histogram comparison and the proposed comparison-area learning method. In experiments, the implemented system achieved a precision of more than 93% in the performance assessment.
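The color-histogram comparison step can be sketched as below. This is a minimal illustration of the general technique, not the authors' implementation; the 4×4×4 RGB binning and the synthetic frame data are assumptions.

```python
# Histogram-intersection similarity between two key frames.
# Binning (4 bins per channel) and the frames are illustrative assumptions.

def color_histogram(pixels, bins=4):
    """Build a normalized RGB histogram from (r, g, b) tuples in 0-255."""
    step = 256 // bins
    hist = [0] * (bins ** 3)
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
    total = len(pixels)
    return [h / total for h in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means identical color distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

frame_a = [(200, 30, 30)] * 90 + [(30, 30, 200)] * 10   # mostly red
frame_b = [(200, 30, 30)] * 80 + [(30, 200, 30)] * 20   # similar reds
h_a, h_b = color_histogram(frame_a), color_histogram(frame_b)
print(round(histogram_intersection(h_a, h_b), 2))  # → 0.8
```

Ranking all stored key frames by this score against the query image yields the "most similar key frame" the abstract describes.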


Single Shot Detector for Detecting Clickable Object in Mobile Device Screen (모바일 디바이스 화면의 클릭 가능한 객체 탐지를 위한 싱글 샷 디텍터)

  • Jo, Min-Seok;Chun, Hye-won;Han, Seong-Soo;Jeong, Chang-Sung
    • KIPS Transactions on Software and Data Engineering / v.11 no.1 / pp.29-34 / 2022
  • We propose a novel network architecture and build a dataset for recognizing clickable objects on mobile device screens. The data was collected from clickable objects on mobile screens of various resolutions, and a total of 24,937 annotations were subdivided into seven categories: text, edit text, image, button, region, status bar, and navigation bar. We use the Deconvolutional Single Shot Detector as a baseline, with a backbone network using Squeeze-and-Excitation blocks, the Single Shot Detector layer structure for deriving inference results, and a Feature Pyramid Network structure. We also extract features more efficiently by changing the network's input resolution from the conventional 1:1 ratio to a 1:2 ratio similar to the mobile device screen. In experiments on the dataset we built, the mean average precision improved by up to 101% compared to the baseline.
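The Squeeze-and-Excitation idea used in the backbone can be sketched as follows: channels are globally pooled, passed through a small bottleneck, and the resulting weights rescale each channel. This is a toy numeric illustration under assumed weights, not the paper's network.

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def se_block(feature_map, w1, w2):
    """feature_map: [channels][values]; w1, w2: bottleneck weight matrices."""
    # Squeeze: global average pooling per channel
    squeezed = [sum(ch) / len(ch) for ch in feature_map]
    # Excitation: FC -> ReLU -> FC -> sigmoid gives one weight per channel
    hidden = [max(0.0, sum(w * s for w, s in zip(row, squeezed))) for row in w1]
    weights = [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w2]
    # Scale: reweight each channel by its learned importance
    return [[v * w for v in ch] for ch, w in zip(feature_map, weights)]

fmap = [[1.0, 3.0], [2.0, 2.0]]                      # two tiny channels
out = se_block(fmap, w1=[[0.5, 0.5]], w2=[[1.0], [-1.0]])
print([round(v, 3) for v in out[0]])  # → [0.881, 2.642]
```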

An active learning method with difficulty learning mechanism for crack detection

  • Shu, Jiangpeng;Li, Jun;Zhang, Jiawei;Zhao, Weijian;Duan, Yuanfeng;Zhang, Zhicheng
    • Smart Structures and Systems / v.29 no.1 / pp.195-206 / 2022
  • Crack detection is essential for the inspection of existing structures, and crack segmentation based on deep learning is a significant solution. However, datasets are usually one of the key issues: when building a new dataset for deep learning, the laborious and time-consuming annotation of a large number of crack images is an obstacle. The aim of this study is to develop an approach that automatically selects a small portion of the most informative crack images from a large pool for annotation, instead of labeling all crack images. An active learning method with a difficulty learning mechanism for crack segmentation tasks is proposed. Experiments are carried out on a crack image dataset of a steel box girder, which contains 500 images of 320×320 size for training, 100 for validation, and 190 for testing. In the active learning experiments, the 500 training images act as the unlabeled pool. The acquisition function in our method is compared with traditional acquisition functions, i.e., Query-By-Committee (QBC), Entropy, and Core-set. Further, comparisons are made on four common segmentation networks: U-Net, DeepLabV3, Feature Pyramid Network (FPN), and PSPNet. The results show that when trained with the 200 (40%) most informative crack images selected by our method, the four segmentation networks achieve 92%-95% of the performance obtained when training with all 500 (100%) crack images. The acquisition function in our method measures the informativeness of unlabeled crack images more accurately than the four traditional acquisition functions at most active learning stages. Our method can automatically and accurately select the most informative images for annotation from many unlabeled crack images, and the dataset built from the selected 40% of crack images supports crack segmentation networks at more than 92% of full-data performance.
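Entropy, one of the traditional acquisition functions the paper compares against (not the proposed difficulty-learning function), can be sketched as below: images whose predicted crack probabilities sit near 0.5 are the most informative. The probability maps here are illustrative assumptions.

```python
import math

def entropy_score(probs):
    """Mean per-pixel binary entropy of a predicted crack probability map."""
    def h(p):
        if p in (0.0, 1.0):
            return 0.0
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    return sum(h(p) for p in probs) / len(probs)

def select_most_informative(pool, k):
    """Rank unlabeled images by entropy and return the top-k ids for annotation."""
    ranked = sorted(pool.items(), key=lambda kv: entropy_score(kv[1]), reverse=True)
    return [img_id for img_id, _ in ranked[:k]]

pool = {
    "img_confident": [0.01, 0.99, 0.02, 0.98],  # network is sure: low entropy
    "img_uncertain": [0.45, 0.55, 0.50, 0.48],  # near 0.5: high entropy
    "img_mixed":     [0.10, 0.90, 0.40, 0.60],
}
print(select_most_informative(pool, 2))  # → ['img_uncertain', 'img_mixed']
```

In each active learning round the selected images are annotated, added to the training set, and the segmentation network is retrained before the next selection.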

Design and Implementation of Deep-Learning-Based Image Tag for Semantic Image Annotation in Mobile Environment (모바일 환경에서 딥러닝을 활용한 의미기반 이미지 어노테이션을 위한 이미지 태그 설계 및 구현)

  • Shin, YoonMi;Ahn, Jinhyun;Im, Dong-Hyuk
    • Proceedings of the Korea Information Processing Society Conference / 2019.10a / pp.895-897 / 2019
  • With advances in mobile technology and the growing use of social media, countless multimedia contents are being created. To efficiently find the image a user wants among this large volume of content, semantic-based image retrieval is used. This retrieval technique uses the semantic information attached to images to find exactly the image a user is looking for. In this study, we annotate the semantic information an image can carry in a mobile environment and, to enrich the annotation of images on the device, implement automatic generation of various tags using deep learning. The generated annotation information is expanded into RDF triples through semantics-based tags, and semantic image retrieval can then be performed using SPARQL queries.
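The tag-to-triple-to-query pipeline the abstract describes can be illustrated with a toy in-memory triple store; the predicate name and tags below are hypothetical examples, and a real system would use an RDF library and actual SPARQL.

```python
# Deep-learning tags become RDF-style (subject, predicate, object) triples;
# a simple pattern match then plays the role of a SPARQL query.

triples = [
    ("img001", "hasTag", "dog"),
    ("img001", "hasTag", "beach"),
    ("img002", "hasTag", "dog"),
    ("img003", "hasTag", "cat"),
]

def query(pattern):
    """Match (s, p, o) where None acts like a SPARQL variable."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Analogous to: SELECT ?img WHERE { ?img :hasTag "dog" }
dog_images = [s for s, _, _ in query((None, "hasTag", "dog"))]
print(dog_images)  # → ['img001', 'img002']
```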

Automatic Annotation of Image using its Content (내용 정보를 이용한 이미지 자동 태깅)

  • Jang, Hyun-Woong;Cho, Soosun
    • Proceedings of the Korea Information Processing Society Conference / 2015.04a / pp.841-844 / 2015
  • Image recognition and content analysis are core technologies in image retrieval and multimedia data utilization. In particular, the amount of image data collected from smartphones, digital cameras, and dashboard cameras has recently increased rapidly, and the demand for technologies that can recognize images and analyze their content is growing accordingly. In this paper, we propose a method for automatically extracting tag information from an image using its content information. The method trains a CNN (Convolutional Neural Network), a machine learning technique, on ImageNet images and labels, and then extracts label information from new images. Through experiments, we confirmed that treating the extracted labels as tags and using them in retrieval can improve the accuracy of existing retrieval systems.
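Turning classifier outputs into tags can be sketched as below. The class labels, logits, and the top-k/threshold policy are illustrative assumptions; in the paper the scores come from a CNN trained on ImageNet.

```python
import math

def softmax(logits):
    """Convert raw class scores into probabilities."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def labels_to_tags(logits, labels, k=2, threshold=0.1):
    """Keep the top-k class labels whose probability clears the threshold."""
    probs = softmax(logits)
    ranked = sorted(zip(labels, probs), key=lambda lp: lp[1], reverse=True)
    return [label for label, p in ranked[:k] if p >= threshold]

# Hypothetical classifier outputs for one image
labels = ["golden retriever", "tennis ball", "park bench", "laptop"]
logits = [4.1, 2.9, 0.3, -1.2]
print(labels_to_tags(logits, labels))  # → ['golden retriever', 'tennis ball']
```

The resulting tags are then indexed alongside any user-supplied tags so that text queries can match image content.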

Design of Ontology-based Interactive Image Annotation System using Social Image Database (소셜 이미지 데이터베이스를 이용한 온톨로지 기반 대화형 이미지 어노테이션 시스템의 설계)

  • Jeong, Jin-Woo;Lee, Dong-Ho
    • Proceedings of the Korea Information Processing Society Conference / 2011.04a / pp.300-303 / 2011
  • Image annotation is one of the research areas being actively studied for effective image sharing and retrieval. Recently, various studies have attempted to perform image annotation and retrieval effectively by exploiting social image databases such as Flickr, which provide vast amounts of user-generated image data and tag information. In this paper, we propose a system that performs image annotation using a social image database together with an ontology for managing and sharing image knowledge. The proposed system draws meaningful concepts from the social image database for use in image annotation, and uses the ontology, a knowledge management framework, to perform more efficient image retrieval based on the semantic relationships among the images and concepts in the image database.

Mask Region-Based Convolutional Neural Network (R-CNN) Based Image Segmentation of Rays in Softwoods

  • Hye-Ji, YOO;Ohkyung, KWON;Jeong-Wook, SEO
    • Journal of the Korean Wood Science and Technology / v.50 no.6 / pp.490-498 / 2022
  • The current study aimed to verify the image segmentation ability for rays in tangential thin sections of conifers using artificial intelligence technology. The applied model was the Mask region-based convolutional neural network (Mask R-CNN), and softwoods (viz. Picea jezoensis, Larix gmelinii, Abies nephrolepis, Abies koreana, Ginkgo biloba, Taxus cuspidata, Cryptomeria japonica, Cedrus deodara, Pinus koraiensis) were selected for the study. To take digital pictures, thin sections 10-15 ㎛ thick were cut using a microtome and then stained with a 1:1 mixture of 0.5% astra blue and 1% safranin. In the digital images, rays were selected as detection objects, and the Computer Vision Annotation Tool was used to annotate the rays in the training images taken from the tangential sections of the woods. The Mask R-CNN applied to detecting rays performed well, with a mean average precision of 0.837, while requiring less than half the time needed for ground-truth annotation. During image analysis, however, a single ray was sometimes divided into two or more rays, which caused errors in the measurement of ray height. To improve the image processing algorithms, further work is required on combining the fragments of a ray into one ray segment and on increasing the precision of the boundary between rays and neighboring tissues.
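The fragment-merging post-processing the authors suggest as future work could look like the sketch below: vertically adjacent detections that overlap horizontally are unioned into one ray so its full height can be measured. The bounding boxes, gap tolerance, and merge rule are all assumptions for illustration.

```python
def x_overlap(a, b):
    """True if boxes (x1, y1, x2, y2) share any horizontal extent."""
    return a[0] <= b[2] and b[0] <= a[2]

def merge_fragments(boxes, gap=2):
    """Union vertically adjacent fragments that overlap horizontally."""
    boxes = sorted(boxes, key=lambda b: b[1])  # sort by top edge y1
    merged = []
    for box in boxes:
        for m in merged:
            if box[1] <= m[3] + gap and x_overlap(m, box):
                m[0] = min(m[0], box[0]); m[1] = min(m[1], box[1])
                m[2] = max(m[2], box[2]); m[3] = max(m[3], box[3])
                break
        else:
            merged.append(list(box))
    return [tuple(m) for m in merged]

# Two fragments of one ray plus a separate ray, as (x1, y1, x2, y2):
fragments = [(10, 5, 14, 20), (11, 21, 14, 35), (40, 5, 44, 30)]
print(merge_fragments(fragments))  # → [(10, 5, 14, 35), (40, 5, 44, 30)]
```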

Image retrieval based on a combination of deep learning and behavior ontology for reducing semantic gap (시맨틱 갭을 줄이기 위한 딥러닝과 행위 온톨로지의 결합 기반 이미지 검색)

  • Lee, Seung;Jung, Hye-Wuk
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.9 no.11 / pp.1133-1144 / 2019
  • Recently, the amount of image data on the Internet has rapidly increased due to the advancement of smart devices, and various approaches to effective image retrieval have been researched in this situation. Existing image retrieval methods simply detect the objects in an image and carry out retrieval based on the label of each object. Therefore, a semantic gap arises between the image a user desires and the image obtained from the retrieval result. To reduce this semantic gap, we connect a module for deep-learning-based multiple-object classification with a module for human behavior classification, and combine the connected modules with a behavior ontology. That is, we propose an image retrieval system that considers the relationships between objects by combining deep learning with a behavior ontology. We analyzed the experimental results using walking and running data to take dynamic behaviors in images into account. The proposed method can be extended to the automatic generation of image annotations, which can improve the accuracy of image retrieval results.
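The combination of object labels with a behavior ontology can be sketched as a lookup of scene concepts defined over detected objects and a classified behavior. The concept definitions below are hypothetical, not the paper's ontology.

```python
# Toy behavior ontology: each scene concept requires a set of objects
# (from the object classifier) and a behavior (from the behavior classifier).

ontology = {
    "jogging_in_park": {"objects": {"person", "tree"}, "behavior": "running"},
    "dog_walking":     {"objects": {"person", "dog"},  "behavior": "walking"},
}

def retrieve(detected_objects, behavior):
    """Return concepts whose required objects and behavior match the image."""
    return [concept for concept, spec in ontology.items()
            if spec["objects"] <= detected_objects and spec["behavior"] == behavior]

print(retrieve({"person", "dog", "tree"}, "walking"))  # → ['dog_walking']
```

Matching on the joint (objects, behavior) description rather than on isolated labels is what narrows the semantic gap the abstract describes.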

Efficient Image Retrieval using Minimal Spatial Relationships (최소 공간관계를 이용한 효율적인 이미지 검색)

  • Lee, Soo-Cheol;Hwang, Een-Jun;Byeon, Kwang-Jun
    • Journal of KIISE:Databases / v.32 no.4 / pp.383-393 / 2005
  • Retrieval of images from image databases by spatial relationship can be performed effectively through visual interface systems. In these systems, representing an image with 2D strings, derived from symbolic projection, provides an efficient and natural way to construct an image index and is also an ideal representation for visual queries. With this approach, retrieval is reduced to matching two symbolic strings. However, with 2D-string representations, the spatial relationships between objects in the image may not be specified exactly, and ambiguities arise when retrieving images of 3D scenes. To remove ambiguous descriptions of object spatial relationships, this paper describes images by considering spatial relationships using a spatial location algebra for 3D image scenes. We also remove repetitive spatial relationships using several reduction rules. A reduction mechanism based on these rules can be used in query processing systems that retrieve images by content, giving better precision and flexibility in image retrieval.
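The 2D-string indexing step can be sketched as below: symbolic projection sorts the objects along each axis, and the two resulting strings index the image. The scene objects and coordinates are illustrative assumptions.

```python
# Symbolic projection: objects are ordered along each axis to form the
# (u, v) 2D string, where "<" reads as left-of (u) or below (v).

def two_d_string(objects):
    """objects: {name: (x, y)} -> (u-string, v-string) by symbolic projection."""
    u = " < ".join(n for n, _ in sorted(objects.items(), key=lambda kv: kv[1][0]))
    v = " < ".join(n for n, _ in sorted(objects.items(), key=lambda kv: kv[1][1]))
    return u, v

scene = {"tree": (10, 40), "house": (50, 35), "sun": (80, 90)}
u, v = two_d_string(scene)
print(u)  # → tree < house < sun
print(v)  # → house < tree < sun
```

Retrieval then reduces to subsequence matching of a query's 2D string against the stored strings; the ambiguity discussed in the abstract arises because many 3D arrangements project to the same pair of strings.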

A Categorization Scheme of Tag-based Folksonomy Images for Efficient Image Retrieval (효과적인 이미지 검색을 위한 태그 기반의 폭소노미 이미지 카테고리화 기법)

  • Ha, Eunji;Kim, Yongsung;Hwang, Eenjun
    • KIISE Transactions on Computing Practices / v.22 no.6 / pp.290-295 / 2016
  • Recently, folksonomy-based image-sharing sites, where users cooperatively create and utilize image annotation tags, have been gaining popularity. Typically, these sites retrieve images for a user request using simple text-based matching and display the retrieved images as a photo stream. However, such tags are personal and subjective, and the images are not categorized, which results in poor retrieval accuracy and low user satisfaction. In this paper, we propose a categorization scheme for folksonomy images that can improve retrieval accuracy in tag-based image retrieval systems: images are classified by semantic similarity using the text information and image information generated on the folksonomy. To evaluate the performance of the proposed scheme, we collect folksonomy images, categorize them using text and image features, and compare the retrieval accuracy with that of existing systems.
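Categorizing images by tag similarity can be sketched with a simple cosine measure over tag sets and greedy grouping. The tags, the 0.5 threshold, and the greedy policy are illustrative assumptions; the paper also uses image features, which this text-only toy omits.

```python
import math

def cosine(tags_a, tags_b):
    """Cosine similarity of two tag sets treated as binary vectors."""
    if not tags_a or not tags_b:
        return 0.0
    return len(tags_a & tags_b) / math.sqrt(len(tags_a) * len(tags_b))

def categorize(images, threshold=0.5):
    """Greedy grouping: an image joins the first category similar enough."""
    categories = []  # list of (representative tag set, member ids)
    for img_id, tags in images.items():
        for rep_tags, members in categories:
            if cosine(tags, rep_tags) >= threshold:
                members.append(img_id)
                break
        else:
            categories.append((tags, [img_id]))
    return [members for _, members in categories]

images = {
    "a": {"beach", "sea", "sunset"},
    "b": {"sea", "beach", "holiday"},
    "c": {"city", "night", "lights"},
}
print(categorize(images))  # → [['a', 'b'], ['c']]
```

A query is then matched against category representatives first, which avoids scanning the whole photo stream and filters out images whose subjective tags merely coincide with the query text.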