• Title/Summary/Keyword: annotation


A Study on Display Technique of Sketch Annotation in 3D Virtual Space for Collaboration (3D 가상공간에서 의사표현을 위한 Sketch Annotation 제시기법에 관한 연구)

  • Sin, Eun-Joo; Choy, Yoon-Chul; Lim, Soon-Bum
    • Journal of Korea Multimedia Society / v.12 no.10 / pp.1466-1477 / 2009
  • With the development of 3D virtual space technology, diverse studies on 3D virtual space are being conducted, because virtual space is recognized as an appropriate technology for urban and architectural design. However, while urban and architectural design is carried out through the collaboration of various interested parties, studies that support such collaboration are lacking. For 3D virtual space to support collaboration, communication within the virtual space must be provided, and a quick and direct technique for expressing one's intentions is required; the sketch technique is effective for this. The ambiguous lines of a sketch are valuable in that they broaden the range of thoughts at the idea stage. However, the traditional sketch technique alone cannot adequately support collaboration in 3D virtual space, because sketches must be input onto the diverse shapes of 3D space. Accordingly, this study investigates a sketch annotation that can effectively deliver intentions in 3D space, and examines an annotation technique using a sketch-box.
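
As a rough illustration of the sketch-box idea described in the abstract above, the following Python sketch models an annotation as a 2D freehand sketch anchored to a plane placed in the 3D scene; the class and field names are hypothetical, not taken from the paper.

```python
# A minimal sketch of how a "sketch-box" annotation might be modeled: a 2D
# sketch (stroke polylines) anchored to a box placed in the 3D scene. The
# class and field names are hypothetical, not taken from the paper.
from dataclasses import dataclass, field
from typing import List, Tuple

Point2D = Tuple[float, float]          # stroke point on the sketch-box plane
Vec3 = Tuple[float, float, float]      # position/orientation in world space

@dataclass
class SketchBox:
    center: Vec3                       # where the box sits in the 3D scene
    normal: Vec3                       # facing direction of the sketch plane
    width: float
    height: float
    strokes: List[List[Point2D]] = field(default_factory=list)
    author: str = ""

    def add_stroke(self, points: List[Point2D]) -> None:
        """Append one freehand stroke drawn on the sketch-box plane."""
        self.strokes.append(list(points))

# Usage: a reviewer marks a spot on a building facade with a quick circle.
box = SketchBox(center=(12.0, 3.5, -7.0), normal=(0.0, 0.0, 1.0),
                width=2.0, height=1.5, author="reviewer_1")
box.add_stroke([(0.1, 0.1), (0.4, 0.5), (0.8, 0.2), (0.1, 0.1)])
```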


Collaborative Similarity Metric Learning for Semantic Image Annotation and Retrieval

  • Wang, Bin; Liu, Yuncai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.5 / pp.1252-1271 / 2013
  • Automatic image annotation has become an increasingly important research topic owing to its key role in image retrieval, yet it remains highly challenging on large-scale datasets with large variance. Practical approaches generally rely on similarity measures defined over images and on multi-label prediction methods. More specifically, those approaches usually 1) leverage similarity measures that are predefined or learned by optimizing for ranking or annotation, which might not be adaptive enough to the dataset; and 2) predict labels separately without taking label correlation into account. In this paper, we propose a method for image annotation through collaborative similarity metric learning from the dataset and modeling of its label correlation. The similarity metric is learned by simultaneously optimizing 1) image ranking using a structural SVM (SSVM) and 2) image annotation using correlated label propagation, both with respect to the similarity metric. The learned similarity metric, fully exploiting the available information in the dataset, improves the two collaborative components, ranking and annotation, and consequently the retrieval system itself. We evaluated the proposed method on the Corel5k, Corel30k, and EspGame databases. The results for annotation and retrieval show the competitive performance of the proposed method.
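
The abstract above combines a learned similarity metric with correlated label propagation. The toy numpy sketch below only illustrates the data flow of those two pieces, with a hand-set diagonal metric standing in for the one the paper learns via structural SVM.

```python
# A minimal numpy sketch of the two collaborative pieces described above:
# a (here hand-set) Mahalanobis-style metric and nearest-neighbour label
# propagation. The real method learns M with structural SVM and uses
# correlated label propagation; this only illustrates the data flow.
import numpy as np

def similarity(x, Y, M):
    """exp(-d_M(x, y)) for each row y of Y, with d_M the Mahalanobis distance."""
    diff = Y - x
    d2 = np.einsum('ij,jk,ik->i', diff, M, diff)
    return np.exp(-np.sqrt(np.maximum(d2, 0.0)))

def propagate_labels(x, Y, labels, M, k=5):
    """Score each tag by summing similarities of the k nearest images that
    carry it (labels is a binary image-by-tag matrix)."""
    sim = similarity(x, Y, M)
    nn = np.argsort(-sim)[:k]
    return sim[nn] @ labels[nn]        # one score per candidate tag

# Toy usage with random features and a diagonal stand-in for the learned metric.
rng = np.random.default_rng(0)
Y = rng.normal(size=(100, 16))                         # training image features
labels = (rng.random((100, 8)) < 0.2).astype(float)    # 8 candidate tags
M = np.diag(rng.random(16) + 0.5)
scores = propagate_labels(Y[0] + 0.1, Y, labels, M)
print(scores.round(2))
```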

An Image-Based Annotation for DICOM Standard Image (DICOM 표준 영상을 위한 이미지 기반의 주석)

  • Jang, Seok-Hwan; Kim, Whoi-Yul
    • Journal of Korea Multimedia Society / v.7 no.9 / pp.1321-1328 / 2004
  • In this article, we present a new DICOM object that enables image-based annotations in DICOM images. Since the proposed image-based annotation uses images themselves as annotations, various types of content, such as characters, sketches, and scanned images, can easily be imported into an annotation. The proposed annotation is inserted directly into the DICOM image, but it does not affect the original image quality because it uses an independent data channel. The proposed annotation is expected to be very useful to small and medium-sized clinics that cannot afford picture archiving and communication systems or electronic medical records.
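
One plausible way to realize such an independent data channel is to store the annotation image in a private DICOM block, leaving the pixel data untouched. The pydicom sketch below shows that idea; the private group, creator string, and element layout are illustrative, not the elements defined in the paper.

```python
# A minimal pydicom sketch: attach an image-based annotation to a DICOM file
# through a private data channel so the original pixel data stays untouched.
# Group 0x0009 and the creator string are illustrative choices.
import pydicom

def attach_annotation(dicom_path: str, annotation_png: str, out_path: str) -> None:
    ds = pydicom.dcmread(dicom_path)

    # Reserve a private block under our own creator string.
    block = ds.private_block(0x0009, "EXAMPLE_ANNOTATION", create=True)

    with open(annotation_png, "rb") as f:
        sketch_bytes = f.read()

    block.add_new(0x01, "LO", "overlay sketch")   # short description
    block.add_new(0x02, "OB", sketch_bytes)       # the annotation image itself

    ds.save_as(out_path)

# attach_annotation("ct_slice.dcm", "sketch.png", "ct_slice_annotated.dcm")
```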


Development of Python-based Annotation Tool Program for Constructing Object Recognition Deep-Learning Model (물체인식 딥러닝 모델 구성을 위한 파이썬 기반의 Annotation 툴 개발)

  • Lim, Song-Won; Park, Goo-man
    • Journal of Broadcast Engineering / v.25 no.3 / pp.386-398 / 2020
  • We developed an integrated annotation program that performs the data labeling process for object recognition deep learning models. The program uses Python's basic GUI library and provides a crawler function that collects data in real time. RetinaNet was used to implement an automatic annotation function. In addition, the program generates data labeling formats for Pascal VOC, YOLO, and RetinaNet. In an experiment with the proposed method, a domestic vehicle image dataset was built and applied to RetinaNet and YOLO as the training and test sets. The proposed system classified the vehicle model with an accuracy of about 94%.
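
For reference, the sketch below shows how a single bounding box can be emitted in two of the label formats the abstract mentions, a YOLO text line and a Pascal VOC style XML record, following the common public conventions rather than the paper's own code.

```python
# A minimal sketch of emitting two label formats for one bounding box:
# a normalized YOLO text line and a Pascal-VOC style XML record.
import xml.etree.ElementTree as ET

def to_yolo(box, img_w, img_h, class_id):
    """box = (xmin, ymin, xmax, ymax) in pixels -> normalized YOLO line."""
    xmin, ymin, xmax, ymax = box
    cx = (xmin + xmax) / 2 / img_w
    cy = (ymin + ymax) / 2 / img_h
    w = (xmax - xmin) / img_w
    h = (ymax - ymin) / img_h
    return f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

def to_pascal_voc(box, filename, img_w, img_h, label):
    """Build a Pascal-VOC style <annotation> element and return it as a string."""
    ann = ET.Element("annotation")
    ET.SubElement(ann, "filename").text = filename
    size = ET.SubElement(ann, "size")
    ET.SubElement(size, "width").text = str(img_w)
    ET.SubElement(size, "height").text = str(img_h)
    obj = ET.SubElement(ann, "object")
    ET.SubElement(obj, "name").text = label
    bnd = ET.SubElement(obj, "bndbox")
    for tag, val in zip(("xmin", "ymin", "xmax", "ymax"), box):
        ET.SubElement(bnd, tag).text = str(val)
    return ET.tostring(ann, encoding="unicode")

print(to_yolo((120, 80, 360, 240), 640, 480, class_id=3))
print(to_pascal_voc((120, 80, 360, 240), "car_001.jpg", 640, 480, "sedan"))
```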

A Voice-Annotation Technique in Mobile E-book for Reading-disabled People (독서장애인용 디지털음성도서를 위한 음성 어노테이션 기법)

  • Lee, Kyung-Hee; Lee, Jong-Woo; Lim, Soon-Bum
    • Journal of Digital Contents Society / v.12 no.3 / pp.329-337 / 2011
  • Digital talking books have been developed to enhance the reading experience of reading-disabled people. In existing digital talking books, however, annotations can be created only through screen interfaces. Screen annotation interfaces are of no use for reading-disabled people because they require the reader's eyesight. In this paper, we propose a voice annotation technique that can create notes and highlights at any playback time using hearing and voice commands. We design a location-determination technique that pinpoints where a voice annotation should be placed within the sentence being played. To verify the effectiveness of our voice annotation technique, we implemented a prototype on the Android platform. Tests with blindfolded users show that our system can locate the exact position where a voice annotation should be placed.
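
A minimal sketch of the location-determination idea, assuming each sentence's cumulative end time in the audio stream is known: the playback time at which the voice command is issued is mapped to the sentence currently being read. The timing table is made up, and the paper's actual algorithm is not reproduced.

```python
# A minimal sketch: map the playback time of a voice command to the sentence
# currently being read, using a table of cumulative sentence end times.
import bisect

def locate_sentence(sentence_end_times, command_time):
    """sentence_end_times: cumulative end time (seconds) of each sentence,
    in playback order. Returns the index of the sentence being played."""
    i = bisect.bisect_left(sentence_end_times, command_time)
    return min(i, len(sentence_end_times) - 1)

end_times = [4.2, 9.8, 15.1, 22.7]       # four sentences in the audio stream
print(locate_sentence(end_times, 10.5))  # -> 2 (the third sentence)
```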

Towards Improved Performance on Plant Disease Recognition with Symptoms Specific Annotation

  • Dong, Jiuqing; Fuentes, Alvaro; Yoon, Sook; Kim, Taehyun; Park, Dong Sun
    • Smart Media Journal / v.11 no.4 / pp.38-45 / 2022
  • Object detection models have become the tool of choice for plant disease detection in precision agriculture. Most existing research improves performance by refining networks and optimizing the loss function, but the data-centric part of a project also needs more investigation. In this paper, we propose a systematic strategy with three different annotation methods for plant disease detection: local, semi-global, and global labels. Experimental results on our paprika disease dataset show that single-class annotation with semi-global boxes may improve accuracy. In addition, we studied the noise factor in the labeling process. An ablation study shows that annotation noise within 10% is acceptable for maintaining good performance. Overall, this data-centric numerical analysis helps us understand the significance of annotation methods, and it provides practitioners a way to obtain higher performance and reduce annotation costs on plant disease detection tasks. Our work encourages researchers to pay more attention to label quality and the essential issues of labeling methods.
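
To make the noise ablation concrete, the sketch below simulates annotation noise by jittering each box corner by up to a chosen fraction of the box size; the exact noise model used in the paper is not specified here.

```python
# A minimal sketch of simulating label noise: each corner of a bounding box
# is shifted by up to `noise` times the box width/height.
import random

def jitter_box(box, noise=0.10, seed=None):
    """box = (xmin, ymin, xmax, ymax); noise is the maximum relative shift."""
    rng = random.Random(seed)
    xmin, ymin, xmax, ymax = box
    w, h = xmax - xmin, ymax - ymin
    dx = lambda: rng.uniform(-noise, noise) * w
    dy = lambda: rng.uniform(-noise, noise) * h
    return (xmin + dx(), ymin + dy(), xmax + dx(), ymax + dy())

print(jitter_box((50, 40, 150, 120), noise=0.10, seed=1))
```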

On Developing a Semantic Annotation Tool for Managing Metadata of Web Documents based on XMP and Ontology (웹 문서의 메타데이터 관리를 위한 XMP 및 온톨로지 기반의 시맨틱 어노테이션 지원도구 개발)

  • Yang, Kyoung-Mo; Hwang, Suk-Hyung; Choi, Sung-Hee
    • Journal of the Korea Academia-Industrial cooperation Society / v.10 no.7 / pp.1585-1600 / 2009
  • The goal of the Semantic Web is to provide efficient and effective semantic search and web services based on machine-processable semantic information about web resources. Therefore, the process of creating and adding computer-understandable metadata for a variety of web content, namely semantic annotation, is one of the fundamental technologies of the Semantic Web. To manage annotation metadata, the direct approach of embedding metadata into the document itself is now commonly used. However, most existing semantic annotation tools for web documents work mainly with HTML documents, and most do not support semantic search using the metadata. Based on these problems and previous work, we propose the Ontology-based Semantic Annotation tool (OSA) to efficiently support semantic annotation for web documents such as HTML and PDF. We define a semantic annotation model that represents ontological semantic information using RDFS (RDF Schema). Based on the XMP (eXtensible Metadata Platform) standard, the model is encoded directly into the document. Using OSA with XMP, users can perform semantic annotation on web documents while keeping the annotation metadata compatible and manageable. Eventually, the integrated semantic annotation metadata can be used effectively in semantic search over a variety of web content.
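
A minimal rdflib sketch of the kind of annotation model described above: ontology terms describe a web document and the graph is serialized as RDF/XML, the syntax an XMP packet carries. The namespace and property names are illustrative, not the paper's actual schema.

```python
# A minimal rdflib sketch: RDFS-style annotation metadata for a web document,
# serialized as RDF/XML (the form an XMP packet would embed).
# The annot: namespace and its terms are illustrative.
from rdflib import Graph, Literal, Namespace, RDF, RDFS, URIRef

ANNOT = Namespace("http://example.org/annotation#")

g = Graph()
g.bind("annot", ANNOT)

doc = URIRef("http://example.org/docs/report.pdf")
g.add((doc, RDF.type, ANNOT.AnnotatedDocument))
g.add((doc, RDFS.label, Literal("Quarterly report")))
g.add((doc, ANNOT.hasTopic, ANNOT.SemanticWeb))
g.add((ANNOT.SemanticWeb, RDFS.label, Literal("Semantic Web")))

# This RDF/XML string is what would be wrapped in an XMP packet and
# embedded into the document's metadata section.
print(g.serialize(format="xml"))
```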

A Collaborative Video Annotation and Browsing System using Linked Data (링크드 데이터를 이용한 협업적 비디오 어노테이션 및 브라우징 시스템)

  • Lee, Yeon-Ho; Oh, Kyeong-Jin; Sean, Vi-Sal; Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.17 no.3 / pp.203-219 / 2011
  • Previously, common users simply wanted to watch video content without any specific requirements or purposes. Today, however, users try to learn and discover more about the things that appear in a video while watching it. The demand for finding multimedia or browsing information about objects of interest is therefore spreading along with the increasing use of multimedia such as video, which is available not only on internet-capable devices such as computers but also on smart TVs and smartphones. To meet these requirements, labor-intensive annotation of objects in video content is inevitable, and many researchers have actively studied methods of annotating the objects that appear in video. In keyword-based annotation, related information about an object appearing in the video is added immediately, and annotation data including all related information about the object must be managed individually; users have to input all related information directly. Consequently, when a user browses for information related to the object, only the limited resources that exist in the annotated data can be found, and placing annotations on objects requires a huge amount of work from the user. To reduce this workload and minimize the effort involved in annotation, existing object-based annotation work has attempted automatic annotation using computer vision techniques such as object detection, recognition, and tracking. With such techniques, however, the wide variety of objects appearing in video content must all be detected and recognized, and fully automated annotation still faces difficulties. To overcome these difficulties, we propose a system consisting of two modules. The first is an annotation module that enables many annotators to collaboratively annotate the objects in video content and access semantic data through Linked Data. Annotation data managed by the annotation server is represented using an ontology so that the information can easily be shared and extended. Since the annotation data does not include all the relevant information about an object, existing objects in Linked Data are simply connected to the objects appearing in the video content to obtain all the related information. In other words, annotation data containing only a URI and metadata such as position, time, and size are stored on the annotation server; when the user needs other related information about the object, it is retrieved from Linked Data through the relevant URI. The second module enables viewers to browse interesting information about an object using the annotation data collaboratively generated by many users while watching the video. With this system, a query is automatically generated through simple user interaction, all the related information is retrieved from Linked Data, and the additional information about the object is offered to the user. In a future Semantic Web environment, the proposed system is expected to establish a better video content service environment by offering users relevant information about the objects that appear on the screen of any internet-capable device such as a PC, smart TV, or smartphone.
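
A minimal sketch of the second module's idea, assuming DBpedia as the Linked Data source: an annotation stores only a URI plus position/time metadata, and richer facts are fetched on demand with SPARQL (here via the SPARQLWrapper package). The annotation fields shown are illustrative.

```python
# A minimal sketch: an annotation record holds only a Linked Data URI plus
# position/time metadata; richer information is fetched from DBpedia.
from SPARQLWrapper import SPARQLWrapper, JSON

annotation = {
    "uri": "http://dbpedia.org/resource/Eiffel_Tower",  # linked object
    "time": 73.4, "x": 0.42, "y": 0.31, "w": 0.10, "h": 0.25,
}

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery(f"""
    SELECT ?abstract WHERE {{
        <{annotation['uri']}> <http://dbpedia.org/ontology/abstract> ?abstract .
        FILTER (lang(?abstract) = "en")
    }}
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

for row in results["results"]["bindings"]:
    print(row["abstract"]["value"][:200])
```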

Rough Computational Annotation and Hierarchical Conserved Area Viewing Tool for Genomes Using Multiple Relation Graph. (다중 관계 그래프를 이용한 유전체 보존영역의 계층적 시각화와 개략적 전사 annotation 도구)

  • Lee, Do-Hoon
    • Journal of Life Science / v.18 no.4 / pp.565-571 / 2008
  • Owing to the rapid development of bioinformatics technologies, various biological data have been produced in silico, and complicated, large-scale biodata are now used to meet researchers' requirements. Developing visualization and annotation tools for such data is still a hot issue, even though it has been studied for a decade; the diversity of data and the varied requirements of users make it hard to develop a general-purpose tool. In this paper, I propose a novel system, the Genome Viewer and Annotation tool (GenoVA), to annotate and visualize relations among genomes using known information and a multiple relation graph. Several multiple-alignment tools exist, but they lose conserved areas because of the complexity of their constraints. GenoVA extracts all associated information between every pair of genomes by extending pairwise alignment. A high-frequency conserved area with a high BLAST score becomes a block node of the relation graph, and the system connects associated block nodes to represent the multiple relation graph. The system also shows known information such as COG assignments, genes, and the hierarchical path of each block node. In this way, the system can annotate missed areas and unknown genes by navigating the clusters formed by such block nodes. Ten bacterial genomes were used in experiments to extract the features for visualizing and annotating among them. GenoVA also supports simple, rough computational annotation of a new genome.
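
A small networkx sketch of a multiple relation graph as described above: conserved regions become block nodes, blocks that align between two genomes are connected, and connected components mark conserved areas shared across genomes. The scores and thresholds are made up.

```python
# A minimal networkx sketch of a "multiple relation graph": conserved regions
# become block nodes, and blocks that align between genomes are connected.
import networkx as nx

# (genome, start, end, blast_score) for a few illustrative conserved regions
blocks = [
    ("genomeA", 1200, 1800, 950),
    ("genomeB", 3400, 4000, 910),
    ("genomeC", 150, 760, 880),
]

G = nx.Graph()
for genome, start, end, score in blocks:
    G.add_node(f"{genome}:{start}-{end}", genome=genome, score=score)

# Connect block nodes whose pairwise alignment score passes a threshold.
G.add_edge("genomeA:1200-1800", "genomeB:3400-4000", score=905)
G.add_edge("genomeB:3400-4000", "genomeC:150-760", score=870)

# Each cluster of connected blocks marks one conserved area across genomes.
for cluster in nx.connected_components(G):
    print(sorted(cluster))
```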

A study on the Application of XML based Annotation for National Base Digital Map (XML기반 국가공간데이터의 주석 활용에 관한 연구)

  • Kwon, Gu-Ho; Seok, Hyun-Jeong; Kim, Young-Sup
    • Journal of Korea Spatial Information System Society / v.4 no.1 s.7 / pp.15-25 / 2002
  • The OGC (OpenGIS Consortium), a standardization organization for geographic data, has studied standards for geographic data such as GML and GML-based annotation for images and maps. Map annotation is applicable in various ways: understanding geographic data, decision making, and the exchange of communication. For instance, map annotation can highlight a tour course with symbols or explain it as text on a tourist map. This study suggests an annotation methodology for the national digital map and presents a simple implementation of it. First, it suggests a way of updating the OGC annotation schema so that it corresponds with the DXF format and of creating a GML application schema based on the updated OGC annotation schema. It also suggests a way of converting instance documents of the annotated map into VML documents with XSLT for display. Further study is needed to support other formats in addition to DXF, and to manage the history of updated map entities with annotation in local governments' UIS (Urban Information System) from a practical standpoint.
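
A minimal lxml sketch of the XSLT step mentioned above, transforming a toy GML-like annotation element into a VML-like element for display; the element names and stylesheet are illustrative, not the study's actual schemas.

```python
# A minimal lxml sketch: apply an XSLT stylesheet to a toy GML-like
# annotation document and emit a VML-like element for display.
from lxml import etree

gml_doc = etree.XML(b"""
<Annotation xmlns:gml="http://www.opengis.net/gml">
  <gml:description>Tour course: City Hall to Riverside Park</gml:description>
</Annotation>
""")

xslt_doc = etree.XML(b"""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:gml="http://www.opengis.net/gml"
    xmlns:v="urn:schemas-microsoft-com:vml">
  <xsl:template match="/Annotation">
    <v:textbox><xsl:value-of select="gml:description"/></v:textbox>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(xslt_doc)
vml_doc = transform(gml_doc)
print(etree.tostring(vml_doc, pretty_print=True).decode())
```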
