• Title/Summary/Keyword: Position of Viewers


Annotation Method based on Face Area for Efficient Interactive Video Authoring (효과적인 인터랙티브 비디오 저작을 위한 얼굴영역 기반의 어노테이션 방법)

  • Yoon, Ui Nyoung; Ga, Myeong Hyeon; Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.83-98 / 2015
  • Many TV viewers turn to portal sites to retrieve information related to a broadcast while they watch it. However, finding the information they want takes considerable time, because the internet presents a great deal of material that is not required, and this cannot satisfy users who want to consume information immediately. Interactive video is being actively investigated to solve this problem. An interactive video provides clickable objects, areas, or hotspots for interacting with users: when a user clicks an object in the video, additional related information is shown instantly. Authoring an interactive video with an authoring tool involves three basic steps: (1) create an augmented object; (2) set the object's area and the time at which it is displayed on the video; and (3) set an interactive action, such as a link to a page or a hyperlink. Users of existing authoring tools such as Popcorn Maker and Zentrick spend a great deal of time in step (2). wireWAX saves much of the time needed to set an object's location and display time because it uses a vision-based annotation method, but users must wait for object detection and tracking to finish. It is therefore necessary to reduce the time spent in step (2) by effectively combining the benefits of manual and vision-based annotation. This paper proposes a novel annotation method that allows an annotator to annotate easily based on face areas. The method consists of two steps: a pre-processing step and an annotation step. Pre-processing is needed because the system detects shots so that users can find the contents of the video easily; it proceeds as follows: 1) extract shots from the video frames using a color-histogram-based shot boundary detection method; 2) cluster the shots by similarity and align them into shot sequences; and 3) detect and track faces in every shot of each sequence and store the results in the shot sequence metadata. After pre-processing, the user annotates objects as follows: 1) the annotator selects a shot sequence and then a keyframe of a shot in that sequence; 2) the annotator places objects at positions relative to the actor's face on the selected keyframe, after which the same objects are annotated automatically, wherever a face area has been detected, until the end of the shot sequence; and 3) the user assigns additional information to the annotated objects. In addition, this paper designs a feedback model to compensate for defects that may occur after annotation, namely wrongly aligned shots, wrongly detected faces, and inaccurate object locations, and an interpolation method restores the positions of objects deleted during feedback. After feedback, the user saves the annotated object data as interactive object metadata. Finally, the paper presents an interactive video authoring system implemented to verify the performance of the proposed annotation method. The experiments analyze object annotation time and report a user evaluation. On average, the proposed tool annotated objects twice as fast as existing authoring tools; in some cases annotation took longer than with existing tools because wrong shots were detected during pre-processing. The usefulness and convenience of the system were measured through a user evaluation aimed at users experienced with interactive video authoring systems: 19 recruited experts answered 11 questions drawn from the CSUQ (Computer System Usability Questionnaire), which was designed by IBM for evaluating systems. The evaluation showed that the proposed tool is about 10% more useful for authoring interactive video than the other interactive video authoring systems.
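The color-histogram shot boundary detection named in the pre-processing step above is a standard technique. The sketch below shows one plausible way to implement it with OpenCV; the HSV histogram configuration and the similarity threshold are illustrative assumptions, not the parameters reported by the authors.

```python
# A minimal sketch of color-histogram-based shot boundary detection.
# Threshold and bin counts are assumed values, not the paper's settings.
import cv2

def detect_shot_boundaries(video_path, threshold=0.4, bins=16):
    """Return frame indices where the HSV color histogram changes sharply."""
    cap = cv2.VideoCapture(video_path)
    boundaries, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # 2D histogram over hue and saturation, normalized for comparison
        hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            # Low correlation between consecutive histograms suggests a shot cut
            similarity = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
            if similarity < 1.0 - threshold:
                boundaries.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return boundaries
```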
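The annotation step places an object at a position relative to the actor's face and propagates it through the shot sequence, with interpolation recovering positions for frames whose face boxes were removed by the feedback step. The following sketch illustrates that idea under assumed data structures (a per-frame face-box dictionary and a face-relative offset); it is not the authors' implementation.

```python
# Illustrative sketch: propagate a face-relative annotation and linearly
# interpolate over frames whose face box was deleted (e.g. by feedback).
def propagate_annotation(face_boxes, offset):
    """face_boxes: {frame_index: (x, y, w, h) or None}
    offset: (dx, dy) relative to the face box, as fractions of its width/height.
    Returns {frame_index: (x, y)} object positions."""
    positions = {}
    for frame, box in face_boxes.items():
        if box is not None:
            x, y, w, h = box
            positions[frame] = (x + offset[0] * w, y + offset[1] * h)

    known = sorted(positions)
    result = dict(positions)
    for frame in sorted(face_boxes):
        if frame in result:
            continue
        prev = max((f for f in known if f < frame), default=None)
        nxt = min((f for f in known if f > frame), default=None)
        if prev is not None and nxt is not None:
            t = (frame - prev) / (nxt - prev)
            (px, py), (nx, ny) = positions[prev], positions[nxt]
            result[frame] = (px + t * (nx - px), py + t * (ny - py))
    return result

# Example: face detected in frames 0 and 4, removed in frames 1-3 by feedback
faces = {0: (100, 80, 50, 50), 1: None, 2: None, 3: None, 4: (140, 80, 50, 50)}
print(propagate_annotation(faces, offset=(1.2, 0.0)))
```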

A Collaborative Video Annotation and Browsing System using Linked Data (링크드 데이터를 이용한 협업적 비디오 어노테이션 및 브라우징 시스템)

  • Lee, Yeon-Ho; Oh, Kyeong-Jin; Sean, Vi-Sal; Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.17 no.3 / pp.203-219 / 2011
  • In the past, most users simply watched video content without any specific requirement or purpose. Today, however, viewers try to learn more about the things that appear in a video while they watch it. As multimedia such as video becomes available not only on internet-capable devices such as computers but also on smart TVs and smartphones, the demand for finding multimedia and for browsing information about the objects that appear in it keeps growing. Meeting these requirements calls for labor-intensive annotation of the objects in video content, so many researchers have actively studied methods for annotating the objects that appear in video. In keyword-based annotation, information related to an object appearing in the video content is attached directly, and the annotation data containing all of that related information must be managed individually: users have to enter every piece of related information themselves. Consequently, when a user browses for information related to an object, only the limited resources that exist in the annotation data can be found, and placing annotations for objects requires a huge amount of work from the user. To reduce this workload and minimize the effort involved in annotation, existing object-based annotation approaches attempt automatic annotation using computer vision techniques such as object detection, recognition, and tracking. With such techniques, however, the wide variety of objects that can appear in video content must all be detected and recognized, and fully automated annotation still faces unresolved difficulties. To overcome these difficulties, we propose a system consisting of two modules. The first is an annotation module that enables many annotators to collaboratively annotate the objects in video content and connect them to semantic data using Linked Data. Annotation data managed by the annotation server is represented with an ontology so that the information can easily be shared and extended. Because the annotation data does not itself include all the relevant information about an object, objects appearing in the video content are simply connected to existing resources in Linked Data to obtain that information: only a URI and metadata such as position, time, and size are stored on the annotation server, and when a user needs other related information about the object, it is retrieved from Linked Data through the relevant URI. The second module enables viewers to browse interesting information about an object, using the annotation data collaboratively generated by many users, while watching the video. With this system, a query is generated automatically through simple user interaction, all the related information is retrieved from Linked Data, and the additional information about the object is offered to the user. In the future Semantic Web environment, the proposed system is expected to establish a better video content service environment by offering users relevant information about the objects that appear on the screen of any internet-capable device, such as a PC, smart TV, or smartphone.
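The abstract above describes storing only a URI plus position, time, and size on the annotation server and fetching everything else from Linked Data at browsing time. The sketch below illustrates that pattern using DBpedia's public SPARQL endpoint and the SPARQLWrapper library; the record fields, the endpoint, and the query are illustrative assumptions rather than the authors' ontology or server API.

```python
# Illustrative only: a minimal annotation record plus a Linked Data lookup.
from dataclasses import dataclass
from SPARQLWrapper import SPARQLWrapper, JSON

@dataclass
class VideoAnnotation:
    resource_uri: str   # URI of the object in Linked Data (e.g. a DBpedia resource)
    start_sec: float    # time the object appears in the video
    end_sec: float      # time the object disappears
    x: int              # position and size of the clickable region on the frame
    y: int
    w: int
    h: int

def fetch_abstracts(annotation: VideoAnnotation, lang: str = "en"):
    """Retrieve descriptive text about the annotated object from DBpedia."""
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery(f"""
        PREFIX dbo: <http://dbpedia.org/ontology/>
        SELECT ?abstract WHERE {{
            <{annotation.resource_uri}> dbo:abstract ?abstract .
            FILTER (lang(?abstract) = "{lang}")
        }}
    """)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return [b["abstract"]["value"] for b in results["results"]["bindings"]]

# Example: a video region linked to the DBpedia resource for the Eiffel Tower
ann = VideoAnnotation("http://dbpedia.org/resource/Eiffel_Tower",
                      start_sec=12.0, end_sec=18.5, x=240, y=80, w=120, h=300)
print(fetch_abstracts(ann))
```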

A Review Examining the Dating, Analysis of the Painting Style, Identification of the Painter, and Investigation of the Documentary Records of Samsaebulhoedo at Yongjusa Temple (용주사(龍珠寺) <삼세불회도(三世佛會圖)> 연구의 연대 추정과 양식 분석, 작가 비정, 문헌 해석의 검토)

  • Kang, Kwanshik
    • MISULJARYO - National Museum of Korea Art Journal / v.97 / pp.14-54 / 2020
  • The study of the Samsaebulhoedo (painting of the Assembly of the Buddhas of the Three Ages) at Yongjusa Temple has focused on dating it, analyzing its painting style, identifying its painter, and scrutinizing the related documents. Greater coherence, however, could be achieved with additional support from empirical evidence and logical consistency. Recent studies of the Samsaebulhoedo at Yongjusa Temple postulating that the painting could have been produced by a monk-painter in the late nineteenth century, or that an original version produced in 1790 could have been retouched by a painter in the 1920s using a Western painting style, lack such empirical proof and logic. Although King Jeongjo's son had not yet been installed as crown prince, the Samsaebulhoedo at Yongjusa Temple contained a conventional written prayer wishing a long life for the king, queen, and crown prince: "May his majesty the King live long / May her majesty the Queen live long / May his highness the Crown Prince live long" (主上殿下壽萬歲, 王妃殿下壽萬歲, 世子邸下壽萬歲). Later, this phrase was erased using cinnabar and revised to include unusual content in an exceptional order: "May his majesty the King live long / May his highness the King's Affectionate Mother (Jagung) live long / May her majesty the Queen live long / May his highness the Crown Prince live long" (主上殿下壽萬歲, 慈宮邸下壽萬歲, 王妃殿下壽萬歲, 世子邸下壽萬歲). A comprehensive comparison of the formats and contents of written prayers found on late Joseon Buddhist paintings, together with a careful analysis of royal liturgy during the reign of King Jeongjo, reveals the Samsaebulhoedo at Yongjusa Temple to be an original version produced at the time of the temple's founding in 1790. A comparative analysis of the formats, iconography, styles, aesthetic sensibilities, and techniques found in Buddhist paintings and in paintings by Joseon court painters of the eighteenth and nineteenth centuries likewise shows that the Samsaebulhoedo at Yongjusa Temple bears features characteristic of paintings produced around 1790, which corresponds to the results of the analysis of the written prayer. Buddhist paintings created up to the early eighteenth century show deities whose sizes are determined by their religious status and a two-dimensional, conceptual composition based on the traditional perspective of depicting close objects in the lower section and distant objects above. This Samsaebulhoedo, however, systematically places the Buddhist deities within a three-dimensional space constructed by applying linear perspective. Through the extensive employment of chiaroscuro as found in Western painting, it expresses white highlights and shadows, evoking the feeling that the magnificent world of the Buddhas of the Three Ages actually unfolds in front of viewers. Since the inner order of linear perspective and the outer illusion of chiaroscuro shading are intimately related to each other, it is difficult to believe that the white highlights were a later addition. Moreover, the creative convergence of highly developed Western painting style and techniques on display in this Samsaebulhoedo could only have been achieved by late-Joseon court painters working during the reign of King Jeongjo, including Kim Hongdo, Yi Myeong-gi, and Kim Deuksin. Deungun, the head monk of Yongjusa Temple, wrote Yongjusa sajeok (History of Yongjusa Temple) by compiling the historical records on the temple that had been transmitted since its founding; in it, Deungun recorded that Kim Hongdo painted the Samsaebulhoedo as though it were an established historical fact. The Joseon royal court's official records, Ilseongnok (Daily Records of the Royal Court and Important Officials) and Suwonbu jiryeong deungnok (Suwon Construction Records), indicate that Kim Hongdo, Yi Myeong-gi, and Kim Deuksin all served as supervisors (gamdong) for the production of Buddhist paintings. Since, within Joseon's hierarchical administrative system, it was considered improper to allow court painters holding government positions to create Buddhist paintings, which had previously been produced by monk-painters, they were appointed as gamdong in name only to avoid political liability; in reality, the court painters were ordered to create the Buddhist paintings. During their reigns, King Yeongjo and King Jeongjo summoned the literati painters Jo Yeongseok and Kang Sehwang to serve as gamdong for the production of royal portraits and requested that they paint these portraits as well. Thus, the boundary between the concept of supervision and that of painting occasionally blurred: supervision did not completely preclude painting, and a gamdong could also serve as a painter. In this light, the historical records in Yongjusa sajeok are not inconsistent with those in Ilseongnok, Suwonbu jiryeong deungnok, and a prayer written by Hwang Deok-sun that was found inside the canopy in Daeungjeon Hall at Yongjusa Temple: these records convey the same content in different forms, as required by their respective purposes and contexts. This approach to the Samsaebulhoedo at Yongjusa Temple will lead to a more coherent explanation of the painting's dating, style, painter, and documentary records, grounded in empirical evidence and logical consistency.