• Title/Abstract/Keyword: Visual Image Content

Search results: 283 items

영상콘텐츠에서 테마공간으로의 전환 양상: 각색을 통한 재현을 중심으로 (Conversional Aspect of the Theme Space Based on Visual Image Content: A Focus on Representation through Adaptation)

  • 신동희;김희경
    • 한국콘텐츠학회논문지, Vol. 12, No. 4, pp. 186-197, 2012
  • This paper places adaptation at the core of the conversion from visual image content to spatial content, and aims to clarify how visual image content, the source content of a theme space, should be adapted and re-presented as space. Although studies on storytelling are being produced in great numbers, and there is ample adaptation research on novels turned into films or TV dramas and on films turned into games (and vice versa), research on adaptation methods for converting visual image content into theme spaces, and on adaptation as a stage preceding storytelling, is still scarce. We first define adaptation and then apply the adaptation methods of Giannetti and Dudley to the conversion of visual image content into theme spaces. After examining the characteristics of theme spaces, we show through case analysis that a theme space is ultimately a site where the story, images, and actions of visual image content are re-presented, and we analyze which forms of adaptation underlie each re-presentation. The study finds that adaptation must precede spatial storytelling, and that, unlike visual image content told from a third-person point of view, the core of theme-space representation from a first-person point of view is experience rather than viewing. This paper therefore shows that the conversion from visual image content to theme space is not simple imitation but re-creation as new content. We expect that future analyses of various theme spaces at a more concrete, case-specific level will make the conversion between the two work more effectively and allow diverse adaptation methods to be applied in practice.

내용기반 영상정보 검색기술에 관한 이론적 고찰 (A Study on Content-based Image Information Retrieval Technique)

  • 노진구
    • 한국도서관정보학회지, Vol. 31, No. 1, pp. 229-258, 2000
  • The growth of digital image and video archives is increasing the need for tools that efficiently search through large amounts of visual data. Retrieval of visual data is an important issue in multimedia databases, and content-based retrieval methods are used to search visual data efficiently. This paper introduces the fundamental techniques that use characteristic values (features) of image data and the indexing techniques required for content-based visual retrieval. In addition, it introduces content-based visual retrieval systems for use in digital libraries.
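As a concrete illustration of the feature-plus-index idea surveyed above, the following minimal Python sketch uses a global HSV color histogram as the characteristic value and a brute-force similarity lookup as the index; the feature choice, bin counts, and OpenCV-based implementation are illustrative assumptions, not the survey's prescription.

```python
import cv2
import numpy as np

def color_histogram(path, bins=(8, 8, 8)):
    """Characteristic value: a normalized HSV color histogram of the whole image."""
    hsv = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins), [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def build_index(paths):
    """Index: precomputed feature vectors keyed by image path."""
    return {path: color_histogram(path) for path in paths}

def search(index, query_path, top_k=5):
    """Rank indexed images by histogram correlation with the query image."""
    query = color_histogram(query_path)
    scores = {path: cv2.compareHist(query, feat, cv2.HISTCMP_CORREL)
              for path, feat in index.items()}
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)[:top_k]
```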


스마트 센서와 시각적 기술자를 결합한 사진 검색 시스템 (Photo Retrieval System using Combination of Smart Sensor and Visual Descriptor)

  • 이용환;김흥준
    • 반도체디스플레이기술학회지, Vol. 13, No. 2, pp. 45-52, 2014
  • This paper proposes an efficient photo retrieval system that automatically builds an index for searching relevant images, using a combination of geo-coded information, the direction and location of the image capture device, and content-based visual features. A photo is labeled with its GPS (Global Positioning System) coordinates and the direction of the camera view at the moment of capture, and this label is used to generate a geo-spatial index with three core elements: latitude, longitude, and viewing direction. Content-based visual features are then extracted and combined with the geo-spatial information for indexing and retrieving the photos. For query processing, the proposed method adopts a progressive two-step approach that filters the relevant subset before applying a content-based ranking function. To evaluate the proposed scheme, we assess simulation performance in terms of average precision and F-score on a natural photo collection. Compared to retrieval using only visual features, the proposed approach showed an improvement of 20.8%, and the experimental results show an enhancement of around 7.2% in retrieval effectiveness over previous work. These results reveal that combining context and content analysis is markedly more efficient and meaningful than using visual features alone for image search.
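The two-step query flow described above (geo-spatial filtering followed by content-based ranking) can be sketched as follows; the field names, distance and bearing thresholds, and the plug-in similarity function are illustrative assumptions rather than the paper's exact formulation.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two GPS coordinates."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def bearing_gap(a, b):
    """Smallest absolute difference between two compass bearings, in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def geo_filter(photos, query, max_km=1.0, max_bearing_deg=45.0):
    """Step 1: keep photos taken near the query location with a similar viewing direction."""
    return [p for p in photos
            if haversine_km(p["lat"], p["lon"], query["lat"], query["lon"]) <= max_km
            and bearing_gap(p["bearing"], query["bearing"]) <= max_bearing_deg]

def rank_by_content(candidates, query, similarity):
    """Step 2: order the filtered subset with a content-based similarity function."""
    return sorted(candidates,
                  key=lambda p: similarity(p["features"], query["features"]),
                  reverse=True)
```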

A study on the expansibility of sound-responsive visual art contents

  • Jiang, Qianqian;Chung, Jean-Hun
    • International Journal of Advanced Smart Convergence, Vol. 11, No. 2, pp. 88-94, 2022
  • The relationship between sound and vision was experimentally confirmed by the physicist Ernst Florens Friedrich Chladni as early as the 18th century and formally entered systematic research. With the development of emerging media technology, sound-responsive visual content is no longer limited to a single form of visual interaction based on the vibration of sound; it now shows a diversified and scalable development trend serving different purposes in many fields. This study analyzes the development and changes of sound-based visual art content from its early stages to the present, and uses case analysis to examine the development characteristics of sound-responsive visual art content in different fields and scene environments as influenced by interactive media, new media technologies, and devices. Through this research, it is expected that sound-responsive visual art content can continue to develop and extend in its existing fields, while the scalability of its application in further fields is explored.
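The basic mechanism behind such sound-reactive visuals, a measured property of the audio signal driving a visual parameter, can be sketched minimally as below; the synthetic test tone, RMS loudness measure, and circle-radius mapping are illustrative assumptions.

```python
import numpy as np

def frame_rms(signal, frame_size=1024):
    """Root-mean-square loudness per frame of a mono audio signal."""
    n_frames = len(signal) // frame_size
    frames = signal[:n_frames * frame_size].reshape(n_frames, frame_size)
    return np.sqrt(np.mean(frames ** 2, axis=1))

def loudness_to_radius(rms, min_radius=10.0, max_radius=120.0):
    """Map normalized loudness onto a circle radius for rendering."""
    norm = rms / (rms.max() + 1e-9)
    return min_radius + norm * (max_radius - min_radius)

# Synthetic 1-second, 440 Hz tone with a fade-in, sampled at 44.1 kHz.
t = np.linspace(0, 1, 44100, endpoint=False)
audio = np.sin(2 * np.pi * 440 * t) * np.linspace(0, 1, t.size)
radii = loudness_to_radius(frame_rms(audio))
```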

Metadata Processing Technique for Similar Image Search of Mobile Platform

  • Seo, Jung-Hee
    • Journal of Information and Communication Convergence Engineering, Vol. 19, No. 1, pp. 36-41, 2021
  • Text-based image retrieval is not only cumbersome, as it requires the manual input of keywords by the user, but is also limited by the semantics of keywords. Content-based image retrieval, in contrast, enables visual processing by a computer to solve the problems of text retrieval more fundamentally. Vision applications such as the extraction and mapping of image characteristics require the processing of a large amount of data in a mobile environment, making efficient power consumption difficult. Hence, an effective image retrieval method for mobile platforms is proposed herein. To give keywords inserted into images a visual meaning, the efficiency of image retrieval is improved by extracting keywords from the exchangeable image file format (EXIF) metadata of images retrieved through content-based similar-image retrieval and then automatically adding these keywords to images captured on mobile devices. Additionally, users can manually add or modify keywords in the image metadata.
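A minimal sketch of harvesting keyword-like EXIF fields with Pillow, in the spirit of the metadata-driven keywording described above; the specific set of tags treated as keyword sources is an illustrative assumption.

```python
from PIL import Image
from PIL.ExifTags import TAGS

# EXIF fields treated here as keyword sources -- an illustrative choice.
KEYWORD_TAGS = {"ImageDescription", "XPKeywords", "XPSubject", "Make", "Model", "DateTime"}

def exif_keywords(path):
    """Return keyword-like EXIF entries of an image as {tag name: value}."""
    exif = Image.open(path).getexif()
    named = {TAGS.get(tag_id, str(tag_id)): value for tag_id, value in exif.items()}
    return {name: value for name, value in named.items() if name in KEYWORD_TAGS}

# Example usage on a captured photo (hypothetical path):
# print(exif_keywords("IMG_0001.jpg"))
```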

유아의 시지각 인지기능 개선을 위한 이미지 블록 연동형 콘텐츠 구성과 구현 (Implementation of Image Block Linked Contents to Improve Children's Visual Perception and Cognitive Function)

  • 곽창섭;이영순
    • 한국콘텐츠학회논문지, Vol. 22, No. 9, pp. 76-84, 2022
  • In this paper, we designed visual-perception and cognitive-function training content that can be linked with the 아이퍼즐 image block, an interactive content device that uses photos and videos from a smartphone. Four training domains were derived: visual memory, visual continuity, spatial relations, and visual discrimination, and the content behavior, usage methods, and scenarios were specified for each. Content images were designed, and existing worksheet-style visual and perceptual cognitive training materials were reworked as mobile mini-games so as to sustain and encourage young children's motivation to participate in training. Hands-on activities with the developed content were carried out with children and their guardians, and meaningful results were confirmed in terms of higher concentration, benefit, and effectiveness compared with a basic puzzle toy. We hope this paper serves as a meaningful reference for research on cognitive-function improvement activities based on digital toys and content.

의미적 연관태그와 이미지 내용정보를 이용한 웹 이미지 분류 (Web Image Classification using Semantically Related Tags and Image Content)

  • 조수선
    • 인터넷정보학회논문지, Vol. 11, No. 3, pp. 15-24, 2010
  • This paper proposes an image classification method that combines the semantic relatedness of tags with the content of the images themselves, targeting large-scale online image-sharing sites, in order to improve the satisfaction of image search. For an image search and classification algorithm to be usable on a large image-sharing site such as Flickr, it must work on images actually tagged on the web. The proposed algorithm classifies web images by their 'bag of visual words'-based content: images initially retrieved using semantically related tags are used as training data to train category models, and PLSA is applied to classify the test images. Experiments on Flickr web images show that the proposed method achieves better retrieval precision and recall than an existing method that uses only tag information.
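A bag-of-visual-words pipeline of the kind the abstract relies on can be sketched as follows; ORB descriptors, the vocabulary size, and a logistic-regression classifier standing in for PLSA are illustrative assumptions, not the paper's configuration.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def orb_descriptors(path):
    """Extract ORB local descriptors from a grayscale image (may be empty)."""
    image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = cv2.ORB_create().detectAndCompute(image, None)
    return desc if desc is not None else np.empty((0, 32), dtype=np.uint8)

def build_vocabulary(train_paths, k=200):
    """Cluster all training descriptors into k visual words."""
    all_desc = np.vstack([orb_descriptors(p) for p in train_paths]).astype(np.float32)
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_desc)

def bovw_histogram(path, vocabulary):
    """Quantize an image's descriptors into a normalized visual-word histogram."""
    desc = orb_descriptors(path).astype(np.float32)
    k = vocabulary.n_clusters
    if len(desc) == 0:
        return np.zeros(k)
    words = vocabulary.predict(desc)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

def train_classifier(train_paths, labels, vocabulary):
    """Train a simple category model on bag-of-visual-words histograms."""
    X = np.array([bovw_histogram(p, vocabulary) for p in train_paths])
    return LogisticRegression(max_iter=1000).fit(X, labels)
```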

Image classification and captioning model considering a CAM-based disagreement loss

  • Yoon, Yeo Chan;Park, So Young;Park, Soo Myoung;Lim, Heuiseok
    • ETRI Journal, Vol. 42, No. 1, pp. 67-77, 2020
  • Image captioning has received significant interest in recent years, and notable results have been achieved. Most previous approaches have focused on generating visual descriptions from images, whereas a few approaches have exploited visual descriptions for image classification. This study demonstrates that good performance can be achieved for both description generation and image classification through an end-to-end joint learning approach with a loss function that encourages each task to reach a consensus. When given images and visual descriptions, the proposed model learns a multimodal intermediate embedding, which can represent both the textual and visual characteristics of an object. Sharing the multimodal embedding improves the performance of both tasks. Through a novel loss function based on class activation mapping, which localizes the discriminative image region of a model, we achieve a higher score when the captioning and classification models reach a consensus on the key parts of the object. Using the proposed model, we establish substantially improved performance for each task on the UCSD Birds and Oxford Flowers datasets.
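The class activation mapping (CAM) signal that the disagreement loss builds on can be computed as a classifier-weighted sum of the final convolutional feature maps; the sketch below shows this step in PyTorch, with the tensor shapes as stated assumptions rather than the paper's full model.

```python
import torch
import torch.nn.functional as F

def class_activation_map(features, fc_weight, class_idx):
    """
    features:  last-layer conv feature maps of one image, shape (C, H, W) -- assumed layout
    fc_weight: classifier weight matrix, shape (num_classes, C)
    class_idx: index of the target class
    Returns an (H, W) activation map, min-max normalized to [0, 1].
    """
    weights = fc_weight[class_idx]                      # (C,)
    cam = torch.einsum("c,chw->hw", weights, features)  # weight each channel, sum over C
    cam = F.relu(cam)                                   # keep only positive evidence
    cam = cam - cam.min()
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Example with random tensors standing in for a real backbone and classifier.
cam = class_activation_map(torch.randn(512, 7, 7), torch.randn(200, 512), class_idx=3)
```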

동영상에서의 내용기반 메쉬를 이용한 모션 예측 (Content Based Mesh Motion Estimation in Moving Pictures)

  • 김형진;이동규;이두수
    • 대한전자공학회 학술대회논문집, 2000년도 하계종합학술대회 논문집(4), pp. 35-38, 2000
  • Content-based triangular mesh representation of moving pictures achieves better performance in prediction error and visual quality than classical block matching. In particular, if the background and objects can be separated from the image, the objects can be modeled with an irregular mesh, which has the advantage of increasing video coding efficiency. This paper presents techniques for mesh generation and motion estimation using such meshes, uses an image warping transform such as the affine transform for image reconstruction, and evaluates the content-based mesh design through computer simulation.
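The affine warping step that mesh-based prediction relies on, mapping each reference-frame triangle onto its motion-displaced counterpart, can be sketched with OpenCV as follows; the triangle coordinates and interpolation settings are illustrative assumptions.

```python
import cv2
import numpy as np

def warp_triangle(src_img, dst_img, src_tri, dst_tri):
    """Affine-warp the patch under src_tri in src_img onto dst_tri in dst_img (in place)."""
    src_tri = np.float32(src_tri)
    dst_tri = np.float32(dst_tri)
    x, y, w, h = cv2.boundingRect(dst_tri)
    offset = np.float32([x, y])
    # Affine map taking the source triangle onto the destination triangle (local coordinates).
    matrix = cv2.getAffineTransform(src_tri, dst_tri - offset)
    warped = cv2.warpAffine(src_img, matrix, (w, h), flags=cv2.INTER_LINEAR)
    # Copy only the pixels that fall inside the destination triangle.
    mask = np.zeros((h, w), dtype=np.uint8)
    cv2.fillConvexPoly(mask, np.int32(dst_tri - offset), 1)
    roi = dst_img[y:y + h, x:x + w]
    roi[mask == 1] = warped[mask == 1]

# Example: warp one triangle of a reference frame onto its displaced position.
prev_frame = np.zeros((240, 320, 3), dtype=np.uint8)
curr_frame = np.zeros((240, 320, 3), dtype=np.uint8)
warp_triangle(prev_frame, curr_frame,
              [(10, 10), (60, 15), (30, 70)], [(12, 14), (63, 20), (28, 76)])
```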


Development of 3D Stereoscopic Image Generation System Using Real-time Preview Function in 3D Modeling Tools

  • Yun, Chang-Ok;Yun, Tae-Soo;Lee, Dong-Hoon
    • 한국멀티미디어학회논문지, Vol. 11, No. 6, pp. 746-754, 2008
  • A 3D stereoscopic image is generated by interleaving every scene with video editing tools, using the views rendered by two cameras in 3D modeling tools such as Autodesk MAX(R) and Autodesk MAYA(R). However, the depth of objects in a static scene and a continuous stereo effect under view transformations are not represented in a natural way, because the user must render the views from both cameras only after choosing an arbitrary convergence angle and the distance between the model and the two cameras. The user therefore has to repeat the process of adjusting the camera interval and re-rendering, which takes too much time. In this paper, we propose a 3D stereoscopic image editing system that solves these problems and addresses the inherent limitations of existing workflows. The system generates the views of the two cameras and confirms the stereo effect in real time within the 3D modeling tool, so that the immersion of the 3D stereoscopic image can be judged intuitively in real time through a stereoscopic preview function.
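The rig arithmetic implied above (choosing an interaxial separation and a convergence distance, then deriving camera placement and toe-in angle) can be sketched as below; the parameter values in the example are illustrative assumptions.

```python
import math

def stereo_rig(center_x, center_y, center_z, interaxial, convergence_distance):
    """Return ((left_cam, right_cam), toe_in_degrees) for a converged stereo camera pair."""
    half = interaxial / 2.0
    left = (center_x - half, center_y, center_z)
    right = (center_x + half, center_y, center_z)
    # Each camera rotates inward so its optical axis crosses at the convergence point.
    toe_in = math.degrees(math.atan2(half, convergence_distance))
    return (left, right), toe_in

# Example: a 6.5 cm interaxial rig converging 3 m in front of the cameras.
(cameras, angle) = stereo_rig(0.0, 1.6, 0.0, 0.065, 3.0)
```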
