• Title/Abstract/Keyword: Image Features


양방향 사진트리 기반 변이 추정을 이용한 중간 시점 영상 합성 (IVS using disparity estimation based on bidirectional quadtree)

  • 김재환;임정은;손광훈
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2003년도 하계종합학술대회 논문집 Ⅳ
    • /
    • pp.2295-2298
    • /
    • 2003
  • The correspondence problem in stereo image matching plays an important role in expanding viewpoints as multi-view video applications become more popular. Conventional disparity estimation algorithms are limited in finding exact disparities because they consider similar intensity points rather than image features. We therefore propose an efficient disparity estimation algorithm that considers the features of stereo image pairs. Simulation results confirm that the proposed method produces better intermediate views than existing block-matching methods. (An illustrative block-matching sketch follows this entry.)

  • PDF
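The paper's bidirectional quadtree procedure is not reproduced above; as a loosely related illustration of feature-aware disparity estimation, the following Python/NumPy sketch mixes a gradient-magnitude term into a plain block-matching cost. The weighting scheme, window size, and parameters are assumptions for illustration, not the authors' method.

```python
import numpy as np

def disparity_map(left, right, max_disp=32, block=7, alpha=0.5):
    """Feature-weighted block matching (illustrative sketch, not the paper's
    bidirectional-quadtree method). `left`/`right` are rectified grayscale
    float arrays; the cost mixes intensity SAD with gradient-magnitude SAD."""
    h, w = left.shape
    half = block // 2
    # horizontal gradient magnitude as a simple "image feature" channel (assumption)
    gl = np.abs(np.gradient(left, axis=1))
    gr = np.abs(np.gradient(right, axis=1))
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            best_cost, best_d = np.inf, 0
            pl = left[y - half:y + half + 1, x - half:x + half + 1]
            fl = gl[y - half:y + half + 1, x - half:x + half + 1]
            for d in range(min(max_disp, x - half) + 1):
                pr = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                fr = gr[y - half:y + half + 1, x - d - half:x - d + half + 1]
                cost = (1 - alpha) * np.abs(pl - pr).sum() + alpha * np.abs(fl - fr).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```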

바다-$IV/I^2R$: 고차원 이미지 색인 구조를 이용한 효율적인 내용 기반 이미지 검색 시스템의 설계와 구현 (BADA-$IV/I^2R$: Design & Implementation of an Efficient Content-based Image Retrieval System using a High-Dimensional Image Index Structure)

  • 김영균;이장선;이훈순;김완석;김명준
    • 한국정보처리학회논문지
    • /
    • Vol. 7, No. 2S
    • /
    • pp.678-691
    • /
    • 2000
  • A variety of multimedia applications require multimedia database management systems that manage multimedia data, such as text, image, and video, and that support content-based image or video retrieval. In this paper we design and implement a content-based image retrieval system, BADA-IV/I$^2$R (Image Information Retrieval), developed on top of the BADA-IV multimedia database management system. In this system, image databases can be efficiently constructed and retrieved using visual features of images such as color, shape, and texture. We extend SQL statements to define image queries based on both annotations and visual features of images. A high-dimensional index structure, called the CIR-tree, is also employed in the system to provide an efficient access method to image databases. We show that BADA-IV/I$^2$R provides a flexible way to define queries for image retrieval and retrieves image data quickly and effectively: the effectiveness and performance of retrieval are shown using BEP (Bull's Eye Performance), the measure used to evaluate retrieval effectiveness in MPEG-7, and by comparing the performance of the CIR-tree with those of the X-tree and TV-tree, respectively. (A minimal feature-extraction and retrieval sketch follows this entry.)

  • PDF
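The abstract does not detail the visual descriptors or the CIR-tree, so the following is only a minimal Python sketch of the general content-based retrieval idea: a joint color histogram as the feature and a brute-force nearest-neighbour scan standing in for the high-dimensional index. The feature choice and distance measure are assumptions, not the BADA-IV/I$^2$R implementation.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Quantize an RGB image (H x W x 3, uint8) into a joint color histogram.
    A stand-in for the color/shape/texture descriptors the paper extracts."""
    q = (image // (256 // bins)).reshape(-1, 3)
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    hist = np.bincount(idx, minlength=bins ** 3).astype(np.float64)
    return hist / hist.sum()

def retrieve(query_hist, db_hists, k=5):
    """Brute-force nearest neighbours by L1 distance; the CIR-tree index in the
    paper replaces this linear scan with a high-dimensional tree search."""
    dists = np.abs(db_hists - query_hist).sum(axis=1)
    return np.argsort(dists)[:k]
```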

여성의류 매장 공간의 구도에 나타난 공간구성의 주의집중 특성 - 백화점 매장의 순회동선을 대상으로 - (Features of Attention to Space Structure of Spacial Composition in Women's Shop - Targeting the Circulation Line of Department Store -)

  • 최계영;손광호
    • 한국실내디자인학회논문집
    • /
    • Vol. 26, No. 2
    • /
    • pp.3-12
    • /
    • 2017
  • This study analyzed the features of attention to spatial composition seen in the "Seeing ↔ Seen" correlation of continuous movement through the space. Eye-tracking was employed to collect data on attention to the space, so that the correlation between visual perception and space could be estimated from attention to differences in spatial composition and display. First, it was confirmed that attention features varied according to the structure of shops and the degree of exposure of the selling space: the vanishing-point structure characteristically drew customers' eyes to the central part while reducing attention to both sides of the shop. Second, initial observation activities were found to be active at eye height. Third, 10 images were selected as objects for the continuous experiment. It was expected that the central part of each image would receive intense attention during initial observation, but only two of the images showed this. Fourth, previous eye-tracking results had reported that attention concentrates on the central part of the first image seen; this study, however, revealed that the phenomenon is limited to the first image. Accordingly, to ensure the reliability of data acquired from eye-tracking experiments, methods such as excluding the initial attention time on the first image, or excluding the first-image trial from the analysis, are necessary.

인간-로봇 상호작용을 위한 자세가 변하는 사용자 얼굴검출 및 얼굴요소 위치추정 (Face and Facial Feature Detection under Pose Variation of User Face for Human-Robot Interaction)

  • 박성기;박민용;이태근
    • 제어로봇시스템학회논문지
    • /
    • Vol. 11, No. 1
    • /
    • pp.50-57
    • /
    • 2005
  • We present a simple and effective method for detecting a face and facial features under pose variation of the user's face against a complex background, for human-robot interaction. Our approach is flexible in that it can be applied to both color and gray facial images and is feasible for detecting facial features in quasi real-time. Based on the intensity characteristics of the neighborhood of facial features, a new directional template for facial features is defined. Applying this template to the input facial image yields a novel edge-like blob map (EBM) with multiple intensity strengths. Using this map and conditions on facial characteristics, and regardless of the color information of the input image, we show that the locations of the face and its features, i.e., two eyes and a mouth, can be successfully estimated. Without information on the facial area boundary, the final candidate face region is determined from both the obtained locations of the facial features and weighted correlation values with standard facial templates. Experimental results on many color images and well-known gray-level face database images confirm the usefulness of the proposed algorithm. (An illustrative directional-template sketch follows this entry.)
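The exact directional template and edge-like blob map (EBM) are defined in the paper itself; the sketch below is only an assumed stand-in showing how a dark-bar directional kernel can be convolved with a gray image to highlight eye- and mouth-like horizontal blobs. The kernel values are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def directional_blob_map(gray):
    """Illustrative stand-in for an edge-like blob map: a horizontal dark-bar
    template responds strongly where a dark band (eye, mouth) is sandwiched
    between brighter skin above and below. The template in the paper is not
    reproduced; this kernel is an assumption."""
    kernel = np.array([[ 1,  1,  1,  1,  1],
                       [-2, -2, -2, -2, -2],
                       [ 1,  1,  1,  1,  1]], dtype=np.float64)
    response = convolve(gray.astype(np.float64), kernel, mode="nearest")
    return np.clip(response, 0, None)  # keep positive (feature-like) responses only
```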

연령 변화에 따른 치조골의 디지탈 방사선학적 특성비교 (Comparison of digitized radiographic alveolar features with age)

  • 이건일
    • 치과방사선
    • /
    • Vol. 27, No. 1
    • /
    • pp.17-24
    • /
    • 1997
  • The purpose of the present study was to use digital profile image features and digital image analysis of fixed-dimension bone regions, extracted from standardized periapical radiographs of the maxilla, to determine whether differences exist in the alveolar bone of younger women (mean age: 59.23±7.34 years) and recently menopausal women (mean age: 59.23±7.34 years). Periapical films were obtained from two groups of 20 randomly selected women. None of the subjects had a remarkable medical history. To simplify the protocol, one interproximal bone area between the maxillary right canine and lateral incisor was chosen for study. Each film was digitized into a 1312 x 1024 pixel x 8-bit depth matrix by means of a Nikon 35 mm film scanner (LS-3510AF, Japan) with fixed gain and internal dark-current correction to maintain constant illumination. The scanner was interfaced to a Macintosh LC III computer (Apple Computer, Charlotte, N.C.). Area and profile orientation were selected with NIH Image 1.37 (NIH Research Services Branch, Bethesda, Md.). Histogram features were extracted from each profile and area. The results indicate that mean pixel intensities did not differ significantly between the two groups, and that there was a high correlation coefficient between digitized radiographic profile features and area features. (A minimal histogram-feature sketch follows this entry.)

  • PDF
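As a minimal illustration of the histogram features mentioned above, the following sketch computes first-order statistics over a fixed-dimension region of a digitized radiograph; the actual feature set and region definition used in the study are not specified in the abstract, so the function below is an assumption.

```python
import numpy as np

def region_histogram_features(image, top, left, height, width):
    """Histogram (first-order) statistics of a fixed-dimension bone region,
    in the spirit of the profile/area features described above."""
    roi = image[top:top + height, left:left + width].astype(np.float64)
    pixels = roi.ravel()
    return {
        "mean": pixels.mean(),
        "std": pixels.std(),
        "min": pixels.min(),
        "max": pixels.max(),
        "median": np.median(pixels),
    }
```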

다양한 형태의 지문 이미지 분류를 위한 영역별 방향특징 추출 방법 (A Directional Feature Extraction Method of Each Region for the Classification of Fingerprint Images with Various Shapes)

  • 정혜욱;이지형
    • 제어로봇시스템학회논문지
    • /
    • Vol. 18, No. 9
    • /
    • pp.887-893
    • /
    • 2012
  • In this paper, we propose a new approach to extracting directional features based on the directional patterns of each region in fingerprint images. The proposed approach computes the center of gravity so that features can be extracted from fingerprint images of various shapes. Based on it, we divide a fingerprint image into four regions and compute the directional values of each region. To extract the directional features of each region, we split the direction values of the ridges in a region into 18 classes and compute the frequency distribution for each region. Experiments using the FVC2002 database, acquired with electronic devices, show that directional features are effectively extracted from various fingerprint images, including exceptional inputs in which all or part of the singular points are missing. To verify the performance of the proposed approach, we describe the process of modeling the Arch, Left, Right, and Whorl classes using the extracted directional features of the four regions and analyze the classification results. (A minimal region-wise feature sketch follows this entry.)
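A minimal sketch of the region-wise directional features described above: the image is split into four regions at an intensity centre of gravity, local ridge orientation is quantized into 18 classes, and a per-region frequency distribution is concatenated. The gradient-based orientation estimate and other details are assumptions, not the paper's exact procedure.

```python
import numpy as np

def directional_features(gray, n_classes=18):
    """Region-wise directional feature sketch: four quadrants at the intensity
    centre of gravity, 18 orientation classes, normalized frequency per region."""
    gray = gray.astype(np.float64)
    h, w = gray.shape
    # centre of gravity of inverted intensity, so dark ridges dominate (assumption)
    weights = gray.max() - gray
    ys, xs = np.mgrid[0:h, 0:w]
    cy = int((ys * weights).sum() / weights.sum())
    cx = int((xs * weights).sum() / weights.sum())
    # local ridge direction (perpendicular to the gradient), in [0, pi)
    gy, gx = np.gradient(gray)
    theta = (np.arctan2(gy, gx) + np.pi / 2.0) % np.pi
    bins = np.minimum((theta / (np.pi / n_classes)).astype(int), n_classes - 1)
    features = []
    for rows, cols in [(slice(0, cy), slice(0, cx)), (slice(0, cy), slice(cx, w)),
                       (slice(cy, h), slice(0, cx)), (slice(cy, h), slice(cx, w))]:
        region = bins[rows, cols].ravel()
        hist = np.bincount(region, minlength=n_classes).astype(np.float64)
        features.append(hist / max(hist.sum(), 1.0))
    return np.concatenate(features)  # 4 regions x 18 classes = 72 values
```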

Facial Recognition Algorithm Based on Edge Detection and Discrete Wavelet Transform

  • Chang, Min-Hyuk;Oh, Mi-Suk;Lim, Chun-Hwan;Ahmad, Muhammad-Bilal;Park, Jong-An
    • Transactions on Control, Automation and Systems Engineering
    • /
    • Vol. 3, No. 4
    • /
    • pp.283-288
    • /
    • 2001
  • In this paper, we propose a method for extracting the facial characteristics of a person in an image. Given a pair of gray-level sample images taken with and without the person, the face is segmented from the image. Noise in the input images is removed with Gaussian filters. Edge maps are computed for the two input images, and a binary edge differential image is obtained from the difference of the two edge maps. A mask for face detection is produced by erosion followed by dilation on the resulting binary edge differential image. This mask is used to extract the person from the two input image sequences, and features of the face are extracted from the segmented image. An effective recognition system using the discrete wavelet transform (DWT) is used for recognition. For extracting facial features such as eyebrows, eyes, nose, and mouth, an edge detector is applied to the segmented face image. The eye area and the center of the face are found from the horizontal and vertical components of the edge map of the segmented image; other facial features are obtained from the edge information of the image. The characteristic vectors are extracted from the DWT of the segmented face image, normalized between -1 and +1, and used as input vectors for a neural network. Simulation results show a recognition rate of 100% on the training set and about 92% on the test images. (A minimal DWT feature sketch follows this entry.)

  • PDF
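As a small illustration of the DWT feature step described above, the sketch below (using the PyWavelets package) decomposes a segmented face image, keeps the approximation coefficients as the characteristic vector, and rescales it to [-1, 1] for a neural-network classifier. The wavelet choice and decomposition level are assumptions.

```python
import numpy as np
import pywt  # PyWavelets

def dwt_feature_vector(face_gray, wavelet="haar", level=2):
    """Decompose the segmented face image with a 2-D DWT, keep the
    low-frequency approximation sub-band as the characteristic vector,
    and normalize it to the range [-1, 1]."""
    coeffs = pywt.wavedec2(face_gray.astype(np.float64), wavelet, level=level)
    approx = coeffs[0].ravel()                    # low-frequency approximation
    peak = np.abs(approx).max()
    return approx / peak if peak > 0 else approx  # values now within [-1, 1]
```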

하이패션의 스누드(Snood) 코디네이션에 나타난 이미지 (The Image Expressions of High-Fashion Snood Coordination)

  • 양아랑;이효진
    • 복식문화연구
    • /
    • Vol. 18, No. 6
    • /
    • pp.1153-1164
    • /
    • 2010
  • The objective of this study is to analyze the coordination images seen in comprehensive fashion items and their features, from the viewpoint of both the creativity and the functional characteristics of items and images. For the method of study, I explored the idea of the "snood" style, analyzing its features in 42 pictures from the 2006 S/S to 2010 F/W collections. The snood has the characteristics of both a muffler and a turtleneck: with a looping design connected at both ends, it can be placed around the neck or head, creating the image of wearing a hood. After examining the selected data and pictures, the exclusive high-fashion image categories can largely be divided into three types: feminine, avant-garde, and active & functional sportive images. First, the orthodox image is widely accepted, as it has evolved within the original tradition of the practical, functional muffler (scarf). Second, since the metamorphic image lends itself to free ideas, a snood can be worn around the shoulders like a collar; worn together with the same type of clothing, it can read as an effective suit. Third, the aim of image emphasis is to highlight certain points, or make some features more noticeable, as a means of attracting more interest and attention. The image of the snood arises from the use of shapes, colors, and other accessory parts. As mentioned earlier, the snood stands out as an independent item rather than a mere accessory to clothing; its primary function as a style coordinator is emphasized in order to create more distinctive fashion images. Through this study, I intend to provide fashion style data on the latest trends and the high-fashion codes of snood coordination.

영상편집효과를 고려한 내용기반 영상 검색의 개선에 관한 연구 (Improvement of Content-based Image Retrieval by Considering Image Editing Effect)

  • 강석준;배태면;김기현;한승완;정치윤;남택용;노용만
    • 한국멀티미디어학회논문지
    • /
    • Vol. 9, No. 5
    • /
    • pp.564-575
    • /
    • 2006
  • As multimedia content has increased rapidly, users can now access a large amount of multimedia content through various distribution channels. A content-based image retrieval system represents the content of image data with various visual feature values, allowing users to retrieve the images they want from a large collection and to filter out unwanted images. However, editing of multimedia data distorts the inherent visual feature values of the image data, producing incorrect retrieval or filtering results and degrading the performance of content-based image retrieval systems. In this paper, we analyze editing effects such as text insertion, frame insertion, and composition of multiple images, propose a content-based retrieval system that incorporates algorithms to remove these editing effects, and confirm improved retrieval results through experiments. (A minimal border-removal sketch follows this entry.)

  • PDF
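As a minimal illustration of removing one editing effect before feature extraction, the sketch below strips a uniform inserted frame (e.g. a solid border) from an image. The uniformity test and tolerance are assumptions; handling of text overlays and multi-image composition is not shown.

```python
import numpy as np

def crop_uniform_border(image, tol=8):
    """Strip an inserted uniform frame (solid border or letterbox) before
    feature extraction; rows/columns with near-zero intensity variation are
    treated as border. Threshold and test are illustrative assumptions."""
    gray = image.mean(axis=2) if image.ndim == 3 else image.astype(np.float64)
    row_var = gray.std(axis=1)   # near zero for solid border rows
    col_var = gray.std(axis=0)
    rows = np.where(row_var > tol)[0]
    cols = np.where(col_var > tol)[0]
    if rows.size == 0 or cols.size == 0:
        return image             # nothing recognisable to crop
    return image[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
```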

Managing and Modeling Strategy of Geo-features in Web-based 3D GIS

  • Kim, Kyong-Ho;Choe, Seung-Keol;Lee, Jong-Hun;Yang, Young-Kyu
    • 대한원격탐사학회:학술대회논문집
    • /
    • 대한원격탐사학회 1999년도 Proceedings of International Symposium on Remote Sensing
    • /
    • pp.75-79
    • /
    • 1999
  • Geo-features play a key role in an object-oriented or feature-based geo-processing system, so the strategy for how to model and how to manage geo-features shapes the main architecture of the entire system and determines its efficiency and functionality. Unlike the conventional 2D geo-processing system, geo-features in 3D GIS require much more consideration in modeling for efficient manipulation, analysis, and visualization. When the system runs on the Web, how to leverage the level of detail and the level of automation of modeling must also be considered, in addition to supporting client-side data interoperability. We built a set of 3D geo-features, and each geo-feature contains a set of aspatial data and 3D geo-primitives. The 3D geo-primitives contain the fundamental modeling data, such as the height of a building and the burial depth of a gas pipeline. We separated the additional modeling data on the geometry and appearance of the model from the fundamental modeling data, to keep the database tables concise and to give users more freedom in representing geo-objects. To let users build and exchange their own data, we devised a file format called VGFF 2.0, which stands for Virtual GIS File Format and describes three-dimensional geo-information in XML (eXtensible Markup Language). The DTD (Document Type Definition) of VGFF 2.0 is parsed using the DOM (Document Object Model). We also developed authoring tools so that users can create their own 3D geo-features and models and save the data in the VGFF 2.0 format. We expect VGFF 2.0 to evolve into a 3D version of SVG (Scalable Vector Graphics), especially for 3D GIS on the Web. (A minimal DOM-parsing sketch follows this entry.)

  • PDF
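VGFF 2.0 itself is not specified in the abstract, so the sketch below only illustrates DOM parsing of a hypothetical VGFF-like record in Python; every element and attribute name in the sample is an assumption, not the actual VGFF 2.0 DTD.

```python
from xml.dom.minidom import parseString

# Hypothetical VGFF-like record: element and attribute names are illustrative
# assumptions, not the actual VGFF 2.0 format described in the paper.
sample = """<geoFeature id="bldg-001" type="building">
  <aspatial><name>City Hall</name><use>office</use></aspatial>
  <geoPrimitive height="24.5" footprint="0 0 10 0 10 10 0 10"/>
</geoFeature>"""

doc = parseString(sample)                      # DOM parsing, as mentioned in the paper
feature = doc.documentElement
height = feature.getElementsByTagName("geoPrimitive")[0].getAttribute("height")
print(feature.getAttribute("id"), height)      # -> bldg-001 24.5
```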