• Title/Abstract/Keyword: Spatial feature

Search results: 824 items (processing time: 0.024 sec)

A Probabilistic Network for Facial Feature Verification

  • Choi, Kyoung-Ho;Yoo, Jae-Joon;Hwang, Tae-Hyun;Park, Jong-Hyun;Lee, Jong-Hoon
    • ETRI Journal / Vol. 25, No. 2 / pp.140-143 / 2003
  • In this paper, we present a probabilistic approach to determining whether extracted facial features from a video sequence are appropriate for creating a 3D face model. In our approach, the distance between two feature points selected from the MPEG-4 facial object is defined as a random variable for each node of a probability network. To avoid generating an unnatural or non-realistic 3D face model, automatically extracted 2D facial features from a video sequence are fed into the proposed probabilistic network before a corresponding 3D face model is built. Simulation results show that the proposed probabilistic network can be used as a quality control agent to verify the correctness of extracted facial features.
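The structure of the probability network itself is not reproduced above, so the following is only a minimal sketch of the verification idea, assuming each node's inter-feature distance is modeled as an independent Gaussian; the node pairs, statistics, and threshold below are illustrative, not the paper's.

```python
import numpy as np

# Hypothetical per-node statistics: mean/std of the distance between two
# MPEG-4 facial feature points, learned from valid training sequences.
NODE_STATS = {
    ("left_eye", "right_eye"): (1.00, 0.08),  # distances normalized by face width
    ("eye_line", "mouth"):     (0.95, 0.10),
    ("nose", "chin"):          (0.60, 0.07),
}

def node_log_likelihood(distance: float, mean: float, std: float) -> float:
    """Log-density of the observed distance under the node's Gaussian model."""
    return -0.5 * ((distance - mean) / std) ** 2 - np.log(std * np.sqrt(2 * np.pi))

def verify_features(distances: dict, threshold: float = -5.0) -> bool:
    """Accept the extracted 2D features only if every node is plausible."""
    return all(node_log_likelihood(distances[k], *NODE_STATS[k]) > threshold
               for k in NODE_STATS)

# Example: a frame whose eye-mouth distance is implausibly small is rejected
# before any 3D face model is built from it.
ok = verify_features({("left_eye", "right_eye"): 0.98,
                      ("eye_line", "mouth"): 0.40,
                      ("nose", "chin"): 0.62})
assert ok is False
```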


복식조형의 공간적 특질에 관한 연구-I (A Study on the Spatial Property of Dress Modeling-I)

  • 김혜연
    • 복식 / Vol. 38 / pp.31-49 / 1998
This study is a primary, basic study of the spatial features of modeling in fashion design. The researcher aims to establish a basic framework for the character of dress and its ornaments as modeling in the spatial-formal dimension, to examine the features of that modeling through principles of perception, and, on this basis, to offer basic principles for planning and organizing the modeling space for dress and its ornaments. The findings are summarized as follows. First, the spatial system of modeling for dress and its ornaments consists of three elements: space, the human being, and the dress and its ornaments. Second, the form of dress and its ornaments and their spatial organization start from the structural basis of the human body; the sensible system of the body is formed through interaction, and the aesthetic expression is completed by the movement of the body. Third, the characteristic principles of modeling for dress and its ornaments suggested in Chapter IV are based on visuo-perceptual modeling experience; these contents enter the cognition process as invisible information in new spatial planning and organization, activate the apperception process, and aim at aesthetic judgement.


수치지도의 활용을 위한 단일식별자 (Unique Feature Identifier for Utilizing Digital Map)

  • 조우석
    • 대한공간정보학회지 / Vol. 6, No. 1 / pp.27-34 / 1998
A Unique Feature Identifier (UFID) is a means of referring to a feature that exists in the real world: it designates each feature stored in a database uniquely, and it is used to link two or more databases. This study proposes a UFID format that satisfies both the internal purposes of the National Geography Institute and external, user-oriented purposes, namely sharing information within the National Geographic Information System (NGIS). The proposed digital-map UFID is a semantic identifier that provides direct spatial data indexing by using an administrative district code and a feature code as components of the identifier. In addition, the proposed checksum algorithm removes uncertainty from the identifier and is designed so that errors arising from manual entry, transmission, or processing can be detected easily.
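The abstract does not reproduce the checksum algorithm itself; the sketch below only illustrates the general idea of a check digit over a semantic identifier, assuming a hypothetical UFID layout (administrative district code + feature code + serial number) and a simple weighted-modulus scheme, neither of which is claimed to match the paper's design.

```python
# Hypothetical UFID layout: 5-digit admin district code + 4-digit feature
# code + 6-digit serial, followed by one check digit. The weighting scheme
# below is illustrative, not the one proposed in the paper.

WEIGHTS = [7, 3, 1]  # cyclic weights, a common check-digit pattern

def check_digit(body: str) -> str:
    """Compute a weighted-modulus check digit over the UFID body."""
    total = sum(int(ch) * WEIGHTS[i % len(WEIGHTS)]
                for i, ch in enumerate(body))
    return str(total % 10)

def make_ufid(admin_code: str, feature_code: str, serial: int) -> str:
    body = f"{admin_code}{feature_code}{serial:06d}"
    return body + check_digit(body)

def verify_ufid(ufid: str) -> bool:
    """Detect single-digit entry/transmission errors."""
    return check_digit(ufid[:-1]) == ufid[-1]

ufid = make_ufid("11110", "2110", 42)  # hypothetical district and feature codes
assert verify_ufid(ufid)
# A corrupted final digit is caught immediately.
assert not verify_ufid(ufid[:-1] + str((int(ufid[-1]) + 1) % 10))
```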


영상 식별을 위한 전역 특징 추출 기술과 그 성능 비교 (A Comparison of Global Feature Extraction Technologies and Their Performance for Image Identification)

  • 양원근;조아영;정동석
    • 한국멀티미디어학회논문지 / Vol. 14, No. 1 / pp.1-14 / 2011
As the distribution of images has grown, various demands have arisen for managing ever-larger databases efficiently, and content-based techniques are one way to meet them. Content-based techniques can represent an image with a variety of feature extraction methods; among these, global feature methods have the advantage that the extracted feature vectors have a fixed, standardized form, so fast matching can be guaranteed. Global feature methods can be broadly divided into those based on spatial characteristics and those based on statistical characteristics, and each group can be further divided into methods using color components and methods using luminance components. Following this classification, this paper surveys a range of global feature methods and compares their performance using accuracy tests, recall-precision graphs, ANMRR, and feature-vector size versus matching time. The experiments show that global features based on spatial characteristics perform especially well under non-geometric modifications, and that the global feature method using color components and histograms performs best.
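As a concrete illustration of the best-performing family identified above (a color-histogram global feature), here is a minimal sketch assuming OpenCV and an L1 matching distance; the exact descriptors compared in the paper are not specified here.

```python
import cv2
import numpy as np

def color_histogram_feature(path: str, bins: int = 8) -> np.ndarray:
    """Fixed-size global descriptor: a joint HSV color histogram."""
    img = cv2.imread(path)                      # BGR image
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None,
                        [bins, bins, bins],
                        [0, 180, 0, 256, 0, 256])
    hist = hist.flatten()
    return hist / hist.sum()                    # normalize: images of any size compare

def match_distance(f1: np.ndarray, f2: np.ndarray) -> float:
    """L1 distance; fixed-length vectors make matching a cheap array operation."""
    return float(np.abs(f1 - f2).sum())
```

The fixed vector length (bins³ entries regardless of image size) is what gives global features the fast, standardized matching the abstract refers to.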

온라인 결함계측용 협대역 제거형 공간필터의 최적설계 및 제작 (Optimal Design and Construction of Narrow Band Eliminating Spatial Filter for On-line Defect Detection)

  • 전승환
    • 한국항해학회지 / Vol. 22, No. 4 / pp.59-67 / 1998
Quick, automatic inspection that does not harm the goods is an important task for improving quality control and process control and for reducing labour. In real industrial settings, defect detection is mostly performed by skilled workers. A narrow band eliminating spatial filter, which removes a specified spatial frequency, was developed by the author and proved to have an excellent ability for on-line, real-time detection of surface defects. However, this spatial filter shows a ripple phenomenon in its filtering characteristics, so the ripple component must be removed to improve the filter gain and, in turn, the efficiency of defect detection. The spatial filtering method has a remarkable feature: the weighting function can be set freely, so the signal best suited to the purpose of the measurement can be obtained. With this feature in mind, a theoretical analysis is first carried out for the optimal design of the narrow band eliminating spatial filter; secondly, a spatial filter is manufactured on the basis of these results; and finally, the improved effectiveness of the spatial filter is evaluated experimentally.
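The paper's filter is an optical device, but the signal-level idea of eliminating a narrow band of spatial frequencies can be sketched numerically as follows; the FFT notch, its band edges, and the test signal are all illustrative assumptions, not the paper's construction.

```python
import numpy as np

def notch_spatial_filter(signal: np.ndarray, dx: float,
                         f0: float, bandwidth: float) -> np.ndarray:
    """Remove a narrow band of spatial frequencies centered at f0 (cycles/unit)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, d=dx)
    stop = np.abs(freqs - f0) < bandwidth / 2    # the band to eliminate
    spectrum[stop] = 0.0
    return np.fft.irfft(spectrum, n=signal.size)

# Illustrative surface profile: a defect pulse on top of a periodic texture.
x = np.linspace(0, 10, 2048)
texture = np.sin(2 * np.pi * 5.0 * x)            # regular pattern at 5 cycles/unit
defect = np.exp(-((x - 6.0) ** 2) / 0.01)        # localized surface defect
filtered = notch_spatial_filter(texture + defect, dx=x[1] - x[0],
                                f0=5.0, bandwidth=0.5)
# After filtering, the periodic texture is suppressed and the defect stands out.
```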


A Video Expression Recognition Method Based on Multi-mode Convolution Neural Network and Multiplicative Feature Fusion

  • Ren, Qun
    • Journal of Information Processing Systems / Vol. 17, No. 3 / pp.556-570 / 2021
Existing video expression recognition methods mainly focus on spatial feature extraction from video expression images and tend to ignore the dynamic features of video sequences. To solve this problem, a multi-mode convolution neural network method is proposed to effectively improve the performance of facial expression recognition in video. First, OpenFace 2.0 is used to detect face images in the video, and two deep convolution neural networks are used to extract spatiotemporal expression features: a spatial convolution neural network extracts the spatial information features of each static expression image, while a temporal convolution neural network extracts dynamic information features from the optical flow of multiple expression images. The spatiotemporal features learned by the two networks are then fused by multiplication. Finally, the fused features are fed into a support vector machine for facial expression classification. Experimental results show that the recognition accuracy of the proposed method reaches 64.57% and 60.89% on the RML and BAUM-1s datasets, respectively, which is better than the compared methods.
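The fusion stage itself is simple enough to sketch. Assuming each network yields one fixed-length feature vector per clip, multiplicative fusion followed by an SVM might look like this; the feature dimensions, stand-in data, and SVM settings are illustrative, not the paper's.

```python
import numpy as np
from sklearn.svm import SVC

def multiplicative_fusion(spatial_feats: np.ndarray,
                          temporal_feats: np.ndarray) -> np.ndarray:
    """Element-wise product of same-length spatial and temporal CNN features."""
    assert spatial_feats.shape == temporal_feats.shape
    return spatial_feats * temporal_feats

# Illustrative stand-ins for per-clip CNN outputs (n_clips x feature_dim).
rng = np.random.default_rng(0)
spatial = rng.standard_normal((100, 256))   # from the spatial (image) stream
temporal = rng.standard_normal((100, 256))  # from the temporal (optical-flow) stream
labels = rng.integers(0, 6, size=100)       # six basic expression classes

fused = multiplicative_fusion(spatial, temporal)
clf = SVC(kernel="rbf").fit(fused, labels)  # final expression classifier
```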

객체기반의 시공간 단서와 이들의 동적결합된 돌출맵에 의한 상향식 인공시각주의 시스템 (A New Covert Visual Attention System by Object-based Spatiotemporal Cues and Their Dynamic Fusioned Saliency Map)

  • 최경주
    • 한국멀티미디어학회논문지 / Vol. 18, No. 4 / pp.460-472 / 2015
Most previous visual attention systems find attention regions based on a saliency map combined from multiple extracted features; these systems differ in their feature extraction and combination methods. This paper presents a new system that improves the feature extraction of color and motion and the weighting of spatial and temporal features. Our system dynamically extracts the one color with the strongest response among two opponent colors, and it detects moving objects rather than moving pixels. To combine spatial and temporal features, the proposed system sets the weights dynamically according to each feature's relative activity. Comparative results show that the suggested feature extraction and integration methods improve the detection rate of attention regions.
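The dynamic weighting can be sketched directly. Assuming each cue yields a normalized 2D conspicuity map, weights proportional to each map's relative activity might be computed as follows; the peak-minus-mean activity measure is an assumption, not necessarily the paper's.

```python
import numpy as np

def relative_activity(conspicuity_map: np.ndarray) -> float:
    """Activity score: how strongly the map's peak stands out from its mean."""
    return float(conspicuity_map.max() - conspicuity_map.mean())

def fuse_saliency(maps: list) -> np.ndarray:
    """Weight each cue's map by its relative activity, then sum."""
    activities = np.array([relative_activity(m) for m in maps])
    weights = activities / activities.sum()      # dynamic, data-driven weights
    return sum(w * m for w, m in zip(weights, maps))

# Illustrative spatial (color) and temporal (motion) conspicuity maps.
rng = np.random.default_rng(1)
color_map = rng.random((60, 80))                 # diffuse color response
motion_map = np.zeros((60, 80))
motion_map[20:30, 40:50] = 1.0                   # one compact moving object
saliency = fuse_saliency([color_map, motion_map])
# The motion map's high relative activity gives it the larger weight here.
```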

INTERACTIVE FEATURE EXTRACTION FOR IMAGE REGISTRATION

  • Kim Jun-chul;Lee Young-ran;Shin Sung-woong;Kim Kyung-ok
    • 대한원격탐사학회 학술대회논문집 / Proceedings of ISRS 2005 / pp.641-644 / 2005
This paper introduces an Interactive Feature Extraction (IFE) approach for the registration of satellite imagery by matching extracted point and line features. The IFE method contains both point extraction by cross-correlation matching of singular points and line extraction by the Hough transform. The purpose of this study is to minimize the user's intervention in feature extraction and to apply the extracted features easily for image registration. Experiments with satellite imagery datasets proved the feasibility and efficiency of the suggested method.
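Both building blocks named above are standard routines; a minimal OpenCV sketch (with illustrative parameter values, not the paper's) might look like this.

```python
import cv2
import numpy as np

def match_point(reference: np.ndarray, template: np.ndarray):
    """Locate a singular point by normalized cross-correlation matching."""
    scores = cv2.matchTemplate(reference, template, cv2.TM_CCORR_NORMED)
    _, best, _, loc = cv2.minMaxLoc(scores)
    return loc, best                      # (x, y) of best match and its score

def extract_lines(image: np.ndarray):
    """Extract line features with the Hough transform on an edge map."""
    edges = cv2.Canny(image, 50, 150)     # illustrative edge thresholds
    return cv2.HoughLines(edges, 1, np.pi / 180, 120)
```

Matched points and line intersections from the two images would then feed a standard registration transform estimate.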


센서 융합을 통한 환경지도 기반의 강인한 전역 위치추정 (Robust Global Localization based on Environment map through Sensor Fusion)

  • 정민국;송재복
    • 로봇학회논문지 / Vol. 9, No. 2 / pp.96-103 / 2014
Global localization is one of the essential issues in mobile robot navigation. In this study, an indoor global localization method is proposed that uses a Kinect sensor and a monocular upward-looking camera. The proposed method generates an environment map consisting of a grid map, a ceiling feature map from the upward-looking camera, and a spatial feature map obtained from the Kinect sensor. The method selects robot pose candidates using the spatial feature map and updates sample poses with a particle filter based on the grid map. Localization success is determined by calculating the matching error against the ceiling feature map. In various experiments, the proposed method achieved a position accuracy of 0.12 m and a position update time of 10.4 s, which is robust enough for real-world applications.
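As a rough sketch of the particle-filter stage described above; the motion noise, weights, and candidate poses below are stand-ins, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def predict(particles: np.ndarray, motion: np.ndarray,
            noise: float = 0.05) -> np.ndarray:
    """Propagate (x, y, theta) particles by odometry plus Gaussian noise."""
    return particles + motion + rng.normal(0.0, noise, particles.shape)

def resample(particles: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Importance resampling: keep poses consistent with the grid map."""
    weights = weights / weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# Candidate poses would come from the spatial feature map; weights would come
# from matching the Kinect scan against the grid map (both stubbed here).
particles = rng.uniform(-1, 1, (500, 3))           # initial pose candidates
weights = rng.random(500)                          # stand-in scan-match scores
particles = resample(predict(particles, np.array([0.1, 0.0, 0.0])), weights)
```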

LFFCNN: 라이트 필드 카메라의 다중 초점 이미지 합성 (LFFCNN: Multi-focus Image Synthesis in Light Field Camera)

  • 김형식;남가빈;김영섭
    • 반도체디스플레이기술학회지 / Vol. 22, No. 3 / pp.149-154 / 2023
This paper presents a novel approach to multi-focus image fusion using light field cameras. The proposed neural network, LFFCNN (Light Field Focus Convolutional Neural Network), is composed of three main modules: feature extraction, feature fusion, and feature reconstruction. In particular, the feature extraction module incorporates SPP (Spatial Pyramid Pooling) to effectively handle images of various scales. Experimental results demonstrate that the proposed model not only fuses multiple multi-focus images into a single all-in-focus image effectively but also offers more efficient and robust focus fusion than existing methods.
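SPP itself is well defined: pooling a feature map over several grid sizes and concatenating the results yields a fixed-length vector regardless of the input resolution, which is how it handles images of various scales. A minimal PyTorch sketch follows; the pyramid levels are an assumption, not necessarily LFFCNN's configuration.

```python
import torch
import torch.nn.functional as F

def spatial_pyramid_pooling(features: torch.Tensor,
                            levels=(1, 2, 4)) -> torch.Tensor:
    """Pool a (N, C, H, W) feature map at several grid sizes and concatenate,
    yielding a fixed-length vector for any input resolution."""
    n, c = features.shape[:2]
    pooled = [F.adaptive_max_pool2d(features, output_size=k).reshape(n, -1)
              for k in levels]
    return torch.cat(pooled, dim=1)   # length = C * sum(k*k for k in levels)

x = torch.randn(1, 64, 37, 53)        # arbitrary spatial size
assert spatial_pyramid_pooling(x).shape == (1, 64 * (1 + 4 + 16))
```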
