• Title/Summary/Keyword: feature transformation

Face Identification Method Using Face Shape Independent of Lighting Conditions

  • Takimoto, H.;Mitsukura, Y.;Akamatsu, N.
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.2213-2216
    • /
    • 2003
  • In this paper, we propose a face identification method that is robust to lighting conditions, based on the feature-points method. First, the proposed method extracts the edges of the facial features. Then, using the Hough transform, it determines ellipse parameters for each facial feature from the extracted edges. Finally, the proposed method performs face identification using these parameters. Even when the face image is taken under various lighting conditions, the facial feature edges remain easy to extract. Moreover, a subject can be identified even when a feature is not fully visible, because the method estimates the parameters approximately through the Hough transformation. The proposed method is therefore more robust to lighting conditions than conventional methods. To show its effectiveness, computer simulations are carried out using real images.
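The parameter estimation described above rests on Hough-style voting. A minimal sketch of the idea, using a circle of known radius rather than the paper's full five-parameter ellipse (function name and synthetic data are illustrative, not from the paper):

```python
import numpy as np

def hough_circle_centre(edge_points, radius, shape):
    """Each edge point votes for every centre that could have produced it;
    the accumulator peak is the estimated centre."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False)
    for y, x in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(acc.argmax(), acc.shape)

# synthetic edge: a circle of radius 10 centred at (30, 40)
angles = np.linspace(0.0, 2.0 * np.pi, 120, endpoint=False)
edges = [(30 + 10 * np.sin(a), 40 + 10 * np.cos(a)) for a in angles]
centre = hough_circle_centre(edges, 10, (64, 64))
print(centre)  # the vote peak lands at (or next to) the true centre
```

Because every edge point votes for a whole curve of candidate centres, the peak survives even when part of the circle is occluded, which is the robustness property the abstract appeals to.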

Three Dimensional Geometric Feature Detection Using Computer Vision System and Laser Structured Light (컴퓨터 시각과 레이저 구조광을 이용한 물체의 3차원 정보 추출)

  • Hwang, H.;Chang, Y.C.;Im, D.H.
    • Journal of Biosystems Engineering
    • /
    • v.23 no.4
    • /
    • pp.381-390
    • /
    • 1998
  • An algorithm to extract the 3-D geometric information of a static object was developed using a 2-D computer vision system and a laser structured-lighting device. Multiple parallel lines were used as the structured light pattern. The proposed algorithm is composed of three stages. In the first stage, camera calibration, which determines the coordinate transformation between the image plane and the real 3-D world, was performed using six known pairs of points. In the second stage, the height of the object was computed from the shift of the projected laser beam on the object. Finally, using the height information of each 2-D image point, the corresponding 3-D information was computed from the camera calibration results. For arbitrary geometric objects, the maximum error of the 3-D features extracted by the proposed algorithm was less than 1~2 mm. The results showed that the proposed algorithm is accurate for 3-D geometric feature detection of an object.
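The calibration stage above, which maps between the image plane and 3-D world coordinates from six known point pairs, can be sketched as a direct linear transformation (DLT). The projection matrix and point values below are synthetic examples, not the study's data:

```python
import numpy as np

def calibrate_dlt(world_pts, image_pts):
    """Estimate a 3x4 projection matrix from >= 6 world/image pairs:
    two homogeneous linear equations per pair, solved via SVD."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    A = np.asarray(rows, dtype=float)
    # the solution is the singular vector of the smallest singular value
    P = np.linalg.svd(A)[2][-1].reshape(3, 4)
    return P / P[2, 3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]   # homogeneous -> pixel coordinates

# synthetic ground truth: a plausible projection matrix and 8 scene points
P_true = np.array([[800.0, 0.0, 320.0, 10.0],
                   [0.0, 800.0, 240.0, 20.0],
                   [0.0, 0.0, 1.0, 5.0]])
world = np.array([[0, 0, 1], [1, 0, 2], [0, 1, 3], [1, 1, 1],
                  [2, 0, 2], [0, 2, 1], [2, 2, 3], [1, 2, 2]], dtype=float)
image = np.array([project(P_true, w) for w in world])
P_est = calibrate_dlt(world, image)
err = max(np.linalg.norm(project(P_est, w) - i) for w, i in zip(world, image))
print(err)  # reprojection error is numerically zero for noise-free data
```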

Content-Based Image Retrieval System using Feature Extraction of Image Objects (영상 객체의 특징 추출을 이용한 내용 기반 영상 검색 시스템)

  • Jung Seh-Hwan;Seo Kwang-Kyu
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.27 no.3
    • /
    • pp.59-65
    • /
    • 2004
  • This paper explores an image segmentation and representation method using vector quantization (VQ) on color and texture for a content-based image retrieval system. The basic idea is a transformation from raw pixel data to a small set of image regions that are coherent in color and texture space. These schemes are used for object-based image retrieval. The features used for retrieval are three color features from the HSV color model and five texture features from gray-level co-occurrence matrices. Once feature extraction is performed on the image, an 8-dimensional feature vector represents each pixel. The VQ algorithm is used to cluster the pixel data into groups. A representative feature table based on the dominant groups is obtained and used to retrieve similar images according to the objects within the image. The proposed method can retrieve similar images even when the objects are translated, scaled, or rotated.
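The clustering step above can be sketched as plain k-means vector quantization over per-pixel 8-dimensional feature vectors (the data below is synthetic; the paper's actual VQ training procedure may differ):

```python
import numpy as np

def vector_quantize(feats, k, iters=20):
    """Minimal VQ sketch: cluster feature vectors into k groups."""
    # greedy farthest-point initialisation keeps codewords spread out
    codebook = feats[[0]]
    for _ in range(k - 1):
        d = np.linalg.norm(feats[:, None] - codebook[None], axis=2).min(axis=1)
        codebook = np.vstack([codebook, feats[d.argmax()]])
    for _ in range(iters):
        # assign each vector to its nearest codeword, then recentre
        d = np.linalg.norm(feats[:, None] - codebook[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            member = labels == j
            if member.any():
                codebook[j] = feats[member].mean(axis=0)
    return codebook, labels

# synthetic per-pixel vectors: 3 colour + 5 texture dims, two regions
rng = np.random.default_rng(1)
region_a = rng.normal(0.0, 0.1, (60, 8))
region_b = rng.normal(5.0, 0.1, (60, 8))
feats = np.vstack([region_a, region_b])
codebook, labels = vector_quantize(feats, k=2)
```

The dominant groups in `labels` would then populate the representative feature table the abstract describes.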

Global Covariance based Principal Component Analysis for Speaker Identification (화자식별을 위한 전역 공분산에 기반한 주성분분석)

  • Seo, Chang-Woo;Lim, Young-Hwan
    • Phonetics and Speech Sciences
    • /
    • v.1 no.1
    • /
    • pp.69-73
    • /
    • 2009
  • This paper proposes an efficient global covariance-based principal component analysis (GCPCA) for speaker identification. Principal component analysis (PCA) is a feature extraction method that reduces the dimension of the feature vectors and the correlation among them by projecting the original feature space onto a small subspace through a transformation. However, finding the eigenvalue and eigenvector matrices from a full covariance matrix for each speaker requires a large amount of training data. The proposed method first calculates the global covariance matrix using the training data of all speakers, and then finds the eigenvalue matrix and the corresponding eigenvector matrix from this global covariance matrix. Compared to conventional PCA and Gaussian mixture model (GMM) methods, the proposed method shows better performance while requiring less storage space and lower complexity in speaker identification.
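The key move, pooling all speakers' data into one covariance matrix before eigendecomposition, can be sketched as follows (dimensions and data are illustrative assumptions):

```python
import numpy as np

def gcpca_basis(speaker_feats, n_components):
    """Global-covariance PCA sketch: one shared projection for all speakers."""
    pooled = np.vstack(speaker_feats)        # pool ALL speakers' vectors
    pooled = pooled - pooled.mean(axis=0)
    cov = np.cov(pooled, rowvar=False)       # single global covariance
    vals, vecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    top = np.argsort(vals)[::-1][:n_components]
    return vecs[:, top]                      # d x n_components basis

# synthetic 20-dim features for three "speakers"
rng = np.random.default_rng(0)
speakers = [rng.normal(s, 1.0, (100, 20)) for s in range(3)]
W = gcpca_basis(speakers, 5)
reduced = speakers[0] @ W                    # project onto the shared subspace
```

Only one eigendecomposition is needed regardless of speaker count, which is where the storage and complexity savings come from.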

A Study on Feature Selection and Feature Extraction for Hyperspectral Image Classification Using Canonical Correlation Classifier (정준상관분류에 의한 하이퍼스펙트럴영상 분류에서 유효밴드 선정 및 추출에 관한 연구)

  • Park, Min-Ho
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.29 no.3D
    • /
    • pp.419-431
    • /
    • 2009
  • The core of this study is finding an efficient band selection or extraction method that discovers the optimal spectral bands when applying the canonical correlation classifier (CCC) to hyperspectral data. The optimal bands under each separability decision technique were selected using the MultiSpec© software developed by Purdue University. Six separability decision techniques were used: Divergence, Transformed Divergence, Bhattacharyya, Mean Bhattacharyya, Covariance Bhattacharyya, and Noncovariance Bhattacharyya. For feature extraction, PCA and MNF transformations were performed with the ERDAS Imagine and ENVI software. To compare and assess the effects of feature selection and feature extraction, land cover classification was performed with the CCC. The overall accuracy of the CCC using the initially selected 60 bands was 71.8%; the highest accuracy, 79.0%, was obtained by running the CCC after applying Noncovariance Bhattacharyya. In conclusion, only the Noncovariance Bhattacharyya separability decision method proved valuable as a feature selection algorithm for CCC-based hyperspectral image classification; with the other feature selection and extraction algorithms except Divergence, the classification accuracy of the CCC actually declined.
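The separability measures above score how well each band distinguishes two classes. A one-band Gaussian Bhattacharyya distance, used here to rank synthetic bands, gives the flavor (the study's multivariate variants generalize this to covariance matrices):

```python
import numpy as np

def bhattacharyya_1d(x, y):
    """Bhattacharyya distance between two classes in one band,
    assuming each class is Gaussian in that band."""
    m1, m2, v1, v2 = x.mean(), y.mean(), x.var(), y.var()
    return (0.25 * (m1 - m2) ** 2 / (v1 + v2)
            + 0.5 * np.log((v1 + v2) / (2.0 * np.sqrt(v1 * v2))))

# synthetic 10-band samples: only band 3 separates the two classes
rng = np.random.default_rng(0)
class1 = rng.normal(0.0, 1.0, (200, 10))
class2 = rng.normal(0.0, 1.0, (200, 10))
class2[:, 3] += 4.0
scores = np.array([bhattacharyya_1d(class1[:, b], class2[:, b])
                   for b in range(10)])
best_band = int(scores.argmax())   # band selection = keep top-scoring bands
```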

Voice Personality Transformation Using an Optimum Classification and Transformation (최적 분류 변환을 이용한 음성 개성 변환)

  • Lee, Ki-Seung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.5
    • /
    • pp.400-409
    • /
    • 2004
  • In this paper, a voice personality transformation method is proposed which makes one person's voice sound like another person's voice. To transform the voice personality, the vocal tract transfer function is used as the transformation parameter. Compared with previous methods, the proposed method makes the transformed speech closer to the target speaker's voice from both subjective and objective points of view. Conversion between vocal tract transfer functions is implemented by classification of the entire vector space followed by a linear transformation for each cluster. The LPC cepstrum is used as the feature parameter. A joint classification and transformation method is proposed, in which the optimum clusters and transformation matrices are simultaneously estimated under a minimum mean square error criterion. To evaluate the performance of the proposed method, transformation rules were generated from 150 sentences uttered by three male and one female speakers. These rules were then applied to another 150 sentences uttered by the same speakers, and an objective evaluation and subjective listening tests were performed.
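The classification-then-linear-transformation structure can be sketched as fitting one least-squares affine map per cluster. Note this is a naive two-stage sketch with fixed centroids, not the paper's joint MMSE optimization of clusters and matrices:

```python
import numpy as np

def fit_piecewise_linear(src, tgt, centroids):
    """Partition source vectors by nearest centroid, then fit one
    least-squares affine map per cluster (hypothetical helper)."""
    labels = np.linalg.norm(src[:, None] - centroids[None], axis=2).argmin(1)
    maps = []
    for c in range(len(centroids)):
        S, T = src[labels == c], tgt[labels == c]
        # augment with a bias column and solve S_aug @ W = T
        S_aug = np.hstack([S, np.ones((len(S), 1))])
        W, *_ = np.linalg.lstsq(S_aug, T, rcond=None)
        maps.append(W)
    return labels, maps

# synthetic 2-dim "cepstral" data: each cluster has its own affine law
rng = np.random.default_rng(0)
A = rng.normal(0.0, 1.0, (20, 2))
B = rng.normal(10.0, 1.0, (20, 2))
src = np.vstack([A, B])
tgt = np.vstack([2.0 * A + 1.0, -B + 3.0])
centroids = np.array([[0.0, 0.0], [10.0, 10.0]])
labels, maps = fit_piecewise_linear(src, tgt, centroids)
x = np.array([0.5, -0.3])
pred = np.append(x, 1.0) @ maps[0]   # convert a new source vector
```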

Comparative Study of GDPA and Hough Transformation for Linear Feature Extraction using Space-borne Imagery (위성 영상정보를 이용한 선형 지형지물 추출에서의 GDPA와 Hough 변환 처리결과 비교연구)

  • Lee Kiwon;Ryu Hee-Young;Kwon Byung-Doo
    • Korean Journal of Remote Sensing
    • /
    • v.20 no.4
    • /
    • pp.261-274
    • /
    • 2004
  • Feature extraction from remotely sensed imagery has been recognized as one of the important tasks in remote sensing applications. As high-resolution imagery comes into wide use for engineering purposes, the need for more accurate feature information is also increasing. In particular, several techniques for the automatic extraction of linear features such as roads from mid- or low-resolution imagery have been developed and applied, but quantitative comparative analyses of these techniques, and case studies on high-resolution imagery, are rare. In this study, we implemented a computer program to run and compare the GDPA (Gradient Direction Profile Analysis) algorithm and the Hough transformation, and compared the results of applying the two techniques to several images against the road centerline and boundary layers of a digital map. For quantitative comparison, a ranking method based on commission and omission errors was used. As a result, the Hough transform showed higher accuracy, by over 20% on average. In execution speed, GDPA holds a clear advantage over the Hough transform, and the accuracy difference between the two was not remarkable once noise removal was applied to the GDPA result. In conclusion, GDPA is expected to be more advantageous than the Hough transform in practical applications.
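The Hough transform compared above parameterizes straight lines as rho = x·cos(theta) + y·sin(theta) and accumulates votes in (rho, theta) space. A minimal version on a synthetic "road" line (illustrative, not the study's implementation):

```python
import numpy as np

def hough_peak_line(edge_points, shape, n_theta=180):
    """Return (rho, theta_deg) of the strongest straight line."""
    diag = int(np.ceil(np.hypot(shape[0], shape[1])))
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    cols = np.arange(n_theta)
    for y, x in edge_points:
        # each edge pixel votes once per candidate angle
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        np.add.at(acc, (rhos + diag, cols), 1)
    r, t = np.unravel_index(acc.argmax(), acc.shape)
    return r - diag, float(np.rad2deg(thetas[t]))

# synthetic linear feature: the horizontal line y = 25
edge_points = [(25, x) for x in range(50)]
rho, theta_deg = hough_peak_line(edge_points, (64, 64))
```

The accumulator loop over all angles for every edge pixel is also why the Hough transform loses to GDPA on execution speed in the study.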

Voice Personality Transformation Using a Multiple Response Classification and Regression Tree (다중 응답 분류회귀트리를 이용한 음성 개성 변환)

  • Lee, Ki-Seung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.3
    • /
    • pp.253-261
    • /
    • 2004
  • In this paper, a new voice personality transformation method is proposed which modifies the speaker-dependent feature variables in the speech signal. The proposed method takes the cepstrum vectors and pitch as the transformation parameters, representing the vocal tract transfer function and the excitation signal, respectively. To transform these parameters, a multiple response classification and regression tree (MR-CART) is employed. MR-CART is the vector-extended version of the conventional CART, whose response is given in vector form. We evaluated the performance of the proposed method by comparing it with a previously proposed codebook mapping method, and quantitatively analyzed the voice transformation performance and complexity under various conditions. In experimental results for four speakers, the proposed method objectively outperforms the conventional codebook mapping method, and the transformed speech also sounds closer to the target speech.
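The defining feature of MR-CART, tree leaves that predict vectors rather than scalars, can be shown with a one-split regression stump (an illustrative miniature, not the paper's tree-growing procedure):

```python
import numpy as np

class VectorStump:
    """One-split regression tree whose leaf predictions are VECTORS:
    each side of the split predicts the mean response vector."""

    def fit(self, X, Y):
        best_err, self.j, self.t = np.inf, 0, 0.0
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j])[:-1]:
                left = X[:, j] <= t
                # sum of squared errors when each side predicts its mean vector
                err = (((Y[left] - Y[left].mean(0)) ** 2).sum()
                       + ((Y[~left] - Y[~left].mean(0)) ** 2).sum())
                if err < best_err:
                    best_err, self.j, self.t = err, j, t
        left = X[:, self.j] <= self.t
        self.leaf = (Y[left].mean(0), Y[~left].mean(0))
        return self

    def predict(self, X):
        left = X[:, self.j] <= self.t
        return np.where(left[:, None], self.leaf[0], self.leaf[1])

# scalar input, 3-dim vector response (e.g. a tiny cepstrum stand-in)
X = np.array([[0.0], [0.1], [0.2], [1.0], [1.1], [1.2]])
Y = np.array([[0, 0, 0]] * 3 + [[1, 2, 3]] * 3, dtype=float)
pred = VectorStump().fit(X, Y).predict(np.array([[0.05], [1.05]]))
```

A full MR-CART recurses this split on each side; the stump shows only the vector-response scoring rule.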

Filtering Feature Mismatches using Multiple Descriptors (다중 기술자를 이용한 잘못된 특징점 정합 제거)

  • Kim, Jae-Young;Jun, Heesung
    • Journal of the Korea Society of Computer and Information
    • /
    • v.19 no.1
    • /
    • pp.23-30
    • /
    • 2014
  • Feature matching using image descriptors is a robust method that has recently come into wide use. However, mismatches occur in 3-D transformed images, illumination-changed images, and repetitive-pattern images. In this paper, we observe that many mismatches occur in images with repetitive patterns, analyze them, and propose a method to eliminate these mismatches. MDMF (Multiple Descriptors-based Mismatch Filtering) eliminates mismatches by using the descriptors of the several features nearest to a given feature point. In experiments on geometric transformations such as scale, rotation, and affine changes, we compare the match ratios of SIFT, ASIFT, and MDMF, and show that MDMF can eliminate mismatches successfully.
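One loose reading of the idea, checking that the descriptors of each matched point's nearest spatial neighbours also agree, can be sketched as follows; this is an illustrative interpretation of multi-descriptor filtering, not the authors' exact rule:

```python
import numpy as np

def mdmf_filter(desc1, desc2, pts1, pts2, matches, k=3, thresh=0.5):
    """Keep a match only if the k nearest spatial neighbours of its two
    endpoints agree in descriptor space (illustrative sketch)."""
    kept = []
    for i, j in matches:
        # k nearest neighbours of each matched point, excluding itself
        ni = np.argsort(np.linalg.norm(pts1 - pts1[i], axis=1),
                        kind="stable")[1:k + 1]
        nj = np.argsort(np.linalg.norm(pts2 - pts2[j], axis=1),
                        kind="stable")[1:k + 1]
        if np.linalg.norm(desc1[ni] - desc2[nj], axis=1).mean() <= thresh:
            kept.append((i, j))
    return kept

# synthetic scene: identical geometry and descriptors in both images
rng = np.random.default_rng(0)
pts1 = np.array([[10.0 * i, 0.0] for i in range(10)])
pts2 = pts1.copy()
desc1 = rng.normal(0.0, 1.0, (10, 8))
desc2 = desc1.copy()
matches = [(i, i) for i in range(10)] + [(0, 4)]  # one deliberate mismatch
kept = mdmf_filter(desc1, desc2, pts1, pts2, matches)
```

A repetitive pattern can make two distant points look identical, but their neighbourhoods rarely do, which is why the surrounding descriptors expose the mismatch.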

Feature Matching Algorithm Robust To Viewpoint Change (시점 변화에 강인한 특징점 정합 기법)

  • Jung, Hyun-jo;Yoo, Ji-sang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.40 no.12
    • /
    • pp.2363-2371
    • /
    • 2015
  • In this paper, we propose a new feature matching algorithm that is robust to viewpoint changes, using the FAST (Features from Accelerated Segment Test) feature detector and the SIFT (Scale-Invariant Feature Transform) feature descriptor. The original FAST algorithm unnecessarily produces many feature points along edges in the image; to solve this problem, we refine them using the principal curvatures. We use the SIFT descriptor to describe the extracted feature points and calculate the homography matrix through RANSAC (RANdom SAmple Consensus) with the matching pairs obtained from two images of different viewpoints. To make feature matching robust to viewpoint changes, we classify the matching pairs by calculating the Euclidean distance between the coordinates of the reference-image feature points transformed by the homography and the coordinates of the feature points in the other viewpoint's image. Experimental results show that the proposed algorithm performs better than conventional feature matching algorithms while requiring much less computation.
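The classification step above, transforming reference points by the estimated homography and thresholding the Euclidean distance to their matched points, can be sketched as follows (the homography and points are synthetic examples):

```python
import numpy as np

def classify_matches(H, ref_pts, other_pts, thresh=3.0):
    """Mark each match as inlier/outlier by the distance between the
    H-transformed reference point and its matched point."""
    ref_h = np.hstack([ref_pts, np.ones((len(ref_pts), 1))])
    proj = ref_h @ H.T
    proj = proj[:, :2] / proj[:, 2:3]   # back from homogeneous coordinates
    return np.linalg.norm(proj - other_pts, axis=1) <= thresh

# a pure-translation homography as a simple synthetic example
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0, 1.0]])
ref = np.array([[10.0, 10.0], [40.0, 20.0], [30.0, 30.0]])
other = ref + np.array([5.0, -2.0])
other[2] += 50.0                        # one deliberate mismatch
mask = classify_matches(H, ref, other)
```

In practice H would come from RANSAC over SIFT matches, and the threshold (3 pixels here, an assumed value) trades off how strictly mismatches are rejected.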