• Title/Summary/Keyword: ASIFT

Affine Invariant Local Descriptors for Face Recognition (얼굴인식을 위한 어파인 불변 지역 서술자)

  • Gao, Yongbin; Lee, Hyo Jong
    • KIPS Transactions on Software and Data Engineering / v.3 no.9 / pp.375-380 / 2014
  • Under controlled environments, such as fixed viewpoints or consistent illumination, the performance of face recognition is nowadays usually high enough to be acceptable. Face recognition in the real world, however, is still a challenging task. The SIFT (Scale Invariant Feature Transform) algorithm is scale and rotation invariant, but it is effective only for small viewpoint changes and often fails when the viewpoint of a face changes over a wide range. In this paper, we use Affine SIFT (ASIFT), an extension of SIFT that addresses this weakness, to detect affine invariant local descriptors for face recognition under wide viewpoint changes. In our scheme, ASIFT is applied only to the gallery face, while SIFT is applied to the probe face. ASIFT generates a series of different viewpoints using affine transformations, so it tolerates viewpoint differences between the gallery face and the probe face. Experimental results showed that our framework achieved higher recognition accuracy than the original SIFT algorithm on the FERET database. (A minimal sketch of this gallery/probe scheme follows below.)
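
A minimal sketch of the gallery/probe scheme described above, using OpenCV: ASIFT-style affine viewpoint simulation is applied to the gallery face only, while the probe face gets plain SIFT. The helper names `simulate_views` and `match_gallery_probe`, the tilt/rotation sampling, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def simulate_views(image, tilts=(2.0, 2.8), phis=(0, 45, 90, 135)):
    """Generate affine-warped copies of the gallery image (simplified ASIFT-style sampling)."""
    views = [image]
    h, w = image.shape[:2]
    for t in tilts:
        for phi in phis:
            # rotate by phi degrees, then compress the x-axis by the tilt factor t
            R = cv2.getRotationMatrix2D((w / 2, h / 2), phi, 1.0)
            rotated = cv2.warpAffine(image, R, (w, h))
            # anti-alias before directional subsampling (isotropic blur as a simplification)
            blurred = cv2.GaussianBlur(rotated, (0, 0), sigmaX=0.8 * np.sqrt(t * t - 1))
            tilted = cv2.resize(blurred, (max(1, int(w / t)), h))
            views.append(tilted)
    return views

def match_gallery_probe(gallery_img, probe_img, ratio=0.75):
    """Match probe SIFT descriptors against all simulated views of the gallery face."""
    sift = cv2.SIFT_create()
    _, probe_desc = sift.detectAndCompute(probe_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    score = 0
    for view in simulate_views(gallery_img):
        _, gal_desc = sift.detectAndCompute(view, None)
        if probe_desc is None or gal_desc is None or len(gal_desc) < 2:
            continue
        for pair in matcher.knnMatch(probe_desc, gal_desc, k=2):
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                score += 1  # Lowe's ratio test: count only distinctive matches
    return score  # a higher score suggests the gallery and probe show the same person
```

In practice, a probe face would be scored against every enrolled gallery face in this way and assigned to the identity with the highest match count.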

Improvement of ASIFT for Object Matching Based on Optimized Random Sampling

  • Phan, Dung; Kim, Soo Hyung; Na, In Seop
    • International Journal of Contents / v.9 no.2 / pp.1-7 / 2013
  • This paper proposes an efficient matching algorithm based on ASIFT (Affine Scale-Invariant Feature Transform), which is fully invariant to affine transformations. We propose a method that reduces the cost of the similarity measure used for matching as well as the number of outliers. First, we replace the Euclidean metric with a linear combination of the Manhattan and Chessboard metrics for measuring the similarity of keypoints. These two metrics are simple yet efficient: using them, the computation time of the matching step is reduced and the number of correct matches is increased. Then, by applying an Optimized Random Sampling Algorithm (ORSA), we remove most of the outlier matches to make the result meaningful. The method was evaluated on various combinations of affine transforms, and the experimental results show that it is superior to SIFT and ASIFT. (A sketch of the combined metric is given below.)
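
To make the similarity measure above concrete, the sketch below combines the Manhattan (L1) and Chessboard (L-infinity) distances linearly in place of the Euclidean distance when comparing keypoint descriptors. The weights `alpha` and `beta`, the ratio-test threshold, and the function names are illustrative assumptions; the paper's actual coefficients and the ORSA outlier-removal step are not reproduced here.

```python
import numpy as np

def combined_distance(desc_a, desc_b, alpha=0.5, beta=0.5):
    """Linear combination of Manhattan and Chessboard distances between two descriptors."""
    diff = np.abs(np.asarray(desc_a, dtype=np.float32) - np.asarray(desc_b, dtype=np.float32))
    manhattan = diff.sum()    # L1 metric
    chessboard = diff.max()   # L-infinity (Chessboard) metric
    return alpha * manhattan + beta * chessboard

def match_descriptors(descs1, descs2, ratio=0.8):
    """Nearest-neighbour matching with the combined metric plus a ratio test."""
    matches = []
    for i, d1 in enumerate(descs1):
        dists = np.array([combined_distance(d1, d2) for d2 in descs2])
        order = np.argsort(dists)
        if len(order) > 1 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

Both component metrics avoid the per-dimension squaring and square root of the Euclidean distance, which is where the savings in matching time would come from.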

Filtering Feature Mismatches using Multiple Descriptors (다중 기술자를 이용한 잘못된 특징점 정합 제거)

  • Kim, Jae-Young; Jun, Heesung
    • Journal of the Korea Society of Computer and Information / v.19 no.1 / pp.23-30 / 2014
  • Feature matching using image descriptors is a robust method that has recently come into wide use. However, mismatches occur in images under 3D transformations, illumination changes, and repetitive patterns. In this paper, we observe that images with repetitive patterns produce a large number of mismatches, analyze the cause, and propose a method to eliminate them. MDMF (Multiple Descriptors-based Mismatch Filtering) eliminates mismatches by using the descriptors of the several nearest features around a given feature point. In experiments on geometric transformations such as scale, rotation, and affine changes, we compare the match ratio of SIFT, ASIFT, and MDMF, and show that MDMF eliminates mismatches successfully. (A rough sketch of this neighbourhood-based filtering idea follows.)
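
The abstract only outlines MDMF, so the following is one loose interpretation of filtering a match with a point's several nearest neighbouring features: a putative match is kept only when the matches of its spatial neighbours also land near the matched location. The function name, the thresholds, and the consistency rule itself are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def filter_matches(pts1, pts2, matches, k=3, radius=50.0, min_support=1):
    """Keep matches whose spatial neighbours in image 1 map close to the match in image 2.

    pts1, pts2 : (N, 2) / (M, 2) arrays of keypoint coordinates
    matches    : list of (i, j) index pairs into pts1 / pts2
    """
    pts1 = np.asarray(pts1, dtype=np.float32)
    pts2 = np.asarray(pts2, dtype=np.float32)
    match_map = dict(matches)                      # i -> j for quick lookup
    kept = []
    for i, j in matches:
        others = [m for m in match_map if m != i]  # other matched points in image 1
        if not others:
            kept.append((i, j))
            continue
        # k spatially nearest matched neighbours of point i in image 1
        d1 = np.linalg.norm(pts1[others] - pts1[i], axis=1)
        neighbours = [others[t] for t in np.argsort(d1)[:k]]
        # count neighbours whose own matches fall within `radius` of pts2[j]
        support = sum(
            np.linalg.norm(pts2[match_map[n]] - pts2[j]) < radius for n in neighbours
        )
        if support >= min_support:
            kept.append((i, j))
    return kept
```

Repetitive patterns tend to send neighbouring points to scattered, inconsistent locations, which is why this kind of local consistency check removes many of their mismatches.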

Marker Detection by Using Affine-SIFT Matching Points for Marker Occlusion of Augmented Reality (증강현실에서 가려진 마커를 위한 Affine-SIFT 정합 점들을 이용한 마커 검출 기법)

  • Kim, Yong-Min; Park, Chan-Woo; Park, Ki-Tae; Moon, Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea CI / v.48 no.2 / pp.55-65 / 2011
  • In this paper, a novel method of marker detection that is robust against marker occlusion in augmented reality is proposed. The proposed method consists of four steps. In the first step, in order to detect an occluded marker effectively, we utilize Affine-SIFT (ASIFT, Affine Scale-Invariant Feature Transform) to find matching points between an enrolled marker and an input image containing the occluded marker. In the second step, we apply Principal Component Analysis (PCA) to eliminate outliers among the matching points on the enrolled marker: the matching points are projected onto the first and second principal axes, and the major and minor axis lengths of an ellipse are determined from the average distance between the projected points and their center. In the third step, the convex-hull vertices enclosing the remaining matching points are taken as polygon vertices for estimating a geometric affine transformation. In the final step, by estimating this affine transformation from the points, the marker is detected robustly even when it is occluded. Experimental results have shown that the proposed method effectively detects occluded markers. (A sketch of steps two to four is given below.)
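
To make steps two to four above concrete, here is a small sketch of a PCA-based ellipse filter on the matched points, a convex hull of the survivors, and an affine estimate from the remaining correspondences, using OpenCV and NumPy. The `scale` factor, the ellipse test, and the use of `cv2.estimateAffine2D` with RANSAC are illustrative choices under these assumptions, not the exact procedure from the paper; the ASIFT matching of step one is assumed to have produced `marker_pts` and `scene_pts`.

```python
import cv2
import numpy as np

def filter_by_pca_ellipse(points, scale=2.0):
    """Keep points inside an ellipse whose axes follow the principal components of the matches."""
    pts = np.asarray(points, dtype=np.float32)
    centered = pts - pts.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov(centered, rowvar=False))  # principal axes (2x2)
    proj = centered @ vecs                                    # coordinates along those axes
    # half-axis lengths from the average projected distance to the center
    half_axes = scale * np.mean(np.abs(proj), axis=0) + 1e-6
    inside = ((proj / half_axes) ** 2).sum(axis=1) <= 1.0
    return inside

def detect_marker(marker_pts, scene_pts):
    """Estimate the marker's affine pose from PCA-filtered ASIFT correspondences."""
    marker_pts = np.asarray(marker_pts, dtype=np.float32)
    scene_pts = np.asarray(scene_pts, dtype=np.float32)
    inside = filter_by_pca_ellipse(marker_pts)
    src, dst = marker_pts[inside], scene_pts[inside]
    if len(src) < 3:
        return None, None                                     # not enough inlier matches
    hull = cv2.convexHull(src.reshape(-1, 1, 2))              # polygon around the inlier matches
    affine, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)  # 2x3 affine matrix
    return affine, hull
```

The resulting 2x3 affine matrix can then project the marker outline into the scene, which is what allows the marker to be located even when part of it is hidden.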