• Title/Summary/Keyword: image feature descriptor

Search Results: 140

Blur-Invariant Feature Descriptor Using Multidirectional Integral Projection

  • Lee, Man Hee;Park, In Kyu
    • ETRI Journal / v.38 no.3 / pp.502-509 / 2016
  • Feature detection and description are key ingredients of common image processing and computer vision applications. Most existing algorithms focus on robust feature matching under challenging conditions, such as in-plane rotations and scale changes. Consequently, they usually fail when the scene is blurred by camera shake or an object's motion. To solve this problem, we propose a new feature description algorithm that is robust to image blur and significantly improves the feature matching performance. The proposed algorithm builds a feature descriptor by considering the integral projection along four angular directions ($0^{\circ}$, $45^{\circ}$, $90^{\circ}$, and $135^{\circ}$) and by combining the four projection vectors into a single high-dimensional vector. Intensive experiments show that the proposed descriptor outperforms existing descriptors for different types of blur caused by linear motion, nonlinear motion, and defocus. Furthermore, the proposed descriptor is robust to intensity changes and image rotation.
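
To make the projection idea above concrete, here is a minimal NumPy sketch that computes integral projections of a grayscale patch along 0°, 45°, 90°, and 135° and concatenates them into one vector. It is an illustration only, not the authors' implementation; the patch size and the L2 normalization are assumptions.

```python
import numpy as np

def integral_projection_descriptor(patch):
    """Concatenate integral projections of a square patch along
    0, 45, 90 and 135 degrees (illustrative sketch, not the paper's code)."""
    patch = patch.astype(np.float32)
    n = patch.shape[0]

    p0 = patch.sum(axis=1)    # 0 deg: sum along each row
    p90 = patch.sum(axis=0)   # 90 deg: sum along each column
    # 45/135 deg: sums over the two diagonal families
    p45 = np.array([np.trace(patch, offset=k) for k in range(-n + 1, n)])
    p135 = np.array([np.trace(np.fliplr(patch), offset=k) for k in range(-n + 1, n)])

    desc = np.concatenate([p0, p45, p90, p135])
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc   # L2 normalization (assumed)

# Usage on a synthetic 32x32 patch: 32 + 63 + 32 + 63 = 190 dimensions
print(integral_projection_descriptor(np.random.rand(32, 32)).shape)
```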

Efficient Image Stitching Using Fast Feature Descriptor Extraction and Matching (빠른 특징점 기술자 추출 및 정합을 이용한 효율적인 이미지 스티칭 기법)

  • Rhee, Sang-Burm
    • KIPS Transactions on Software and Data Engineering / v.2 no.1 / pp.65-70 / 2013
  • Recently, as digital camera technology has developed and spread, digital images have become easy to produce, and the field of computer vision has been actively researched. In particular, research that extracts and utilizes features in images has been actively carried out. Image stitching is a method that creates a high-resolution image by extracting and matching features. It can be widely used for military and medical purposes as well as in a variety of fields in everyday life. In this paper, we propose an efficient image stitching method using fast feature descriptor extraction and matching based on the SURF algorithm. Matching points can be found accurately and quickly by reducing the dimension of the feature descriptor. The feature descriptor is generated by filtering out unnecessary minutiae from the extracted features. To reduce the computation time and match features efficiently, we reduce the dimension of the descriptor and expand the orientation window. In our results, the processing times of feature matching and image stitching are faster than those of previous algorithms, and the method also produces natural-looking stitched images.
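
A minimal sketch of the descriptor-reduction-and-matching stage described above. The paper is based on SURF, which sits in OpenCV's non-free contrib module, so this sketch substitutes SIFT and uses a shared PCA projection as a stand-in for the paper's own dimension reduction; the target dimension, the ratio-test threshold, and the file names are assumptions.

```python
import cv2
import numpy as np

# Sketch: SIFT stands in for SURF; PCA stands in for the paper's own reduction.
sift = cv2.SIFT_create()
img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)
kps1, d1 = sift.detectAndCompute(img1, None)
kps2, d2 = sift.detectAndCompute(img2, None)

# Shared PCA basis so both descriptor sets live in the same reduced space
dim = 32
stacked = np.vstack([d1, d2]).astype(np.float32)
mean = stacked.mean(axis=0)
_, _, vt = np.linalg.svd(stacked - mean, full_matrices=False)
r1 = (d1 - mean) @ vt[:dim].T
r2 = (d2 - mean) @ vt[:dim].T

# Brute-force matching with Lowe's ratio test on the reduced descriptors
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(r1.astype(np.float32), r2.astype(np.float32), k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(len(good), "tentative matches")
```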

PPD: A Robust Low-computation Local Descriptor for Mobile Image Retrieval

  • Liu, Congxin;Yang, Jie;Feng, Deying
    • KSII Transactions on Internet and Information Systems (TIIS) / v.4 no.3 / pp.305-323 / 2010
  • This paper proposes an efficient and yet powerful local descriptor called the phase-space partition based descriptor (PPD). The descriptor is designed for mobile image matching and retrieval. PPD, which is inspired by SIFT, also encodes the salient aspects of the image gradient in the neighborhood around an interest point. However, instead of SIFT's smoothed gradient orientation histogram, we apply region-based gradient statistics in phase space to the construction of the feature representation, which greatly reduces the computation required. The feature matching experiments demonstrate that PPD achieves performance close to that of SIFT while offering faster descriptor construction and matching. We also present results showing that using PPD descriptors in a mobile image retrieval application yields performance comparable to SIFT.
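
The exact phase-space partition used by PPD is not spelled out in the abstract, so the sketch below only illustrates the general idea of region-based gradient statistics: it bins a patch's gradients jointly by orientation and magnitude instead of building SIFT's smoothed orientation histogram. The bin counts and patch size are assumptions, and this is not the actual PPD construction.

```python
import numpy as np
import cv2

def gradient_phase_histogram(patch, ori_bins=8, mag_bins=4):
    """Generic region-based gradient statistic over an (orientation, magnitude)
    partition; a stand-in sketch, not the actual PPD construction."""
    patch = patch.astype(np.float32)
    gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx)  # in (-pi, pi]

    o_idx = np.minimum((ori + np.pi) / (2 * np.pi) * ori_bins, ori_bins - 1).astype(int)
    m_edges = np.linspace(0, mag.max() + 1e-6, mag_bins + 1)
    m_idx = np.clip(np.digitize(mag, m_edges) - 1, 0, mag_bins - 1)

    hist = np.zeros((ori_bins, mag_bins), np.float32)
    np.add.at(hist, (o_idx.ravel(), m_idx.ravel()), 1.0)
    hist /= hist.sum()
    return hist.ravel()

print(gradient_phase_histogram(np.random.rand(41, 41)).shape)  # (32,)
```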

Robust Facial Expression Recognition Based on Local Directional Pattern

  • Jabid, Taskeed;Kabir, Md. Hasanul;Chae, Oksam
    • ETRI Journal / v.32 no.5 / pp.784-794 / 2010
  • Automatic facial expression recognition has many potential applications in different areas of human-computer interaction. However, they are not yet fully realized due to the lack of an effective facial feature descriptor. In this paper, we present a new appearance-based feature descriptor, the local directional pattern (LDP), to represent facial geometry and analyze its performance in expression recognition. An LDP feature is obtained by computing the edge response values in 8 directions at each pixel and encoding them into an 8-bit binary number using the relative strength of these edge responses. The LDP descriptor, a distribution of LDP codes within an image or image patch, is used to describe each expression image. The effectiveness of dimensionality reduction techniques, such as principal component analysis and AdaBoost, is also analyzed in terms of computational cost saving and classification accuracy. Two well-known machine learning methods, template matching and support vector machine, are used for classification on the Cohn-Kanade and Japanese female facial expression databases. The higher classification accuracy demonstrates the superiority of the LDP descriptor over other appearance-based feature descriptors.
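
For readers who want to experiment with LDP, the sketch below computes LDP codes by convolving with the eight Kirsch compass masks and setting a bit for each of the k strongest absolute responses per pixel, then histogramming the codes. Kirsch masks and k = 3 are the choices usually reported for LDP, but treat the specific parameters and the file name as assumptions rather than the paper's exact specification.

```python
import numpy as np
import cv2

def ldp_codes(gray, k=3):
    """Local Directional Pattern codes: convolve with 8 Kirsch compass masks,
    set a bit for each of the k strongest absolute edge responses (sketch)."""
    # Build the 8 Kirsch masks by rotating the border of the east mask
    border = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    base = np.array([-3, -3, 5, 5, 5, -3, -3, -3], np.float32)
    responses = []
    for d in range(8):
        mask = np.zeros((3, 3), np.float32)
        for (r, c), v in zip(border, np.roll(base, d)):
            mask[r, c] = v
        responses.append(np.abs(cv2.filter2D(gray.astype(np.float32), -1, mask)))
    responses = np.stack(responses)                      # (8, H, W)

    # Rank the 8 responses per pixel; bits of the top-k directions form the code
    ranks = np.argsort(np.argsort(responses, axis=0), axis=0)
    code = np.zeros(gray.shape, np.int32)
    for d in range(8):
        code += (ranks[d] >= 8 - k).astype(np.int32) << d
    return code.astype(np.uint8)

gray = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)
codes = ldp_codes(gray)
hist = np.bincount(codes.ravel(), minlength=256)         # LDP histogram descriptor
```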

Experimental Optimal Choice Of Initial Candidate Inliers Of The Feature Pairs With Well-Ordering Property For The Sample Consensus Method In The Stitching Of Drone-based Aerial Images

  • Shin, Byeong-Chun;Seo, Jeong-Kweon
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.4 / pp.1648-1672 / 2020
  • There are several types of image registration for stitching separated images that overlap each other. One of these is feature-based registration using a common feature descriptor. In this study, we generate a mosaic of drone aerial images using feature-based registration. As the feature descriptor, we apply the scale-invariant feature transform (SIFT) descriptor. To verify the authenticity of the feature points and to obtain the mapping function, we employ the sample consensus method; exploiting an inherent characteristic of the sensed images, namely the geometric congruence between the feature points of the images, we propose a novel hypothesis estimation of the mapping function for stitching via optimally chosen initial candidate inliers in the sample consensus method. Based on the experimental results, we show the efficiency of the proposed method compared with the benchmark random sample consensus method (RANSAC); the well-ordering property defined in this context and the extensive stitching examples support its utility. Moreover, the sample consensus scheme proposed in this study is uncomplicated and robust, and the severe mis-stitching occasionally produced by RANSAC is remarkably reduced as measured by pixel differences.
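
The paper's contribution is a particular way of choosing the initial candidate inliers fed to the sample consensus step, which is not reproduced here. As the benchmark it is compared against, the sketch below runs a standard SIFT + RANSAC homography estimation with OpenCV; the ratio-test threshold, reprojection threshold, and file names are placeholders.

```python
import cv2
import numpy as np

img1 = cv2.imread("aerial_1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("aerial_2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kps1, d1 = sift.detectAndCompute(img1, None)
kps2, d2 = sift.detectAndCompute(img2, None)

matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]

src = np.float32([kps1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kps2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# Standard RANSAC hypothesis-and-verify estimation of the mapping function
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
print("inliers:", int(inlier_mask.sum()), "/", len(good))
```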

An approach for improving the performance of the Content-Based Image Retrieval (CBIR)

  • Jeong, Inseong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.30 no.6_2 / pp.665-672 / 2012
  • Amid rapidly increasing imagery inputs and volumes in remote sensing imagery databases, Content-Based Image Retrieval (CBIR) is an effective tool to search for an image feature or image content of interest that a user wants to retrieve. It seeks to capture salient features from a 'query' image, and then to locate other instances of image regions having similar features elsewhere in the image database. For a CBIR approach that uses texture as a primary feature primitive, designing a texture descriptor that better represents image contents is key to improving CBIR results. For this purpose, an extended feature vector combining the Gabor filter and the co-occurrence histogram method is suggested and evaluated against quantitative and qualitative retrieval performance criteria. For better CBIR performance, assessing similarity between high-dimensional feature vectors is also a challenging issue. Therefore, a number of distance metrics (i.e., the L1 and L2 norms) are tried to measure closeness between two feature vectors, and their impact on retrieval results is analyzed. In this paper, experimental results are presented with several CBIR samples. The current results show that 1) the overall retrieval quantity and quality are improved by combining the two types of feature vectors, 2) some features are better retrieved by a specific feature vector, and 3) retrieval result quality (i.e., ranking of retrieved image tiles) is sensitive to the adopted similarity metric when the extended feature vector is employed.
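
As a rough illustration of the texture side of this approach, the sketch below builds a Gabor-filter-bank feature vector (mean and standard deviation of each response) and compares two such vectors with the L1 and L2 norms; the co-occurrence histogram component is omitted, and the filter-bank parameters and file names are assumptions.

```python
import cv2
import numpy as np

def gabor_feature_vector(gray, scales=(7, 11, 15), orientations=4):
    """Mean/std of Gabor filter responses over a bank of scales and orientations;
    a simplified texture feature, not the paper's exact extended vector."""
    gray = gray.astype(np.float32)
    feats = []
    for ksize in scales:
        for i in range(orientations):
            theta = i * np.pi / orientations
            kern = cv2.getGaborKernel((ksize, ksize), sigma=ksize / 3.0,
                                      theta=theta, lambd=ksize / 2.0,
                                      gamma=0.5, psi=0)
            resp = cv2.filter2D(gray, cv2.CV_32F, kern)
            feats += [resp.mean(), resp.std()]
    return np.array(feats, np.float32)

def l1(a, b):
    return np.abs(a - b).sum()

def l2(a, b):
    return np.sqrt(((a - b) ** 2).sum())

query = gabor_feature_vector(cv2.imread("query_tile.png", cv2.IMREAD_GRAYSCALE))
cand = gabor_feature_vector(cv2.imread("tile_042.png", cv2.IMREAD_GRAYSCALE))
print("L1:", l1(query, cand), "L2:", l2(query, cand))
```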

Improved Feature Descriptor Extraction and Matching Method for Efficient Image Stitching on Mobile Environment (모바일 환경에서 효율적인 영상 정합을 위한 향상된 특징점 기술자 추출 및 정합 기법)

  • Park, Jin-Yang;Ahn, Hyo Chang
    • Journal of the Korea Society of Computer and Information / v.18 no.10 / pp.39-46 / 2013
  • Recently, the mobile industry has grown rapidly and the performance of mobile devices has improved, so the use of mobile devices in daily life is increasing. Mobile devices are also equipped with high-performance cameras, so image stitching can be carried out on the mobile device instead of a desktop. However, mobile devices have limited hardware for performing image stitching, which has high computational complexity. In this paper, we propose an improved feature descriptor extraction and matching method for efficient image stitching in a mobile environment. Our method reduces the computational complexity by expanding the orientation window and reducing the dimension of the feature descriptor when it is generated. In addition, the computational complexity of image stitching is reduced by classifying the matching points. In our results, our method improves the computation time of image stitching compared with the previous method. Therefore, our method is suitable for mobile environments and also produces natural-looking stitched images.
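
Once a homography between the two views has been estimated (for example with the matching and RANSAC steps sketched in the entries above), the stitching itself reduces to a warp-and-paste composition. The sketch below shows that final step only; it ignores seam blending, cropping, and the paper's mobile-specific optimizations, and it assumes H maps the right image into the left image's frame.

```python
import cv2
import numpy as np

def simple_stitch(img_left, img_right, H):
    """Warp the right image into the left image's frame with homography H
    and paste; a minimal sketch without seam blending or cropping."""
    h_l, w_l = img_left.shape[:2]
    h_r, w_r = img_right.shape[:2]
    canvas = cv2.warpPerspective(img_right, H, (w_l + w_r, max(h_l, h_r)))
    canvas[:h_l, :w_l] = img_left          # overwrite the overlap with the left image
    return canvas

# H would normally come from feature matching + RANSAC (see the sketches above)
# panorama = simple_stitch(cv2.imread("left.jpg"), cv2.imread("right.jpg"), H)
```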

Design of Block-based Image Descriptor using Local Color and Texture (지역 칼라와 질감을 활용한 블록 기반 영상 검색 기술자 설계)

  • Park, Sung-Hyun;Lee, Yong-Hwan;Kim, Youngseop
    • Journal of the Semiconductor & Display Technology / v.12 no.4 / pp.33-38 / 2013
  • Image retrieval is one of the most exciting and fastest growing research fields in the area of multimedia technology. As the amount of digital content continues to grow, users are experiencing increasing difficulty in finding specific images in their image libraries. This paper proposes an efficient image descriptor that uses local color and texture in non-overlapping image blocks. To evaluate the performance of the proposed method, we assessed the retrieval efficiency in terms of ANMRR (average normalized modified retrieval rank) on a common image dataset. The experimental trials revealed that the proposed algorithm exhibited a significant improvement in ANMRR compared to the Dominant Color Descriptor and the Edge Histogram Descriptor.
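
A minimal sketch of a block-based local color and texture descriptor in the spirit of this entry: the image is split into a grid of non-overlapping blocks, and each block contributes its mean color plus a simple edge-energy texture measure. The grid size, the choice of texture measure, and the file name are assumptions, not the paper's design.

```python
import cv2
import numpy as np

def block_color_texture_descriptor(img, grid=4):
    """Split the image into a grid of non-overlapping blocks and describe each
    block by its mean color and a simple edge-energy texture measure (sketch)."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    energy = np.hypot(gx, gy)

    h, w = gray.shape
    bh, bw = h // grid, w // grid
    feats = []
    for i in range(grid):
        for j in range(grid):
            ys, xs = slice(i * bh, (i + 1) * bh), slice(j * bw, (j + 1) * bw)
            mean_color = img[ys, xs].reshape(-1, 3).mean(axis=0)   # mean B, G, R
            feats.extend(mean_color.tolist())
            feats.append(float(energy[ys, xs].mean()))             # texture energy
    return np.array(feats, np.float32)

desc = block_color_texture_descriptor(cv2.imread("photo.jpg"))
print(desc.shape)   # (grid*grid*4,) = (64,)
```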

Feature Matching Algorithm Robust To Noise (잡음에 강인한 특징점 정합 기법)

  • Jung, Hyunjo;Yoo, Jisang
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2015.07a / pp.9-12 / 2015
  • In this paper, we propose a new feature matching algorithm, robust to image distortion, by modifying and combining the FAST (Features from Accelerated Segment Test) feature detector and the SURF feature descriptor. A scale space is generated to account for scale variation and to determine feature candidates in the image that are robust to noise. The original FAST algorithm produces many feature points along edges; to solve this problem, we apply a principal-curvature test to refine them. We also use the SURF descriptor to make the method robust against image rotation. The experiments show that the proposed algorithm performs better than conventional feature matching algorithms while requiring much less computation, and it is particularly strong on noisy images.
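
A sketch of the detector side of this approach: FAST keypoints are filtered with a principal-curvature (edge-response) test in the spirit of SIFT's r-ratio check, and descriptors are then computed for the surviving points. ORB stands in for the SURF descriptor here because SURF lives in OpenCV's non-free module; the FAST threshold, the curvature ratio, and the file name are assumptions.

```python
import cv2
import numpy as np

def fast_with_curvature_filter(gray, ratio=10.0):
    """FAST keypoints filtered by a principal-curvature (edge-response) test,
    similar in spirit to SIFT's r-ratio check; parameters are assumptions."""
    kps = cv2.FastFeatureDetector_create(threshold=25).detect(gray, None)

    g = gray.astype(np.float32)
    dxx = cv2.Sobel(g, cv2.CV_32F, 2, 0, ksize=3)
    dyy = cv2.Sobel(g, cv2.CV_32F, 0, 2, ksize=3)
    dxy = cv2.Sobel(g, cv2.CV_32F, 1, 1, ksize=3)

    kept = []
    for kp in kps:
        x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
        tr = dxx[y, x] + dyy[y, x]
        det = dxx[y, x] * dyy[y, x] - dxy[y, x] ** 2
        # Reject edge-like points whose principal curvatures are very unequal
        if det > 0 and tr * tr / det < (ratio + 1) ** 2 / ratio:
            kept.append(kp)
    return kept

gray = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
kps = fast_with_curvature_filter(gray)
# Descriptors: ORB stands in here for the SURF descriptor used in the paper
kps, desc = cv2.ORB_create().compute(gray, kps)
print(len(kps), "keypoints after curvature filtering")
```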


Real-Time Feature Point Matching Using Local Descriptor Derived by Zernike Moments (저니키 모멘트 기반 지역 서술자를 이용한 실시간 특징점 정합)

  • Hwang, Sun-Kyoo;Kim, Whoi-Yul
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.4 / pp.116-123 / 2009
  • Feature point matching, which finds corresponding points between two images with different viewpoints, has been used in various vision-based applications, and the demand for real-time matching is increasing. This paper presents a real-time feature point matching method that uses a local descriptor derived from Zernike moments. From an input image, we find a set of feature points using an existing fast corner detection algorithm and compute a local descriptor derived from Zernike moments at each feature point. The Zernike-moment-based local descriptor represents the properties of the image patch around a feature point efficiently and is robust to rotation and illumination changes. To speed up the computation of Zernike moments, we compute the Zernike basis functions of fixed size in advance and store them in lookup tables. Initial matching results are acquired by an approximate nearest neighbor (ANN) method, and false matches are eliminated by a RANSAC algorithm. The experiments confirm that the proposed method matches feature points in images under various transformations in real time and outperforms existing methods.
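
A compact sketch of the precomputed-basis idea: the complex Zernike basis functions are sampled once on a fixed-size patch and stored, so each descriptor is just a set of inner products whose magnitudes are rotation invariant. The patch size and the set of moment orders are assumptions, and the (n+1)/π normalization constant is omitted.

```python
import numpy as np
from math import factorial

def zernike_basis(n, m, size):
    """Complex Zernike basis V_nm sampled on a size x size patch (unit disk)."""
    ys, xs = np.mgrid[:size, :size]
    x = (2.0 * xs - size + 1) / (size - 1)
    y = (2.0 * ys - size + 1) / (size - 1)
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    radial = np.zeros_like(rho)
    for s in range((n - abs(m)) // 2 + 1):
        c = ((-1) ** s * factorial(n - s) /
             (factorial(s) * factorial((n + abs(m)) // 2 - s)
              * factorial((n - abs(m)) // 2 - s)))
        radial += c * rho ** (n - 2 * s)
    basis = radial * np.exp(1j * m * theta)
    basis[rho > 1.0] = 0                      # restrict to the unit disk
    return basis

# Precompute the basis once (lookup tables), as the paper suggests
SIZE = 21
ORDERS = [(n, m) for n in range(5) for m in range(-n, n + 1, 2)]
LUT = [zernike_basis(n, m, SIZE) for n, m in ORDERS]

def zernike_descriptor(patch):
    """Magnitudes of Zernike moments of a fixed-size patch (rotation invariant)."""
    f = patch.astype(np.float64)
    return np.array([abs((f * np.conj(V)).sum()) for V in LUT])

print(zernike_descriptor(np.random.rand(SIZE, SIZE)).shape)  # (15,)
```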