• Title/Summary/Keyword: Feature Point Extraction and Matching


RPC Correction of KOMPSAT-3A Satellite Image through Automatic Matching Point Extraction Using Unmanned Aerial Vehicle Imagery (무인항공기 영상 활용 자동 정합점 추출을 통한 KOMPSAT-3A 위성영상의 RPC 보정)

  • Park, Jueon;Kim, Taeheon;Lee, Changhui;Han, Youkyung
    • Korean Journal of Remote Sensing / v.37 no.5_1 / pp.1135-1147 / 2021
  • In order to geometrically correct high-resolution satellite imagery, a sensor modeling process that restores the geometric relationship between the satellite sensor and the ground surface at the time of image acquisition is required. High-resolution satellites generally provide RPC (Rational Polynomial Coefficient) information, but the vendor-provided RPC includes geometric distortion caused by the position and orientation of the satellite sensor. GCPs (Ground Control Points) are generally used to correct these RPC errors. The representative method of acquiring GCPs is a field survey that obtains accurate ground coordinates. However, it can be difficult to locate GCPs in the satellite image because of image quality, land cover change, relief displacement, etc. By using image maps acquired from various sensors as reference data, GCP collection can be automated through image matching algorithms. In this study, the RPC of a KOMPSAT-3A satellite image was corrected using matching points extracted from UAV (Unmanned Aerial Vehicle) imagery. We propose a pre-processing method for extracting matching points between the UAV imagery and the KOMPSAT-3A satellite image. To this end, we compared the characteristics of matching points extracted by independently applying SURF (Speeded-Up Robust Features) and phase correlation, which are representative feature-based and area-based matching methods, respectively. The RPC adjustment parameters were calculated using the matching points extracted by each algorithm. To verify the performance and usability of the proposed method, it was compared with a GCP-based RPC correction result. The GCP-based method improved the correction accuracy by 2.14 pixels for the sample and 5.43 pixels for the line compared to the vendor-provided RPC. The proposed method using SURF and phase correlation improved the sample accuracy by 0.83 and 1.49 pixels, and the line accuracy by 4.81 and 5.19 pixels, respectively, compared to the vendor-provided RPC. The experimental results show that the proposed method using UAV imagery is a possible alternative to the GCP-based method for RPC correction.
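The area-based matching used above, phase correlation, finds the translation between two images as the peak of the inverse FFT of the normalized cross-power spectrum. A minimal numpy sketch (integer shifts only; the paper's full pre-processing and RPC adjustment are not reproduced here):

```python
import numpy as np

def phase_correlation(ref, tgt):
    """Estimate the integer (dy, dx) shift of tgt relative to ref from the
    normalized cross-power spectrum (area-based matching)."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(tgt)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12           # keep only the phase
    corr = np.real(np.fft.ifft2(cross))      # impulse at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts beyond half the image size to negative offsets
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```

Because only the phase is kept, the correlation surface is sharply peaked and robust to uniform intensity differences between the UAV and satellite images.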

The Target Detection and Classification Method Using SURF Feature Points and Image Displacement in Infrared Images (적외선 영상에서 변위추정 및 SURF 특징을 이용한 표적 탐지 분류 기법)

  • Kim, Jae-Hyup;Choi, Bong-Joon;Chun, Seung-Woo;Lee, Jong-Min;Moon, Young-Shik
    • Journal of the Korea Society of Computer and Information / v.19 no.11 / pp.43-52 / 2014
  • In this paper, we propose a target detection method using image displacement and a classification method using SURF (Speeded Up Robust Features) feature points and BAS (Beam Angle Statistics) in infrared images. SURF, a typical correspondence matching method in image processing, has been widely used because it is significantly faster than SIFT (Scale Invariant Feature Transform) and produces similar performance. Most SURF-based object recognition methods consist of a feature point extraction step and a matching step. The proposed method detects the target area using displacement, and performs target classification using the geometry of the SURF feature points. The proposed method was applied to an unmanned target detection/recognition system. In experiments on virtual and real images, we achieved approximately 73~85% classification performance.
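The matching step mentioned above is commonly implemented as nearest-neighbour descriptor matching with Lowe's ratio test; a generic numpy sketch (not the paper's BAS classifier, and the 0.8 ratio is an illustrative default):

```python
import numpy as np

def ratio_match(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with Lowe's ratio test:
    accept a pair (i, j) only when the best match in desc_b is clearly
    closer than the second-best, suppressing ambiguous correspondences."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        best, second = np.argsort(dists)[:2]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```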

Scene Change Detection and Filtering Technology Using SIFT (SIFT를 이용한 장면전환 검출 및 필터링 기술)

  • Moon, Won-Jun;Yoo, In-Jae;Lee, Jae-Chung;Seo, Young-Ho;Kim, Dong-Wook
    • Journal of Broadcast Engineering / v.24 no.6 / pp.939-947 / 2019
  • With the revitalization of the media market, the need for compression, searching, editing and copyright protection of videos is increasing. In this paper, we propose a method to detect scene changes for all of these applications. We propose pre-processing, feature point extraction using SIFT, and a matching algorithm that detect the same scene change even when distortions such as resolution change, subtitle insertion, compression, and flips are introduced in the distribution process. The method was also applied to filtering technology and confirmed to be effective for all of the transformations considered.
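One simple way to turn per-frame feature matching into scene-change detection is to declare a cut when consecutive frames share too few matches. The sketch below assumes per-frame descriptor arrays are already extracted (by SIFT or otherwise); the `min_matches` threshold is illustrative, not from the paper:

```python
import numpy as np

def match_count(da, db, ratio=0.8):
    """Number of Lowe-ratio feature matches between two descriptor sets."""
    n = 0
    for d in da:
        dist = np.sort(np.linalg.norm(db - d, axis=1))
        if dist[0] < ratio * dist[1]:
            n += 1
    return n

def scene_changes(frame_descriptors, min_matches=10):
    """Indices of frames that start a new scene: a cut is declared when
    consecutive frames share too few feature matches."""
    return [i for i in range(1, len(frame_descriptors))
            if match_count(frame_descriptors[i - 1],
                           frame_descriptors[i]) < min_matches]
```

Because matching is done on local descriptors rather than raw pixels, the same cut can still be found after resolution changes or subtitle overlays.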

A Feature Point Extraction and Identification Technique for Immersive Contents Using Deep Learning (딥 러닝을 이용한 실감형 콘텐츠 특징점 추출 및 식별 방법)

  • Park, Byeongchan;Jang, Seyoung;Yoo, Injae;Lee, Jaechung;Kim, Seok-Yoon;Kim, Youngmo
    • Journal of IKEEE / v.24 no.2 / pp.529-535 / 2020
  • As a main technology of the 4th industrial revolution, immersive 360-degree video contents are drawing attention. The worldwide market size of immersive 360-degree video contents is projected to increase from $6.7 billion in 2018 to approximately $70 billion in 2020. However, most immersive 360-degree video contents are distributed through illegal distribution networks such as Webhard and Torrent, and the damage caused by illegal reproduction is increasing. The existing 2D video industry uses copyright filtering technology to prevent such illegal distribution. The technical difficulty with immersive 360-degree videos is that they require ultra-high-quality pictures and contain images captured by two or more cameras merged into one image, which creates distortion regions. There are also technical limitations such as the increase in feature point data due to the ultra-high definition and the resulting processing speed requirements. These considerations make it difficult to apply the same 2D filtering technology to 360-degree videos. To solve this problem, this paper suggests a feature point extraction and identification technique that selects object identification areas excluding regions with severe distortion, recognizes objects in those areas using deep learning technology, and extracts feature points using the identified object information. Compared with the previously proposed method of extracting feature points using the stitching area of immersive contents, the proposed technique shows an excellent performance gain.
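In an equirectangular 360-degree frame, distortion is most severe near the poles (top and bottom rows), so a crude way to select identification areas is to mask out high-latitude bands. This is a hypothetical illustration of the idea of excluding distorted regions, not the paper's deep-learning-based area selection; the ±60° cut-off is an assumed threshold:

```python
import numpy as np

def identification_mask(height, width, max_abs_lat_deg=60.0):
    """Boolean mask over an equirectangular 360-degree frame that keeps
    only rows whose latitude magnitude is below max_abs_lat_deg, dropping
    the heavily stretched polar bands (the threshold is illustrative)."""
    lat = 90.0 - (np.arange(height) + 0.5) * 180.0 / height  # row centre -> latitude
    keep_row = np.abs(lat) <= max_abs_lat_deg
    return np.repeat(keep_row[:, None], width, axis=1)
```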

Fast Fingerprint Alignment Method and Weighted Feature Vector Extraction Method in Filterbank-Based Fingerprint Matching (필터뱅크 기반 지문정합에서 빠른 지문 정렬 방법 및 가중치를 부여한 특징 벡터 추출 방법)

  • 정석재;김동윤
    • Journal of KIISE: Software and Applications / v.31 no.1 / pp.71-81 / 2004
  • Minutiae-based fingerprint identification systems use minutiae points, which cannot completely characterize local ridge structures, and they require additional mechanisms to match two fingerprint images containing different numbers of minutiae points. The filterbank-based method was therefore proposed as an alternative to minutiae-based fingerprint representation that captures the local ridge information of a fingerprint image. However, it has two shortcomings: similar feature vectors are extracted from different fingerprints of the same fingerprint type, and considerable overhead is required to reduce the rotation error introduced during fingerprint image acquisition. In this paper, we propose a minutia-weighted feature vector extraction method that gives more weight to feature values extracted from regions containing minutiae points. We also propose a new fingerprint alignment method that uses the average local orientations around the reference point. These methods improve the fingerprint system's performance and speed, respectively. Experimental results indicate that the proposed methods reduce the FRR of the filterbank-based fingerprint matcher by approximately 0.524% at a FAR of 0.967%, and improve the matching performance by 5% in EER. The system is over 1.28 times faster.
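Local orientation around a reference point is conventionally estimated from the averaged squared-gradient tensor of a block. The sketch below shows that standard estimator only; how the paper averages these block orientations for alignment is not reproduced here:

```python
import numpy as np

def average_orientation(block):
    """Dominant local gradient orientation (radians) of an image block,
    from the averaged squared-gradient tensor; the ridge direction is
    perpendicular to the returned angle. Block orientations around a
    reference point can then be averaged to pre-align a fingerprint."""
    gy, gx = np.gradient(block.astype(float))
    gxx, gyy, gxy = np.sum(gx * gx), np.sum(gy * gy), np.sum(gx * gy)
    return 0.5 * np.arctan2(2.0 * gxy, gxx - gyy)
```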

Camera Extrinsic Parameter Estimation using 2D Homography and LM Method based on PPIV Recognition (PPIV 인식기반 2D 호모그래피와 LM방법을 이용한 카메라 외부인수 산출)

  • Cha Jeong-Hee;Jeon Young-Min
    • Journal of the Institute of Electronics Engineers of Korea SC / v.43 no.2 s.308 / pp.11-19 / 2006
  • In this paper, we propose a method to estimate camera extrinsic parameters based on projective and permutation invariant point features. Because the feature information used in previous research varies with camera viewpoint, extracting correspondent points is difficult. We therefore propose a method for extracting invariant point features, and a new matching method that uses a similarity evaluation function and the Graham search method to reduce time complexity and find correspondent points accurately. For the camera extrinsic parameter calculation stage, we also propose a two-stage motion parameter estimation method that improves the convergence of the LM algorithm. In experiments, we compare and analyze the proposed method with existing methods on various indoor images to demonstrate the superiority of the proposed algorithms.
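A 2D homography from point correspondences, as used above before the LM refinement, is classically estimated with the Direct Linear Transform. A minimal sketch (no normalization or the paper's invariant features; exact correspondences assumed):

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct Linear Transform: estimate the 3x3 homography H with
    dst ~ H @ src (homogeneous) from >= 4 point correspondences, taken
    as the null vector of the stacked constraint matrix (via SVD)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    H = np.linalg.svd(np.asarray(A, float))[2][-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map 2D points through H with the homogeneous divide."""
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:3]
```

In practice the DLT result serves as the linear initialization that an iterative scheme such as LM then refines.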

Evaluation on Tie Point Extraction Methods of WorldView-2 Stereo Images to Analyze Height Information of Buildings (건물의 높이 정보 분석을 위한 WorldView-2 스테레오 영상의 정합점 추출방법 평가)

  • Yeji, Kim;Yongil, Kim
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.33 no.5 / pp.407-414 / 2015
  • Interest points are generally located at pixels where height changes occur, so they can be significant pixels for DSM generation and play an important role in producing accurate and reliable matching results. Manual operation is widely used to extract interest points and match stereo satellite images for generating height information, but it is costly and time-consuming. Thus, in this study, a tie point extraction method using the Harris-affine technique and SIFT (Scale Invariant Feature Transform) descriptors was suggested for analyzing the height information of buildings. Interest points on buildings were extracted by the Harris-affine technique, and tie points were collected efficiently by SIFT descriptors, which are scale-invariant. A search window for each interest point was used, and the direction of tie point pairs was considered for more efficient tie point extraction. The tie point pairs estimated by the proposed method were used to analyze the height information of buildings. The result had RMSE values of less than 2 m compared to the height information estimated manually.
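The corner detector underlying the Harris-affine technique scores each pixel with the Harris response computed from the gradient structure tensor. A minimal numpy sketch (a box window stands in for the usual Gaussian weighting, and the affine adaptation step is not included):

```python
import numpy as np

def harris_response(img, k=0.04, radius=1):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is the
    gradient structure tensor summed over a (2*radius+1)^2 window."""
    gy, gx = np.gradient(img.astype(float))

    def box_sum(a):                      # windowed sum via shifted copies
        out = np.zeros_like(a)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out

    ixx, iyy, ixy = box_sum(gx * gx), box_sum(gy * gy), box_sum(gx * gy)
    return ixx * iyy - ixy ** 2 - k * (ixx + iyy) ** 2
```

R is strongly positive at corners (both eigenvalues of M large), negative along straight edges, and near zero in flat regions, which is why thresholding R picks out the building corners where height changes occur.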

Virtual core point detection and ROI extraction for finger vein recognition (지정맥 인식을 위한 가상 코어점 검출 및 ROI 추출)

  • Lee, Ju-Won;Lee, Byeong-Ro
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.10 no.3 / pp.249-255 / 2017
  • Finger vein recognition technology acquires a finger vein image by illuminating the finger with infrared light and authenticates a person through processes such as feature extraction and matching. To recognize a finger vein, a 2D mask-based convolution method can be used to detect the finger edge, but it takes too much computation time when applied to a low-cost microprocessor or microcontroller. To solve this problem and improve the recognition rate, this study proposed a region-of-interest extraction method based on virtual core points and moving average filtering using a threshold and the absolute difference between pixels, without 2D convolution or 2D masks. To evaluate the performance of the proposed method, 600 finger vein images were used to compare the edge extraction speed and ROI extraction accuracy of the proposed method with those of existing methods. The comparison showed that the processing speed of the proposed method was at least twice as fast as those of the existing methods, and the accuracy of ROI extraction was 6% higher. From these results, the proposed method is expected to offer both high processing speed and a high recognition rate when applied to inexpensive microprocessors.
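The combination of a 1D moving average with a threshold on the absolute difference between neighbouring pixels can locate finger boundaries on a single row without any 2D mask. This is a generic sketch of that idea, not the paper's exact algorithm; the window size and threshold are assumed values:

```python
import numpy as np

def finger_edges(row, window=5, threshold=0.1):
    """Locate the left/right finger boundary on one image row: smooth the
    profile with a 1D moving average, then take the first and last
    positions where the absolute difference between neighbouring pixels
    exceeds the threshold. Only 1D operations are used, no 2D masks."""
    smooth = np.convolve(row.astype(float), np.ones(window) / window, mode="same")
    jumps = np.flatnonzero(np.abs(np.diff(smooth)) > threshold)
    if jumps.size == 0:
        return None                      # no edge strong enough
    return int(jumps[0]), int(jumps[-1] + 1)
```

Applying this row by row keeps the cost linear in the number of pixels, which is what makes the approach attractive on a low-cost microcontroller.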

Mosaic image generation of AISA Eagle hyperspectral sensor using SIFT method (SIFT 기법을 이용한 AISA Eagle 초분광센서의 모자이크영상 생성)

  • Han, You Kyung;Kim, Yong Il;Han, Dong Yeob;Choi, Jae Wan
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.31 no.2 / pp.165-172 / 2013
  • In this paper, a high-quality mosaic image is generated from high-resolution hyperspectral strip images using the scale-invariant feature transform (SIFT) algorithm, one of the representative image matching methods. The experiments are applied to AISA Eagle images geo-referenced using GPS/INS information acquired during flight. Matching points between three strips of hyperspectral images are extracted using SIFT, and transformation models between the images are constructed from these points. A mosaic image is then generated using the transformation models constructed from the corresponding images. The optimal band for matching point extraction is determined by selecting representative bands of the hyperspectral data and analyzing the matching results for each band. The mosaic image generated by the proposed method is visually compared with the mosaic image generated from the initially geo-referenced AISA hyperspectral images. From this comparison, we estimate the geometrical accuracy of the generated mosaic image and analyze the efficiency of our methodology.
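A transformation model between two strips can be as simple as a least-squares 2D affine fit to the matched points. The sketch below shows that one choice (the paper does not specify which model it fits, so this is an assumption for illustration):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transformation (6 parameters) mapping src
    tie points onto dst: dst ~= src @ A.T + t. One simple transformation
    model built from matched points between image strips."""
    X = np.hstack([src, np.ones((len(src), 1))])          # rows [x, y, 1]
    P = np.linalg.lstsq(X, dst, rcond=None)[0]            # (3, 2) parameters
    return P[:2].T, P[2]                                  # A (2x2), t (2,)
```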

Image alignment method based on CUDA SURF for multi-spectral machine vision application (다중 스펙트럼 머신비전 응용을 위한 CUDA SURF 기반의 영상 정렬 기법)

  • Maeng, Hyung-Yul;Kim, Jin-Hyung;Ko, Yun-Ho
    • Journal of Korea Multimedia Society / v.17 no.9 / pp.1041-1051 / 2014
  • In this paper, we propose a new image alignment technique based on CUDA SURF to solve the initial image alignment problem that frequently occurs in machine vision applications. Machine vision systems using multi-spectral images have recently become more common for solving various decision problems that cannot be handled by the human vision system. These systems mostly use markers for initial image alignment. However, in some applications markers cannot be used, and the alignment techniques must be changed whenever the markers change. To solve these problems, we propose a new image alignment method for multi-spectral machine vision applications based on SURF, which extracts image features without depending on markers. The proposed method obtains a sufficient number of feature points from multi-spectral images using SURF and removes outliers iteratively based on a least squares method. We further propose an effective preliminary scheme for removing mismatched feature point pairs that may affect the overall alignment performance. In addition, we reduce the execution time by implementing the proposed method with CUDA on a GPGPU to guarantee real-time operation. Simulation results show that the proposed method aligns images effectively in applications where markers cannot be used.
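Iterative least-squares outlier removal can be sketched as trimming: fit an alignment model, discard the worst-fitting point pair, and repeat until every residual is small. This is a simplified serial illustration (an affine model, simple trimming rather than the paper's exact scheme, and no CUDA):

```python
import numpy as np

def robust_affine(src, dst, thresh=1.0):
    """Fit a least-squares affine model to matched feature points and
    iteratively drop the worst-fitting pair until all residuals fall
    below thresh: a simple trimming scheme (not RANSAC) sketching
    iterative outlier removal for image alignment."""
    keep = np.ones(len(src), dtype=bool)
    while True:
        X = np.hstack([src[keep], np.ones((int(keep.sum()), 1))])
        P = np.linalg.lstsq(X, dst[keep], rcond=None)[0]
        res = np.linalg.norm(X @ P - dst[keep], axis=1)
        if res.max() <= thresh or keep.sum() <= 3:
            return P, keep
        keep[np.flatnonzero(keep)[np.argmax(res)]] = False
```

Each iteration is one dense least-squares solve, which is also the kind of regular computation that maps well onto a GPU.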