• Title/Summary/Keyword: Feature-based Matching


3D Line Segment Detection using a New Hybrid Stereo Matching Technique (새로운 하이브리드 스테레오 정합기법에 의한 3차원 선소추출)

  • 이동훈;우동민;정영기
    • The Transactions of the Korean Institute of Electrical Engineers D / v.53 no.4 / pp.277-285 / 2004
  • We present a new hybrid stereo matching technique based on the co-operation of area-based stereo and feature-based stereo. The core of our technique is that feature matching is carried out with reference to the disparity evaluated by area-based stereo. Since referencing the disparity can significantly reduce the number of feature matching combinations, feature matching error can be drastically reduced. One requirement of the disparity to be referenced is that it should be reliable enough to be used in feature matching. To measure the reliability of the disparity, we employ the self-consistency of the disparity. Our suggested technique is applied to the detection of 3D line segments by 2D line matching using our hybrid stereo matching, which can be efficiently utilized in the generation of rooftop models from urban imagery. We carry out experiments on our hybrid stereo matching scheme, generating synthetic images by photo-realistic simulation on the Avenches data set of Ascona aerial images. Experimental results indicate that the extracted 3D line segments have an average error of 0.5 m and verify our proposed scheme. In order to apply our method to the generation of 3D models from urban imagery, we carry out preliminary experiments for rooftop generation. Since occlusions occur around the outlines of buildings, we experimentally suggest a multi-image hybrid stereo system based on the fusion of 3D line segments. With a simple domain-specific 3D grouping scheme, we observe that an accurate 3D rooftop model can be generated. In this context, we expect that an extended 3D grouping scheme using our hybrid technique can be efficiently applied to the construction of 3D models with more general types of building rooftops.
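
A minimal sketch of the core idea in this abstract: area-based stereo supplies a disparity prior, a left-right "self-consistency" check keeps only the reliable disparities, and feature correspondences would then be searched only within a small window around that prior. The file names, SGBM parameters, and the 1 px consistency tolerance are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Area-based stereo (disparity prior for the left image).
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disp_l = sgbm.compute(left, right).astype(np.float32) / 16.0

# Disparity of the right image, obtained by matching the horizontally
# flipped pair and flipping the result back.
disp_r = cv2.flip(sgbm.compute(cv2.flip(right, 1), cv2.flip(left, 1)),
                  1).astype(np.float32) / 16.0

# Self-consistency: d_L(x, y) should agree with d_R(x - d_L, y).
h, w = disp_l.shape
ys, xs = np.mgrid[0:h, 0:w]
xr = np.clip(xs - disp_l.astype(int), 0, w - 1)
consistent = np.abs(disp_l - disp_r[ys, xr]) <= 1.0

def disparity_prior(x, y):
    """Return the reliable disparity prior for a left-image feature (x, y),
    or None if the area-based disparity is not self-consistent there; a
    feature matcher would then search the right image only near x - d."""
    if disp_l[y, x] > 0 and consistent[y, x]:
        return float(disp_l[y, x])
    return None
```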

CNN-based Opti-Acoustic Transformation for Underwater Feature Matching (수중에서의 특징점 매칭을 위한 CNN기반 Opti-Acoustic변환)

  • Jang, Hyesu;Lee, Yeongjun;Kim, Giseop;Kim, Ayoung
    • The Journal of Korea Robotics Society / v.15 no.1 / pp.1-7 / 2020
  • In this paper, we introduce a methodology that utilizes a deep learning-based front-end to enhance underwater feature matching. Both optical cameras and sonar are widely used sensors in underwater research; however, each sensor has its own weaknesses, such as lighting conditions and turbidity for the optical camera, and noise for sonar. To overcome these problems, we propose an opti-acoustic transformation method. Since feature detection in sonar images is challenging, we convert the sonar image into an optical-style image. While maintaining the main contents of the sonar image, a CNN-based style transfer method changes the style of the image to facilitate feature detection. Finally, we verify our result using cosine similarity comparison and feature matching against the original optical image.
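
A minimal sketch of the verification step mentioned at the end of this abstract: comparing a style-transferred ("optical-style") sonar image against the real optical image with a cosine similarity score and ORB feature matching. The CNN style transfer itself is omitted, and the file names are illustrative assumptions.

```python
import cv2
import numpy as np

optic = cv2.imread("optic.png", cv2.IMREAD_GRAYSCALE)
converted = cv2.imread("sonar_as_optic.png", cv2.IMREAD_GRAYSCALE)
converted = cv2.resize(converted, (optic.shape[1], optic.shape[0]))

# Global cosine similarity between the flattened images.
a = optic.flatten().astype(np.float32)
b = converted.flatten().astype(np.float32)
cos_sim = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Feature matching between the converted image and the original optical image.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(optic, None)
kp2, des2 = orb.detectAndCompute(converted, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
print(f"cosine similarity: {cos_sim:.3f}, ORB matches: {len(matches)}")
```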

CLASSIFIED EIGEN BLOCK: LOCAL FEATURE EXTRACTION AND IMAGE MATCHING ALGORITHM

  • Hochul Shin;Kim, Seong-Dae
    • Proceedings of the IEEK Conference / 2003.07e / pp.2108-2111 / 2003
  • This paper introduces a new local feature extraction method and image matching method for the localization and classification of targets. The proposed method is based on block-by-block projection associated with the directional pattern of blocks. Each pattern has its own eigen-vectors, called CEBs (Classified Eigen-Blocks). The proposed block-based image matching method is also robust to translation and occlusion. The performance of the proposed feature extraction and matching method is verified by face localization and FLIR vehicle image classification tests.
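
A hedged sketch of the block-based idea this abstract outlines: image blocks are classified by a directional pattern (here a crude dominant-gradient-direction proxy), and each class projects its blocks onto its own small eigen-basis, giving "classified eigen-block" features. Block size, number of classes, component count, and the classification rule are illustrative assumptions rather than the paper's exact formulation.

```python
import cv2
import numpy as np

B, N_CLASSES, N_COMP = 8, 4, 6
img = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)

blocks, labels = [], []
h, w = img.shape
for y in range(0, h - B + 1, B):
    for x in range(0, w - B + 1, B):
        patch = img[y:y + B, x:x + B].flatten()
        # Rough directional classification of the block from its mean gradient.
        ang = np.arctan2(gy[y:y + B, x:x + B].sum(), gx[y:y + B, x:x + B].sum())
        labels.append(int(((ang + np.pi) / (2 * np.pi)) * N_CLASSES) % N_CLASSES)
        blocks.append(patch - patch.mean())
blocks, labels = np.array(blocks, np.float32), np.array(labels)

# One eigen-basis per directional class; block features are the projections.
features = np.zeros((len(blocks), N_COMP), np.float32)
for c in range(N_CLASSES):
    idx = np.where(labels == c)[0]
    if len(idx) < N_COMP:
        continue
    _, _, vt = np.linalg.svd(blocks[idx], full_matrices=False)
    features[idx] = blocks[idx] @ vt[:N_COMP].T
```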


Speed-up of Image Matching Using Feature Strength Information (특징 강도 정보를 이용한 영상 정합 속도 향상)

  • Kim, Tae-Woo
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.13 no.6 / pp.63-69 / 2013
  • A feature-based image recognition method, which uses features of an object, can be performed faster than a template matching technique. Invariant feature-based panoramic image generation, an application of image recognition, requires a large amount of time to match features between two images. This paper proposes a method to speed up feature matching using feature strength information. Our algorithm extracts features in images, computes their feature strength information, and selects strong feature points, which are then used for matching. The strong features can be regarded as more meaningful than the weak features. In the experiments, our method reduced processing time by over 40% compared to the technique that does not use feature strength information.
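
A minimal sketch of this speed-up idea, assuming the keypoint "response" value as the feature-strength measure: only the strongest keypoints are kept and matched, instead of matching every detected feature. The 25% keep-ratio, detector choice, and file names are illustrative assumptions.

```python
import cv2

def strong_keypoints(gray, keep_ratio=0.25):
    orb = cv2.ORB_create(nfeatures=4000)
    kps = orb.detect(gray, None)
    kps = sorted(kps, key=lambda k: k.response, reverse=True)  # strength order
    kps = kps[:max(1, int(len(kps) * keep_ratio))]             # keep strong ones
    return orb.compute(gray, kps)

img1 = cv2.imread("img1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("img2.png", cv2.IMREAD_GRAYSCALE)
kp1, des1 = strong_keypoints(img1)
kp2, des2 = strong_keypoints(img2)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
print(len(matches), "matches from the strong-feature subset")
```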

Study of Feature Based Algorithm Performance Comparison for Image Matching between Virtual Texture Image and Real Image (가상 텍스쳐 영상과 실촬영 영상간 매칭을 위한 특징점 기반 알고리즘 성능 비교 연구)

  • Lee, Yoo Jin;Rhee, Sooahm
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1057-1068 / 2022
  • This paper compares the performance of combinations of feature point-based matching algorithms, as a study to confirm the feasibility of matching between images taken by a user and virtual texture images, with the goal of developing mobile-based real-time image positioning technology. A feature-based matching algorithm includes the processes of extracting features, calculating descriptors, matching features from both images, and finally eliminating mismatched features. For the algorithm combinations, we paired the feature extraction process and the descriptor calculation process of either the same or different matching algorithms. V-World 3D desktop was used for the virtual indoor texture images. Currently, V-World 3D desktop is reinforced with details such as vertical and horizontal protrusions and dents, and it includes levels with real image textures. Using this, we constructed a dataset with the virtual indoor texture data as reference images and real images taken at the same locations as target images. After constructing the dataset, the matching success rate and matching processing time were measured, and based on this, the matching algorithm combination for matching real images with virtual images was determined. In this study, based on the characteristics of each matching technique, the matching algorithms were combined and applied to the constructed dataset to confirm their applicability, and a performance comparison was also performed when rotation was additionally considered. As a result, it was confirmed that the combination of Scale Invariant Feature Transform (SIFT) feature and descriptor detection had the highest matching success rate, but the longest matching processing time. In the case of the Features from Accelerated Segment Test (FAST) feature detector combined with Oriented FAST and Rotated BRIEF (ORB) descriptor calculation, the matching success rate was similar to that of the SIFT-SIFT combination, while the matching processing time was short. Furthermore, in the case of FAST-ORB, the matching performance remained superior even when a 10° rotation was applied to the dataset. Therefore, it was confirmed that the FAST-ORB combination could be suitable for matching between virtual texture images and real images.
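
A hedged sketch of the kind of detector/descriptor pairing this abstract compares, e.g. SIFT features with SIFT descriptors versus FAST features with ORB descriptors, followed by matching and mismatch elimination (a ratio test here). The 0.75 ratio threshold and file names are illustrative assumptions.

```python
import cv2

def match_pair(ref, tgt, detector, descriptor, norm):
    kp1 = detector.detect(ref, None)
    kp2 = detector.detect(tgt, None)
    kp1, des1 = descriptor.compute(ref, kp1)
    kp2, des2 = descriptor.compute(tgt, kp2)
    # Lowe-style ratio test to eliminate mismatched features.
    knn = cv2.BFMatcher(norm).knnMatch(des1, des2, k=2)
    return [p[0] for p in knn if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]

ref = cv2.imread("virtual_texture.png", cv2.IMREAD_GRAYSCALE)
tgt = cv2.imread("real_photo.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
fast = cv2.FastFeatureDetector_create()
orb = cv2.ORB_create()

combos = {
    "SIFT-SIFT": (sift, sift, cv2.NORM_L2),
    "FAST-ORB": (fast, orb, cv2.NORM_HAMMING),
}
for name, (det, desc, norm) in combos.items():
    print(name, len(match_pair(ref, tgt, det, desc, norm)), "good matches")
```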

A Fast Image Matching Method for Oblique Video Captured with UAV Platform

  • Byun, Young Gi;Kim, Dae Sung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.2 / pp.165-172 / 2020
  • There is growing interest in vision-based video image matching owing to the constantly developing technology of unmanned systems. The purpose of this paper is the development of a fast and effective matching technique for UAV oblique video images. We first extract initial matching points using the NCC (Normalized Cross-Correlation) algorithm and improve the computational efficiency of the NCC algorithm using an integral image. Furthermore, we develop a triangulation-based outlier removal algorithm to extract more robust matching points from among the initial matching points. In order to evaluate the performance of the proposed method, it was quantitatively compared with existing image matching approaches. Experimental results demonstrated that the proposed method can process 2.57 frames per second for video image matching and is up to 4 times faster than existing methods. The proposed method therefore has good potential for various video-based applications that require image matching as a pre-processing step.
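
A hedged sketch of the initial matching step this abstract describes: template windows around grid points in one frame are located in the next frame with normalized correlation. The paper's integral-image acceleration and its triangulation-based outlier removal are not reproduced here; grid spacing, window sizes, and the score threshold are illustrative assumptions.

```python
import cv2

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
W, S = 15, 40                      # template half-size and search half-size

matches = []
h, w = prev.shape
for y in range(S, h - S, 50):      # coarse grid of candidate points
    for x in range(S, w - S, 50):
        tmpl = prev[y - W:y + W + 1, x - W:x + W + 1]
        search = curr[y - S:y + S + 1, x - S:x + S + 1]
        # Zero-mean normalized correlation surface over the search window.
        ncc = cv2.matchTemplate(search, tmpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(ncc)
        if score > 0.8:            # keep confident correlations only
            matches.append(((x, y), (x - S + loc[0] + W, y - S + loc[1] + W)))

# A further step would triangulate the matched points and reject
# correspondences whose triangles are inconsistent between frames,
# as the paper's outlier removal describes.
print(len(matches), "initial NCC matches")
```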

An Approach to Target Tracking Using Region-Based Similarity of the Image Segmented by Least-Eigenvalue (최소고유치로 분할된 영상의 영역기반 유사도를 이용한 목표추적)

  • Oh, Hong-Gyun;Sohn, Yong-Jun;Jang, Dong-Sik;Kim, Mun-Hwa
    • Journal of Institute of Control, Robotics and Systems / v.8 no.4 / pp.327-332 / 2002
  • The main problems of computational complexity in object tracking are the definition of objects, segmentation, and identification in non-structured environments with erratic movements and collisions of objects. The object's information is considered as a region that corresponds to objects without discriminating among them. This paper describes an algorithm that automatically and efficiently recognizes and keeps track of interest regions selected by users in video or camera image sequences. A block-based feature matching method is used for the region tracking. This matching process considers only dominant feature points such as corners and curved edges, without requiring a pre-defined model of objects. Experimental results show that the proposed method provides above 96% precision for correct region matching and real-time processing, even when the objects undergo scaling and 3-dimensional movements in successive image sequences.
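
A hedged sketch of the tracking idea in this abstract: "least-eigenvalue" corner points (Shi-Tomasi) inside a user-selected region are detected and then followed into the next frame; pyramidal Lucas-Kanade is used here as a stand-in for the paper's block-based feature matching. The region coordinates and parameters are illustrative assumptions.

```python
import cv2
import numpy as np

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

x, y, w, h = 100, 80, 120, 90              # user-selected interest region
mask = np.zeros_like(prev)
mask[y:y + h, x:x + w] = 255

# goodFeaturesToTrack scores corners by the minimum eigenvalue of the
# gradient covariance matrix when useHarrisDetector=False.
pts = cv2.goodFeaturesToTrack(prev, maxCorners=100, qualityLevel=0.01,
                              minDistance=5, mask=mask, useHarrisDetector=False)
nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None,
                                          winSize=(15, 15), maxLevel=2)
ok = status.ravel() == 1
# The region in the next frame follows the median motion of its tracked points.
dx, dy = np.median(nxt[ok] - pts[ok], axis=0).ravel()
print(f"region shift: ({dx:.1f}, {dy:.1f})")
```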

Image Registration of Aerial Image Sequences (연속 항공영상에서의 Image Registration)

  • 강민석;김준식;박래홍;이쾌희
    • Journal of the Korean Institute of Telematics and Electronics B / v.29B no.4 / pp.48-57 / 1992
  • This paper addresses the estimation of the shift vector from aerial image sequences. The conventional feature-based and area-based matching methods are simulated to determine a suitable image registration scheme. Computer simulations show that the feature-based matching schemes based on the co-occurrence matrix, autoregressive model, and edge information do not give reliable matching for aerial image sequences, which do not have a suitable statistical model or significant features. In the area-based matching methods, we try various similarity functions as a matching measure and discuss the factors determining the matching accuracy. To reduce the estimation error of the shift vector, we propose a reference window selection scheme. We also discuss the performance of the proposed algorithm based on the simulation results.
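
A hedged sketch of area-based shift estimation as discussed in this abstract: a reference window is chosen where the image is well textured (high variance is one plausible selection criterion, not necessarily the paper's), and the shift vector is the offset maximizing a similarity function (normalized correlation here). Window sizes and file names are illustrative assumptions.

```python
import cv2

f0 = cv2.imread("aerial_000.png", cv2.IMREAD_GRAYSCALE)
f1 = cv2.imread("aerial_001.png", cv2.IMREAD_GRAYSCALE)
WIN, MARGIN = 64, 32

# Reference window selection: scan candidate windows, keep the most textured.
best_var, best_xy = -1.0, (MARGIN, MARGIN)
for y in range(MARGIN, f0.shape[0] - WIN - MARGIN, WIN):
    for x in range(MARGIN, f0.shape[1] - WIN - MARGIN, WIN):
        v = float(f0[y:y + WIN, x:x + WIN].var())
        if v > best_var:
            best_var, best_xy = v, (x, y)

x, y = best_xy
ref = f0[y:y + WIN, x:x + WIN]
search = f1[y - MARGIN:y + WIN + MARGIN, x - MARGIN:x + WIN + MARGIN]
sim = cv2.matchTemplate(search, ref, cv2.TM_CCOEFF_NORMED)
_, _, _, loc = cv2.minMaxLoc(sim)
shift = (loc[0] - MARGIN, loc[1] - MARGIN)   # estimated shift vector (dx, dy)
print("estimated shift:", shift)
```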


Panoramic Image Stitching Using Feature Extracting and Matching on Embedded System

  • Lee, June-Hwan
    • Transactions on Electrical and Electronic Materials / v.18 no.5 / pp.273-278 / 2017
  • Recently, one of the areas where research is being actively conducted is the Internet of Things (IoT). The fields in which Internet of Things systems are used are increasing, coupled with a remarkable increase in the use of cameras. However, general cameras used in the Internet of Things have limited viewing angles compared to the human eye, which restricts the observation of objects and the quality of observation. Therefore, in this paper, we propose a panoramic image stitching method using feature extraction and matching on an embedded system. After extracting the features of the images, the speed of image stitching is improved by reducing the amount of computation, using only the necessary information so that the method can run on the embedded system. Experimental results show that it is possible to improve the speed of feature matching and panoramic image stitching while generating a smooth image.
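
A minimal sketch of feature-based stitching for two overlapping frames, in the spirit of this abstract: ORB (a lightweight detector/descriptor, a plausible choice for an embedded target, though not stated by the paper) gives matches, a RANSAC homography aligns the images, and the second frame is warped onto the first. File names, feature counts, and the simple paste-over compositing are illustrative assumptions.

```python
import cv2
import numpy as np

img1 = cv2.imread("cam_left.jpg")
img2 = cv2.imread("cam_right.jpg")
g1, g2 = (cv2.cvtColor(i, cv2.COLOR_BGR2GRAY) for i in (img1, img2))

orb = cv2.ORB_create(nfeatures=1500)
kp1, des1 = orb.detectAndCompute(g1, None)
kp2, des2 = orb.detectAndCompute(g2, None)
matches = sorted(cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2),
                 key=lambda m: m.distance)[:200]   # keep the best matches only

# Homography mapping img2 points onto img1's frame, estimated with RANSAC.
src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp img2 into img1's frame and paste img1 over the overlap.
pano = cv2.warpPerspective(img2, H, (img1.shape[1] + img2.shape[1], img1.shape[0]))
pano[:img1.shape[0], :img1.shape[1]] = img1
cv2.imwrite("panorama.jpg", pano)
```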

Comparison of Match Candidate Pair Constitution Methods for UAV Images Without Orientation Parameters (표정요소 없는 다중 UAV영상의 대응점 추출 후보군 구성방법 비교)

  • Jung, Jongwon;Kim, Taejung;Kim, Jaein;Rhee, Sooahm
    • Korean Journal of Remote Sensing / v.32 no.6 / pp.647-656 / 2016
  • The growth of UAV technology leads to an expansion of UAV image applications. Many UAV image-based applications use a method called incremental bundle adjustment. However, incremental bundle adjustment produces large computational overhead because it attempts feature matching on all image pairs. For an efficient feature matching process, we have to confine matching to overlapping pairs using exterior orientation parameters. When exterior orientation parameters are not available, we cannot determine overlapping pairs and need other methods to constitute feature matching candidates. In this paper, we compare matching candidate constitution methods that do not use exterior orientation parameters, including partial feature matching, Bag-of-keypoints, and an image intensity method. We use the overlapping pair determination method based on exterior orientation parameters as a reference. Experimental results showed that the partial feature matching method is the one with the best efficiency.
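
A hedged sketch of the "partial feature matching" candidate-selection idea referenced in this abstract: only a small, strong subset of each image's features is matched across all image pairs, and pairs with enough tentative matches become candidates for full matching in incremental bundle adjustment. The subset size, match threshold, and file names are illustrative assumptions.

```python
import cv2
import itertools

paths = ["uav_000.jpg", "uav_001.jpg", "uav_002.jpg", "uav_003.jpg"]
orb = cv2.ORB_create(nfeatures=2000)

subset = []
for p in paths:
    gray = cv2.imread(p, cv2.IMREAD_GRAYSCALE)
    kps, des = orb.detectAndCompute(gray, None)
    # Keep only the strongest few hundred descriptors for the cheap pass.
    order = sorted(range(len(kps)), key=lambda i: kps[i].response, reverse=True)[:300]
    subset.append(des[order])

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
candidates = []
for i, j in itertools.combinations(range(len(paths)), 2):
    n = len(matcher.match(subset[i], subset[j]))
    if n > 30:                      # enough partial matches -> likely overlap
        candidates.append((i, j))
print("match candidate pairs:", candidates)
```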