• Title/Summary/Keyword: SURF feature points


Fast Vehicle Detection based on Haarlike and Vehicle Tracking using SURF Method (Haarlike 기반의 고속 차량 검출과 SURF를 이용한 차량 추적 알고리즘)

  • Yu, Jae-Hyoung; Han, Young-Joon; Hahn, Hern-Soo
    • Journal of the Korea Society of Computer and Information, v.17 no.1, pp.71-80, 2012
  • This paper proposes a vehicle detection and tracking algorithm using a CCD camera. The proposed algorithm uses a Haar-like wavelet edge detector to detect vehicle features and estimates the vehicle's location using the calibration information of the image. It then accumulates vehicle information over k consecutive images to improve reliability. Finally, the obtained vehicle region becomes a template image that is used to find the same object in the following frames with SURF (Speeded Up Robust Features). The template image is updated in every frame. To reduce SURF processing time, the ROI (Region of Interest) is limited to an expanded area around the vehicle location detected in the previous frame. The algorithm repeats this detection and tracking process until no corresponding points are found. Experimental results on images obtained on the road show the efficiency of the proposed algorithm.
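
As a rough illustration of this detect-then-track loop (not the authors' implementation), the sketch below uses OpenCV: a Haar cascade provides the initial vehicle box, and SURF keypoints matched only inside an expanded ROI around the previous detection track the vehicle in the next frame. The cascade file name, thresholds, and margin are placeholder assumptions, and SURF_create requires an opencv-contrib build with the non-free modules enabled.

```python
import cv2
import numpy as np

# Hypothetical cascade file; any Haar-like vehicle cascade could be substituted.
cascade = cv2.CascadeClassifier("haar_vehicle.xml")
# SURF lives in opencv-contrib and needs a build with the non-free modules enabled.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
matcher = cv2.BFMatcher(cv2.NORM_L2)

def detect_vehicle(gray):
    """Initial Haar-like detection; returns the first (x, y, w, h) box or None."""
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    return tuple(boxes[0]) if len(boxes) else None

def track_in_roi(template, gray, box, margin=40):
    """Match the template's SURF keypoints inside an expanded ROI of the new frame."""
    x, y, w, h = box
    x0, y0 = max(x - margin, 0), max(y - margin, 0)
    roi = gray[y0:y0 + h + 2 * margin, x0:x0 + w + 2 * margin]
    kp_t, des_t = surf.detectAndCompute(template, None)
    kp_r, des_r = surf.detectAndCompute(roi, None)
    if des_t is None or des_r is None:
        return None
    pairs = matcher.knnMatch(des_t, des_r, k=2)
    good = [m[0] for m in pairs if len(m) == 2 and m[0].distance < 0.7 * m[1].distance]
    if len(good) < 8:          # no corresponding points left: fall back to re-detection
        return None
    pts = np.float32([kp_r[m.trainIdx].pt for m in good])
    cx, cy = pts.mean(axis=0)  # rough center of the matched points inside the ROI
    return int(x0 + cx - w / 2), int(y0 + cy - h / 2), w, h
```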

Invariant-Feature Based Object Tracking Using Discrete Dynamic Swarm Optimization

  • Kang, Kyuchang; Bae, Changseok; Moon, Jinyoung; Park, Jongyoul; Chung, Yuk Ying; Sha, Feng; Zhao, Ximeng
    • ETRI Journal, v.39 no.2, pp.151-162, 2017
  • With the remarkable growth of rich media in recent years, people are increasingly exposed to visual information from their environment. Visual information continues to play a vital role in rich media because people's real interest lies in dynamic information. This paper proposes a novel discrete dynamic swarm optimization (DDSO) algorithm for video object tracking using invariant features. The proposed approach is designed to track objects more robustly than traditional algorithms with respect to illumination changes, background noise, and occlusions. DDSO is integrated with a matching procedure that spatially eliminates inappropriate feature points, and the proposed fitness function helps exclude the influence of noisy, mismatched feature points. Test results show that our approach overcomes changes in illumination, background noise, and occlusions more effectively than traditional methods, including color-tracking and invariant-feature-tracking methods.
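
The swarm-optimization core of DDSO is too involved for a snippet, but the kind of geometric match filtering it is paired with, discarding feature matches whose displacement disagrees with the consensus, can be sketched generically as follows. This is an illustration, not the paper's algorithm; the 0.7 ratio and 2-sigma cutoff are arbitrary assumptions.

```python
import cv2
import numpy as np

def filter_matches(kp1, kp2, des1, des2, ratio=0.7, sigma_cut=2.0):
    """Lowe-style ratio test followed by a crude spatial-consistency check."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [m[0] for m in pairs if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    if len(good) < 4:
        return good
    # Displacement vector of each surviving match between the two frames.
    disp = np.float32([np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt) for m in good])
    mean, std = disp.mean(axis=0), disp.std(axis=0) + 1e-6
    # Keep matches whose displacement stays within sigma_cut standard deviations.
    keep = np.all(np.abs(disp - mean) < sigma_cut * std, axis=1)
    return [m for m, k in zip(good, keep) if k]
```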

Comparative Performance Analysis of Feature Detection and Matching Methods for Lunar Terrain Images (달 지형 영상에서 특징점 검출 및 정합 기법의 성능 비교 분석)

  • Hong, Sungchul; Shin, Hyu-Soung
    • KSCE Journal of Civil and Environmental Engineering Research, v.40 no.4, pp.437-444, 2020
  • A lunar rover's optical camera provides navigation and terrain information in an exploration zone. However, owing to the near absence of an atmosphere, the Moon has homogeneous terrain with dark soil. In addition, in this extreme environment the rover has limited data storage and low computation capability. For successful exploration, it is therefore necessary to identify feature detection and matching methods that are robust to lunar terrain and environmental characteristics. In this research, SIFT, SURF, BRISK, ORB, and AKAZE are comparatively analyzed with lunar terrain images from a lunar rover. Experimental results show that SIFT and AKAZE are the most robust to lunar terrain characteristics. AKAZE detects fewer feature points than SIFT, but its feature points are detected and matched with high precision and the least computational cost, making it adequate for fast and accurate navigation information. Although SIFT has the highest computational cost, it stably detects and matches the largest number of feature points. Because the rover periodically sends terrain images to Earth, SIFT is suitable for global 3D terrain map construction, where a large number of terrain images can be processed on Earth. The study results are expected to provide a guideline for utilizing feature detection and matching methods in future lunar exploration rovers.
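
A comparison of the kind reported here can be reproduced with OpenCV along roughly these lines (a sketch only: the image path is a placeholder, and SURF is commented out because it requires an opencv-contrib build with the non-free modules):

```python
import time
import cv2

detectors = {
    "SIFT": cv2.SIFT_create(),
    "BRISK": cv2.BRISK_create(),
    "ORB": cv2.ORB_create(),
    "AKAZE": cv2.AKAZE_create(),
    # "SURF": cv2.xfeatures2d.SURF_create(400),  # needs the non-free contrib build
}

img = cv2.imread("lunar_terrain.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
for name, det in detectors.items():
    t0 = time.perf_counter()
    kp, des = det.detectAndCompute(img, None)
    print(f"{name}: {len(kp)} keypoints in {(time.perf_counter() - t0) * 1000:.1f} ms")
```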

Development of Hybrid Image Stabilization System for a Mobile Robot (이동 로봇을 위한 하이브리드 이미지 안정화 시스템의 개발)

  • Choi, Yun-Won; Kang, Tae-Hun; Saitov, Dilshat; Lee, Dong-Chun; Lee, Suk-Gyu
    • Journal of Institute of Control, Robotics and Systems, v.17 no.2, pp.157-163, 2011
  • This paper proposes a hybrid image stabilization system that combines optical image stabilization based on an EKF (Extended Kalman Filter) with digital image stabilization based on SURF (Speeded Up Robust Features). Although image information is among the most useful data for object recognition, it is susceptible to noise resulting from internal vibration as well as external factors. A blurred image obtained by the camera mounted on a robot makes it difficult for the robot to recognize its environment. The proposed system estimates the shaking angle with the EKF, using information from an inclinometer and a gyro sensor, to stabilize the image. In addition, extracting feature points around the rotation axis with SURF, which is robust to changes in scale and rotation, enhances processing speed by removing unnecessary operations based on the Hessian matrix. Experimental results show the effectiveness of the proposed hybrid system over an extended frequency range.
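
The digital half of such a stabilizer, estimating the inter-frame rotation and translation from matched keypoints and warping the new frame back, can be approximated as below. This is a sketch assuming OpenCV's estimateAffinePartial2D; ORB stands in for SURF so the code runs on a stock build, and the EKF/optical stage is omitted.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(1000)  # stand-in for SURF on a stock OpenCV build
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def stabilize(prev_gray, cur_gray):
    """Estimate a similarity transform (rotation + translation + scale) and undo it."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(cur_gray, None)
    if des1 is None or des2 is None:
        return cur_gray
    matches = matcher.match(des1, des2)
    if len(matches) < 4:
        return cur_gray
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    if M is None:
        return cur_gray
    h, w = cur_gray.shape
    # Warp the current frame back onto the previous frame's coordinates.
    return cv2.warpAffine(cur_gray, cv2.invertAffineTransform(M), (w, h))
```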

Pose Estimation of Leader Aircraft for Vision-based Formation Flight (영상기반 편대비행을 위한 선도기 자세예측 알고리즘)

  • Heo, Jin-Woo; Kim, Jeong-Ho; Han, Dong-In; Lee, Dae-Woo; Cho, Kyeum-Rae; Hur, Gi-Bong
    • Journal of the Korean Society for Aeronautical & Space Sciences, v.41 no.7, pp.532-538, 2013
  • This paper describes a vision-only attitude estimation technique for the leader aircraft in formation flight. Feature points in images obtained from the X-Plane simulator are extracted with the SURF (Speeded Up Robust Features) algorithm, and the POSIT (Pose from Orthography and Scaling with ITeration) algorithm is used to estimate attitude. Finally, we verify that vision-only attitude estimation yields a small estimation error of 1.1° to 1.76°.
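
POSIT itself is no longer exposed in recent OpenCV Python bindings, but the same estimation step can be illustrated with solvePnP as a stand-in: known 3D marker points on the leader are paired with their detected 2D image locations to recover the rotation. All coordinates and intrinsics below are hypothetical placeholder values.

```python
import cv2
import numpy as np

# Hypothetical 3D coordinates of marker points on the leader aircraft (metres, body frame).
object_pts = np.float32([[0, 0, 0], [4, 0, 1], [-4, 0, 1], [0, 0.5, 6], [0, 2, 6], [0, -1, 3]])
# Their detected 2D locations in the wingman's image (placeholder pixel coordinates).
image_pts = np.float32([[320, 240], [420, 238], [220, 242], [318, 300], [316, 180], [322, 275]])
# Assumed pinhole intrinsics: fx = fy = 800, principal point at the image centre.
K = np.float32([[800, 0, 320], [0, 800, 240], [0, 0, 1]])

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)                # leader attitude w.r.t. the camera frame
    pitch = np.degrees(np.arcsin(-R[2, 0]))   # one Euler angle under a ZYX convention
    print("estimated pitch [deg]:", pitch)
```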

An Implementation of the Real-time Image Stitching Algorithm Based on ROI (ROI 기반 실시간 이미지 정합 알고리즘 구현)

  • Kwak, Jae Chang
    • Journal of IKEEE, v.19 no.4, pp.460-464, 2015
  • This paper proposes a panoramic image stitching method that operates in real time in an embedded environment by applying an ROI and the PROSAC algorithm. Conventional panoramic image stitching applies the SURF or SIFT algorithm, which involves complicated operations and a large amount of data, to the entire image to detect feature points. It also applies the RANSAC algorithm to remove outliers, which requires additional verification time because of its randomness. In this paper, unnecessary data are eliminated by setting an ROI based on the characteristics of panoramic images, and the PROSAC algorithm is applied to remove outliers and reduce verification time. The proposed method was implemented on the ODROID-XU board with an ARM Cortex-A15. The results show an improvement of about 54% in processing time compared to the conventional method.
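
The two time-saving ideas, restricting feature extraction to the expected overlap band of each frame and replacing RANSAC with a PROSAC-style estimator, map onto OpenCV roughly as follows. This is a sketch: cv2.RHO is OpenCV's PROSAC-flavoured homography method, ORB stands in for SURF/SIFT, and the 30% overlap width is an arbitrary assumption.

```python
import cv2
import numpy as np

def stitch_pair(left, right, overlap_frac=0.3):
    """Match features only in the presumed overlap bands, then warp with an RHO homography."""
    h, w = left.shape[:2]
    roi_l = left[:, int(w * (1 - overlap_frac)):]   # right strip of the left image
    roi_r = right[:, :int(w * overlap_frac)]        # left strip of the right image
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(cv2.cvtColor(roi_l, cv2.COLOR_BGR2GRAY), None)
    kp2, des2 = orb.detectAndCompute(cv2.cvtColor(roi_r, cv2.COLOR_BGR2GRAY), None)
    matches = sorted(cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2),
                     key=lambda m: m.distance)      # PROSAC-style estimators want sorted matches
    src = np.float32([kp2[m.trainIdx].pt for m in matches])
    dst = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst[:, 0] += w * (1 - overlap_frac)             # shift ROI coordinates back to the full image
    H, _ = cv2.findHomography(src, dst, cv2.RHO, 3.0)
    canvas = cv2.warpPerspective(right, H, (2 * w, h))
    canvas[:, :w] = left
    return canvas
```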

Speed-up of Image Matching Using Feature Strength Information (특징 강도 정보를 이용한 영상 정합 속도 향상)

  • Kim, Tae-Woo
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.13 no.6, pp.63-69, 2013
  • A feature-based image recognition method, which uses features of an object, can be performed faster than template matching. Invariant-feature-based panoramic image generation, an application of image recognition, requires a large amount of time to match features between two images. This paper proposes a method to speed up feature matching using feature strength information. Our algorithm extracts features from the images, computes their feature strength, and selects the strong feature points, which are then used for matching. Strong features can be regarded as more meaningful than weak ones. Experiments showed that our method improves processing time by over 40% compared with the technique that does not use feature strength information.
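
Most OpenCV detectors expose a per-keypoint response value, so the "match only the strong features" idea can be sketched as below; matching the reduced descriptor sets instead of the full ones is what yields the speed-up. The 30% keep fraction is an assumption, not the paper's threshold.

```python
import cv2

def strongest_keypoints(detector, gray, keep_frac=0.3):
    """Detect keypoints, rank them by response (feature strength), and keep the top fraction."""
    kp, des = detector.detectAndCompute(gray, None)
    order = sorted(range(len(kp)), key=lambda i: kp[i].response, reverse=True)
    keep = order[:max(4, int(len(kp) * keep_frac))]
    return [kp[i] for i in keep], des[keep]
```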

BoF based Action Recognition using Spatio-Temporal 2D Descriptor (시공간 2D 특징 설명자를 사용한 BOF 방식의 동작인식)

  • Kim, JinOk
    • Journal of Internet Computing and Services, v.16 no.3, pp.21-32, 2015
  • Since spatio-temporal local features for video representation have become an important issue in model-free bottom-up approaches to action recognition, many methods for feature extraction and description have been proposed. In particular, BoF (bag of features) has shown promising recognition results. The most important part of BoF is how to represent the dynamic information of actions in videos. Most existing BoF methods treat the video as a spatio-temporal volume and describe the neighborhoods of 3D interest points as complex volumetric patches. To simplify these complex 3D methods, this paper proposes a novel method that builds a BoF representation by learning 2D interest points directly from video data. The basic idea of the proposed method is to gather feature points not only from the 2D xy spatial planes of traditional frames, but also from 2D slices along the time axis, called spatio-temporal frames. Such spatio-temporal features capture dynamic information from action videos and are well suited to recognizing human actions without 3D extensions of the feature descriptors. The spatio-temporal BoF approach using SIFT and SURF feature descriptors obtains good recognition rates on a well-known action recognition dataset. Compared with the more sophisticated scheme of 3D-based HOG/HOF descriptors, the proposed method is easier to compute and simpler to understand.
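
The core trick, treating slices along the time axis as ordinary 2D images and pooling their local descriptors into a single bag-of-features histogram, might look like the sketch below. SIFT stands in for the SIFT/SURF descriptors used in the paper, the slice stride of 16 is an assumption, and the vocabulary is assumed to come from k-means over training descriptors.

```python
import cv2
import numpy as np

sift = cv2.SIFT_create()

def spatio_temporal_descriptors(volume):
    """volume: (T, H, W) uint8 video. Collect descriptors from xy frames and xt/yt time slices."""
    planes = list(volume)                                                # ordinary xy frames
    planes += [volume[:, y, :] for y in range(0, volume.shape[1], 16)]   # xt "spatio-temporal frames"
    planes += [volume[:, :, x] for x in range(0, volume.shape[2], 16)]   # yt "spatio-temporal frames"
    des_all = []
    for plane in planes:
        _, des = sift.detectAndCompute(np.ascontiguousarray(plane), None)
        if des is not None:
            des_all.append(des)
    return np.vstack(des_all)

def bof_histogram(des, vocabulary):
    """Quantise descriptors against a k-means vocabulary and return a normalised histogram."""
    d = np.linalg.norm(des[:, None, :] - vocabulary[None, :, :], axis=2)
    hist = np.bincount(d.argmin(axis=1), minlength=len(vocabulary)).astype(np.float32)
    return hist / (hist.sum() + 1e-9)
```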

Laser Image SLAM based on Image Matching for Navigation of a Mobile Robot (이동 로봇 주행을 위한 이미지 매칭에 기반한 레이저 영상 SLAM)

  • Choi, Yun Won; Kim, Kyung Dong; Choi, Jung Won; Lee, Suk Gyu
    • Journal of the Korean Society for Precision Engineering, v.30 no.2, pp.177-184, 2013
  • This paper proposes an enhanced Simultaneous Localization and Mapping (SLAM) algorithm based on laser image matching and an Extended Kalman Filter (EKF). In general, laser data are among the most useful information for localization of mobile robots and are more accurate than encoder data. For localization of a mobile robot, the moving distance is often obtained from encoders, and the distance from the robot to landmarks is estimated by various sensors. Although encoders have high resolution, it is difficult to estimate the current position of a robot precisely because of encoder errors caused by wheel slip and backlash. In this paper, the position and heading angle of the robot are estimated with high accuracy by comparing laser images obtained from a laser scanner. Speeded Up Robust Features (SURF) is used to extract feature points from the previous and current laser images, and these feature points are compared; from the matched points, the moving distance and heading angle are obtained. Experimental results show the effectiveness of the proposed laser SLAM algorithm for robot navigation.
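
Once feature points from the previous and current laser images have been matched, the moving distance and heading change reduce to a rigid 2D alignment problem; a minimal numpy-only sketch of that step (not the paper's full SURF/EKF pipeline) is:

```python
import numpy as np

def rigid_transform_2d(prev_pts, cur_pts):
    """Least-squares 2D rotation + translation between matched point sets (Kabsch method).
    Returns (dx, dy, dtheta), usable as an odometry-like increment for the SLAM update."""
    p = prev_pts - prev_pts.mean(axis=0)
    q = cur_pts - cur_pts.mean(axis=0)
    U, _, Vt = np.linalg.svd(p.T @ q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cur_pts.mean(axis=0) - R @ prev_pts.mean(axis=0)
    return t[0], t[1], np.arctan2(R[1, 0], R[0, 0])
```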

Evaluation of Marker Images based on Analysis of Feature Points for Effective Augmented Reality (효과적인 증강현실 구현을 위한 특징점 분석 기반의 마커영상 평가 방법)

  • Lee, Jin-Young; Kim, Jongho
    • Journal of the Korea Academia-Industrial cooperation Society, v.20 no.9, pp.49-55, 2019
  • This paper presents a marker image evaluation method based on the analysis of object distribution in images and the classification of images with repetitive patterns, for effective marker-based augmented reality (AR) system development. We measure the variance of feature point coordinates to distinguish marker images that are vulnerable to occlusion, since the object distribution affects tracking performance under partial occlusion. Moreover, we propose a method to classify images suitable for object recognition and tracking, based on the fact that the distributions of descriptor vectors differ significantly between general images and repetitive-pattern images. Comprehensive experiments on marker images confirm that the proposed evaluation method distinguishes occlusion-vulnerable and repetitive-pattern images very well. Furthermore, we find that the scale-invariant feature transform (SIFT) is superior to speeded up robust features (SURF) for object tracking in marker images. The proposed method provides users with suitability information for various images and helps AR systems be realized more effectively.
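
A rough version of the occlusion-vulnerability check described here, using the spread of keypoint coordinates as a proxy for how evenly a marker's content is distributed, could be computed as follows (a sketch only; the normalisation and any decision threshold are assumptions):

```python
import cv2
import numpy as np

def keypoint_spread(gray):
    """Variance of SIFT keypoint coordinates, normalised by image size.
    A low value suggests the features cluster in one area, so the marker
    is more vulnerable to partial occlusion."""
    kp = cv2.SIFT_create().detect(gray, None)
    if len(kp) < 10:
        return 0.0
    pts = np.float32([k.pt for k in kp])
    h, w = gray.shape
    norm = pts / np.float32([w, h])   # scale coordinates into [0, 1]
    return float(norm.var(axis=0).sum())
```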