• Title/Summary/Keyword: Invariant feature

Hardware Accelerated Design on Bag of Words Classification Algorithm

  • Lee, Chang-yong; Lee, Ji-yong; Lee, Yong-hwan
    • Journal of Platform Technology / v.6 no.4 / pp.26-33 / 2018
  • In this paper, we propose an image retrieval algorithm for real-time processing and design it in hardware. The proposed method is based on the Bag of Words (BoW) classification algorithm and performs image search using bit streams. K-fold cross validation is used to verify the algorithm. The data are divided into seven classes of seven images each, for a total of 49 test images. The evaluation measures both accuracy and speed. The image classification accuracy was 86.2% for the BoW algorithm and 83.7% for the proposed hardware-accelerated algorithm, so the BoW algorithm was 2.5 percentage points higher. The image retrieval processing time of BoW is 7.89 s, while our algorithm takes 1.55 s, making it 5.09 times faster. The algorithm is divided into software and hardware parts. The software part is written in C. The Scale Invariant Feature Transform (SIFT) algorithm is used to extract feature points that are invariant to scale and rotation, and bit streams are generated from the extracted feature points. In the hardware architecture, the proposed image retrieval algorithm is written in Verilog HDL and designed and verified with an FPGA and Design Compiler. The generated bit streams are stored, the clustering step is performed, and search and input image databases are generated and matched. Because the proposed algorithm searches by matching against a database that represents each object, it improves user convenience and satisfaction in terms of speed.
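As a reference for the software side of such a pipeline, the following is a minimal sketch of a BoW classifier built from SIFT descriptors, assuming OpenCV and scikit-learn; the bit-stream generation and the Verilog/FPGA matching stage described in the abstract are not reproduced, and the vocabulary size is illustrative.

```python
# Minimal BoW sketch: SIFT descriptors -> k-means vocabulary -> per-image histogram.
# The bit-stream and hardware stages from the paper are not shown; k is illustrative.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def sift_descriptors(image_paths):
    sift = cv2.SIFT_create()
    all_desc = []
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(gray, None)
        if desc is not None:
            all_desc.append(desc)
    return all_desc

def build_vocabulary(descriptor_list, k=64):
    # Cluster all training descriptors into k visual words.
    stacked = np.vstack(descriptor_list).astype(np.float32)
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(stacked)

def bow_histogram(descriptors, vocabulary):
    # Quantize each descriptor to its nearest visual word and count occurrences.
    words = vocabulary.predict(descriptors.astype(np.float32))
    hist, _ = np.histogram(words, bins=np.arange(vocabulary.n_clusters + 1))
    return hist / max(hist.sum(), 1)  # L1-normalize the histogram
```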

Object Detection and Classification Using Extended Descriptors for Video Surveillance Applications (비디오 감시 응용에서 확장된 기술자를 이용한 물체 검출과 분류)

  • Islam, Mohammad Khairul; Jahan, Farah; Min, Jae-Hong; Baek, Joong-Hwan
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.4 / pp.12-20 / 2011
  • In this paper, we propose an efficient object detection and classification algorithm for video surveillance applications. Previous research mainly concentrated on either object detection or classification using a particular type of feature, e.g., Scale Invariant Feature Transform (SIFT) or Speeded Up Robust Features (SURF). In this paper we propose an algorithm that performs object detection and classification jointly. We combine heterogeneous feature types, such as texture and color distributions from local patches, to increase the object detection and classification rates. We perform object detection using spatial clustering on interest points, and use a Bag of Words model and a Naive Bayes classifier for image representation and classification, respectively. Experimental results show that the combined feature achieves a higher object classification rate than the individual local descriptors.
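A rough sketch of the combined-feature idea follows, assuming simple stand-in texture and color descriptors and scikit-learn's GaussianNB; the actual patch descriptors, vocabulary construction, and spatial clustering used in the paper may differ.

```python
# Sketch: describe local patches with heterogeneous (texture + color) features,
# then classify BoW histograms with Naive Bayes. Patch extraction and the visual
# vocabulary are assumed to exist already; the descriptors below are stand-ins.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def patch_feature(patch):
    # patch: H x W x 3 array. Texture cue = gradient-magnitude histogram of the
    # grayscale patch; color cue = coarse RGB distribution.
    gray = patch.mean(axis=2)
    gy, gx = np.gradient(gray)
    tex_hist, _ = np.histogram(np.hypot(gx, gy), bins=16, range=(0, 255))
    col_hist, _ = np.histogramdd(patch.reshape(-1, 3), bins=(4, 4, 4),
                                 range=((0, 256),) * 3)
    feat = np.concatenate([tex_hist, col_hist.ravel()]).astype(float)
    return feat / max(feat.sum(), 1.0)

def train_classifier(bow_histograms, labels):
    # bow_histograms: one BoW histogram per training image, built from patch features.
    clf = GaussianNB()
    clf.fit(np.asarray(bow_histograms), labels)
    return clf
```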

The Target Detection and Classification Method Using SURF Feature Points and Image Displacement in Infrared Images (적외선 영상에서 변위추정 및 SURF 특징을 이용한 표적 탐지 분류 기법)

  • Kim, Jae-Hyup; Choi, Bong-Joon; Chun, Seung-Woo; Lee, Jong-Min; Moon, Young-Shik
    • Journal of the Korea Society of Computer and Information / v.19 no.11 / pp.43-52 / 2014
  • In this paper, we propose a target detection method using image displacement and a classification method using SURF (Speeded Up Robust Features) feature points and BAS (Beam Angle Statistics) in infrared images. The SURF method, a typical correspondence matching method in image processing, has been widely used because it is significantly faster than SIFT (Scale Invariant Feature Transform) while producing similar performance. Most SURF-based object recognition methods consist of feature point extraction and matching steps. The proposed method detects the target area using the estimated displacement and classifies targets using the geometry of the SURF feature points. The proposed method was applied to an unmanned target detection/recognition system. In experiments on both virtual and real images, the classification performance was approximately 73~85%.
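The SURF matching step at the core of the classification stage might look like the following sketch, assuming an opencv-contrib build that ships cv2.xfeatures2d; the displacement-based detection and the BAS shape statistics are not shown.

```python
# Rough SURF correspondence-matching sketch. SURF lives in opencv-contrib
# (cv2.xfeatures2d) and may be unavailable in some builds.
import cv2

def match_surf(img_a, img_b, ratio=0.7):
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_a, des_a = surf.detectAndCompute(img_a, None)
    kp_b, des_b = surf.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    # Lowe's ratio test keeps only distinctive matches.
    good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
            if m.distance < ratio * n.distance]
    return kp_a, kp_b, good
```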

Camera Extrinsic Parameter Estimation using 2D Homography and Nonlinear Minimizing Method based on Geometric Invariance Vector (기하학적 불변벡터 기반 2D 호모그래피와 비선형 최소화기법을 이용한 카메라 외부인수 측정)

  • Cha, Jeong-Hee
    • Journal of Internet Computing and Services / v.6 no.6 / pp.187-197 / 2005
  • In this paper, we propose a method to estimate camera motion parameters based on invariant point features. Typically, image feature information has drawbacks: it varies with the camera viewpoint, and the amount of information grows over time. The LM (Levenberg-Marquardt) method, a nonlinear least-squares estimation used for camera extrinsic parameter estimation, also has a weakness: the number of iterations needed to reach the minimum depends on the initial values, and the convergence time increases if the process falls into a local minimum. To address these shortcomings, we first propose constructing feature models using geometric invariant vectors. Second, we propose a two-stage calculation method that improves accuracy and convergence by combining a 2D homography with the LM method. In the experiments, we compare and analyze the proposed method with existing methods to demonstrate the superiority of the proposed algorithm.
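The two-stage idea (homography initialization followed by Levenberg-Marquardt refinement) can be sketched roughly as below, assuming OpenCV for the initial homography and SciPy's LM solver for the refinement; the geometric invariant-vector feature models from the paper are not reproduced.

```python
# Stage 1: RANSAC homography as an initial estimate.
# Stage 2: Levenberg-Marquardt refinement of the reprojection error.
import cv2
import numpy as np
from scipy.optimize import least_squares

def reprojection_residuals(h_vec, pts_src, pts_dst):
    # pts_src, pts_dst: (N, 2) matched points; H[2, 2] is fixed to 1.
    H = np.append(h_vec, 1.0).reshape(3, 3)
    src_h = np.hstack([pts_src, np.ones((len(pts_src), 1))])
    proj = src_h @ H.T
    proj = proj[:, :2] / proj[:, 2:3]
    return (proj - pts_dst).ravel()

def estimate_homography_lm(pts_src, pts_dst):
    H0, _ = cv2.findHomography(pts_src, pts_dst, cv2.RANSAC)   # stage 1
    result = least_squares(reprojection_residuals,
                           (H0 / H0[2, 2]).ravel()[:8],        # stage 2: LM
                           args=(pts_src, pts_dst), method='lm')
    return np.append(result.x, 1.0).reshape(3, 3)
```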


Region-based Image Retrieval Algorithm Using Image Segmentation and Multi-Feature (영상분할과 다중 특징을 이용한 영역기반 영상검색 알고리즘)

  • Noh, Jin-Soo; Rhee, Kang-Hyeon
    • Journal of the Institute of Electronics Engineers of Korea CI / v.46 no.3 / pp.57-63 / 2009
  • With the rapid growth of computer-based image databases, the need for systems that can manage image information is increasing. This paper presents a region-based image retrieval method using a combination of color (autocorrelogram), texture (CWT moments), and shape (Hu invariant moments) features. As the color feature, a color autocorrelogram is extracted from the hue and saturation components of the HSV color image. Texture, shape, and position features are extracted from the value component. For efficient similarity computation, the extracted features (color autocorrelogram, Hu invariant moments, and CWT moments) are combined, and precision and recall are measured. Experimental results on the Corel and VisTex databases show that the proposed image retrieval algorithm achieves 94.8% precision and 90.7% recall and can be successfully applied to image retrieval systems.
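A simplified sketch of the multi-feature combination is given below; as an assumption, the color autocorrelogram and CWT moments are replaced with a plain HSV histogram for brevity, with only the Hu invariant moments computed as in the abstract.

```python
# Per-region feature: HSV color histogram (stand-in for the autocorrelogram)
# concatenated with log-scaled Hu invariant moments of the region mask.
import cv2
import numpy as np

def region_features(region_bgr, mask):
    # region_bgr: H x W x 3 image patch; mask: same-size uint8 binary region mask.
    hsv = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2HSV)
    color = cv2.calcHist([hsv], [0, 1], mask, [8, 8], [0, 180, 0, 256]).ravel()
    color /= max(color.sum(), 1.0)
    hu = cv2.HuMoments(cv2.moments(mask, binaryImage=True)).ravel()
    hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)  # usual log scaling
    return np.concatenate([color, hu])

def similarity(feat_a, feat_b):
    # Smaller distance = more similar; a weighted distance could be used instead.
    return np.linalg.norm(feat_a - feat_b, ord=1)
```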

Registration Method between High Resolution Optical and SAR Images (고해상도 광학영상과 SAR 영상 간 정합 기법)

  • Jeon, Hyeongju; Kim, Yongil
    • Korean Journal of Remote Sensing / v.34 no.5 / pp.739-747 / 2018
  • Integration analysis of multi-sensor satellite images is becoming increasingly important, and its first step is registration between the multi-sensor images. SIFT (Scale Invariant Feature Transform) is a representative image registration method. However, optical and SAR (Synthetic Aperture Radar) images differ in sensor attitude and radiometric characteristics at acquisition, and the nonlinear radiometric relationship between them makes it difficult to apply conventional methods such as SIFT. To overcome this limitation, we propose a modified method that combines SAR-SIFT with the DLSS (Dense Local Self-Similarity) shape descriptor vector. We conducted experiments using two pairs of Cosmo-SkyMed and KOMPSAT-2 images collected over Daejeon, Korea, an area with a high density of buildings. Unlike conventional methods such as SIFT and SAR-SIFT, the proposed method extracted correct matching points, and it gave quantitatively reasonable results with RMSEs of 1.66 m and 2.45 m for the two image pairs.
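For orientation, the following is a sketch of the generic keypoint-based registration and RMSE evaluation pipeline that such experiments build on, using plain SIFT as the baseline; the SAR-SIFT descriptor and DLSS similarity used in the paper are not implemented here.

```python
# Baseline keypoint registration: match SIFT features, estimate a similarity
# transform with RANSAC, then report RMSE on independent check points.
import cv2
import numpy as np

def register_and_rmse(src_img, dst_img, check_src, check_dst):
    # check_src, check_dst: (N, 2) arrays of manually selected check points.
    sift = cv2.SIFT_create()
    kp1, d1 = sift.detectAndCompute(src_img, None)
    kp2, d2 = sift.detectAndCompute(dst_img, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    warped = cv2.transform(np.float32(check_src).reshape(-1, 1, 2), M).reshape(-1, 2)
    return np.sqrt(np.mean(np.sum((warped - check_dst) ** 2, axis=1)))
```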

Shape Description and Recognition Using the Relative Distance-Curvature Feature Space (상대거리-곡률 특징 공간을 이용한 형태 기술 및 인식)

  • Kim Min-Ki
    • The KIPS Transactions: Part B / v.12B no.5 s.101 / pp.527-534 / 2005
  • Rotation and scale variations make shape description and recognition difficult because they change the locations of the points composing the shape. However, some geometric invariant points, and the relations among them, are not changed by these variations. Therefore, if points in image space, described in the x-y coordinate system, can be transformed into a new coordinate system that is invariant to rotation and scale, the problem of shape description and recognition becomes easier. This paper presents a shape description method via a transformation from image space into an invariant feature space with two axes: relative distance from the centroid and contour segment curvature (CSC). The relative distance describes how far a point is from the centroid, and the CSC represents the degree of fluctuation in a contour segment. After the transformation, mesh features are used to describe the shape mapped onto the feature space. Experimental results show that the proposed method is robust to rotation and scale variations.
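A minimal sketch of the transformation into the relative distance-curvature space might look as follows, assuming the contour is available as an ordered list of boundary points; the subsequent mesh-feature description is omitted.

```python
# Map each contour point to (relative distance from centroid, turning-angle curvature).
# Normalizing by the maximum distance gives scale invariance; curvature is
# computed from neighboring points and is rotation invariant.
import numpy as np

def rd_curvature_space(contour, k=5):
    contour = np.asarray(contour, dtype=float)         # (N, 2) ordered boundary points
    centroid = contour.mean(axis=0)
    dist = np.linalg.norm(contour - centroid, axis=1)
    rel_dist = dist / dist.max()                        # relative-distance axis
    prev_pts = np.roll(contour, k, axis=0)
    next_pts = np.roll(contour, -k, axis=0)
    v1 = contour - prev_pts
    v2 = next_pts - contour
    ang = np.arctan2(v2[:, 1], v2[:, 0]) - np.arctan2(v1[:, 1], v1[:, 0])
    curvature = np.abs((ang + np.pi) % (2 * np.pi) - np.pi)  # turning angle in [0, pi]
    return np.stack([rel_dist, curvature], axis=1)
```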

Place Modeling and Recognition using Distribution of Scale Invariant Features (스케일 불변 특징들의 분포를 이용한 장소의 모델링 및 인식)

  • Hu, Yi; Shin, Bum-Joo; Lee, Chang-Woo
    • Journal of the Korea Society of Computer and Information / v.13 no.4 / pp.51-58 / 2008
  • In this paper, we propose a place model based on the distribution of scale-invariant features, and a place recognition method that recognizes places by comparing the place models in a database with the features extracted from input data. The proposed method is based on the assumption that every place can be represented by a unique feature distribution that is distinguishable from the others. The proposed method uses global information about each place, with one place represented by one distribution model; the main contribution is therefore that the time cost grows only linearly, rather than exponentially, with the number of places. For the performance evaluation, different numbers of frames and features are used. Empirical results show that our approach achieves better space and time cost than other approaches. We expect the proposed method to be applicable to many ubiquitous systems such as robot navigation, vision systems for blind people, and wearable computing.
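One plausible, simplified reading of the distribution-based place model is sketched below, where each place is summarized by a single Gaussian over its SIFT descriptors and recognition picks the place whose distribution best explains the query descriptors; the exact distribution model used in the paper may differ.

```python
# Each place = one Gaussian (mean, covariance) over its SIFT descriptors.
# Recognition = highest average log-likelihood of the query descriptors.
import numpy as np
from scipy.stats import multivariate_normal

def fit_place_model(descriptors):
    # descriptors: (N, D) SIFT descriptors collected from one place.
    mean = descriptors.mean(axis=0)
    cov = np.cov(descriptors, rowvar=False) + 1e-3 * np.eye(descriptors.shape[1])
    return mean, cov

def recognize_place(query_desc, place_models):
    # place_models: {place_name: (mean, cov)}; returns the most likely place.
    scores = {name: multivariate_normal.logpdf(query_desc, mean, cov).mean()
              for name, (mean, cov) in place_models.items()}
    return max(scores, key=scores.get)
```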


ISAR Cross-Range Scaling for a Maneuvering Target (기동표적에 대한 ISAR Cross-Range Scaling)

  • Kang, Byung-Soo; Bae, Ji-Hoon; Kim, Kyung-Tae; Yang, Eun-Jung
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.25 no.10 / pp.1062-1068 / 2014
  • In this paper, a novel approach for estimating a target's rotation velocity (RV) is proposed for inverse synthetic aperture radar (ISAR) cross-range scaling (CRS). The Scale Invariant Feature Transform (SIFT) is applied to two sequentially generated ISAR images to extract non-fluctuating scatterers. Based on the fact that the distance between the target's rotation center (RC) and the SIFT features remains the same, we set a criterion for estimating the RV. The criterion is then optimized by the proposed method based on particle swarm optimization (PSO) combined with an exhaustive search. Simulation results show that the proposed algorithm can precisely estimate the RV of a scenario-based maneuvering target without RC information. Using the estimated RV, the ISAR image can be correctly re-scaled along the cross-range direction.
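The PSO component can be sketched generically as below; the rotation-velocity criterion built from the SIFT scatterer distances to the rotation center is assumed to be supplied by the caller, and the bounds and swarm settings are illustrative.

```python
# Bare-bones 1-D particle swarm optimization loop of the kind used to minimize
# a rotation-velocity criterion; `criterion` is a user-supplied scalar function.
import numpy as np

def pso_minimize(criterion, lo, hi, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, n_particles)          # candidate rotation velocities
    v = np.zeros(n_particles)
    pbest, pbest_val = x.copy(), np.array([criterion(p) for p in x])
    gbest = pbest[pbest_val.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([criterion(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()]
    return gbest
```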

Learning Domain Invariant Representation via Self-Regularization (자기 정규화를 통한 도메인 불변 특징 학습)

  • Hyun, Jaeguk; Lee, ChanYong; Kim, Hoseong; Yoo, Hyunjung; Koh, Eunjin
    • Journal of the Korea Institute of Military Science and Technology / v.24 no.4 / pp.382-391 / 2021
  • Unsupervised domain adaptation often gives impressive solutions for handling domain shift in data. Most current approaches assume that abundant unlabeled target data are available for training, which is not always true in practice. To tackle this issue, we propose a general solution to the domain gap minimization problem that requires no target data. Our method consists of two regularization steps. The first is pixel regularization by arbitrary style transfer. Recently, some methods have brought style transfer algorithms into the domain adaptation and domain generalization process, using them to remove texture bias in the source domain data. We also use style transfer to remove texture bias, but our method depends on neither the domain adaptation nor the domain generalization paradigm. The second regularization step is feature regularization by feature alignment: by adding a feature alignment loss term to the model loss, the model learns a domain-invariant representation more efficiently. We evaluate our regularization methods in several experiments on both small and large datasets. The experiments show that our model can learn domain-invariant representations as well as unsupervised domain adaptation methods do.
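A minimal PyTorch-style sketch of the feature-regularization step is shown below; the backbone, the style-transfer module producing the stylized images, and the loss weight are placeholders rather than the authors' implementation.

```python
# Feature alignment as self-regularization: pull the features of an image and
# its style-transferred copy together, added to the usual task loss.
import torch
import torch.nn.functional as F

def training_step(model, images, stylized_images, labels, lam=0.1):
    # Assumption: the model returns (logits, features) for a batch.
    logits, feats = model(images)
    _, feats_stylized = model(stylized_images)      # same weights on stylized copies
    task_loss = F.cross_entropy(logits, labels)
    align_loss = F.mse_loss(feats, feats_stylized)  # feature regularization term
    return task_loss + lam * align_loss
```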