• Title/Summary/Keyword: Object Feature Extraction


Object Recognition by Invariant Feature Extraction in FLIR (적외선 영상에서의 불변 특징 정보를 이용한 목표물 인식)

  • 권재환;이광연;김성대
    • Proceedings of the IEEK Conference
    • /
    • 2000.11d
    • /
    • pp.65-68
    • /
    • 2000
  • This paper describes an approach for extracting invariant features using a view-based representation and recognizing objects with a high-speed search method in FLIR imagery. We use a reformulated eigenspace technique based on robust estimation to extract features that are robust to outliers such as noise and clutter. After feature extraction, we recognize an object using a partial distance search method for computing the Euclidean distance. The experimental results show that the proposed method improves the recognition rate compared with standard PCA.
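The partial distance search mentioned in this abstract can be sketched as follows. This is a minimal illustration of the general technique (function names and data layout are my own, not the authors'): the squared Euclidean distance is accumulated one dimension at a time and the candidate is abandoned as soon as the running sum exceeds the best distance found so far.

```python
import numpy as np

def partial_distance_search(query, database):
    """Find the nearest vector by partial distance search (PDS).

    The squared Euclidean distance is accumulated dimension by
    dimension; once the running sum exceeds the best distance seen
    so far, the remaining dimensions are skipped.
    """
    best_idx, best_dist = -1, np.inf
    for idx, candidate in enumerate(database):
        acc = 0.0
        for q, c in zip(query, candidate):
            acc += (q - c) ** 2
            if acc >= best_dist:      # early abort: cannot beat the best
                break
        else:                         # loop completed: new best match
            best_idx, best_dist = idx, acc
    return best_idx, best_dist
```

Because the abort test is monotone, the result is identical to a full Euclidean nearest-neighbour search, only faster on average.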


Real-time Object Recognition with Pose Initialization for Large-scale Standalone Mobile Augmented Reality

  • Lee, Suwon
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.10
    • /
    • pp.4098-4116
    • /
    • 2020
  • Mobile devices such as smartphones are attractive targets for augmented reality (AR) services, but their limited resources make it difficult to increase the number of objects to be recognized. When the recognition process is scaled to a large number of objects, it typically requires significant computation time and memory. Therefore, most large-scale mobile AR systems rely on a server to outsource the recognition process to a high-performance PC, but this limits the scenarios available in AR services. As a step toward realizing large-scale standalone mobile AR, this paper presents a solution to the problems of accuracy, memory, and speed in large-scale object recognition. To this end, we design our own basic feature and realize spatial locality, selective feature extraction, rough pose estimation, and selective feature matching. Experiments verify the appropriateness of the proposed method for realizing large-scale standalone mobile AR in terms of efficiency and accuracy.

Feature Extraction of Shape of Image Objects in Content-based Image Retrieval (내용기반으로한 이미지 검색에서 이미지 객체들의 외형특징추출)

  • Cho, June-Suh
    • The KIPS Transactions:PartB
    • /
    • v.10B no.7
    • /
    • pp.823-828
    • /
    • 2003
  • The main objective of this paper is to provide a methodology for feature extraction using the shape of image objects for content-based image retrieval. The shape of most real-life objects is irregular, so there is no universal approach to quantifying the shape of an arbitrary object. In particular, electronic catalogs contain many image objects for their products. In this paper, we perform feature extraction based on individual objects in images rather than on the whole image itself, since our method uses a shape-based approach that relies on RLC lines within an image. Experiments show that shape parameters distinctly represented image objects and provided better classification and discrimination among image objects in an image database compared to texture features.
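Assuming "RLC lines" refers to per-row run-length coding of a binary object mask (the abstract does not spell this out), a per-object shape-feature pipeline of this kind might look like the following sketch; the specific shape parameters here (area, extents, elongation) are my own illustrative choices, not the paper's:

```python
import numpy as np

def rlc_rows(mask):
    """Run-length code each row of a binary mask as (row, start, length)."""
    runs = []
    for r, row in enumerate(mask.astype(int)):
        edges = np.diff(np.concatenate(([0], row, [0])))
        starts = np.where(edges == 1)[0]
        ends = np.where(edges == -1)[0]
        runs.extend((r, s, e - s) for s, e in zip(starts, ends))
    return runs

def shape_features(mask):
    """Simple shape parameters derived from the RLC runs of one object."""
    runs = rlc_rows(mask)
    area = sum(length for _, _, length in runs)
    rows = [r for r, _, _ in runs]
    cols = [s for _, s, _ in runs] + [s + l - 1 for _, s, l in runs]
    height = max(rows) - min(rows) + 1
    width = max(cols) - min(cols) + 1
    return {"area": area, "height": height, "width": width,
            "elongation": max(height, width) / min(height, width)}
```

Working on runs rather than raw pixels keeps the features per-object, matching the paper's emphasis on individual objects over the whole image.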

Object Feature Extraction and Matching for Effective Multiple Vehicles Tracking (효과적인 다중 차량 추적을 위한 객체 특징 추출 및 매칭)

  • Cho, Du-Hyung;Lee, Seok-Lyong
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.11
    • /
    • pp.789-794
    • /
    • 2013
  • A vehicle tracking system makes it possible to infer vehicle movement paths for avoiding traffic congestion and to prevent traffic accidents in advance by recognizing traffic flow, monitoring vehicles, and detecting road accidents. To track vehicles effectively, those that appear in a sequence of video frames need to be identified by extracting the features of each object in the frames. Next, identical vehicles across continuous frames need to be recognized through matching of the objects' feature values. In this paper, we identify objects by binarizing the difference image between a target and a reference image and applying a labelling technique. As feature values, we use the center coordinate of the minimum bounding rectangle (MBR) of the identified object and the averages of 1D FFT (fast Fourier transform) coefficients along the horizontal and vertical directions of the MBR. A vehicle is tracked by regarding the pair of objects with the highest similarity between two continuous images as an identical object. The experimental results show that the proposed method outperforms existing methods that use geometric features in tracking accuracy.
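The feature vector and matching step described above can be sketched roughly as follows. This is a hedged reconstruction from the abstract alone: the exact patch normalisation and similarity measure are assumptions (here, averaged 1D FFT magnitudes and Euclidean distance), and all names are my own.

```python
import numpy as np

def mbr_fft_features(patch, mbr):
    """Feature vector for one object: MBR centre plus averaged magnitudes
    of 1D FFTs taken along the horizontal and vertical MBR directions."""
    x, y, w, h = mbr                                   # MBR as (x, y, w, h)
    cx, cy = x + w / 2.0, y + h / 2.0
    fft_h = np.abs(np.fft.fft(patch, axis=1)).mean()   # row-wise FFTs
    fft_v = np.abs(np.fft.fft(patch, axis=0)).mean()   # column-wise FFTs
    return np.array([cx, cy, fft_h, fft_v])

def match_objects(feats_a, feats_b):
    """Pair each object in frame A with the most similar one in frame B."""
    pairs = []
    for i, fa in enumerate(feats_a):
        dists = [np.linalg.norm(fa - fb) for fb in feats_b]
        pairs.append((i, int(np.argmin(dists))))
    return pairs
```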

Feature Extraction in 3-Dimensional Object with Closed-surface using Fourier Transform (Fourier Transform을 이용한 3차원 폐곡면 객체의 특징 벡터 추출)

  • 이준복;김문화;장동식
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.4 no.3
    • /
    • pp.21-26
    • /
    • 2003
  • A new method to realize a 3-dimensional object pattern recognition system using a Fourier-based feature extractor is proposed. The procedure to obtain the invariant feature vector is as follows: a closed surface is generated by tracing the surface of the object using 3-dimensional polar coordinates. The centroidal distances between the object's geometric center and each closed-surface point are calculated; this distance vector is translation invariant. The distance vector is then normalized, making the result scale invariant. Finally, the Fourier spectrum of each normalized distance vector is calculated, which is rotation invariant. The Fourier-based feature generated by this procedure completely eliminates the effects of variations in translation, scale, and rotation of a 3-dimensional object with a closed surface. Experimental results show that the proposed method has high accuracy.
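The three invariance steps in this abstract (centroidal distances for translation, normalisation for scale, Fourier magnitude spectrum for rotation of the sampling order) can be sketched for a fixed angular sampling of surface points. This is an illustrative reduction, not the paper's full spherical-surface implementation:

```python
import numpy as np

def invariant_feature(points):
    """Translation/scale/rotation-tolerant descriptor from surface points
    sampled in a fixed angular order.

    1. Centroidal distances remove translation.
    2. Normalising by the maximum distance removes scale.
    3. The FFT magnitude spectrum is unchanged by a circular shift of
       the sampling order (rotation about the sampling axis).
    """
    pts = np.asarray(points, dtype=float)
    centre = pts.mean(axis=0)
    d = np.linalg.norm(pts - centre, axis=1)   # centroidal distances
    d = d / d.max()                            # scale normalisation
    return np.abs(np.fft.fft(d))               # rotation-invariant spectrum
```

Translating, scaling, or circularly shifting the sample order leaves the descriptor unchanged, which is exactly the invariance the abstract claims.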


A New Covert Visual Attention System by Object-based Spatiotemporal Cues and Their Dynamic Fusioned Saliency Map (객체기반의 시공간 단서와 이들의 동적결합 된돌출맵에 의한 상향식 인공시각주의 시스템)

  • Cheoi, Kyungjoo
    • Journal of Korea Multimedia Society
    • /
    • v.18 no.4
    • /
    • pp.460-472
    • /
    • 2015
  • Most previous visual attention systems find attention regions based on a saliency map combined from multiple extracted features; these systems differ in their methods of feature extraction and combination. This paper presents a new system with improved feature extraction for color and motion, and an improved weight-decision method for spatial and temporal features. Our system dynamically extracts the one color with the strongest response among two opponent colors, and detects moving objects rather than moving pixels. To combine spatial and temporal features, the proposed system sets the weights dynamically according to each feature's relative activity. Comparative results show that the suggested feature extraction and integration method improved the detection rate of attention regions.
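The dynamic fusion step could be sketched as below. The abstract does not define "relative activity", so the mean saliency value used here is purely an assumed stand-in, and the function is my own illustration:

```python
import numpy as np

def fuse_saliency(spatial, temporal):
    """Weight spatial and temporal saliency maps by their relative
    activity (here approximated by each map's mean) before summing."""
    a_s, a_t = spatial.mean(), temporal.mean()
    total = a_s + a_t + 1e-9                 # avoid division by zero
    w_s, w_t = a_s / total, a_t / total      # dynamic, data-driven weights
    return w_s * spatial + w_t * temporal
```

The more active map automatically dominates the fused saliency map, rather than using fixed hand-tuned weights.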

Vehicle Detection in Aerial Images Based on Hyper Feature Map in Deep Convolutional Network

  • Shen, Jiaquan;Liu, Ningzhong;Sun, Han;Tao, Xiaoli;Li, Qiangyi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.4
    • /
    • pp.1989-2011
    • /
    • 2019
  • Vehicle detection based on aerial images is an interesting and challenging research topic. Most traditional vehicle detection methods are based on sliding-window search algorithms, but these methods do not extract object features sufficiently and incur heavy computational costs. Recent studies have shown that convolutional neural network algorithms have made significant progress in computer vision, especially Faster R-CNN. However, this algorithm mainly detects objects in natural scenes and is not suitable for detecting small objects in aerial views. In this paper, an accurate and effective vehicle detection algorithm based on Faster R-CNN is proposed. Our method fuses a hyper feature map network with the Eltwise model and the Concat model, which is more conducive to the extraction of small-object features. Moreover, our model sets suitable anchor boxes based on the size of the objects, which also effectively improves detection performance. We evaluate the detection performance of our method on the Munich dataset and our collected dataset, with improvements in accuracy and effectiveness compared with other methods. Our model achieves an 82.2% recall rate and a 90.2% accuracy rate on the Munich dataset, increases of 2.5 and 1.3 percentage points respectively over state-of-the-art methods.
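Choosing anchor boxes to match object size, as this abstract describes, follows the standard Faster R-CNN recipe. A minimal sketch of anchor generation for one feature-map cell (the base size, scales, and ratios below are placeholder values, not the paper's settings):

```python
import numpy as np

def make_anchors(base, scales, ratios):
    """Generate anchor boxes (w, h) for one feature-map cell from a base
    size, a set of scales, and aspect ratios (w/h), at constant area
    per scale."""
    anchors = []
    for s in scales:
        for r in ratios:
            area = (base * s) ** 2
            w = np.sqrt(area * r)     # widen for r > 1
            h = np.sqrt(area / r)     # shorten correspondingly
            anchors.append((w, h))
    return np.array(anchors)
```

For small aerial objects, the point is simply to shrink `base`/`scales` so the anchors bracket typical vehicle sizes instead of natural-scene object sizes.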

NMF-Feature Extraction for Sound Classification (소리 분류를 위한 NMF특징 추출)

  • Yong-Choon Cho;Seungin Choi;Sung-Yang Bang
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2003.10a
    • /
    • pp.4-6
    • /
    • 2003
  • A holistic representation, such as sparse coding or independent component analysis (ICA), has been successfully applied to explain early auditory processing and sound classification. In contrast, a parts-based representation is an alternative way of understanding object recognition in the brain. In this paper, we employ non-negative matrix factorization (NMF) [1], which learns a parts-based representation, for sound classification. Feature extraction methods from spectrograms using NMF are explained. Experimental results show that NMF-based features improve sound classification performance over ICA-based features.
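A minimal sketch of NMF on a spectrogram, assuming the classic Lee-Seung multiplicative updates (the paper may use a different variant); the activation matrix H then provides per-frame features for a classifier:

```python
import numpy as np

def nmf(V, rank, n_iter=500, seed=0):
    """Factorise a non-negative matrix V (freq x time spectrogram) as
    V ~= W @ H using Lee-Seung multiplicative updates.

    W's columns are learned spectral parts; the columns of H are
    per-frame activations usable as classification features."""
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, rank)) + 1e-6
    H = rng.random((rank, T)) + 1e-6
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # update spectral parts
    return W, H
```

The multiplicative form keeps both factors non-negative throughout, which is what gives the representation its parts-based character.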


Object Tracking with Sparse Representation based on HOG and LBP Features

  • Boragule, Abhijeet;Yeo, JungYeon;Lee, GueeSang
    • International Journal of Contents
    • /
    • v.11 no.3
    • /
    • pp.47-53
    • /
    • 2015
  • Visual object tracking is a fundamental problem in the field of computer vision, as it needs a proper model to account for drastic appearance changes caused by shape, texture, and illumination variations. In this paper, we propose a feature-based visual-object-tracking method with a sparse representation. Generally, most appearance-based models use the gray-scale pixel values of the input image, but this might be insufficient to describe the target object under a variety of conditions. To obtain proper information about the target object, the following combination of features is exploited: first, features of the target templates are extracted using HOG (histogram of oriented gradients) and LBPs (local binary patterns); second, feature-based sparsity is attained by solving minimization problems, whereby the target object is represented by the selection with the minimum reconstruction error. The strengths of both features are exploited to enhance the overall performance of the tracker; furthermore, the proposed method is integrated with the particle-filter framework and achieves promising results on challenging tracking videos.
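The HOG-plus-LBP template description can be sketched with deliberately simplified, dependency-free versions of both descriptors. These are minimal stand-ins (a single global orientation histogram instead of cell/block HOG, and basic 8-neighbour LBP codes), not the paper's configuration:

```python
import numpy as np

def hog_feature(img, n_bins=9):
    """Minimal HOG-style descriptor: one orientation histogram of
    gradient magnitudes over the whole patch (no cells or blocks)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.degrees(np.arctan2(gy, gx)), 180.0)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 180), weights=mag)
    return hist / (hist.sum() + 1e-9)

def lbp_feature(img):
    """Minimal LBP descriptor: 8-neighbour binary patterns, histogrammed."""
    img = img.astype(float)
    c = img[1:-1, 1:-1]
    codes = np.zeros_like(c, dtype=int)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes += (n >= c).astype(int) << bit   # set bit where neighbour >= centre
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / (hist.sum() + 1e-9)

def combined_feature(img):
    """Concatenate both descriptors, as the tracker combines HOG and LBP."""
    return np.concatenate([hog_feature(img), lbp_feature(img)])
```

In the paper these template features then enter a sparse-coding minimisation; here the sketch stops at the combined descriptor itself.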

A study on automatic wear debris recognition by using particle feature extraction (입자 유형별 형상추출에 의한 마모입자 자동인식에 관한 연구)

  • ;;;Grigoriev, A.Y.
    • Proceedings of the Korean Society of Tribologists and Lubrication Engineers Conference
    • /
    • 1998.04a
    • /
    • pp.314-320
    • /
    • 1998
  • Wear debris morphology is closely related to the wear mode and mechanism that occurred; image recognition of wear debris is therefore a powerful tool in wear monitoring. However, it has usually required expert experience, and the results could be too subjective. Development of automatic tools for wear debris recognition is needed to solve this problem. In this work, an algorithm for automatic wear debris recognition was suggested and implemented in PC-based software. The presented method defines a characteristic 3-dimensional feature space in which typical types of wear debris are separately located by a knowledge-based system, and compares the similarity of the wear debris concerned. The 3-dimensional feature space was obtained from multiple feature vectors using a multi-dimensional scaling technique. The results showed that the presented automatic wear debris recognition was satisfactory in many application cases.
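The reduction of multiple feature vectors to a 3-dimensional feature space can be sketched with classical multi-dimensional scaling; whether the paper uses classical (metric) MDS or another variant is not stated, so treat this as one plausible instantiation:

```python
import numpy as np

def classical_mds(D, k=3):
    """Embed objects into a k-dimensional space from a pairwise distance
    matrix D via classical multi-dimensional scaling."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:k]         # top-k eigenpairs
    L = np.sqrt(np.maximum(vals[order], 0.0))  # clamp tiny negatives
    return vecs[:, order] * L
```

With k=3 this yields the kind of 3-dimensional feature space in which debris types can be separately located and compared.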
