• Title/Summary/Keyword: Object Feature Extraction

A Hybrid Proposed Framework for Object Detection and Classification

  • Aamir, Muhammad;Pu, Yi-Fei;Rahman, Ziaur;Abro, Waheed Ahmed;Naeem, Hamad;Ullah, Farhan;Badr, Aymen Mudheher
    • Journal of Information Processing Systems
    • /
    • v.14 no.5
    • /
    • pp.1176-1194
    • /
    • 2018
  • Classifying objects from image content is a major challenge in computer vision. Superpixel information can be used to detect and classify objects in an image based on their locations. In this paper, we propose a methodology to detect and classify object locations in an image using an enhanced bag of words (BOW) model. It calculates the initial position of each image segment using superpixels and then ranks the segments by region score. This information is then used to extract local and global features through a hybrid approach combining the Scale Invariant Feature Transform (SIFT) and GIST, respectively. To improve classification accuracy, a feature fusion technique combines the local and global feature vectors through a weight parameter. A support vector machine, a supervised classifier, is used for classification to evaluate the proposed methodology. The Pascal Visual Object Classes Challenge 2007 (VOC2007) dataset is used in the experiments. The proposed approach produces high-quality, class-independent object locations with a mean average best overlap (MABO) of 0.833 at 1,500 locations, resulting in a better detection rate. Compared with previous approaches, it yields better classification results for the non-rigid classes.
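
The fusion step described in this abstract, weighting local (SIFT-based BoW) and global (GIST-style) descriptors before an SVM, can be sketched roughly as follows. The descriptor dimensions, weight value, and synthetic data are placeholders, not the paper's actual pipeline.

```python
# A minimal sketch of weighted local/global feature fusion followed by SVM
# classification, assuming the SIFT-based BoW histograms and GIST-like global
# descriptors have already been computed (synthetic arrays stand in for them).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_samples = 200
local_feats = rng.random((n_samples, 500))   # e.g. 500-bin BoW histogram of SIFT words
global_feats = rng.random((n_samples, 512))  # e.g. 512-D GIST-style global descriptor
labels = rng.integers(0, 2, n_samples)       # two hypothetical object classes

def fuse(local_vec, global_vec, alpha=0.6):
    """Weighted concatenation of local and global descriptors.

    alpha is an illustrative weight parameter balancing the two cues;
    each part is L2-normalised before weighting.
    """
    l = local_vec / (np.linalg.norm(local_vec, axis=1, keepdims=True) + 1e-12)
    g = global_vec / (np.linalg.norm(global_vec, axis=1, keepdims=True) + 1e-12)
    return np.hstack([alpha * l, (1.0 - alpha) * g])

X = fuse(local_feats, global_feats)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```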

Extraction of Geometric Primitives from Point Cloud Data

  • Kim, Sung-Il;Ahn, Sung-Joon
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.2010-2014
    • /
    • 2005
  • Object detection and parameter estimation in point cloud data is a subject relevant to robotics, reverse engineering, computer vision, and sport mechanics. In this paper, a software tool is presented for fully automatic object detection and parameter estimation in unordered, incomplete, and error-contaminated point clouds with a large number of data points. The software consists of three algorithmic modules for object identification, point segmentation, and model fitting, respectively. The newly developed algorithms for orthogonal distance fitting (ODF) play a fundamental role in each of the three modules. The ODF algorithms estimate the model parameters by minimizing the sum of squared shortest distances between the model feature and the measurement points. Curvature analysis of local quadric surfaces fitted to small patches of the point cloud provides the necessary seed information for automatic model selection, point segmentation, and model fitting. The performance of the software on a variety of point cloud data will be demonstrated live.
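
ODF estimates model parameters by minimizing the sum of squared orthogonal (shortest) distances between the model and the points. Below is a minimal sketch for a single primitive; the choice of a sphere and the synthetic, noise-contaminated data are illustrative assumptions, not the paper's implementation.

```python
# Orthogonal distance fitting of a sphere: the residual for each point is its
# signed shortest distance to the sphere surface, ||p - c|| - r.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

# Synthetic points on a sphere of radius 2 centred at (1, -1, 0.5), with noise.
true_c, true_r = np.array([1.0, -1.0, 0.5]), 2.0
dirs = rng.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
points = true_c + true_r * dirs + 0.02 * rng.normal(size=(500, 3))

def residuals(params, pts):
    cx, cy, cz, r = params
    return np.linalg.norm(pts - [cx, cy, cz], axis=1) - r  # orthogonal distances

x0 = np.append(points.mean(axis=0), 1.0)     # crude seed: centroid and unit radius
fit = least_squares(residuals, x0, args=(points,))
print("centre:", fit.x[:3], "radius:", fit.x[3])
```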

Cascade Selective Window for Fast and Accurate Object Detection

  • Zhang, Shu;Cai, Yong;Xie, Mei
    • Journal of Electrical Engineering and Technology
    • /
    • v.10 no.3
    • /
    • pp.1227-1232
    • /
    • 2015
  • Several works have made sliding-window object detection faster; nevertheless, the computational demands remain prohibitive for many applications. This paper proposes a fast object detection method based on three strategies: a cascade classifier, selective window search, and fast feature extraction. Experimental results show that the proposed method outperforms the compared methods and achieves both high detection precision and low computational cost. Our approach runs at 17 ms per frame on 640×480 images while attaining state-of-the-art accuracy.
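
The sketch below illustrates only the generic cascade idea (cheap stages reject most windows before an expensive stage runs); the stage functions and thresholds are placeholders and do not reproduce the paper's actual classifiers or features.

```python
# Generic cascade: candidate windows pass a cheap test first, and only the
# survivors reach the more expensive classifier.
import numpy as np

rng = np.random.default_rng(2)
windows = rng.random((10_000, 64))          # hypothetical window feature vectors

def cheap_stage(feats, threshold=0.55):
    """Fast rejection test using only a handful of features."""
    return feats[:, :4].mean(axis=1) > threshold

def expensive_stage(feats, threshold=0.52):
    """Slower, more accurate scoring applied only to surviving windows."""
    return feats.mean(axis=1) > threshold

survivors = windows[cheap_stage(windows)]
detections = survivors[expensive_stage(survivors)]
print(f"{len(windows)} windows -> {len(survivors)} after stage 1 -> {len(detections)} detections")
```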

Simplified Representation of Image Contour

  • Yoo, Suk Won
    • International Journal of Advanced Culture Technology
    • /
    • v.6 no.4
    • /
    • pp.317-322
    • /
    • 2018
  • We apply edge detection to the input image to extract all edges of the object and then select only those edges that form the object's outline. By examining the positional relations between the pixels composing the outline, a simplified version of the outline is generated by removing unnecessary pixels while preserving the outline's connectivity. For each pixel of the outline, its direction is calculated from its positional relation to the next pixel. Consecutive pixels with the same direction are then grouped together and replaced by a single line segment instead of individual points. Among the line segments composing the outline, any segment shorter than a predefined minimum acceptable length is removed by merging it into one of its adjacent segments. The result is an outline composed only of line segments above a certain length.
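
The direction-grouping and short-segment merging steps can be sketched as follows, assuming the outline is already available as an ordered list of 8-connected pixel coordinates; the merge rule (absorb into the preceding segment) and the minimum length are illustrative choices.

```python
# Group runs of identical chain-code directions into line segments and merge
# segments shorter than min_len into their predecessor.
import numpy as np

def simplify_outline(pixels, min_len=3):
    pixels = np.asarray(pixels)
    # Direction (sign of dx, dy) from each pixel to the next.
    dirs = [tuple(d) for d in np.sign(np.diff(pixels, axis=0))]

    # Group consecutive pixels that share the same direction into segments.
    segments, start = [], 0
    for i in range(1, len(dirs)):
        if dirs[i] != dirs[i - 1]:
            segments.append((start, i))      # segment covers pixels[start..i]
            start = i
    segments.append((start, len(dirs)))

    # Merge segments shorter than min_len into the previous segment.
    merged = []
    for s, e in segments:
        if merged and (e - s) < min_len:
            merged[-1] = (merged[-1][0], e)
        else:
            merged.append((s, e))

    # Each remaining segment is represented by its two end points.
    return [(tuple(pixels[s]), tuple(pixels[e])) for s, e in merged]

# Toy example: an L-shaped outline fragment.
outline = [(0, i) for i in range(6)] + [(j, 5) for j in range(1, 6)]
print(simplify_outline(outline))
```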

Registration of Aerial Image with Lines using RANSAC Algorithm

  • Ahn, Y.;Shin, S.;Schenk, T.;Cho, W.
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.25 no.6_1
    • /
    • pp.529-536
    • /
    • 2007
  • Registration between image and object space is a fundamental step in photogrammetry and computer vision. With the rapid development of sensors (multi/hyperspectral, laser scanning, radar, etc.), the need for registration between different sensors is ever increasing. There are two important considerations in multi-sensor registration: sensor-invariant feature extraction and the establishment of correspondences between the extracted features. Since point-to-point correspondences do not exist between image and laser scanning data, higher-level entities are needed for extraction and correspondence. This requires, first, modifying the existing mathematical and geometric models, which were designed for point measurements, to handle line measurements and, second, adapting the matching scheme. In this research, linear features are selected as the sensor-invariant features and matching entities. They are incorporated into an extended collinearity equation for the registration problem known as photo resection, which calculates the exterior orientation parameters. The other emphasis is on finding matched entities with the aid of RANSAC (RANdom SAmple Consensus) in the absence of known correspondences. To relieve the computational load common to sampling-based schemes, a deterministic sampling technique that selects four line features from four sectors is applied.
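
The paper applies RANSAC (with deterministic sampling) to line-feature correspondences for photo resection; the sketch below only illustrates the generic RANSAC sample-fit-score loop on a much simpler toy problem (fitting a 2D line to points with outliers), not the extended collinearity formulation.

```python
# Generic RANSAC pattern: sample a minimal set, fit a model, count inliers,
# keep the best model.
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 200)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, 200)
y[:40] += rng.uniform(-20, 20, 40)              # contaminate with outliers
pts = np.column_stack([x, y])

def ransac_line(pts, n_iter=500, tol=0.3):
    best_inliers, best_model = None, None
    for _ in range(n_iter):
        p1, p2 = pts[rng.choice(len(pts), 2, replace=False)]
        if np.allclose(p1[0], p2[0]):
            continue                             # degenerate vertical sample
        a = (p2[1] - p1[1]) / (p2[0] - p1[0])    # slope from the 2-point sample
        b = p1[1] - a * p1[0]
        dist = np.abs(pts[:, 1] - (a * pts[:, 0] + b)) / np.sqrt(a * a + 1)
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (a, b)
    return best_model, best_inliers

model, inliers = ransac_line(pts)
print("slope, intercept:", model, "| inliers:", int(inliers.sum()))
```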

A scheme of extracting age-related wrinkle feature and skin age based on dermoscopic images (피부 현미경 영상을 통한 피부 특징 추출 및 피부 나이 도출 기법)

  • Choi, Young-Hwan;Hwang, Een-Jun
    • Journal of IKEEE
    • /
    • v.14 no.4
    • /
    • pp.332-338
    • /
    • 2010
  • Image feature extraction methods are usually performed as a pre-processing step in many applications, including image retrieval, object recognition, and image indexing. In image texture analysis in particular, texture feature extraction methods attempt to increase texture contrast so that the texture features are easier to extract from the image. One of the distinct textures in dermoscopic skin images is the wrinkle, and its features can provide useful information for age-related applications. In this paper, we propose a scheme to extract age-related features from skin images and improve the accuracy of skin age estimation.
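
The abstract does not name a specific contrast-enhancement operator; as a purely illustrative stand-in, the sketch below applies CLAHE (a common local contrast enhancer) to a synthetic gray patch before any texture features would be computed.

```python
# Local contrast enhancement of a synthetic skin-like patch; CLAHE is an
# assumed pre-processing choice, not the paper's stated method.
import numpy as np
import cv2

rng = np.random.default_rng(4)
skin_patch = rng.normal(128, 10, (256, 256)).clip(0, 255).astype(np.uint8)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(skin_patch)

# Higher local contrast generally makes ridge/wrinkle structure easier to pick
# up with gradient- or filter-bank-based texture features.
print("std before:", skin_patch.std(), "after:", enhanced.std())
```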

Trajectory Generation of a Moving Object for a Mobile Robot in Predictable Environment

  • Jin, Tae-Seok;Lee, Jang-Myung
    • International Journal of Precision Engineering and Manufacturing
    • /
    • v.5 no.1
    • /
    • pp.27-35
    • /
    • 2004
  • In machine vision with a single camera mounted on a mobile robot, detecting and tracking moving objects from a moving observer is a complex and computationally demanding task. In this paper, we propose a new scheme for a mobile robot to track and capture a moving object using camera images. The system consists of three modules: data acquisition, feature extraction and visual tracking, and trajectory generation. A single camera serves as the visual sensor, capturing image sequences of the moving object. The moving object is assumed to be a point object and is projected onto the image plane to form a geometric constraint equation that provides position data for the object based on the kinematics of the active camera. Uncertainties in the position estimate caused by the point-object assumption are compensated for with a Kalman filter. To generate the shortest-time trajectory for capturing the moving object, the linear and angular velocities are estimated and used. Experimental results of the mobile robot tracking and capturing the target object are presented.
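
A minimal constant-velocity Kalman filter of the kind used to smooth the point-object position estimates is sketched below; the state is (x, y, vx, vy) and the measurement is the noisy image-derived position. The noise covariances and the simulated motion are assumptions for illustration.

```python
# Constant-velocity Kalman filter: predict with the motion model, correct with
# the noisy position measurement.
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)     # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)      # only position is observed
Q = 0.01 * np.eye(4)                            # process noise (assumed)
R = 0.25 * np.eye(2)                            # measurement noise (assumed)

x = np.zeros(4)                                 # initial state
P = np.eye(4)

rng = np.random.default_rng(5)
for k in range(50):
    true_pos = np.array([0.5 * k * dt, 0.2 * k * dt])   # object at constant velocity
    z = true_pos + rng.normal(0, 0.5, 2)                # noisy camera measurement

    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P

print("estimated position:", x[:2], "estimated velocity:", x[2:])
```

The estimated velocity is exactly the quantity the abstract uses to plan the shortest-time capture trajectory.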

Contour and Feature Parameter Extraction for Moving Object Tracking in Traffic Scenes (도로영상에서 움직이는 물체 추적을 위한 윤곽선 및 특징 파라미터 추출)

  • Lee, Chul-Hun;Seol, Sung-Wook;Joo, Jae-Heum;Nam, Ki-Gon
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.37 no.1
    • /
    • pp.11-20
    • /
    • 2000
  • This paper presents a method for extracting contour and shape parameters for moving-object tracking in traffic scenes. The contour is extracted by applying a difference-image method to a reduced image, and features are extracted from the original image to improve tracking accuracy. Features such as the circle distribution, center moment, and maximum and minimum ratio are used, and the data association problem is solved with these features. Kalman filters are used for real-time moving-object tracking. Simulation results indicate that the proposed algorithm generates feature vectors good enough for multiple-vehicle tracking.
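
A rough sketch of the difference-image contour step on synthetic frames follows; the frame content, threshold, and use of OpenCV moments as a tracking feature are illustrative, not taken from the paper.

```python
# Frame differencing, thresholding, and contour extraction of the moving
# region, with the contour centroid computed from image moments.
import numpy as np
import cv2

frame1 = np.zeros((120, 160), dtype=np.uint8)
frame2 = frame1.copy()
cv2.rectangle(frame2, (60, 40), (100, 80), 255, -1)     # a "moving vehicle" appears

diff = cv2.absdiff(frame2, frame1)
_, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
# OpenCV >= 4 return signature: (contours, hierarchy)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    m = cv2.moments(c)                                   # moments as tracking features
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    print("contour centroid:", (cx, cy), "area:", cv2.contourArea(c))
```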

Activity Object Detection Based on Improved Faster R-CNN

  • Zhang, Ning;Feng, Yiran;Lee, Eung-Joo
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.3
    • /
    • pp.416-422
    • /
    • 2021
  • Due to large intra-class differences in human activity, large inter-class similarity, and problems of viewing angle and occlusion, manual feature extraction is difficult and the detection rate for human behavior is low. To better address these problems, this paper proposes an improved Faster R-CNN-based detection algorithm. It achieves multi-object recognition and localization through a two-stage detection network and replaces the original feature extraction module with DenseNet, which fuses multi-level feature information, increases network depth, and avoids vanishing gradients. Meanwhile, the proposal merging strategy is improved with Soft-NMS, in which an attenuation function replaces the conventional NMS algorithm, avoiding missed detections of adjacent or overlapping objects and enhancing detection accuracy when multiple objects are present. In the experiments, the improved Faster R-CNN method achieves an 84.7% target detection result, an improvement over the other methods, demonstrating that the proposed recognition method has significant advantages and potential.
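
Soft-NMS replaces hard suppression with score attenuation: instead of discarding every box whose overlap with the current top box exceeds a threshold, its score is decayed, which helps keep adjacent or overlapping objects. The sketch below uses a Gaussian penalty; the parameter values and example boxes are illustrative.

```python
# Minimal Soft-NMS with Gaussian score decay.
import numpy as np

def box_area(b):
    """Area of boxes given as (x1, y1, x2, y2); works for one box or an array."""
    b = np.asarray(b, dtype=float)
    return (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])

def iou(box, boxes):
    """IoU between one box and an array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    return inter / (box_area(box) + box_area(boxes) - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.1):
    """Keep the top box, then decay (rather than discard) overlapping scores."""
    boxes = np.asarray(boxes, dtype=float).copy()
    scores = np.asarray(scores, dtype=float).copy()
    keep = []
    while len(boxes):
        i = scores.argmax()
        keep.append((boxes[i], scores[i]))
        top = boxes[i]
        boxes = np.delete(boxes, i, axis=0)
        scores = np.delete(scores, i)
        if len(boxes):
            scores = scores * np.exp(-(iou(top, boxes) ** 2) / sigma)  # Gaussian decay
            mask = scores > score_thresh
            boxes, scores = boxes[mask], scores[mask]
    return keep

boxes = [[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 140, 140]]
scores = [0.9, 0.85, 0.8]
for b, s in soft_nms(boxes, scores):
    print(b, round(float(s), 3))
```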

A Stereo Image Recognition-Based Method for measuring the volume of 3D Object (스테레오 영상 인식에 기반한 3D 물체의 부피계측방법)

  • Jeong, Yun-Su;Lee, Hae-Won;Kim, Jin-Seok;Won, Jong-Un
    • The KIPS Transactions:PartB
    • /
    • v.9B no.2
    • /
    • pp.237-244
    • /
    • 2002
  • In this paper, we propose a stereo image recognition-based method for measuring the volume of the rectangular parallelepiped. The method measures the volume from two images captured with two CCD (charge coupled device) cameras by sequential processes such as ROI (region of interest) extraction, feature extraction, and stereo matching-based vortex recognition. The proposed method makes it possible to measure the volume of the 3D object at high speed because only a few features are used in the process of stereo matching. From experimental results, it is demonstrated that this method is very effective for measuring the volume of the rectangular parallelepiped at high speed.