• Title/Summary/Keyword: Feature extraction algorithm


Automatic Extraction and Measurement of Visual Features of Mushroom (Lentinus edodes L.) (표고 외관 특징점의 자동 추출 및 측정)

  • Hwang, Heon;Lee, Yong-Guk
    • Journal of Bio-Environment Control / v.1 no.1 / pp.37-51 / 1992
  • Quantizing and extracting visual features of the mushroom (Lentinus edodes L.) are crucial to sorting and grading automation, growth-state measurement, and dried-performance indexing. A computer image processing system was utilized for the extraction and measurement of visual features of the front and back sides of the mushroom. The image processing system is composed of an IBM PC compatible 386DX, an ITEX PCVISION Plus frame grabber, a B/W CCD camera, a VGA color graphic monitor, and an image-output RGB monitor. In this paper, an automatic thresholding algorithm was developed to yield a segmented binary image representing the skin states of the front and back sides. An eight-directional Freeman chain coding was modified to solve edge disconnectivity by gradually expanding the mask size from 3×3 to 9×9. Real-scaled geometric quantities of the object were extracted directly from the 8-directional chain elements. The external shape of the mushroom was analyzed and converted to quantitative feature patterns. Efficient algorithms for the extraction of the selected feature patterns and the recognition of the front and back sides were developed. The developed algorithms were coded in a menu-driven way using MS C Ver. 6.0, PC VISION PLUS library functions, and VGA graphic functions.
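As an illustration of the chain-coding step described above (a minimal sketch, not the paper's code), real-scaled geometric quantities such as perimeter and enclosed area can be read directly off an 8-directional Freeman chain: even codes move one pixel along an axis, odd codes move √2 pixels diagonally, and the traversed vertices feed the shoelace formula.

```python
import math

# 8-directional Freeman chain code: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
STEPS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def chain_perimeter(chain, pixel_size_mm=1.0):
    """Real-scaled perimeter: even codes advance 1 pixel, odd codes sqrt(2)."""
    return pixel_size_mm * sum(math.sqrt(2) if c % 2 else 1.0 for c in chain)

def chain_area(chain, start=(0, 0)):
    """Enclosed area via the shoelace formula over the chain's vertices."""
    r, c = start
    area2 = 0
    for code in chain:
        dr, dc = STEPS[code]
        area2 += c * (r + dr) - (c + dc) * r  # cross product of successive vertices
        r, c = r + dr, c + dc
    return abs(area2) / 2.0
```

For a closed 2×2-pixel square boundary (chain `[0,0,6,6,4,4,2,2]`), this yields perimeter 8 and area 4, scaled by the (hypothetical) pixel size.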


Comparative Study of Corner and Feature Extractors for Real-Time Object Recognition in Image Processing

  • Mohapatra, Arpita;Sarangi, Sunita;Patnaik, Srikanta;Sabut, Sukant
    • Journal of information and communication convergence engineering / v.12 no.4 / pp.263-270 / 2014
  • Corner detection and feature extraction are essential aspects of computer vision problems such as object recognition and tracking. Feature detectors such as the Scale Invariant Feature Transform (SIFT) yield high-quality features but are computationally intensive for use in real-time applications. The Features from Accelerated Segment Test (FAST) detector provides faster feature computation by extracting only corner information when recognising an object. In this paper we have analyzed efficient object detection algorithms with respect to efficiency, quality and robustness by comparing the characteristics of corner detectors and feature extractors. The simulated results show that, compared to the conventional SIFT algorithm, an object recognition system based on the FAST corner detector yields increased speed with little performance degradation. The average time to find keypoints with the SIFT method is about 0.651 seconds for extracting 2169 keypoints, while the average time to find corner points with the FAST method is about 0.116 seconds for detecting 1714 keypoints at threshold 30. Thus the FAST method detects corner points faster, with image quality sufficient for object recognition.
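The FAST segment test the abstract compares against SIFT can be sketched in a few lines (an illustrative pure-numpy version, not the paper's implementation): a pixel is a corner if at least n contiguous pixels on a radius-3 Bresenham ring are all brighter, or all darker, than the centre by a threshold t.

```python
import numpy as np

# Bresenham circle of radius 3 (the 16-pixel ring used by FAST), as (dr, dc)
RING = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
        (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(img, r, c, t=30, n=9):
    """FAST-n segment test: corner iff >= n contiguous ring pixels are all
    brighter or all darker than the centre by more than threshold t."""
    center = int(img[r, c])
    ring = [int(img[r + dr, c + dc]) for dr, dc in RING]
    for sign in (1, -1):                      # brighter run, then darker run
        flags = [sign * (p - center) > t for p in ring]
        flags = flags + flags                 # duplicate to handle wraparound
        run = best = 0
        for f in flags:
            run = run + 1 if f else 0
            best = max(best, run)
        if min(best, len(RING)) >= n:
            return True
    return False
```

On a synthetic image containing one bright quadrant, the quadrant's corner pixel passes the test while a pixel in the middle of an edge does not, which is exactly why FAST responds to corners rather than edges.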

Discriminative Power Feature Selection Method for Motor Imagery EEG Classification in Brain Computer Interface Systems

  • Yu, XinYang;Park, Seung-Min;Ko, Kwang-Eun;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / v.13 no.1 / pp.12-18 / 2013
  • Motor imagery classification in electroencephalography (EEG)-based brain-computer interface (BCI) systems is an important research area. To reduce the complexity of the classification, selected power bands and electrode channels have been widely used to extract and select features from raw EEG signals, but there is still a loss in classification accuracy in the state-of-the-art approaches. To solve this problem, we propose a discriminative feature extraction algorithm based on power bands with principal component analysis (PCA). First, the raw EEG signals from the motor cortex area were filtered using a bandpass filter over the μ and β bands. This research considered the power bands within a 0.4-second epoch to select the optimal feature space region. Next, the total feature dimensions were reduced by PCA and transformed into a final feature vector set. The selected features were classified by applying a support vector machine (SVM). The proposed method was compared with a state-of-the-art power-band feature and shown to improve classification accuracy.
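The dimensionality-reduction step in the pipeline above can be sketched as follows (a minimal numpy illustration of PCA on a trials × features matrix; the band-power extraction and SVM stages are omitted):

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature vectors (trials x dims) onto the top-k principal axes."""
    Xc = X - X.mean(axis=0)                    # centre each feature dimension
    cov = np.cov(Xc, rowvar=False)             # dims x dims covariance
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    top = vecs[:, np.argsort(vals)[::-1][:k]]  # k largest-variance directions
    return Xc @ top
```

The reduced vectors would then be fed to a classifier such as an SVM; the first component carries the most variance by construction.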

Feature Extraction of Handwritten Numerals using Projection Runlength (Projection Runlength를 이용한 필기체 숫자의 특징추출)

  • Park, Joong-Jo;Jung, Soon-Won;Park, Young-Hwan;Kim, Kyoung-Min
    • Journal of Institute of Control, Robotics and Systems / v.14 no.8 / pp.818-823 / 2008
  • In this paper, we propose a feature extraction method which extracts directional features of handwritten numerals by using the projection runlength. Our directional features are obtained from four directional images, each of which contains the horizontal, vertical, right-diagonal and left-diagonal lines of the entire numeral shape, respectively. A conventional method which extracts directional features by using Kirsch masks generates edge-shaped double-line directional images for the four directions, whereas our method uses the projections and their runlengths to produce single-line directional images for the four directions. To obtain the directional projections from a numeral image, some preprocessing steps such as thinning and dilation are required, but the shapes of the resulting directional lines are more similar to the strokes of the input numerals. Four 4×4 directional features of a numeral are obtained from the four directional line images through a zoning method. By using a hybrid feature, made by combining our feature with the conventional mesh feature, Kirsch directional feature and concavity feature, higher recognition rates for handwritten numerals can be obtained. For the recognition test with the given features, we use a multi-layer perceptron neural network classifier trained with the back-propagation algorithm. Through experiments with the handwritten numeral database of Concordia University, we achieved a recognition rate of 97.85%.
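The zoning step that turns each directional line image into a 4×4 feature block can be sketched like this (an illustrative version under the assumption that each zone's feature is its normalised on-pixel count; the projection-runlength construction of the directional images themselves is not shown):

```python
import numpy as np

def zoning_features(direction_img, zones=4):
    """Split a binary directional image into zones x zones blocks and use the
    normalised 'on'-pixel count of each block as one feature value."""
    h, w = direction_img.shape
    zh, zw = h // zones, w // zones
    feats = np.zeros((zones, zones))
    for i in range(zones):
        for j in range(zones):
            block = direction_img[i*zh:(i+1)*zh, j*zw:(j+1)*zw]
            feats[i, j] = block.sum() / block.size
    return feats
```

Applying this to the four directional images yields the four 4×4 feature blocks (64 values) that the abstract combines with mesh, Kirsch and concavity features.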

Disease Region Feature Extraction of Medical Image using Wavelet (Wavelet에 의한 의용영상의 병소부위 특징추출)

  • 이상복;이주신
    • Journal of the Korea Society of Computer and Information / v.3 no.3 / pp.73-81 / 1998
  • In this paper, we suggest a method for extracting features of disease regions in medical images using wavelets. In the preprocessing, the shape information of the medical image is selected by performing the discrete wavelet transform (DWT) with a four-level coefficient matrix. Based on the characteristics of the coefficient matrix, 96 feature parameters are calculated as follows. First, 32 feature parameters with low-frequency characteristics are obtained. Second, 16 horizontal feature parameters are calculated from the coefficient matrix of the horizontal high frequency. Third, 16 vertical feature parameters are calculated using the same procedure with respect to the vertical high frequency. Finally, 32 feature parameters are obtained from the coefficient matrix of the diagonal high frequency. Consequently, 96 feature parameters are extracted. The suggested algorithm can be used to implement an automatic recognition system and to increase the efficiency of a picture communication system.
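One level of the DWT decomposition that produces the approximation and horizontal/vertical/diagonal high-frequency bands can be sketched with the Haar wavelet (an illustrative stand-in; the paper does not specify its wavelet, and a four-level transform would simply repeat this on the LL band):

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar wavelet transform: returns the LL
    (approximation) band and the LH/HL/HH (horizontal/vertical/diagonal
    high-frequency) detail bands, each at half resolution."""
    a = img.astype(float)
    # filter along columns: average / difference of adjacent pixel pairs
    lo = (a[:, 0::2] + a[:, 1::2]) / 2
    hi = (a[:, 0::2] - a[:, 1::2]) / 2
    # then along rows
    ll = (lo[0::2] + lo[1::2]) / 2
    lh = (lo[0::2] - lo[1::2]) / 2
    hl = (hi[0::2] + hi[1::2]) / 2
    hh = (hi[0::2] - hi[1::2]) / 2
    return ll, lh, hl, hh
```

Feature parameters (e.g. band means or energies) would then be computed from each band's coefficient matrix; on a constant image all three detail bands are zero, as expected.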


Panoramic Image Stitching using Feature Extracting and Matching on Mobile Device (모바일 기기에서 특징적 추출과 정합을 활용한 파노라마 이미지 스티칭)

  • Lee, Yong-Hwan;Kim, Heung-Jun
    • Journal of the Semiconductor & Display Technology / v.15 no.4 / pp.97-102 / 2016
  • Image stitching is the process of combining two or more images with overlapping areas to create a panorama of the input images; it is an active research area in computer vision, especially in the field of augmented reality with 360-degree images. Image stitching techniques can be categorized into two general approaches: direct and feature-based techniques. Direct techniques compare all the pixel intensities of the images with each other, while feature-based approaches aim to determine a relationship between the images through distinct features extracted from the images. This paper proposes a novel image stitching method based on feature pixels with an approximated clustering filter. When the features are extracted from the input images, we evaluate the extracted minutiae and apply an effective feature extraction algorithm to improve the processing time. Evaluation of the results shows that the proposed method is accurate and effective compared with previous approaches.
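The feature-matching stage of a feature-based stitcher can be sketched as nearest-neighbour matching with Lowe's ratio test (an illustrative numpy version on toy descriptors; the paper's clustering filter is not reproduced):

```python
import numpy as np

def match_features(desc1, desc2, ratio=0.8):
    """Nearest-neighbour descriptor matching with the ratio test: keep a
    match only when the best distance is clearly smaller than the
    second-best, which rejects ambiguous correspondences."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, best))
    return matches
```

The surviving correspondences would then feed the geometric alignment (e.g. homography estimation) that blends the overlapping images into a panorama.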

Terrain Feature Extraction and Classification using Contact Sensor Data (접촉식 센서 데이터를 이용한 지질 특성 추출 및 지질 분류)

  • Park, Byoung-Gon;Kim, Ja-Young;Lee, Ji-Hong
    • The Journal of Korea Robotics Society / v.7 no.3 / pp.171-181 / 2012
  • Outdoor mobile robots face various terrain types with different characteristics. To run safely and carry out its mission, a mobile robot should recognize the terrain type and its physical and geometric characteristics, and it is essential to control motion appropriately for each terrain's characteristics. One way to determine the terrain type is to use non-contact sensor data such as vision and laser sensors. Another way is to use contact sensor data, such as the slope of the body, vibration, and motor current, which are reaction data from the ground to the tires. In this paper, we present experimental results on terrain classification using contact sensor data. We built a mobile robot for collecting contact sensor data and collected data from four experimental terrains. Through analysis of the collected data, we suggest a new method of terrain feature extraction considering physical characteristics and confirm that the proposed method can classify the four experimental terrains. We also confirmed, through a back-propagation learning algorithm, that the proposed method and the terrain feature extraction method based on the Fast Fourier Transform (FFT) typically used in previous studies have similar classification performance. However, the two methods differ in the amount of data carrying terrain feature information, so we defined an index, determined by the amount of terrain feature information and the classification error rate, that evaluates classification efficiency. Comparing the two methods with this index showed that our method is more efficient than the existing one.
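The FFT-based baseline mentioned above can be sketched as band-energy features of a vibration signal (a minimal illustration; the sampling rate and band edges here are assumptions, not the paper's values):

```python
import numpy as np

def band_energy_features(signal, fs, bands):
    """Terrain features from a vibration signal: the energy of the FFT
    magnitude spectrum inside each frequency band, normalised by the
    total energy so the features are amplitude-invariant."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    total = spec.sum()
    return [spec[(freqs >= lo) & (freqs < hi)].sum() / total for lo, hi in bands]
```

A pure 10 Hz vibration concentrates essentially all of its energy in a band around 10 Hz, so different terrains with different dominant vibration frequencies yield distinguishable feature vectors for the classifier.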

Vehicle Detection in Aerial Images Based on Hyper Feature Map in Deep Convolutional Network

  • Shen, Jiaquan;Liu, Ningzhong;Sun, Han;Tao, Xiaoli;Li, Qiangyi
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.4 / pp.1989-2011 / 2019
  • Vehicle detection based on aerial images is an interesting and challenging research topic. Most traditional vehicle detection methods are based on sliding-window search, but these methods are not sufficient for extracting object features and carry heavy computational costs. Recent studies have shown that convolutional neural network algorithms have made significant progress in computer vision, especially Faster R-CNN. However, this algorithm mainly detects objects in natural scenes and is not suitable for detecting small objects in aerial views. In this paper, an accurate and effective vehicle detection algorithm based on Faster R-CNN is proposed. Our method fuses a hyper feature map network with Eltwise and Concat models, which is more conducive to the extraction of small-object features. Moreover, our model sets suitable anchor boxes based on the size of the objects, which also effectively improves detection performance. We evaluate the detection performance of our method on the Munich dataset and our collected dataset, with improvements in accuracy and effectiveness compared with other methods. Our model achieves an 82.2% recall rate and a 90.2% accuracy rate on the Munich dataset, improvements of 2.5 and 1.3 percentage points, respectively, over state-of-the-art methods.
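The idea of sizing anchor boxes to the expected objects can be sketched as follows (an illustrative generator; the base sizes and aspect ratios below are assumptions, not the values tuned in the paper):

```python
import numpy as np

def make_anchors(base_sizes, ratios):
    """Generate (x1, y1, x2, y2) anchor boxes centred at the origin, one per
    (base size, aspect ratio) pair; each anchor preserves area s*s while
    its width/height ratio equals r."""
    anchors = []
    for s in base_sizes:
        for r in ratios:
            w = s * np.sqrt(r)
            h = s / np.sqrt(r)
            anchors.append((-w / 2, -h / 2, w / 2, h / 2))
    return np.array(anchors)
```

In a Faster R-CNN-style detector these templates are replicated at every feature-map location; choosing small base sizes is what adapts the detector to small vehicles in aerial views.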

Depth Map Estimation Model Using 3D Feature Volume (3차원 특징볼륨을 이용한 깊이영상 생성 모델)

  • Shin, Soo-Yeon;Kim, Dong-Myung;Suh, Jae-Won
    • The Journal of the Korea Contents Association / v.18 no.11 / pp.447-454 / 2018
  • This paper proposes a depth image generation algorithm for stereo images using a deep learning model composed of a convolutional neural network (CNN). The proposed algorithm consists of a feature extraction unit, which extracts the main features of each parallax image, and a depth learning unit, which learns the parallax information using the extracted features. First, the feature extraction unit extracts a feature map for each parallax image through the Xception module and the ASPP (atrous spatial pyramid pooling) module, which are composed of 2D CNN layers. Then, the feature maps for the parallax images are accumulated in 3D form according to the disparity, and the depth image is estimated after passing through the depth learning unit, which learns the depth estimation weights through a 3D CNN. The proposed algorithm estimates the depth of object regions more accurately than other algorithms.
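The accumulation of 2D feature maps into a 3D volume indexed by disparity can be sketched like this (an illustrative concatenation-style cost volume in numpy; the paper's exact stacking scheme and dimensions are not specified in the abstract):

```python
import numpy as np

def build_feature_volume(feat_left, feat_right, max_disp):
    """Stack left features with right features shifted by each candidate
    disparity, giving a (disparity, 2*channels, H, W) volume that a 3D CNN
    can then process to estimate depth."""
    c, h, w = feat_left.shape
    volume = np.zeros((max_disp, 2 * c, h, w))
    for d in range(max_disp):
        shifted = np.zeros_like(feat_right)
        if d == 0:
            shifted = feat_right
        else:
            shifted[:, :, d:] = feat_right[:, :, :-d]  # shift right view by d
        volume[d] = np.concatenate([feat_left, shifted], axis=0)
    return volume
```

Each slice of the volume pairs the left view with the right view at one disparity hypothesis, so 3D convolutions can compare evidence across disparities as well as across space.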

An Algorithm for Pose Estimation of a Robot using Scale-Invariant Feature Transform

  • Lee, Jae-Kwang;Huh, Uk-Youl;Kim, Hak-Il
    • Proceedings of the KIEE Conference / 2004.11c / pp.517-519 / 2004
  • This paper describes an approach to estimating a robot's pose from an image. The pose estimation algorithm can be broken down into three stages: extracting scale-invariant features, matching these features, and calculating affine invariants. In the first step, the robot-mounted mono camera captures an image of the environment, feature extraction is executed on the captured image, and the extracted features are recorded in a database. In the matching stage, a Random Sample Consensus (RANSAC) method is employed to match these features. After matching, the robot pose is estimated from the positions of the features by calculating affine invariants. The algorithm is implemented and demonstrated in a MATLAB program.
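The RANSAC stage can be sketched as robust estimation of a 2-D affine transform between matched feature positions (a minimal numpy illustration, not the paper's MATLAB code; the iteration count and inlier tolerance are assumptions):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform (a 2x3 matrix) from point pairs."""
    n = len(src)
    A = np.zeros((2 * n, 6))
    b = dst.reshape(-1)
    A[0::2, 0:2] = src
    A[0::2, 2] = 1
    A[1::2, 3:5] = src
    A[1::2, 5] = 1
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params.reshape(2, 3)

def ransac_affine(src, dst, iters=200, tol=1.0, seed=0):
    """RANSAC: fit affines to random 3-point samples, keep the model with
    the most inliers, then refit on all inliers to reject mismatches."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        M = fit_affine(src[idx], dst[idx])
        pred = src @ M[:, :2].T + M[:, 2]
        err = np.linalg.norm(pred - dst, axis=1)
        inliers = err < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_affine(src[best_inliers], dst[best_inliers])
```

Because each 3-point sample determines an affine transform exactly, a sample drawn entirely from correct matches recovers the true motion even when some feature matches are gross outliers.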
