• Title/Summary/Keyword: AdaBoost training (아다부스트 훈련)

Search Results: 3

License Plate Detection with Improved Adaboost Learning based on Newton's Optimization and MCT (뉴턴 최적화를 통해 개선된 아다부스트 훈련과 MCT 특징을 이용한 번호판 검출)

  • Lee, Young-Hyun;Kim, Dae-Hun;Ko, Han-Seok
    • Journal of the Korea Society of Computer and Information / v.17 no.12 / pp.71-82 / 2012
  • In this paper, we propose a license plate detection method with improved Adaboost learning and MCT (Modified Census Transform) features. The MCT represents local structure patterns as integer-valued features, which are robust to illumination change and memory-efficient. However, since these feature values are discrete integers, a lookup table is needed to design a weak classifier for Adaboost learning. Previous research efforts have focused on minimizing an exponential criterion for Adaboost optimization. In this paper, a license plate detection method is proposed that combines MCT features with improved Adaboost learning, in which Newton's optimization is applied to the exponential criterion. Experimental results on license plate patch images and field images demonstrate that the proposed method yields higher detection rates with fewer false positives than the conventional method using the original Adaboost learning.
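The MCT described above can be sketched as follows: each pixel's 3×3 neighborhood is compared against the neighborhood mean, and the nine comparison bits are packed into one integer in 0..511. Because only relative comparisons are used, the code is invariant to additive illumination shifts. This is a minimal illustrative sketch, not the paper's implementation; border handling (here: borders are skipped) is an assumption.

```python
import numpy as np

def mct(image):
    """Modified Census Transform (sketch): for each interior pixel,
    compare the 3x3 neighborhood to the neighborhood mean and pack
    the nine comparison bits into a single integer in [0, 511].
    Border pixels are left at 0 for simplicity (assumption)."""
    h, w = image.shape
    out = np.zeros((h, w), dtype=np.int32)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = image[y - 1:y + 2, x - 1:x + 2].astype(np.float64)
            bits = (patch.ravel() > patch.mean()).astype(np.int32)
            # pack 9 comparison bits into one integer feature value
            out[y, x] = int((bits * (1 << np.arange(9))).sum())
    return out
```

Since the resulting feature values are discrete integers, a weak classifier over them reduces to a 512-entry lookup table, which is what makes the Adaboost weak-learner design in the paper table-based.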

An Improved AdaBoost Algorithm by Clustering Samples (샘플 군집화를 이용한 개선된 아다부스트 알고리즘)

  • Baek, Yeul-Min;Kim, Joong-Geun;Kim, Whoi-Yul
    • Journal of Broadcast Engineering / v.18 no.4 / pp.643-646 / 2013
  • We present an improved AdaBoost algorithm that avoids the overfitting phenomenon. AdaBoost is widely regarded as one of the best solutions for object detection. However, AdaBoost tends to overfit when the training dataset contains noisy samples. To avoid this, the proposed method divides the positive samples into K clusters using the k-means algorithm and then uses only one cluster to minimize the training error at each iteration of weak learning. This prevents excessive partitioning of the samples. Also, noisy samples are excluded from the training of the weak learners, so the overfitting phenomenon is effectively reduced. In our experiments, the proposed method shows better classification and generalization ability than conventional boosting algorithms on various real-world datasets.
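The scheme above can be sketched with decision stumps: positives are split into K clusters with plain k-means, and each boosting round fits the weak learner on one cluster plus all negatives, so positives outside the chosen cluster cannot drive that round's fit. The cluster-selection rule (cycling through clusters) and the stump weak learner are assumptions for illustration, not the paper's exact choices.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    # plain Lloyd's algorithm (any k-means implementation would do)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def stump_train(X, y, w):
    # best single-feature threshold stump under sample weights w
    best = (np.inf, 0, 0.0, 1)
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for s in (1, -1):
                err = w[np.where(X[:, f] > t, s, -s) != y].sum()
                if err < best[0]:
                    best = (err, f, t, s)
    return best

def cluster_adaboost(Xp, Xn, k=3, rounds=6):
    """Sketch of the clustered-AdaBoost idea: split positives into k
    clusters; each round, train the weak learner on one cluster plus
    all negatives (cycling through clusters is an assumed rule)."""
    labels = kmeans(Xp, k)
    X = np.vstack([Xp, Xn])
    y = np.r_[np.ones(len(Xp)), -np.ones(len(Xn))]
    w = np.ones(len(y)) / len(y)
    H = []
    for r in range(rounds):
        mask = np.r_[labels == r % k, np.ones(len(Xn), bool)]
        err, f, t, s = stump_train(X[mask], y[mask], w[mask] / w[mask].sum())
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
        # standard AdaBoost weight update over *all* samples
        w *= np.exp(-alpha * y * np.where(X[:, f] > t, s, -s))
        w /= w.sum()
        H.append((alpha, f, t, s))
    return H

def predict(H, X):
    score = sum(a * np.where(X[:, f] > t, s, -s) for a, f, t, s in H)
    return np.sign(score)
```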

Pedestrian Recognition using Adaboost Algorithm based on Cascade Method by Curvature and HOG (곡률과 HOG에 의한 연속 방법에 기반한 아다부스트 알고리즘을 이용한 보행자 인식)

  • Lee, Yeung-Hak;Ko, Joo-Young;Suk, Jung-Hee;Roh, Tae-Moon;Shim, Jae-Chang
    • Journal of KIISE: Computing Practices and Letters / v.16 no.6 / pp.654-662 / 2010
  • In this paper, we suggest an advanced algorithm for pedestrian/non-pedestrian recognition using a two-stage cascade method, which applies the Adaboost algorithm to build a strong classifier from weak classifiers. First, we extract two feature vectors: (i) the Histogram of Oriented Gradients (HOG), which includes gradient orientation and magnitude information; and (ii) curvature-HOG, which is based on four different curvature features per pixel. A strong classifier is then obtained from weak classifiers for each feature, giving a composite recognition method that uses both HOG and curvature-HOG. In the proposed method, one feature vector and one strong classifier are used for the first stage of recognition. For images the first stage fails to recognize, the other feature and strong classifier are used in the second stage. Our experiments show that the proposed algorithm achieves a higher recognition rate than the traditional method.
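The two-stage routing described above can be sketched as follows. The feature extractors and strong classifiers are placeholders, and treating "recognition failure" as a low-confidence (small-magnitude) stage-1 score is an assumption about the paper's criterion; the point is only that stage 2, with the second feature, runs exclusively on samples stage 1 could not decide.

```python
def cascade_predict(x, feat1, clf1, feat2, clf2, threshold=0.5):
    """Two-stage cascade sketch: stage 1 uses the first feature
    (e.g. HOG) and its strong classifier; samples it is not confident
    about fall through to stage 2 with the second feature
    (e.g. curvature-HOG). All callables are hypothetical placeholders."""
    score = clf1(feat1(x))
    if abs(score) >= threshold:
        # stage 1 is confident: accept its decision
        return 1 if score > 0 else -1
    # stage 1 "failed": re-classify with the second feature/classifier
    return 1 if clf2(feat2(x)) > 0 else -1
```

A usage example with trivial stand-ins: `cascade_predict(x, hog, strong1, curv_hog, strong2)` returns +1 for pedestrian and -1 for non-pedestrian, consulting `strong2` only when `strong1`'s score falls inside the uncertainty band.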