• Title/Summary/Keyword: Haar Classifier


Application of Multi-Class AdaBoost Algorithm to Terrain Classification of Satellite Images

  • Nguyen, Ngoc-Hoa;Woo, Dong-Min
    • Journal of IKEEE / v.18 no.4 / pp.536-543 / 2014
  • Terrain classification is still a challenging problem in image processing, especially for high-resolution satellite images. Well-known obstacles include low accuracy in detecting targets, particularly man-made structures such as buildings and roads. In this paper, we present an efficient approach to classify and detect building footprints, foliage, grass, and roads from high-resolution grayscale satellite images. Our contribution is to build a strong classifier using AdaBoost based on a combination of co-occurrence and Haar-like features. We expect the inclusion of Haar-like features to improve classification performance on man-made structures, since Haar-like features are extracted from corner and rectangle features. Moreover, the AdaBoost algorithm selects only critical features and generates an extremely efficient classifier. Experimental results indicate that the classification accuracy of the AdaBoost classifier is much higher than that of a conventional classifier using the back-propagation algorithm, and that the inclusion of Haar-like features significantly improves the classification accuracy. The accuracy of the proposed method is 98.4% for target detection and 92.8% for classification on high-resolution satellite images.
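Since the abstract gives no implementation details, the sketch below covers only the boosting stage under stated assumptions: the co-occurrence and Haar-like features are assumed to be precomputed into per-patch vectors `X`, the four terrain classes are encoded in `y`, and scikit-learn's `AdaBoostClassifier` with decision-stump weak learners stands in for the paper's multi-class AdaBoost.

```python
# Minimal sketch of multi-class AdaBoost over precomputed terrain features.
# X and y are placeholders; in the paper they would hold co-occurrence and
# Haar-like feature vectors and the labels {building, foliage, grass, road}.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))        # placeholder feature vectors
y = rng.integers(0, 4, size=1000)      # 4 terrain classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# The default weak learner is a depth-1 decision tree (a decision stump),
# so each boosting round effectively selects one critical feature.
clf = AdaBoostClassifier(n_estimators=200)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```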

A Study on the Eye-line Detection from Facial Image taken by Smart Phone (스마트 폰에서 취득한 얼굴영상에서 아이라인 검출에 관한 연구)

  • Koo, Ha-Sung;Song, Ho-Geun
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.10 / pp.2231-2238 / 2011
  • In this paper, a method for extracting the eye and eye-line from a picture of a person is proposed. Most existing work extracts only the eyeball position; by extracting the eye-line as well, the proposed method can be applied to a wider variety of face applications. The experimental input is a full-face photograph taken with a smartphone; the picture is limited to the face of one person, but the background is unrestricted and there is no restriction on race. The proposed method extracts a face candidate area using a Haar classifier and sets up a candidate area for the eye position within the face candidate area. High-intensity values are extracted from the eye candidate area using a dilation operation, and a method is proposed to separate the eye from the eyelashes by local thresholding of the picture. After that, the thresholded image from Hsu's eyemapC is used to separate the regions with and without the eye. Finally, the eye contour is extracted and the eye-line is detected using optimal ellipse estimation.
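A minimal sketch of the face-to-eye-candidate part of this pipeline is shown below, assuming OpenCV's bundled frontal-face Haar cascade. The input file name, the proportions of the eye band, and the adaptive-threshold parameters are illustrative assumptions, and the eyemapC and ellipse-fitting steps are omitted.

```python
# Sketch: Haar-cascade face detection, then dilation + local thresholding
# inside an assumed eye-candidate band of the detected face.
import cv2

img = cv2.imread("face.jpg")                     # hypothetical input photo
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # Assume the eye band lies roughly in the upper half of the face box.
    eye_band = gray[y + h // 5 : y + h // 2, x : x + w]
    # Dilation emphasises bright values before local (adaptive) thresholding.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    dilated = cv2.dilate(eye_band, kernel)
    binary = cv2.adaptiveThreshold(dilated, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 11, 2)
    cv2.imwrite("eye_candidates.png", binary)
```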

Learning Algorithm for Multiple Distribution Data using Haar-like Feature and Decision Tree (다중 분포 학습 모델을 위한 Haar-like Feature와 Decision Tree를 이용한 학습 알고리즘)

  • Kwak, Ju-Hyun;Woen, Il-Young;Lee, Chang-Hoon
    • KIPS Transactions on Software and Data Engineering / v.2 no.1 / pp.43-48 / 2013
  • AdaBoost is widely used as a boosting algorithm over Haar-like features in face detection, and it performs very effectively on single-distribution models. However, when detecting frontal and side face images at the same time, AdaBoost shows its limitations on multiple-distribution data because it uses a linear combination of base classifiers. This paper proposes HDCT, a modified decision tree algorithm for Haar-like features, and compares the performance of HDCT with AdaBoost on recognizing images drawn from multiple distributions.
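HDCT itself is not specified in the abstract, so the sketch below only illustrates the multiple-distribution setting it addresses: a positive class made of two clusters (standing in for frontal and side faces) is fit both by AdaBoost and by a plain decision tree. The scikit-learn classifiers and synthetic feature vectors are assumptions, not the paper's method.

```python
# Sketch: the same two-cluster positive class fit by a linear boosted
# combination of stumps versus a tree-structured classifier.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
# Two positive clusters stand in for frontal and side faces; one negative cluster.
front = rng.normal(loc=+2.0, size=(300, 32))
side = rng.normal(loc=-2.0, size=(300, 32))
neg = rng.normal(loc=0.0, size=(600, 32))
X = np.vstack([front, side, neg])
y = np.array([1] * 600 + [0] * 600)

for name, clf in [("AdaBoost (linear combination of stumps)",
                   AdaBoostClassifier(n_estimators=100)),
                  ("Decision tree", DecisionTreeClassifier(max_depth=8))]:
    clf.fit(X, y)
    print(name, "training accuracy:", clf.score(X, y))
```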

Implementation of User Gesture Recognition System for manipulating a Floating Hologram Character (플로팅 홀로그램 캐릭터 조작을 위한 사용자 제스처 인식 시스템 구현)

  • Jang, Myeong-Soo;Lee, Woo-Beom
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.2 / pp.143-149 / 2019
  • Floating holograms are a technology that provides rich 3D stereoscopic images in wide spaces such as advertisements and concerts. They avoid the inconvenience of 3D glasses, eye strain, and spatial distortion, and allow viewers to enjoy 3D images with excellent realism and presence. This paper therefore implements a user gesture recognition system for manipulating a floating hologram character that can be used in small-space devices. The proposed method detects the face region using a Haar feature-based cascade classifier and recognizes user gestures from the position at which each gesture occurs, acquired in real time from the gesture difference image. Each classified gesture is mapped to a character motion in the floating hologram to control the character's actions. To evaluate the performance of the proposed system, we built a floating hologram display device and repeatedly measured the recognition rate of each gesture, including body shaking, walking, hand shaking, and jumping. As a result, the average recognition rate was 88%.
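The sketch below follows the described flow, Haar-cascade face detection plus a frame-difference image whose motion position relative to the face selects a gesture, but the camera index, motion threshold, region split, and gesture mapping are all illustrative assumptions rather than the paper's values.

```python
# Sketch: face detection + gesture difference image, with the motion centre's
# position relative to the face mapped to a (hypothetical) character action.
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                 # hypothetical camera index
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)   # gesture difference image
    prev_gray = gray

    faces = cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        continue
    x, y, w, h = faces[0]

    _, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    ys, xs = np.nonzero(motion)
    if len(xs) == 0:
        continue
    cx, cy = xs.mean(), ys.mean()         # centre of the detected motion

    # Illustrative mapping from motion position (relative to the face) to an
    # action; the real system distinguishes more gestures than this.
    if cy > y + h:
        gesture = "walking"               # motion below the face
    elif cx < x or cx > x + w:
        gesture = "hand shaking"          # motion beside the face
    else:
        gesture = "body shaking"          # motion over the face region
    print(gesture)
```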

Robust feature vector composition for frontal face detection (노이즈에 강인한 정면 얼굴 검출을 위한 특성벡터 추출법)

  • Lee Seung-Ik;Won Chulho;Im Sung-Woon;Kim Duk-Gyoo
    • Journal of the Institute of Electronics Engineers of Korea CI / v.42 no.6 / pp.75-82 / 2005
  • A robust feature vector selection method for multiple frontal face detection is proposed in this paper. The proposed feature vector for training and classification integrates the means, amplitude projections, and 1D Haar wavelet of the input image, and statistical modeling is performed for both the face and non-face classes. Finally, the estimated probability density functions (PDFs) are applied to detect multiple frontal faces in a still image. The proposed method can handle multiple faces, partially occluded faces, and slightly posed faces, and it is also very effective for low-quality face images. Experimental results show that the detection rate of the proposed method is 98.3% with three false detections on the testing data SET3, which contains 227 faces in 80 images.
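A minimal sketch of the feature construction is given below, assuming a small grayscale face patch: row and column amplitude projections are computed and a one-level 1D Haar transform of each projection is appended. The patch size and the exact combination of components are assumptions, since the abstract does not fix them.

```python
# Sketch: amplitude projections of a face patch plus their 1D Haar coefficients.
import numpy as np

def haar_1d(signal):
    """One-level 1D Haar transform: pairwise averages followed by differences."""
    signal = signal[: len(signal) // 2 * 2]
    pairs = signal.reshape(-1, 2)
    return np.concatenate([(pairs[:, 0] + pairs[:, 1]) / 2,
                           (pairs[:, 0] - pairs[:, 1]) / 2])

patch = np.random.default_rng(2).random((24, 24))   # placeholder face patch

row_proj = patch.mean(axis=1)    # vertical amplitude projection
col_proj = patch.mean(axis=0)    # horizontal amplitude projection

# Feature vector: projections plus their 1D Haar wavelet coefficients.
feature = np.concatenate([row_proj, col_proj,
                          haar_1d(row_proj), haar_1d(col_proj)])
print("feature length:", feature.shape[0])
```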

Real-Time Head Tracking using Adaptive Boosting in Surveillance (서베일런스에서 Adaptive Boosting을 이용한 실시간 헤드 트래킹)

  • Kang, Sung-Kwan;Lee, Jung-Hyun
    • Journal of Digital Convergence / v.11 no.2 / pp.243-248 / 2013
  • This paper proposes an effective method using adaptive boosting (AdaBoost) to track a person's head against a complex background. A single feature extraction method is not sufficient for modeling a person's head, so the proposed method runs several feature extraction methods at the same time to improve head detection accuracy. Head features are extracted using sub-regions and the Haar wavelet transform: sub-regions represent the local characteristics of the head, while the Haar wavelet transform captures the frequency characteristics of the face, so using both allows effective modeling. To track the head in the input video in real time, the proposed method uses the results of learning three types of Haar wavelet features with the AdaBoost algorithm. The original AdaBoost algorithm has a very long training time, and if the training data changes, training must be performed again. To overcome this shortcoming, this work proposes an efficient method using cascaded AdaBoost, which reduces the training time for head images and responds effectively to changes in the training data. The proposed method generates a classifier with excellent performance using less training time and training data, and it accurately detects and tracks a person's head across a variety of head data in real-time video images.
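The sketch below illustrates the cascaded AdaBoost idea under stated assumptions: each stage is a small scikit-learn AdaBoost classifier over placeholder sub-region/Haar-wavelet feature vectors, and a candidate window is rejected as soon as any stage votes negative. A real cascade would train later stages on the earlier stages' false positives, which is omitted here for brevity.

```python
# Sketch: a cascade of boosted stages with early rejection of non-head windows.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 48))     # placeholder sub-region + Haar wavelet features
y = rng.integers(0, 2, size=2000)   # 1 = head window, 0 = background

# Train a few boosted stages of increasing size (stage sizes are assumptions).
stages = [AdaBoostClassifier(n_estimators=n).fit(X, y) for n in (10, 25, 50)]

def cascade_predict(window):
    """Accept a candidate window only if every stage accepts it."""
    for stage in stages:
        if stage.predict(window.reshape(1, -1))[0] == 0:
            return 0                # early rejection keeps per-frame cost low
    return 1

print(cascade_predict(X[0]))
```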

Realtime Facial Expression Data Tracking System using Color Information (컬러 정보를 이용한 실시간 표정 데이터 추적 시스템)

  • Lee, Yun-Jung;Kim, Young-Bong
    • The Journal of the Korea Contents Association / v.9 no.7 / pp.159-170 / 2009
  • Extracting expression data and capturing a face image from video is very important for online 3D face animation. Recently, there has been much research on vision-based approaches that capture an actor's expression in a video and apply it to a 3D face model. In this paper, we propose an automatic data extraction system that extracts and tracks a face and expression data from real-time video input. Our system consists of three steps: face detection, facial feature extraction, and face tracking. In face detection, we detect skin pixels using a YCbCr skin color model and verify the face area using a Haar-based classifier. We use brightness and color information to extract the eye and lip data related to facial expression, extracting 10 feature points from the eye and lip areas based on the FAPs defined in MPEG-4. We then track the displacement of the extracted features across consecutive frames using a color probability distribution model. Experiments showed that our system can track the expression data at about 8 fps.
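A minimal sketch of the two-step face detection (YCbCr skin mask followed by Haar-cascade verification) is shown below; the Cb/Cr thresholds are common rule-of-thumb values and the input frame is a placeholder, not the paper's trained skin model or data.

```python
# Sketch: skin-color masking in YCbCr, then Haar-cascade verification.
import cv2

frame = cv2.imread("frame.jpg")                  # hypothetical video frame
ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)

# Rough skin range on the Cr and Cb channels (assumed rule-of-thumb values).
skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
skin_only = cv2.bitwise_and(frame, frame, mask=skin_mask)

# Verify that the skin region actually contains a face with a Haar cascade.
gray = cv2.cvtColor(skin_only, cv2.COLOR_BGR2GRAY)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print("verified face regions:", list(faces))
```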