• Title/Summary/Keyword: Haar-Feature


Real-Time Pupil Detection System Using PC Camera (PC 카메라를 이용한 실시간 동공 검출)

  • 조상규;황치규;황재정
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.8C
    • /
    • pp.1184-1192
    • /
    • 2004
  • A real-time pupil detection system is proposed that detects pupil movement from real-time video acquired by a visible-light camera on a general-purpose personal computer. It is implemented in three steps: first, the face region is detected using the Haar-like feature detection scheme; then the eye region is detected within the face region using a template-based scheme; finally, pupil movement is detected within the eye region by convolving the horizontal and vertical histogram profiles with a Gaussian filter. As a result, we obtained a detection rate of more than 90% on 2375 simulation images, with a processing time of about 160 ms, i.e., roughly 7 detections per second.
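
A minimal sketch of the three-step pipeline described above, assuming OpenCV's bundled Haar cascades as stand-ins for the paper's detectors and a simple smoothed intensity-profile search in place of the paper's template-based eye localization; file names and parameters are illustrative, not the authors' implementation.

```python
import cv2
import numpy as np

# Stand-in detectors (OpenCV's bundled cascades, not the authors' own models).
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_pupil(frame_gray):
    """Face -> eye -> pupil, roughly following the paper's three steps."""
    faces = face_cascade.detectMultiScale(frame_gray, 1.1, 5)
    if len(faces) == 0:
        return None
    fx, fy, fw, fh = faces[0]
    face_roi = frame_gray[fy:fy + fh, fx:fx + fw]

    eyes = eye_cascade.detectMultiScale(face_roi, 1.1, 5)  # stand-in for template matching
    if len(eyes) == 0:
        return None
    ex, ey, ew, eh = eyes[0]
    eye_roi = face_roi[ey:ey + eh, ex:ex + ew].astype(np.float32)

    # Horizontal/vertical intensity profiles smoothed with a Gaussian kernel;
    # the pupil is assumed to be the darkest point along each axis.
    g = cv2.getGaussianKernel(9, 2.0).ravel()
    col_profile = np.convolve(eye_roi.mean(axis=0), g, mode="same")
    row_profile = np.convolve(eye_roi.mean(axis=1), g, mode="same")
    px, py = int(np.argmin(col_profile)), int(np.argmin(row_profile))

    return fx + ex + px, fy + ey + py  # pupil position in image coordinates

cap = cv2.VideoCapture(0)              # any PC camera
ok, frame = cap.read()
if ok:
    print(detect_pupil(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)))
cap.release()
```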

Human activity recognition with analysis of angles between skeletal joints using a RGB-depth sensor

  • Ince, Omer Faruk;Ince, Ibrahim Furkan;Yildirim, Mustafa Eren;Park, Jang Sik;Song, Jong Kwan;Yoon, Byung Woo
    • ETRI Journal
    • /
    • v.42 no.1
    • /
    • pp.78-89
    • /
    • 2020
  • Human activity recognition (HAR) has become an effective computer vision tool for video surveillance systems. In this paper, a novel biometric system that can detect human activities in 3D space is proposed. To implement HAR, joint angles obtained using an RGB-depth sensor are used as features. Because HAR operates in the time domain, angle information is stored using the sliding kernel method. The Haar-wavelet transform (HWT) is applied to preserve the information in the features before reducing the data dimension. Dimension reduction using an averaging algorithm is also applied to decrease the computational cost, which provides faster performance while maintaining high accuracy. Before classification, a proposed thresholding method with inverse HWT is applied to extract the final feature set. Finally, the K-nearest neighbor (k-NN) algorithm is used to recognize the activity from the given data. The method compares favorably with results obtained using other machine learning algorithms.
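
A small sketch of the feature chain described above (joint angle, Haar-wavelet approximation, averaging, k-NN), assuming PyWavelets and scikit-learn as stand-ins; the joint names, window length, labels, and random training data are hypothetical, and the paper's inverse-HWT thresholding step is omitted.

```python
import numpy as np
import pywt                                    # Haar wavelet transform
from sklearn.neighbors import KNeighborsClassifier

def joint_angle(a, b, c):
    """Angle at joint b formed by 3D points a-b-c (degrees)."""
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Toy elbow angle from three hypothetical skeleton joints (~90 degrees here).
shoulder, elbow, wrist = np.array([0, 0, 0]), np.array([0, -1, 0]), np.array([1, -1, 0])
print(joint_angle(shoulder, elbow, wrist))

def window_features(angle_window):
    """Sliding-window angles -> Haar-wavelet approximation -> averaged features."""
    approx = pywt.wavedec(angle_window, "haar", level=2)[0]   # low-frequency content
    return approx.reshape(-1, 2).mean(axis=1)                 # averaging as dimension reduction

# Hypothetical training data: one 64-frame angle sequence per sample, 3 dummy labels.
rng = np.random.default_rng(0)
X = np.stack([window_features(rng.random(64)) for _ in range(40)])
y = rng.integers(0, 3, size=40)

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict(window_features(rng.random(64))[None, :]))
```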

Detection Method of Face Rotation Angle for Crosstalk Cancellation (크로스토크 제거를 위한 얼굴 방위각 검출 기법)

  • Han, Sang-Il;Cha, Hyung-Tai
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.17 no.1
    • /
    • pp.58-65
    • /
    • 2007
  • The method of realizing 3D sound using two speakers has two advantages: it is cheap and easy to build. In this case, the crosstalk between the two speakers has to be eliminated. To calculate and remove the effect of the crosstalk, it is essential to find the rotation angle of the human head correctly. In this paper, we propose an algorithm to find the head angle in a 2-channel system. We first detect the face area in the given image using Haar-like features. After that, the eyes are detected using a pre-processor and a morphology method. Finally, we calculate the face rotation angle from the face and eye locations. Experiments on various face images show that the proposed method is considerably more efficient than conventional methods.
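
A minimal sketch of the face-then-eyes-then-angle idea, assuming OpenCV's bundled cascades in place of the paper's detectors and pre-processing; the linear mapping from eye-midpoint offset to a rotation angle is an illustrative choice, not the paper's calibration.

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def estimate_head_angle(gray):
    """Rough head-rotation estimate from the eye midpoint's offset within the face box."""
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w], 1.1, 5)
    if len(eyes) < 2:
        return None
    centers = [ex + ew / 2.0 for ex, ey, ew, eh in eyes[:2]]
    offset = (np.mean(centers) - w / 2.0) / (w / 2.0)   # -1 .. 1 within the face box
    return float(offset * 45.0)                          # illustrative linear mapping to degrees

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)       # hypothetical input image
if img is not None:
    print(estimate_head_angle(img))
```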

An Improved Approach for 3D Hand Pose Estimation Based on a Single Depth Image and Haar Random Forest

  • Kim, Wonggi;Chun, Junchul
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.8
    • /
    • pp.3136-3150
    • /
    • 2015
  • Vision-based 3D tracking of an articulated human hand is one of the major issues in human-computer interaction applications and in understanding the control of robot hands. This paper presents an improved approach for tracking and recovering the 3D position and orientation of a human hand using the Kinect sensor. The basic idea of the proposed method is to solve an optimization problem that minimizes the discrepancy in 3D shape between the actual hand observed by the Kinect and a hypothesized 3D hand model. Since each 3D hand pose has 23 degrees of freedom, hand articulation tracking incurs an excessive computational burden when minimizing the 3D shape discrepancy between an observed hand and a 3D hand model. To address this, we first created a 3D hand model that represents the hand with 17 different parts. Secondly, a Random Forest classifier was trained on synthetic depth images generated by animating the developed 3D hand model, and was then used for Haar-like feature-based classification rather than per-pixel classification. The classification results were used to estimate the joint positions of the hand skeleton. Through the experiments, we show that the proposed method improves hand part recognition rates and runs at 20-30 fps. The results confirm its practical use in classifying the hand area, and the method successfully tracks and recovers the 3D hand pose in real time.
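
A rough sketch of Haar-like depth-difference features feeding a Random Forest hand-part classifier, using scikit-learn and randomly generated probe offsets, patches, and labels as placeholders; the authors' 3D hand model, synthetic training set, and feature design are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical Haar-like probes: pairs of pixel offsets whose depth difference is one feature.
OFFSETS = rng.integers(-15, 16, size=(32, 2, 2))

def haar_depth_features(depth, cx, cy):
    """Depth-difference features around (cx, cy), in the spirit of Haar-like comparisons."""
    h, w = depth.shape
    feats = []
    for (dx1, dy1), (dx2, dy2) in OFFSETS:
        p1 = depth[np.clip(cy + dy1, 0, h - 1), np.clip(cx + dx1, 0, w - 1)]
        p2 = depth[np.clip(cy + dy2, 0, h - 1), np.clip(cx + dx2, 0, w - 1)]
        feats.append(float(p1) - float(p2))
    return np.array(feats)

# Dummy synthetic depth patches with 17 hand-part labels (matching the paper's part count).
depth_images = rng.random((200, 64, 64))
labels = rng.integers(0, 17, size=200)
X = np.stack([haar_depth_features(d, 32, 32) for d in depth_images])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
print(clf.predict(X[:3]))   # predicted hand-part labels for three patches
```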

Object Detection and Tracking with Infrared Videos at Night-time (야간 적외선 카메라를 이용한 객체 검출 및 추적)

  • Choi, Beom-Joon;Park, Jang-Sik;Song, Jong-Kwan;Yoon, Byung-Woo
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.10 no.2
    • /
    • pp.183-188
    • /
    • 2015
  • In this paper, a method is proposed to detect and track pedestrians and to analyze tracking performance on nighttime CCTV video. Detection is performed by a cascade classifier with Haar-like features trained using the AdaBoost algorithm. Pedestrian tracking is performed by a particle filter. The experimental results indicate the number of particles and the distributions that are effective for tracking pedestrians at night. Detection and tracking performance is verified on nighttime CCTV video obtained in alleys and similar locations.
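
A minimal sketch combining cascade detection with a position-only particle filter, assuming OpenCV's bundled full-body cascade in place of the authors' trained detector; the motion model, particle count, and file name are illustrative.

```python
import cv2
import numpy as np

# Stand-in for the authors' AdaBoost-trained Haar cascade.
body_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_fullbody.xml")

class ParticleTracker:
    """Minimal particle filter over a pedestrian's (x, y) position."""
    def __init__(self, x, y, n=100, spread=10.0):
        self.particles = np.tile([x, y], (n, 1)).astype(np.float64)
        self.spread = spread

    def step(self, measurement):
        # Predict with a random-walk motion model.
        self.particles += np.random.normal(0, self.spread, self.particles.shape)
        # Weight by distance to the detector's measurement, then resample.
        d = np.linalg.norm(self.particles - measurement, axis=1)
        w = np.exp(-0.5 * (d / self.spread) ** 2)
        w /= w.sum()
        idx = np.random.choice(len(w), size=len(w), p=w)
        self.particles = self.particles[idx]
        return self.particles.mean(axis=0)   # state estimate

gray = cv2.imread("night_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical IR frame
if gray is not None:
    boxes = body_cascade.detectMultiScale(gray, 1.05, 3)
    if len(boxes):
        x, y, w, h = boxes[0]
        tracker = ParticleTracker(x + w / 2, y + h / 2)
        print(tracker.step(np.array([x + w / 2, y + h / 2])))
```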

Long Distance Vehicle Recognition and Tracking using Shadow (그림자를 이용한 원거리 차량 인식 및 추적)

  • Ahn, Young-Sun;Kwak, Seong-Woo
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.14 no.1
    • /
    • pp.251-256
    • /
    • 2019
  • This paper presents an algorithm for recognizing and tracking a vehicle at a distance using a monocular camera installed at the center of the windshield, in order to operate an autonomous vehicle in a race. The vehicle is detected using Haar features, and its size and position are determined by detecting the shadow at the bottom of the vehicle. The region around the recognized vehicle is set as the ROI (Region Of Interest), and the vehicle shadow within the ROI is found and tracked in the next frame. The position, relative speed, and direction of the vehicle are then predicted. Experimental results show that the vehicle is recognized with a recognition rate of over 90% at distances of more than 100 meters.
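
A minimal sketch of detecting a vehicle with a Haar cascade and locating the dark under-vehicle shadow band; "cars.xml" is a placeholder path for a vehicle cascade (OpenCV does not ship one), and the shadow heuristic is an assumption rather than the paper's method.

```python
import cv2
import numpy as np

# Placeholder: a Haar cascade trained on vehicle rear views is assumed to exist at this path.
car_cascade = cv2.CascadeClassifier("cars.xml")

def detect_vehicle_with_shadow(gray):
    cars = car_cascade.detectMultiScale(gray, 1.1, 3)
    if len(cars) == 0:
        return None
    x, y, w, h = cars[0]
    roi = gray[y:y + h, x:x + w].astype(np.float32)
    # Assume the under-vehicle shadow is the darkest horizontal band inside the box.
    row_means = roi.mean(axis=1)
    shadow_row = int(np.argmin(row_means))
    shadow_width = int((roi[shadow_row] < row_means[shadow_row] + 10).sum())
    return x, y + shadow_row, shadow_width   # vehicle bottom and apparent width estimate

gray = cv2.imread("road.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frame
if gray is not None:
    print(detect_vehicle_with_shadow(gray))
```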

Design of RBFNNs Pattern Classifier Realized with the Aid of Face Features Detection (얼굴 특징 검출에 의한 RBFNNs 패턴분류기의 설계)

  • Park, Chan-Jun;Kim, Sun-Hwan;Oh, Sung-Kwun;Kim, Jin-Yul
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.26 no.2
    • /
    • pp.120-126
    • /
    • 2016
  • In this study, we propose a method for effectively detecting and recognizing faces in images using an RBFNNs pattern classifier and an HCbCr-based skin color feature. Skin color detection is computationally fast and robust to pattern variation for face detection; however, objects with similar colors can be mistakenly detected as faces. Thus, to enhance the accuracy of skin detection, we consider the combination of the H and CbCr components obtained jointly from the HSI and YCbCr color spaces. The exact location of the face is then found within the skin-color candidate region by detecting the eyes through Haar-like features. Finally, face recognition is performed using the proposed FCM-based RBFNNs pattern classifier. We present results of computer simulation experiments carried out using the Cambridge ICPR image database.
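
A small sketch of the skin-color candidate stage followed by Haar-based eye detection; the hue and CbCr thresholds are common illustrative values, not the paper's, and the FCM-based RBFNNs classification stage is omitted.

```python
import cv2
import numpy as np

eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def skin_candidates(bgr):
    """Skin mask combining H (from HSV/HSI) with CbCr (from YCbCr); thresholds are illustrative."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    h = hsv[:, :, 0]
    cr, cb = ycrcb[:, :, 1], ycrcb[:, :, 2]
    mask = ((h < 25) | (h > 160)) & (cr > 135) & (cr < 175) & (cb > 85) & (cb < 130)
    return mask.astype(np.uint8) * 255

def face_from_skin(bgr):
    mask = skin_candidates(bgr)
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    gray[mask == 0] = 0                        # restrict the search to skin-colored pixels
    return eye_cascade.detectMultiScale(gray, 1.1, 5)   # eye boxes anchor the face location

img = cv2.imread("person.jpg")                 # hypothetical input
if img is not None:
    print(face_from_skin(img))
```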

Extraction of full body size parameters for personalized recommendation module (개인 맞춤형 추천모듈을 위한 전신 신체사이즈 추출)

  • Park, Yong-Hee;Chin, Seong-Ah
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.11 no.12
    • /
    • pp.5113-5119
    • /
    • 2010
  • Anthropometry has been broadly explored in various fields, including the automobile industry, home electronic appliances, medical appliances, and sports goods, with the aim of satisfying consumer needs efficiently. However, current technologies for measuring the human body still have barriers: the methods mostly depend on expensive devices such as scanners and digital measuring instruments, and require direct contact with the body when obtaining body size. Therefore, in this paper, we present a general method to automatically extract body size from a real body image acquired from a camera and to utilize it in recommendation systems, including clothing and bicycle fitting. At first, Haar-like features and the AdaBoost algorithm are employed to detect the body position. Then the features of the body are recognized using an AAM. Finally, clothing and bicycle recommendation modules were implemented and tested to validate the proposed method.
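
A rough sketch of the detection-then-measurement idea, assuming OpenCV's bundled full-body and face cascades (trained with AdaBoost on Haar-like features) and an assumed user-supplied height for pixel-to-centimeter calibration; the paper's AAM-based feature refinement and recommendation modules are omitted.

```python
import cv2

# Stand-in detectors for the paper's Haar/AdaBoost body detection stage.
body_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_fullbody.xml")
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def body_size_cm(gray, known_height_cm=175.0):
    """Convert detected body proportions from pixels to centimeters.

    known_height_cm is an assumed calibration value (e.g., user-supplied);
    the paper derives its measurements differently, so this is only illustrative.
    """
    bodies = body_cascade.detectMultiScale(gray, 1.05, 3)
    if len(bodies) == 0:
        return None
    bx, by, bw, bh = bodies[0]
    cm_per_px = known_height_cm / bh
    faces = face_cascade.detectMultiScale(gray[by:by + bh, bx:bx + bw], 1.1, 5)
    head_cm = faces[0][3] * cm_per_px if len(faces) else None
    return {"height_cm": bh * cm_per_px, "width_cm": bw * cm_per_px, "head_cm": head_cm}

gray = cv2.imread("full_body.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical photo
if gray is not None:
    print(body_size_cm(gray))
```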

Eye and Mouth Images Based Facial Expressions Recognition Using PCA and Template Matching (PCA와 템플릿 정합을 사용한 눈 및 입 영상 기반 얼굴 표정 인식)

  • Woo, Hyo-Jeong;Lee, Seul-Gi;Kim, Dong-Woo;Ryu, Sung-Pil;Ahn, Jae-Hyeong
    • The Journal of the Korea Contents Association
    • /
    • v.14 no.11
    • /
    • pp.7-15
    • /
    • 2014
  • This paper proposes a recognition algorithm for human facial expressions using PCA and template matching. Firstly, the face image is acquired from an input image using the Haar-like feature mask. The face image is divided into two images: the upper image, including the eyes and eyebrows, and the lower image, including the mouth and jaw. The extraction of facial components, such as the eyes and mouth, begins by obtaining the eye image and the mouth image. An eigenface is then produced by the PCA training process with learning images, and an eigeneye and an eigenmouth are produced from the eigenface. The eye image is obtained by template matching of the upper image with the eigeneye, and the mouth image by template matching of the lower image with the eigenmouth. Expression recognition uses geometrical properties of the eyes and mouth. The simulation results show that the proposed method has a higher extraction ratio than previous methods; in particular, the extraction ratio of the mouth image reaches 99%. The facial expression recognition system using the proposed method achieves a recognition ratio greater than 80% for three facial expressions: fright, anger, and happiness.
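
A minimal sketch of the eigen-template idea for the eye, assuming OpenCV's bundled face cascade and scikit-learn's PCA; the random training patches stand in for the paper's learning set, and the mouth branch and the geometric expression classifier are omitted.

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def eigen_template(patches):
    """First principal component of training patches, reshaped as a matching template."""
    h, w = patches[0].shape
    pca = PCA(n_components=1).fit(np.stack([p.ravel() for p in patches]))
    return pca.components_[0].reshape(h, w).astype(np.float32)

def locate_eye(gray, eigeneye):
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    upper = gray[y:y + h // 2, x:x + w].astype(np.float32)   # eye/eyebrow half of the face
    res = cv2.matchTemplate(upper, eigeneye, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)
    return x + max_loc[0], y + max_loc[1]

# Hypothetical training patches (real use: cropped eye images from the learning set).
rng = np.random.default_rng(0)
eigeneye = eigen_template([rng.random((24, 48)).astype(np.float32) for _ in range(20)])
gray = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)          # hypothetical test image
if gray is not None:
    print(locate_eye(gray, eigeneye))
```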

An Algorithm for Converting a 2D Face Image into a 3D Model (얼굴 2D 이미지의 3D 모델 변환 알고리즘)

  • Choi, Tae-Jun;Lee, Hee-Man
    • Journal of the Korea Society of Computer and Information
    • /
    • v.20 no.4
    • /
    • pp.41-48
    • /
    • 2015
  • Recently, the spread of 3D printers has been increasing the demand for 3D models. However, creating 3D models usually requires a trained specialist using specialized software. This paper describes an algorithm that produces a 3D model from a single two-dimensional frontal face photograph, so that ordinary people can easily create 3D models. The background and the foreground are separated from the photo, and a predetermined number of vertices are placed on the separated foreground 2D image at regular intervals. The arranged vertex locations are extended into three dimensions using the gray level of the pixel at each vertex and the characteristics of the eyebrows and nose of a normal human face. The foreground and background are separated using the edge information of the silhouette. The AdaBoost algorithm with Haar-like features is also employed to find the locations of the eyes and nose. The 3D models obtained with this algorithm are good enough to use for 3D printing, even though a small amount of manual adjustment may be required. The algorithm will be useful for providing 3D content in conjunction with the spread of 3D printers.
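
A minimal sketch of placing a vertex grid over a detected face and extruding depth from gray levels, assuming OpenCV's bundled cascades for the face and eyes; the foreground/background separation, nose detection, and mesh export are omitted, and the grid size and depth scale are illustrative.

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def face_to_vertices(gray, grid=32, depth_scale=0.2):
    """Place a regular grid of vertices over the face and set z from the gray level."""
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = gray[y:y + h, x:x + w]
    xs = np.linspace(0, w - 1, grid).astype(int)
    ys = np.linspace(0, h - 1, grid).astype(int)
    verts = []
    for vy in ys:
        for vx in xs:
            z = face[vy, vx] / 255.0 * depth_scale   # brighter pixel -> closer to the camera
            verts.append((vx / w, vy / h, z))
    eyes = eye_cascade.detectMultiScale(face, 1.1, 5)  # landmarks to refine the extrusion
    return np.array(verts), eyes

gray = cv2.imread("front_face.jpg", cv2.IMREAD_GRAYSCALE)    # hypothetical frontal photo
if gray is not None:
    out = face_to_vertices(gray)
    if out is not None:
        verts, eyes = out
        print(verts.shape, eyes)
```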