• Title/Summary/Keyword: Face classification

Classification of Normal and Abnormal QRS-complex for Home Health Management System (재택건강관리 시스템을 위한 정상 및 비정상 심전도의 분류)

  • 최안식;우응제;박승훈;윤영로
    • Journal of Biomedical Engineering Research
    • /
    • v.25 no.2
    • /
    • pp.129-135
    • /
    • 2004
  • In a home health management system, we often face the situation of handling biological signals measured frequently from normal subjects. In such a case, it is necessary to decide whether the signal at a certain moment is normal or abnormal. Since the ECG is one of the most frequently measured biological signals, we describe algorithms that detect the QRS-complex and decide whether it is normal or abnormal. The developed QRS detection algorithm is a simplified version of the conventional algorithm, providing sufficient performance for the proposed application. The developed classification algorithm, which detects abnormal beats among mostly normal beats, is based on the QRS width, the R-R interval, and a QRS shape parameter obtained using the Karhunen-Loeve transformation. The simplified QRS detector correctly detected about 99% of all beats in the MIT/BIH ECG database. The classification algorithm correctly classified about 96% of beats as normal or abnormal. The QRS detection and classification algorithms described in this paper could be used in a home health management system.
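For illustration, here is a minimal Python sketch of the kind of beat classification the abstract describes: each detected QRS complex is summarized by its width, the preceding R-R interval, and a Karhunen-Loeve (principal-component) shape score, and a beat is flagged as abnormal when any feature leaves an assumed normal range. The thresholds, window handling, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def kl_basis(normal_beats, n_components=4):
    """Learn a Karhunen-Loeve (principal-component) basis from normal QRS segments."""
    mean = normal_beats.mean(axis=0)
    _, _, vt = np.linalg.svd(normal_beats - mean, full_matrices=False)
    return mean, vt[:n_components]

def shape_error(beat, mean, basis):
    """Residual energy left after projecting the beat onto the normal-beat KL basis."""
    centered = beat - mean
    reconstruction = basis.T @ (basis @ centered)
    return float(np.linalg.norm(centered - reconstruction))

def classify_beat(width_ms, rr_ms, beat, mean, basis,
                  width_limit=120.0, rr_range=(600.0, 1200.0), shape_limit=0.5):
    """Flag the beat as abnormal if any feature leaves the assumed normal range."""
    if width_ms > width_limit:                        # unusually wide QRS
        return "abnormal"
    if not (rr_range[0] <= rr_ms <= rr_range[1]):     # premature or delayed beat
        return "abnormal"
    if shape_error(beat, mean, basis) > shape_limit:  # morphology differs from normal beats
        return "abnormal"
    return "normal"
```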

Cross-cultural characteristics of facial attractiveness (얼굴 매력의 교차문화권적 특징)

  • Kim, soo-jeoung
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2007.11a
    • /
    • pp.677-679
    • /
    • 2007
  • With the assumption that the viewpoint of a given society and time on facial attractiveness can be inferred by analyzing popular stars' faces, the cross-cultural differences in the physical facial measures of Korean and foreign stars were investigated. A classification model of affective facial impressions was used to obtain the physical measures of the faces and to classify them into face-type categories. The number of face images analyzed in the study was 629 in total: 258 Korean stars and 200 foreign stars. The results show that the common characteristic found in the cross-cultural analyses of Western and Eastern stars was a babyish impression. The babyish feature was also found to be distinctive in Western male stars, whereas no such trend was found in Asian male stars.

Ear Detection using Haar-like Feature and Template (Haar-like 특징과 템플릿을 이용한 귀 검출)

  • Hahn, Sang-Il;Cha, Hyung-Tai
    • Journal of Broadcast Engineering
    • /
    • v.13 no.6
    • /
    • pp.875-882
    • /
    • 2008
  • Ear detection is one of the important areas of image-based biometrics. In this paper we propose a human ear detection algorithm for side-face images. First, we search for a face candidate area in the input image using a skin-color model and then look for an ear area based on Haar-like features. Then, to verify whether it is really an ear area, we use template matching, which shows excellent object-classification performance, comparable to recognizing the characters on a license plate. In our experiments, the proposed method improved the processing speed by 60% over previous works and achieved a detection success rate of 92%.
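The pipeline described above can be sketched roughly in Python with OpenCV: skin-color segmentation proposes a face candidate region, a Haar cascade proposes ear locations, and template matching verifies them. The cascade file, template image, and thresholds below are placeholders, not assets or parameters from the paper.

```python
import cv2
import numpy as np

def skin_candidate_region(bgr):
    """Binary mask of skin-colored pixels using simple YCrCb thresholds (assumed values)."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    return cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

def detect_ear(bgr, cascade_path="ear_cascade.xml",
               template_path="ear_template.png", match_threshold=0.6):
    """Return an (x, y, w, h) ear box verified by the template, or None."""
    mask = skin_candidate_region(bgr)
    candidate = cv2.bitwise_and(bgr, bgr, mask=mask)      # keep only the skin-colored area
    gray = cv2.cvtColor(candidate, cv2.COLOR_BGR2GRAY)

    cascade = cv2.CascadeClassifier(cascade_path)          # placeholder ear cascade
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)

    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3):
        roi = cv2.resize(gray[y:y + h, x:x + w], template.shape[::-1])
        score = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED).max()
        if score >= match_threshold:                       # verified by the template
            return (x, y, w, h)
    return None
```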

Extraction of Texture Regions Based on the Region Average of Variations of Local Correlation Coefficients (국부상관계수의 영역 평균변화량에 의한 질감영역 추출)

  • 서상용;임채환;김남철
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.25 no.5A
    • /
    • pp.709-716
    • /
    • 2000
  • We present an efficient algorithm that uses the region-based average of variations of local correlation coefficients (LCC) for the extraction of texture regions. The key idea of the algorithm for separating texture and shade regions is that the averages of the variations of LCCs taken over different orientations are clearly larger in texture regions than in shade regions. To evaluate the performance of the proposed algorithm, we use nine test images (Lena, Bsail, Camera Man, Face, Woman, Elaine, Jet, Tree, and Tank) of 8-bit 256×256 pixels. Experimental results show that the proposed feature extracts well the regions that appear visually as texture regions.
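A simplified reading of the feature can be sketched as follows: compute local correlation coefficients between each window and its one-pixel-shifted version in several orientations, take the spread of the LCCs across orientations as the "variation", and average it over a region. The window sizes, the set of orientations, and the use of max-minus-min as the variation measure are assumptions for illustration, not the authors' exact definitions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_correlation(img, dy, dx, win=7):
    """LCC between img and its (dy, dx)-shifted copy, estimated over win x win windows."""
    a = img.astype(np.float64)
    b = np.roll(a, shift=(dy, dx), axis=(0, 1))
    mean_a, mean_b = uniform_filter(a, win), uniform_filter(b, win)
    cov = uniform_filter(a * b, win) - mean_a * mean_b
    var_a = uniform_filter(a * a, win) - mean_a ** 2
    var_b = uniform_filter(b * b, win) - mean_b ** 2
    return cov / np.sqrt(np.maximum(var_a * var_b, 1e-12))

def texture_measure(img, win=7, region=15):
    """Region average of the variation of LCCs over four orientations."""
    shifts = [(0, 1), (1, 0), (1, 1), (1, -1)]           # 0, 90, 45, 135 degrees
    lccs = np.stack([local_correlation(img, dy, dx, win) for dy, dx in shifts])
    variation = lccs.max(axis=0) - lccs.min(axis=0)      # spread across orientations
    return uniform_filter(variation, region)             # region-based average

# texture_mask = texture_measure(gray_image) > threshold   (threshold chosen empirically)
```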

Extreme Learning Machine Ensemble Using Bagging for Facial Expression Recognition

  • Ghimire, Deepak;Lee, Joonwhoan
    • Journal of Information Processing Systems
    • /
    • v.10 no.3
    • /
    • pp.443-458
    • /
    • 2014
  • An extreme learning machine (ELM) is a recently proposed learning algorithm for a single-hidden-layer feedforward neural network. In this paper we study an ensemble of ELMs built with a bagging algorithm for facial expression recognition (FER). Facial expression analysis is widely used in the interpretation of emotional behavior, in cognitive science, and in social interaction. This paper presents a method for FER based on histogram of oriented gradients (HOG) features using an ELM ensemble. First, HOG features are extracted from the face image by dividing it into a number of small cells. A bagging algorithm is then used to construct many different bags of training data, and each of them is used to train a separate ELM. To recognize the expression of an input face image, its HOG features are fed to each trained ELM and the results are combined by a majority voting scheme. The ELM ensemble using bagging significantly improves the generalization capability of the network. Two available facial expression datasets (JAFFE and CK+) were used to evaluate the performance of the proposed classification system. Even when the performance of an individual ELM was lower, the ELM ensemble using the bagging algorithm improved the recognition performance significantly.
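A compact sketch of the described pipeline, with assumed parameter values rather than the authors' settings, might look like this: HOG features per face image, a bag of ELMs each trained on a bootstrap sample, and majority voting over their predictions. Inputs are assumed to be NumPy arrays with integer class labels.

```python
import numpy as np
from skimage.feature import hog

class ELM:
    """Single-hidden-layer ELM: random input weights, least-squares output weights."""
    def __init__(self, n_hidden=500, rng=None):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(rng)

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        T = (y[:, None] == self.classes_[None, :]).astype(float)   # one-hot targets
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        self.beta = np.linalg.pinv(H) @ T                           # closed-form output weights
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return self.classes_[np.argmax(H @ self.beta, axis=1)]

def hog_features(images):
    """HOG descriptor per grayscale face image (cell/block sizes are assumptions)."""
    return np.array([hog(im, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                     for im in images])

def bagged_elm_predict(X_train, y_train, X_test, n_bags=10, seed=0):
    gen = np.random.default_rng(seed)
    votes = []
    for _ in range(n_bags):
        idx = gen.integers(0, len(X_train), size=len(X_train))      # bootstrap sample
        elm = ELM(rng=gen.integers(1 << 30)).fit(X_train[idx], y_train[idx])
        votes.append(elm.predict(X_test))
    votes = np.stack(votes)
    # majority vote per test sample (labels must be non-negative integers)
    return np.array([np.bincount(col).argmax() for col in votes.T])
```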

Development of Expert System for Burr Formation in Face Milling (밀링가공시 버형성 예측을 위한 전문가 시스템 개발)

  • Ko, Sung-Lim;Kim, Young-Jin;Ko, Dae-Cheol;Han, Sang-U;Lee, Je-Yeol;Ahn, Yong-Jin
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.18 no.2
    • /
    • pp.199-205
    • /
    • 2001
  • Burrs cause trouble in the manufacturing process because of deburring cost, product quality, and productivity. This paper describes the results of an experimental study on the influence of the cutting parameters on the formation of exit burrs in face milling. Using the experimental results, burr types are classified and databases are developed to predict the burr formation result. From the CAD file for the workpiece geometry and the NC data for the tool path, the exit angles are calculated at every edge. The program predicts the burr geometry at exit edges using the prediction algorithm and the experimentally developed databases. Simulation results on deformation strain and temperature are also available for specific two-dimensional cutting conditions. An algorithm that determines the exit angle is also proposed.
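As a purely geometric illustration of the exit-angle computation mentioned above (not the expert system's actual definition or code), one could measure the in-plane angle between the tool feed direction taken from the NC path and a workpiece edge taken from the CAD outline:

```python
import math

def exit_angle_deg(feed_vec, edge_vec):
    """Angle in degrees between the 2D feed direction and a workpiece edge (simplified stand-in)."""
    fx, fy = feed_vec
    ex, ey = edge_vec
    dot = fx * ex + fy * ey
    norm = math.hypot(fx, fy) * math.hypot(ex, ey)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Example: feed along +x, edge running at 30 degrees -> angle of 30 degrees
print(exit_angle_deg((1.0, 0.0), (math.cos(math.radians(30)), math.sin(math.radians(30)))))
```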

Analysis of Facial Coloration in Accordance with the Type of Personal Color System of Female University Students (여대생의 퍼스널 컬러 시스템 유형에 따른 얼굴색 분석)

  • Lee, Eun-Young;Park, Kil-Soon
    • The Research Journal of the Costume Culture
    • /
    • v.20 no.2
    • /
    • pp.144-153
    • /
    • 2012
  • This study performed a simultaneous sensory evaluation and color measurement of 136 female university students living in the Dae-Jeon region. The study measured the participants' facial coloration under available light between 11 AM and 3 PM from spring (May) to autumn (October) in 2009. For statistical analysis, descriptive statistics, multivariate analysis, and discriminant analysis were performed using SPSS version 18.0. The results of this study are as follows. First, according to the sensory evaluation, the blue-undertone face type was dominant among the female university student participants. Second, in the color measurements of the cheek and forehead, the forehead showed a yellowish coloration and was relatively dark compared to the cheek, whereas the cheek displayed a reddish coloration and was relatively bright compared to the forehead. Third, in the investigation of the facial coloration variables, the yellowish and reddish chromaticity of the cheek emerged as the variables influencing the classification of facial color types. In the discriminant analysis based on these two color variables, the yellowish chromaticity of the cheek appeared to have a greater influence than the reddish chromaticity.
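The discriminant step can be illustrated with a minimal sketch: linear discriminant analysis on the two cheek chromaticity variables to separate facial color types. The data values, labels, and variable names below are placeholders, not the study's measurements.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# columns: cheek yellowish chromaticity, cheek reddish chromaticity (placeholder values)
X = np.array([[18.2, 12.1], [20.5, 11.8], [14.9, 14.3], [15.4, 15.0]])
y = np.array(["warm", "warm", "cool", "cool"])       # assumed face-type labels

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.predict([[19.0, 12.0]]))                   # predicted face type for a new measurement
print(lda.coef_)                                     # larger weight ~ more influential variable
```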

A Prediction Model for Depression in Patients with Parkinson's Disease (파킨슨병 환자의 우울 예측 모형)

  • Bae, Eun Sook;Chun, Sang Myung;Kim, Jae Woo;Kang, Chang Wan
    • Korean Journal of Health Education and Promotion
    • /
    • v.30 no.5
    • /
    • pp.139-151
    • /
    • 2013
  • Objectives: This study investigated how income, duration of illness, social stigma, quality of sleep, activities of daily living (ADL), and social participation related to Parkinson's disease (PD) predict depression, using a conceptual model based on the International Classification of Functioning (ICF) model. Methods: The sample included 206 adults with idiopathic Parkinson's disease (IPD) attending D university hospital in B Metropolitan City. A structured questionnaire was administered through face-to-face interviews. The collected data were analyzed for model fit using the AMOS 18.0 program. Results: A path analysis showed that the overall model provided empirical evidence for the linkages in the ICF model. Depression was explained by significant direct effects of social stigma (${\beta}=.20$, p<.001), quality of sleep (${\beta}=-.40$, p<.001), ADL (${\beta}=-.20$, p<.01), and social participation (${\beta}=-.12$, p<.05), and by indirect effects of income (p<.05) and duration of illness (p<.05). These variables explained 45.9% of the variance in the prediction model. Conclusions: This model may help nurses collect and assess information to develop intervention programs for depression.
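The path-analysis idea can be illustrated schematically (this is not the study's AMOS model): depression is regressed on the direct predictors, while income and illness duration act indirectly through mediators such as sleep quality. The synthetic data and effect sizes below are placeholders that only loosely mirror the reported coefficients.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 206  # same sample size as the study, but the data are entirely simulated
df = pd.DataFrame({"income": rng.normal(size=n), "duration": rng.normal(size=n),
                   "stigma": rng.normal(size=n)})
# mediators influenced by income / duration (assumed indirect paths)
df["sleep_quality"] = -0.3 * df["duration"] + rng.normal(size=n)
df["adl"] = -0.2 * df["duration"] + rng.normal(size=n)
df["participation"] = 0.2 * df["income"] + rng.normal(size=n)
# outcome driven by the direct predictors
df["depression"] = (0.2 * df["stigma"] - 0.4 * df["sleep_quality"]
                    - 0.2 * df["adl"] - 0.12 * df["participation"] + rng.normal(size=n))

direct = smf.ols("depression ~ stigma + sleep_quality + adl + participation", data=df).fit()
indirect = smf.ols("sleep_quality ~ income + duration", data=df).fit()  # one example indirect path
print(direct.params)
print(indirect.params)
```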

A Video Expression Recognition Method Based on Multi-mode Convolution Neural Network and Multiplicative Feature Fusion

  • Ren, Qun
    • Journal of Information Processing Systems
    • /
    • v.17 no.3
    • /
    • pp.556-570
    • /
    • 2021
  • Existing video expression recognition methods mainly focus on spatial feature extraction from video expression images and tend to ignore the dynamic features of video sequences. To solve this problem, a multi-mode convolution neural network method is proposed to effectively improve the performance of facial expression recognition in video. First, OpenFace 2.0 is used to detect face images in the video, and two deep convolution neural networks are used to extract spatiotemporal expression features. A spatial convolution neural network extracts the spatial information features of each static expression image, and the dynamic information features are extracted from the optical flow of multiple expression images by a temporal convolution neural network. Then, the spatiotemporal features learned by the two deep convolution neural networks are fused by multiplication. Finally, the fused features are input into a support vector machine to perform the facial expression classification. Experimental results show that the recognition accuracy of the proposed method reaches 64.57% and 60.89% on the RML and BAUM-1s datasets, respectively, which is better than that of the other methods compared.
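The fusion step can be sketched as follows; the layer sizes, feature dimensions, number of classes, and the linear classifier standing in for the SVM stage are assumptions, not the paper's architecture. A spatial feature vector from static frames and a temporal feature vector from optical flow are combined by element-wise multiplication before classification.

```python
import torch
import torch.nn as nn

class MultiplicativeFusion(nn.Module):
    def __init__(self, spatial_dim=512, temporal_dim=512, fused_dim=256, n_classes=6):
        super().__init__()
        self.spatial_proj = nn.Linear(spatial_dim, fused_dim)     # project spatial CNN features
        self.temporal_proj = nn.Linear(temporal_dim, fused_dim)   # project optical-flow CNN features
        self.classifier = nn.Linear(fused_dim, n_classes)         # linear stand-in for the SVM stage

    def forward(self, spatial_feat, temporal_feat):
        fused = self.spatial_proj(spatial_feat) * self.temporal_proj(temporal_feat)  # element-wise product
        return self.classifier(fused)

# Usage with random stand-in features for a batch of 8 video clips
model = MultiplicativeFusion()
logits = model(torch.randn(8, 512), torch.randn(8, 512))
print(logits.shape)   # torch.Size([8, 6])
```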

Evolution of the Stethoscope: Advances with the Adoption of Machine Learning and Development of Wearable Devices

  • Yoonjoo Kim;YunKyong Hyon;Seong-Dae Woo;Sunju Lee;Song-I Lee;Taeyoung Ha;Chaeuk Chung
    • Tuberculosis and Respiratory Diseases
    • /
    • v.86 no.4
    • /
    • pp.251-263
    • /
    • 2023
  • The stethoscope has long been used for the examination of patients, but the importance of auscultation has declined because of its several limitations and the development of other diagnostic tools. However, auscultation is still recognized as a primary diagnostic device because it is non-invasive and provides valuable information in real time. To overcome the limitations of existing stethoscopes, digital stethoscopes with machine learning (ML) algorithms have been developed. We can now record and share respiratory sounds, and artificial intelligence (AI)-assisted auscultation using ML algorithms can distinguish the types of sounds. Recently, the demand for remote care and for non-face-to-face treatment of diseases requiring isolation, such as coronavirus disease 2019 (COVID-19), has increased. To address these needs, wireless and wearable stethoscopes are being developed, aided by advances in battery technology and integrated sensors. This review presents the history of the stethoscope and the classification of respiratory sounds, describes ML algorithms, and introduces new auscultation methods based on AI-assisted analysis and wireless or wearable stethoscopes.
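As a toy illustration of the kind of AI-assisted auscultation the review surveys (not any specific system it describes), a recording could be summarized by mel-spectrogram statistics and classified with a generic ML model. The file names, labels, sample rate, and classifier choice below are assumptions.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def sound_features(path, sr=4000):
    """Mean and standard deviation of log-mel energies over a respiratory recording."""
    y, sr = librosa.load(path, sr=sr)
    mel = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64))
    return np.concatenate([mel.mean(axis=1), mel.std(axis=1)])

# train_files / train_labels would come from an annotated lung-sound dataset
# X = np.array([sound_features(f) for f in train_files])
# clf = RandomForestClassifier(n_estimators=200).fit(X, train_labels)
# print(clf.predict([sound_features("new_recording.wav")]))   # e.g., normal / wheeze / crackle
```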