• Title/Summary/Keyword: Facial Age Estimation


A study of age estimation from occluded images (가림이 있는 얼굴 영상의 나이 인식 연구)

  • Choi, Sung Eun
    • Journal of Platform Technology / v.10 no.3 / pp.44-50 / 2022
  • Research on facial age estimation is being actively conducted because it is used in various application fields. Facial images taken in real environments often contain occlusions, which degrade age estimation performance. We therefore propose an age estimation method that reconstructs the occluded region using image extrapolation to improve performance on occluded face images. To confirm the effect of occlusion on age estimation performance, occluded images are generated using mask images. The occluded region of the facial image is then restored using SpiralNet, an image extrapolation technique that generates the missing content while traversing outward from the edge of the image. Experimental results show that age estimation performance on occluded facial images is significantly degraded, and that performance improves when the occlusions are reconstructed with SpiralNet.
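
The occlusion setup described above can be sketched in a few lines: a binary mask marks the occluded region, and the masked image is what the age estimator sees before any SpiralNet-style restoration. This is an illustrative sketch, not the paper's code; the array shapes and mask layout are assumptions.

```python
import numpy as np

def apply_occlusion(face, mask):
    """Zero out pixels where the binary mask is 1, simulating an
    occluded facial region (the input to the restoration step)."""
    occluded = face.copy()
    occluded[mask.astype(bool)] = 0
    return occluded

# Toy 8x8 grayscale "face" with an occlusion covering the lower half.
face = np.arange(64, dtype=np.uint8).reshape(8, 8)
mask = np.zeros((8, 8), dtype=np.uint8)
mask[4:, :] = 1  # occlude rows 4-7

occluded = apply_occlusion(face, mask)
```

In the paper's pipeline, `occluded` would then be passed to the extrapolation network, and the restored image fed to the age estimator.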

Study on the Face recognition, Age estimation, Gender estimation Framework using OpenBR. (OpenBR을 이용한 안면인식, 연령 산정, 성별 추정 프로그램 구현에 관한 연구)

  • Kim, Nam-woo;Kim, Jeong-Tae
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2017.05a / pp.779-782 / 2017
  • OpenBR is a framework for researching new facial recognition methods, improving existing algorithms, interfacing with commercial systems, measuring recognition performance, and deploying automated biometric systems. Designed to facilitate rapid algorithm prototyping, it features a mature core framework, a flexible plug-in system, and support for open and closed source development. The established algorithms can be applied to specific tasks such as face recognition, age estimation, and gender estimation. In this paper, we describe the OpenBR framework and implement face recognition, gender estimation, and age estimation using its supported programs.


Hierarchical Age Estimation based on Dynamic Grouping and OHRank

  • Zhang, Li;Wang, Xianmei;Liang, Yuyu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.7 / pp.2480-2495 / 2014
  • This paper describes a hierarchical method for image-based age estimation that combines age group classification and age value estimation. The proposed method uses a coarse-to-fine strategy with different appearance features to describe facial shape and texture. Because fixed divisions during age group classification damage the continuity between neighboring groups, a dynamic grouping technique is employed to allow non-fixed groups. Given a group, an ordinal hyperplane ranking (OHRank) model transforms age estimation into a series of binary queries that exploit the intrinsic correlation and ordinal information of age. A set of experiments on FG-NET is presented, and the results demonstrate the validity of our solution.
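
The ordinal decomposition at the heart of OHRank can be illustrated by its aggregation step: each of the K-1 ordered binary classifiers answers "is the age greater than k?", and the positive answers are counted. The classifier outputs below are hypothetical values, not trained models.

```python
import numpy as np

def ohrank_aggregate(binary_answers, min_age):
    """Combine the ordered binary answers ("is age > k?") into one
    age estimate by counting the positive responses."""
    return min_age + int(np.sum(binary_answers))

# Hypothetical outputs of 9 binary classifiers for the age range 1..10:
# the first six say "yes" (age > k), the remaining three say "no".
answers = [1, 1, 1, 1, 1, 1, 0, 0, 0]
estimated_age = ohrank_aggregate(answers, min_age=1)  # 1 + 6 = 7
```

The advantage of this decomposition is that each binary sub-problem can use all training samples (split at threshold k), rather than only the samples of a single age.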

Age Estimation via Selecting Discriminated Features and Preserving Geometry

  • Tian, Qing;Sun, Heyang;Ma, Chuang;Cao, Meng;Chu, Yi
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.4 / pp.1721-1737 / 2020
  • Human apparent age estimation has become a popular research topic and attracted great attention in recent years due to its wide applications, such as personal security and law enforcement. To achieve the goal of age estimation, a large number of methods have been proposed, among which models derived through cumulative attribute coding achieve promising performance by preserving the neighbor-similarity of ages. However, these methods ignore the geometric structure of the extracted facial features, and the geometric structure of data greatly affects prediction accuracy. To this end, we propose an age estimation algorithm that combines feature selection and manifold learning paradigms, called Feature-selected and Geometry-preserved Least Square Regression (FGLSR). Compared with other methods, the proposed method not only preserves the geometric structure within facial representations but also selects the discriminative features. Moreover, a deep learning extension of FGLSR is proposed, named Feature-selected and Geometry-preserved Neural Network (FGNN). Finally, experiments are conducted on the Morph2 and FG-Net datasets for FGLSR and on the Morph2 dataset for FGNN. Experimental results show that our method achieves the best performance.

Facial Age Estimation Using Convolutional Neural Networks Based on Inception Modules (인셉션 모듈 기반 컨볼루션 신경망을 이용한 얼굴 연령 예측)

  • Sukh-Erdene, Bolortuya;Cho, Hyun-chong
    • The Transactions of The Korean Institute of Electrical Engineers / v.67 no.9 / pp.1224-1231 / 2018
  • Automatic age estimation has been used in many social network applications, practical commercial applications, and human-computer interaction visual-surveillance biometrics, yet it remains relatively unexplored. In this paper, we propose an automatic age estimation system comprising face detection and a convolutional deep learning model based on an inception module; the latter is a 22-layer-deep network belonging to the inception family of designs. To evaluate the proposed approach, we use 4,000 images of eight different age groups from the Adience age dataset, applying k-fold cross-validation (k = 5). A comparison of the performance of the proposed work and recent related methods is presented. The results show that the proposed method significantly outperforms existing methods in terms of both exact accuracy and off-by-one accuracy, where off-by-one accuracy counts a prediction as correct if it is one adjacent age label above or below the true label. For exact accuracy, the age label "60+" is classified with the highest accuracy, 76%.
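
The evaluation protocol above (5-fold cross-validation over 4,000 images) can be sketched as a plain index split; the shuffling and seed are assumptions for illustration, not details from the paper.

```python
import numpy as np

def kfold_indices(n_samples, k=5, seed=0):
    """Shuffle sample indices and split them into k disjoint folds
    for cross-validation (k = 5, as in the paper's evaluation)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    return np.array_split(idx, k)

folds = kfold_indices(4000, k=5)
```

In each of the five rounds, one fold serves as the test set and the remaining four as training data, so every image is tested exactly once.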

Perceived Age Prediction from Face Image Based on Super-resolution and Tanh-polar Transform (얼굴영상의 초해상도화 및 Tanh-polar 변환 기반의 인지나이 예측)

  • Ilkoo Ahn;Siwoo Lee
    • Journal of Biomedical Engineering Research / v.44 no.5 / pp.329-335 / 2023
  • Perceived age is defined as the age estimated from physical appearance. It is an important indicator of overall health status in the elderly, because people who appear older tend to have higher rates of morbidity and mortality than people of the same chronological age. Although perceived age is an important indicator, objective methods to quantify it are lacking. In this paper, we build a quantified perceived-age model from face images using a convolutional neural network. The face images are enlarged by super-resolution so that the skin, an important cue for perceived age, becomes clear. Moreover, through the Tanh-polar transformation, the central area of the face occupies a relatively larger area than the boundary area, helping the neural network better recognize facial skin features. The experimental results show a mean absolute error (MAE) of 6.59, indicating that the proposed model is superior to the existing method.

Developmental Changes in Emotional-States and Facial Expression (정서 상태와 얼굴표정간의 연결 능력의 발달)

  • Park, Soo-Jin;Song, In-Hae;Ghim, Hei-Rhee;Cho, Kyung-Ja
    • Science of Emotion and Sensibility / v.10 no.1 / pp.127-133 / 2007
  • The present study investigated whether the ability to read emotional states from facial expressions changes with age (3-year-olds, 5-year-olds, and university students), sex (male, female), the presented facial area (whole face, eyes only), and the type of emotion (basic, complex). Thirty-two facial expressions relatively strongly linked with emotional vocabulary items were used as stimuli, collected by photographing professional actors performing each expression. Participants were presented with stories designed to evoke certain emotions and then asked to choose the facial expression that the main character would have made in the situation described. The results showed that facial-expression reading ability improves with age. Participants also performed better with the whole face than with the eyes alone, and with basic emotions than with complex emotions. While females showed no performance difference between the presented areas, males performed better in the whole-face condition than in the eyes-only condition. The results demonstrate that age, the presented facial area, and the type of emotion affect the estimation of other people's emotions from facial expressions.


The Clinical Analysis of the Nasal Septal Cartilage by Measurement Using Computed Tomography

  • Hwang, So Min;Lim, On;Hwang, Min Kyu;Kim, Min Wook;Lee, Jong Seo
    • Archives of Craniofacial Surgery / v.17 no.3 / pp.140-145 / 2016
  • Background: The nasal septal cartilage is often used as a donor graft in rhinoplasty operations but can vary widely in size across the patient population. As such, preoperative estimation of the cartilaginous area is important for patient counseling as well as operative planning. We aim to estimate septal cartilage area using facial computed tomography (CT) studies. Methods: The study was performed using facial CT images taken from 200 patients between January 2012 and July 2015. On the mid-sagittal image, the boundary of the cartilaginous septum was delineated from soft tissue using the mean difference in signal intensity (brightness), and the area within this boundary was calculated. The calculated septal cartilage area was then compared across age groups and sexes. Results: Overall, the mean area of the nasal septal cartilage was 8.18 cm², with a maximum of 12.42 cm² and a minimum of 4.89 cm². The cartilage area was larger in men than in women (p<0.05) and decreased with advancing age (p<0.05). Conclusion: Measuring the size of the septal cartilage using brightness differences is more precise and reliable than previously reported methods and can serve as a standard for preventing postoperative complications.
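
The measurement idea, thresholding a mid-sagittal slice by brightness and converting the pixel count to an area, can be sketched as follows. The threshold value and the 0.5 mm pixel spacing are illustrative assumptions; a real CT study would take the spacing from the DICOM header and delineate the boundary rather than threshold the whole slice.

```python
import numpy as np

def cartilage_area_cm2(image, threshold, pixel_mm=0.5):
    """Count pixels brighter than the threshold on a mid-sagittal
    CT slice and convert the pixel count to cm^2 using the
    (assumed) in-plane pixel spacing in mm."""
    n_pixels = int((image > threshold).sum())
    return n_pixels * (pixel_mm ** 2) / 100.0  # mm^2 -> cm^2

# Toy slice: a 20x20 bright region (value 200) on a dark background.
slice_ = np.zeros((100, 100))
slice_[40:60, 40:60] = 200
area = cartilage_area_cm2(slice_, threshold=100, pixel_mm=0.5)
# 400 px * 0.25 mm^2 = 100 mm^2 = 1 cm^2
```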

Face Detection Using Pixel Direction Code and Look-Up Table Classifier (픽셀 방향코드와 룩업테이블 분류기를 이용한 얼굴 검출)

  • Lim, Kil-Taek;Kang, Hyunwoo;Han, Byung-Gil;Lee, Jong Taek
    • IEMEK Journal of Embedded Systems and Applications / v.9 no.5 / pp.261-268 / 2014
  • Face detection is essential to the full automation of face image processing application systems such as face recognition, facial expression recognition, age estimation, and gender identification. Local image features such as Haar-like, LBP, and MCT, combined with the Adaboost algorithm, are known to be very effective for real-time face detection. In this paper, we present a face detection method using a local pixel direction code (PDC) feature and lookup table classifiers. The proposed PDC feature is much more effective at detecting faces than existing local binary structural features such as MCT and LBP. We found that both the classification rate and the detection rate of our method, at an equal false positive rate, are higher than those of conventional methods.
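
A simplified stand-in for the pixel direction code idea: quantize each pixel's local gradient direction into one of eight discrete codes. The paper's exact PDC definition may differ; this sketch only illustrates direction coding as a local feature.

```python
import numpy as np

def pixel_direction_codes(image, n_codes=8):
    """Quantize the local gradient direction at each pixel into one
    of n_codes discrete codes (a simplified stand-in for the
    paper's PDC feature)."""
    gy, gx = np.gradient(image.astype(float))
    angle = np.mod(np.arctan2(gy, gx), 2 * np.pi)  # [0, 2*pi)
    return (angle / (2 * np.pi) * n_codes).astype(int) % n_codes

# A horizontal ramp has its gradient along +x everywhere -> code 0.
ramp = np.tile(np.arange(8, dtype=float), (8, 1))
codes = pixel_direction_codes(ramp)
```

In a detector, histograms of such codes over local windows would feed the lookup table classifiers.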

Pupil Data Measurement and Social Emotion Inference Technology by using Smart Glasses (스마트 글래스를 활용한 동공 데이터 수집과 사회 감성 추정 기술)

  • Lee, Dong Won;Mun, Sungchul;Park, Sangin;Kim, Hwan-jin;Whang, Mincheol
    • Journal of Broadcast Engineering / v.25 no.6 / pp.973-979 / 2020
  • This study aims to objectively and quantitatively determine the social emotion of empathy by collecting pupillary responses. 52 subjects (26 men and 26 women) voluntarily participated in the experiment. After a 30-second baseline measurement, the experiment was divided into an imitation task and a spontaneous self-expression task. The two subjects interacted through facial expressions while their pupil images were recorded. The pupil data were processed with binarization and a circular edge detection algorithm, and an outlier detection and removal technique was used to reject eye blinks. The statistical significance of pupil size differences according to empathy was assessed with a normality test and an independent-samples t-test. The pupil size differed significantly between the empathy (M ± SD = 0.050 ± 1.817) and non-empathy (M ± SD = 1.659 ± 1.514) conditions (t(92) = -4.629, p = 0.000). A rule for empathy according to pupil size was defined through discriminant analysis and verified on 12 new subjects (6 men and 6 women, mean age ± SD = 22.84 ± 1.57 years), with an estimation accuracy of 75%. The proposed method uses non-contact camera technology and is expected to be utilized in various virtual reality applications with smart glasses.
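
The preprocessing chain described above, binarization of the eye region followed by outlier rejection of blink frames, can be sketched as below. The brightness threshold, frame values, and the 2-sigma rejection rule are assumptions for illustration; the study additionally fits a circular edge to the pupil.

```python
import numpy as np

def pupil_size(frame, threshold=50):
    """Binarize an eye-region frame and count dark pixels as a crude
    proxy for pupil area (the study also fits a circular edge)."""
    return int((frame < threshold).sum())

def reject_blinks(sizes, k=2.0):
    """Drop frames whose pupil size deviates more than k standard
    deviations from the mean -- a simple outlier rule for blinks."""
    sizes = np.asarray(sizes, dtype=float)
    mu, sd = sizes.mean(), sizes.std()
    return sizes[np.abs(sizes - mu) <= k * sd]

# A bright 4x4 frame with a dark 2x2 "pupil" in the middle.
frame = np.full((4, 4), 255, dtype=np.uint8)
frame[1:3, 1:3] = 0
area = pupil_size(frame)  # 4 dark pixels

# One blink frame (size 0) among otherwise stable measurements.
sizes = [100, 102, 98, 101, 0, 99]
kept = reject_blinks(sizes)
```

The cleaned per-frame sizes would then feed the t-test and discriminant analysis described in the abstract.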