Title/Summary/Keyword: facial features


Gaze Detection by Computing Facial Rotation and Translation (얼굴의 회전 및 이동 분석에 의한 응시 위치 파악)

  • Lee, Jeong-Jun;Park, Kang-Ryoung;Kim, Jai-Hie
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.5 / pp.535-543 / 2002
  • In this paper, we propose a new gaze detection method using 2-D facial images captured by a camera mounted on top of the monitor. We consider only facial rotation and translation, not eye movements. The proposed method computes the gaze point caused by facial rotation and the amount of facial translation separately; combining the two yields the final gaze point on the monitor screen. We detect the gaze point caused by facial rotation using a neural network (a multi-layered perceptron) whose inputs are the 2-D geometric changes of the facial feature points, and estimate the amount of facial translation with image processing algorithms in real time. Experimental results show that the RMS error between the computed and real gaze positions is about 2.11 inches when the user sits about 50-70 cm from a 19-inch monitor. The processing time is about 0.7 seconds on a Pentium PC (233 MHz) with 320×240-pixel images.
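
A minimal sketch of the mapping step described in this abstract, assuming stand-in data: an MLP regresses the on-screen gaze point from the 2-D geometric changes of tracked facial feature points. The array shapes, feature count, and scikit-learn estimator are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Stand-in data: each row holds the (dx, dy) displacements of 4 tracked
# facial feature points (8 inputs); targets are gaze points on the screen.
X_train = rng.normal(size=(500, 8))
y_train = rng.uniform(0.0, 19.0, size=(500, 2))   # (x, y) in screen inches

# Multi-layered perceptron mapping feature geometry to the gaze point.
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)

# Estimate the gaze point for a new frame's feature displacements.
print("estimated gaze (x, y):", mlp.predict(rng.normal(size=(1, 8)))[0])
```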

Acquired facial lipoatrophy: A report of 3 cases with imaging features

  • Lee, Chena;Kim, Jo-Eun;Yi, Won-Jin;Heo, Min-Suk;Lee, Sam-Sun;Han, Sang-Sun;Choi, Soon-Chul;Huh, Kyung-Hoe
    • Imaging Science in Dentistry / v.50 no.3 / pp.255-260 / 2020
  • Acquired facial lipoatrophy is a rare disease with an unclear etiology and pathological pathway. Its distinct causative factors have not been elucidated, but it is suspected to be associated with immune system-related diseases, most notably AIDS. Although the management of facial lipoatrophy is very important for patients' social lives and mental health, no treatment framework has been developed because the disease manifestation is so poorly understood. The present case report provides sequential imaging to visualize the disease progression. The clinical backgrounds of the patients are also introduced, helping characterize this disease entity more clearly for maxillofacial specialists.

Improvement of Facial Emotion Recognition Performance through Addition of Geometric Features (기하학적 특징 추가를 통한 얼굴 감정 인식 성능 개선)

  • Hoyoung Jung;Hee-Il Hahn
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.1 / pp.155-161 / 2024
  • In this paper, we propose a new model that adds landmark information as a feature vector to an existing CNN-based facial emotion classification model. Facial emotion classification with CNN-based models has been studied in various ways, but recognition rates remain low. To improve on CNN-based models, we propose an algorithm that increases facial expression classification accuracy by combining the CNN model with a fully connected network over landmarks obtained via the active shape model (ASM). Including landmarks in the CNN model improved the recognition rate by several percentage points, and experiments confirmed that adding FACS-based action units to the landmarks yields further improvement.
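
A minimal sketch of the fusion idea, not the authors' network: CNN image features are concatenated with a flattened landmark vector (such as ASM would supply) before the emotion classifier. The layer sizes, 48×48 grayscale input, 68-landmark count, and 7 emotion classes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LandmarkFusionNet(nn.Module):
    """Concatenate CNN image features with a landmark feature vector."""
    def __init__(self, n_landmarks=68, n_classes=7):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.landmark_fc = nn.Sequential(nn.Linear(n_landmarks * 2, 64), nn.ReLU())
        self.classifier = nn.Linear(64 + 64, n_classes)

    def forward(self, image, landmarks):
        # Fuse appearance (CNN) and geometry (landmark) branches.
        fused = torch.cat([self.cnn(image), self.landmark_fc(landmarks)], dim=1)
        return self.classifier(fused)

model = LandmarkFusionNet()
logits = model(torch.randn(4, 1, 48, 48), torch.randn(4, 136))
print(logits.shape)  # torch.Size([4, 7])
```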

Audio and Video Bimodal Emotion Recognition in Social Networks Based on Improved AlexNet Network and Attention Mechanism

  • Liu, Min;Tang, Jun
    • Journal of Information Processing Systems / v.17 no.4 / pp.754-771 / 2021
  • In continuous dimensional emotion recognition, the parts that highlight emotional expression differ across modalities, and different modalities influence the estimated emotional state differently. This paper therefore studies the fusion of the two most important modalities in emotion recognition (voice and visual expression) and proposes a bimodal emotion recognition method that combines an improved AlexNet network with an attention mechanism. After simple preprocessing of the audio and video signals, prior knowledge is first used to extract audio features. Facial expression features are then extracted by the improved AlexNet network. Finally, a multimodal attention mechanism fuses the facial expression and audio features, and an improved loss function mitigates the missing-modality problem, improving the robustness of the model and its emotion recognition performance. Experimental results show that the concordance correlation coefficients of the proposed model in the arousal and valence dimensions were 0.729 and 0.718, respectively, which is superior to several comparison algorithms.
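
A minimal sketch of the attention-based fusion step under stated assumptions: learned attention weights balance the audio and facial-expression feature vectors before regressing arousal and valence. The 128-dimensional features and single-layer attention are placeholders, not the paper's improved AlexNet pipeline.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Weight audio vs. facial features with attention, then regress
    the two emotion dimensions (arousal, valence)."""
    def __init__(self, dim=128):
        super().__init__()
        self.attn = nn.Linear(dim, 1)   # one attention score per modality
        self.head = nn.Linear(dim, 2)   # arousal and valence outputs

    def forward(self, audio_feat, video_feat):   # each: (batch, dim)
        stacked = torch.stack([audio_feat, video_feat], dim=1)  # (batch, 2, dim)
        weights = torch.softmax(self.attn(stacked), dim=1)      # (batch, 2, 1)
        fused = (weights * stacked).sum(dim=1)                  # weighted sum
        return self.head(fused)

fusion = AttentionFusion()
out = fusion(torch.randn(8, 128), torch.randn(8, 128))
print(out.shape)  # torch.Size([8, 2])
```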

Symmetric Shape Deformation Considering Facial Features and Attractiveness Improvement (얼굴 특징을 고려한 대칭적인 형상 변형과 호감도 향상)

  • Kim, Jeong-Sik;Shin, Il-Kyu;Choi, Soo-Mi
    • Journal of the Korea Computer Graphics Society / v.16 no.2 / pp.29-37 / 2010
  • In this paper, we present a novel deformation method that alleviates the asymmetry of a scanned 3D face while taking facial features into account. To handle detailed areas of the face, we developed a new local 3D shape descriptor based on facial features and surface curvatures. Our shape descriptor improves accuracy when deforming a 3D face toward a symmetric configuration, because it provides accurate point pairing with respect to the plane of symmetry. In addition, we use a point-based representation throughout all stages of symmetrization, which makes the discrete processing steps much easier to support. Finally, we performed a statistical analysis to assess subjects' preference for faces symmetrized by our approach.
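
A minimal sketch of the symmetrization idea, with plain nearest-neighbor pairing standing in for the paper's curvature- and feature-based descriptor: each point is paired with the point nearest its mirror image across an assumed symmetry plane (x = 0 here) and moved toward the pair midpoint. All data and parameters are placeholders.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
points = rng.normal(size=(1000, 3))          # stand-in scanned face points
mirror = np.array([-1.0, 1.0, 1.0])          # reflection across the x = 0 plane

# Pair each point with the point nearest to its mirror image.
tree = cKDTree(points)
_, pair_idx = tree.query(points * mirror)

# Move each point toward the midpoint of itself and its reflected partner.
midpoints = 0.5 * (points + points[pair_idx] * mirror)
alpha = 0.5                                  # deformation strength in [0, 1]
symmetrized = (1.0 - alpha) * points + alpha * midpoints
print("mean displacement:", np.linalg.norm(symmetrized - points, axis=1).mean())
```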

Robust 3D Facial Landmark Detection Using Angular Partitioned Spin Images (각 분할 스핀 영상을 사용한 3차원 얼굴 특징점 검출 방법)

  • Kim, Dong-Hyun;Choi, Kang-Sun
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.5 / pp.199-207 / 2013
  • Spin images, which efficiently represent the surface features of 3D mesh models, have been used to detect facial landmark points. However, at a given point, different normal directions can lead to quite different spin images. Moreover, since 3D points are projected onto the 2D (α-β) space during spin image generation, surface features cannot be described unambiguously. In this paper, we present a method for detecting 3D facial landmarks using improved spin images obtained by partitioning the search area by angle. Generating sub-spin-images for angularly partitioned 3D spaces yields more distinctive features for the corresponding surfaces and improves landmark detection performance. To make the spin images robust to inaccurate surface normal directions, we average each surface normal with its neighboring normal vectors. Experimental results show that the proposed method increases landmark detection accuracy by about 34% over a conventional method.
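
A minimal sketch of angular partitioning on a synthetic point cloud: neighbors of a basis point are split by azimuth around the normal, and one sub-spin-image (an (α, β) histogram) is built per sector. The bin counts, support size, and in-plane reference axis are illustrative choices, not the paper's settings.

```python
import numpy as np

def spin_image(points, p, n, bins=16, size=2.0):
    """Classic spin image at basis point p with normal n: a 2-D histogram
    of (alpha, beta) cylindrical coordinates of the neighboring points."""
    n = n / np.linalg.norm(n)
    d = points - p
    beta = d @ n                                           # height along normal
    alpha = np.linalg.norm(d - np.outer(beta, n), axis=1)  # radial distance
    hist, _, _ = np.histogram2d(alpha, beta, bins=bins,
                                range=[[0, size], [-size, size]])
    return hist

def angular_partitioned_spin_images(points, p, n, u, parts=4, **kw):
    """Split neighbors by azimuth around the normal and build one
    sub-spin-image per angular sector, preserving directional detail
    that the 2-D (alpha, beta) projection would otherwise lose."""
    n = n / np.linalg.norm(n)
    u = u - (u @ n) * n                 # in-plane reference axis
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    d = points - p
    theta = np.arctan2(d @ v, d @ u) % (2 * np.pi)   # azimuth per neighbor
    sector = (theta / (2 * np.pi / parts)).astype(int).clip(0, parts - 1)
    return [spin_image(points[sector == s], p, n, **kw) for s in range(parts)]

pts = np.random.default_rng(2).normal(size=(2000, 3))
subs = angular_partitioned_spin_images(pts, np.zeros(3),
                                       np.array([0.0, 0.0, 1.0]),
                                       np.array([1.0, 0.0, 0.0]))
print([s.shape for s in subs])
```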

Emotion Recognition and Expression System of User using Multi-Modal Sensor Fusion Algorithm (다중 센서 융합 알고리즘을 이용한 사용자의 감정 인식 및 표현 시스템)

  • Yeom, Hong-Gi;Joo, Jong-Tae;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.1 / pp.20-26 / 2008
  • As intelligent robots and computers become more widespread, human-robot (computer) interaction is growing in importance, and emotion recognition and expression are indispensable to that interaction. In this paper, we first extract emotional features from speech signals and facial images. We then apply Bayesian learning (BL) and principal component analysis (PCA), and finally classify five emotion patterns (normal, happy, angry, surprised, and sad). To enhance the emotion recognition rate, we experiment with both decision fusion and feature fusion. In decision fusion, the output values of each recognition system are combined through fuzzy membership functions. In feature fusion, superior features are selected by sequential forward selection (SFS) and fed to a multi-layer perceptron (MLP) neural network that classifies the five emotion patterns. The recognized result is then applied to a 2D facial shape to express the emotion.
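
A minimal sketch of the feature-fusion path on stand-in data: sequential forward selection (SFS) picks a feature subset, which then feeds an MLP over the five emotion patterns. The scikit-learn selector, feature counts, and random data are assumptions, not the authors' code.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 10))      # stand-in fused speech + facial features
y = rng.integers(0, 5, size=200)    # five emotion patterns

# Sequential forward selection keeps the most discriminative features.
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
sfs = SequentialFeatureSelector(mlp, n_features_to_select=4, direction="forward")
sfs.fit(X, y)

# Train the final MLP on the selected features only.
mlp.fit(sfs.transform(X), y)
print("selected feature indices:", np.flatnonzero(sfs.get_support()))
```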

Gaze Detection System by IR-LED based Camera (적외선 조명 카메라를 이용한 시선 위치 추적 시스템)

  • Park, Kang-Ryoung
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.4C / pp.494-504 / 2004
  • Research on gaze detection has advanced considerably and has many applications. Most previous work relies solely on image processing algorithms, so it takes substantial processing time and is subject to many constraints. In this work, we implement gaze detection as a computer vision system built around a single IR-LED-based camera. To detect the gaze position, we first locate facial features, which is performed effectively with the IR-LED-based camera and an SVM (Support Vector Machine). When a user gazes at a position on the monitor, we compute the 3D positions of those features using 3D rotation and translation estimation and an affine transform. The gaze position due to facial movement is then computed from the normal vector of the plane determined by the computed 3D feature positions. In addition, we use a trained neural network to detect the gaze position due to eye movement. Experimental results show that we can obtain the facial and eye gaze position on a monitor with an RMS error of about 4.2 cm between the computed and real positions.
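
A minimal sketch of the geometric step under assumed coordinates: the normal of the plane through three reconstructed 3-D feature points is intersected with the monitor plane (taken as z = 0 here) to give the facial gaze point. The point coordinates are placeholders for the estimated values.

```python
import numpy as np

# Assumed 3-D positions (cm) of three facial feature points; the monitor
# plane is taken as z = 0. Both choices are placeholders for illustration.
p1 = np.array([0.0, 0.0, 60.0])
p2 = np.array([6.0, 0.0, 61.0])
p3 = np.array([3.0, 8.0, 62.0])

# Normal vector of the facial plane through the three points.
normal = np.cross(p2 - p1, p3 - p1)
normal /= np.linalg.norm(normal)
center = (p1 + p2 + p3) / 3.0       # ray origin on the face

# Intersect the ray center + t * normal with the monitor plane z = 0.
t = -center[2] / normal[2]
gaze = center + t * normal
print("facial gaze point on monitor (x, y):", gaze[:2])
```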

Guillain-Barré syndrome associated with SARS-CoV-2 vaccination: how is it different? a systematic review and individual participant data meta-analysis

  • Yerasu Muralidhar Reddy;Jagarlapudi MK Murthy;Syed Osman;Shyam Kumar Jaiswal;Abhinay Kumar Gattu;Lalitha Pidaparthi;Santosh Kumar Boorgu;Roshan Chavan;Bharadwaj Ramakrishnan;Sreekanth Reddy Yeduguri
    • Clinical and Experimental Vaccine Research / v.12 no.2 / pp.143-155 / 2023
  • Purpose: An association between Guillain-Barré syndrome (GBS) and severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) vaccination has been reported. We aimed to summarize the clinical features of GBS associated with SARS-CoV-2 vaccination and determine the contrasting features from coronavirus disease-19 (COVID-19)-associated GBS and GBS following other causes. Materials and Methods: We performed a PubMed search for articles published between 1 December 2020 and 27 January 2022 using search terms related to "SARS-CoV-2 vaccination" and "GBS". Reference searching of the eligible studies was performed. Sociodemographic and vaccination data, clinical and laboratory features, and outcomes were extracted. We compared these findings with the post-COVID-19 GBS and International GBS Outcome Study (IGOS) (GBS from other causes) cohorts. Results: We included 100 patients in the analysis. Mean age was 56.88 years, and 53% were male. Sixty-eight received non-replicating viral vector vaccines and 30 received messenger RNA (mRNA) vaccines. The median interval between vaccination and GBS onset was 11 days. Limb weakness, facial palsy, sensory symptoms, dysautonomia, and respiratory insufficiency were seen in 78.65%, 53.3%, 77.4%, 23.5%, and 25%, respectively. The commonest clinical and electrodiagnostic subtypes were the sensory-motor variant (68%) and acute inflammatory demyelinating polyneuropathy (61.4%), respectively, and 43.9% had a poor outcome (GBS outcome score ≥3). Pain was more common with viral vector vaccines, while mRNA vaccine recipients had more severe disease at presentation (Hughes grade ≥3). Sensory phenomena and facial weakness were more common in the vaccination cohort than in the post-COVID-19 and IGOS cohorts. Conclusion: There are distinct differences between GBS associated with SARS-CoV-2 vaccination and GBS due to other causes. Facial weakness and sensory symptoms were more common in the former, and outcomes were poorer.

A Facial Feature Area Extraction Method for Improving Face Recognition Rate in Camera Image (일반 카메라 영상에서의 얼굴 인식률 향상을 위한 얼굴 특징 영역 추출 방법)

  • Kim, Seong-Hoon;Han, Gi-Tae
    • KIPS Transactions on Software and Data Engineering / v.5 no.5 / pp.251-260 / 2016
  • Face recognition is a technology that extracts features from a facial image, learns those features through various algorithms, and recognizes a person by comparing the learned data with the features of a new facial image. Various processing methods are required to improve the face recognition rate. In the training stage, features must be extracted from a facial image, and linear discriminant analysis (LDA) is the most widely used method for this. LDA represents a facial image as points in a high-dimensional space and extracts facial features that distinguish a person by analyzing the class information and the distribution of points. Since the position of a point in that space is determined by the pixel values of the facial image, LDA can extract incorrect facial features if the image includes unnecessary or frequently changing areas. In particular, when a camera image is used for face recognition, the size of the face varies with the distance between the face and the camera, degrading the recognition rate. To solve this problem, this paper detects the facial area from a camera image, removes unnecessary areas using the facial feature area computed via a Gabor filter, and normalizes the size of the facial area. Facial features are extracted through LDA from the normalized facial image and learned with an artificial neural network for face recognition. As a result, the face recognition rate improved by approximately 13% over the existing method that includes unnecessary areas.
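
A minimal sketch of this pipeline on synthetic data: a Gabor response marks the feature-rich facial region, the crop is size-normalized, and LDA extracts discriminative features. The threshold, crop rule, sizes, and random labels are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from skimage.filters import gabor
from skimage.transform import resize

rng = np.random.default_rng(4)
face = rng.random((120, 100))                    # stand-in detected face area

# Gabor response highlights the feature-rich region; crop its bounding box.
real, _ = gabor(face, frequency=0.2)
rows, cols = np.nonzero(np.abs(real) > np.abs(real).mean())
crop = face[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
norm = resize(crop, (64, 64))                    # size normalization

# LDA over a batch of such normalized faces (random labels as placeholders).
X = rng.random((100, 64 * 64))
y = rng.integers(0, 5, size=100)
features = LinearDiscriminantAnalysis().fit_transform(X, y)
print(features.shape)                            # (100, n_classes - 1)
```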