Title/Summary/Keyword: Facial Visualization


Facial Data Visualization for Improved Deep Learning Based Emotion Recognition

  • Lee, Seung Ho
    • Journal of Information Science Theory and Practice / v.7 no.2 / pp.32-39 / 2019
  • A convolutional neural network (CNN) has been widely used in facial expression recognition (FER) because it can automatically learn discriminative appearance features from an expression image. To make full use of its discriminating capability, this paper suggests a simple but effective method for CNN based FER. Specifically, instead of an original expression image that contains facial appearance only, the expression image with facial geometry visualization is used as input to CNN. In this way, geometric and appearance features could be simultaneously learned, making CNN more discriminative for FER. A simple CNN extension is also presented in this paper, aiming to utilize geometric expression change derived from an expression image sequence. Experimental results on two public datasets (CK+ and MMI) show that CNN using facial geometry visualization clearly outperforms the conventional CNN using facial appearance only.
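
The core idea above, drawing a facial-geometry overlay onto the appearance image before it reaches the CNN, can be illustrated with a short sketch. The paper does not publish code, so the landmark source, the overlay style, and the tiny network below are assumptions for illustration only.

```python
# Sketch: fuse facial geometry with appearance in a single CNN input.
# Hypothetical: `landmarks` would come from any detector (e.g., dlib);
# the overlay style and TinyFERNet are illustrative assumptions.
import numpy as np
import cv2
import torch
import torch.nn as nn

def visualize_geometry(gray_face: np.ndarray, landmarks: np.ndarray) -> np.ndarray:
    """Draw landmark dots and connecting segments onto the expression image,
    so geometric and appearance cues share one input channel."""
    canvas = gray_face.copy()
    pts = landmarks.astype(int)
    for x, y in pts:
        cv2.circle(canvas, (int(x), int(y)), 1, 255, -1)   # landmark dots
    for i in range(len(pts) - 1):                          # simple polyline
        p = (int(pts[i][0]), int(pts[i][1]))
        q = (int(pts[i + 1][0]), int(pts[i + 1][1]))
        cv2.line(canvas, p, q, 255, 1)
    return canvas

class TinyFERNet(nn.Module):
    """Minimal CNN; stands in for whatever architecture the paper used."""
    def __init__(self, n_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, n_classes)     # assumes 64x64 input

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

face = np.zeros((64, 64), dtype=np.uint8)                  # placeholder image
marks = np.random.randint(8, 56, size=(20, 2))             # placeholder landmarks
fused = visualize_geometry(face, marks)
x = torch.from_numpy(fused).float().div(255).view(1, 1, 64, 64)
logits = TinyFERNet()(x)                                   # geometry+appearance input
```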

Phased Visualization of Facial Expressions Space using FCM Clustering (FCM 클러스터링을 이용한 표정공간의 단계적 가시화)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association / v.8 no.2 / pp.18-26 / 2008
  • This paper presents a phased visualization method for the facial expression space that enables the user to control the facial expression of a 3D avatar by selecting a sequence of facial frames from that space. The system based on this method creates a 2D facial expression space from approximately 2,400 facial expression frames, comprising a neutral expression and 11 motions. Facial expression control of the 3D avatar is carried out in real time as the user navigates through the expression space. Because expression control should proceed in phases, from broad expression changes down to fine details, the system needs a phased visualization method. To visualize the expression space in phases, this paper uses fuzzy clustering. Initially, the system creates 11 clusters from the space of 2,400 facial expressions; each time the phase level increases, the system doubles the number of clusters. Since a cluster center does not generally coincide with an actual expression in the space, the expression closest to each cluster center is chosen as its representative. We let users control the phased facial expressions of a 3D avatar with the system and evaluate the system based on the results.
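
A minimal sketch of the clustering step described above, assuming plain fuzzy c-means with Euclidean distances; the feature representation of an expression frame and all parameter values are illustrative assumptions, not the paper's implementation.

```python
# Sketch: fuzzy c-means over expression frames, then snap each cluster
# center to its nearest actual frame (the representative expression).
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Basic fuzzy c-means. X: (n_frames, dim). Returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                # rows are fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))               # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

frames = np.random.rand(2400, 60)                    # placeholder expression features
for level, c in enumerate([11, 22, 44]):             # clusters double per phase level
    centers, U = fuzzy_cmeans(frames, c)
    # Cluster centers rarely coincide with real expressions, so use the
    # closest actual frame as each cluster's representative.
    reps = np.argmin(np.linalg.norm(frames[:, None] - centers[None], axis=2), axis=0)
    print(f"level {level}: {c} clusters, representative frames {reps[:5]}...")
```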

Coding Style Score Visualization Using Facial Expression (얼굴 표정을 이용한 코딩 스타일 점수 시각화)

  • Ji, Jeong-Hoon;Lee, Yun-Jung;Woo, Gyun
    • Journal of KIISE:Software and Applications / v.37 no.7 / pp.578-583 / 2010
  • This paper presents an automated visualization system, called StyleVisualizer, which checks the coding style of source code and visualizes the coding style score as a facial expression. The system displays different facial expressions according to the evaluated style score; a smiling face, for example, means that the source code follows the coding standards correctly. To measure the effectiveness of StyleVisualizer, experiments were conducted with students in two classes of an applied computer course, comparing the ratio of coding-standard violations with and without StyleVisualizer. According to the experimental results, the error ratio with StyleVisualizer was more than 30% lower than without it. We expect that the system can encourage students to follow coding standards by giving them visualized facial feedback on their programs, resulting in highly readable code.
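
A toy sketch of the idea above: map a style score to a face. The scoring rule and the ASCII faces are invented for illustration; the paper's actual checker and rendering are not published here.

```python
# Sketch: turn a coding-style score into a facial expression.
# Thresholds and faces are illustrative assumptions.
def face_for_score(score: float) -> str:
    """score in [0, 100]: higher means better adherence to the standard."""
    if score >= 90:
        return ":-D"   # smiling: code follows the standard correctly
    if score >= 70:
        return ":-)"
    if score >= 50:
        return ":-|"
    return ":-("       # frowning: many style violations

def style_score(n_violations: int, n_lines: int) -> float:
    """Hypothetical score: percentage of lines free of style violations."""
    if n_lines == 0:
        return 100.0
    return max(0.0, 100.0 * (1 - n_violations / n_lines))

print(face_for_score(style_score(n_violations=3, n_lines=120)))  # ':-D'
```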

Facial Expression Control of 3D Avatar by Hierarchical Visualization of Motion Data (모션 데이터의 계층적 가시화에 의한 3차원 아바타의 표정 제어)

  • Kim, Sung-Ho;Jung, Moon-Ryul
    • The KIPS Transactions:PartA / v.11A no.4 / pp.277-284 / 2004
  • This paper presents a facial expression control method for a 3D avatar that enables the user to select a sequence of facial frames from the facial expression space, whose level of detail the user can choose hierarchically. Our system creates the facial expression space from about 2,400 captured facial frames. Because there are too many facial expressions to select from, the user has difficulty navigating the space, so we visualize the space hierarchically. To partition the space into a hierarchy of subspaces, we use fuzzy clustering. Initially, the system creates about 11 clusters from the space of 2,400 facial expressions. The cluster centers are displayed on a 2D screen and used as candidate key frames for key frame animation. When the user zooms in (zoom is discrete), it means that the user wants to see more details, so the system creates more clusters for the new zoom level; every time the zoom-in level increases, the system doubles the number of clusters. The user selects new key frames along the navigation path of the previous level and completes the facial expression control specification at the maximum zoom-in. The user can also go back to the previous level by zooming out and update the navigation path. We let users control the facial expression of a 3D avatar with the system and evaluate the system based on the results.
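
The zoom hierarchy described above can be sketched as clustering the same frames at a doubling resolution per level. K-means stands in here for the paper's fuzzy clustering, and the feature vectors are placeholders.

```python
# Sketch: discrete zoom levels over the expression space.
# Level k exposes 11 * 2**k cluster centers as candidate key frames.
# sklearn's KMeans is a stand-in for the paper's fuzzy clustering.
import numpy as np
from sklearn.cluster import KMeans

frames = np.random.rand(2400, 60)    # placeholder expression feature vectors

def keyframe_candidates(frames, level, base=11, seed=0):
    k = base * 2 ** level
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(frames)
    # Snap each center to its nearest real frame for display/selection.
    d = np.linalg.norm(frames[:, None] - km.cluster_centers_[None], axis=2)
    return np.unique(d.argmin(axis=0))

for level in range(3):                # zoom in twice
    cands = keyframe_candidates(frames, level)
    print(f"zoom level {level}: {len(cands)} candidate key frames")
```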

Hierarchical Visualization of the Space of Facial Expressions (얼굴 표정공간의 계층적 가시화)

  • Kim, Sung-Ho;Jung, Moon-Ryul
    • Journal of KIISE:Computer Systems and Theory / v.31 no.12 / pp.726-734 / 2004
  • This paper presents a facial animation method that enables the user to select a sequence of facial frames from the facial expression space, whose level of detail the user can choose hierarchically. Our system creates the facial expression space from about 2,400 captured facial frames. To represent the state of each expression, we use a distance matrix that holds the distances between pairs of feature points on the face. The shortest trajectories are found by dynamic programming. Because the space of facial expressions is multidimensional, we project it onto 2D space for navigation using multidimensional scaling (MDS). But because there are too many facial expressions to select from, the user has difficulty navigating the space, so we visualize the space hierarchically. To partition the space into a hierarchy of subspaces, we use fuzzy clustering. Initially, the system creates about 10 clusters from the space of 2,400 facial expressions; every time the level increases, the system doubles the number of clusters. The cluster centers are displayed on a 2D screen and used as candidate key frames for key frame animation. The user selects new key frames along the navigation path of the previous level and completes the key frame specification at the maximum level. We let animators create example animations with the system and evaluate the system based on the results.
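
A minimal sketch of the projection step above, assuming pairwise expression distances are already available; scikit-learn's metric MDS with a precomputed dissimilarity matrix is used as a stand-in for the paper's exact procedure.

```python
# Sketch: embed the expression space in 2D from a precomputed
# pairwise-distance matrix, as the entry above describes with MDS.
import numpy as np
from sklearn.manifold import MDS

# Placeholder: distance between every pair of expression states,
# e.g. derived from each frame's feature-point distance matrix.
n = 200                                    # small stand-in for ~2,400 frames
states = np.random.rand(n, 30)
D = np.linalg.norm(states[:, None] - states[None], axis=2)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
xy = mds.fit_transform(D)                  # 2D coordinates for on-screen navigation
print(xy.shape)                            # (200, 2)
```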

Preoperative Identification of Facial Nerve in Vestibular Schwannomas Surgery Using Diffusion Tensor Tractography

  • Choi, Kyung-Sik;Kim, Min-Su;Kwon, Hyeok-Gyu;Jang, Sung-Ho;Kim, Oh-Lyong
    • Journal of Korean Neurosurgical Society / v.56 no.1 / pp.11-15 / 2014
  • Objective : Facial nerve palsy is a common complication of treatment for vestibular schwannoma (VS), so preserving facial nerve function is important. Preoperative visualization of the course of the facial nerve in relation to the VS could help prevent injury to the nerve during surgery. In this study, we evaluate the accuracy of diffusion tensor tractography (DTT) for preoperative identification of the facial nerve. Methods : We prospectively collected data from 11 patients with VS who underwent preoperative DTT of the facial nerve. Imaging results were correlated with intraoperative findings. Postoperative DTT was performed at 3 months after surgery. Facial nerve function was clinically evaluated according to the House-Brackmann (HB) facial nerve grading system. Results : The facial nerve courses on preoperative tractography were entirely consistent with the intraoperative findings in all patients. The facial nerve was located on the anterior tumor surface in 5 cases, anteroinferior in 3 cases, anterosuperior in 2 cases, and posteroinferior in 1 case. Postoperative facial nerve tractography confirmed preservation of the nerve in all patients, and no patient had severe facial paralysis at one year after surgery. Conclusion : This study shows that DTT for preoperative identification of the facial nerve in VS surgery can be an accurate and useful radiological method and could help improve facial nerve preservation.

A Novel Cross Channel Self-Attention based Approach for Facial Attribute Editing

  • Xu, Meng;Jin, Rize;Lu, Liangfu;Chung, Tae-Sun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.6 / pp.2115-2127 / 2021
  • Although significant progress has been made in synthesizing visually realistic face images with Generative Adversarial Networks (GANs), there is still a lack of effective approaches that provide fine-grained control over the generation process for semantic facial attribute editing. In this work, we propose a novel cross-channel self-attention based generative adversarial network (CCA-GAN), which weights the importance of multiple feature channels and achieves pixel-level feature alignment and conversion, reducing the impact on irrelevant attributes while editing the target attributes. Evaluation results show that CCA-GAN outperforms state-of-the-art models on the CelebA dataset, reducing Fréchet Inception Distance (FID) and Kernel Inception Distance (KID) by 15~28% and 25~100%, respectively. Furthermore, visualization of generated samples confirms the disentanglement effect of the proposed model.
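
The paper's exact CCA-GAN architecture is not reproduced here; the sketch below shows a generic channel-wise self-attention block of the kind the abstract describes, where attention weights are computed across channels rather than spatial positions. The shapes and the residual scaling are illustrative assumptions.

```python
# Sketch: channel-wise self-attention (attention over channels, not pixels).
# A generic block in the spirit of the abstract; not the paper's exact module.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelSelfAttention(nn.Module):
    def __init__(self):
        super().__init__()
        # Learned residual scale, starting at 0 so the block is initially identity.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):                              # x: (B, C, H, W)
        b, c, h, w = x.shape
        flat = x.view(b, c, h * w)                     # each channel is one "token"
        energy = torch.bmm(flat, flat.transpose(1, 2)) # (B, C, C) channel affinities
        attn = F.softmax(energy, dim=-1)               # weight channel importance
        out = torch.bmm(attn, flat).view(b, c, h, w)
        return self.gamma * out + x                    # residual connection

x = torch.randn(2, 64, 32, 32)
y = ChannelSelfAttention()(x)
print(y.shape)                                         # torch.Size([2, 64, 32, 32])
```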

A Study on Facial Visualization System based on one's Personality applied with the Oriental Physiognomy (동양 관상학을 적용한 성격별 얼굴 설계 시스템에 관한 연구)

  • Kang, Seon-Hee;Kim, Hyo-D.;Lee, Kyung-Won
    • Proceedings of the HCI Society of Korea Conference / 2008.02b / pp.346-357 / 2008
  • Physiognomy is the study of methods for judging a person's fate, personality, and lifespan from his or her face. The physiognomy discussed in this paper refers to Oriental physiognomy, in particular the discipline of predicting personality and fortune from partial facial features and their overall harmony. This study concerns the construction of a personality-based face design system applying Oriental physiognomy. First, for a universal classification of personality, 161 personality vocabulary terms covered by the MBTI were reduced to 39 representative terms through cluster analysis. To represent the semantic distances among the extracted representative personality terms, survey data were analyzed with multidimensional scaling to map the relations among the terms in a 2D space. Second, for facial visualization, the elements that determine facial morphology were classified into face shape, eyes, nose, mouth, forehead, and eyebrows; the personality associations of the 29 sub-elements of these six facial forms were organized physiognomically based on the facial characteristics of Koreans and encoded numerically. The facial element forms for each representative personality term were then combined into a single face according to these codes, visualizing 39 faces, and finally the personality-based face design system 'FACE' was built. By implementing a system that produces a face form matching a person's personality traits, this study can provide objective assistance not only to general users but also to animation character developers, and it showed the possibility of broadening the scope of application of traditional physiognomy.

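As an illustration of the numeric coding scheme described above, the sketch below assembles a face specification from per-element codes; all codes, element names, and the personality mapping are invented placeholders, since the paper's actual 29-sub-element coding table is not reproduced here.

```python
# Sketch: assemble a face specification from numerically coded elements.
# Element names, codes, and the personality->code table are placeholders;
# the paper's real table covers 6 element groups with 29 sub-elements.
FACE_ELEMENTS = ("face_shape", "eyes", "nose", "mouth", "forehead", "eyebrows")

# Hypothetical physiognomic code table: personality term -> element codes.
PERSONALITY_CODES = {
    "meticulous": {"face_shape": 2, "eyes": 4, "nose": 1,
                   "mouth": 3, "forehead": 2, "eyebrows": 5},
    "outgoing":   {"face_shape": 1, "eyes": 2, "nose": 3,
                   "mouth": 1, "forehead": 4, "eyebrows": 2},
}

def design_face(personality: str) -> list[int]:
    """Combine the coded element forms for one personality term into
    a single face specification, one code per facial element."""
    codes = PERSONALITY_CODES[personality]
    return [codes[e] for e in FACE_ELEMENTS]

print(design_face("meticulous"))   # [2, 4, 1, 3, 2, 5]
```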

Realtime Facial Expression Control and Projection of Facial Motion Data using Locally Linear Embedding (LLE 알고리즘을 사용한 얼굴 모션 데이터의 투영 및 실시간 표정제어)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association / v.7 no.2 / pp.117-124 / 2007
  • This paper describes a methodology that enables animators to create facial expression animations and control facial expressions in real time by reusing motion capture data. To achieve this, we define a representation of facial expression states based on facial motion data. By distributing the facial expressions into an intuitive space using the LLE algorithm, animations can be created, or expressions controlled in real time, from the facial expression space through a user interface. In this paper, approximately 2,400 facial expression frames are used to generate the expression space. By navigating the expression space projected onto a 2D plane and selecting a series of expressions from it, animations can be created or the expressions of 3D avatars controlled in real time. To distribute the approximately 2,400 expression frames into an intuitive space, the state of each expression must be represented; for this, we use the distance matrix that holds the distances between pairs of feature points on the face. The LLE algorithm then projects these data onto a 2D plane for visualization. We had animators control facial expressions and create animations with the system's user interface, and the paper evaluates the results of this experiment.
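
A minimal sketch of the projection step above using scikit-learn's locally linear embedding; the per-frame feature vectors and the neighbor count are assumptions for illustration.

```python
# Sketch: project expression frames onto a 2D plane with LLE,
# as the entry above describes. Features and parameters are placeholders.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# Placeholder: one feature vector per frame, e.g. flattened pairwise
# feature-point distances derived from each frame's distance matrix.
frames = np.random.rand(2400, 45)

lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, random_state=0)
xy = lle.fit_transform(frames)        # 2D layout for the expression-space UI
print(xy.shape)                       # (2400, 2)
```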

Möbius Syndrome Demonstrated by the High-Resolution MR Imaging: a Case Report and Review of Literature

  • Hwang, Minhee;Baek, Hye Jin;Ryu, Kyeong Hwa;Choi, Bo Hwa;Ha, Ji Young;Do, Hyun Jung
    • Investigative Magnetic Resonance Imaging / v.23 no.2 / pp.167-171 / 2019
  • Möbius syndrome is a rare congenital condition characterized by abducens and facial nerve palsy, resulting in limited lateral gaze movement and facial diplegia. To our knowledge, however, there have been few studies evaluating the cranial nerves on MR imaging in Möbius syndrome. Herein, we describe a rare case of Möbius syndrome presenting with limited lateral gaze and weakness of facial expression since the neonatal period. In this case, high-resolution MR imaging played a key role in diagnosing Möbius syndrome by directly visualizing the corresponding cranial nerve abnormalities.