• Title/Summary/Keyword: Facial expression factors

Search Results: 50

A Noisy-Robust Approach for Facial Expression Recognition

  • Tong, Ying;Shen, Yuehong;Gao, Bin;Sun, Fenggang;Chen, Rui;Xu, Yefeng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.4
    • /
    • pp.2124-2148
    • /
    • 2017
  • Accurate facial expression recognition (FER) requires reliable signal filtering and effective feature extraction. Considering these requirements, this paper presents a novel approach for FER which is robust to noise. The main contributions of this work are: First, to preserve texture details in facial expression images and remove image noise, we improved the anisotropic diffusion filter by adjusting the diffusion coefficient according to two factors, namely, the gray value difference between the object and the background and the gradient magnitude of the object. The improved filter can effectively distinguish facial muscle deformation from facial noise in face images. Second, to further improve robustness, we propose a new feature descriptor based on a combination of the Histogram of Oriented Gradients with the Canny operator (Canny-HOG), which can represent the precise deformation of eyes, eyebrows and lips for FER. Third, Canny-HOG's block and cell sizes are adjusted to reduce feature dimensionality and make the classifier less prone to overfitting. Our method was tested on images from the JAFFE and CK databases. Experimental results in L-O-Sam-O and L-O-Sub-O modes demonstrated the effectiveness of the proposed method. Moreover, the recognition rate of this method is not significantly affected under Gaussian noise and salt-and-pepper noise conditions.
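The filtering step above builds on classic anisotropic diffusion. As an illustrative sketch, here is the standard Perona-Malik scheme in NumPy (not the paper's improved diffusion coefficient, which additionally uses the object/background gray-value difference):

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=30.0, lam=0.2):
    """Classic Perona-Malik anisotropic diffusion.

    The edge-stopping function g = exp(-(|grad|/kappa)^2) shrinks near
    strong edges, so edges (e.g. facial muscle deformation) survive
    while flat, noisy regions are smoothed. Boundaries are handled
    periodically via np.roll, for brevity only.
    """
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)  # diffusion coefficient
    for _ in range(n_iter):
        # finite differences toward the four neighbours
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # weighted update; lam <= 0.25 keeps the scheme stable
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

On a noisy step image, this reduces the noise variance in flat areas while leaving the step edge essentially intact, which is the behavior the abstract attributes to its improved filter.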

A Study on Facial expressions for the developing 3D-Character Contents (3D캐릭터콘텐츠제작을 위한 표정에 관한 연구)

  • 윤봉식;김영순
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2004.05a
    • /
    • pp.478-484
    • /
    • 2004
  • This study is fundamental research on facial expression for developing 3D character contents, treating expression as a kind of non-linguistic sign and focusing on how a person's emotion factors are expressed. It contributes a framework for the symbolic analysis of human emotions along with a general review of expression. The human face is the most complex and versatile face of any species. For humans, the face is a rich and versatile instrument serving many different functions. It serves as a window displaying one's motivational state, which makes one's behavior more predictable and understandable to others and improves communication. The face can be used to supplement verbal communication: a prompt facial display can reveal the speaker's attitude about the information being conveyed. Alternatively, the face can complement verbal communication, such as a lifting of the eyebrows to lend additional emphasis to a stressed word. Facial expression plays an important role in the digital visual context. This study presents a framework of facial expression categories for the effective production of cartoons and animations that appeal to human visual emotion.


The improved facial expression recognition algorithm for detecting abnormal symptoms in infants and young children (영유아 이상징후 감지를 위한 표정 인식 알고리즘 개선)

  • Kim, Yun-Su;Lee, Su-In;Seok, Jong-Won
    • Journal of IKEEE
    • /
    • v.25 no.3
    • /
    • pp.430-436
    • /
    • 2021
  • The non-contact body temperature measurement system, which uses optical and thermal imaging cameras, is one of the key tools for managing febrile diseases in mass facilities. Conventional systems can only be used for simple body temperature measurement in the face area, because they rely solely on a deep learning-based face detection algorithm. So, there is a limit to detecting abnormal symptoms in infants and young children, who have difficulty expressing their opinions. This paper proposes an improved facial expression recognition algorithm for detecting abnormal symptoms in infants and young children. The proposed method uses an object detection model to detect infants and young children in an image, then acquires the coordinates of the eyes, nose, and mouth, which are key elements of facial expression recognition. Finally, facial expression recognition is performed by applying a selective sharpening filter based on the obtained coordinates. According to the experimental results, the proposed algorithm improved recognition by 2.52%, 1.12%, and 2.29%, respectively, for the three expressions of neutral, happy, and sad in the UTK dataset.
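The idea of sharpening only around landmark coordinates can be illustrated with a plain unsharp mask restricted to a patch; the function name, patch size, and parameters below are hypothetical stand-ins, not the paper's actual filter:

```python
import numpy as np

def unsharp_region(img, center, size=8, amount=1.0):
    """Sharpen only a square patch around a landmark (e.g. an eye).

    Hypothetical sketch of a 'selective sharpening filter': an unsharp
    mask (original + amount * (original - blur)) applied to the patch,
    leaving the rest of the image untouched.
    """
    out = img.astype(float).copy()
    r, c = center
    r0, r1 = max(r - size, 0), min(r + size, img.shape[0])
    c0, c1 = max(c - size, 0), min(c + size, img.shape[1])
    patch = out[r0:r1, c0:c1]
    # 3x3 box blur via shifted copies (patch borders wrap, for brevity)
    blur = sum(np.roll(np.roll(patch, dr, 0), dc, 1)
               for dr in (-1, 0, 1) for dc in (-1, 0, 1)) / 9.0
    out[r0:r1, c0:c1] = patch + amount * (patch - blur)  # unsharp mask
    return out
```

Applied per landmark (eyes, nose, mouth), this boosts local contrast only where the expression cues live, which is the intuition behind coordinate-based selective sharpening.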

FGW-FER: Lightweight Facial Expression Recognition with Attention

  • Huy-Hoang Dinh;Hong-Quan Do;Trung-Tung Doan;Cuong Le;Ngo Xuan Bach;Tu Minh Phuong;Viet-Vu Vu
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.9
    • /
    • pp.2505-2528
    • /
    • 2023
  • The field of facial expression recognition (FER) has been actively researched to improve human-computer interaction. In recent years, deep learning techniques have gained popularity for addressing FER, with numerous studies proposing end-to-end frameworks that stack or widen significant convolutional neural network layers. While this has led to improved performance, it has also resulted in larger model sizes and longer inference times. To overcome this challenge, our work introduces a novel lightweight model architecture. The architecture incorporates three key factors: Depth-wise Separable Convolution, Residual Block, and Attention Modules. By doing so, we aim to strike a balance between model size, inference speed, and accuracy in FER tasks. Through extensive experimentation on popular benchmark FER datasets, our proposed method has demonstrated promising results. Notably, it stands out due to its substantial reduction in parameter count and faster inference time, while maintaining accuracy levels comparable to other lightweight models discussed in the existing literature.
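The parameter savings from depthwise separable convolution, the first of the three factors named above, can be seen with a quick count (standard textbook formulas, not figures from the paper):

```python
def conv_params(k, c_in, c_out):
    """Parameters of a standard k x k convolution (bias omitted)."""
    return k * k * c_in * c_out

def dw_separable_params(k, c_in, c_out):
    """Depthwise k x k (one filter per input channel) + 1x1 pointwise."""
    return k * k * c_in + c_in * c_out

# Example: a 3x3 layer mapping 64 channels to 128
std = conv_params(3, 64, 128)          # 73728 parameters
sep = dw_separable_params(3, 64, 128)  # 576 + 8192 = 8768 parameters
ratio = std / sep                      # roughly 8.4x fewer parameters
```

This roughly k^2-fold reduction is what lets such architectures shrink model size and inference time while residual blocks and attention modules recover accuracy.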

A Study on The Expression of Digital Eye Contents for Emotional Communication (감성 커뮤니케이션을 위한 디지털 눈 콘텐츠 표현 연구)

  • Lim, Yoon-Ah;Lee, Eun-Ah;Kwon, Jieun
    • Journal of Digital Convergence
    • /
    • v.15 no.12
    • /
    • pp.563-571
    • /
    • 2017
  • The purpose of this paper is to establish the emotional expression factors of digital eye contents that can be applied to digital environments. The emotions applicable to a smart doll are derived, and we suggest guidelines for the expressive factors of each emotion. For this paper, first, we researched the concepts and characteristics of emotional expression shown in the eyes, as depicted in publications, animation and actual video. Second, we identified six emotions (Happy, Angry, Sad, Relaxed, Sexy, Pure) and extracted the emotional expression factors. Third, we analyzed the extracted factors to establish guidelines for the emotional expression of digital eyes. As a result, this study found that the factors that distinguish and represent each emotion fall into four categories: eye shape, gaze, iris size and effect. These can be used to enhance emotional communication effects in digital contents such as animations, robots and smart toys.

Expression and Functional Analysis of cofilin1-like in Craniofacial Development in Zebrafish

  • Jin, Sil;Jeon, Haewon;Choe, Chong Pyo
    • Development and Reproduction
    • /
    • v.26 no.1
    • /
    • pp.23-36
    • /
    • 2022
  • Pharyngeal pouches, a series of outgrowths of the pharyngeal endoderm, are a key epithelial structure governing facial skeleton development in vertebrates. Pouch formation is achieved through collective cell migration and rearrangement of pouch-forming cells controlled by actin cytoskeleton dynamics. While essential transcription factors and signaling molecules have been identified in pouch formation, regulators of actin cytoskeleton dynamics have not yet been reported in any vertebrate. Cofilin1-like (Cfl1l) is a fish-specific member of the Actin-depolymerizing factor (ADF)/Cofilin family, a critical regulator of actin cytoskeleton dynamics in eukaryotic cells. Here, we report the expression and function of cfl1l in pouch development in zebrafish. We first showed that fish cfl1l might be an ortholog of vertebrate adf, based on phylogenetic analysis of vertebrate adf and cfl genes. During pouch formation, cfl1l was expressed sequentially in the developing pouches but not in the posterior cell mass in which future pouch-forming cells are present. However, pouches, as well as facial cartilages whose development depends on pouch formation, were unaffected by loss-of-function mutations in cfl1l. Although a possibility of genetic redundancy of Cfl1l with other Cfls cannot be completely ruled out, our results suggest that cfl1l expression in the developing pouches might be dispensable for regulating actin cytoskeleton dynamics in pouch-forming cells.

A study on the thematic types, expression techniques, and impact of body positive movement content on the short clip platform TikTok (쇼트 클립 플랫폼 틱톡(TikTok)에 나타난 보디 포지티브 무브먼트 콘텐츠의 주제 유형 및 표현기법)

  • Koh Woon Kim
    • The Research Journal of the Costume Culture
    • /
    • v.32 no.1
    • /
    • pp.17-37
    • /
    • 2024
  • This study examines the rise of the Body Positive Movement on TikTok and its role as a form of online content activism influencing fashion design and the fashion industry. Through a combination of literature review and case study methodology, the study explores the expression techniques and thematic types of the Body Positive Movement on TikTok. Reviews of the literature, previous studies, online articles, fashion journals, and relevant search terms on TikTok informed a definition of the Body Positive Movement and an analysis of its formation and rise. The research findings confirm the impact that TikTok content on the Body Positive Movement has on the fashion industry in addressing external factors (i.e., 'Appearance', 'Race', 'Aging', 'Physical Disability') and intrinsic factors (i.e., 'Acceptance of Diversity', 'Self-Esteem', 'Rejection of Stereotypes', 'Appropriate Representation', 'Information Provision'). The key external factor, 'Appearance', includes subcategories such as 'Body Shape', 'Body Hair', 'Skin', and 'Facial Features'. TikTok fashion content creators creatively combine music, emojis, and visual storytelling to exhibit positive self-perception concerning these factors. A significant finding of the study is that short clips predominantly manifesting external factors differentiate into informative or enlightening videos associated with intrinsic factors. The study underscores the Body Positive Movement's important influence on the fashion industry, from design to presentation.

A Study on Expression and the Extent of Using Make-up According to the Make-up Lifestyle of Woman (성인 여성의 메이크업 라이프스타일에 따른 메이크업 표현과 사용정도에 관한 연구)

  • 배정숙;류현혜
    • Journal of the Korean Society of Clothing and Textiles
    • /
    • v.28 no.2
    • /
    • pp.332-343
    • /
    • 2004
  • This is a study on expression and the extent of make-up use according to the make-up lifestyle of women. The purpose of this study is to derive the factors that determine the make-up lifestyle of women, group respondents by them, understand each group's demographic characteristics, and examine make-up expression and the extent of make-up use for each group. The survey was conducted with 611 women and analyzed with the SPSS package. The results are as follows: 1. Using AIO analysis, we classified make-up lifestyle into five factors: make-up preference, appearance orientation, economy and information orientation, daily make-up, and interest in make-up. We then grouped respondents on the basis of the means of those factors, yielding a make-up-oriented group, a consciously daily make-up group, a make-up-indifferent group, and a reasonable make-up pursuit group. 2. The demographic characteristics of the classified lifestyle groups differed in a variance analysis of age, marital status, job, education, and monthly pay. 3. A variance analysis of facial satisfaction according to lifestyle showed differences in satisfaction with complexion, skin, eyes, nose and so on. 4. To identify differences in make-up expression and the extent of make-up use, we analyzed the reason for make-up, the extent of make-up, the image to express, the facial part of greatest concern for make-up, and the type of cosmetics used most. The variance analysis showed differences among the groups.

Realistic Expression Factor to Visual Presence of Virtual Avatar in Eye Reflection (가상 아바타의 각막면에 비친 반사영상의 시각적 실재감에 대한 실감표현 요소)

  • Won, Myoung Ju;Lee, Eui Chul;Whang, Min-Cheol
    • The Journal of the Korea Contents Association
    • /
    • v.13 no.7
    • /
    • pp.9-15
    • /
    • 2013
  • In the VnR (Virtual and Real Worlds) of recent virtual reality convergence, the modelling of realistic human faces has focused on facial appearance, such as the shape of facial parts and muscle movement. However, facial parameters caused by environmental factors, beyond facial appearance factors, can also be important for effectively representing a virtual avatar. Therefore, this study evaluates users' visual feeling responses to varying the opacity of the eye reflection of a virtual avatar, which is considered a new parameter for representing a realistic avatar. Experimental results showed that a clearer eye reflection induced a more realistic visual feeling in subjects. This result can serve as a basis for designing realistic virtual avatars by providing a new visual realism factor (eye reflection) and its degree of representation (reflectance ratio).

Back-Propagation Neural Network Based Face Detection and Pose Estimation (오류-역전파 신경망 기반의 얼굴 검출 및 포즈 추정)

  • Lee, Jae-Hoon;Jun, In-Ja;Lee, Jung-Hoon;Rhee, Phill-Kyu
    • The KIPS Transactions:PartB
    • /
    • v.9B no.6
    • /
    • pp.853-862
    • /
    • 2002
  • Face detection can be defined as follows: given an arbitrary digitized image or image sequence, the goal is to determine whether or not there is any human face in the image and, if present, return its location, direction, size, and so on. This technique underlies many applications such as face recognition, facial expression analysis, and head gesture recognition, and is one of their important quality factors. But detecting a face in a given image is considerably difficult because facial expression, pose, facial size, lighting conditions and so on change the overall appearance of faces, making it difficult to detect them rapidly and exactly. Therefore, this paper proposes fast and exact face detection that overcomes some of these restrictions by using a neural network. The proposed system can detect faces rapidly regardless of facial expression, background and pose. For this, face detection is performed by a neural network, and detection response time is shortened by reducing the search region and decreasing the calculation time of the neural network. The reduced search region is obtained by using skin color segmentation and frame differencing, and the neural network calculation time is decreased by reducing the input vector size of the neural network; Principal Component Analysis (PCA) can reduce the dimension of the data. Also, pose is estimated in the extracted facial image and the eye region is located, which provides more information about the face. The experiments measured success rate and processing time using the squared Mahalanobis distance. Both still images and image sequences were tested; for skin color segmentation, the success rate differed depending on the camera setting. Pose estimation experiments were carried out under the same conditions, and the presence or absence of glasses produced different results in eye region detection. The experimental results show a satisfactory detection rate and processing time for a real-time system.
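The PCA step mentioned above for shrinking the network's input vector can be sketched in a few lines of NumPy (an illustrative reimplementation, not the authors' code):

```python
import numpy as np

def pca_project(X, n_components):
    """Project rows of X onto the top principal components.

    X: (n_samples, n_features) data matrix, one flattened image patch
    per row. Returns the reduced data, the component basis W, and the
    mean, mirroring how PCA shrinks a network's input vector size.
    """
    mu = X.mean(axis=0)
    Xc = X - mu
    # SVD of the centred data; rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:n_components]
    return Xc @ W.T, W, mu
```

Feeding the reduced vectors (e.g. a few dozen components instead of thousands of pixels) into the back-propagation network is what cuts its calculation time.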