• Title/Summary/Keyword: smiling facial expression

9 search results, processing time 0.019 seconds

The Effects of Emotional Contexts on Infant Smiling (정서 유발 맥락이 영아의 미소 얼굴 표정에 미치는 영향)

  • Hong, Hee Young; Lee, Young
    • Korean Journal of Child Studies, v.24 no.6, pp.15-31, 2003
  • This study examined the effects of emotion-inducing contexts on types of infant smiling. Facial expressions of forty-five 11- to 15-month-old infants were videotaped in an experimental lab under positive and negative emotional contexts. Infants' smiles were identified as Duchenne or non-Duchenne smiles based on FACS (Facial Action Coding System; Ekman & Friesen, 1978), and the duration of each smile type was analyzed. Overall, infants smiled more in the positive than in the negative emotional context. Duchenne smiling was more likely in the positive than in the negative context, and more likely in the peek-a-boo than in the melody-toy condition within the same positive context. Non-Duchenne smiling did not differ by context.
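The Duchenne/non-Duchenne distinction used above can be sketched as a simple rule over FACS action-unit (AU) intensities: AU 12 (lip corner puller) marks a smile, and a co-occurring AU 6 (cheek raiser) marks it as Duchenne. The `aus` dictionary and the 1.0 intensity threshold below are illustrative assumptions, not details from the study.

```python
# Sketch: label a smile as Duchenne vs. non-Duchenne from FACS
# action-unit intensities. AU 12 = lip corner puller (the smile);
# AU 6 = cheek raiser, which distinguishes the Duchenne smile.
def classify_smile(aus: dict[str, float], threshold: float = 1.0) -> str:
    """Return 'duchenne', 'non_duchenne', or 'no_smile'."""
    smiling = aus.get("AU12", 0.0) >= threshold       # lip corners pulled up
    eyes_engaged = aus.get("AU06", 0.0) >= threshold  # cheeks raised
    if not smiling:
        return "no_smile"
    return "duchenne" if eyes_engaged else "non_duchenne"
```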


CREATING JOYFUL DIGESTS BY EXPLOITING SMILE/LAUGHTER FACIAL EXPRESSIONS PRESENT IN VIDEO

  • Kowalik, Uwe; Hidaka, Kota; Irie, Go; Kojima, Akira
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2009.01a, pp.267-272, 2009
  • Video digests provide an effective way of checking video content rapidly thanks to their very compact form. By watching a digest, users can easily check whether specific content is worth seeing in full, so the impression created by the digest greatly influences the user's choice when selecting video content. We propose a novel method of automatic digest creation that evokes a joyful impression by exploiting smile/laughter facial expressions as emotional cues of joy in the video. We assume that a digest presenting smiling/laughing faces appeals to the user, who is assured that the smile/laughter is caused by joyful events inside the video. For detecting smile/laughter faces, we developed a neural-network-based method for classifying facial expressions. Video segmentation is performed by automatic shot detection, and appropriate shots are automatically selected for the joyful digest by shot ranking based on the smile/laughter detection result. We report the results of user trials conducted to assess the visual impression of 'joyful' digests created automatically by our system. The results show that users tend to prefer emotional digests containing laughter faces, suggesting that the attractiveness of automatically created video digests can be improved by extracting emotional cues from the contents through automatic facial expression analysis as proposed in this paper.
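The shot-ranking step described in the abstract can be sketched as follows, assuming a separate classifier already supplies per-frame smile/laughter probabilities for each detected shot; the scoring rule (mean probability per shot) and top-k selection are illustrative assumptions, not the paper's exact ranking function.

```python
# Sketch: rank shots by their average smile/laughter probability and
# keep the top k for the "joyful" digest, preserving temporal order.
def rank_shots(shots: list[list[float]], k: int) -> list[int]:
    """shots: per-shot lists of frame-level smile probabilities.
    Returns indices of the k highest-scoring shots, in original order."""
    scores = [sum(s) / len(s) if s else 0.0 for s in shots]
    top = sorted(range(len(shots)), key=lambda i: scores[i], reverse=True)[:k]
    return sorted(top)  # digest plays shots in their original sequence
```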


Happy Applicants Achieve More: Expressed Positive Emotions Captured Using an AI Interview Predict Performances

  • Shin, Ji-eun; Lee, Hyeonju
    • Science of Emotion and Sensibility, v.24 no.2, pp.75-80, 2021
  • Do happy applicants achieve more? Although it is well established that happiness predicts desirable work-related outcomes, previous findings were primarily obtained in social settings. In this study, we extended the scope of the "happiness premium" effect to the artificial intelligence (AI) context. Specifically, we examined whether an applicant's happiness signal captured by an AI system effectively predicts his/her objective performance. Data from 3,609 job applicants showed that verbally expressed happiness (frequency of positive words) during an AI interview predicts cognitive task scores, and this tendency was more pronounced among women than men. However, facially expressed happiness (frequency of smiling) recorded by the AI did not predict performance. Thus, when AI is involved in a hiring process, verbal rather than facial cues of happiness provide a more valid marker of applicants' hiring chances.

A study on age distortion reduction in facial expression image generation using StyleGAN Encoder (StyleGAN Encoder를 활용한 표정 이미지 생성에서의 연령 왜곡 감소에 대한 연구)

  • Hee-Yeol Lee; Seung-Ho Lee
    • Journal of IKEEE, v.27 no.4, pp.464-471, 2023
  • In this paper, we propose a method to reduce age distortion in facial expression image generation using StyleGAN Encoder. The facial expression generation process first creates a face image with StyleGAN Encoder and then changes the expression by applying a boundary, learned with an SVM, to the latent vector. However, when the boundary for a smiling expression is learned, age distortion occurs: the smile boundary produced by SVM training includes the wrinkles caused by the expression change as learning elements, so age characteristics are learned along with the smile. To solve this problem, the proposed method calculates the correlation coefficient between the smile boundary and the age boundary and adjusts the smile boundary by the age boundary in proportion to that coefficient. To confirm the effectiveness of the proposed method, experiments were conducted on the FFHQ dataset, a publicly available standard face dataset, and FID scores were measured. For smile images, the FID score between the ground truth and the images generated by the proposed method improved by about 0.46 over the existing method, and the FID score between the StyleGAN Encoder output and the generated smile images improved by about 1.031. For non-smile images, the corresponding FID scores improved by about 2.25 and about 1.908, respectively. In addition, when the age of each generated facial expression image was estimated and the MSE against the age estimated from the StyleGAN Encoder output was measured, the proposed method improved on the existing method by about 1.5 on average for smile images and about 1.63 for non-smile images, demonstrating its effectiveness.
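The boundary-adjustment idea can be read as a vector operation in latent space: remove from the smile boundary the component correlated with the age boundary, so traversing the adjusted direction changes the expression with less age drift. The sketch below is a minimal version under that interpretation; the vectors are random stand-ins for learned SVM boundaries, and the paper's exact adjustment rule may differ.

```python
# Sketch: subtract the age component from the smile boundary in
# proportion to their correlation (cosine similarity of unit vectors),
# yielding a smile direction orthogonal to the age direction.
import numpy as np

def adjust_boundary(smile: np.ndarray, age: np.ndarray) -> np.ndarray:
    smile = smile / np.linalg.norm(smile)
    age = age / np.linalg.norm(age)
    corr = float(smile @ age)       # correlation between the two boundaries
    adjusted = smile - corr * age   # remove the correlated age component
    return adjusted / np.linalg.norm(adjusted)
```

Moving a latent `w` as `w + alpha * adjust_boundary(smile, age)` then edits the expression along a direction decorrelated from age.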

Product Images Attracting Attention: Eye-tracking Analysis

  • Pavel Shin; Kil-Soo Suh; Hyunjeong Kang
    • Asia Pacific Journal of Information Systems, v.29 no.4, pp.731-751, 2019
  • This study examined the impact of various product photo features on the attention of potential consumers in the environment of online apparel retailers. Recently, the way apparel product photos are presented in online shopping stores has changed considerably from the classic product photos of the early days. To investigate whether this shift is effective in attracting consumers' attention, we examined the related theory and verified its effect through laboratory experiments in which data were collected and analyzed using eye-tracking technology. According to the results, product photos with asymmetry are more attractive than symmetrical photos, photos with a well-emphasized object are more attractive than those with a partially emphasized one, smiling faces are more attractive to customers than emotionless or sad ones, and photos with uncentered models draw more consumer attention than photos with the model in the center. These results are expected to help design internet shopping stores that attract more customers' attention.

Dynamic Reconstruction with Temporalis Muscle Transfer in Mobius Syndrome (뫼비우스 증후군에서 측두근 전위술을 이용한 역동적 재건)

  • Kim, Baek Kyu; Lee, Yoon Ho
    • Archives of Plastic Surgery, v.34 no.3, pp.325-329, 2007
  • Purpose: Mobius syndrome is a rare congenital disorder characterized by facial diplegia and bilateral abducens palsy, occasionally combined with other cranial nerve dysfunction. The inability to show happiness, sadness, or anger by facial expression frequently results in social dysfunction. The classic approach of cross-facial nerve grafting and free muscle transplantation, which is standard in unilateral developmental facial palsy, cannot be used in these patients without special consideration. Our experience treating three patients with this syndrome by transferring muscles innervated by the trigeminal nerve showed rewarding results. Methods: We used bilateral temporalis muscle elevated from the bony temporal fossa. The muscles and their attached fascia were folded down over the anterior surface of the zygomatic arch, and the divided strips of fascia were passed subcutaneously and anchored to the medial canthus and the nasolabial crease for smiling and for competence of the mouth and eyelids. Over the past 13 years, the authors applied this method in three Mobius syndrome cases: a 45-year-old man, a 13-year-old boy, and an 8-year-old girl. Results: One month after surgery, the patients had good support and already showed voluntary movement at the corners of their mouths. They showed full closure of both eyelids, with no scleral show during eyelid closure, and full closure of the mouth was also achieved. After six months, the reconstructed facial movements were maintained. Conclusion: Temporalis muscle transfer for Mobius syndrome is an excellent method for one-stage bilateral reconstruction; it is easy to perform and offers a wide range of reconstruction and reproducibility.

Multifactorial Approaches for Correction of the Drooping Tip of a Long Nose in East Asians

  • Park, Seong Geun; Jeong, Hoijoon; Ye, Choon Ho
    • Archives of Plastic Surgery, v.41 no.6, pp.630-637, 2014
  • A long nose with a drooping tip is a major aesthetic problem. It creates a negative and aged appearance and looks worse when smiling. In order to rectify this problem, the underlying anatomical causes should be understood and corrected simultaneously to optimize surgical outcomes. The causes of a drooping tip of a long nose are generally classified into two mechanisms. Static causes usually result from malposition and incorrect innate shape of the nasal structure: the nasal septum, upper and lower lateral cartilages, and the ligaments in between. The dynamic causes result from the facial expression muscles, the depressor septi nasi muscle, and the levator labii superioris alaeque nasi muscle. The depressor septi nasi depresses the nasal tip and the levator labii superioris alaeque nasi pulls the alar base upwards. Many surgical methods have been introduced, but partial approaches to correct such deformities generally do not satisfy East Asians, making the problem more challenging to surgeons. Typically, East Asians have thick nasal tip soft tissue and skin, and a depressed columella and alar bases. The authors suggest that multifactorial approaches to static and dynamic factors along with ancillary causes should be considered for correcting the drooping tip of the long noses of East Asians.

The Symbolic Meaning and Values Portrayed In Models' Characteristics in Fashion Advertisements

  • Kwon, Gi-Young; Helvenston, Sally I.
    • International Journal of Human Ecology, v.7 no.2, pp.29-41, 2006
  • Various current events provide evidence that society is undergoing changes in its perceptions of social relationships. In particular, visual media in the form of advertisements can convey images that reflect society's values and concepts of role relationships. The purpose of this research was to examine ads in fashion magazines to determine what types of model roles and role relationships typically appear in fashion advertising and can mirror society's values. A content analysis was conducted of ads obtained from US Vogue and US GQ for the year 2002. Six kinds of roles/relationships were found: (1) narcissism (representing self-absorption), (2) sexually enticing opposite-sex relationships, (3) close/romantic same-sex relationships, (4) friend relationships, (5) family relationships, and (6) independent relationships. Of these, narcissism predominated; however, a small number of sexually provocative ads appeared, as did same-sex romantic relationships. Because sole (single) models were more typical, they were also examined to determine the ways in which they relate to the audience. The characteristics examined included body presentation and pose, eye gaze, and facial expression. Direct eye gaze was the typical way of engaging the audience, and gender differences were apparent: smiling was more typical of women, indifference of men. The symbolic meanings and values identified in this research are the blurring of gender identity portrayed in homosexual imagery, family values, and the value of youth. The consistency of models' race in the ads does not portray the diversity reflected in the demographic census.

Discrimination between spontaneous and posed smile: Humans versus computers (자발적 웃음과 인위적 웃음 간의 구분: 사람 대 컴퓨터)

  • Eom, Jin-Sup; Oh, Hyeong-Seock; Park, Mi-Sook; Sohn, Jin-Hun
    • Science of Emotion and Sensibility, v.16 no.1, pp.95-106, 2013
  • The study compares the accuracy of humans and of a computer algorithm in discriminating spontaneous smiles from posed smiles. Subjects performed two tasks: judgment of single pictures and judgment by pair comparison. In the single-picture task, pictures of smiling facial expressions were presented one by one, and subjects judged whether each smile was spontaneous or posed. In the pair-comparison task, the two kinds of smiles from one person were presented simultaneously, and subjects selected the spontaneous smile. For the algorithm, eight kinds of facial features were used: a discriminant function was calculated by stepwise linear discriminant analysis (SLDA) on approximately 50% of the pictures, and the remaining pictures were classified with it. In both the single-picture task and the pair comparison, the accuracy rate of SLDA was higher than that of humans; none of the 20 subjects exceeded the SLDA accuracy. The facial feature that contributed most effectively to SLDA was the angle of the inner eye corner, reflecting the degree of openness of the eyes; in Ekman's FACS system, this feature corresponds to AU 6. Humans' low accuracy in classifying the two kinds of smiles thus appears to stem from insufficient use of the information coming from the eyes.
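The classification protocol described above (fit a linear discriminant on roughly half of the pictures, then classify the rest) can be sketched as follows. For brevity this uses a plain two-class Fisher discriminant on synthetic stand-in features rather than stepwise selection over the study's eight facial features.

```python
# Sketch: two-class Fisher linear discriminant, the core of the
# LDA-based spontaneous-vs-posed smile classifier. Class 0 = posed,
# class 1 = spontaneous; features are synthetic stand-ins.
import numpy as np

def fisher_lda(X0: np.ndarray, X1: np.ndarray) -> tuple[np.ndarray, float]:
    """Fit on two class samples; return weights and decision threshold."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # within-class scatter
    w = np.linalg.solve(Sw, m1 - m0)
    threshold = float(w @ (m0 + m1) / 2)  # midpoint between projected means
    return w, threshold

def predict(w: np.ndarray, threshold: float, X: np.ndarray) -> np.ndarray:
    return (X @ w > threshold).astype(int)  # 1 = spontaneous, 0 = posed
```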
