• Title/Abstract/Keyword: smiling facial expression

Search results: 9 items (processing time: 0.019 s)

정서 유발 맥락이 영아의 미소 얼굴 표정에 미치는 영향 (The Effects of Emotional Contexts on Infant Smiling)

  • 홍희영;이영 · 아동학회지 / Vol. 24, No. 6 / pp. 15-31 / 2003
  • This study examined the effects of emotion-inducing contexts on types of infant smiling. Facial expressions of forty-five 11- to 15-month-old infants were videotaped in an experimental lab under positive and negative emotional contexts. Infants' smiling was identified as Duchenne or non-Duchenne smiling based on FACS (Facial Action Coding System; Ekman & Friesen, 1978), and the duration of each smiling type was analyzed. Overall, infants smiled more in the positive than in the negative emotional context. Duchenne smiling was more likely in the positive than in the negative context, and in the peek-a-boo than in the melody-toy condition within the same positive context. Non-Duchenne smiling did not differ by context.


CREATING JOYFUL DIGESTS BY EXPLOITING SMILE/LAUGHTER FACIAL EXPRESSIONS PRESENT IN VIDEO

  • Kowalik, Uwe;Hidaka, Kota;Irie, Go;Kojima, Akira · 한국방송∙미디어공학회:학술대회논문집 / 한국방송공학회 2009년도 IWAIT / pp. 267-272 / 2009
  • Video digests provide an effective way of reviewing video content rapidly because of their very compact form. By watching a digest, users can easily check whether specific content is worth seeing in full, so the impression created by the digest greatly influences the user's choice when selecting video content. We propose a novel method of automatic digest creation that evokes a joyful impression by exploiting smile/laughter facial expressions as emotional cues of joy in the video. We assume that a digest presenting smiling/laughing faces appeals to the user, who is assured that the smile/laughter expression is caused by joyful events inside the video. For detecting smiling/laughing faces we have developed a neural-network-based method for classifying facial expressions. Video segmentation is performed by automatic shot detection. To create joyful digests, appropriate shots are selected automatically by ranking shots on the smile/laughter detection result. We report the results of user trials conducted to assess the visual impression made by 'joyful' digests created automatically by our system. The results show that users tend to prefer emotional digests containing laughing faces. This suggests that the attractiveness of automatically created video digests can be improved by extracting emotional cues from the content through automatic facial expression analysis, as proposed in this paper.

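The shot-ranking step described in the abstract above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: `smile_score` stands in for the neural-network smile/laughter detector, and the greedy length-budgeted selection is an assumption about how ranked shots are assembled into a digest.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Shot:
    start: float   # seconds
    end: float     # seconds

def rank_shots(shots: List[Shot],
               smile_score: Callable[[Shot], float],
               digest_len: float) -> List[Shot]:
    """Greedily pick the highest-scoring shots until the digest
    length budget is exhausted, then restore temporal order."""
    ranked = sorted(shots, key=smile_score, reverse=True)
    chosen, total = [], 0.0
    for shot in ranked:
        dur = shot.end - shot.start
        if total + dur <= digest_len:
            chosen.append(shot)
            total += dur
    return sorted(chosen, key=lambda s: s.start)
```

For example, with three 5-second shots and a 10-second budget, the two shots with the highest smile scores are kept and played back in their original order.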

Happy Applicants Achieve More: Expressed Positive Emotions Captured Using an AI Interview Predict Performances

  • Shin, Ji-eun;Lee, Hyeonju · 감성과학 / Vol. 24, No. 2 / pp. 75-80 / 2021
  • Do happy applicants achieve more? Although it is well established that happiness predicts desirable work-related outcomes, previous findings were primarily obtained in social settings. In this study, we extended the scope of the "happiness premium" effect to the artificial intelligence (AI) context. Specifically, we examined whether an applicant's happiness signal captured by an AI system effectively predicts his or her objective performance. Data from 3,609 job applicants showed that verbally expressed happiness (frequency of positive words) during an AI interview predicts cognitive task scores, and this tendency was more pronounced among women than men. However, facially expressed happiness (frequency of smiling) recorded by the AI could not predict performance. Thus, when AI is involved in a hiring process, verbal rather than facial cues of happiness provide a more valid marker of applicants' hiring chances.
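The verbal measure in this abstract, the frequency of positive words, amounts to a lexicon count over the interview transcript. A minimal sketch follows; the word list and function name are hypothetical placeholders, since the study's actual sentiment dictionary is not given here.

```python
import re

# Hypothetical positive-word lexicon standing in for a real
# sentiment dictionary (e.g. LIWC-style positive-emotion terms).
POSITIVE_WORDS = {"happy", "glad", "excited", "enjoy", "great", "love"}

def positive_word_frequency(transcript: str) -> float:
    """Share of transcript tokens that appear in the positive lexicon."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in POSITIVE_WORDS)
    return hits / len(tokens)
```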

StyleGAN Encoder를 활용한 표정 이미지 생성에서의 연령 왜곡 감소에 대한 연구 (A study on age distortion reduction in facial expression image generation using StyleGAN Encoder)

  • 이희열;이승호 · 전기전자학회논문지 / Vol. 27, No. 4 / pp. 464-471 / 2023
  • This paper proposes a method for reducing age distortion in facial expression image generation using the StyleGAN Encoder. In the generation process, a face image is produced with the StyleGAN Encoder, and a boundary learned with an SVM is applied to the latent vector to change the expression. However, when the smile boundary is learned, age distortion arises along with the change in expression: the smile boundary produced by SVM training on smiling expressions incorporates the wrinkles caused by the expression change, so age-related characteristics are judged to have been learned as well. To address this, the proposed method computes the correlation coefficient between the smile boundary and the age boundary and adjusts the smile boundary by the age boundary in proportion to that coefficient. To verify the effectiveness of the proposed method, experiments were conducted on the public FFHQ face dataset using the FID score, with the following results. For smile images, the FID score between the ground truth and the smile images generated by the proposed method improved by about 0.46 over the existing method, and the FID score between the images generated by the StyleGAN Encoder and the smile images generated by the proposed method improved by about 1.031. For non-smile images, the FID score against the ground truth improved by about 2.25, and the FID score against the StyleGAN Encoder images improved by about 1.908, compared with the existing method. In addition, the age of each generated expression image was estimated and the MSE against the age estimated from the StyleGAN Encoder images was measured; the proposed method improved on the existing method by about 1.5 for smile images and about 1.63 for non-smile images, demonstrating the effectiveness of the proposed method.
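The proposed correction can be sketched in a few lines. This is a minimal illustration, not the paper's code: it assumes InterFaceGAN-style unit-norm boundary direction vectors, takes the paper's correlation coefficient to be the cosine between the two boundary directions, and uses hypothetical function names.

```python
import numpy as np

def adjust_smile_boundary(smile_boundary: np.ndarray,
                          age_boundary: np.ndarray) -> np.ndarray:
    """Remove the age-correlated component from a smile boundary.

    Both inputs are direction vectors in the StyleGAN latent space
    (e.g. normals of linear SVM decision boundaries).
    """
    s = smile_boundary / np.linalg.norm(smile_boundary)
    a = age_boundary / np.linalg.norm(age_boundary)
    rho = float(np.dot(s, a))        # cosine between unit vectors
    adjusted = s - rho * a           # subtract age component in proportion
    return adjusted / np.linalg.norm(adjusted)

def edit_latent(w: np.ndarray, boundary: np.ndarray, alpha: float) -> np.ndarray:
    """Move a latent code w along the boundary by step size alpha."""
    return w + alpha * boundary
```

By construction the adjusted boundary is orthogonal to the age direction, so stepping a latent code along it changes the smile while leaving the age component of the code untouched.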

Product Images Attracting Attention: Eye-tracking Analysis

  • Pavel Shin;Kil-Soo Suh;Hyunjeong Kang · Asia pacific journal of information systems / Vol. 29, No. 4 / pp. 731-751 / 2019
  • This study examined the impact of various product-photo features on the attention of potential consumers in the environment of online apparel retailers. Recently, the way apparel product photos are presented in online shopping stores has changed considerably from the classic product photos of the early days. To investigate whether this shift is effective in attracting consumers' attention, we examined the related theory and verified its effect through laboratory experiments; in particular, experimental data were collected and analyzed using eye-tracking technology. According to the results, product photos with asymmetry are more attractive than symmetrical photos, a well-emphasized object within a photo is more attractive than a partially emphasized one, smiling faces are more attractive to customers than emotionless or sad ones, and photos with uncentered models draw more consumer attention than photos with the model in the center. These results are expected to help design internet shopping stores that attract more of customers' attention.

뫼비우스 증후군에서 측두근 전위술을 이용한 역동적 재건 (Dynamic Reconstruction with Temporalis Muscle Transfer in Mobius Syndrome)

  • 김백규;이윤호 · Archives of Plastic Surgery / Vol. 34, No. 3 / pp. 325-329 / 2007
  • Purpose: Mobius syndrome is a rare congenital disorder characterized by facial diplegia and bilateral abducens palsy, occasionally combined with other cranial nerve dysfunction. The inability to show happiness, sadness, or anger by facial expression frequently results in social dysfunction. The classic approach of cross-facial nerve grafting and free muscle transplantation, which is standard in unilateral developmental facial palsy, cannot be used in these patients without special consideration. Our experience treating three patients with this syndrome by transferring muscles innervated by the trigeminal nerve showed rewarding results. Methods: We used bilateral temporalis muscles elevated from the bony temporal fossa. The muscles and their attached fascia were folded down over the anterior surface of the zygomatic arch. The divided strips of the attached fascia were passed subcutaneously and anchored to the medial canthus and the nasolabial crease for smiling and for competence of the mouth and eyelids. Over the past 13 years the authors have applied this method in three Mobius syndrome cases: a 45-year-old man, a 13-year-old boy, and an 8-year-old girl. Results: One month after surgery the patients had good support and already showed voluntary movement at the corners of their mouths. They showed full closure of both eyelids, with no scleral show during eyelid closure, and full closure of the mouth was achieved. After six months, the reconstructed facial movements were maintained. Conclusion: Temporalis muscle transfer for Mobius syndrome is an excellent method for single-stage bilateral reconstruction; it is easy to perform and offers a wide range of reconstruction and good reproducibility.

Multifactorial Approaches for Correction of the Drooping Tip of a Long Nose in East Asians

  • Park, Seong Geun;Jeong, Hoijoon;Ye, Choon Ho · Archives of Plastic Surgery / Vol. 41, No. 6 / pp. 630-637 / 2014
  • A long nose with a drooping tip is a major aesthetic problem. It creates a negative and aged appearance and looks worse when smiling. In order to rectify this problem, the underlying anatomical causes should be understood and corrected simultaneously to optimize surgical outcomes. The causes of a drooping tip of a long nose are generally classified into two mechanisms. Static causes usually result from malposition and incorrect innate shape of the nasal structure: the nasal septum, upper and lower lateral cartilages, and the ligaments in between. The dynamic causes result from the facial expression muscles, the depressor septi nasi muscle, and the levator labii superioris alaeque nasi muscle. The depressor septi nasi depresses the nasal tip and the levator labii superioris alaeque nasi pulls the alar base upwards. Many surgical methods have been introduced, but partial approaches to correct such deformities generally do not satisfy East Asians, making the problem more challenging to surgeons. Typically, East Asians have thick nasal tip soft tissue and skin, and a depressed columella and alar bases. The authors suggest that multifactorial approaches to static and dynamic factors along with ancillary causes should be considered for correcting the drooping tip of the long noses of East Asians.

The Symbolic Meaning and Values Portrayed In Models' Characteristics in Fashion Advertisements

  • Kwon, Gi-Young;Helvenston, Sally I. · International Journal of Human Ecology / Vol. 7, No. 2 / pp. 29-41 / 2006
  • Various current events provide evidence that society is undergoing changes in perceptions of social relationships. Specifically, visual media in the form of advertisements can convey images which reflect society's values and concepts about role relationships. The purpose of this research was to examine ads in fashion magazines to determine what types of model roles and role relationships typically appear in fashion advertising, which can mirror society's values. A content analysis was conducted of ads obtained from US Vogue and US GQ for the year 2002. Six kinds of roles/relationships were found: (1) narcissism (representing self-absorption), (2) sexually enticing opposite-sex relationships, (3) close/romantic same-sex relationships, (4) friend relationships, (5) family relationships, and (6) independent relationships. Of these, narcissism predominated; however, a small number of sexually provocative ads appeared, as well as same-sex romantic relationships. Because sole (single) models were more typical, they were also examined to determine the ways in which they relate to the audience. Characteristics examined included body presentation and pose, eye gaze, and facial expression. Direct eye gaze was the typical way to engage the audience. Gender differences were apparent: smiling was more typical of women, indifference of men. The symbolic meanings and values investigated in this research are the blurring of gender identity portrayed in homosexual imagery, family values, and the value of youth. The uniformity of models' race in the ads does not portray the diversity reflected in the demographic census.

자발적 웃음과 인위적 웃음 간의 구분: 사람 대 컴퓨터 (Discrimination between spontaneous and posed smile: Humans versus computers)

  • 엄진섭;오형석;박미숙;손진훈 · 감성과학 / Vol. 16, No. 1 / pp. 95-106 / 2013
  • This study compared the accuracy of ordinary people with that of a computer classification algorithm in discriminating spontaneous from posed smiles. Participants performed a single-image judgment task and a paired-comparison task. In the single-image task, smile images were presented one at a time and participants judged whether each smile was spontaneous or posed; in the paired-comparison task, two smile images from the same person were presented simultaneously and participants judged which one was the spontaneous smile. To compute the algorithm's accuracy, eight facial features were extracted from each smile image. Stepwise linear discriminant analysis was performed on about 50% of the images, and the resulting discriminant function was used to classify the remaining images. For single images, the accuracy of the stepwise linear discriminant analysis was higher than that of the human judges, and the same held for the paired comparisons; none of the 20 participants exceeded the accuracy of the linear discriminant analysis. The facial feature most important to the discriminant analysis was the angle of the inner eye corner, which indicates the degree of eye narrowing; in Ekman's FACS this corresponds to AU 6. The low human accuracy was attributed to insufficient use of information from the eye region when distinguishing the two kinds of smiles.

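The train-on-half, classify-the-rest pipeline described above can be illustrated as follows. This is a rough sketch on synthetic data, not the authors' code: scikit-learn's LDA has no built-in stepwise feature selection, so all eight features are used, and the feature distributions are invented for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

# Hypothetical data: one 8-dimensional feature vector per smile image,
# mirroring the eight facial features extracted in the study
# (e.g. the inner-eye-corner angle associated with AU 6).
rng = np.random.default_rng(42)
n = 200
X_spontaneous = rng.normal(loc=0.0, scale=1.0, size=(n, 8))
X_posed = rng.normal(loc=1.0, scale=1.0, size=(n, 8))
X = np.vstack([X_spontaneous, X_posed])
y = np.array([1] * n + [0] * n)   # 1 = spontaneous, 0 = posed

# Fit the discriminant function on ~50% of the images,
# then classify the held-out half.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                          random_state=0)
lda = LinearDiscriminantAnalysis()
lda.fit(X_tr, y_tr)
accuracy = lda.score(X_te, y_te)
```

The coefficients of the fitted discriminant function (`lda.coef_`) indicate each feature's contribution, analogous to how the study identified the inner-eye-corner angle as the most informative feature.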