• Title/Summary/Keyword: Face image synthesis


Smart Mirror to support Hair Styling (헤어 스타일링 지원 스마트 미러)

  • Noh, Hye-Min; Joo, Hye-Won; Moon, Young-Suk; Kong, Ki-Sok
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.20 no.1, pp.127-133, 2020
  • This paper deals with the development of a smart mirror that supports changing hair styles. The key function of the service is synthesizing a chosen hair image onto the user's face, allowing the user to virtually try out the style. To verify the effectiveness of the hair image synthesis function, an experiment measuring the face-detection success rate of the Haar-cascade algorithm was conducted. The experiments confirmed that face detection succeeds with 95% probability when both eyes and eyebrows of the subject are visible, which is the highest success rate observed. If either eyebrow is hidden or one eye is covered, the success rate drops to 50% and 0%, respectively.
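The Haar-cascade detector referenced above rests on Haar-like rectangle features evaluated over an integral image (summed-area table), which makes any rectangle sum a constant-time lookup. A minimal sketch in pure NumPy, assuming an illustrative two-rectangle "edge" feature and toy coordinates (the actual cascade stages and trained thresholds are not reproduced here):

```python
import numpy as np

def integral_image(img):
    # Cumulative sums along both axes produce the summed-area table
    # that Haar-cascade detectors use.
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, h, w):
    # Sum of img[top:top+h, left:left+w], recovered from the integral
    # image in constant time via inclusion-exclusion.
    total = ii[top + h - 1, left + w - 1]
    if top > 0:
        total -= ii[top - 1, left + w - 1]
    if left > 0:
        total -= ii[top + h - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

def haar_two_rect(ii, top, left, h, w):
    # A two-rectangle "edge" feature: bright upper half minus dark
    # lower half (roughly the forehead vs. eye/eyebrow band contrast
    # that makes visible eyes and eyebrows matter for detection).
    half = h // 2
    return (rect_sum(ii, top, left, half, w)
            - rect_sum(ii, top + half, left, half, w))

# Toy 4x4 "image": bright top rows over dark bottom rows.
img = np.array([[9, 9, 9, 9],
                [9, 9, 9, 9],
                [1, 1, 1, 1],
                [1, 1, 1, 1]], dtype=np.int64)
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 4, 4))       # 80: whole-image sum
print(haar_two_rect(ii, 0, 0, 4, 4))  # 64: strong edge response
```

A full detector (e.g. OpenCV's `cv2.CascadeClassifier` with a pretrained frontal-face cascade) chains thousands of such features into boosted stages; the integral image is what keeps that scan cheap.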

Synthesis of Realistic Facial Expression using a Nonlinear Model for Skin Color Change (비선형 피부색 변화 모델을 이용한 실감적인 표정 합성)

  • Lee Jeong-Ho; Park Hyun; Moon Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea CI, v.43 no.3 s.309, pp.67-75, 2006
  • Facial expressions involve not only facial feature motions but also subtle changes in illumination and appearance. Since geometric deformation alone cannot generate realistic facial expressions, detailed features such as textures must also be deformed. Existing methods such as the expression ratio image have the drawback that detailed changes in complexion caused by lighting cannot be reproduced properly. This paper proposes a nonlinear model of skin color change and a model-based facial expression synthesis method that applies realistic expression details under different lighting conditions. The proposed method consists of three steps: automatic extraction of facial features using an active appearance model and geometric deformation of the expression by warping; generation of the facial expression using the nonlinear skin color change model; and blending of the generated expression with the original face using a ratio computed from the Euclidean distance transform. Experimental results show that the proposed method generates realistic facial expressions under various lighting conditions.
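The final step of that pipeline, blending the synthesized expression into the original face with a ratio derived from the Euclidean distance transform, can be sketched as follows. This is a minimal illustration using NumPy and SciPy; the mask, the linear falloff, and the `falloff` parameter are assumptions for the sketch, not the paper's actual model:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def blend_with_distance_ratio(original, synthesized, face_mask, falloff=4.0):
    """Feather the synthesized region into the original image.

    The blending weight is 1 inside the face mask and decays linearly
    with Euclidean distance from the mask boundary outside it, so the
    seam between synthesized and original pixels is smoothed.
    """
    # distance_transform_edt measures distance to the nearest zero
    # element, so inverting the mask gives 0 inside the face region
    # and the pixel distance to the face region outside it.
    dist = distance_transform_edt(~face_mask)
    alpha = np.clip(1.0 - dist / falloff, 0.0, 1.0)
    return alpha * synthesized + (1.0 - alpha) * original

# Toy example: blend a bright "expression" patch into a dark image.
original = np.zeros((5, 5))
synthesized = np.ones((5, 5))
mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True  # hypothetical one-pixel "face" region
out = blend_with_distance_ratio(original, synthesized, mask)
print(out[2, 2])  # 1.0 inside the mask
print(out[2, 3])  # 0.75 one pixel away: partially blended
```

The design point is that the weight depends only on distance to the synthesized region, so the transition width is controlled by a single parameter rather than by hand-drawn masks.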