• Title/Summary/Keyword: Face Image Synthesis (얼굴 이미지 합성)

36 search results

Smart Mirror to support Hair Styling (헤어 스타일링 지원 스마트 미러)

  • Noh, Hye-Min;Joo, Hye-Won;Moon, Young-Suk;Kong, Ki-Sok
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.20 no.1 / pp.127-133 / 2020
  • This paper deals with the development of a smart mirror that supports changing hair styles. A key function of the service is the ability to synthesize a chosen hair image onto the user's face so that the user can virtually style their hair. To verify the effectiveness of the hair-image synthesis function, an experiment measured the success rate of face recognition with the Haar-cascade algorithm. Recognition succeeded with a 95 percent probability, the highest rate observed, when both of the subject's eyes and eyebrows were visible. When either eyebrow was hidden or one eye was covered, the success rate dropped to 50% and 0%, respectively.
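
The synthesis step described above amounts to compositing a hair image over the detected face region. A minimal NumPy sketch of that compositing is shown below; the function name, the RGBA hair image, and the placement coordinates are illustrative assumptions (in the paper, the position would come from Haar-cascade face detection), not the mirror's actual implementation.

```python
import numpy as np

def overlay_hair(face_img, hair_rgba, x, y):
    """Alpha-blend a hair image (RGBA, floats in [0, 1]) onto a face image
    (RGB, floats in [0, 1]) with its top-left corner at (x, y).
    In practice (x, y) would come from a face detector such as Haar cascade."""
    h, w = hair_rgba.shape[:2]
    region = face_img[y:y + h, x:x + w]       # face pixels under the hair
    alpha = hair_rgba[..., 3:4]               # per-pixel opacity
    region[:] = alpha * hair_rgba[..., :3] + (1 - alpha) * region
    return face_img
```

Fully opaque hair pixels replace the face pixels; transparent ones leave them unchanged.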

A design and implementation of Face Detection hardware (얼굴 검출을 위한 SoC 하드웨어 구현 및 검증)

  • Lee, Su-Hyun;Jeong, Yong-Jin
    • Journal of the Institute of Electronics Engineers of Korea SD / v.44 no.4 / pp.43-54 / 2007
  • This paper presents the design and verification of face detection hardware for real-time applications. The face detection algorithm detects a rough face position based on previously acquired feature parameter data. The hardware is composed of five main modules: Integral Image Calculator, Feature Coordinate Calculator, Feature Difference Calculator, Cascade Calculator, and Window Detection. It also includes on-chip Integral Image memory and Feature Parameter memory. The hardware was verified using Samsung Electronics' S3C2440A CPU, a Xilinx Virtex4LX100 FPGA, and a CCD camera module. When synthesized for the Virtex4LX100 FPGA, the design uses 3,251 LUTs and takes about 0.13~1.96 sec for face detection, depending on the sliding-window step size. When synthesized on the Magnachip 0.25um ASIC library, it uses about 410,000 gates (combinational area about 345,000 gates, non-combinational area about 65,000 gates) and takes less than 0.5 sec for real-time face detection. This size and performance show that it is adequate for embedded system applications. It has been fabricated as part of the XF1201 chip and proven to work.
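
The Integral Image Calculator named above implements a standard idea: each entry of the integral image holds the sum of all pixels above and to its left, so any rectangular (Haar-like) region sum can then be read off with at most four lookups. A small software sketch of both operations (not the paper's hardware pipeline):

```python
import numpy as np

def integral_image(img):
    """ii[y, x] = sum of img[:y+1, :x+1], via cumulative sums over both axes."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom+1, left:right+1] using four integral-image lookups."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total
```

Because every rectangle costs the same four reads regardless of size, this is what makes cascade-based detection fast enough for dedicated hardware.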

A Study of Facial Organs Classification System Based on Fusion of CNN Features and Haar-CNN Features

  • Hao, Biao;Lim, Hye-Youn;Kang, Dae-Seong
    • The Journal of Korean Institute of Information Technology / v.16 no.11 / pp.105-113 / 2018
  • In this paper, we propose a method for effectively classifying the eyes, nose, and mouth of a human face. Most recent image classification uses a Convolutional Neural Network (CNN). However, the features extracted by a CNN alone are not sufficient, and the classification accuracy is limited. We propose a new algorithm to improve classification performance. The proposed method can be roughly divided into three parts. First, the Haar feature extraction algorithm is used to construct the eye, nose, and mouth datasets of the face. Second, the model extracts CNN features of the image using AlexNet. Finally, Haar-CNN features are extracted by performing convolution after Haar feature extraction. The CNN features and Haar-CNN features are then fused, and the images are classified using softmax. The recognition rate using the fused features was about 4% higher than using CNN features alone. Experiments demonstrated the performance of the proposed algorithm.
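
The fusion step described above can be sketched as concatenating the two feature vectors and scoring the three classes with a softmax. The toy NumPy version below uses a single linear layer as the classifier; the feature dimensions and weights are illustrative placeholders, not the paper's AlexNet configuration.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify(cnn_feat, haar_cnn_feat, W, b):
    """Fuse CNN and Haar-CNN features by concatenation, then produce
    class probabilities (eye / nose / mouth) via a linear layer + softmax."""
    fused = np.concatenate([cnn_feat, haar_cnn_feat])
    return softmax(W @ fused + b)
```

Concatenation keeps both feature sets intact and lets the classifier weight them jointly, which is the simplest common way to "fuse" two descriptors.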

GAN-based avatar generation and animation for video conferencing service (화상회의 서비스를 위한 GAN 기반 아바타 생성 및 애니메이션 구현 기술)

  • Moon, Ji-Eun;Kim, Ji-Yun;Park, Ji-Hye;Ahn, Hyo-Won;Lee, Kyoung-Mi
    • Annual Conference of KIPS / 2022.11a / pp.761-763 / 2022
  • As video conferencing has become more frequent since COVID-19, holding meetings while facing other participants at close range has increased people's fatigue, to the point that the term "Zoom fatigue" has been coined. This paper proposes a system in which users participate in video conferences through an avatar built with face synthesis and image animation. A distinctive character resembling the user can be applied to video conferences by reflecting the user's facial expressions and movements in real time, and can express emotions through character emoticons in chats and communities.

Applying Caricature Concept for Face Recognizable Humanoid Creature Design (인물인식을 위한 휴머노이드 크리처 디자인의 캐리커처 컨셉 적용)

  • Suk, Hae-Jung;Lee, Yun-Jin
    • The Journal of the Korea Contents Association / v.11 no.12 / pp.19-27 / 2011
  • Humanoid creatures like the Na'vi in 'Avatar' remind the audience of the faces of certain actors in the movie and enhance the film's realism by combining CG with live action related to the story. The design of face-recognizable humanoid creatures is expected to be useful in many ways, not only in film but also in the newest media. This research proposes applying the main concept of caricature, the 'exaggeration' of distinctive facial elements from their prototypes, to achieve 'likeness' with the subjects. It also presents an idea for extracting the distinctive features of a face and combining those exaggerated features with an imaginary creature such as another species or an alien. Finally, it demonstrates the effectiveness of the design process through experiments.

A Driver's Condition Warning System using Eye Aspect Ratio (눈 영상비를 이용한 운전자 상태 경고 시스템)

  • Shin, Moon-Chang;Lee, Won-Young
    • The Journal of the Korea institute of electronic communication sciences / v.15 no.2 / pp.349-356 / 2020
  • This paper introduces the implementation of a driver's condition warning system that uses the eye aspect ratio to prevent car accidents. The proposed system consists of a camera to detect the eyes, a Raspberry Pi that processes the eye information from the camera, and a buzzer and vibrator to warn the driver. To detect and recognize the driver's eyes, the histogram of oriented gradients and deep-learning-based face landmark estimation are used. The system first calculates the driver's eye aspect ratio from six coordinates around the eye, then obtains the eye aspect ratio values for the opened and closed eyes. These two values are used to calculate the threshold needed to determine the eye state. Because the threshold is determined adaptively from the driver's own eye aspect ratio, the system can use the optimal threshold for judging the driver's condition. In addition, the system synthesizes an input image from the gray-scaled and LAB model images to operate in low-light conditions.
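
The eye aspect ratio from six landmarks p1..p6 is usually computed as EAR = (|p2-p6| + |p3-p5|) / (2 |p1-p4|), so it falls toward zero as the eye closes. A minimal sketch is below; the midpoint rule for the adaptive threshold is one plausible choice, since the abstract does not give the paper's exact formula.

```python
import numpy as np

def eye_aspect_ratio(pts):
    """pts: six (x, y) eye landmarks, ordered p1..p6 (p1/p4 are the corners).
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); small values mean a closed eye."""
    pts = np.asarray(pts, dtype=float)
    v1 = np.linalg.norm(pts[1] - pts[5])   # first vertical distance
    v2 = np.linalg.norm(pts[2] - pts[4])   # second vertical distance
    h = np.linalg.norm(pts[0] - pts[3])    # horizontal distance
    return (v1 + v2) / (2.0 * h)

def adaptive_threshold(ear_open, ear_closed):
    """One simple per-driver threshold: midway between open- and closed-eye EAR.
    (Assumed rule for illustration; the paper's exact rule is not stated here.)"""
    return (ear_open + ear_closed) / 2.0
```

Calibrating the threshold from each driver's own open and closed measurements is what makes it adaptive, rather than relying on a fixed universal cutoff.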