• Title/Summary/Keyword: Facial segmentation

Multiple Face Segmentation and Tracking Based on Robust Hausdorff Distance Matching

  • Park, Chang-Woo;Kim, Young-Ouk;Sung, Ha-Gyeong;Park, Mignon
    • International Journal of Fuzzy Logic and Intelligent Systems / v.3 no.1 / pp.87-92 / 2003
  • This paper describes a system for tracking multiple faces in an input video sequence using facial convex hull based segmentation and a robust Hausdorff distance. The algorithm adopts a skin color reference map in the YCbCr color space and a hair color reference map in the RGB color space to classify face regions. Then, we obtain an initial face model through preprocessing and a convex hull. For tracking, the algorithm computes the displacement of the point set between frames using a robust Hausdorff distance and selects the best possible displacement. Finally, the initial face model is updated using this displacement. We provide an example to illustrate the proposed tracking algorithm, which efficiently tracks rotating and zooming faces as well as multiple faces in video sequences obtained from a CCD camera.
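As an illustration of the matching step described in this abstract, the following is a minimal sketch (not the authors' implementation) of a robust, partial Hausdorff distance and an exhaustive displacement search; the rank fraction `frac` and the search window size are assumptions chosen for illustration.

```python
import numpy as np

def partial_hausdorff(A, B, frac=0.8):
    """Robust (partial) directed Hausdorff distance: the frac-quantile of the
    nearest-neighbour distances from point set A to point set B, which
    tolerates a fraction of outlier points in A."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # |A| x |B| distances
    nn = d.min(axis=1)                       # distance from each point of A to B
    k = max(int(frac * len(nn)) - 1, 0)
    return np.partition(nn, k)[k]            # k-th ranked nearest-neighbour distance

def best_displacement(model_pts, frame_pts, search=8, frac=0.8):
    """Try integer displacements in a small window and keep the one giving
    the smallest robust Hausdorff distance between model and frame points."""
    best, best_d = (0, 0), np.inf
    for dx in range(-search, search + 1):
        for dy in range(-search, search + 1):
            d = partial_hausdorff(model_pts + np.array([dx, dy]), frame_pts, frac)
            if d < best_d:
                best, best_d = (dx, dy), d
    return best, best_d
```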

Multi-attribute Face Editing using Facial Masks (얼굴 마스크 정보를 활용한 다중 속성 얼굴 편집)

  • Ambardi, Laudwika;Park, In Kyu;Hong, Sungeun
    • Journal of Broadcast Engineering / v.27 no.5 / pp.619-628 / 2022
  • Although face recognition and face generation have been growing in popularity, privacy concerns over the use of facial images collected in the wild have grown with them. In this paper, we propose a face editing network that reduces these privacy issues by generating face images with various attributes from a small number of real face images and facial mask information. Unlike existing methods that learn face attributes from large collections of real face images, the proposed method generates new facial images using a facial segmentation mask and texture images of five facial parts as styles. Our network is then trained to learn the style and location of each reference image. Once the proposed framework is trained, we can generate various face images using only a small number of real face images and segmentation information. In extensive experiments, we show that the proposed method can not only generate new faces but also localize facial attribute editing, despite using very few real face images.
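The core idea of conditioning generation on a segmentation mask plus per-part styles can be sketched as below. This is a conceptual illustration only, not the authors' network; the label scheme and function name are hypothetical.

```python
import numpy as np

def broadcast_part_styles(mask, part_styles):
    """Build a per-pixel style map from a facial part segmentation mask.

    mask        : (H, W) integer label map (e.g. 0=skin, 1=eyes, 2=nose, ...).
    part_styles : dict {label: (C,) style vector extracted from a reference
                  texture image of that part}.
    Returns an (H, W, C) map in which every pixel carries the style code of
    its facial part; a generator can then be conditioned on this map.
    """
    H, W = mask.shape
    C = len(next(iter(part_styles.values())))
    style_map = np.zeros((H, W, C), dtype=np.float32)
    for label, code in part_styles.items():
        style_map[mask == label] = code      # broadcast the part style over its region
    return style_map
```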

Implementation of Hair Style Recommendation System Based on Big data and Deepfakes (빅데이터와 딥페이크 기반의 헤어스타일 추천 시스템 구현)

  • Tae-Kook Kim
    • Journal of Internet of Things and Convergence / v.9 no.3 / pp.13-19 / 2023
  • In this paper, we investigated the implementation of a hairstyle recommendation system based on big data and deepfake technology. The proposed system recognizes the facial shape from the user's photo (image). Facial shapes are classified as oval, round, or square, and hairstyles that suit each facial shape are synthesized using deepfake technology and provided as videos. Hairstyles are recommended based on big data, applying the latest trends and styles that suit the facial shape. Using an image segmentation map and the Motion Supervised Co-Part Segmentation algorithm, elements belonging to the same category (such as hair or face) can be synthesized between images. Next, the synthesized image with the new hairstyle and a pre-defined video are applied to the Motion Representations for Articulated Animation algorithm to generate a video animation. The proposed system is expected to be used in various areas of the beauty industry, including virtual fitting. In future research, we plan to develop a smart mirror that recommends hairstyles and incorporates features such as Internet of Things (IoT) functionality.
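A simplified, hypothetical heuristic for the face-shape classification step (not the paper's actual classifier) could compare ratios of landmark-derived measurements, for example:

```python
def classify_face_shape(face_width, face_height, jaw_width):
    """Toy heuristic: classify a face as oval, round, or square from three
    landmark-derived measurements (all in pixels). The thresholds below are
    illustrative assumptions, not values taken from the paper."""
    aspect = face_height / face_width      # taller faces tend toward oval
    jaw_ratio = jaw_width / face_width     # wide, angular jaws tend toward square
    if aspect >= 1.3:
        return "oval"
    if jaw_ratio >= 0.9:
        return "square"
    return "round"
```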

A New Face Tracking Algorithm Using Convex-hull and Hausdorff Distance (Convex hull과 Robust Hausdorff Distance를 이용한 실시간 얼굴 트래킹)

  • Park, Min-Sik;Park, Chang-U;Park, Min-Yong
    • Proceedings of the KIEE Conference / 2001.11c / pp.438-441 / 2001
  • This paper describes a system for tracking a face in an input video sequence using facial convex hull based segmentation and a robust Hausdorff distance. The algorithm adopts the YCbCr color model of [1] to classify the face region. Then, we obtain an initial face model through preprocessing and a convex hull. For tracking, a robust Hausdorff distance is computed and the best possible displacement is selected. Finally, the previous face model is updated using this displacement. The method is robust to noise and outliers. We provide an example to illustrate the proposed tracking algorithm on video sequences obtained from a CCD camera.
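For the segmentation step, a minimal OpenCV sketch of YCbCr skin-color classification followed by a convex hull over the detected region might look like this; the Cb/Cr thresholds are commonly used approximate values, not those of the reference cited in the abstract.

```python
import cv2
import numpy as np

def skin_convex_hull(frame_bgr):
    """Classify skin pixels in the YCrCb space and return the convex hull
    of the largest skin-colored region as an initial face model."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    # widely used approximate skin range on Cr and Cb (assumed thresholds)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    return cv2.convexHull(largest)           # point set outlining the face region
```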

A novel method to extract the region of five sensory organ and Myungdang from a facial image for facial ocular inspection (얼굴 영상에서 망진을 위한 오관기관 및 명당 부위의 추출)

  • Min, Byong-Seok;Cho, Dong-Uk
    • Journal of the Korea Academia-Industrial cooperation Society / v.7 no.6 / pp.1257-1263 / 2006
  • Many automatic medical devices have been invented and developed mostly for western medicine rather than for oriental medicine. Facial ocular inspection is one of the four diagnostic methods of oriental medicine; it diagnoses disease by observing the shape and color of the patient's vital organs. In facial ocular inspection, the regions of the five sensory organs and the Myungdang are especially important. In this paper, we propose a novel method to extract the five sensory organs and the Myungdang from a facial image for facial ocular inspection. Finally, we show the usefulness of the proposed method through experiments.

Object Segmentation for Image Transmission Services and Facial Characteristic Detection based on Knowledge (화상전송 서비스를 위한 객체 분할 및 지식 기반 얼굴 특징 검출)

  • Lim, Chun-Hwan;Yang, Hong-Young
    • Journal of the Korean Institute of Telematics and Electronics T / v.36T no.3 / pp.26-31 / 1999
  • In this paper, we propose a knowledge-based facial characteristic detection algorithm and an object segmentation method for image communication. Under the conditions of constant illumination and a fixed distance from the video camera to the human face, we capture 256 × 256 input images with 256 gray levels and then remove noise using a Gaussian filter. Two images are captured with the video camera: one contains the human face, and the other contains only the background region without a face. We then compute the difference image between the two. After removing noise from the difference image by erosion and dilation, the background is separated from the facial region. We then locate the eyes, ears, nose, and mouth by searching for edge components in the facial image. Simulation results verify the efficiency of the proposed algorithm.
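The background-difference segmentation described above can be sketched with OpenCV roughly as follows; the threshold, blur kernel, and iteration counts are illustrative assumptions rather than the paper's parameters.

```python
import cv2
import numpy as np

def segment_face_by_difference(face_img, background_img, thresh=30):
    """Subtract a face-free background image from a face image, threshold the
    difference, and clean the resulting mask with erosion and dilation."""
    face_gray = cv2.GaussianBlur(cv2.cvtColor(face_img, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    bg_gray = cv2.GaussianBlur(cv2.cvtColor(background_img, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    diff = cv2.absdiff(face_gray, bg_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.erode(mask, kernel, iterations=2)    # remove isolated noise pixels
    mask = cv2.dilate(mask, kernel, iterations=2)   # restore the object region
    return mask
```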

A New Face Detection Method by Hierarchical Color Histogram Analysis

  • Kwon, Ji-Woong;Park, Myoung-Soo;Kim, Mun-Hyuk;Park, Jin-Young
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2001.10a / pp.138.3-138 / 2001
  • Because the face is a non-rigid structure and is influenced by illumination, a face detection algorithm must be robust to variations in the external environment (orientation of the lighting and the face, complex backgrounds, etc.). In this paper we develop a new face detection algorithm to achieve such robustness. First, we transform RGB color into another color space in which the effect of lighting is greatly reduced. Second, a hierarchical image segmentation technique is used to divide the image into homogeneous regions. This process uses not only color information but also spatial information: the former is used in segmentation by histogram analysis, and the latter in segmentation by grouping. Finally, we select the face region among the homogeneous regions by using facial features.
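As a sketch of the first step (a color transform that reduces the influence of lighting), one common choice is normalized rg chromaticity. This illustrates the general idea only and is not necessarily the color space used by the authors.

```python
import numpy as np

def rgb_to_rg_chromaticity(img_rgb):
    """Normalize each pixel by its brightness so that the remaining (r, g)
    coordinates depend mainly on surface color, not illumination intensity."""
    img = img_rgb.astype(np.float64)
    s = img.sum(axis=2, keepdims=True) + 1e-6   # R + G + B per pixel
    chrom = img / s
    return chrom[..., 0], chrom[..., 1]         # r and g channels (b = 1 - r - g)
```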

CREATING JOYFUL DIGESTS BY EXPLOITING SMILE/LAUGHTER FACIAL EXPRESSIONS PRESENT IN VIDEO

  • Kowalik, Uwe;Hidaka, Kota;Irie, Go;Kojima, Akira
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.267-272 / 2009
  • Video digests provide an effective way of checking video content rapidly due to their very compact form. By watching a digest, users can easily decide whether specific content is worth watching in full. The impression created by the digest therefore strongly influences the user's choice when selecting video content. We propose a novel method of automatic digest creation that evokes a joyful impression by exploiting smile/laughter facial expressions as emotional cues of joy in the video. We assume that a digest presenting smiling/laughing faces appeals to the user, since it assures the viewer that the smile/laughter is caused by joyful events inside the video. To detect smiling/laughing faces, we developed a neural-network-based method for classifying facial expressions. Video segmentation is performed by automatic shot detection. To create joyful digests, appropriate shots are automatically selected by ranking shots based on the smile/laughter detection result. We report the results of user trials conducted to assess the visual impression of 'joyful' digests produced automatically by our system. The results show that users tend to prefer emotional digests containing laughing faces, which suggests that the attractiveness of automatically created video digests can be improved by extracting emotional cues from the content through automatic facial expression analysis, as proposed in this paper.
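The shot-ranking idea can be sketched as follows, assuming a per-frame smile/laughter classifier is already available (the classifier itself, a neural network in the paper, is treated here as a black-box callable; the function names are assumptions).

```python
def rank_shots_by_smile(shots, smile_score):
    """Rank detected shots by the average smile/laughter score of their frames.

    shots       : list of (start_frame, end_frame) tuples from shot detection.
    smile_score : callable mapping a frame index to a score in [0, 1].
    Returns the shots sorted so that the most 'joyful' ones come first.
    """
    def shot_score(shot):
        start, end = shot
        frames = range(start, end + 1)
        return sum(smile_score(f) for f in frames) / max(len(frames), 1)
    return sorted(shots, key=shot_score, reverse=True)
```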

Comparison Analysis of Four Face Swapping Models for Interactive Media Platform COX (인터랙티브 미디어 플랫폼 콕스에 제공될 4가지 얼굴 변형 기술의 비교분석)

  • Jeon, Ho-Beom;Ko, Hyun-kwan;Lee, Seon-Gyeong;Song, Bok-Deuk;Kim, Chae-Kyu;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.22 no.5 / pp.535-546 / 2019
  • Recently, there has been much research on full face replacement systems, but it is not easy to obtain stable results due to variations in pose, angle, and facial appearance. To produce a natural synthesis result when replacing the face shown in a video image, technologies such as face region detection, feature extraction, face alignment, face region segmentation, 3D pose adjustment, and face transposition must all operate at a precise level, and each must be able to be combined interdependently with the others. Our analysis shows that, among these components, facial feature point extraction and face alignment have the highest implementation difficulty and contribute the most to the system, whereas face transposition and 3D pose adjustment are less difficult but still require further development. In this paper, we compare four face swapping models suitable for the COX platform: 2-D Faceswap, OpenPose, Deepfake, and CycleGAN. These models respectively cover frontal face pose image conversion, face images with active body movement, face movement of up to 15 degrees to the left and right, and generative adversarial network based synthesis.

Improved STGAN for Facial Attribute Editing by Utilizing Mask Information

  • Yang, Hyeon Seok;Han, Jeong Hoon;Moon, Young Shik
    • Journal of the Korea Society of Computer and Information / v.25 no.5 / pp.1-9 / 2020
  • In this paper, we propose a model that performs more natural facial attribute editing by utilizing mask information for the hair and hat regions. STGAN, one of the state-of-the-art methods for facial attribute editing, can naturally edit multiple facial attributes; however, editing hair-related attributes can still produce unnatural results. The key idea of the proposed method is to additionally exploit information about face regions that is lacking in the existing model. To do this, we apply three ideas. First, hair information is supplemented by adding hair ratio attributes derived from masks. Second, unnecessary changes in the image are suppressed by adding a cycle consistency loss. Third, a hat segmentation network is added to prevent distortion of the hat region. The effectiveness of the proposed method is evaluated and analyzed through qualitative evaluation. In the experimental results, the proposed method generated hair and face regions more naturally and successfully prevented distortion of the hat region.
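The cycle consistency idea mentioned above can be written as a short PyTorch sketch. The generator signature G(image, attributes) is an assumption made for illustration and is not the exact STGAN interface.

```python
import torch
import torch.nn.functional as F

def cycle_consistency_loss(G, x, attr_src, attr_tgt):
    """L1 cycle-consistency term: editing an image to the target attributes
    and then back to the source attributes should reproduce the input.
    G is assumed to be an attribute-editing generator G(image, attributes)."""
    x_edit = G(x, attr_tgt)          # source image -> target attributes
    x_back = G(x_edit, attr_src)     # edited image -> back to source attributes
    return F.l1_loss(x_back, x)      # penalize unnecessary changes to the image
```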