• Title/Summary/Keyword: Facial Model


Applications of Morphing on Facial Model Reconstruction and Surgical Simulation

  • Lee, Tong-Yee; Sun, Yung-Nein; Weng, Tzu-Lun; Lin, Yung-Ching
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1999.06a / pp.103.2-110 / 1999
  • Facial model reconstruction and surgical simulation are essential parts of a computer-aided surgical system: plastic surgeons use them to design appropriate repair plans and procedures before the actual surgery is performed. This work applies 3-D metamorphosis (morphing) to both tasks and presents new results in each.

Video Analysis System for Action and Emotion Detection by Object with Hierarchical Clustering based Re-ID (계층적 군집화 기반 Re-ID를 활용한 객체별 행동 및 표정 검출용 영상 분석 시스템)

  • Lee, Sang-Hyun; Yang, Seong-Hun; Oh, Seung-Jin; Kang, Jinbeom
    • Journal of Intelligence and Information Systems / v.28 no.1 / pp.89-106 / 2022
  • Recently, the amount of video data collected from smartphones, CCTVs, black boxes, and high-definition cameras has increased rapidly, and with it the demand for analysis and utilization. Because many industries lack the skilled manpower to analyze video, machine learning and artificial intelligence are actively used to assist. Accordingly, demand for computer vision technologies such as object detection and tracking, action detection, emotion detection, and Re-ID has also grown rapidly. However, object detection and tracking faces many conditions that degrade performance, such as an object re-appearing after leaving the recording location, and occlusion. Action and emotion detection models built on top of object detection and tracking therefore also have difficulty extracting data for each object. In addition, deep learning architectures composed of multiple models suffer performance degradation from bottlenecks and a lack of optimization. In this study, we propose a video analysis system consisting of a YOLOv5-based DeepSORT object tracking model, a SlowFast-based action recognition model, a Torchreid-based Re-ID model, and AWS Rekognition, an emotion recognition service. The proposed model uses single-linkage hierarchical clustering based Re-ID and processing methods that maximize hardware throughput. It achieves higher accuracy than a re-identification model built on simple metrics, near real-time processing performance, and robustness against tracking failures caused by object departure and re-appearance, occlusion, and so on. By continuously linking the action and facial emotion detection results of each object to the same identity, videos can be analyzed efficiently. For each frame, the re-identification model extracts a feature vector from the bounding box of each object detected by the tracking model, then applies single-linkage hierarchical clustering over the feature vectors accumulated from past frames to identify objects that the tracker lost. Through this process, an object that re-appears after leaving the scene, or that was occluded, can be re-tracked as the same object, so the action and facial emotion detection results of a newly recognized object can be linked to those of the object that appeared earlier. To improve processing performance, we introduce a per-object Bounding Box Queue and a Feature Queue, which reduce RAM requirements while maximizing GPU memory throughput. We also introduce the IoF (Intersection over Face) algorithm, which links facial emotions recognized through AWS Rekognition to object tracking information. The academic significance of this study is that, through these processing techniques, the two-stage re-identification model retains real-time performance even in the high-cost setting of simultaneous action and facial emotion detection, without the accuracy loss incurred by using simple metrics to reach real time. The practical implication is that the various industrial fields that require action and facial emotion detection, but are hindered by object tracking failures, can analyze videos effectively with the proposed model. With its high re-identification accuracy and processing performance, the model can be used in fields such as intelligent monitoring, observation services, and behavioral or psychological analysis services, where integrating tracking information with extracted metadata creates great industrial and business value. In future work, to measure object tracking performance more precisely, experiments on the MOT Challenge dataset, which is used by many international conferences, are needed. We will also investigate the cases that the IoF algorithm cannot handle in order to develop a complementary algorithm, and we plan further research applying this model to datasets from various fields related to intelligent video analysis.
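
The two mechanisms named in this abstract can be sketched in code. The following is a minimal illustration, not the authors' implementation: it assumes appearance feature vectors have already been extracted (e.g., by a Torchreid model), uses SciPy's single-linkage clustering with cosine distance, and reads IoF as the intersection area divided by the face-box area, which is one plausible reading of the name; the threshold `max_cosine_dist` is a hypothetical parameter that would be tuned on validation data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def reid_by_single_linkage(features, max_cosine_dist=0.3):
    """Group per-detection appearance features into identities.

    features: (N, D) array, one appearance vector per detected
        bounding box, accumulated over past frames.
    Returns an (N,) array of cluster labels; detections sharing a
    label are treated as the same object across frames.
    """
    dists = pdist(features, metric="cosine")   # condensed pairwise distances
    tree = linkage(dists, method="single")     # single-linkage merge tree
    return fcluster(tree, t=max_cosine_dist, criterion="distance")

def iof(face_box, object_box):
    """Intersection over Face: fraction of the face box covered by the
    object (track) box, used to attach an emotion result to a track."""
    fx1, fy1, fx2, fy2 = face_box
    ox1, oy1, ox2, oy2 = object_box
    ix1, iy1 = max(fx1, ox1), max(fy1, oy1)
    ix2, iy2 = min(fx2, ox2), min(fy2, oy2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    face_area = max(1e-9, (fx2 - fx1) * (fy2 - fy1))
    return inter / face_area
```

Detections whose features land in the same cluster would be assigned the same identity, which is how action and emotion results can re-attach to a track after a departure or occlusion gap.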

Realtime Facial Expression Data Tracking System using Color Information (컬러 정보를 이용한 실시간 표정 데이터 추적 시스템)

  • Lee, Yun-Jung; Kim, Young-Bong
    • The Journal of the Korea Contents Association / v.9 no.7 / pp.159-170 / 2009
  • Extracting expression data and capturing a face image from video is very important for online 3D face animation. Recently, there has been much research on vision-based approaches that capture an actor's expression in video and apply it to a 3D face model. In this paper, we propose an automatic data extraction system that extracts and tracks a face and expression data from real-time video input. The procedure consists of three steps: face detection, facial feature extraction, and feature tracking. In face detection, we detect skin pixels using a YCbCr skin color model and verify the face area with a Haar-based classifier. We use brightness and color information to extract eye and lip data related to facial expression, extracting 10 feature points from the eye and lip areas following the FAPs defined in MPEG-4. We then track the displacement of the extracted features across consecutive frames using a color probability distribution model. Experiments showed that our system can track the expression data at about 8 fps.
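
The first two steps of this pipeline, skin-pixel detection in YCbCr and Haar-based verification, can be sketched with OpenCV. This is a minimal sketch, not the authors' code: the CbCr thresholds below are commonly used illustrative values rather than the paper's, and OpenCV's stock `haarcascade_frontalface_default.xml` stands in for whatever classifier the authors used.

```python
import cv2

# Stock OpenCV frontal-face cascade (a stand-in, not the paper's model).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(frame_bgr):
    # 1) Skin-pixel mask in YCbCr space (OpenCV orders it Y, Cr, Cb);
    #    the Cr/Cb bounds are illustrative, widely used defaults.
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    # 2) Verify the candidate skin regions with the Haar classifier.
    masked = cv2.bitwise_and(frame_bgr, frame_bgr, mask=skin)
    gray = cv2.cvtColor(masked, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return faces  # list of (x, y, w, h) face rectangles
```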

Feature-Point Extraction by Dynamic Linking Model Based on Gabor Wavelets and Fuzzy C-Means Clustering Algorithm (Gabor 웨이브렛과 FCM 군집화 알고리즘에 기반한 동적 연결모형에 의한 얼굴표정에서 특징점 추출)

  • Sin, Yeong Suk
    • Korean Journal of Cognitive Science / v.14 no.1 / pp.10-10 / 2003
  • This paper extracts the edges of the main facial components from facial expression images using the Gabor wavelet transform. The FCM (Fuzzy C-Means) clustering algorithm then extracts representative low-dimensional feature points from the edges extracted from a neutral face. The feature points of the neutral face are used as a template for extracting the feature points of facial expression images. To match the feature points of an expression face point-to-point against those of the neutral face, a dynamic linking model is applied in two steps, called coarse mapping and fine mapping. In short, this paper presents automatic feature-point extraction by a dynamic linking model based on Gabor wavelets and the fuzzy C-means (FCM) algorithm. The results of this study were applied to automatic feature extraction in dimension-based facial expression recognition [1].
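
The two building blocks named here, a Gabor filter bank for edge responses and fuzzy C-means for reducing the edge points to a few representative feature points, can be sketched as follows. This is a textbook-level illustration under assumed parameters (8 orientations, fuzzifier m = 2, illustrative kernel settings), not the paper's configuration.

```python
import cv2
import numpy as np

def gabor_edge_points(gray, thresh=0.5, ksize=21, sigma=4.0, lambd=10.0):
    """Edge points taken as the strongest responses of an
    8-orientation Gabor filter bank (parameters are illustrative)."""
    responses = []
    for theta in np.arange(0, np.pi, np.pi / 8):
        kern = cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                  lambd, gamma=0.5, psi=0)
        responses.append(cv2.filter2D(gray.astype(np.float32), -1, kern))
    mag = np.max(np.abs(responses), axis=0)
    ys, xs = np.where(mag > thresh * mag.max())
    return np.column_stack([xs, ys]).astype(float)   # (N, 2) points

def fuzzy_c_means(points, c, m=2.0, n_iter=100, eps=1e-5):
    """Textbook FCM: reduce N edge points to c representative centers."""
    u = np.random.dirichlet(np.ones(c), size=len(points))  # memberships
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ points) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(points[:, None] - centers[None], axis=2) + 1e-12
        w = d ** (-2.0 / (m - 1.0))          # standard membership update
        new_u = w / w.sum(axis=1, keepdims=True)
        if np.abs(new_u - u).max() < eps:
            u = new_u
            break
        u = new_u
    return centers   # low-dimensional representative feature points
```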

Performance Analysis for Accuracy of Personality Recognition Models based on Setting of Margin Values at Face Region Extraction (얼굴 영역 추출 시 여유값의 설정에 따른 개성 인식 모델 정확도 성능 분석)

  • Qiu Xu; Gyuwon Han; Bongjae Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.1 / pp.141-147 / 2024
  • Recently, there has been growing interest in personalized services tailored to an individual's preferences, which has led to ongoing research on recognizing and leveraging an individual's personality traits. Among the various methods for personality assessment, the OCEAN model stands out as a prominent approach. For OCEAN-based personality recognition, multimodal artificial intelligence models that incorporate linguistic, paralinguistic, and non-linguistic information are often employed. This paper examines how the margin value used when extracting facial areas from video data affects the accuracy of a personality recognition model that determines OCEAN traits from facial expressions. The study employed personality recognition models based on 2D Patch Partition, R2plus1D, 3D Patch Partition, and Video Swin Transformer. Setting the facial area extraction margin to 60 yielded the highest 1-MAE performance, 0.9118. These findings indicate the importance of selecting an optimal margin value to maximize the efficiency of personality recognition models.
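
The margin setting studied here amounts to padding the detected face box before cropping it from each frame. A minimal sketch, assuming a pixel-unit margin and an (x, y, w, h) box; the value 60 is the best-performing setting reported above, and the 1-MAE metric is included for context.

```python
import numpy as np

def crop_face_with_margin(image, box, margin=60):
    """Crop a face region expanded by `margin` pixels on every side,
    clamped to the image bounds. `box` is (x, y, w, h); margin=60 is
    the best-performing setting reported in the abstract."""
    img_h, img_w = image.shape[:2]
    x, y, w, h = box
    x1, y1 = max(0, x - margin), max(0, y - margin)
    x2, y2 = min(img_w, x + w + margin), min(img_h, y + h + margin)
    return image[y1:y2, x1:x2]

def one_minus_mae(pred, true):
    """The accuracy metric quoted above: 1 minus the mean absolute
    error between predicted and ground-truth OCEAN trait scores."""
    return 1.0 - float(np.mean(np.abs(np.asarray(pred) - np.asarray(true))))
```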

Fully Automatic Facial Recognition Algorithm By Using Gabor Feature Based Face Graph (가버 피쳐기반 얼굴 그래프를 이용한 완전 자동 안면 인식 알고리즘)

  • Kim, Jin-Ho
    • The Journal of the Korea Contents Association / v.11 no.2 / pp.31-39 / 2011
  • Facial recognition algorithms using a Gabor wavelet based face graph perform very well, but have weaknesses such as a large amount of computation and results that vary with the initial location. We propose a fully automatic facial recognition algorithm using Gabor feature based geometric deformable face graph matching. For speed-up, the initial location and size of the face graph are selected from Adaboost detection results. Geometric transformation parameters are defined to fit the face graph to the face model graph by updating the graph's size and location, and the optimal parameters for the best face graph are derived with an optimization technique. Simulation results show that the proposed algorithm performs very well, with a recognition rate of 96.7% and a recognition speed of 0.26 s on the FERET database.
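
The core comparison can be illustrated as matching Gabor "jets", vectors of filter-bank responses sampled at the graph nodes, against a stored model graph, with node positions initialized from an Adaboost (Haar) detection box. A minimal sketch under assumed filter parameters; the paper's geometric transformation parameters and optimization procedure are not reproduced here.

```python
import cv2
import numpy as np

def gabor_jet(gray, x, y, n_orient=8, n_scale=5):
    """Gabor filter-bank response magnitudes at one graph node.
    (Filtering the whole image per node is wasteful but keeps the
    sketch short; parameters are illustrative assumptions.)"""
    jet = []
    for s in range(n_scale):
        for o in range(n_orient):
            kern = cv2.getGaborKernel((31, 31), sigma=4.0 * (s + 1),
                                      theta=np.pi * o / n_orient,
                                      lambd=8.0 * (s + 1), gamma=0.5, psi=0)
            resp = cv2.filter2D(gray.astype(np.float32), -1, kern)
            jet.append(abs(float(resp[y, x])))
    return np.asarray(jet)

def graph_similarity(gray, nodes, model_jets):
    """Mean normalized jet similarity between the image graph (nodes
    as (x, y) points) and a stored model graph; a matcher would
    maximize this over the graph's location and scale."""
    sims = []
    for (x, y), mj in zip(nodes, model_jets):
        j = gabor_jet(gray, x, y)
        sims.append(j @ mj / (np.linalg.norm(j) * np.linalg.norm(mj) + 1e-12))
    return float(np.mean(sims))
```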

Facial Contour Extraction in PC Camera Images using Active Contour Models (동적 윤곽선 모델을 이용한 PC 카메라 영상에서의 얼굴 윤곽선 추출)

  • Kim Young-Won; Jun Byung-Hwan
    • Proceedings of the Korea Contents Association Conference / 2005.11a / pp.633-638 / 2005
  • Face extraction is a very important part of human interfaces, biometrics, and security. In this paper, we apply a DCM (Dilation of Color and Motion) filter and Active Contour Models to extract the facial outline. First, the DCM filter is built by applying morphological dilation to the combination of a facial color image and a differential image, each dilated beforehand; this filter removes complex backgrounds and detects the facial outline. Because Active Contour Models are strongly affected by their initial curves, we calculate the rotation angle from the geometric ratios of the face, eyes, and mouth. We use edgeness and intensity as the image energy in order to extract the outline in areas with weak edges. We acquired various head-pose images, with both eyes visible, from five people in an indoor space with a complex background. In experiments on a total of 125 images, 25 per person, the average facial outline extraction rate was 98.1% and the average processing time was 0.2 s.
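
The DCM filter can be sketched as the combination of a dilated skin-color mask and a dilated frame-difference (motion) mask. This is one reading of the abstract using OpenCV and illustrative thresholds, not the authors' code; the Active Contour (snake) step that consumes this mask is omitted.

```python
import cv2
import numpy as np

def dcm_filter(frame_bgr, prev_bgr, kernel_size=5, motion_thresh=20):
    """Dilation of Color and Motion: dilate a skin-color mask and a
    frame-difference mask, then intersect them to suppress complex
    background before contour extraction."""
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    # Skin-color mask in YCrCb (illustrative thresholds).
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    color = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    # Motion mask from the difference of consecutive frames.
    diff = cv2.absdiff(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY))
    motion = cv2.threshold(diff, motion_thresh, 255, cv2.THRESH_BINARY)[1]
    # Dilate each mask, then combine them.
    color = cv2.dilate(color, kernel)
    motion = cv2.dilate(motion, kernel)
    return cv2.bitwise_and(color, motion)
```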


Research about the Abstraction of Area Typicality of Emotions for Systematization of Human's Sensitivity Symbol (인간의 감성기호 체계화를 위한 감정영역범주화에 관한 연구)

  • Yun Bong-Shik
    • The Journal of the Korea Contents Association / v.5 no.2 / pp.137-145 / 2005
  • This study is a model of research for developing 3D character content on facial expression as a kind of non-linguistic sign, focusing on the expression of a person's emotional factors. It contributes a framework for the symbolic analysis of human emotions along with a general review of expression. The human face is the most complex and versatile of any species: for humans, the face is a rich and versatile instrument serving many different functions. It serves as a window displaying one's motivational state, which makes one's behavior more predictable and understandable to others and improves communication. The face can supplement verbal communication: a prompt facial display can reveal the speaker's attitude toward the information being conveyed. Alternatively, the face can complement verbal communication, such as raising the eyebrows to lend additional emphasis to a stressed word. Facial expression plays an important role in the digital visual context. This study presents a framework of facial expression categories for the effective production of cartoons and animation that appeal to human visual emotion.


A Study on Preferred Morphologic Feature and Proportion of Facial Aesthetic Subunit by Korean General Public (일반인이 선호하는 얼굴의 미적 단위별 형태와 비율 연구)

  • Yoon, Yong-Il; Lee, Dong-Lark; Yoo, Jung-Seok; Rhee, Seung-Chul; Hur, Gi-Yeun; Kim, Ju-Yeon
    • Archives of Plastic Surgery / v.37 no.4 / pp.351-360 / 2010
  • Purpose: As the influence of mass media increases, the general standard of facial attractiveness or beauty also changes. The primary purpose of this study is to identify the factors of an attractive, beautiful face as recognized by the public. Methods: We selected a standard model photograph and manipulated it with Adobe® Photoshop® and the Monariza® virtual plastic surgery program. The contours of the face, eyes, nose, forehead, zygoma, and chin, and the proportions of the upper, middle, and lower face were varied. An interview survey using the structured standard photographs was conducted with 310 respondents and used in the final analysis. Multiple regression analysis and all other necessary statistical analyses were performed with SPSS 12.0. Results: Preferred faces and facial aesthetic subunits differed with the general characteristics of the respondents. The younger generation preferred lozenge and inverted-triangle facial contours, while respondents over 40 preferred an egg-shaped contour. For chin and zygoma contours, respondents in their 20s preferred a distinctly small chin and a relatively small lower face, whereas respondents over 40 preferred relatively wide zygomas. For the proportion of the upper, middle, and lower face, 51.0% of respondents answered 1:1:1. For an aesthetic operation, respondents preferred a protruding forehead, and a small, V-shaped chin in frontal view. Conclusion: Many respondents preferred plastic surgery for better facial subunits. The statistical evidence from this study suggests that the harmony and balance of the facial aesthetic subunits make a face attractive and beautiful.