• Title/Summary/Keywords: Facial Model


Face Deformation Technique for Efficient Virtual Aesthetic Surgery Models

  • 박현;문영식
    • 전자공학회논문지CI
    • /
    • Vol. 42, No. 3
    • /
    • pp.63-72
    • /
    • 2005
  • In this paper, we present a Radial Basis Function (RBF)-based deformation technique suitable for a virtual aesthetic surgery system, together with a technique for blending the deformed facial components back into the face image. A deformation technique for virtual surgery must deform the flexible facial components smoothly and accurately, and it must also be local, so that facial components outside the deformed region are not distorted. To this end, the proposed virtual surgery system is based on a free-form deformation model and computes the displacement of the lattice points with an RBF. For surgical accuracy, the deformation error is corrected by iteratively recomputing the coefficients of the RBF mapping function with Singular Value Decomposition (SVD), using the sum of squared errors between the target positions of the reference-curve vertices and their actually deformed positions. The deformed facial component is composited with the original face image using blending ratios computed by the Euclidean Distance Transform (EDT). Experiments confirm that the proposed deformation and blending techniques perform well in terms of the accuracy and distortion of the virtual surgery results.
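
The coefficient-fitting step described above can be illustrated with a small least-squares RBF warp. The sketch below is only a hedged approximation under assumed choices: the thin-plate kernel, the `control_pts`/`displacements` arrays, and the toy grid are placeholders rather than the authors' model or data; NumPy's `lstsq` performs the SVD-based solve internally.

```python
import numpy as np

def rbf_weights(control_pts, displacements, eps=1e-8):
    """Fit RBF weights so that each control point maps to its displaced position.

    control_pts   : (N, 2) source landmark coordinates (hypothetical example data)
    displacements : (N, 2) target minus source positions for those landmarks
    """
    # Thin-plate-spline style kernel r^2 * log(r); other radial kernels work too.
    d = np.linalg.norm(control_pts[:, None, :] - control_pts[None, :, :], axis=-1)
    K = np.where(d > eps, d**2 * np.log(d + eps), 0.0)
    # Least-squares solve (SVD-based internally), echoing the abstract's
    # SVD-driven correction of the RBF mapping coefficients.
    w, *_ = np.linalg.lstsq(K, displacements, rcond=None)
    return w

def rbf_deform(points, control_pts, w, eps=1e-8):
    """Apply the fitted RBF mapping to arbitrary points (e.g., lattice nodes)."""
    d = np.linalg.norm(points[:, None, :] - control_pts[None, :, :], axis=-1)
    K = np.where(d > eps, d**2 * np.log(d + eps), 0.0)
    return points + K @ w

# Toy usage: move two of four control points and warp a small grid.
ctrl = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
disp = np.array([[0., 0.], [0.1, 0.], [0., 0.], [0., 0.05]])
w = rbf_weights(ctrl, disp)
grid = np.stack(np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5)), -1).reshape(-1, 2)
warped = rbf_deform(grid, ctrl, w)
```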

An Experimental Study about flap Viability after Harvesting of the Composite Face/Scalp flap for Allotransplantation in Rabbit Model

  • 서영민;정승문
    • Archives of Reconstructive Microsurgery
    • /
    • Vol. 14, No. 2
    • /
    • pp.95-104
    • /
    • 2005
  • The aim of this study was to investigate the major vascular system supplying the flap, the flap survival rate, and the complications after flap elevation, in order to evaluate the possibility of vascularized face/scalp allotransplantation. Forty New Zealand white rabbits were divided into a control group and an experimental group. In the control group, a face/scalp composite unit composed of skin, subcutaneous tissue, and platysma muscle was supplied by the bilateral facial, temporal, and auricular arteries and drained by the external jugular vein; after the flap was elevated, the bilateral facial, temporal, and auricular arteries were ligated. In the experimental group, the same composite unit was elevated but the bilateral facial, temporal, and auricular arteries were not ligated. We measured the survival area of the flaps by the grid method in the sixteen animals of the control group and the fourteen of the experimental group that survived for four weeks. The mean survival duration of the flap was 3.7 days in the control group and 20.0 days in the experimental group; the differences in mean survival duration and in survival rate at 28 days between the two groups were significant (p<0.05). The mean fraction of surviving flap area was 1.3 ± 4.% in the control group and 63.1 ± 4.8% in the experimental group, significantly higher in the experimental group (p<0.05). The composite face/scalp flap elevated here, supplied by the bilateral facial, temporal, and auricular arteries and drained by the external jugular vein, therefore has sufficient viability to be transplanted after elevation.


The improved facial expression recognition algorithm for detecting abnormal symptoms in infants and young children

  • 김윤수;이수인;석종원
    • 전기전자학회논문지
    • /
    • Vol. 25, No. 3
    • /
    • pp.430-436
    • /
    • 2021
  • Non-contact temperature measurement systems based on optical and thermal cameras are a key element in managing febrile illnesses in group facilities. Existing temperature measurement systems use deep-learning-based face detection algorithms and can therefore measure the temperature of the face region, but they are limited in recognizing abnormal symptoms in infants and young children, who cannot express themselves. In this paper, we improve the facial expression recognition algorithm of an existing temperature measurement system so that it can detect abnormal symptoms in infants and young children. The proposed method detects infants in the image with an object detection model, extracts the face region, and obtains the coordinates of the eyes, nose, and mouth, which are the key elements of expression recognition. A selective sharpening filter is then applied based on the obtained coordinates, and expression recognition is performed. According to the experimental results, the proposed algorithm improves accuracy on the UTK dataset by 2.52%, 1.12%, and 2.29% for the neutral, smiling, and sad expressions, respectively.
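
The abstract does not specify the selective sharpening filter, so the following is only a hedged sketch of one plausible reading: a standard sharpening kernel applied inside small windows around the detected eye, nose, and mouth coordinates. The `landmarks` format, window size, and kernel are assumptions, not the authors' values.

```python
import cv2
import numpy as np

# Generic 3x3 sharpening kernel; the paper's actual filter may differ.
SHARPEN = np.array([[0, -1, 0],
                    [-1, 5, -1],
                    [0, -1, 0]], dtype=np.float32)

def selective_sharpen(face_img, landmarks, box=24):
    """Sharpen only small windows around facial landmarks.

    face_img  : HxWx3 uint8 face crop
    landmarks : dict like {"eye_l": (x, y), "eye_r": (x, y), "nose": ..., "mouth": ...}
                (hypothetical format; a real system would obtain these from a detector)
    """
    out = face_img.copy()
    h, w = face_img.shape[:2]
    for (x, y) in landmarks.values():
        x0, y0 = max(0, x - box), max(0, y - box)
        x1, y1 = min(w, x + box), min(h, y + box)
        out[y0:y1, x0:x1] = cv2.filter2D(face_img[y0:y1, x0:x1], -1, SHARPEN)
    return out
```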

3D model for korean-japanese sign language image communication

  • 신성효;김상운
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 1998년도 하계종합학술대회논문집
    • /
    • pp.929-932
    • /
    • 1998
  • In this paper, we propose a method of representing emotional expressions and lip shapes for sign language communication using a 3-dimensional model. First, we employ the action units (AUs) of the Facial Action Coding System (FACS) to display the facial expression shapes. We then define 11 basic lip shapes and the sounding time of each component in a syllable, in order to synthesize the lip shapes for Korean characters more precisely. Experimental results show that the proposed method can be used effectively for sign language image communication between different languages.

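The idea of timed basic lip shapes can be loosely illustrated as a lookup from syllable components to (shape id, duration) keyframes. Everything in the sketch below — the shape ids, the durations, and the jamo table — is a made-up placeholder, not the paper's 11-shape set or its sounding times.

```python
# Hypothetical mapping from Korean jamo to one of the basic lip-shape ids.
# The ids and durations here are placeholders, not the paper's actual values.
LIP_SHAPE = {"ㅏ": 3, "ㅁ": 0, "ㄴ": 1, "ㅣ": 5}
DEFAULT_MS = {"onset": 60, "nucleus": 140, "coda": 80}  # assumed sounding times

def syllable_to_keyframes(onset, nucleus, coda=None):
    """Return (lip_shape_id, duration_ms) keyframes for one syllable."""
    parts = [("onset", onset), ("nucleus", nucleus)]
    if coda:
        parts.append(("coda", coda))
    return [(LIP_SHAPE[jamo], DEFAULT_MS[role]) for role, jamo in parts]

print(syllable_to_keyframes("ㅁ", "ㅏ", "ㄴ"))   # e.g. the syllable '만'
```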

Real Time Face Detection with TS Algorithm in Mobile Display

  • 이용환;김영섭;이상범;강정원;박진양
    • 반도체디스플레이기술학회지
    • /
    • Vol. 4, No. 1
    • /
    • pp.61-64
    • /
    • 2005
  • This study presents a new algorithm to detect facial features in a color image captured by a mobile device with a complex background and an undefined distance between the camera and the face. Since a skin color model with the Hough transform spends approximately 90% of its running time extracting the fitting ellipse for facial feature detection, we replace this step with a simple geometric vector operation, called the TS (Triangle-Square) transformation. The experimental results show that this reduces the running time. The face detection rate is similar to that of other methods, and the speed is fast enough for real-time identification systems in mobile environments.

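The TS (Triangle-Square) transformation itself is not detailed in the abstract, so no attempt is made to reproduce it here; the hedged sketch below only shows a generic skin-color-model stage of the kind the abstract says precedes the geometric step, using commonly cited YCrCb thresholds that are an assumption rather than the paper's model.

```python
import cv2
import numpy as np

def skin_mask(bgr_img):
    """Binary skin mask from a rough YCrCb threshold (generic values, not the paper's)."""
    ycrcb = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)     # (Y, Cr, Cb) lower bound
    upper = np.array([255, 173, 127], dtype=np.uint8)  # (Y, Cr, Cb) upper bound
    mask = cv2.inRange(ycrcb, lower, upper)
    # Clean up speckle before any geometric test on the candidate face region.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return mask
```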

Audio and Video Bimodal Emotion Recognition in Social Networks Based on Improved AlexNet Network and Attention Mechanism

  • Liu, Min;Tang, Jun
    • Journal of Information Processing Systems
    • /
    • Vol. 17, No. 4
    • /
    • pp.754-771
    • /
    • 2021
  • In continuous dimensional emotion recognition, the parts that highlight emotional expression are not the same in each modality, and the influence of different modalities on the emotional state also differs. This paper therefore studies the fusion of the two most important modalities in emotion recognition, voice and visual expression, and proposes a dual-modal emotion recognition method that combines an improved AlexNet network with an attention mechanism. After simple preprocessing of the audio and video signals, prior knowledge is first used to extract audio features. Facial expression features are then extracted by the improved AlexNet network. Finally, a multimodal attention mechanism fuses the facial expression features and audio features, and an improved loss function is used to mitigate the missing-modality problem, improving the robustness of the model and the performance of emotion recognition. The experimental results show that the concordance correlation coefficients of the proposed model in the arousal and valence dimensions were 0.729 and 0.718, respectively, which are superior to several comparative algorithms.
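
For reference, the concordance correlation coefficient reported above is a standard agreement measure between predicted and ground-truth dimensional values; the minimal sketch below computes it for two vectors (the toy arrays are invented, not data from the paper).

```python
import numpy as np

def concordance_cc(pred, true):
    """Lin's concordance correlation coefficient between predictions and labels."""
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    mp, mt = pred.mean(), true.mean()
    vp, vt = pred.var(), true.var()
    cov = ((pred - mp) * (true - mt)).mean()
    return 2 * cov / (vp + vt + (mp - mt) ** 2)

# Toy usage with made-up arousal predictions and labels.
print(concordance_cc([0.1, 0.4, 0.5, 0.8], [0.2, 0.35, 0.6, 0.7]))
```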

Face inpainting via Learnable Structure Knowledge of Fusion Network

  • Yang, You;Liu, Sixun;Xing, Bin;Li, Kesen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 16, No. 3
    • /
    • pp.877-893
    • /
    • 2022
  • With the development of deep learning, face inpainting has improved significantly in the past few years. Although image inpainting frameworks that integrate a generative adversarial network or an attention mechanism have enhanced the semantic understanding among facial components, the reconstruction of corrupted regions still raises issues worth exploring, such as blurred edge structure, excessive smoothness, unreasonable semantics, and visual artifacts. To address these issues, we propose a Learnable Structure Knowledge of Fusion Network (LSK-FNet), which learns prior knowledge through an edge generation network for image inpainting. The architecture involves two steps: first, the structure information obtained by the edge generation network is used as prior knowledge for the face inpainting network; second, both the generated prior knowledge and the incomplete image are fed into the face inpainting network to obtain the fused information. To improve inpainting accuracy, both gated convolution and region normalization are applied in the proposed model. We evaluate LSK-FNet qualitatively and quantitatively on the CelebA-HQ dataset. The experimental results demonstrate that LSK-FNet improves the edge structure and details of facial images, and the model surpasses the compared models on the L1, PSNR, and SSIM metrics. When the masked region is less than 20%, the L1 loss is reduced by more than 4.3%.
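
Gated convolution, mentioned above, can be summarized as a feature branch modulated by a learned per-pixel soft gate. The PyTorch-style sketch below is a generic rendering of that idea; the layer sizes, activation, and the 4-channel image-plus-mask input are assumptions, not LSK-FNet's actual configuration.

```python
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    """Generic gated convolution: output = activation(feature) * sigmoid(gate)."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.feature = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.gate = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.act = nn.ELU()

    def forward(self, x):
        # The gate learns, per pixel and channel, how much of the feature to pass,
        # which lets the network down-weight responses inside masked regions.
        return self.act(self.feature(x)) * torch.sigmoid(self.gate(x))

# Toy usage on a 4-channel input (e.g., an RGB image concatenated with a hole mask).
layer = GatedConv2d(4, 32)
y = layer(torch.randn(1, 4, 64, 64))   # -> shape (1, 32, 64, 64)
```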

Perceived Age Prediction from Face Image Based on Super-resolution and Tanh-polar Transform

  • 안일구;이시우
    • 대한의용생체공학회:의공학회지
    • /
    • Vol. 44, No. 5
    • /
    • pp.329-335
    • /
    • 2023
  • Perceived age is defined as the age estimated from physical appearance. Perceived age is an important indicator of the overall health status of the elderly, because people who appear older tend to have higher rates of morbidity and mortality than people of the same chronological age. Although perceived age is an important indicator, objective methods to quantify it are lacking. In this paper, we construct a quantified perceived age model from face images using a convolutional neural network. The face images are enlarged with super-resolution so that the skin, an important feature for perceived age, becomes clearer. Moreover, through the Tanh-polar transformation, the central area of the face occupies a relatively larger area than the boundary area, helping the neural network better recognize facial skin features. The experimental results show a mean absolute error (MAE) of 6.59, indicating that the proposed model is superior to the existing method.
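
The Tanh-polar idea mentioned above — compressing the radial coordinate with tanh so that the face center receives proportionally more output pixels — can be sketched as an inverse-map remap. The sketch below is a rough, hedged approximation: the center point, scale, and output size are arbitrary assumptions, and the paper's exact formulation may differ.

```python
import cv2
import numpy as np

def tanh_polar_warp(img, center, scale=None, out_size=(256, 256)):
    """Rough tanh-polar warp: rows = tanh-compressed radius, columns = angle.

    center : (cx, cy) assumed face center, e.g. a point between the eyes.
    scale  : radius in pixels that maps to tanh(1); defaults to half the image size.
    """
    h, w = img.shape[:2]
    H, W = out_size
    if scale is None:
        scale = 0.5 * min(h, w)
    r_norm = (np.arange(H) + 0.5) / H * np.tanh(1.0)   # stay inside arctanh's domain
    theta = (np.arange(W) + 0.5) / W * 2.0 * np.pi
    r_src = scale * np.arctanh(r_norm)                  # inverse of the tanh compression
    map_x = center[0] + r_src[:, None] * np.cos(theta)[None, :]
    map_y = center[1] + r_src[:, None] * np.sin(theta)[None, :]
    return cv2.remap(img, map_x.astype(np.float32), map_y.astype(np.float32),
                     interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_REPLICATE)
```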

Positional symmetry of porion and external auditory meatus in facial asymmetry

  • Choi, Ji Wook;Jung, Seo Yeon;Kim, Hak-Jin;Lee, Sang-Hwy
    • Maxillofacial Plastic and Reconstructive Surgery
    • /
    • Vol. 37
    • /
    • pp.33.1-33.9
    • /
    • 2015
  • Background: The porion (Po) is used to construct the Frankfort horizontal (FH) plane for cephalometrics, and the external auditory meatus (EAM) is used to transfer and mount the dental model with a facebow. The classical assumption is that the EAM represents the Po through parallel positioning. However, we sometimes question the possible positional disparity between Po and EAM when the occlusal cant or facial midline differs from our clinical understanding. The purpose of this study was to evaluate the positional parallelism of Po and EAM in facial asymmetries, and to investigate their relationship with the maxillary occlusal cant. Methods: The 67 subjects were classified into three groups. Group I consisted of normal subjects with facial symmetry (average chin deviation of 1.05 ± 0.52 mm) and minimal occlusal cant (<1.5 mm). Asymmetry group II-A had no maxillary occlusal cant (average 0.60 ± 0.36), while asymmetry group II-B had occlusal cant (average 3.72 ± 1.47). The distances of the bilateral Po, EAM, and mesiobuccal cusp tips of the maxillary first molars (Mx) from the horizontal orbital plane (Orb) and the coronal plane were measured on three-dimensional computed tomographic images. Their right- and left-side distance discrepancies were calculated and statistically compared. Results: The EAM was located 10.3 mm below and 2.3 mm anterior to the Po in group I. The vertical distances from Po to EAM on both sides were significantly different in group II-B (p=0.001), while they were not in the other groups. The interside discrepancy of the vertical distances from EAM to Mx in group II-B also showed significant differences compared with those from Po to Mx and from Orb to Mx. Conclusions: Subjects with facial asymmetry and a prominent maxillary occlusal cant tend to have a symmetric Po position but an asymmetric EAM. Some caution or other measures will be helpful when these landmarks are used in clinical procedures.
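
The interside comparison described above reduces to measuring each bilateral landmark's distance from a reference plane and differencing the left and right values. The sketch below shows that arithmetic with invented 3D coordinates; the landmark values and the plane are illustrative only, not data from the study.

```python
import numpy as np

def point_plane_distance(p, plane_point, plane_normal):
    """Unsigned distance from point p to the plane through plane_point with the given normal."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return abs(np.dot(np.asarray(p) - np.asarray(plane_point), n))

# Made-up coordinates (mm): a horizontal reference plane and bilateral porion landmarks.
orb_point, orb_normal = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
po_right, po_left = [42.0, -10.0, -24.8], [-41.5, -10.0, -25.6]

d_right = point_plane_distance(po_right, orb_point, orb_normal)
d_left = point_plane_distance(po_left, orb_point, orb_normal)
interside_discrepancy = abs(d_right - d_left)   # 0.8 mm in this toy example
```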

Welfare Interface using Multiple Facial Features Tracking

  • 주진선;신윤희;김은이
    • 대한전자공학회논문지SP
    • /
    • Vol. 45, No. 1
    • /
    • pp.75-83
    • /
    • 2008
  • In this paper, we propose a welfare interface that can efficiently implement the various operations of a mouse using multiple facial features. The proposed system consists of five modules: face detection, eye detection, mouth detection, facial feature tracking, and mouse control. In the first step, the face region is detected using a skin color model and connected component analysis. To then detect the eyes accurately within the face region, a neural-network-based texture classifier is used to distinguish eye regions from non-eye regions. Once the eye regions are detected, the mouth region is located with an edge detector based on the eye positions. The detected eye and mouth regions are then tracked precisely with the mean shift algorithm and template matching, respectively, and mouse movement or click functions are performed based on the results. To verify the effectiveness of the proposed system, we applied the interface to various applications. Experiments with both disabled and non-disabled users showed that it can serve as a more convenient and familiar real-time interface for everyone.
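
As a hedged illustration of the tracking stage, the sketch below tracks a previously detected eye window across frames with OpenCV's built-in mean shift. The video source, initial window, and histogram settings are placeholders, not the paper's configuration; the neural-network eye detector and the template matching used for the mouth are not reproduced here.

```python
import cv2

cap = cv2.VideoCapture(0)                    # placeholder video source
track_window = (200, 150, 40, 24)            # (x, y, w, h) from a hypothetical eye detector
ok, frame = cap.read()
assert ok, "could not read an initial frame"

# Build a hue histogram of the initial eye window to back-project in later frames.
x, y, w, h = track_window
roi_hsv = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([roi_hsv], [0], None, [32], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # Mean shift moves the window toward the densest back-projection response.
    _, track_window = cv2.meanShift(back_proj, track_window, term_crit)
    # track_window now holds the updated eye position, which could drive the cursor.
```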