• Title/Summary/Keyword: Facial Model

Influence of thickness and incisal extension of indirect veneers on the biomechanical behavior of maxillary canine teeth

  • Costa, Victoria Luswarghi Souza;Tribst, Joao Paulo Mendes;Uemura, Eduardo Shigueyuki;de Morais, Dayana Campanelli;Borges, Alexandre Luiz Souto
    • Restorative Dentistry and Endodontics
    • /
    • v.43 no.4
    • /
    • pp.48.1-48.13
    • /
    • 2018
  • Objectives: To analyze the influence of thickness and incisal extension of indirect veneers on the stress and strain generated in maxillary canine teeth. Materials and Methods: A 3-dimensional maxillary canine model was validated with an in vitro strain gauge and exported to computer-assisted engineering software. Materials were considered homogeneous, isotropic, and elastic. Each canine tooth was then subjected to a 0.3 and 0.8 mm reduction on the facial surface, in preparations with and without incisal covering, and restored with a lithium disilicate veneer. A 50 N load was applied at 45° to the long axis of the tooth, on the incisal third of the palatal surface of the crown. Results: The results showed a mean strain of 218.16 μstrain in the in vitro experiment and 210.63 μstrain in the finite element analysis (FEA). The stress concentration on prepared teeth was higher at the palatal root surface, with a mean value of 11.02 MPa, varying less than 3% between the preparation designs. The veneers concentrated higher stresses at the incisal third of the facial surface, with a mean of 3.88 MPa and a 40% increase in the thinner veneers. The incisal cover generated a new stress concentration area, with values over 48.18 MPa. Conclusions: The mathematical model of a maxillary canine tooth was validated using FEA. The thickness (0.3 or 0.8 mm) and the incisal covering made no difference to the tooth structure. However, the incisal covering was harmful to the veneer, for which the greater thickness was beneficial.
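
A quick sanity check on the validation claim is the relative deviation between the in vitro strain-gauge mean and the FEA mean; the minimal Python sketch below uses only the two values quoted in the abstract.

```python
# Agreement between the in vitro strain-gauge mean and the FEA mean,
# both reported in the abstract (values in microstrain).
in_vitro = 218.16
fea = 210.63

relative_deviation = abs(in_vitro - fea) / in_vitro * 100
print(f"relative deviation: {relative_deviation:.1f}%")  # about 3.5%
```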

Differences in the heritability of craniofacial skeletal and dental characteristics between twin pairs with skeletal Class I and II malocclusions

  • Park, Heon-Mook;Kim, Pil-Jong;Sung, Joohon;Song, Yun-Mi;Kim, Hong-Gee;Kim, Young Ho;Baek, Seung-Hak
    • The Korean Journal of Orthodontics
    • /
    • v.51 no.6
    • /
    • pp.407-418
    • /
    • 2021
  • Objective: To investigate differences in the heritability of skeletodental characteristics between twin pairs with skeletal Class I and Class II malocclusions. Methods: Forty Korean adult twin pairs were divided into a Class I (C-I) group (0° ≤ ANB [angle between point A, nasion, and point B] ≤ 4°; mean age, 40.7 years) and a Class II (C-II) group (ANB > 4°; mean age, 43.0 years). Each group comprised 14 monozygotic and 6 dizygotic twin pairs. Thirty-three cephalometric variables were measured using lateral cephalograms and were categorized as anteroposterior, vertical, dental, mandibular, and cranial base characteristics. The ACE model was used to calculate heritability (A > 0.7, high heritability). Thereafter, principal component analysis (PCA) was performed. Results: Twin pairs in the C-I group exhibited high heritability values in the facial anteroposterior characteristics, inclination of the maxillary and mandibular incisors, mandibular body length, and cranial base angles. Twin pairs in the C-II group showed high heritability values in vertical facial height, ramus height, effective mandibular length, and cranial base length. PCA extracted eight components with 88.3% cumulative explanation in the C-I group and seven components with 91.0% cumulative explanation in the C-II group. Conclusions: Differences in the heritability of skeletodental characteristics between twin pairs with skeletal Class I and II malocclusions might provide valuable information for growth prediction and treatment planning.
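
The ACE model referenced above decomposes phenotypic variance into additive genetic (A), shared environment (C), and unique environment (E) components. The study fits it with structural equation modeling; a minimal sketch using the classical Falconer closed-form estimates from monozygotic/dizygotic twin correlations (hypothetical values, not the paper's) conveys the idea:

```python
def ace_components(r_mz: float, r_dz: float) -> dict:
    """Falconer-style closed-form ACE estimates from twin correlations:
    A = 2(r_MZ - r_DZ), C = 2 r_DZ - r_MZ, E = 1 - r_MZ."""
    return {"A": 2 * (r_mz - r_dz),
            "C": 2 * r_dz - r_mz,
            "E": 1 - r_mz}

# Hypothetical correlations for one cephalometric variable.
est = ace_components(r_mz=0.85, r_dz=0.45)
print(est)                                   # A ~ 0.8, C ~ 0.05, E ~ 0.15
print("high heritability:", est["A"] > 0.7)  # threshold used in the study
```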

Identification of cranial nerve ganglia using sectioned images and three-dimensional models of a cadaver

  • Kim, Chung Yoh;Park, Jin Seo;Chung, Beom Sun
    • The Korean Journal of Pain
    • /
    • v.35 no.3
    • /
    • pp.250-260
    • /
    • 2022
  • Background: Cranial nerve ganglia, which are prone to viral infections and tumors, are located deep in the head, so their detailed anatomy is difficult to understand using conventional cadaver dissection. To locate the small ganglia in medical images, their sectional anatomy should be learned by medical students and doctors. The purpose of this study is to elucidate cranial nerve ganglia anatomy using sectioned images and three-dimensional (3D) models of a cadaver. Methods: One thousand two hundred and forty-six sectioned images of a male cadaver were examined to identify the cranial nerve ganglia. Using the real-color sectioned images, a real-color volume model with a voxel size of 0.4 × 0.4 × 0.4 mm was produced. Results: The sectioned images and 3D models can be downloaded for free from a webpage, anatomy.dongguk.ac.kr/ganglia. On the images and models, all the cranial nerve ganglia and their whole course were identified. In the case of the facial nerve, the geniculate, pterygopalatine, and submandibular ganglia were clearly identified. In the case of the glossopharyngeal nerve, the superior, inferior, and otic ganglia were found. Thanks to the high resolution and real color of the sectioned images and volume models, detailed observation of the ganglia was possible. Since the volume models can be cut in both orthogonal and oblique planes, advanced sectional anatomy of the ganglia can be explained concretely. Conclusions: The sectioned images and 3D models will be helpful resources for understanding cranial nerve ganglia anatomy and for performing related surgical procedures.
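
Cutting a voxel volume along an arbitrary oblique plane, as the paper's volume models allow, reduces to resampling the volume at points on that plane. A minimal Python sketch, with a random array standing in for the real-color volume:

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Random array standing in for the real-color volume (the actual data,
# at 0.4 x 0.4 x 0.4 mm voxels, is available from the URL above).
volume = np.random.rand(100, 100, 100)

def oblique_slice(vol, center, u, v, size=64):
    """Resample a size x size plane through `center`, spanned by the
    direction vectors u and v (normalized here)."""
    u = np.asarray(u, float); u /= np.linalg.norm(u)
    v = np.asarray(v, float); v /= np.linalg.norm(v)
    s = np.arange(size) - size / 2.0
    coords = (np.asarray(center, float)[:, None, None]
              + u[:, None, None] * s[None, :, None]
              + v[:, None, None] * s[None, None, :])
    return map_coordinates(vol, coords, order=1)  # trilinear interpolation

plane = oblique_slice(volume, center=(50, 50, 50), u=(1, 1, 0), v=(0, 0, 1))
print(plane.shape)  # (64, 64)
```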

Multimodal Emotional State Estimation Model for Implementation of Intelligent Exhibition Services (지능형 전시 서비스 구현을 위한 멀티모달 감정 상태 추정 모형)

  • Lee, Kichun;Choi, So Yun;Kim, Jae Kyeong;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.1-14
    • /
    • 2014
  • Both researchers and practitioners are showing an increased interest in interactive exhibition services. Interactive exhibition services are designed to respond directly to visitor responses in real time, so as to fully engage visitors' interest and enhance their satisfaction. In order to install an effective interactive exhibition service, it is essential to adopt intelligent technologies that enable accurate estimation of a visitor's emotional state from responses to exhibited stimuli. Studies undertaken so far have attempted to estimate the human emotional state, most of them doing so by gauging either facial expressions or audio responses. However, the most recent research suggests that a multimodal approach that uses people's multiple responses simultaneously may lead to better estimation. Given this context, we propose a new multimodal emotional state estimation model that uses various responses including facial expressions, gestures, and movements measured by the Microsoft Kinect sensor. In order to handle a large amount of sensory data effectively, we propose to use stratified sampling-based MRA (multiple regression analysis) as our estimation method. To validate the usefulness of the proposed model, we collected 602,599 responses and emotional state data with 274 variables from 15 people. When we applied our model to the data set, we found that our model estimated the levels of valence and arousal within a 10-15% error range. Since our proposed model is simple and stable, we expect that it will be applied not only in intelligent exhibition services, but also in other areas such as e-learning and personalized advertising.
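
The estimation method named above, stratified sampling-based multiple regression, subsamples the sensor stream within strata before fitting the regression. A minimal sketch on synthetic data (the paper's 274 Kinect variables and exact sampling scheme are not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the Kinect-derived feature stream (facial
# expression, gesture, and movement channels) and a valence target.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 20))
valence = X @ rng.normal(size=20) + rng.normal(scale=0.5, size=10_000)

# Stratify on quartile bins of the target, then subsample 10% so the
# regression stays tractable while preserving the target distribution.
strata = np.digitize(valence, np.quantile(valence, [0.25, 0.5, 0.75]))
X_sub, _, y_sub, _ = train_test_split(
    X, valence, train_size=0.1, stratify=strata, random_state=0)

model = LinearRegression().fit(X_sub, y_sub)  # the MRA step
print("R^2 on the full stream:", round(model.score(X, valence), 3))
```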

A Study on the Facial Image Synthesis Using Texture Mapping and Shading Effect (명암효과와 질감매핑을 이용한 얼굴영상 합성에 관한 연구)

  • 김상현;정성환;김신환;김남철
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.18 no.7
    • /
    • pp.913-921
    • /
    • 1993
  • Texture mapping is widely used as an image synthesis method in model-based coding systems. Image synthesis using this method uses only the texture information of a frontal face view. Therefore, when the model is rotated, texture mapping may produce an unnatural image in terms of shading. In this paper, a new texture mapping method that takes the shading effect into account is studied, and a wireframe for the ears and changes of the hair are also supplemented for rotated views. The experimental results show that the proposed method yields synthesized images of reasonably natural quality.
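
The shading effect described above can be sketched as Lambertian shading that modulates the mapped texture using the rotated model's surface normals. A minimal sketch, with hypothetical arrays standing in for the paper's face texture and wireframe normals:

```python
import numpy as np

# Hypothetical stand-ins: `texture` for the frontal face texture and
# `normals` for the rotated wireframe's per-pixel surface normals.
h, w = 256, 256
texture = np.random.rand(h, w, 3)
normals = np.zeros((h, w, 3)); normals[..., 2] = 1.0

light_dir = np.array([0.3, 0.2, 0.93])
light_dir /= np.linalg.norm(light_dir)

# Lambert's cosine law: brightness proportional to max(0, n . l),
# plus a small ambient term so shadowed areas keep some texture.
shade = np.clip(normals @ light_dir, 0.0, 1.0)
ambient = 0.2
shaded = texture * (ambient + (1.0 - ambient) * shade)[..., None]
print(shaded.shape)  # (256, 256, 3): the texture with shading applied
```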

Modeling Based on a Muscle Model for Three-Dimensional Facial Expression Animation (3차원 얼굴 표정 애니메이션을 위한 근육모델 기반의 모델링)

  • 이혜진;정현숙;이일병
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2002.04a
    • /
    • pp.742-744
    • /
    • 2002
  • Facial animation has recently been actively studied as an aid that makes it easy to distinguish individuals and communicate efficiently. In this paper, to generate facial expressions, a muscle-based modeling method grounded in anatomical structures such as the skin tissue and facial muscles of a real face is used so that realistic and natural facial animation can be achieved. In addition, to implement a smooth face model, we present a method that subdivides the polygon mesh and adds the facial muscles that strongly influence facial expressions, producing diverse and natural expressions. By applying the proposed method to Waters' [3] model, we obtained results that approach more realistic facial animation. These results can be used in many fields such as video conferencing, virtual reality, distance education, and film.
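
Waters' muscle model, to which the paper's method is applied, deforms mesh vertices inside a cone of influence around each linear muscle vector. A minimal sketch of such a linear muscle; the falloff constants and displacement scale are assumptions, not the paper's values:

```python
import numpy as np

def linear_muscle(vertices, head, tail, influence_angle=np.pi / 4,
                  fall_start=0.3, fall_end=1.0, contraction=0.5):
    """Waters-style linear muscle sketch: pull vertices inside a cone
    around the muscle vector (tail -> head) toward the attachment
    point `tail`, with angular and radial falloff."""
    muscle = head - tail
    length = np.linalg.norm(muscle)
    out = vertices.copy()
    for i, v in enumerate(vertices):
        d = v - tail
        r = np.linalg.norm(d)
        if r == 0 or r > fall_end * length:
            continue
        cos_a = np.dot(d, muscle) / (r * length)
        if cos_a < np.cos(influence_angle):
            continue  # outside the angular zone of influence
        if r < fall_start * length:
            radial = 1.0
        else:  # cosine falloff toward the outer boundary
            t = (r - fall_start * length) / ((fall_end - fall_start) * length)
            radial = np.cos(t * np.pi / 2)
        # 0.1 is an arbitrary displacement scale for this sketch
        out[i] = v - 0.1 * contraction * cos_a * radial * length * (d / r)
    return out

verts = np.random.rand(500, 3)  # hypothetical face mesh vertices
deformed = linear_muscle(verts, head=np.array([0.5, 0.9, 0.5]),
                         tail=np.array([0.5, 0.5, 0.5]))
```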

Hierarchical Age Estimation based on Dynamic Grouping and OHRank

  • Zhang, Li;Wang, Xianmei;Liang, Yuyu
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.7
    • /
    • pp.2480-2495
    • /
    • 2014
  • This paper describes a hierarchical method for image-based age estimation that combines age group classification and age value estimation. The proposed method uses a coarse-to-fine strategy with different appearance features to describe facial shape and texture. Considering the damage to continuity between neighboring groups caused by fixed divisions during age group classification, a dynamic grouping technique is employed to allow non-fixed groups. Based on the given group, an ordinal hyperplane ranking (OHRank) model is employed to transform age estimation into a series of binary enquiry problems that can take advantage of the intrinsic correlation and ordinal information of age. A set of experiments on FG-NET is presented, and the results demonstrate the validity of our solution.
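
The ordinal decomposition behind OHRank recasts age estimation as a sequence of binary questions of the form "is the subject older than k?", with the predicted age derived from the aggregated answers. A plain sketch of this reduction on synthetic features; OHRank itself adds cost-sensitive ranking, which is omitted here:

```python
import numpy as np
from sklearn.svm import LinearSVC

# Synthetic stand-in features and ages; the real method runs on facial
# appearance features (e.g., from FG-NET images).
rng = np.random.default_rng(0)
X = rng.normal(size=(800, 30))
age = np.clip(X[:, 0] * 8 + 35 + rng.normal(scale=3, size=800), 0, 69).astype(int)

# One binary classifier per threshold k: "is the subject older than k?"
classifiers = [LinearSVC(dual=False).fit(X, (age > k).astype(int))
               for k in range(int(age.min()), int(age.max()))]

def estimate_age(x, base=int(age.min())):
    # Aggregate the ordinal binary answers by counting positives.
    return base + sum(int(clf.predict(x[None])[0]) for clf in classifiers)

print("true:", age[0], "estimated:", estimate_age(X[0]))
```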

Face detection using active contours

  • Chang, Jae-Sik;Lee, Mu-Youl;Moon, Chae-Hyun;Park, Hye-Sun;Lee, Kyung-Mi;Kim, Hang-Joon
    • Proceedings of the IEEK Conference
    • /
    • 2002.07c
    • /
    • pp.1515-1518
    • /
    • 2002
  • This paper proposes an active contour model to detect facial regions in a given image. Accordingly, we use the color information of human faces, which is represented by a skin color model. We evolve the active contour using the level set method, which allows for cusps, corners, and automatic topological changes. Experimental results show the effectiveness of the proposed method.
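
A level-set contour evolution driven by a skin-color likelihood can be sketched in a few lines: the contour is the zero level set of a function phi, updated by phi_t = -F |grad phi|, where the speed F is positive where skin is likely. The probability map below is a random stand-in for the paper's skin color model:

```python
import numpy as np

rng = np.random.default_rng(0)
skin_prob = rng.random((128, 128))   # stand-in for a skin-color likelihood map

# Signed speed: the contour expands where skin is likely, shrinks elsewhere.
speed = 2.0 * skin_prob - 1.0

# phi is a signed distance to an initial circle; phi < 0 is the interior.
yy, xx = np.mgrid[0:128, 0:128]
phi = np.sqrt((yy - 64.0) ** 2 + (xx - 64.0) ** 2) - 10.0

dt = 0.5
for _ in range(200):
    gy, gx = np.gradient(phi)
    grad_norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
    phi -= dt * speed * grad_norm    # level-set update: phi_t = -F * |grad phi|

face_mask = phi < 0                  # interior of the final contour
print("pixels inside the contour:", int(face_mask.sum()))
```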

Facial Expression Recognition using Model-based Feature Extraction in Image Sequence (동영상에서의 모델기반 특징추출을 이용한 얼굴 표정인식)

  • Park Mi-Ae;Choi Sung-In;Im Don-Gak;Ko Je-Pil
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2006.06b
    • /
    • pp.343-345
    • /
    • 2006
  • In this paper, we propose a method for recognizing facial expressions in video using an ASM (Active Shape Model) and a state-based model. The ASM is used to fit facial feature points to an input image, and the shape parameter vector generated in that process is extracted. The set of shape parameter vectors extracted over a video sequence is converted into state vectors, each taking one of three states, and a classifier recognizes the facial expression. In the classification stage, we propose a new instance-based learning method to improve classification performance. Experiments show that the proposed instance-based learning method achieves a better recognition rate than a KNN classifier.
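
The conversion from shape parameter trajectories to three-valued state vectors can be sketched as quantizing each parameter's net change over the sequence; the threshold and data below are hypothetical, not from the paper:

```python
import numpy as np

def to_state_vector(param_sequence, threshold=0.1):
    """Map a (frames x params) ASM parameter trajectory to one of three
    states per parameter: -1 decreasing, 0 stable, +1 increasing."""
    net_change = param_sequence[-1] - param_sequence[0]
    states = np.zeros(net_change.shape, dtype=int)
    states[net_change > threshold] = 1
    states[net_change < -threshold] = -1
    return states

# Hypothetical trajectory of 8 shape parameters over 30 frames.
seq = np.cumsum(np.random.default_rng(0).normal(size=(30, 8)), axis=0) * 0.05
print(to_state_vector(seq))  # e.g. [ 1  0 -1 ...]
```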

Facial Feature Extraction using an Active Shape Model with an Adaptive Mean Shape (적응적인 평균 모양을 이용한 동적 모양 모델 기반 얼굴 특징점 추출)

  • Kim Hyun-Chul;Kim Hyoung-Joon;Hwang Wonjun;Kee Seok-Cheol;Kim Whoi-Yul
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2005.07b
    • /
    • pp.868-870
    • /
    • 2005
  • This paper proposes an ASM (Active Shape Model) that uses an adaptive mean shape for accurate feature point extraction from faces in varying poses. The ASM uses a statistical shape model to model the shape of the human face. The mean shape of the statistical shape model is fixed regardless of the facial pose in the input image, which causes incorrect results during the shape model's constraint checking and reconstruction steps. To solve this problem, we propose a mean shape that adapts to the facial shape of the input image, and experiments show that the proposed method resolves the problem of the fixed mean shape and improves feature point extraction performance.
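
One way to realize the adaptive mean shape idea is to keep several pose-specific mean shapes and pick the one closest to the current landmark estimate after Procrustes alignment. The sketch below is an assumed construction along those lines, not the paper's exact formulation:

```python
import numpy as np

def procrustes_distance(shape, mean):
    """Distance between two (n_points x 2) shapes after removing
    translation, scale, and rotation (no reflection handling)."""
    a = shape - shape.mean(axis=0)
    b = mean - mean.mean(axis=0)
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    u, _, vt = np.linalg.svd(a.T @ b)
    r = u @ vt                         # rotation best aligning a to b
    return np.linalg.norm(a @ r - b)

def adaptive_mean(landmarks, pose_means):
    """Pick the pose-specific mean shape closest to the current estimate."""
    dists = [procrustes_distance(landmarks, m) for m in pose_means]
    return pose_means[int(np.argmin(dists))]

rng = np.random.default_rng(0)
pose_means = [rng.normal(size=(68, 2)) for _ in range(3)]  # e.g. frontal/left/right
landmarks = pose_means[1] + rng.normal(scale=0.05, size=(68, 2))
chosen = adaptive_mean(landmarks, pose_means)
```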
