• Title/Summary/Keyword: Eyebrows

Search Result 119

PROSTHODONTIC AND ESTHETIC RESTORATION OF ECTODERMAL DYSPLASIA WITH ANODONTIA: A CASE REPORT (Anodontia 소견을 보이는 외배엽 이형성증 환자에서 교합기능, 심미기능 회복에 관한 치험증례)

  • Lee, Min-Ha; Yang, Kyu-Ho
    • Journal of the Korean Academy of Pediatric Dentistry / v.21 no.2 / pp.570-576 / 1994
  • Ectodermal dysplasia is characterized by a partial or complete lack of primary and permanent teeth; other ectodermal structures that may be affected include the skin, hair, and sweat glands. A patient with the so-called anhidrotic type of ectodermal dysplasia exhibits dry skin, a lack of sweat glands, sparse eyebrows and body hair, a saddle nose, and everted lips. The genetic basis of anhidrotic ectodermal dysplasia is recessive and sex-linked, manifesting chiefly in males, although this is debated. A 6-year-old boy with typical signs of anhidrotic ectodermal dysplasia is presented. Prosthetic restorations are of great value to these patients, both from the standpoint of function and for psychological reasons. The need for complete dentures is critical during the preschool period and continues into adulthood. The following case report describes an approach to the management of a patient with anhidrotic ectodermal dysplasia.

Emotion Recognition by Vision System (비젼에 의한 감성인식)

  • 이상윤;오재흥;주영훈;심귀보
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2001.12a / pp.203-207 / 2001
  • In this paper, we propose a neural-network-based method for intelligently recognizing human emotion from CCD color images. We first acquire a color image from the CCD camera and then recognize the expression represented by the structural correlation of the facial feature points (eyebrows, eyes, nose, mouth); the central technology is the process of extracting, separating, and recognizing the correct data in the image. In the proposed method, human emotion is divided into four categories (surprise, anger, happiness, sadness). To separate the background from the face robustly against changes such as external illumination, we segment the skin-color region using the color difference of the color space. We propose an algorithm that extracts the four feature points from the face image acquired by the color CCD camera, obtains a normalized face image, and computes feature vectors from it. We then apply the back-propagation algorithm to the secondary feature vectors. Finally, we show the practical applicability of the proposed method.

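
The skin-region separation step this abstract describes — segmenting the face from the background using the color-difference channels of the color space — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the YCbCr conversion follows ITU-R BT.601, and the threshold values are common rule-of-thumb assumptions rather than the paper's parameters.

```python
import numpy as np

def skin_mask(rgb):
    """Rough skin-color segmentation in the chrominance (color-difference)
    plane of YCbCr. Thresholds are illustrative assumptions."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    # ITU-R BT.601 RGB -> YCbCr chrominance (color-difference) channels
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    # A commonly cited skin cluster in the Cb/Cr plane
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

# A reddish "skin-like" pixel vs. a blue background pixel
img = np.array([[[200, 140, 120], [20, 40, 200]]], dtype=np.uint8)
mask = skin_mask(img)
```

Thresholding only the chrominance channels is what gives the robustness to external illumination that the abstract claims: Cb and Cr vary far less with brightness than raw RGB values do.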

Facial Expression Classification through Covariance Matrix Correlations

  • Odoyo, Wilfred O.; Cho, Beom-Joon
    • Journal of Information and Communication Convergence Engineering / v.9 no.5 / pp.505-509 / 2011
  • This paper attempts to classify known facial expressions and to establish the correlation between two regions (eyes + eyebrows and mouth) in identifying the six prototypic expressions. Covariance is used to describe the region texture that captures facial features for classification. The captured texture exhibits the pattern observed during the execution of a particular expression. Feature matching is done by a simple distance measure between the probe and the modeled representations of the eye and mouth components. We target the JAFFE database in this experiment to validate our claim. A high classification rate is observed from the mouth component and from the correlation between the two (eye and mouth) components. The eye component exhibits a lower classification rate if used independently.
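
The region-covariance idea in this abstract — summarizing the texture of the eye+eyebrow or mouth region with a covariance matrix and matching by a simple distance — can be sketched as follows. The per-pixel feature set and the Frobenius-norm distance are illustrative assumptions; the abstract does not specify either.

```python
import numpy as np

def region_covariance(patch):
    """Covariance descriptor of an image region: each pixel contributes a
    feature vector (x, y, intensity, |dI/dx|, |dI/dy|), and the region is
    summarized by the covariance of those vectors."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(patch.astype(float))
    feats = np.stack([xs.ravel().astype(float), ys.ravel().astype(float),
                      patch.ravel().astype(float),
                      np.abs(gx).ravel(), np.abs(gy).ravel()])
    return np.cov(feats)               # 5x5 descriptor, independent of region size

def descriptor_distance(c1, c2):
    """Simple distance between two covariance descriptors (Frobenius norm)."""
    return float(np.linalg.norm(c1 - c2))

rng = np.random.default_rng(1)
mouth_probe = rng.integers(0, 256, (16, 24))   # stand-in mouth regions
mouth_model = rng.integers(0, 256, (16, 24))
d = descriptor_distance(region_covariance(mouth_probe),
                        region_covariance(mouth_model))
```

A useful property of this descriptor is that its size (5×5 here) does not depend on the region's dimensions, so probe and model regions need not match in size.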

Global Feature Extraction and Recognition from Matrices of Gabor Feature Faces

  • Odoyo, Wilfred O.; Cho, Beom-Joon
    • Journal of Information and Communication Convergence Engineering / v.9 no.2 / pp.207-211 / 2011
  • This paper presents a method for facial feature representation and recognition based on the covariance matrices of Gabor-filtered images. Gabor filters are a very powerful tool for processing images: they respond to different local orientations and wave numbers around points of interest, especially local features on the face. This is a unique attribute needed to extract special features around facial components such as the eyebrows, eyes, mouth, and nose. The covariance matrices computed on Gabor-filtered faces are adopted as the feature representation for face recognition. The geodesic distance is used as the matching measure and is preferred for its global consistency over other methods; it takes into consideration the position of the data points in addition to the geometric structure of the given face images. The proposed method is invariant and robust under rotation, pose change, and boundary distortion. Tests run on random images, as well as on the publicly available JAFFE and FRAV3D face recognition databases, yield an impressively high recognition rate.
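
A Gabor filter of the kind this abstract relies on — an oriented, frequency-selective kernel tuned to a local orientation and wave number — can be constructed as below. The kernel size, the four-orientation bank, and the parameter values are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, wavelength):
    """Real part of a 2-D Gabor kernel: a Gaussian envelope modulated by a
    cosine carrier along orientation `theta` with the given spatial
    `wavelength`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

# A small bank at four orientations, as one might sweep over a face image
bank = [gabor_kernel(15, sigma=3.0, theta=t, wavelength=6.0)
        for t in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
```

Convolving a face image with each kernel in the bank yields one response map per orientation; the covariance descriptor is then computed over these filtered responses rather than over raw intensities.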

Past, Present, and Future of Brain Imaging Studies in Trichotillomania (발모광 뇌영상 연구의 과거, 현재와 미래)

  • Lee, Ji-Ah; Kim, Chul-Kwon; Kim, Yoon-Jung; Bahn, Geon-Ho
    • Journal of the Korean Academy of Child and Adolescent Psychiatry / v.20 no.3 / pp.115-121 / 2009
  • Trichotillomania (TTM) is a disorder characterized by repetitive hair pulling, frequently from the scalp and/or eyebrows, leading to noticeable hair loss and functional impairment. TTM remains a poorly understood and inadequately treated disorder despite increased recognition of its prevalence. We review available neuroimaging studies conducted in patients with TTM, covering structural and functional neuroimaging in turn. Data from patients' structural and functional neuroimaging results enabled us to identify the neural circuitry involved in the manifestation of hair pulling. Finally, we highlight the future importance of neuroimaging studies in children and adolescents with TTM.

Drowsiness Detection using Eye-blink Patterns (눈 깜박임 패턴을 이용한 졸음 검출)

  • Choi, Ki-Ho
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.10 no.2 / pp.94-102 / 2011
  • In this paper, a novel drowsiness detection algorithm using eye-blink patterns is proposed. The proposed drowsiness detection model, based on finite automata, makes it easy to detect eye blinks, drowsiness, and sleep by checking only the number of input symbols standing for the closed-eye state. It also increases accuracy by taking the vertical projection histogram after locating the eye region using features of the horizontal projection histogram, which minimizes external effects such as eyebrows or black-framed glasses. Experimental results on eye-blink detection using the JZU eye-blink database show that our approach achieves more than 93% precision and high performance.
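
The finite-automata idea in this abstract — classifying a blink, drowsiness, or sleep purely from the number of consecutive closed-eye input symbols — can be sketched as follows. The state logic is reduced to a run-length counter, and the frame thresholds are invented for illustration; the paper defines its own automaton and parameters.

```python
# One symbol per video frame: 'C' = closed eye, 'O' = open eye.
# Assumed thresholds: a closed run of <=3 frames is a blink, <=9 is
# drowsiness, and anything longer counts as sleep.
BLINK_MAX, DROWSY_MAX = 3, 9

def classify(symbols):
    """Scan the frame symbols and emit one event per closed-eye run
    (a run is flushed by the next open-eye frame)."""
    events, run = [], 0
    for s in symbols:
        if s == 'C':
            run += 1
        elif run:                      # an open-eye frame ends a closed run
            if run <= BLINK_MAX:
                events.append('blink')
            elif run <= DROWSY_MAX:
                events.append('drowsy')
            else:
                events.append('sleep')
            run = 0
    return events

# Runs of 2, 5, and 12 closed frames
print(classify("OOCCOOCCCCCOOCCCCCCCCCCCCO"))  # → ['blink', 'drowsy', 'sleep']
```

Because only the closed-eye symbol count matters, the detector needs no per-frame timing state beyond the counter, which is what makes the automaton formulation so cheap at runtime.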

Acromegaloid Facial Appearance Syndrome - A New Case in India

  • Rai, Arpita; Sattur, Atul P.; Naikmasur, Venkatesh G.
    • Journal of Genetic Medicine / v.10 no.1 / pp.57-61 / 2013
  • Acromegaloid facial appearance syndrome is a very rare syndrome combining an acromegaloid-like facial appearance, thickened lips and oral mucosa, and acral enlargement. Progressive facial dysmorphism is characterized by a coarse facies, a long bulbous nose, high-arched eyebrows, thickening of the lips and oral mucosa leading to exaggerated rugae and frenula, a furrowed tongue, and narrow palpebral fissures. We report a case of acromegaloid facial appearance syndrome in a 19-year-old male patient who presented with all the characteristic features of the syndrome along with previously unreported anomalies such as dystrophic nails, postaxial polydactyly, and incisal notching of the teeth.

Comic Emotional Expression for Effective Sign-Language Communications (효율적인 수화 통신을 위한 코믹한 감정 표현)

  • ;;Shin Tanahashi;Yoshinao Aoki
    • Proceedings of the IEEK Conference / 1999.06a / pp.651-654 / 1999
  • In this paper we propose an emotional expression method using a comic model and special marks for effective sign-language communication. Until now, research has aimed at producing ever more realistic facial and emotional expressions. When representing only emotional expression, however, a comic expression can be better than a realistic picture of a face. The comic face is a comic-style expression model in which almost all components are discarded except the necessary parts such as the eyebrows, eyes, nose, and mouth. In the comic model, we can use special marks to emphasize various emotions. We represent emotional expression using Action Units (AU) of the Facial Action Coding System (FACS) and define Special Units (SU) for emphasizing the emotions. Experimental results demonstrate the potential of the proposed method for sign-language image communication.

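
The AU-plus-special-mark representation this abstract describes can be sketched as a simple lookup, as below. The AU combinations follow commonly cited FACS emotion prototypes, but the Special Unit (SU) mark names here are invented placeholders, not the authors' definitions.

```python
# Each emotion maps to a set of FACS Action Units plus one hypothetical
# "Special Unit" emphasis mark for the comic face model.
EMOTIONS = {
    "happiness": {"aus": {6, 12}, "su": "sparkle"},
    "surprise":  {"aus": {1, 2, 5, 26}, "su": "exclamation_mark"},
    "anger":     {"aus": {4, 5, 7, 23}, "su": "cross_veins"},
    "sadness":   {"aus": {1, 4, 15}, "su": "tear_drop"},
}

def render_plan(emotion):
    """Return the AUs to activate (sorted) and the SU mark to overlay."""
    spec = EMOTIONS[emotion]
    return sorted(spec["aus"]), spec["su"]

aus, mark = render_plan("surprise")
```

In a renderer, each AU would drive the deformation of one comic-face component (e.g. AU 1/2 raise the eyebrows) while the SU mark is simply composited on top for emphasis.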

A Comic Emotional Expression for 3D Facial Model (3D 얼굴 모델의 코믹한 감정 표현)

  • ;;Shin Tanahashi;Yoshinao Aoki
    • Proceedings of the IEEK Conference / 1999.11a / pp.536-539 / 1999
  • In this paper we propose a 3D emotional expression method using a comic model for effective sign-language communication. Until now, research has aimed at producing ever more realistic facial and emotional expressions. When representing only emotional expression, however, a comic expression can be better than a realistic picture of a face. The comic face is a comic-style expression model in which almost all components are discarded except the necessary parts such as the eyebrows, eyes, nose, and mouth. We represent emotional expression using Action Units (AU) of the Facial Action Coding System (FACS). Experimental results demonstrate the potential of the proposed method for sign-language image communication.

Emotion Recognition and Expression Method using Bi-Modal Sensor Fusion Algorithm (다중 센서 융합 알고리즘을 이용한 감정인식 및 표현기법)

  • Joo, Jong-Tae; Jang, In-Hun; Yang, Hyun-Chang; Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.13 no.8 / pp.754-759 / 2007
  • In this paper, we propose a bi-modal sensor fusion algorithm: an emotion recognition method able to classify four emotions (happy, sad, angry, surprise) by using a facial image and a speech signal together. We extract feature vectors from the speech signal using acoustic features without language features, and we classify the emotional pattern using a neural network. We also select the mouth, eye, and eyebrow features from the facial image and apply Principal Component Analysis (PCA) to the extracted feature vectors to produce low-dimensional feature vectors. Finally, we propose a method that fuses the facial-image and speech results into a single emotion recognition value.
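
The PCA dimension-reduction step this abstract applies to the facial feature vectors can be sketched with a standard SVD-based projection, as below. The data are random stand-ins and the dimensions are arbitrary; the abstract does not state the feature dimensionality.

```python
import numpy as np

def pca_reduce(X, k):
    """Project row-wise feature vectors onto their top-k principal
    components via SVD of the centered data matrix."""
    Xc = X - X.mean(axis=0)            # center each feature
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T               # k-dimensional feature vectors

rng = np.random.default_rng(0)
feats = rng.normal(size=(40, 12))      # e.g. 40 samples of 12-D mouth/eye/eyebrow features
low = pca_reduce(feats, 3)             # low-dimensional vectors for fusion
```

The reduced vectors would then be fused with the speech-side classification result; since SVD orders singular values in descending order, the first output column always carries the most variance.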