• Title/Summary/Keyword: 얼굴형 분석 (facial shape analysis)


Dynamic Facial Expression of Fuzzy Modeling Using Probability of Emotion (감정확률을 이용한 동적 얼굴표정의 퍼지 모델링)

  • Kang, Hyo-Seok; Baek, Jae-Ho; Kim, Eun-Tai; Park, Mignon
    • Journal of the Korean Institute of Intelligent Systems, v.19 no.1, pp.1-5, 2009
  • This paper proposes applying a mirror-reflected, 2D emotion-recognition database to a 3D application and models facial expressions with fuzzy logic driven by emotion probabilities. The proposed facial expression function applies fuzzy theory to three basic facial movements. Feature vectors for emotion recognition, obtained from 2D mirror-reflected multi-images, are mapped to the 3D application, yielding a fuzzy, nonlinear facial-expression model that transfers a 2D model to a real model. Average probabilities of the six basic expressions (happiness, sadness, disgust, anger, surprise, and fear) are used, and dynamic facial expressions are generated via fuzzy modeling. The paper compares and analyzes the feature vectors of the real model and a 3D human-like avatar.
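
A minimal sketch of the idea of driving a few basic facial movements with averaged emotion probabilities. The movement basis, weights, and linear blend below are invented for illustration and do not reproduce the paper's fuzzy membership functions.

```python
# Minimal sketch: blend basic facial movements by emotion probabilities.
# The basis values and the blending rule are illustrative assumptions,
# not the fuzzy model used in the paper.
import numpy as np

EMOTIONS = ["happy", "sad", "disgust", "angry", "surprise", "fear"]

# Hypothetical contribution of each emotion to 3 basic movements
# (e.g., brow raise, eye opening, mouth-corner pull).
MOVEMENT_BASIS = np.array([
    [0.1, 0.2, 0.9],     # happy
    [-0.3, -0.2, -0.6],  # sad
    [-0.5, -0.1, -0.3],  # disgust
    [-0.6, 0.1, -0.2],   # angry
    [0.9, 0.8, 0.4],     # surprise
    [0.5, 0.7, -0.4],    # fear
])

def blend_expression(emotion_probs: dict[str, float]) -> np.ndarray:
    """Return a 3-vector of movement intensities from averaged emotion probabilities."""
    p = np.array([emotion_probs.get(e, 0.0) for e in EMOTIONS])
    p = p / p.sum() if p.sum() > 0 else p          # normalize to a probability vector
    return p @ MOVEMENT_BASIS                      # weighted blend of basic movements

print(blend_expression({"happy": 0.6, "surprise": 0.3, "fear": 0.1}))
```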

A study on the avatar modelling of Korean 3D facial features in twenties (한국인 20대 안면의 3차원 형태소에 의한 아바타 모델링)

  • Lee, Mi-Seung; Kim, Chang-Hun
    • Journal of the Korea Computer Graphics Society, v.10 no.1, pp.29-39, 2004
  • This study addresses 3D facial modeling of avatars and characters used as communication tools in cyberspace, generating models with Korean facial features so that avatars with ethnic specificity and national identity can serve today's internet users and adolescents. Faces of 40 men and 40 women in their twenties were scanned and, based on skull and muscle structure, divided into morphological components such as eyes, nose, eyebrows, cheeks, mouth, chin, lower jaw, forehead, and ears, from which reference models were generated. Whether arbitrarily generated 3D facial-component models retain Korean facial characteristics is verified by quantitative measurement. From the components of the eye, nose, lip, and face-outline regions, deformed components can be generated by interpolation between component templates, and faces can be generated as arbitrary combinations of these deformed components.
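
A minimal sketch of component-wise face assembly by interpolation between reference meshes, as the abstract describes. The component names, vertex counts, and linear blend are illustrative assumptions; the paper's morphological templates are not reproduced.

```python
# Sketch: interpolate between two reference meshes of a facial component and
# assemble a face from per-component vertex blocks. All data here is synthetic.
import numpy as np

def interpolate_component(ref_a: np.ndarray, ref_b: np.ndarray, t: float) -> np.ndarray:
    """Linearly blend two (N, 3) vertex arrays of the same facial component."""
    return (1.0 - t) * ref_a + t * ref_b

def assemble_face(components: dict[str, np.ndarray]) -> np.ndarray:
    """Concatenate per-component vertex blocks (eyes, nose, mouth, ...) into one mesh."""
    return np.vstack([components[name] for name in sorted(components)])

# Toy reference data: two nose variants with 4 vertices each.
nose_a = np.zeros((4, 3))
nose_b = np.ones((4, 3))
face = assemble_face({"nose": interpolate_component(nose_a, nose_b, 0.3),
                      "mouth": np.zeros((6, 3))})
print(face.shape)  # (10, 3)
```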


Audio-Based Human-Robot Interaction Technology (오디오 기반 인간로봇 상호작용 기술)

  • Kwak, K.C.; Kim, H.J.; Bae, K.S.; Yoon, H.S.
    • Electronics and Telecommunications Trends, v.22 no.2 s.104, pp.31-37, 2007
  • Human-robot interaction (HRI) technology is a core technology of intelligent service robots: it designs, implements, and evaluates robot systems and interaction environments so that humans and robots can interact cognitively and emotionally through various communication channels such as robot cameras, microphones, and other sensors. This article reviews domestic and international trends in sound localization and speaker recognition among audio-based human-robot interaction technologies, and covers the audio-visual sound localization and text-independent speaker recognition technologies that the ETRI Intelligent Robot Research Division is currently commercializing. It also examines scenarios that combine these technologies with speech recognition, face detection, and face recognition for effective use in home environments.
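
As a generic illustration of the sound-localization idea mentioned above (not the ETRI audio-visual method), the sketch below estimates the time difference of arrival between two microphones by cross-correlation and converts it to a bearing angle; the microphone spacing and sampling rate are arbitrary.

```python
# Sketch: two-microphone TDOA bearing estimate via cross-correlation.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def estimate_bearing(mic_left: np.ndarray, mic_right: np.ndarray,
                     sample_rate: float, mic_distance: float) -> float:
    """Return the estimated arrival angle in degrees (0 = straight ahead)."""
    corr = np.correlate(mic_left, mic_right, mode="full")
    lag = np.argmax(corr) - (len(mic_right) - 1)        # delay in samples
    tdoa = lag / sample_rate                            # delay in seconds
    sin_theta = np.clip(tdoa * SPEED_OF_SOUND / mic_distance, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))

# Toy example: the same burst arrives 5 samples later at the right microphone.
fs, d = 16000, 0.2
signal = np.random.randn(1024)
print(estimate_bearing(signal, np.roll(signal, 5), fs, d))
```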

Photogrammetric Study on Facial Shape Analysis of Female College Students (영상계측 프로그램을 이용한 여대생 얼굴의 유형분석)

  • 김진숙; 이경화
    • Journal of the Korean Society of Clothing and Textiles, v.28 no.11, pp.1470-1481, 2004
  • The purpose of this study was to investigate facial shape and provide quantified data for the domestic apparel and beauty industries. A measurement survey of 278 female college students was conducted: front-view and lateral-view photographs of the subjects were taken with a digital camera, and 69 measurements were obtained through a facial measurement program. Measurement data from 264 subjects were analyzed with descriptive statistics, factor analysis, and cluster analysis. From the 69 measurement items, four key factors were extracted: (1) front face height, (2) side face radial length, (3) front face breadth, and (4) ear height and gnathion radial length. Cluster analysis categorized facial shape into four types: (1) round face, (2) oval face, (3) square face, and (4) heart-shaped face. According to the analysis, the facial shapes of the female college students were heart-shaped (34.8%), round (29.2%), square (23.5%), and oval (12.5%), with the heart-shaped face being the most common.
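
A minimal sketch of the analysis pipeline described above (factor analysis on anthropometric measurements followed by clustering into face types) using scikit-learn. The measurement matrix is synthetic; only the number of factors (4) and clusters (4) follow the abstract.

```python
# Sketch: factor analysis + k-means clustering of facial measurements.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(264, 69))   # 264 subjects x 69 facial measurements (synthetic)

scores = FactorAnalysis(n_components=4).fit_transform(StandardScaler().fit_transform(X))
face_type = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)

# Share of subjects falling into each face-shape cluster.
print(np.bincount(face_type) / len(face_type))
```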

Intelligent Drowsiness Drive Warning System (지능형 졸음 운전 경고 시스템)

  • Joo, Young-Hoon; Kim, Jin-Kyu; Ra, In-Ho
    • Journal of the Korean Institute of Intelligent Systems, v.18 no.2, pp.223-229, 2008
  • In this paper, we propose a real-time vision system that detects drowsy driving based on the driver's level of fatigue. The proposed system prevents traffic accidents by warning against drowsiness and carelessness using face-image analysis and a fuzzy-logic algorithm. To develop the real-time face detection algorithm, the face position and eye regions are found using a fuzzy skin filter and a virtual face model, and the eye-blinking frequency and eye-closure duration are measured from them. We then propose a method that estimates the driver's fatigue level from the measured data using fuzzy logic and decides whether the driver is drowsy. Finally, we show the effectiveness and feasibility of the proposed method through experiments.
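
A minimal sketch of the two fatigue cues named above, blink frequency and eye-closure duration, computed from a per-frame eye-open/closed signal. The crisp threshold at the end is a simplified stand-in for the paper's fuzzy-logic decision and is purely illustrative.

```python
# Sketch: blink rate and closure ratio from a frame-level eye-closed sequence,
# plus a hypothetical drowsiness rule replacing the fuzzy inference stage.
from typing import Sequence

def fatigue_features(eye_closed: Sequence[bool], fps: float) -> tuple[float, float]:
    """Return (blinks per minute, fraction of time eyes closed)."""
    blinks = sum(1 for prev, cur in zip(eye_closed, eye_closed[1:]) if cur and not prev)
    minutes = len(eye_closed) / fps / 60.0
    closure_ratio = sum(eye_closed) / len(eye_closed)
    return blinks / minutes, closure_ratio

def is_drowsy(blink_rate: float, closure_ratio: float) -> bool:
    """Hypothetical decision rule; the thresholds are assumptions."""
    return closure_ratio > 0.3 or blink_rate > 30.0

frames = [False] * 50 + [True] * 30 + [False] * 20   # toy 100-frame sequence
rate, ratio = fatigue_features(frames, fps=10.0)
print(rate, ratio, is_drowsy(rate, ratio))
```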

Intelligent electric wheelchair operated by bio-signals to guarantee the right of movement for the physically handicapped (지체장애인 이동권 보장을 위해 생체신호로 조작하는 지능형 전동휠체어)

  • Min-Cheol Kang; Dong-Kyun Seo; Wook-Hyun Jung; Jin-Won Jung; Hee-Sang Hwang; In-Soo Kim
    • Proceedings of the Korea Information Processing Society Conference, 2023.11a, pp.980-981, 2023
  • This paper develops an electric wheelchair control system based on bio-signal analysis and artificial intelligence. Bio-signals produced by facial muscle movements are analyzed, and an AI model learns the signal patterns to interpret eye and facial movements. On this basis, six functions, namely forward, backward, turn left, turn right, stop, and control, are applied to the electric wheelchair, with the goal of improving ease of use and quality of life for people with physical limitations.
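
A minimal sketch of the control loop implied above: a classifier maps a window of facial bio-signal samples to one of the six wheelchair commands listed in the abstract. The feature extraction and the classifier are placeholders, not the authors' trained model.

```python
# Sketch: map a bio-signal window to a wheelchair command with a linear stand-in model.
import numpy as np

COMMANDS = ["forward", "backward", "turn_left", "turn_right", "stop", "control"]

def extract_features(window: np.ndarray) -> np.ndarray:
    """Toy features per channel: mean absolute value and variance."""
    return np.concatenate([np.mean(np.abs(window), axis=0), np.var(window, axis=0)])

def classify(features: np.ndarray, weights: np.ndarray) -> str:
    """Pick the command with the highest linear score (placeholder for the AI model)."""
    return COMMANDS[int(np.argmax(weights @ features))]

rng = np.random.default_rng(1)
window = rng.normal(size=(200, 4))               # 200 samples x 4 bio-signal channels
weights = rng.normal(size=(len(COMMANDS), 8))    # untrained placeholder weights
print(classify(extract_features(window), weights))
```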

A Attendance-Absence Checking System using the Self-organizing Face Recognition (자기조직형 얼굴 인식에 의한 학생 출결 관리 시스템)

  • Lee, Woo-Beom
    • The Journal of the Korea Contents Association, v.10 no.3, pp.72-79, 2010
  • An EAARS (Electronic Attendance-Absence Recording System) is an important LSS (Learning Support System) for blending on-line learning into the face-to-face classroom. However, a smart-card-based EAARS cannot identify the real owner of the checked card. We therefore develop a CS (client-server) system that manages attendance-absence checking automatically, using a self-organizing neural network for face recognition. The client system creates an ID file by extracting face features; the server system analyzes the ID file sent from the client and identifies the student using the recognized weight file stored in the database. The proposed CS EAARS shows 92% accuracy in a CS environment that includes a face-image database from a real classroom.
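
A minimal sketch of the server-side identification step described above: match an incoming face-feature vector (the client's "ID file") against enrolled students by nearest neighbor. The self-organizing neural network and the actual file format are not reproduced; the enrollment data and threshold are synthetic.

```python
# Sketch: nearest-neighbor student identification from a face-feature vector.
import numpy as np

enrolled = {                                   # student_id -> stored feature vector
    "s2010001": np.array([0.1, 0.8, 0.3]),
    "s2010002": np.array([0.7, 0.2, 0.5]),
}

def identify(query: np.ndarray, threshold: float = 0.5) -> str | None:
    """Return the closest enrolled student, or None if no one is close enough."""
    best_id, best_dist = None, float("inf")
    for student_id, feature in enrolled.items():
        dist = float(np.linalg.norm(query - feature))
        if dist < best_dist:
            best_id, best_dist = student_id, dist
    return best_id if best_dist <= threshold else None

print(identify(np.array([0.12, 0.75, 0.33])))   # -> "s2010001"
```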

Side Face Features' Biometrics for Sasang Constitution (사상체질 판별을 위한 측면 얼굴 이미지에서의 특징 검출)

  • Zhang, Qian; Lee, Ki-Jung; WhangBo, Taeg-Keun
    • Journal of Internet Computing and Services, v.8 no.6, pp.155-167, 2007
  • According to Sasang typology there are four types of human constitution, and Oriental medical doctors frequently prescribe healthcare information and treatment depending on a patient's type. Feature ratios on the human face (Table 1) are among the most important criteria for deciding a patient's type. In this paper, we propose a system that extracts these feature ratios from the side of the face. Acquiring the feature ratios poses two challenges: selecting representative features, and effectively detecting the regions of interest in a profile facial image so that the ratios can be calculated accurately. In our system, an adaptive color model separates the side of the face from the background, and a geometric-model-based method detects the regions of interest. We also analyze the errors caused by image variation in terms of image size and head pose. To verify the efficiency of the proposed system, experiments were conducted on about 173 left-side facial photographs of Koreans. The results show that accuracy increases by 17.99% when the front-face features are combined with the side-face features, compared with using the front-face features only.
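
A minimal sketch of turning detected profile landmarks into feature ratios. The landmark names and the two ratios below are hypothetical stand-ins; the actual Sasang feature ratios are defined in the paper's Table 1 and are not reproduced here.

```python
# Sketch: scale-invariant feature ratios from profile landmark coordinates.
import numpy as np

def distance(p: tuple[float, float], q: tuple[float, float]) -> float:
    return float(np.hypot(p[0] - q[0], p[1] - q[1]))

def feature_ratios(landmarks: dict[str, tuple[float, float]]) -> dict[str, float]:
    """Compute ratios between landmark distances on a profile image."""
    face_height = distance(landmarks["forehead_top"], landmarks["chin"])
    nose_length = distance(landmarks["nose_bridge"], landmarks["nose_tip"])
    jaw_depth = distance(landmarks["ear_tragus"], landmarks["chin"])
    return {
        "nose_to_face": nose_length / face_height,
        "jaw_to_face": jaw_depth / face_height,
    }

landmarks = {"forehead_top": (120, 40), "chin": (130, 300),
             "nose_bridge": (180, 120), "nose_tip": (200, 180),
             "ear_tragus": (80, 160)}
print(feature_ratios(landmarks))
```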


Welfare Interface using Multiple Facial Features Tracking (다중 얼굴 특징 추적을 이용한 복지형 인터페이스)

  • Ju, Jin-Sun; Shin, Yun-Hee; Kim, Eun-Yi
    • Journal of the Institute of Electronics Engineers of Korea SP, v.45 no.1, pp.75-83, 2008
  • We propose a welfare interface based on multiple facial-feature tracking that can efficiently implement various mouse operations. The proposed system consists of five modules: face detection, eye detection, mouth detection, facial-feature tracking, and mouse control. The facial region is first obtained using a skin-color model and connected-component analysis. The eye regions are then localized using a neural-network-based texture classifier that discriminates the facial region into eye and non-eye classes, and the mouth region is localized using an edge detector. Once localized, the eye and mouth regions are tracked continuously by a mean-shift algorithm and template matching, respectively. Based on the tracking results, mouse operations such as movement and clicking are implemented. To assess its validity, the proposed system was applied to a web-browser interface and tested on a group of 25 users. The results show that the system achieves 99% accuracy and processes more than 21 frames/sec on a PC for 320×240 input images, providing user-friendly and convenient computer access in real time.
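
A minimal sketch of the mouse-control stage described above: tracked feature positions are turned into cursor displacements and a click event. The gain, the click rule (mouth-opening ratio), and all thresholds are illustrative assumptions, not the mapping used in the paper.

```python
# Sketch: map per-frame tracked eye/mouth features to a mouse command.
from dataclasses import dataclass

@dataclass
class TrackedFeatures:
    eye_center: tuple[float, float]   # averaged eye position in image coordinates
    mouth_height: float               # current mouth-opening height in pixels
    mouth_rest_height: float          # mouth height measured at calibration

def to_mouse_command(prev: TrackedFeatures, cur: TrackedFeatures,
                     gain: float = 4.0, click_ratio: float = 1.8):
    """Return ((dx, dy), click) for one frame pair."""
    dx = gain * (cur.eye_center[0] - prev.eye_center[0])
    dy = gain * (cur.eye_center[1] - prev.eye_center[1])
    click = cur.mouth_height > click_ratio * cur.mouth_rest_height
    return (dx, dy), click

prev = TrackedFeatures((160.0, 120.0), 12.0, 12.0)
cur = TrackedFeatures((166.0, 118.0), 24.0, 12.0)
print(to_mouse_command(prev, cur))   # ((24.0, -8.0), True)
```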

Knowledge based Text to Facial Sequence Image System for Interaction of Lecturer and Learner in Cyber Universities (가상대학에서 교수자와 학습자간 상호작용을 위한 지식기반형 문자-얼굴동영상 변환 시스템)

  • Kim, Hyoung-Geun; Park, Chul-Ha
    • The KIPS Transactions:PartB, v.15B no.3, pp.179-188, 2008
  • In this paper, a knowledge-based text-to-facial-sequence-image system for interaction between lecturer and learner in cyber universities is studied. The system synthesizes facial image sequences whose lip movements are synchronized to text information based on the grammatical characteristics of Hangul. To implement the system, we propose a method for transforming text into phoneme codes, deformation rules for mouth shapes that change according to the phoneme codes, and a method for synthesizing facial image sequences using these deformation rules. In the proposed method, all Hangul syllables are represented by 10 principal mouth shapes and 78 compound mouth shapes, according to the pronunciation characteristics of the basic consonants and vowels and the characteristics of the articulation rules. To synthesize facial image sequences in real time on a PC, the 88 mouth shapes stored in a database are used instead of synthesizing a mouth shape for every frame. To verify the validity of the proposed method, various facial image sequences were synthesized from text information, and a PC-based system was implemented using the proposed method.
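
A minimal sketch of the lookup-based synthesis described above: text is decomposed into phoneme codes and each code selects a pre-stored mouth-shape index. The tiny mapping table is a toy stand-in for the paper's 10 principal and 78 compound mouth shapes and its Hangul decomposition rules.

```python
# Sketch: phoneme codes -> mouth-shape indices -> per-frame sequence via DB lookup.
from typing import Iterable

# Hypothetical phoneme-code -> mouth-shape index table (a handful of entries only).
MOUTH_SHAPE_DB = {"a": 0, "o": 1, "u": 2, "m": 3, "s": 4, "rest": 9}

def to_phoneme_codes(text: str) -> list[str]:
    """Toy decomposition: keep known codes, map everything else to a rest shape."""
    return [ch if ch in MOUTH_SHAPE_DB else "rest" for ch in text.lower()]

def to_frame_sequence(codes: Iterable[str], frames_per_phoneme: int = 3) -> list[int]:
    """Expand phoneme codes into a per-frame sequence of mouth-shape indices."""
    frames: list[int] = []
    for code in codes:
        frames.extend([MOUTH_SHAPE_DB[code]] * frames_per_phoneme)
    return frames

print(to_frame_sequence(to_phoneme_codes("sam o")))
# -> [4, 4, 4, 0, 0, 0, 3, 3, 3, 9, 9, 9, 1, 1, 1]
```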