• Title/Summary/Keyword: Surprise (놀람)

81 search results

Eye and Mouth Images Based Facial Expressions Recognition Using PCA and Template Matching (PCA와 템플릿 정합을 사용한 눈 및 입 영상 기반 얼굴 표정 인식)

  • Woo, Hyo-Jeong;Lee, Seul-Gi;Kim, Dong-Woo;Ryu, Sung-Pil;Ahn, Jae-Hyeong
    • The Journal of the Korea Contents Association
    • /
    • v.14 no.11
    • /
    • pp.7-15
    • /
    • 2014
  • This paper proposes a facial expression recognition algorithm using PCA and template matching. First, the face image is acquired from an input image using a Haar-like feature mask. The face image is then divided into two parts: an upper image containing the eyes and eyebrows, and a lower image containing the mouth and jaw. Extraction of the facial components begins by obtaining the eye and mouth images. An eigenface is produced by the PCA training process on the learning images, and an eigeneye and an eigenmouth are derived from it. The eye image is obtained by template-matching the upper image against the eigeneye, and the mouth image by template-matching the lower image against the eigenmouth. Expression recognition then uses geometrical properties of the eye and mouth. Simulation results show that the proposed method achieves a higher extraction ratio than previous methods; the extraction ratio for the mouth image in particular reaches 99%. A facial expression recognition system using the proposed method achieves a recognition ratio greater than 80% for three facial expressions: fright, anger, and happiness.
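The eigen-template pipeline the abstract describes (PCA over training patches, then template matching against the resulting eigen-patch) can be sketched as follows. This is a minimal illustration with assumed array shapes, not the authors' implementation:

```python
import numpy as np

def eigen_template(train_patches):
    """Mean patch plus the top principal component (an "eigeneye"/"eigenmouth")
    computed from flattened training patches of shape (n_samples, h*w)."""
    X = np.asarray(train_patches, dtype=float)
    mean = X.mean(axis=0)
    # PCA via SVD of the mean-centered data; Vt[0] is the first principal axis
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean + Vt[0]

def template_match(image, template):
    """Top-left corner where the template best matches the image,
    scored by normalized cross-correlation over every window."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            w = image[y:y + th, x:x + tw]
            w = w - w.mean()
            denom = np.linalg.norm(w) * np.linalg.norm(t)
            score = (w * t).sum() / denom if denom else 0.0
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos
```

In practice the exhaustive double loop would be replaced by a vectorized or FFT-based correlation, but the scoring is the same.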

Characteristics of Noise Radiated at Dental Clinic (치과병원에서 치료시 발생하는 소음특성)

  • Ji, Dong-Ha;Choi, Mi-Suk
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.31 no.12
    • /
    • pp.1123-1128
    • /
    • 2009
  • Noise radiated during medical treatment at a dental clinic affects patients. From this point of view, we investigated the noise characteristics of medical treatment (scaling, tooth elimination) and non-medical operation (idling), evaluated the degree of indoor noise using indices such as PSIL and NRN, and administered a questionnaire about reactions to the noise. The noise evaluation showed that the noise level ranged from 67.7 to 78.3 dB(A), that the frequency content was very high (above 4 kHz), and that respondents were affected by the noise (unpleasantness, hesitation to visit the dental clinic, shivering, astonishment). Analysis by PSIL indicated no problem for conversation between worker and patient, but the noise exceeded the permitted level for working spaces on the NR curve. To relieve patients' fear of noise, clinics should consider offering ear protection, choosing low-noise/low-vibration equipment, and using the masking effect. These measures would greatly help dental clinics improve their service and competitiveness.
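For reference, PSIL is commonly defined as the arithmetic mean of the octave-band sound pressure levels at 500, 1000, and 2000 Hz; a minimal sketch assuming that common definition (not necessarily the exact formulation used in the paper), together with the standard energy summation of decibel levels:

```python
import math

def psil(l500, l1000, l2000):
    """Preferred Speech Interference Level: arithmetic mean of the
    octave-band sound pressure levels (dB) at 500, 1000, and 2000 Hz."""
    return (l500 + l1000 + l2000) / 3.0

def combine_levels(levels_db):
    """Energy-sum several sound pressure levels into one overall level in dB
    (levels add on a 10*log10 energy basis, not arithmetically)."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels_db))
```

For example, two equal 70 dB sources combine to about 73 dB, which is why adding one more noisy handpiece raises the overall level by only ~3 dB.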

Measurement of Emotional Transition Using Physiological Signals of Audiences (관객의 생체신호 분석을 통한 감성 변화)

  • Kim, Wan-Suk;Ham, Jun-Seok;Sohn, Choong-Yeon;Yun, Jae-Sun;Lim, Chan;Ko, Il-Ju
    • The Journal of the Korea Contents Association
    • /
    • v.10 no.8
    • /
    • pp.168-176
    • /
    • 2010
  • Audiences who watch visual media attentively experience many emotional transitions according to the characteristics of the media. Their varied emotional states, such as enjoyment, sadness, and surprise, are often arranged using James Russell's 'circumplex model of affect' from psychology. Among these emotions, the 'uncanny' described by Sigmund Freud occupies a sharp boundary between clearly defined emotional concepts. The uncanny phenomenon is an emotional state that changes from unpleasant to pleasant in an audience watching visual media generally regarded as immoral; because this is a positive response to a social taboo, it needs to be analyzed scientifically. This study therefore organizes Russell's circumplex model of affect and the uncanny phenomenon, establishes a hypothesis about the uncanny state of audiences watching visual media, and analyzes the results of a physiological-signal experiment based on ECG (electrocardiogram) and GSR (galvanic skin response) signals, examining distribution, distance, and movement over time within the circumplex model of affect.
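The circumplex analysis described above (placing emotional states and measuring distance and movement within Russell's model) can be illustrated with a toy sketch; the mapping of physiological features to valence/arousal coordinates is assumed here for illustration and is not the authors' procedure:

```python
import math

def circumplex_position(valence, arousal):
    """Place a (valence, arousal) pair, each in [-1, 1], on Russell's
    circumplex: returns (angle in degrees, distance from the neutral center)."""
    angle = math.degrees(math.atan2(arousal, valence)) % 360.0
    distance = math.hypot(valence, arousal)
    return angle, distance

def transition_path(states):
    """Total distance traveled across a sequence of (valence, arousal) states,
    a simple proxy for the amount of emotional transition."""
    return sum(math.dist(a, b) for a, b in zip(states, states[1:]))
```

A state moving from the unpleasant half-plane (valence < 0) to the pleasant one (valence > 0) over the stimulus, as hypothesized for the uncanny, would show up as a long `transition_path` crossing the 90°/270° axis.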

Exploration of the Emotion for Daily Conversation on Facebook (페이스북 일상담화의 감정 탐색)

  • Hwang, Yoosun
    • The Journal of the Korea Contents Association
    • /
    • v.16 no.2
    • /
    • pp.1-13
    • /
    • 2016
  • The purpose of this study is to explore the emotions expressed on Facebook, where various types of emotions are exchanged. These emotional reactions distinguish Facebook from earlier electronic bulletin boards. According to previous research, computer-mediated communication can deliver visual symbols and non-verbal cues that enrich meaning. Data were collected from 205 Facebook users, comprising a total of 10,308 posts, and a content analysis was conducted to explore the emotions in these posts. The results showed that the most frequent emotion was pleasure. The emotional distributions differed across content types (text, video, photo, and link): for text, curiosity was prominent; for photos, love was more frequent than other emotions; and for video, surprise was salient. Analysis of shared contents also revealed that pleasure and hope were more frequent than other emotions.
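The content analysis above amounts to cross-tabulating emotion labels by content type; a minimal sketch with hypothetical labels:

```python
from collections import Counter, defaultdict

def emotion_by_type(posts):
    """Cross-tabulate emotion labels from (content_type, emotion) pairs.
    Returns the most frequent emotion per content type and the full table."""
    table = defaultdict(Counter)
    for content_type, emotion in posts:
        table[content_type][emotion] += 1
    top = {t: c.most_common(1)[0][0] for t, c in table.items()}
    return top, table
```

Run over coded posts, this reproduces the kind of result reported: a dominant emotion per content type plus the underlying frequency distribution.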

Risk Situation Recognition Using Facial Expression Recognition of Fear and Surprise Expression (공포와 놀람 표정인식을 이용한 위험상황 인지)

  • Kwak, Nae-Jong;Song, Teuk Seob
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.3
    • /
    • pp.523-528
    • /
    • 2015
  • This paper proposes an algorithm for risk situation recognition using facial expressions. The proposed method recognizes the surprise and fear expressions among the various human emotional expressions in order to identify a risk situation. It first extracts the facial region from the input image, then detects the eye and lip regions from the extracted face. Uniform LBP is applied to each region to discriminate the facial expression and recognize the risk situation. The proposed method is evaluated on the Cohn-Kanade database, which contains six basic human facial expressions: smile, sadness, surprise, anger, disgust, and fear. The proposed method produces good facial expression results and discriminates risk situations well.
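A uniform LBP, as applied above to the eye and lip regions, labels each pixel by thresholding its 8 neighbors against the center and keeps a separate histogram bin only for patterns with at most two circular bit transitions (58 uniform patterns plus one shared bin for the rest). A minimal sketch, not the authors' implementation:

```python
import numpy as np

def uniform_lbp_hist(gray):
    """59-bin uniform LBP histogram of a grayscale image (8 neighbors, radius 1)."""
    def transitions(code):
        bits = [(code >> i) & 1 for i in range(8)]
        return sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    # map each 8-bit pattern to a bin: 58 uniform patterns + 1 shared bin
    lut = np.zeros(256, dtype=int)
    nxt = 0
    for code in range(256):
        if transitions(code) <= 2:
            lut[code] = nxt
            nxt += 1
        else:
            lut[code] = 58
    h, w = gray.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = np.zeros(59, dtype=int)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = gray[y, x]
            code = 0
            for k, (dy, dx) in enumerate(offsets):
                if gray[y + dy, x + dx] >= center:
                    code |= 1 << k
            hist[lut[code]] += 1
    return hist
```

The per-region histograms would then be concatenated and fed to a classifier to discriminate fear and surprise from the other expressions.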

Development of Interactive Content Services through an Intelligent IoT Mirror System (지능형 IoT 미러 시스템을 활용한 인터랙티브 콘텐츠 서비스 구현)

  • Jung, Wonseok;Seo, Jeongwook
    • Journal of Advanced Navigation Technology
    • /
    • v.22 no.5
    • /
    • pp.472-477
    • /
    • 2018
  • In this paper, we develop interactive content services for preventing depression in users through an intelligent Internet of Things (IoT) mirror system. For these services, an IoT mirror device measures attention and meditation data from an EEG headset and also measures facial expression data such as "sad", "angry", "disgust", "neutral", "happy", and "surprise", classified by a multi-layer perceptron algorithm through a webcam. It then sends the measured data to a oneM2M-compliant IoT server. Based on the data collected in the IoT server, a machine learning model is built to classify three levels of depression (RED, YELLOW, and GREEN) given by a proposed merge labeling method. Experimental results verified that a k-nearest neighbor (k-NN) model could achieve about 93% accuracy. In addition, according to the classified level, a social network service agent sends a corresponding alert message to family, friends, and social workers. Thus, we were able to provide an interactive content service between users and caregivers.
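The k-NN classification step described above can be sketched as a majority vote over the nearest training samples; the feature layout here ([attention, meditation] in this toy example) is assumed for illustration and is not the paper's exact feature set:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify a feature vector into a depression level (RED/YELLOW/GREEN)
    by majority vote of the k nearest training samples.
    `train` is a list of (feature_list, label) pairs."""
    neighbors = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]
```

A production system would use an optimized library implementation, but the decision rule is the same: the level of the query is the most common level among its nearest labeled neighbors.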

Smart Emotional lighting control method using a wheel interface of the smart watch (스마트워치의 휠 인터페이스를 이용한 스마트 감성 조명 제어)

  • Kim, Bo-Ram;Kim, Dong-Keun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.20 no.8
    • /
    • pp.1503-1510
    • /
    • 2016
  • In this study, we implemented an emotional lighting control system using the wheel interface built into smart-watch devices. Most previous lighting control systems have adopted direct switches or smart-phone applications for expressing individual emotion through lighting. However, for controlling color properties, these systems have complicated user interfaces and are limited in the color spectrum they can present. User-friendly interfaces and functions are therefore needed for controlling properties of the lighting system, such as color, tone, color temperature, brightness, and saturation, in detail with the wheel interface built into the smart watch. The proposed system lets the user select emotional status information to provide emotional lighting; the selectable emotional states, such as "stable", "surprise", "tired", and "angry", number 11 in total. In addition, the system processes user information such as emotional status, local time, and location.
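A wheel interface maps naturally onto the hue circle; one plausible sketch (an assumption for illustration, not the paper's actual mapping) converts wheel angle to an RGB lighting color via HSV, with saturation and brightness available as separate wheel modes for tone control:

```python
import colorsys

def wheel_to_color(angle_deg, saturation=1.0, brightness=1.0):
    """Map a smart-watch wheel angle (degrees) to an 8-bit RGB lighting color.
    The wheel position selects the HSV hue; saturation/brightness set the tone."""
    hue = (angle_deg % 360.0) / 360.0
    r, g, b = colorsys.hsv_to_rgb(hue, saturation, brightness)
    return int(r * 255), int(g * 255), int(b * 255)
```

With this scheme a full turn of the wheel sweeps the whole spectrum continuously, which is exactly the range limitation that fixed switch presets cannot offer.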

Facial Expression Classification Using Deep Convolutional Neural Network (깊은 Convolutional Neural Network를 이용한 얼굴표정 분류 기법)

  • Choi, In-kyu;Song, Hyok;Lee, Sangyong;Yoo, Jisang
    • Journal of Broadcast Engineering
    • /
    • v.22 no.2
    • /
    • pp.162-172
    • /
    • 2017
  • In this paper, we propose facial expression recognition using a CNN (convolutional neural network), one of the deep learning technologies. To overcome the shortcomings of existing facial expression databases, several databases are combined. In the proposed technique, we construct a data set of six facial expressions: 'expressionless', 'happiness', 'sadness', 'anger', 'surprise', and 'disgust'. Pre-processing and data augmentation techniques are applied to improve learning efficiency and classification performance. Starting from an existing CNN structure, the optimal structure that best expresses the features of the six facial expressions is found by adjusting the number of feature maps in the convolutional layers and the number of fully-connected layer nodes. Experimental results show that the proposed scheme achieves the highest classification performance, 96.88%, while taking the least time to pass through the CNN structure compared to other models.
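The building blocks being tuned above (convolutional feature maps feeding a fully-connected softmax layer) can be sketched in plain NumPy; this toy forward pass only illustrates the structure and is far smaller than the network in the paper:

```python
import numpy as np

EXPRESSIONS = ["neutral", "happiness", "sadness", "anger", "surprise", "disgust"]

def conv2d(x, kernels):
    """Valid 2-D convolution of a single-channel image with a kernel bank,
    followed by ReLU. Adding kernels adds feature maps (the knob the
    paper tunes)."""
    kh, kw = kernels.shape[1:]
    h, w = x.shape
    out = np.zeros((len(kernels), h - kh + 1, w - kw + 1))
    for i, k in enumerate(kernels):
        for y in range(out.shape[1]):
            for xx in range(out.shape[2]):
                out[i, y, xx] = (x[y:y + kh, xx:xx + kw] * k).sum()
    return np.maximum(out, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(image, kernels, weights):
    """Flattened feature maps -> fully-connected layer -> 6-way softmax."""
    feats = conv2d(image, kernels).ravel()
    return EXPRESSIONS[int(np.argmax(softmax(weights @ feats)))]
```

The class list and weight shapes here are placeholders; the real model stacks several such conv layers and learns the kernels and weights by backpropagation in a deep learning framework.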

The Influence of Background Color on Perceiving Facial Expression (배경색채가 얼굴 표정에서 전달되는 감성에 미치는 영향)

  • Son, Ho-Won;Choe, Da-Mi;Seok, Hyeon-Jeong
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 2009.05a
    • /
    • pp.51-54
    • /
    • 2009
  • Because people and color are the most central elements in various media, the emotions conveyed by facial expressions and the emotional responses to color stimuli have each been studied in depth in psychology. The purpose of this study is to investigate the emotional response when facial expression and background color interact as emotional stimuli: that is, to conduct an experimental study of how the emotion perceived in a face changes when it is placed against a background color, and to suggest ways to apply the results in media. In the experiment, conducted with 60 participants, six of Ekman's seven universal facial expressions were used as stimuli: anger, fear, disgust, happiness, sadness, and surprise, excluding contempt. For the background colors, colors were drawn from four tone ranges (light, vivid, dull, and dark) for each of the base hues red, yellow, blue, and green, and five achromatic colors were added. Participants rated the emotional expression perceived in a total of 120 images (6 facial expressions × 20 colors), with each participant evaluating 60 stimuli in random order. The measured data were classified by expression and showed that the emotion conveyed by the face differed according to the background color. In particular, based on data from previous studies on emotional responses to color, when the emotion of the color conflicted with that of the facial expression, the emotion conveyed by the face was weakened, and this effect was more pronounced for negative expressions. This phenomenon appeared consistently for both hue and tone, and can be applied in practice in advertising and visual design.


Recognition of Human Facial Expressions using Optical Flow of Feature Regions (얼굴 특징영역상의 광류를 이용한 표정 인식)

  • Lee Mi-Ae;Park Ki-Soo
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.6
    • /
    • pp.570-579
    • /
    • 2005
  • Facial expression recognition technology that has potentialities for applying various fields is appling on the man-machine interface development, human identification test, and restoration of facial expression by virtual model etc. Using sequential facial images, this study proposes a simpler method for detecting human facial expressions such as happiness, anger, surprise, and sadness. Moreover the proposed method can detect the facial expressions in the conditions of the sequential facial images which is not rigid motion. We identify the determinant face and elements of facial expressions and then estimates the feature regions of the elements by using information about color, size, and position. In the next step, the direction patterns of feature regions of each element are determined by using optical flows estimated gradient methods. Using the direction model proposed by this study, we match each direction patterns. The method identifies a facial expression based on the least minimum score of combination values between direction model and pattern matching for presenting each facial expression. In the experiments, this study verifies the validity of the Proposed methods.