• Title/Summary/Keyword: expression recognition

Search Results: 717, Processing Time: 0.022 seconds

A Study on the Expression Recognition of the Experience of the Sinmyung and the Movement in the Korean Dance of College Students Majoring in Musical: A Qualitative Approach (뮤지컬 전공대학생들의 한국 춤 신명체험(神明體驗)과 움직임 표현인식;질적 접근)

  • Jeong, Tae-seon;Ahn, Byoung-Soon
    • The Journal of the Korea Contents Association
    • /
    • v.18 no.12
    • /
    • pp.383-393
    • /
    • 2018
  • The purpose of this paper is to study the elements of Sinmyung and the recognition of body-movement expression in Korean dance among college students majoring in musical theater. The participants were 12 male and female musical theater majors with experience in dance, song, and acting. The program consisted of the experience of Sinmyung (recognition of sound and dance, and of breathing and movement in Korean dance) for 8 hours, twice a week, over four weeks. Since a qualitative approach centers on the discovery of process, we carried out an inductive domain analysis based on observation, in-depth interviews, and student reports. The core of this analysis is a content analysis focused on exploring the recognition of Sinmyung sentiment and bodily expression through sound and breathing. In conclusion, for college students majoring in musical theater, experiencing Sinmyung and recognizing movement expression in Korean dance contribute to improved creative thinking through body perception and to the practical use of image-expression capacity through concentration on sound and breathing. Finally, these results can be linked to the value of body expression and the creative capacities of musical theater students.

Lightweight CNN-based Expression Recognition on Humanoid Robot

  • Zhao, Guangzhe;Yang, Hanting;Tao, Yong;Zhang, Lei;Zhao, Chunxiao
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.3
    • /
    • pp.1188-1203
    • /
    • 2020
  • Human facial expressions contain a great deal of information that can be used to detect complex conditions such as pain and fatigue. After deep learning became the mainstream approach, traditional feature-extraction methods lost their advantage. However, to achieve higher accuracy, researchers continue to stack more neural network layers, which weakens the real-time performance of the models. This paper therefore proposes an expression recognition framework based on densely concatenated convolutional neural networks to balance accuracy and latency, and applies it to humanoid robots. The feature reuse and parameter compression techniques in the framework improve the model's learning ability and greatly reduce its parameter count. Experiments showed that the proposed model can reduce the number of parameters by tens of times at the expense of little accuracy.
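Why dense feature reuse shrinks the parameter count can be illustrated with a back-of-the-envelope calculation. The sketch below uses illustrative channel widths and growth rate, not the paper's actual configuration:

```python
def conv_params(in_ch, out_ch, k=3):
    """Weight count of a k x k convolution (bias ignored)."""
    return in_ch * out_ch * k * k

def plain_stack_params(channels, depth):
    # Conventional stacking: every layer maps `channels` -> `channels`.
    return sum(conv_params(channels, channels) for _ in range(depth))

def dense_block_params(in_ch, growth, depth):
    # Dense concatenation: layer i sees all previously produced feature
    # maps (feature reuse) but emits only `growth` new channels.
    total, ch = 0, in_ch
    for _ in range(depth):
        total += conv_params(ch, growth)
        ch += growth
    return total

plain = plain_stack_params(channels=256, depth=8)
dense = dense_block_params(in_ch=64, growth=32, depth=8)
print(plain, dense, round(plain / dense, 1))
```

With these illustrative widths the dense block needs roughly a tenth of the parameters of the plain stack, which is the kind of "tens of times" reduction the abstract reports.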

An Action Unit co-occurrence constraint 3DCNN based Action Unit recognition approach

  • Jia, Xibin;Li, Weiting;Wang, Yuechen;Hong, SungChan;Su, Xing
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.3
    • /
    • pp.924-942
    • /
    • 2020
  • Facial expressions are diverse and vary among persons due to psychological factors, whereas facial actions are comparatively stable because of the fixed anatomical structure of the face. Therefore, improving action unit recognition will facilitate facial expression recognition and provide a sound basis for mental-state analysis. However, it remains a challenging task, and recognition accuracy is limited, because the muscle movements of the face are tiny and the corresponding facial actions are subtle. Taking into account that muscle movements influence each other when a person expresses emotion, we propose to make full use of the co-occurrence relationships among action units (AUs). Considering the dynamic characteristics of AUs as well, we adopt the 3D Convolutional Neural Network (3DCNN) as the base framework and recognize multiple action units around the brows, nose, and mouth that contribute strongly to emotion expression, using their co-occurrence relationships as constraints. Experiments were conducted on the public CASME dataset and its variant, the CASME2 dataset. The results show that our proposed AU co-occurrence constraint 3DCNN-based AU recognition approach outperforms current approaches and demonstrates the effectiveness of exploiting AU relationships in AU recognition.
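The idea of using AU co-occurrence as a constraint can be sketched, outside the paper's 3DCNN training procedure, as a post-hoc re-scoring of per-AU probabilities. The function, the co-occurrence matrix, and the blending weight `alpha` below are hypothetical illustrations, not the authors' formulation:

```python
def apply_cooccurrence(probs, cooc, alpha=0.3):
    """Blend each AU's independent probability with evidence from the
    other AUs, weighted by pairwise co-occurrence cooc[i][j] in [0, 1]."""
    n = len(probs)
    rescored = []
    for i in range(n):
        # Average support from the other AUs, scaled by how often
        # each of them co-occurs with AU i.
        support = sum(cooc[i][j] * probs[j] for j in range(n) if j != i) / (n - 1)
        p = (1 - alpha) * probs[i] + alpha * support
        rescored.append(min(max(p, 0.0), 1.0))
    return rescored

# Toy matrix: AU0 and AU1 co-occur often; AU2 is largely independent.
cooc = [[1.0, 0.8, 0.1],
        [0.8, 1.0, 0.1],
        [0.1, 0.1, 1.0]]
print(apply_cooccurrence([0.9, 0.4, 0.1], cooc))
```

In the paper the constraint is built into the network itself; this sketch only shows how pairwise AU relationships can pull individual AU scores toward mutually consistent values.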

A Study on Non-Contact Care Robot System through Deep Learning

  • Hyun-Sik Ham;Sae Jun Ko
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.12
    • /
    • pp.33-40
    • /
    • 2023
  • As South Korea enters a super-aging society, the demand for elderly welfare services has been steadily rising, but the current shortage of welfare personnel has emerged as a social issue. To address this challenge, research is actively underway on elderly care robots designed to mitigate the social isolation of the elderly and provide emergency contact capabilities in critical situations. Nonetheless, these functionalities require direct user contact, which is a limitation of conventional elderly care robots. In this paper, we propose a care robot system capable of interacting with users without direct physical contact, overcoming this limitation. The system leverages a commercialized elderly care robot and cameras: we equipped the care robot with an edge device running facial expression recognition and action recognition models, trained and validated on publicly available data. Experimental results demonstrate high accuracy, with facial expression recognition achieving 96.5% and action recognition 90.9%, at inference times of 50ms and 350ms, respectively. These findings affirm that the proposed system offers efficient and accurate facial and action recognition, enabling seamless interaction even in non-contact situations.

Impact Analysis of nonverbal multimodals for recognition of emotion expressed virtual humans (가상 인간의 감정 표현 인식을 위한 비언어적 다중모달 영향 분석)

  • Kim, Jin Ok
    • Journal of Internet Computing and Services
    • /
    • v.13 no.5
    • /
    • pp.9-19
    • /
    • 2012
  • A virtual human used as an HCI element in digital content expresses various emotions across modalities such as facial expression and body posture. However, few studies have considered combinations of such nonverbal multimodal cues in emotion perception. To implement an emotional virtual human, computational engine models must consider how a combination of nonverbal modalities such as facial expression and body posture will be perceived by users. This paper analyzes the impact of nonverbal multimodal cues in the design of an emotion-expressing virtual human. First, the relative impacts of different modalities are analyzed by exploring emotion recognition across modalities for a virtual human. An experiment then evaluates the contribution of congruent facial and postural expressions to recognizing basic emotion categories, as well as the valence and activation dimensions. The impact of incongruent multimodal expressions on the recognition of superposed emotions, which are known to be frequent in everyday life, is also measured. Experimental results show that congruence of the virtual human's facial and postural expressions facilitates perception of emotion categories, that categorical recognition is influenced mainly by the facial modality, and that the postural modality is preferred for judging the level of the activation dimension. These results will be used in the implementation of an animation engine and behavior synchronization for an emotion-expressing virtual human.

A Survey of Objective Measurement of Fatigue Caused by Visual Stimuli (시각자극에 의한 피로도의 객관적 측정을 위한 연구 조사)

  • Kim, Young-Joo;Lee, Eui-Chul;Whang, Min-Cheol;Park, Kang-Ryoung
    • Journal of the Ergonomics Society of Korea
    • /
    • v.30 no.1
    • /
    • pp.195-202
    • /
    • 2011
  • Objective: The aim of this study is to review previous research on objectively measuring fatigue caused by visual stimuli, and to analyze the feasibility of alternative visual fatigue measurement methods based on facial expression recognition and gesture recognition. Background: In most previous research, visual fatigue is measured by survey- or interview-based subjective methods. However, subjective evaluation can be affected by variation in individual feelings or by other kinds of stimuli. To address these problems, visual fatigue measurement methods based on signal and image processing have been widely researched. Method: We categorized previous work into three groups: bio-signal, brainwave, and eye-image based methods. We also analyzed the possibility of adopting facial expression or gesture recognition to measure visual fatigue. Results: Bio-signal and brainwave based methods are problematic because they can be degraded not only by visual stimuli but also by external stimuli affecting other sense organs. In eye-image based methods, relying on a single feature such as blink frequency or pupil size is also problematic, because a single feature is easily confounded by other emotions. Conclusion: A multi-modal measurement method is required that fuses several features extracted from bio-signals and images; alternative methods using facial expression or gesture recognition can also be considered. Application: An objective visual fatigue measurement method can be applied to the quantitative and comparative measurement of visual fatigue for next-generation display devices in terms of human factors.
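The multi-modal fusion the conclusion calls for might look like the following sketch, where z-scored features from different sensors are combined into a single fatigue index. The baselines, signs, and weights are illustrative assumptions for the sketch, not results from the surveyed studies:

```python
def zscore(x, mean, std):
    """Normalize a raw measurement against a calibration baseline."""
    return (x - mean) / std

def fatigue_index(blink_rate, pupil_diameter, beta_power,
                  weights=(0.4, 0.3, 0.3)):
    """Weighted fusion of normalized eye-image and brainwave features.
    Baseline means/stds would come from a per-user calibration session."""
    feats = [
        zscore(blink_rate, mean=15.0, std=5.0),       # blinks/min, assumed to rise with fatigue
        -zscore(pupil_diameter, mean=4.0, std=0.8),   # assumed sign: smaller pupils when fatigued
        -zscore(beta_power, mean=1.0, std=0.3),       # assumed sign: lower beta power when fatigued
    ]
    return sum(w * f for w, f in zip(weights, feats))

print(fatigue_index(15.0, 4.0, 1.0))   # baseline subject
print(fatigue_index(25.0, 3.0, 0.7))   # plausibly fatigued subject
```

Because each feature is normalized before fusion, no single noisy sensor dominates the index, which is the robustness argument the conclusion makes against single-feature methods.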

Improving the Processing Speed and Robustness of Face Detection for a Psychological Robot Application (심리로봇적용을 위한 얼굴 영역 처리 속도 향상 및 강인한 얼굴 검출 방법)

  • Ryu, Jeong Tak;Yang, Jeen Mo;Choi, Young Sook;Park, Se Hyun
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.20 no.2
    • /
    • pp.57-63
    • /
    • 2015
  • Compared to other emotion recognition technologies, facial expression recognition has the merits of being non-contact, non-intrusive, and convenient. To be applied to a psychological robot, the vision system must quickly and accurately extract the face region as a preprocessing step for facial expression recognition. In this paper, we remove the background from the input image using YCbCr skin-color segmentation and use Haar-like features for robust face detection. By removing the background from the input image, we achieved improved processing speed and robust face detection.
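The per-pixel YCbCr skin-color step can be sketched as below. The BT.601 conversion is standard, while the Cb/Cr thresholds are values commonly cited in the skin-detection literature, not necessarily the ones used in the paper:

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB (0-255) -> YCbCr."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b):
    """Classify a pixel as skin if its chroma falls in the usual CbCr box."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return 77.0 <= cb <= 127.0 and 133.0 <= cr <= 173.0

print(is_skin(220, 170, 140))  # a typical skin tone
print(is_skin(40, 90, 200))    # sky blue
```

In the full pipeline, pixels failing this test are zeroed out before the Haar-like detector runs, so the detector searches a much smaller area, which is where the speedup comes from.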

Pregnancy Recognition Signaling for Establishment and Maintenance of Pregnancy

  • Bazer, Fuller W.
    • Korean Journal of Animal Reproduction
    • /
    • v.23 no.4
    • /
    • pp.365-369
    • /
    • 1999
  • Interferon tau (IFNτ), the pregnancy recognition signal in ruminants, suppresses transcription of the estrogen receptor (ER) gene in the endometrial luminal epithelium (LE) and superficial glandular epithelium (sGE) to prevent oxytocin receptor (OTR) expression and pulsatile release of luteolytic prostaglandin F2α (PGF). Interferon regulatory factors one (IRF-1) and two (IRF-2) are transcription factors induced by IFNτ that activate and silence gene expression, respectively. Available results suggest that IFNτ acts directly on LE and sGE during pregnancy to induce sequentially IRF-1 and then IRF-2 gene expression, thereby silencing transcription of the ER and OTR genes, blocking the luteolytic mechanism to maintain a functional corpus luteum (CL), and signaling maternal recognition of pregnancy. The theory of maternal recognition of pregnancy in pigs is that the uterine endometrium of cyclic gilts secretes PGF in an endocrine direction, toward the uterine vasculature, for transport to the CL to exert its luteolytic effect. In pregnant pigs, however, estrogens secreted by the conceptuses are responsible, perhaps in concert with the effects of prolactin and calcium, for exocrine secretion of PGF into the uterine lumen, where it is sequestered to exert biological effects and/or be metabolized, preventing luteolysis.

Image Recognition based on Adaptive Deep Learning (적응적 딥러닝 학습 기반 영상 인식)

  • Kim, Jin-Woo;Rhee, Phill-Kyu
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.18 no.1
    • /
    • pp.113-117
    • /
    • 2018
  • Human emotions are revealed through various factors: words, actions, facial expressions, attire, and so on. However, people know how to hide their feelings, so emotion cannot easily be inferred from any single factor. To address this problem, we focus on behavior and facial expression, which cannot easily be concealed without constant effort and training. In this paper, we propose an algorithm that estimates human emotion by combining two results, gradually learning human behavior and facial expression from small amounts of data through deep learning. Through this algorithm, human emotions can be grasped more comprehensively.
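Combining the two recognizers' outputs could be done with a simple late-fusion scheme such as the sketch below. The class names, logits, and the 0.6 facial weight are hypothetical; the abstract does not specify the combination rule:

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def fuse(face_logits, behavior_logits, w_face=0.6):
    """Late fusion: weighted average of per-modality class distributions."""
    pf = softmax(face_logits)
    pb = softmax(behavior_logits)
    return [w_face * a + (1 - w_face) * b for a, b in zip(pf, pb)]

emotions = ["happy", "sad", "angry"]
fused = fuse([2.0, 0.1, -1.0], [0.5, 1.5, -0.5])
print(emotions[fused.index(max(fused))])
```

Weighting the facial modality more heavily here reflects the common finding that expressions carry the stronger categorical signal, but the weight would need to be tuned on validation data.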

A Study on the System of Facial Expression Recognition for Emotional Information and Communication Technology Teaching (감성ICT 교육을 위한 얼굴감성 인식 시스템에 관한 연구)

  • Song, Eun Jee
    • The Journal of Korean Institute for Practical Engineering Education
    • /
    • v.4 no.2
    • /
    • pp.171-175
    • /
    • 2012
  • Recently, research on emotional ICT (Information and Communication Technology), which recognizes and communicates human emotion through information technology, has been increasing. For instance, there is research on phones and services that perceive users' emotions by detecting voices, facial expressions, and biometric data. In short, emotions that used to be predicted only by humans are now predicted by digital equipment. Among the many ICT research topics, emotion recognition from the face is expected to be the most effective and natural human interface. This paper studies emotional ICT and examines the mechanism of a facial expression recognition system as an example.
