• Title/Summary/Keyword: Non-Verbal Expressions

Search Results: 22

A Study on Non-Verbal Expressions for the Realization of Narrative Visualization -Focusing on a 3D Cat Character, "Puss" (내러티브 시각화 구현을 위한 비언어적 표현 연구-3D 고양이 캐릭터 "Puss"를 중심으로)

  • Lee, Young-Suk;Kim, Sang-Nam
    • Journal of Korea Multimedia Society
    • /
    • v.19 no.3
    • /
    • pp.659-672
    • /
    • 2016
  • In animated films, characters materialize narratives through acting. Narrative is the element that ensures the accurate delivery of lines and emotions. Non-verbal actions must express a wealth of emotions and lines within scenes, and they can also serve as a channel for empathy. This study analyzed the visualization factors of narrative, focusing on the cat character frequently shown in animated films. To this end, the visualization factors of non-verbal actions expressed in the characters' personal and dynamic space were extracted. On this basis, the study suggests a method of expressing character emotion that realizes effective narrative visualization. It is intended to serve as reference material for producing non-verbal communication for 3D characters.

Japanese Political Interviews: The Integration of Conversation Analysis and Facial Expression Analysis

  • Kinoshita, Ken
    • Asian Journal for Public Opinion Research
    • /
    • v.8 no.3
    • /
    • pp.180-196
    • /
    • 2020
  • This paper considers Japanese political interviews, integrating conversation analysis and facial expression analysis. The behavior of political leaders is disclosed by analyzing questions and responses using the turn-taking system of conversation analysis. In addition, audiences who cannot fully grasp verbal expressions alone can understand the psychology of political leaders through analysis of their facial expressions. The integrated analyses promote understanding of the types of facial and verbal expressions politicians use and their effect on public opinion. Politicians have unique techniques for convincing people; if people do not know these techniques and the ways such expressions are used, they become confused, and politics may as a result fall into populism. Avoiding this requires a full understanding of verbal and non-verbal behavior. This paper presents two analyses. The first is a qualitative analysis of Prime Minister Shinzō Abe, showing that discrepancies between his words and his happy facial expressions occur; the result indicates that Abe shows disgusted facial expressions when faced with the same question from an interviewer. The second is a quantitative multiple regression analysis in which the dependent variables are six facial expressions (happy, sad, angry, surprised, scared, and disgusted) and the independent variable is the presence of a face threat. Political interviews, which directly inform audiences, are used as a tool by politicians and play an important role in molding public opinion. Audiences watch political interviews, and these mold support for the party; watching political interviews contributes to the decision to support a political party in a coming election.
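The quantitative analysis described above regresses each facial-expression score on a face-threat indicator. A minimal sketch of that setup in NumPy, assuming synthetic data and a binary threat variable (the paper's actual coding and expression scores are not given here):

```python
import numpy as np

def ols_fit(threat, expression_score):
    """Ordinary least squares of one facial-expression score
    (e.g. 'disgusted') on a binary face-threat indicator.
    Returns [intercept, threat_effect]."""
    threat = np.asarray(threat, dtype=float)
    y = np.asarray(expression_score, dtype=float)
    X = np.column_stack([np.ones_like(threat), threat])  # design matrix
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Hypothetical example: expression scores rise when a face threat is present.
beta = ols_fit([0, 0, 1, 1], [1.0, 1.0, 3.0, 3.0])
# beta ≈ [1.0, 2.0]: baseline score 1.0, +2.0 under threat
```

In the paper's design this fit would be repeated once per facial expression, giving six separate coefficient estimates.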

The Impact of Airport Staff Communication Types and Nonverbal Communication Factors on Passenger Satisfaction after the Pandemic

  • Sunmi LEE
    • The Journal of Industrial Distribution & Business
    • /
    • v.15 no.1
    • /
    • pp.1-8
    • /
    • 2024
  • Purpose: The purpose is to investigate the types of communication between aviation industry workers and passengers under the environmental changes that followed the COVID-19 pandemic. The study analyzes the impact of the verbal and non-verbal communication styles of airport staff, especially those working at airline check-in counters, on passenger satisfaction. Research Design: The design focuses on the impact on passenger satisfaction of the verbal communication styles and non-verbal communication factors of airline check-in counter staff, who are the first point of contact with passengers among airport staff. The survey period for sample collection was July 1 to July 30, 2023, and the study targeted passengers boarding aircraft through Incheon Airport and Gimpo Airport. Result: First, it is important for airport staff to recognize all passengers, especially corporate customers, as customers of the company rather than simply as individuals boarding an airplane. Second, as the importance of non-verbal expression has increased due to COVID-19, physical as well as verbal responses are necessary. Third, it is important to check which language each passenger understands. Conclusions: Because communication through non-verbal expression has become more important since COVID-19, airport employees need to recognize the importance of non-verbal communication. This awareness can serve as a foundation for building trust between airport staff and passengers.

3D Avatar Gesture Representation for Collaborative Virtual Environment Design (CVE 디자인을 위한 3D 아바타의 동작 표현 연구)

  • Lee Kyung-Won;Jang Sun-Hee
    • The Journal of the Korea Contents Association
    • /
    • v.5 no.4
    • /
    • pp.122-132
    • /
    • 2005
  • A CVE (Collaborative Virtual Environment) is a virtually shared area where people who cannot come together physically can discuss, collaborate on, or even dispute certain matters. In CVEs, inhabitants are usually represented by humanoid embodiments, generally referred to as avatars. Most current graphical CVE systems, however, fail to reflect the natural relationship between an avatar's gestures and the conversation taking place. More than 65% of the information exchanged during a person-to-person conversation is carried on the non-verbal band, so providing such communication channels in CVEs is expected to be beneficial. To address this issue, this study proposes a scheme for representing avatar gestures that can support CVE users' communication. At the first level, the study classifies the non-verbal communication forms applicable to avatar gesture design. At the second level, it categorizes body language according to its types of interaction with verbal language. At the third level, it examines gestures with relevant verbal expressions for each body part, from head to feet. Each bodily gesture is analyzed in terms of its representation, its meaning, and the possible expressions that can be used in the gestural situation.


Component Analysis for Constructing an Emotion Ontology (감정 온톨로지의 구축을 위한 구성요소 분석)

  • Yoon, Ae-Sun;Kwon, Hyuk-Chul
    • Korean Journal of Cognitive Science
    • /
    • v.21 no.1
    • /
    • pp.157-175
    • /
    • 2010
  • Understanding a dialogue participant's emotion is as important as decoding the explicit message in human communication. It is well known that non-verbal elements are more suitable than verbal elements for conveying a speaker's emotions. Written texts, however, contain a variety of linguistic units that express emotions. This study analyzes the components needed to construct an emotion ontology that enables numerous applications in human language technology. Most previous work in text-based emotion processing focused on classifying emotions, constructing dictionaries that describe emotion, and retrieving those lexica in texts through keyword spotting and/or syntactic parsing; the emotions retrieved or computed this way did not show good accuracy. This study therefore proposes a more sophisticated component analysis and introduces linguistic factors. (1) Five linguistic types of emotion expression are differentiated in terms of target (verbal/non-verbal) and method (expressive/descriptive/iconic); the correlations among them, and with the non-verbal expressive type, are also determined. This characteristic is expected to guarantee better adaptability of the ontology in multi-modal environments. (2) As emotion-related components, the study proposes 24 emotion types, a 5-scale intensity (-2 to +2), and a 3-scale polarity (positive/negative/neutral), which can describe a variety of emotions in more detail and in a standardized way. (3) Verbal expression-related components, such as 'experiencer', 'description target', 'description method', and 'linguistic features', are introduced to appropriately classify and tag verbal expressions of emotion. (4) By adopting the linguistic tag sets proposed by ISO and TEI and providing a mapping table between this classification of emotions and Plutchik's, the ontology can easily be employed for multilingual processing.
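The emotion-related components in (2) can be captured in a small data structure. A minimal sketch, assuming illustrative field names rather than the paper's actual ontology schema:

```python
from dataclasses import dataclass

POLARITIES = {"positive", "negative", "neutral"}

@dataclass
class EmotionAnnotation:
    """One emotion entry: a type (one of the 24 proposed emotion types),
    a 5-scale intensity (-2..+2), and a 3-scale polarity."""
    emotion_type: str
    intensity: int    # -2 (weakest) .. +2 (strongest)
    polarity: str     # "positive" / "negative" / "neutral"

    def __post_init__(self):
        # Enforce the 5-scale and 3-scale value ranges from the paper.
        if not -2 <= self.intensity <= 2:
            raise ValueError("intensity must lie in [-2, +2]")
        if self.polarity not in POLARITIES:
            raise ValueError(f"unknown polarity: {self.polarity}")

anger = EmotionAnnotation("anger", 2, "negative")
```

Constructing an entry with an out-of-range intensity (e.g. 3) raises a `ValueError`, which keeps annotations within the standardized scales.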


Real-Time Recognition Method of Counting Fingers for Natural User Interface

  • Lee, Doyeob;Shin, Dongkyoo;Shin, Dongil
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.5
    • /
    • pp.2363-2374
    • /
    • 2016
  • Communication occurs through verbal elements, which usually involve language, as well as non-verbal elements such as facial expressions, eye contact, and gestures. Among these non-verbal elements, gestures in particular are symbolic representations of physical, vocal, and emotional behaviors: they can be signals toward a target or expressions of internal psychological processes, rather than simply movements of the body or hands. Gestures with such properties have accordingly been the focus of much research on new interfaces in the NUI/NUX field. In this paper, we propose a method for detecting the hand region and recognizing the number of raised fingers, based on depth information and the geometric features of the hand, for application to an NUI/NUX. The hand region is detected using depth information provided by the Kinect system, and the number of fingers is identified by comparing the distances between the contour and the center of the hand region. The contour is detected using the Suzuki85 algorithm, and fingertips are found at local maxima of this distance, obtained by comparing the distances to the center among three consecutive contour points. The average recognition rate for the number of fingers is 98.6%, and the execution time of the algorithm is 0.065 ms. The method is fast and of low complexity, yet shows a higher recognition rate and faster recognition speed than other methods. As an application example, the paper describes a Secret Door that recognizes a password from the number of fingers held up by a user.
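The fingertip rule described above, a contour point whose distance to the hand center exceeds that of both neighbours, can be sketched without the Kinect pipeline. This is a minimal sketch assuming an already extracted contour and palm center (the depth segmentation and Suzuki85 contour-tracing steps are omitted), plus a hypothetical `min_dist` threshold to reject small bumps:

```python
import math

def count_fingertips(contour, center, min_dist):
    """Count local maxima of contour-to-center distance.

    A point counts as a fingertip when its distance to `center`
    exceeds that of its two neighbours (three consecutive contour
    points) and exceeds the `min_dist` threshold.
    """
    dists = [math.dist(p, center) for p in contour]
    n = len(dists)
    tips = 0
    for i in range(n):
        prev_d = dists[(i - 1) % n]   # wrap around the closed contour
        next_d = dists[(i + 1) % n]
        if dists[i] > prev_d and dists[i] > next_d and dists[i] > min_dist:
            tips += 1
    return tips

# Synthetic closed contour: three "finger" peaks at radius 3, rest at radius 1.
radii = [1, 3, 1, 1, 3, 1, 1, 3, 1]
contour = [(r * math.cos(math.radians(i * 40)),
            r * math.sin(math.radians(i * 40)))
           for i, r in enumerate(radii)]
# count_fingertips(contour, (0, 0), 2.0) -> 3
```

On a real depth image the contour would be far denser, so a production version would also need smoothing or a minimum angular separation between peaks.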

A Study on the Iconicity of Reduplication in Indonesian (인도네시아어 반복법의 도상성에 관한 연구)

  • Jeon Tae Hyeon
    • Proceedings of the KSPS conference
    • /
    • 1996.10a
    • /
    • pp.502-509
    • /
    • 1996
  • This paper surveys the characteristics of reduplication in Bahasa Indonesia (BI). Like Japanese and Korean, BI abounds in reduplicated sound-symbolic expressions, and reduplication is accordingly considered one of its significant morphological processes. Despite the huge number of such expressions in BI, scholarship has hitherto paid little attention to their non-arbitrary characteristics and has not explained their iconicity systematically. This study addresses the need to describe the iconic patterns of reduplication in the grammar of BI. First, tense-iconicity can be shown in verbal reduplicatives. Second, idiomatic reduplicatives can be considered remnants of diachronic reduplicated sound-symbolic expressions. The iconicity of reduplication in BI must be described in a distinct component of its grammar. As one of the structurally simpler languages of the world, BI shows iconic patterns in its grammar that are fundamentally language-specific. At the moment, however, we lack the formal linguistic tools necessary for describing iconicity; this problem could probably be solved by modifying formal conventions for rules and features.


Life-like Facial Expression of Mascot-Type Robot Based on Emotional Boundaries (감정 경계를 이용한 로봇의 생동감 있는 얼굴 표정 구현)

  • Park, Jeong-Woo;Kim, Woo-Hyun;Lee, Won-Hyong;Chung, Myung-Jin
    • The Journal of Korea Robotics Society
    • /
    • v.4 no.4
    • /
    • pp.281-288
    • /
    • 2009
  • Many robots have now evolved to imitate human social skills, making sociable interaction with humans possible. Socially interactive robots require abilities different from those of conventional robots. For instance, human-robot interactions are accompanied by emotion, as human-human interactions are, so robotic emotional expression is very important for humans. This is particularly true of facial expressions, which play an important role in communication among the non-verbal channels. In this paper, we introduce a method of creating lifelike facial expressions in robots using variations of the affect values that constitute the robot's emotions, based on emotional boundaries. The proposed method was examined in experiments with two facial robot simulators.


Aural-visual two-stream based infant cry recognition (Aural-visual two-stream 기반의 아기 울음소리 식별)

  • Bo, Zhao;Lee, Jonguk;Atif, Othmane;Park, Daihee;Chung, Yongwha
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2021.05a
    • /
    • pp.354-357
    • /
    • 2021
  • Infants communicate their feelings and needs to the outside world through non-verbal methods such as crying and diverse facial expressions. Inexperienced parents, however, tend to decode these non-verbal messages incorrectly and take inappropriate actions, which can affect the bond they build with their babies and the cognitive development of the newborns. In this paper, we propose an aural-visual two-stream infant cry recognition system to help parents comprehend the feelings and needs of crying babies. The proposed system first extracts features from the pre-processed audio and video data using the VGGish and 3D-CNN models respectively, fuses the extracted features with a fully connected layer, and finally applies a softmax function to classify the fused features and recognize the corresponding type of cry. The experimental results show that the proposed system achieves an F1-score above 0.92, which is 0.08 and 0.10 higher than the single-stream aural and single-stream visual models, respectively.
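The fusion step described above, concatenating the two feature vectors and classifying with a fully connected layer plus softmax, can be sketched in a few lines of NumPy. The weights `W` and bias `b` below are hypothetical stand-ins for trained parameters, and the VGGish/3D-CNN extractors are assumed to have already produced the feature vectors:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse_and_classify(audio_feat, video_feat, W, b):
    """Late fusion: concatenate the aural and visual feature vectors,
    apply one fully connected layer (W, b), then softmax over the
    cry classes."""
    fused = np.concatenate([audio_feat, video_feat])
    return softmax(W @ fused + b)

# Hypothetical shapes: 3-dim audio + 3-dim video features, 4 cry classes.
audio_feat = np.array([0.2, 0.5, 0.1])
video_feat = np.array([0.4, 0.3, 0.6])
W = np.zeros((4, 6))     # untrained stand-in weights
b = np.zeros(4)
probs = fuse_and_classify(audio_feat, video_feat, W, b)
# With zero weights, every class gets probability 0.25
```

In the real system the fused vector would be much higher-dimensional (VGGish alone emits 128-dim embeddings) and `W`, `b` would be learned end to end; the sketch only shows the data flow.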

Research on Attributes of Postdramatic Theatre in The Lower Depths (2019) by Theater Group "Mul-Kyul" (극단 '물결'의 <밑바닥에서>(2019)에 나타난 포스트드라마 연극 특성 연구)

  • Ra, Kyung-Min
    • Journal of Korea Entertainment Industry Association
    • /
    • v.14 no.3
    • /
    • pp.295-306
    • /
    • 2020
  • In the 21st century, theater has evolved in complex ways. Advanced visual media such as photography and film have brought a crisis to theater's position, and that crisis has led contemporary theater to seek a distinctive strategy by repeatedly reconsidering the formats in which it can be more competitive than other arts. Postdramatic theatre is one of the distinctive characteristics of this trend in contemporary theater. Within this current, the aim of this thesis is to study the phenomenon of postdramatic theatre and its practical application in the recently performed The Lower Depths (2019) by Theater Group "Mul-Kyul". The production foregrounds the body, one of the features of postdramatic theatre: in creating the stage, developing narratives, building characterization, and even highlighting dramatic themes, non-verbal theatrical expression holds a dominant position over verbal expression. By combining various non-verbal elements, such as objects, with body language, it builds a complex scenography and creates metaphorical expression. In this regard, I classify the postdramatic phenomena shown in The Lower Depths (2019) into 'disorganization of the text through scenography' and 'collage of body language and object', and consider their characteristics and meanings.