• Title/Summary/Keyword: 감성 표현 영역 추출 (sentiment expression span extraction)

Search results: 10

Extracting multiple sentiment expression areas using BERT+CRF (BERT+CRF를 이용한 다중 감성 표현 영역 추출)

  • Park, Ji-Eun; Lee, Ju-Sang; Ock, Cheol-Young
    • Annual Conference on Human and Language Technology / 2021.10a / pp.571-575 / 2021
  • Sentiment analysis is the process of using computers to analyze subjective information contained in text, such as opinions, sentiments, evaluations, and attitudes. This paper develops a model that identifies where sentiment appears in text and extracts sentiment expression spans at the level of predicate-centered phrases or clauses. The proposed model combines BERT with a classification layer and a CRF layer; the baseline is a plain BERT model. In experiments, the baseline achieves an f1-score of 33.44% and the proposed BERT+CRF model 40.99%, an improvement of 7.55 percentage points.

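
The gain from the CRF layer comes from decoding the label sequence jointly rather than token by token. A minimal stdlib-only sketch of Viterbi decoding over BIO labels, with made-up emission and transition scores (not the paper's trained model):

```python
# Viterbi decoding over BIO labels, as a CRF layer performs on top of
# per-token scores from BERT. All scores here are illustrative, not the
# paper's trained weights; "EXP" marks a sentiment expression span.
LABELS = ["O", "B-EXP", "I-EXP"]

# A CRF can forbid invalid moves (e.g. O -> I-EXP, or starting a sentence
# with I-EXP) via large penalties, which per-token classification cannot.
START = {"O": 0.0, "B-EXP": 0.0, "I-EXP": -100.0}
TRANS = {
    ("O", "O"): 0.5, ("O", "B-EXP"): 0.3, ("O", "I-EXP"): -100.0,
    ("B-EXP", "O"): 0.2, ("B-EXP", "B-EXP"): 0.1, ("B-EXP", "I-EXP"): 0.6,
    ("I-EXP", "O"): 0.3, ("I-EXP", "B-EXP"): 0.1, ("I-EXP", "I-EXP"): 0.5,
}

def viterbi(emissions):
    """emissions: one {label: score} dict per token; returns best label path."""
    best = {l: (START[l] + emissions[0][l], [l]) for l in LABELS}
    for em in emissions[1:]:
        new_best = {}
        for cur in LABELS:
            prev, (score, path) = max(
                ((p, best[p]) for p in LABELS),
                key=lambda x: x[1][0] + TRANS[(x[0], cur)],
            )
            new_best[cur] = (score + TRANS[(prev, cur)] + em[cur], path + [cur])
        best = new_best
    return max(best.values(), key=lambda x: x[0])[1]

# Per-token argmax would pick the ill-formed sequence O, I-EXP, O here;
# the CRF transition scores repair it into a well-formed B/I span.
emissions = [
    {"O": 1.0, "B-EXP": 0.2, "I-EXP": 0.9},
    {"O": 0.1, "B-EXP": 0.3, "I-EXP": 1.5},
    {"O": 1.0, "B-EXP": 0.1, "I-EXP": 0.2},
]
```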
A study on Connection between Creativity Development and Emotional Quotient in Cartoon Learning (만화학습에 있어서 창의성개발과 감성지능의 관계에 관한 연구)

  • Choi, Mi-Ran; Cho, Kwang-Soo
    • Science of Emotion and Sensibility / v.15 no.2 / pp.183-192 / 2012
  • This study examines the correlation between creativity and emotional intelligence in cartoon expression learning through literature review and correlation analysis. Each sub-factor of a self-reported emotional intelligence evaluation was analyzed against an expert creativity evaluation of cartoon expressions by elementary school learners. Prior research has shown that creativity and emotional intelligence are correlated, and it is a common perception that higher creativity implies higher emotional intelligence. However, the correlation analysis in this study showed that, while creativity and emotional intelligence are related in cartoon expression learning, not all factors were correlated. Furthermore, learners in the upper and lower groups of the emotional evaluation did not show correspondingly similar results in the creativity evaluation. The findings suggest that, for the emotional intelligence and creativity factors, identifying an appropriate emotional intelligence development method is a way to enhance creativity. Therefore, to develop creativity through cartoon expression learning, systematic research is needed to extract the relevant emotional intelligence factors.

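
The correlation analysis between creativity and emotional-intelligence sub-factors can be sketched with a plain Pearson coefficient; the scores below are hypothetical, not the study's data:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical sub-factor scores for five learners (illustrative only):
creativity = [72, 85, 64, 90, 78]          # expert creativity evaluation
emotion_self_eval = [68, 80, 70, 88, 75]   # self-reported EQ sub-factor
r = pearson(creativity, emotion_self_eval)
```

A coefficient near +1 or -1 indicates a strong linear relation; values near 0 suggest the factors vary independently, which is the distinction the study draws per sub-factor.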
The Influence of Background Color on Perceiving Facial Expression (배경색채가 얼굴 표정에서 전달되는 감성에 미치는 영향)

  • Son, Ho-Won; Choe, Da-Mi; Seok, Hyeon-Jeong
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2009.05a / pp.51-54 / 2009
  • Because people and color are the most central elements used across media, emotional responses to facial expressions and to color stimuli have each been studied in depth in psychology. The purpose of this study is to investigate emotional responses when facial expressions and colors interact as emotional stimuli. That is, we conducted an experimental study of how the emotion conveyed by a facial expression changes when the face is placed against a background color, in order to suggest ways this can be used in media. In an experiment with 60 participants, six of Ekman's seven universal facial expressions were used as stimuli: anger, fear, disgust, happiness, sadness, and surprise, excluding contempt. For the background colors, colors were sampled from four tone regions (light, vivid, dull, and dark) for each of the hues red, yellow, blue, and green, and five achromatic colors were added. Participants rated the emotional expression conveyed in a total of 120 stimuli (6 facial expressions × 20 colors), with each participant rating 60 stimuli in random order. The measured data were grouped by expression and showed that the emotional expression perceived in a face differs with the background color. In particular, drawing on prior findings on emotional responses to color, when the emotion of the color conflicted with that of the facial expression, the emotion conveyed by the face was transmitted more weakly, and this was more pronounced for negative expressions. The effect appeared consistently across both hue and tone, and can be applied in advertising and visual design practice.

The Development of Image Caption Generating Software for Auditory Disabled (청각장애인을 위한 동영상 이미지캡션 생성 소프트웨어 개발)

  • Lim, Kyung-Ho; Yoon, Joon-Sung
    • Proceedings of the HCI Society of Korea Conference / 2007.02a / pp.1069-1074 / 2007
  • When hearing-impaired users access video content such as films, broadcasts, and animation on a PC, accessibility barriers beyond visual reception arise depending on the degree of impairment. Content and technologies such as sign-language animation and lip-reading education have been developed to improve information accessibility for the hearing-impaired, but they have limitations. This paper therefore extracts artistic expression methods from contemporary new-media artworks as design elements and develops technology for producing original content that harmonizes technology and sensibility, thereby deriving ways to improve hearing-impaired users' access to video content on a PC. Concretely, it proposes a methodology for maximizing the usability of video content for the hearing-impaired through the development of an interface for visually converting auditory effects and of image-caption generating software. The paper presents a step-by-step methodology: first, analysis of hearing-impaired users' access to video content; second, selective analysis of media artworks and extraction of their constituent elements; third, interface and content production. In the third step, the image-caption generating software is developed and image-caption content in the form of bitmap icons is generated. The developed software is an interface for converting everyday linguistic elements, grounded in usability, and auditory elements extracted from artworks into visual elements. Technically, this work establishes an original interface-development environment that mitigates the hearing-impaired's barriers to various web content and broadens its applications; it organically extends established engineering technology into the new domain of content development through an interdisciplinary approach; and by converting text and audio into images and visual effects, it suggests multifaceted cross-media applications and stimulates content-visualization technology. Beyond the limited domain of accessibility for the hearing-impaired, it can also develop into a new methodology suited to the global era through the creation of, access to, and production of supplementary video content that transcends language barriers between countries.

On the Implementation of a Facial Animation Using the Emotional Expression Techniques (FAES : 감성 표현 기법을 이용한 얼굴 애니메이션 구현)

  • Kim, Sang-Kil; Min, Yong-Sik
    • The Journal of the Korea Contents Association / v.5 no.2 / pp.147-155 / 2005
  • In this paper, we present FAES (a Facial Animation with Emotion and Speech), a system for speech-driven facial animation with emotions. We animate cartoon faces not only from input speech but also from emotions derived from the speech signal, and the system ensures smooth transitions and accurate representation in the animation. After collecting training data, we built a database using an SVM (Support Vector Machine) to recognize four categories of emotion: neutral, dislike, fear, and surprise, enabling speech-driven animation with emotions. The system was trained on young Korean speakers and focused only on Korean emotional facial expressions. Experimental results show that more emotional areas are expanded and that the accuracies of emotion recognition and continuous speech recognition increase by 7% and 5%, respectively, compared with the previous method.

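
The paper classifies emotions with an SVM over speech features; as a stdlib-only stand-in, a nearest-centroid classifier over toy (pitch, energy) features sketches the same four-way decision. Both the features and their values are illustrative assumptions, not the paper's:

```python
def centroid(points):
    """Mean point of a list of equal-length feature tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(samples):
    """samples: {emotion: [(pitch, energy), ...]} -> {emotion: centroid}."""
    return {emo: centroid(pts) for emo, pts in samples.items()}

def classify(model, feat):
    """Assign the emotion whose centroid is nearest in squared distance."""
    return min(model, key=lambda emo: sum((a - b) ** 2
                                          for a, b in zip(model[emo], feat)))

# Toy (pitch Hz, normalized energy) samples per emotion -- invented values.
samples = {
    "neutral":  [(120, 0.30), (125, 0.35)],
    "dislike":  [(110, 0.60), (115, 0.65)],
    "fear":     [(180, 0.50), (175, 0.55)],
    "surprise": [(200, 0.80), (195, 0.85)],
}
model = train(samples)
```

An SVM instead learns maximum-margin boundaries between these classes, which handles overlapping feature distributions better than raw centroid distance.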
A study on unstructured text mining algorithm through R programming based on data dictionary (Data Dictionary 기반의 R Programming을 통한 비정형 Text Mining Algorithm 연구)

  • Lee, Jong Hwa; Lee, Hyun-Kyu
    • Journal of Korea Society of Industrial Information Systems / v.20 no.2 / pp.113-124 / 2015
  • Unlike structured data, which are gathered and stored in a predefined structure, unstructured text data, mostly written in natural language, have found wider application recently with the emergence of Web 2.0. Text mining is one of the most important big-data analysis techniques because it extracts meaningful information from text, whose volume has grown rapidly and in which human emotion is expressed directly. In this study, we used R, an open-source statistical analysis environment, and studied algorithm implementations for analyses such as frequency analysis, cluster analysis, word clouds, and social network analysis. In particular, to focus our research scope, we used a keyword-extraction method based on a data dictionary. Applying it to real cases, we found R very useful as statistical analysis software that runs on a variety of operating systems and interfaces with other languages.
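
Data-dictionary-based keyword extraction, as described, restricts frequency analysis to a predefined term list. A minimal sketch in Python (the paper itself uses R; the dictionary terms and review texts here are invented):

```python
from collections import Counter

# A toy data dictionary: only terms listed here are counted, so noise
# words in the unstructured text are ignored. Illustrative terms only.
DICTIONARY = {"service", "price", "quality", "delivery"}

def keyword_frequency(texts):
    """Count dictionary terms across a list of raw text strings."""
    counts = Counter()
    for text in texts:
        for token in text.lower().split():
            token = token.strip(".,!?")
            if token in DICTIONARY:
                counts[token] += 1
    return counts

reviews = [
    "Great service and fast delivery.",
    "The price was fair but the service was slow.",
]
freq = keyword_frequency(reviews)
```

The resulting counts feed directly into the downstream steps the paper lists: frequency tables, word clouds, or co-occurrence input for cluster and network analysis.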

The analysis of physical features and affective words on facial types of Korean females in twenties (얼굴의 물리적 특징 분석 및 얼굴 관련 감성 어휘 분석 - 20대 한국인 여성 얼굴을 대상으로 -)

  • 박수진; 한재현; 정찬섭
    • Korean Journal of Cognitive Science / v.13 no.3 / pp.1-10 / 2002
  • This study analyzed the physical attributes of faces and the affective words associated with them. To analyze physical attributes inside the face, 36 facial features were selected, most of them lengths or distances. To analyze facial contour, 14 points were selected and the distances from the nose tip to each point were measured. Because face size differs across individuals, all feature values except ratios were normalized by facial vertical or horizontal length. A principal component analysis (PCA) was performed and four major factors were extracted: a 'facial contour' component, a 'vertical eye length' component, a 'facial width' component, and an 'eyebrow region' component. We posited a five-dimensional face space using the PCA factor scores and selected representative faces evenly within this space. Separately, affective words describing faces were collected from magazines and through surveys. Factor analysis and multidimensional scaling suggested two orthogonal dimensions for facial affect: babyish-mature and sharp-soft.

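
The normalization step the abstract describes, dividing absolute measurements by facial vertical or horizontal length so different face sizes become comparable, can be sketched as follows; the feature names are illustrative, not the study's 36 actual measurements:

```python
def normalize_features(features, face_height, face_width):
    """Divide vertical features by face height and horizontal features by
    face width; ratio features are left unchanged, as in the study.
    Feature names here are hypothetical examples."""
    by_height = {"eye_to_chin", "nose_length"}
    by_width = {"eye_distance", "mouth_width"}
    out = {}
    for name, value in features.items():
        if name in by_height:
            out[name] = value / face_height
        elif name in by_width:
            out[name] = value / face_width
        else:
            out[name] = value  # already a ratio
    return out

# Hypothetical measurements for one face, in pixels.
face = {"eye_to_chin": 95.0, "nose_length": 48.0,
        "eye_distance": 62.0, "mouth_width": 50.0}
norm = normalize_features(face, face_height=190.0, face_width=140.0)
```

After this step, the normalized feature vectors are what the study feeds into PCA to extract the four major components.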
A Study on the Color Grouping System to Fashion (섬유컬러 그루핑 체계에 관한 연구)

  • 이재정; 정재우
    • Archives of Design Research / v.17 no.3 / pp.27-38 / 2004
  • It is important to support designers' decision-making about colour, which is often based on personal judgment rather than logical dialogue and can lead to confusion when communicating with others. To address this problem and to gain productivity, we propose a colour grouping method; in other words, the purpose of this study is to improve communication and productivity among designers. The grouping was based on and inspired by the studies of Kobayashi, Hideaki Chijiiwa, Allis Westgate, and Martha Gill. It rests on the 'tones' of each group, as these seem to best reflect a designer's sensibility about chosen colours. The groups are named Bright, Pastel, Deep, and Neutral, with the following general concepts: Bright, high-purity primary colours; Pastel, primary colours mixed with white; Deep, primary colours mixed with gray or black; Neutral, colours that fit none of the above. Each colour group was mapped onto Si-Hwa Jung's colour charts and colour prism to visualize the relationships between the groups. The four groups and the colours within them are then broken down into smaller groups to build colour palettes. This supports using colours in groups as well as in crossover coordination, and suggests new ways for designers to use colour.

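
The four tone groups can be approximated as thresholds on HSV saturation and value; the thresholds below are my own illustrative choices, not definitions from the paper:

```python
import colorsys

def colour_group(r, g, b):
    """Classify an RGB colour (components as 0-1 floats) into one of the
    four groups. Threshold values are illustrative assumptions."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    if s < 0.15:                 # little hue content: achromatic-leaning
        return "Neutral"
    if s >= 0.6 and v >= 0.8:    # high-purity primaries
        return "Bright"
    if v >= 0.8:                 # saturated hue diluted with white
        return "Pastel"
    return "Deep"                # hue darkened with gray or black
```

For example, pure red lands in Bright, a washed-out pink in Pastel, a dark red in Deep, and any gray in Neutral, matching the group concepts above.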
Specifying the Characteristics of Tangible User Interface: centered on the Science Museum Installation (실물형 인터렉션 디자인 특성 분석: 과학관 체험 전시물을 대상으로)

  • Cho, Myung Eun; Oh, Myung Won; Kim, Mi Jeong
    • Science of Emotion and Sensibility / v.15 no.4 / pp.553-564 / 2012
  • Tangible user interfaces have been developed in the area of Human-Computer Interaction over the last decades, and their application domains have recently extended into product design and interactive art. Tangible user interfaces combine digital information with physical objects or environments, providing tangible and intuitive interaction through input and output devices, often combined with Augmented Reality. This research developed a design guideline for tangible user interfaces based on key properties defined in five representative studies: tangible interaction, intuitiveness and convenience, expressive representation, context-aware and spatial interaction, and social interaction. Using the guideline, which emphasizes user interaction, this research evaluated installations in a science museum in terms of the applied characteristics of tangible user interfaces. The 15 installations selected for evaluation aim to educate visitors in science by emphasizing manipulation and experience of their interfaces. By input device, they fall into four types. Type 3 installations, which use body motion for interaction, scored highest on TUI properties, with the items for context-aware and spatial interaction rated highly; these properties have recently been emphasized as extended properties of tangible user interfaces. Most installations in the science museum are equipped with buttons and joysticks for physical manipulation, so multimodal interfaces using visual, aural, tactile, and other senses need to be developed to provide more innovative interaction. Further, more installations need to be reconfigurable to support embodied interaction between users and the interactive space. The proposed design guideline can specify the characteristics of tangible user interfaces, so this research can serve as a basis for developing and deploying installations with more TUI properties in the future.

A Study of 'Emotion Trigger' by Text Mining Techniques (텍스트 마이닝을 이용한 감정 유발 요인 'Emotion Trigger'에 관한 연구)

  • An, Juyoung; Bae, Junghwan; Han, Namgi; Song, Min
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.69-92 / 2015
  • The explosion of social media data has led to applying text-mining techniques to analyze big social media data more rigorously. Even as social media text-analysis algorithms improve, previous approaches have limitations. In sentiment analysis of social media written in Korean, there are two typical approaches. One is the linguistic approach using machine learning, the most common approach; some studies add grammatical factors to the feature sets used to train classification models. The other adopts semantic analysis, but it is mainly applied to English texts. To overcome these limitations, this study applies the Word2Vec algorithm, an extension of neural network algorithms, to handle the more extensive semantic features underestimated in existing sentiment analysis. The result of applying Word2Vec is compared with that of co-occurrence analysis to identify the difference between the two approaches. The results show that Word2Vec extracts about three times as many related words expressing emotion about the keyword as co-occurrence analysis does. The difference comes from Word2Vec's vectorization of semantic features, so Word2Vec can capture hidden related words that traditional analysis misses. In addition, Part-of-Speech (POS) tagging for Korean is used to detect adjectives as 'emotion words'. The emotion words extracted from the text are converted into word vectors by Word2Vec to find related words, among which nouns are selected because each may have a causal relationship with the emotion word in the sentence. The process of extracting these trigger factors of emotion words is named 'Emotion Trigger' in this study. As a case study, the datasets were collected by searching on three keywords that carry rich public emotion and opinion: professor, prosecutor, and doctor. Preliminary data collection was conducted to select secondary keywords for gathering the data used in the actual analysis: professor (sexual assault, misappropriation of research money, recruitment irregularities, polifessor); doctor (Shin Hae-chul, Sky Hospital, drinking and plastic surgery, rebate); prosecutor (lewd behavior, sponsor). The text data comprise about 100,000 documents (professor: 25,720; doctor: 35,110; prosecutor: 43,225), gathered from news, blogs, and Twitter to reflect various levels of public emotion. Gephi (http://gephi.github.io) was used for visualization, and all text-processing and analysis programs were written in Java. The contributions of this study are as follows: first, different approaches to sentiment analysis are integrated to overcome the limitations of existing approaches; second, finding Emotion Triggers can detect hidden connections to public emotion that existing methods cannot; finally, the approach could be generalized regardless of the type of text data. A limitation is that it is hard to claim that a word extracted by the Emotion Trigger process has a significantly causal relationship with the emotion word in a sentence. Future work will clarify the causal relationship between emotion words and the words extracted by Emotion Trigger by comparing them with manually tagged relationships. Furthermore, the Twitter data used in Emotion Trigger have a number of distinct features not dealt with in this study, which will be considered in further work.
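
Word2Vec needs a trained corpus, but the co-occurrence baseline the study compares against is easy to sketch: count nouns appearing within a token window of an emotion adjective. The POS-tagged sentences below are illustrative stand-ins for a real Korean POS tagger's output:

```python
from collections import Counter

def emotion_triggers(sentences, emotion_words, window=3):
    """Count nouns co-occurring within `window` tokens of an emotion word.
    `sentences` are lists of (token, pos) pairs; the POS tags here are
    hypothetical stand-ins for a Korean POS tagger's output."""
    counts = Counter()
    for sent in sentences:
        for i, (tok, pos) in enumerate(sent):
            if tok in emotion_words:
                lo, hi = max(0, i - window), i + window + 1
                for t, p in sent[lo:hi]:
                    if p == "NOUN" and t not in emotion_words:
                        counts[t] += 1
    return counts

# Toy tagged sentences (invented, English for readability):
sentences = [
    [("the", "DET"), ("professor", "NOUN"), ("was", "VERB"), ("angry", "ADJ")],
    [("angry", "ADJ"), ("students", "NOUN"), ("left", "VERB"), ("class", "NOUN")],
]
triggers = emotion_triggers(sentences, {"angry"})
```

Word2Vec replaces the fixed window with learned vector similarity, which is how the study finds roughly three times as many related words as this baseline.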