• Title/Summary/Keyword: Emotion Transformation


Color Transformation of Images based on Emotion Using Interactive Genetic Algorithm (대화형 유전자 알고리즘을 이용한 감정 기반 영상의 색변환)

  • Woo, Hye-Yoon; Kang, Hang-Bong
    • The KIPS Transactions: PartB / v.17B no.2 / pp.169-176 / 2010
  • This paper proposes a color transformation of images based on the user's preference. Traditional color transformation methods transform only hue, based on existing templates that define ranges of harmonious hues, and leave saturation and intensity unchanged. Users appreciate resulting images in which unnatural hues have been adjusted. Since color is closely related to human emotion, color transformation can enhance interaction with emotion-based content and technologies. We therefore define a color range for each emotion and transform hue, saturation, and intensity together. Because the relationship between color and emotion depends on culture and environment, we propose a color transformation based on the user's preference, so that people are more satisfied with the result, and we adopt an interactive genetic algorithm to learn that preference. We surveyed subjects to analyze their satisfaction with the preference-based transformed images and found that people prefer the transformed images to the originals, and that they are more satisfied with transformations using templates that reflect their preferences than with those that do not.
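
The interactive-GA loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: each gene encodes a color template as (hue shift in degrees, saturation scale, value scale), and the fitness is the user's rating of the image transformed with that gene. All ranges, rates, and the elite size are illustrative assumptions.

```python
import random

def random_gene():
    # (hue shift in degrees, saturation scale, value scale)
    return (random.uniform(0.0, 360.0),
            random.uniform(0.5, 1.5),
            random.uniform(0.5, 1.5))

def crossover(a, b):
    # Uniform crossover: each component comes from either parent.
    return tuple(random.choice(pair) for pair in zip(a, b))

def mutate(gene, rate=0.2):
    h, s, v = gene
    if random.random() < rate:
        h = (h + random.uniform(-30.0, 30.0)) % 360.0
    if random.random() < rate:
        s = max(0.1, s + random.uniform(-0.1, 0.1))
    if random.random() < rate:
        v = max(0.1, v + random.uniform(-0.1, 0.1))
    return (h, s, v)

def evolve(population, ratings, elite=2):
    # `ratings` is the interactive fitness: one user score per gene.
    ranked = [g for _, g in sorted(zip(ratings, population),
                                   key=lambda p: -p[0])]
    next_gen = ranked[:elite]                  # keep the best genes as-is
    while len(next_gen) < len(population):
        p1, p2 = random.sample(ranked[:3], 2)  # parents from the top ranks
        next_gen.append(mutate(crossover(p1, p2)))
    return next_gen
```

In an interactive GA the expensive step is the human rating, so populations are kept small and elitism preserves the user's favorites across generations.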

Color Transformation of Images based on User Preference (사용자 취향을 반영한 영상의 색변환)

  • Woo, Hye-Yoon; Kang, Hang-Bong
    • Journal of KIISE: Software and Applications / v.36 no.12 / pp.986-995 / 2009
  • Color affects people through various combinations of hue, saturation, and value; conversely, different people may feel different emotions from the same color. Introducing these characteristics of color, and of people's emotional responses to it, into emotion-based digital technologies and their content can effectively draw users' interest and immersion. In this paper, we examine how people feel about color and present an image-coloring method that reflects the user's preference. First, we define basic templates that capture the relationship between color and emotion, and then perform image coloring. To reflect the user's preference, we compute weights for hue, saturation, and value through experiments on each subject's preference for those channels; coloring tailored to each subject's taste is obtained by updating these weights. The results of experiments and surveys showed that people were more satisfied with transformations using templates that reflected their preferences than with those that did not.
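
The per-channel weighting scheme above can be sketched in a few lines. This is an illustrative assumption, not the paper's exact formula: `target` stands for an emotion template color, the transform moves each HSV channel toward the template in proportion to its weight, and the update rule raises the weight of channels the user liked and renormalises.

```python
def transform(hsv, target, weights):
    # Move each channel toward the template in proportion to its weight.
    return tuple(c + w * (t - c) for c, t, w in zip(hsv, target, weights))

def update_weights(weights, liked_channels, lr=0.1):
    # Raise the weight of channels the user preferred (by index:
    # 0 = hue, 1 = saturation, 2 = value), then renormalise so the
    # weights remain a convex combination.
    raised = [w + lr if i in liked_channels else w
              for i, w in enumerate(weights)]
    total = sum(raised)
    return [w / total for w in raised]
```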

Voice Frequency Synthesis using VAW-GAN based Amplitude Scaling for Emotion Transformation

  • Kwon, Hye-Jeong; Kim, Min-Jeong; Baek, Ji-Won; Chung, Kyungyong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.2 / pp.713-725 / 2022
  • Artificial intelligence mostly shows no definite change in emotion, which makes it hard to demonstrate empathy in communication with humans. If frequency modification is applied to a neutral emotion, or a different emotional frequency is added to it, artificial intelligence with emotions becomes possible. This study proposes emotion conversion using voice frequency synthesis based on a Generative Adversarial Network (GAN). The proposed method extracts frequencies from the speech of twenty-four actors and actresses: it extracts the voice features of their different emotions, preserves linguistic features, and converts only the emotion. It then generates frequencies with a variational auto-encoding Wasserstein generative adversarial network (VAW-GAN) to produce prosody while preserving linguistic information, which makes it possible to learn speech features in parallel. Finally, it corrects the frequencies with amplitude scaling: using spectral conversion on a logarithmic scale, the result is converted into frequencies that account for human hearing characteristics. The proposed technique thus provides emotion conversion of speech so that artificially generated voices can express emotion.
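
The amplitude-scaling correction step might look like the following hedged sketch: the converted magnitude spectrum is shifted in the log domain so its mean log-energy matches a target emotional profile, in line with the logarithmic scale the abstract mentions. The VAW-GAN conversion itself is not reproduced; `alpha` and the mean-matching rule are illustrative assumptions, not the paper's method.

```python
import math

def amplitude_scale(magnitudes, target_magnitudes, alpha=1.0):
    # Work on a logarithmic scale, in line with human hearing.
    log_src = [math.log(m + 1e-8) for m in magnitudes]
    log_tgt = [math.log(m + 1e-8) for m in target_magnitudes]
    # Shift the source log-spectrum toward the target mean log-energy;
    # alpha controls how strongly the correction is applied.
    shift = alpha * (sum(log_tgt) / len(log_tgt)
                     - sum(log_src) / len(log_src))
    return [math.exp(l + shift) for l in log_src]
```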

Emotion Recognition and Expression System of Robot Based on 2D Facial Image (2D 얼굴 영상을 이용한 로봇의 감정인식 및 표현시스템)

  • Lee, Dong-Hoon; Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.13 no.4 / pp.371-376 / 2007
  • This paper presents an emotion recognition and expression system for an intelligent robot such as a home or service robot. The robot recognizes emotion from a facial image, using the motion and position of many facial features. A tracking algorithm lets the mobile robot follow a moving user, and a face-region detection algorithm removes skin-colored hands and background outside the facial region from the captured image. After normalization, which enlarges or reduces the image according to the distance to the detected face region and rotates it according to the angle of the face, the robot obtains a facial image of fixed size. A multi-feature selection algorithm is implemented so that the robot can recognize the user's emotion. A multilayer perceptron, an Artificial Neural Network (ANN), serves as the pattern recognizer, trained with the Back Propagation (BP) algorithm. The recognized emotion is expressed on a graphic LCD: two coordinates change according to the emotion output by the ANN, and the parameters of the facial elements (eyes, eyebrows, mouth) of an avatar change with those coordinates, allowing the system to express complex human emotions.

RESEARCH ON KANSEI COLOR DESIGN BY PLEASANT SOUND

  • Okamoto, Miyoshi; Mori, Akira
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2000.04a / pp.144-148 / 2000
  • A new paradigm is urgently needed to create textile products that appeal to human Kansei (sensibility) or Gosei; the future of textiles depends heavily on it. To create this new paradigm, Kansei color designs driven by pleasant sound are attempted. The color designs are computed with the Fast Fourier Transform, and good designs are obtained in the form of ring color patterns and bands, although the final judgment depends on human Kansei. This new technology provides good hints for creating a paradigm of goods that appeal to Kansei, and the concept should be developed to a higher level through additional improvements.
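
A minimal sketch of the FFT-based sound-to-color mapping described above: each frequency bin's magnitude becomes a position on a color ring. The DFT is written naively (O(n²)) for clarity, and the bin-to-hue mapping is an illustrative assumption, not the authors' method.

```python
import cmath
import math

def dft_magnitudes(samples):
    # Naive DFT; keep only the first half of the spectrum (real input).
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def magnitudes_to_hues(mags):
    peak = max(mags) or 1.0   # guard against an all-zero spectrum
    # Map each bin to (hue in degrees around the ring, brightness 0..1).
    return [(360.0 * k / len(mags), m / peak)
            for k, m in enumerate(mags)]
```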

A Study on Visual Emotion Classification using Balanced Data Augmentation (균형 잡힌 데이터 증강 기반 영상 감정 분류에 관한 연구)

  • Jeong, Chi Yoon; Kim, Mooseop
    • Journal of Korea Multimedia Society / v.24 no.7 / pp.880-889 / 2021
  • In everyday life, recognizing people's emotions from images is essential, and it is a popular research domain in computer vision. Visual emotion data exhibit a severe class imbalance, with most samples concentrated in a few categories. Existing methods do not consider this imbalance and use accuracy as the performance metric, which is unsuitable for evaluating imbalanced datasets. We therefore propose a visual emotion recognition method that uses balanced data augmentation to address the class imbalance. The proposed method generates a balanced dataset through random over-sampling and image transformation, and uses the Focal loss as the loss function, which mitigates class imbalance by down-weighting well-classified samples. EfficientNet, a state-of-the-art image classification model, is used to recognize visual emotion. We compare the performance of the proposed method with that of conventional methods on a public dataset. The experimental results show that the proposed method increases the F1 score by 40% compared with the method without data augmentation, mitigating class imbalance without loss of classification accuracy.
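
The two ingredients named in the abstract can be sketched as follows: random over-sampling duplicates minority-class samples until all classes match the majority count, and the focal loss down-weights well-classified samples by the factor (1 − p)^γ. The choice γ = 2 is a common default; the full pipeline (image transformations, EfficientNet) is not reproduced here.

```python
import math
import random
from collections import Counter

def random_oversample(samples, labels):
    # Duplicate minority-class samples until every class reaches the
    # majority-class count.
    counts = Counter(labels)
    target = max(counts.values())
    out_s, out_l = list(samples), list(labels)
    for cls, n in counts.items():
        pool = [s for s, l in zip(samples, labels) if l == cls]
        for _ in range(target - n):
            out_s.append(random.choice(pool))
            out_l.append(cls)
    return out_s, out_l

def focal_loss(p_true, gamma=2.0):
    # p_true: the model's predicted probability of the true class.
    # (1 - p)^gamma shrinks the loss of well-classified samples.
    return -((1.0 - p_true) ** gamma) * math.log(p_true)
```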

Incomplete Cholesky Decomposition based Kernel Cross Modal Factor Analysis for Audiovisual Continuous Dimensional Emotion Recognition

  • Li, Xia; Lu, Guanming; Yan, Jingjie; Li, Haibo; Zhang, Zhengyan; Sun, Ning; Xie, Shipeng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.2 / pp.810-831 / 2019
  • Continuous dimensional emotion recognition from audiovisual cues has recently attracted increasing attention in both theory and practice. The large amount of data involved in the recognition process decreases the efficiency of most bimodal information-fusion algorithms. This paper presents a novel algorithm, incomplete Cholesky decomposition based kernel cross-modal factor analysis (ICDKCFA), and employs it for continuous dimensional audiovisual emotion recognition. After the ICDKCFA feature transformation, two basic fusion strategies, feature-level fusion and decision-level fusion, are explored to combine the transformed visual and audio features. Extensive experiments evaluate the ICDKCFA approach on the AVEC 2016 Multimodal Affect Recognition Sub-Challenge dataset. The results show that ICDKCFA is faster than the original kernel cross-modal factor analysis with comparable performance, and that it outperforms other common information-fusion methods such as canonical correlation analysis, kernel canonical correlation analysis, and cross-modal factor analysis based fusion.
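
The speed-up ingredient named in the abstract, pivoted incomplete Cholesky decomposition of a kernel matrix, can be sketched as follows: it builds a low-rank factor G with K ≈ G G^T, stopping early when the residual diagonal falls below `tol`. The KCFA fusion step that follows it is not reproduced here, and the stopping tolerance is an illustrative choice.

```python
def incomplete_cholesky(K, tol=1e-6, max_rank=None):
    # K: symmetric positive semi-definite kernel matrix (list of rows).
    n = len(K)
    if max_rank is None:
        max_rank = n
    d = [K[i][i] for i in range(n)]            # residual diagonal
    G = [[0.0] * max_rank for _ in range(n)]
    pivots = []
    for j in range(max_rank):
        i = max(range(n), key=lambda r: d[r])  # largest residual pivots
        if d[i] < tol:
            return [row[:j] for row in G]      # early exit: rank j suffices
        pivots.append(i)
        G[i][j] = d[i] ** 0.5
        for r in range(n):
            if r not in pivots:
                G[r][j] = (K[r][i] - sum(G[r][t] * G[i][t]
                                         for t in range(j))) / G[i][j]
        for r in range(n):
            d[r] -= G[r][j] ** 2
    return G
```

Because the factor has at most `max_rank` columns, later kernel computations cost O(n·rank) instead of O(n²), which is where the reported speed-up over plain kernel cross-modal factor analysis comes from.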

The Internet Design Framework for Improvement of Users' Positive Emotions

  • Wu, Chunmao; Li, Xuefei; Dong, Cui
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.8 / pp.2720-2735 / 2022
  • This study proposes an internet design framework for improving users' positive emotions when they are in a negative mood. First, the literature review covers the definition of emotion, positive emotional design in internet experiences, and emotion regulation. Second, to construct the framework, a qualitative analysis of 70 collected studies on regulating emotion and stimulating positive emotions is performed, and bibliometrics and statistics are used to summarize the framework and strategies. Third, two internet design cases are presented: (a) an applet service design that improves users' positive emotions, examined against the background of an extreme rainstorm; and (b) an Internet of Things interactive design that improves users' positive emotions, developed in the context of COVID-19. Fourth, the framework and the case-study results are analyzed and discussed. Finally, an internet design framework for improving positive emotions in a negative mood is proposed, comprising the detachment-empathy, external-protection, ability-strengthen, perspective-transformation, and macro-cognitive frameworks. It can help designers generate design ideas accurately and quickly when users are in a negative mood, improve subjective well-being, and contribute to the development of internet experience design.

The Fashion Formative Characteristics and Meanings in the Tech Fatale Types of the Post Digital Generation - Focusing on the Female Models of the Mobile Advertisements - (포스트 디지털 시대의 Tech Fatale 유형에 나타난 패션 조형특성과 내적 의미 - 휴대전화 광고의 여성모델을 중심으로 -)

  • Kim, Ha-Lim; Kwon, Gi-Young; Lee, Shin-Hee
    • Fashion & Textile Research Journal / v.10 no.5 / pp.721-730 / 2008
  • The purpose of this study is to analyze the tech fatale fashion image of female models in mobile-phone advertisements and to identify its fashion characteristics. The tech fatale, emerging as a new cultural code, is a compound of femme fatale and technology, characterized by the femme fatale, post-digital culture, and female leadership. The findings were as follows. The tech fatale types were independence, transformation, and tradition. The independence type was a form of self-expression that appealed to visual image, shown through coated fabrics, denim, space looks, and street fashion, reflecting creativity and the digital generation. The transformation type appealed to sexuality and was shown through lustrous fabrics, exposure, body-conscious silhouettes, and the glam look; its voluptuous beauty represented the pride of the post-digital generation. The tradition type appealed to emotion and was shown through pale colors, simple lines, soft-textured fabrics, and a feminine image. The meanings of the tech fatale were imagination, the public, duality, game, purity, and recurrence. These formative characteristics reflect the mind of a post-digital generation that resists authority and pursues human values such as identity establishment and pure emotion.

Analysis of Sensibility for Color Transformation using Apparel Fabric Sound (직물 소리의 색 변환을 위한 감성분석)

  • 이명은; 최순남; 조길수
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2003.11a / pp.1341-1345 / 2003
  • Based on the sensibilities evoked by fabric sounds, an experiment matching fabric sounds to colors was conducted, converting the sounds into colors in order to propose fabric designs that exploit the combined sensibility of sight and hearing. The sounds of 30 apparel fabrics were recorded and cluster-analyzed, and six sounds were selected, considering the fiber type within each cluster, for subjective sensibility evaluation and the color-conversion experiment. Regardless of fiber composition, fabric sounds were expressed mainly as Blue, Purple Blue, and achromatic colors. By tone, wool was expressed as Gr (grayish), silk as Dk (dark), polyester as Dl (dull), and nylon as Dk (dark), all calm, stable, dignified tones, while cotton and flax were expressed as soft, light tones such as P (pale) and Vp (very pale). Thus, the element that explains the sensibility of fabric sound lies in tone rather than in hue.