• Title/Summary/Keyword: Emotional Computing


A Study on the Characteristics of AI Fashion based on Emotions -Focus on the User Experience- (감성을 기반으로 하는 AI 패션 특성 연구 -사용자 중심(UX) 관점으로-)

  • Kim, Minsun; Kim, Jinyoung
    • Journal of Fashion Business, v.26 no.1, pp.1-15, 2022
  • Digital transformation has changed human life patterns, and consumption patterns are likewise becoming digitalized. Entering the Industry 4.0 era of the Fourth Industrial Revolution, the fashion industry faces a new paradigm: a shift from the developer-centered approach of the Third Industrial Revolution era to a user-centered one. Storing users' changing life and consumption patterns and analyzing the stored big data are linked to consumer sentiment. It is more valuable to read emotions and then develop and distribute products based on them than to follow the developer-centered processes that previously drove the fashion market. AI (Artificial Intelligence) deep learning algorithms that analyze user-emotion big data, from user experience (UX) to emotion, and use the analyzed data as a source have become possible. By incorporating AI technology, the fashion industry can develop new products and technologies that meet the functional and emotional needs of consumers and can expect a sustainable user experience structure. This study analyzes clear and useful user experiences in the fashion industry to derive the characteristics of AI algorithms that combine emotions with technologies reflecting users' needs, and proposes methods that can be applied in the fashion industry. The purpose of the study is to use information analysis based on big data and AI algorithms so that structures in which users and developers interact can lead to a sustainable ecosystem. Ultimately, it identifies the direction of an optimized fashion industry through user-experienced emotional fashion technology algorithms.

Promising Services Based on AI for Mental Health (정신건강을 위한 인공지능 활용과 유망 서비스)

  • Song, G.H.; Kim, M.K.; Park, A.S.
    • Electronics and Telecommunications Trends, v.35 no.6, pp.12-23, 2020
  • Because of economic polarization and hardship, extreme personalization, and the complexity of social relationships, modern people experience various mental disorders or pathologies. Accordingly, there is an urgent need to prepare more active countermeasures and to support those with mental health difficulties, in order to improve mental health and prevent abnormal pathologies. Artificial intelligence (AI) is expected to improve individuals' mental health through emotional enhancement that goes beyond affective computing. We investigated how AI can be used to prevent and diagnose mental diseases or disorders, support treatment, and manage follow-up. In particular, promising services that can be used in daily life or in medical clinics were identified, and directions for actively realizing these services are suggested.

Tangible Media based on Interactive Technology: iT_Media

  • Yoon, Joong-Sun; Yoh, Myeung-Sook; Lee, Hye-Won
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings, 2004.08a, pp.794-799, 2004
  • The recent paradigm in technology is shifting from object-based to environment-based technology. The issue here is the interaction between humans and the world around them, the natural and artificial "space." Holistic interactions based on "Mom (embodiment)" suggest a good starting point for exploring this issue. Soft engineering, "Mom," holistic interactions, tangible space, ubiquitous computing, the science of emotion, and interactive media are the key concepts of interactive technology. The interactive tangible media "iT_Media" is proposed to explore and synthesize these ideas. The Interactive Technology Initiative (ITI) is an interdisciplinary research group searching for the proper technology and the proper way of implementing it: "interactive technology" or "soft engineering." Some experimental activities conducted by ITI are presented in this session, "Interactive Technology."


Tangible Media based on Interactive Technology: A Tutorial

  • Yoon, Joongsun; Yoh, Myeungsook; Lee, Hyewon
    • International Journal of Fuzzy Logic and Intelligent Systems, v.4 no.2, pp.241-248, 2004
  • The recent paradigm in technology is shifting from object-based to environment-based technology. The issue here is the interaction between humans and the world around them; the world we consider includes natural and artificial "space." Interactive technology, which explores holistic interactions based on "Mom (embodiment)," suggests a good starting point for exploring this issue. Soft engineering, "Mom," holistic interactions, tangible space, ubiquitous computing, the science of emotion, and interactive media are the key concepts of interactive technology. The interactive tangible media "iT_Media" is proposed to explore and synthesize these ideas. The Interactive Technology Initiative (ITI) is an interdisciplinary research group searching for the proper technology and the proper way of implementing it: "interactive technology" or "soft engineering." Some experimental activities conducted by ITI are presented in this paper.

Speech Emotion Recognition Using 2D-CNN with Mel-Frequency Cepstrum Coefficients

  • Eom, Youngsik; Bang, Junseong
    • Journal of Information and Communication Convergence Engineering, v.19 no.3, pp.148-154, 2021
  • With the advent of context-aware computing, many attempts have been made to understand emotions. Among these, Speech Emotion Recognition (SER) is a method of recognizing a speaker's emotions through speech information. SER succeeds by selecting distinctive features and classifying them in an appropriate way. In this paper, the performance of SER using neural network models (e.g., a fully connected network (FCN) and a convolutional neural network (CNN)) with Mel-Frequency Cepstral Coefficients (MFCC) is examined in terms of the accuracy and distribution of emotion recognition. On the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) dataset, after tuning model parameters, a two-dimensional Convolutional Neural Network (2D-CNN) with MFCC showed the best performance, with an average accuracy of 88.54% over five emotions (anger, happiness, calm, fear, and sadness) of men and women. In addition, examining the distribution of emotion recognition accuracies across neural network models shows that the 2D-CNN with MFCC can be expected to achieve an overall accuracy of 75% or more.
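
The MFCC front end described in this abstract can be sketched in plain NumPy. This is a minimal illustration, not the paper's implementation: the frame size, hop length, and filterbank parameters below are common defaults chosen for demonstration, and in a real SER pipeline the resulting frames-by-coefficients matrix would be fed to the 2D-CNN as a single-channel image.

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_ceps=13):
    """Compute a simple MFCC matrix (frames x coefficients) from a 1-D signal."""
    # 1. Frame the signal and apply a Hann window.
    n_frames = 1 + (len(signal) - n_fft) // hop
    window = np.hanning(n_fft)
    frames = np.stack([signal[i*hop:i*hop+n_fft] * window
                       for i in range(n_frames)])
    # 2. Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    # 3. Mel filterbank: triangular filters spaced evenly on the mel scale.
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(mel(0.0), mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * inv_mel(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # 4. Log filterbank energies, then a DCT-II to decorrelate them
    #    into cepstral coefficients.
    log_e = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_mels)))
    return log_e @ dct.T
```

For a one-second signal at 16 kHz with these settings, the output is a 61 x 13 matrix, which a 2D-CNN can treat as a small grayscale image.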

Speech Emotion Recognition with SVM, KNN and DSVM

  • Hadhami Aouani; Yassine Ben Ayed
    • International Journal of Computer Science & Network Security, v.23 no.8, pp.40-48, 2023
  • Speech emotion recognition has become an active research theme in speech processing and in applications based on human-machine interaction. In this work, our system is a two-stage approach: feature extraction and a classification engine. First, two feature sets are investigated: the first extracts only 13 Mel-Frequency Cepstral Coefficients (MFCC) from emotional speech samples, while the second fuses three features, Zero Crossing Rate (ZCR), Teager Energy Operator (TEO), and Harmonics-to-Noise Ratio (HNR), with the MFCC features. Second, we use two classification techniques, Support Vector Machines (SVM) and k-Nearest Neighbor (k-NN), to compare their performance. We also investigate the importance of recent advances in machine learning, including deep kernel learning. A large set of experiments is conducted on the Surrey Audio-Visual Expressed Emotion (SAVEE) dataset for seven emotions. The results show good accuracy compared with previous studies.
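
The classification stage of such a two-stage system can be illustrated with a from-scratch k-NN sketch, assuming the audio features (MFCC, ZCR, TEO, HNR) have already been reduced to fixed-length vectors. The toy 2-D points in the usage note are hypothetical stand-ins, not SAVEE features.

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    # Euclidean distance from the query to every training sample.
    dists = np.linalg.norm(train_X - query, axis=1)
    # Labels of the k closest samples.
    nearest = train_y[np.argsort(dists)[:k]]
    # Majority vote (ties broken by the lowest label value).
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]
```

With two well-separated hypothetical clusters (label 0 near the origin, label 1 near (5, 5)), a query close to either cluster is assigned that cluster's label; in the paper's setting each row of `train_X` would instead be a fused feature vector for one utterance.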

An Exploratory Investigation on Visual Cues for Emotional Indexing of Image (이미지 감정색인을 위한 시각적 요인 분석에 관한 탐색적 연구)

  • Chung, SunYoung; Chung, EunKyung
    • Journal of the Korean Society for Library and Information Science, v.48 no.1, pp.53-73, 2014
  • Given that the emotion-based computing environment has grown recently, it is necessary to focus on emotional access to and use of multimedia resources, including images. This study aims to identify the visual cues for emotion in images. To achieve this, five basic emotions (love, happiness, sadness, fear, and anger) were selected, and twenty participants were interviewed about the visual cues for each emotion. A total of 620 visual cues mentioned by participants were collected from the interviews and coded into five categories and 18 sub-categories. Findings showed that facial expressions, actions/behaviors, and syntactic features were significant in perceiving a specific emotion in an image, and each emotion showed distinctive visual-cue characteristics. Love was highly related to actions and behaviors, and happiness was substantially related to facial expressions. In addition, sadness was perceived primarily through actions and behaviors, while fear was perceived considerably through facial expressions. Anger was highly related to syntactic features such as lines, shapes, and sizes. These findings imply that emotional indexing could be effective when content-based features are considered in combination with concept-based features.

A Study on the Expression of Interaction Space Design for User Experience - Focusing on the Digital Media - (사용자 경험을 위한 인터랙션 공간디자인 표현에 관한 연구 - 디지털 미디어를 중심으로 -)

  • Kim, Seyoung
    • Korean Institute of Interior Design Journal, v.21 no.4, pp.48-56, 2012
  • Digital technology in modern society is entering the era of digital convergence and ubiquitous computing and is playing an important role in overcoming the limitations of time and space. With the rapid development of new media methods, a wide range of digital design forms has become possible. A large part of our lives, such as information sharing, collaboration, production, recreation, work, and various social activities, is realized in the spaces that digital media offer. Seen more broadly, the diverse expressions of digital media affect, in interactive ways, not only the relationships between humans and things, between individuals, and between humans and the environment, but also emotional purification and the educational, cultural, and social realms. This study discusses user-centered design methods for integration into interaction space design. Focusing on digital media, it analyzes user-friendly interface features of spatial environments and the construction and utilization of digital media, and seeks ways to design and apply interactive spaces. Various case studies are explored in which interface spaces are developed, creating virtual reality through a cognitive basis and 3-D interface space. For example, emotional expressions are embedded in spaces for commerce, education, and exhibition, enabling intercommunication through haptic interfaces, with sound and visual effects that change with the movement of people in a space. Considering the relationship between the physical environment and objects, interaction design should provide a human-oriented interface based on social, cultural, and environmental aspects.


An Expansion of Affective Image Access Points Based on Users' Response on Image (이용자 반응 기반 이미지 감정 접근점 확장에 관한 연구)

  • Chung, Eun Kyung
    • Journal of the Korean BIBLIA Society for Library and Information Science, v.25 no.3, pp.101-118, 2014
  • Given the rapidly developing ubiquitous computing environment, it is imperative that users be able to search for and use images based on affective meanings. However, indexing the affective meanings of images has been difficult, since the emotions of an image are substantially subjective and highly abstract. In addition, using low-level image features to index affective meanings has been of limited use for high-level concepts. To facilitate access points for the affective meanings of images, this study uses user-provided responses to images. For the data set, emotion words were collected and cleaned from twenty participants responding to fifteen images, three for each of the basic emotions: love, sadness, fear, anger, and happiness. A total of 399 unique emotion words appeared 1,093 times in the data set. Through co-word analysis and network analysis of the emotion words in users' responses, this study demonstrates expanded word sets for the five basic emotions. The expanded word sets are characterized by adjective expressions and action/behavior expressions.
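
The co-word analysis step described above can be sketched as a co-occurrence count over participants' word lists: two emotion words co-occur when they appear in the same response, and the resulting pair counts form the edge weights of the word network. The response words below are hypothetical examples, not data from the study.

```python
from collections import Counter
from itertools import combinations

def coword_counts(responses):
    """Count how often pairs of emotion words co-occur in the same response."""
    pairs = Counter()
    for words in responses:
        # Each unordered pair of distinct words in one response co-occurs once;
        # sorting gives a canonical key for the pair.
        for a, b in combinations(sorted(set(words)), 2):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical participant responses to one "sad" image.
responses = [
    ["gloomy", "lonely", "dark"],
    ["lonely", "tearful", "gloomy"],
    ["dark", "gloomy"],
]
pairs = coword_counts(responses)
```

Frequent pairs (here, "gloomy"/"lonely" and "dark"/"gloomy" each co-occur twice) would then become the strongest links in the network analysis used to expand each basic emotion's word set.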

Impact Analysis of nonverbal multimodals for recognition of emotion expressed virtual humans (가상 인간의 감정 표현 인식을 위한 비언어적 다중모달 영향 분석)

  • Kim, Jin Ok
    • Journal of Internet Computing and Services, v.13 no.5, pp.9-19, 2012
  • A virtual human used for HCI in digital contents expresses various emotions across modalities such as facial expression and body posture. However, few studies have considered combinations of such nonverbal multimodal cues in emotion perception. To implement an emotional virtual human, computational engine models must consider how a combination of nonverbal modalities, such as facial expression and body posture, will be perceived by users. This paper examines the impact of nonverbal multimodal cues in the design of emotion-expressing virtual humans. First, the relative impacts of different modalities are analyzed by exploring emotion recognition across modalities for a virtual human. An experiment then evaluates the contribution of congruent facial and postural expressions to the recognition of basic emotion categories, as well as the valence and activation dimensions. Measurements are also made of the impact of incongruent multimodal expressions on the recognition of superposed emotions, which are known to be frequent in everyday life. Experimental results show that congruence between the facial and postural expressions of a virtual human facilitates the perception of emotion categories; categorical recognition is influenced by the facial expression modality, while the postural modality is preferred for judging the level of the activation dimension. These results will be used in the implementation of an animation engine system and behavior synchronization for emotion-expressing virtual humans.