• Title/Abstract/Keyword: Multimodal Learning


Multimodal Attention-Based Fusion Model for Context-Aware Emotion Recognition

  • Vo, Minh-Cong;Lee, Guee-Sang
    • International Journal of Contents
    • /
    • Vol. 18, No. 3
    • /
    • pp.11-20
    • /
    • 2022
  • Human emotion recognition is a long-standing topic that continues to attract many researchers. In recent years, there has been increasing interest in exploiting contextual information for emotion recognition. Previous explorations in psychology show that emotional perception is affected not only by facial expressions but also by contextual information from the scene, such as human activities, interactions, and body poses. These explorations initiated a trend in computer vision of exploring the critical role of contexts by treating them as modalities that, together with facial expressions, are used to infer the predicted emotion. However, contextual information has not been fully exploited: the scene emotion created by the surrounding environment can shape how people perceive emotion. Moreover, additive fusion in multimodal training is not practical, because each modality does not contribute equally to the final prediction. The purpose of this paper is to contribute to this growing area of research by exploring the effectiveness of the emotional scene gist in the input image for inferring the emotional state of the primary target. The emotional scene gist includes the emotions, emotional feelings, and actions or events that directly trigger emotional reactions in the input image. We also present an attention-based fusion network that combines multimodal features based on their impact on the target emotional state. We demonstrate the effectiveness of the method through a significant improvement on the EMOTIC dataset.
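
The abstract above describes weighting modalities by learned attention instead of simply adding them. A rough sketch of that idea in PyTorch follows; the feature size, the three modalities, and the 26 emotion classes are assumptions for illustration, not the authors' published architecture.

    # Hypothetical attention-weighted fusion of modality features (not the paper's exact model).
    import torch
    import torch.nn as nn

    class AttentionFusion(nn.Module):
        def __init__(self, feat_dim=256, num_emotions=26):
            super().__init__()
            self.score = nn.Linear(feat_dim, 1)            # one attention score per modality feature
            self.classifier = nn.Linear(feat_dim, num_emotions)

        def forward(self, feats):                          # feats: (batch, num_modalities, feat_dim)
            weights = torch.softmax(self.score(feats), dim=1)
            fused = (weights * feats).sum(dim=1)           # weighted sum instead of plain addition
            return self.classifier(fused)

    # usage: face, body, and scene-context features for a batch of 8 images
    logits = AttentionFusion()(torch.randn(8, 3, 256))     # -> (8, 26) emotion logits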

Multimodal Sentiment Analysis for Investigating User Satisfaction

  • 황교엽;송쯔한;박병권
    • 한국정보시스템학회지:정보시스템연구
    • /
    • Vol. 32, No. 3
    • /
    • pp.1-17
    • /
    • 2023
  • Purpose: The proliferation of data on the internet has created a need for innovative methods to analyze user satisfaction data. Traditional survey methods are becoming inadequate for the increasing volume and diversity of data, and new methods using unstructured internet data are being explored. While numerous comment-based user satisfaction studies have been conducted, only a few have explored user satisfaction through video and audio data. Multimodal sentiment analysis, which integrates multiple modalities, has gained attention due to its high accuracy and broad applicability. Design/methodology/approach: This study uses multimodal sentiment analysis to analyze user satisfaction with iPhone and Samsung products through online videos. The research reveals that the combination model integrating multiple data sources achieved the best performance. Findings: The findings indicate that price is a crucial factor influencing user satisfaction, and users tend to exhibit more positive emotions when they are content with a product's price. The study highlights the importance of considering multiple factors when evaluating user satisfaction and provides valuable insights into the effectiveness of different data sources for sentiment analysis of product reviews.

멀티모달 맥락정보 융합에 기초한 다중 물체 목표 시각적 탐색 이동 (Multi-Object Goal Visual Navigation Based on Multimodal Context Fusion)

  • 최정현;김인철
    • 정보처리학회논문지:소프트웨어 및 데이터공학
    • /
    • Vol. 12, No. 9
    • /
    • pp.407-418
    • /
    • 2023
  • MultiOn (Multi-Object Goal Visual Navigation) is a highly challenging visual navigation task in which an agent must find multiple goal objects placed at arbitrary locations in an unknown indoor environment, visiting them in a predefined order. Existing models for the MultiOn task use only a single context map, such as a visual appearance map or a goal map, for action selection, and therefore cannot exploit a comprehensive view over diverse multimodal context information. To overcome this limitation, this paper proposes MCFMO (Multimodal Context Fusion for MultiOn tasks), a new deep neural network-based agent model for the MultiOn task. The proposed model selects actions from a multimodal context map that contains not only the visual appearance features of the input image but also the semantic features of environmental objects and the goal object features. Moreover, the proposed model effectively fuses these three heterogeneous context features using a point-wise convolutional neural network module. To encourage efficient learning of the navigation policy, the model additionally employs an auxiliary task learning module that predicts whether the goal object has been observed, along with its direction and distance. The superiority of the proposed model is confirmed through various quantitative and qualitative experiments using the Habitat-Matterport3D simulation environment and its scene dataset.
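
As an illustration of the point-wise fusion step described above, the sketch below concatenates three heterogeneous context maps along the channel axis and mixes them with a 1x1 convolution; the channel counts and map size are assumed, and this is not the released MCFMO code.

    # Point-wise (1x1) convolutional fusion of heterogeneous context maps (illustrative only).
    import torch
    import torch.nn as nn

    class PointwiseContextFusion(nn.Module):
        def __init__(self, c_visual=64, c_semantic=16, c_goal=8, c_out=64):
            super().__init__()
            # a 1x1 convolution mixes channels at each map cell without spatial smoothing
            self.fuse = nn.Conv2d(c_visual + c_semantic + c_goal, c_out, kernel_size=1)

        def forward(self, visual, semantic, goal):          # each: (batch, C, H, W) on the same grid
            stacked = torch.cat([visual, semantic, goal], dim=1)
            return torch.relu(self.fuse(stacked))            # fused multimodal context map

    fused = PointwiseContextFusion()(torch.randn(1, 64, 100, 100),
                                     torch.randn(1, 16, 100, 100),
                                     torch.randn(1, 8, 100, 100))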

AR기반 영어학습을 위한 효과적 콘텐츠 구성 방향에 대한 연구 (A study of effective contents construction for AR based English learning)

  • 김영섭;전수진;임상민
    • 정보통신설비학회논문지
    • /
    • Vol. 10, No. 4
    • /
    • pp.143-147
    • /
    • 2011
  • A system using augmented reality can save time and cost, and the technology has been validated in various fields because it resolves the sense of unreality found in purely virtual spaces; augmented reality therefore has a wide range of potential uses. In general, multimodal feedback across the visual, auditory, and tactile senses is a well-known method for enhancing immersion when interacting with virtual objects. By adopting a tangible object, we can provide a touch sensation to users: a 3D model of the same scale overlays the whole area of the tangible object, so the marker area is invisible, which helps present more immersive and natural images to users. Multimodal feedback further strengthens immersion, and in this paper sound feedback is considered. To further improve immersion, augmented reality learning content for children at the initial learning stage is presented. Augmented reality lies in an intermediate stage between the virtual world and the real world, and its adaptability is estimated to be greater than that of virtual reality.


Overcoming the Challenges in the Development and Implementation of Artificial Intelligence in Radiology: A Comprehensive Review of Solutions Beyond Supervised Learning

  • Gil-Sun Hong;Miso Jang;Sunggu Kyung;Kyungjin Cho;Jiheon Jeong;Grace Yoojin Lee;Keewon Shin;Ki Duk Kim;Seung Min Ryu;Joon Beom Seo;Sang Min Lee;Namkug Kim
    • Korean Journal of Radiology
    • /
    • Vol. 24, No. 11
    • /
    • pp.1061-1080
    • /
    • 2023
  • Artificial intelligence (AI) in radiology is a rapidly developing field with several prospective clinical studies demonstrating its benefits in clinical practice. In 2022, the Korean Society of Radiology held a forum to discuss the challenges and drawbacks in AI development and implementation. Various barriers hinder the successful application and widespread adoption of AI in radiology, such as limited annotated data, data privacy and security, data heterogeneity, imbalanced data, model interpretability, overfitting, and integration with clinical workflows. In this review, some of the various possible solutions to these challenges are presented and discussed; these include training with longitudinal and multimodal datasets, dense training with multitask learning and multimodal learning, self-supervised contrastive learning, various image modifications and syntheses using generative models, explainable AI, causal learning, federated learning with large data models, and digital twins.

감정 인지를 위한 음성 및 텍스트 데이터 퓨전: 다중 모달 딥 러닝 접근법 (Speech and Textual Data Fusion for Emotion Detection: A Multimodal Deep Learning Approach)

  • 에드워드 카야디;송미화
    • 한국정보처리학회:학술대회논문집
    • /
    • Korea Information Processing Society 2023 Fall Conference
    • /
    • pp.526-527
    • /
    • 2023
  • Speech emotion recognition (SER) is one of the interesting topics in the machine learning field. Developing a multimodal speech emotion recognition system brings numerous benefits. This paper explains how BERT, as the text recognizer, is fused with a CNN, as the speech recognizer, to build a multimodal SER system.
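
A minimal late-fusion sketch of such a system is shown below, assuming MFCC speech features, the bert-base-uncased checkpoint, and a 7-class emotion set; none of these details are given in the abstract, so treat the dimensions as placeholders.

    # Speech-text emotion classifier: BERT encodes the transcript, a 1-D CNN encodes MFCCs.
    import torch
    import torch.nn as nn
    from transformers import BertModel, BertTokenizer

    class SpeechTextSER(nn.Module):
        def __init__(self, n_mfcc=40, num_emotions=7):
            super().__init__()
            self.bert = BertModel.from_pretrained("bert-base-uncased")
            self.cnn = nn.Sequential(                       # lightweight CNN over the MFCC sequence
                nn.Conv1d(n_mfcc, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten())
            self.head = nn.Linear(768 + 64, num_emotions)   # late fusion by concatenation

        def forward(self, input_ids, attention_mask, mfcc): # mfcc: (batch, n_mfcc, frames)
            text_vec = self.bert(input_ids=input_ids,
                                 attention_mask=attention_mask).pooler_output
            speech_vec = self.cnn(mfcc)
            return self.head(torch.cat([text_vec, speech_vec], dim=1))

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    enc = tokenizer(["I am so happy today"], return_tensors="pt")
    logits = SpeechTextSER()(enc["input_ids"], enc["attention_mask"], torch.randn(1, 40, 300))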

효과적인 다봉 배경 모델링 및 물체 검출 (Efficient Multimodal Background Modeling and Motion Detection)

  • 박대용;변혜란
    • 한국정보과학회논문지:컴퓨팅의 실제 및 레터
    • /
    • Vol. 15, No. 6
    • /
    • pp.459-463
    • /
    • 2009
  • Background modeling and object detection are an important part of real-time video processing. Although many studies have been conducted, stable performance still requires a considerable amount of computation, which makes real-time processing difficult when these techniques are used together with algorithms for high-resolution video processing, object tracking, behavior analysis, or object recognition. This paper proposes an efficient multimodal background modeling and object detection method that approximates the mixture-of-Gaussians model, one of the most widely used background modeling techniques. The validity of the approximation and each of its steps are derived and verified, and experiments show that the proposed algorithm runs more than three times faster than the existing method while maintaining its stability and flexibility.
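
For context, the mixture-of-Gaussians family of background models that the paper approximates is available off the shelf in OpenCV; the snippet below is generic usage (the video path is hypothetical) and is not the paper's faster approximation.

    # Per-pixel mixture-of-Gaussians background subtraction with OpenCV.
    import cv2

    cap = cv2.VideoCapture("traffic.mp4")                  # hypothetical input video
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                    detectShadows=True)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg_mask = subtractor.apply(frame)                  # nonzero pixels are candidate moving objects
    cap.release()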

Real-world multimodal lifelog dataset for human behavior study

  • Chung, Seungeun;Jeong, Chi Yoon;Lim, Jeong Mook;Lim, Jiyoun;Noh, Kyoung Ju;Kim, Gague;Jeong, Hyuntae
    • ETRI Journal
    • /
    • Vol. 44, No. 3
    • /
    • pp.426-437
    • /
    • 2022
  • To understand the multilateral characteristics of human behavior and the physiological markers related to physical, emotional, and environmental states, extensive lifelog data collection in a real-world environment is essential. Here, we propose a data collection method using multimodal mobile sensing and present a long-term dataset from 22 subjects and 616 days of experimental sessions. The dataset contains over 10 000 hours of data, including physiological data such as photoplethysmography, electrodermal activity, and skin temperature, in addition to multivariate behavioral data. Furthermore, it includes 10 372 user labels with emotional states and 590 days of sleep quality data. To demonstrate feasibility, human activity recognition was applied to the sensor data using a convolutional neural network-based deep learning model, achieving 92.78% recognition accuracy. From the activity recognition results, we extracted daily behavior patterns and discovered five representative models by applying spectral clustering. This demonstrates that the dataset can contribute toward understanding human behavior using multimodal data accumulated throughout daily life under natural conditions.
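
A generic sketch of a CNN applied to windowed multimodal sensor data for activity recognition appears below; the nine input channels, 128-sample window, and six activity classes are assumptions for illustration, not the configuration reported in the paper.

    # 1-D CNN activity recognizer over fixed-length windows of multichannel sensor data.
    import torch
    import torch.nn as nn

    class SensorHAR(nn.Module):
        def __init__(self, n_channels=9, n_activities=6):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                nn.Linear(64, n_activities))

        def forward(self, x):                              # x: (batch, n_channels, window)
            return self.net(x)

    logits = SensorHAR()(torch.randn(4, 9, 128))           # four sensor windows -> activity logits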

통합 CNN, LSTM, 및 BERT 모델 기반의 음성 및 텍스트 다중 모달 감정 인식 연구 (Enhancing Multimodal Emotion Recognition in Speech and Text with Integrated CNN, LSTM, and BERT Models)

  • 에드워드 카야디;한스 나타니엘 하디 수실로;송미화
    • 문화기술의 융합
    • /
    • Vol. 10, No. 1
    • /
    • pp.617-623
    • /
    • 2024
  • Given the complex relationship between language and emotion, identifying emotion from our speech is recognized as an important task. This study aims to address this task using feature engineering to identify emotion in spoken language through a multimodal classification task that includes both speech and text data. Two classifiers, a CNN (Convolutional Neural Network) and an LSTM (Long Short-Term Memory), were integrated with a BERT-based pre-trained model and evaluated. The evaluation covers various performance metrics (accuracy, F-score, precision, and recall) across a range of experimental settings. The results show the excellent ability of both models to accurately identify emotion in both text and speech data.
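
The metrics named above (accuracy, F-score, precision, recall) are standard and can be computed with scikit-learn; the labels below are made up purely to show the calls.

    # Computing the reported classification metrics on hypothetical emotion predictions.
    from sklearn.metrics import accuracy_score, precision_recall_fscore_support

    y_true = ["happy", "sad", "angry", "happy", "neutral", "sad"]
    y_pred = ["happy", "sad", "happy", "happy", "neutral", "angry"]

    accuracy = accuracy_score(y_true, y_pred)
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0)
    print(f"accuracy={accuracy:.2f} precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")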

Image classification and captioning model considering a CAM-based disagreement loss

  • Yoon, Yeo Chan;Park, So Young;Park, Soo Myoung;Lim, Heuiseok
    • ETRI Journal
    • /
    • Vol. 42, No. 1
    • /
    • pp.67-77
    • /
    • 2020
  • Image captioning has received significant interest in recent years, and notable results have been achieved. Most previous approaches have focused on generating visual descriptions from images, whereas a few approaches have exploited visual descriptions for image classification. This study demonstrates that a good performance can be achieved for both description generation and image classification through an end-to-end joint learning approach with a loss function, which encourages each task to reach a consensus. When given images and visual descriptions, the proposed model learns a multimodal intermediate embedding, which can represent both the textual and visual characteristics of an object. The performance can be improved for both tasks by sharing the multimodal embedding. Through a novel loss function based on class activation mapping, which localizes the discriminative image region of a model, we achieve a higher score when the captioning and classification model reaches a consensus on the key parts of the object. Using the proposed model, we established a substantially improved performance for each task on the UCSD Birds and Oxford Flowers datasets.