• Title/Summary/Keyword: Multimodal Learning


Multimodal Interaction Framework for Collaborative Augmented Reality in Education

  • Asiri, Dalia Mohammed Eissa;Allehaibi, Khalid Hamed;Basori, Ahmad Hoirul
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.7
    • /
    • pp.268-282
    • /
    • 2022
  • One of the most important technologies today is augmented reality (AR), which lets users experience the real world combined with virtual objects. The technology has been applied in many sectors, such as shopping and medicine, and has also entered education, where it is widely used because of its effectiveness: among its many benefits, it arouses students' interest in imaginative concepts that are difficult to understand. Studies have also shown that collaboration between students increases learning opportunities through the exchange of information, an approach known as collaborative learning. Multimodal input creates a distinctive and engaging experience, especially for students, because it increases users' interaction with the technology. This research aims to improve the achievement of 6th graders by designing a framework that integrates collaborative learning with multimodal input (hand gesture and touch), with attention to making the framework effective, fun, and easy to use. The framework was applied to reformulate the genetics and traits lesson of the 6th-grade science textbook (first semester, second lesson) in an interactive manner, using a video created in consultation with science teachers and a puzzle game built from the lesson images; the framework also relied on cooperation between students to solve the questions. The findings showed a significant difference between the experimental group's post-test and pre-test mean scores in the science course at the levels of remembering, understanding, and applying, indicating the success of the framework; in addition, 43 students preferred the framework over traditional instruction.

Efficient Emotion Classification Method Based on Multimodal Approach Using Limited Speech and Text Data (적은 양의 음성 및 텍스트 데이터를 활용한 멀티 모달 기반의 효율적인 감정 분류 기법)

  • Mirr Shin;Youhyun Shin
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.4
    • /
    • pp.174-180
    • /
    • 2024
  • In this paper, we explore an emotion classification method based on multimodal learning using the wav2vec 2.0 and KcELECTRA models. Multimodal learning that leverages both speech and text data is known to significantly improve emotion classification performance over methods that rely on speech data alone. To choose the text processing model, our study compares BERT and its derivative models, known for their strong performance in natural language processing, and selects the one that most effectively extracts features from text data. The results confirm that the KcELECTRA model performs best on emotion classification tasks. Furthermore, experiments on datasets made available by AI-Hub demonstrate that including text data achieves superior performance with less data than using speech data alone. In our experiments, the KcELECTRA model achieved the highest accuracy, 96.57%. This indicates that multimodal learning can offer meaningful performance improvements in complex natural language processing tasks such as emotion classification.
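
As a rough illustration (not the authors' implementation), the core idea of speech-plus-text emotion classification can be sketched as concatenating the two modality embeddings and passing them through a linear classifier with softmax. The dimensions, class count, and random weights below are assumptions for the sketch; in practice the embeddings would come from wav2vec 2.0 and KcELECTRA.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_and_classify(speech_emb, text_emb, weights, bias):
    """Concatenate speech and text embeddings, then apply a linear softmax classifier."""
    fused = np.concatenate([speech_emb, text_emb])   # simple concatenation fusion
    logits = weights @ fused + bias                  # one linear layer
    exp = np.exp(logits - logits.max())              # numerically stable softmax
    return exp / exp.sum()

# Toy dimensions: 768-d speech and 768-d text embeddings (assumed sizes).
speech_emb = rng.standard_normal(768)
text_emb = rng.standard_normal(768)
num_classes = 7                                      # assumed number of emotion categories
W = rng.standard_normal((num_classes, 1536)) * 0.01
b = np.zeros(num_classes)

probs = fuse_and_classify(speech_emb, text_emb, W, b)  # probability per emotion class
```

In a real system the linear layer would be trained jointly with (or on top of) the frozen or fine-tuned encoders; the sketch only shows the fusion step.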

Multimodal Face Biometrics by Using Convolutional Neural Networks

  • Tiong, Leslie Ching Ow;Kim, Seong Tae;Ro, Yong Man
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.2
    • /
    • pp.170-178
    • /
    • 2017
  • Biometric recognition is a challenging topic that demands high recognition accuracy. Most existing methods rely on a single biometric source, and recognition accuracy is affected by variability such as illumination and appearance changes. In this paper, we propose a new multimodal biometric recognition method using convolutional neural networks, focusing on multimodal biometrics from the face and periocular regions. Our experiments demonstrate that the proposed deep learning framework over multimodal facial biometric features helps achieve high recognition performance.

Convergence evaluation method using multisensory and matching painting and music using deep learning based on imaginary soundscape (Imaginary Soundscape 기반의 딥러닝을 활용한 회화와 음악의 매칭 및 다중 감각을 이용한 융합적 평가 방법)

  • Jeong, Hayoung;Kim, Youngjun;Cho, Jundong
    • Journal of the Korea Convergence Society
    • /
    • v.11 no.11
    • /
    • pp.175-182
    • /
    • 2020
  • In this study, we introduce a technique that uses deep learning to match classical music to a painting, in order to design a soundscape that helps the viewer appreciate the painting, and we propose an evaluation index for how well the painting and music match. The evaluation combined a suitability rating on a 5-point Likert scale with an evaluation in the multimodal aspect. For the best deep-learning-based match between painting and music, the mean suitability score of the 13 test participants was 3.74/5.0, and the average cosine similarity in the multimodal evaluation of the 13 participants was 0.79. We expect multimodal evaluation to become an index that can measure a new kind of user experience. This study also aims to improve the experience of multisensory artworks by proposing visual-auditory interaction. The proposed painting-music matching method can be used in multisensory artwork exhibitions and, furthermore, will increase the accessibility of artworks to visually impaired people.
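
The cosine-similarity evaluation reported above can be sketched as follows. The rating vectors here are hypothetical stand-ins; in the study they would encode a participant's responses for the painting and for the matched music over the same set of descriptors.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors: dot product over norms."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical example: one participant's ratings for a painting and for the
# matched music, expressed over the same four descriptive dimensions.
painting_ratings = np.array([4.0, 2.0, 5.0, 1.0])
music_ratings = np.array([5.0, 1.0, 4.0, 2.0])

score = cosine_similarity(painting_ratings, music_ratings)
```

Averaging such scores over all participants would give an aggregate match quality in [0, 1] for non-negative ratings, analogous to the 0.79 reported in the abstract.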

Dialog-based multi-item recommendation using automatic evaluation

  • Euisok Chung;Hyun Woo Kim;Byunghyun Yoo;Ran Han;Jeongmin Yang;Hwa Jeon Song
    • ETRI Journal
    • /
    • v.46 no.2
    • /
    • pp.277-289
    • /
    • 2024
  • In this paper, we describe a neural network-based application that recommends multiple items from dialog context input while simultaneously generating a response sentence. We instantiate multi-item recommendation as a set of clothing recommendations, which requires a multimodal fusion approach that can process both clothing-related text and images. We also examine how a pretrained language model can meet the requirements of the downstream models, and we propose gate-based multimodal fusion and multiprompt learning on top of a pretrained language model. In addition, we propose an automatic evaluation technique to address the one-to-many mapping problem of multi-item recommendation. A Korean fashion-domain multimodal dataset is constructed and tested, and various experimental settings are verified with the automatic evaluation method. The results show that, unlike traditional accuracy evaluation, our proposed method can produce confidence scores for multi-item recommendation results.
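
A minimal sketch of gate-based multimodal fusion, under assumed toy dimensions and random weights (not the paper's architecture): a learned sigmoid gate decides, per feature dimension, how much to take from the text representation versus the image representation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(text_feat, image_feat, W_g, b_g):
    """Gate-based fusion: a sigmoid gate computed from both modalities
    interpolates elementwise between the text and image features."""
    gate = sigmoid(W_g @ np.concatenate([text_feat, image_feat]) + b_g)
    return gate * text_feat + (1.0 - gate) * image_feat

rng = np.random.default_rng(42)
d = 8                                       # toy feature dimension (assumed)
text_feat = rng.standard_normal(d)
image_feat = rng.standard_normal(d)
W_g = rng.standard_normal((d, 2 * d)) * 0.1  # gate weights, trained in practice
b_g = np.zeros(d)

fused = gated_fusion(text_feat, image_feat, W_g, b_g)
```

Because the gate lies in (0, 1), each fused value is an interpolation between the corresponding text and image features, which lets the model weight modalities differently per dimension.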

Driver Drowsiness Detection Model using Image and PPG data Based on Multimodal Deep Learning (이미지와 PPG 데이터를 사용한 멀티모달 딥 러닝 기반의 운전자 졸음 감지 모델)

  • Choi, Hyung-Tak;Back, Moon-Ki;Kang, Jae-Sik;Yoon, Seung-Won;Lee, Kyu-Chul
    • Database Research
    • /
    • v.34 no.3
    • /
    • pp.45-57
    • /
    • 2018
  • Drowsiness while driving is a very dangerous driver state that can lead directly to a major accident. Traditional drowsiness detection methods exist to assess the driver's state, but they are limited in generalized state recognition that reflects individual driver characteristics. In recent years, deep learning-based state recognition studies have been proposed; deep learning has the advantage of extracting features automatically, without human engineering, and deriving a more generalized recognition model. In this study, we propose a state recognition model that is more accurate than existing deep learning methods by learning from images and PPG (photoplethysmography) data simultaneously to assess the driver's state. This paper examines the effect of driver images and PPG data on drowsiness detection, and whether using them together improves the learning model's performance. We confirmed an accuracy improvement of around 3% when using images and PPG together compared with images alone. In addition, the multimodal deep learning-based model that classifies the driver's state into three categories achieved a classification accuracy of 96%.

Multi-modal Representation Learning for Classification of Imported Goods (수입물품의 품목 분류를 위한 멀티모달 표현 학습)

  • Apgil Lee;Keunho Choi;Gunwoo Kim
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.1
    • /
    • pp.203-214
    • /
    • 2023
  • The Korea Customs Service handles its work efficiently through an electronic one-stop customs system, but a more effective method is still needed. All imported and exported goods require an HS Code (Harmonized System Code) for classification and tax-rate application, and item classification, which assigns the HS Code, is a highly difficult task that requires specialized knowledge and experience and is an important part of customs clearance procedures. Therefore, this study develops a deep learning model based on multimodal representation learning that exploits the various kinds of information in an item classification request form, such as the product name, product description, and product image. By classifying and recommending HS Codes, the model is expected to reduce the burden of customs work and, by classifying items promptly, to assist customs procedures.

Building Detection by Convolutional Neural Network with Infrared Image, LiDAR Data and Characteristic Information Fusion (적외선 영상, 라이다 데이터 및 특성정보 융합 기반의 합성곱 인공신경망을 이용한 건물탐지)

  • Cho, Eun Ji;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.38 no.6
    • /
    • pp.635-644
    • /
    • 2020
  • Object recognition, detection, and instance segmentation based on DL (Deep Learning) have been used in various applications, and optical images are mainly used as training data for DL models. The major objective of this paper is object segmentation and building detection by utilizing multimodal datasets, in addition to optical images, to train the Detectron2 model, one of the improved R-CNN (Region-based Convolutional Neural Network) family. For the implementation, infrared aerial images, LiDAR (Light Detection And Ranging) data, edges extracted from the images, and Haralick features representing statistical texture information derived from the LiDAR data were generated. The performance of DL models depends not only on the amount and characteristics of the training data but also on the fusion method, especially for multimodal data. Segmenting objects and detecting buildings with hybrid fusion, a mixture of early fusion and late fusion, improved the building detection rate by 32.65% compared with training on optical images alone. The experiments demonstrated the complementary effect of training on multimodal data with unique characteristics combined with a suitable fusion strategy.
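
The early/late fusion distinction the abstract relies on can be sketched with toy arrays (this is a conceptual illustration with assumed shapes, not the paper's Detectron2 pipeline): early fusion stacks modalities channel-wise before any network sees them, while late fusion combines per-modality prediction maps produced by separate networks; hybrid fusion mixes both.

```python
import numpy as np

rng = np.random.default_rng(1)

def early_fusion(optical, lidar):
    """Early fusion: stack modality channels into one input tensor."""
    return np.concatenate([optical, lidar], axis=0)   # (C1+C2, H, W)

def late_fusion(score_optical, score_lidar):
    """Late fusion: average per-modality prediction maps from separate models."""
    return (score_optical + score_lidar) / 2.0

# Toy patches: 3-channel optical and 2-channel LiDAR-derived rasters (assumed shapes).
optical = rng.random((3, 16, 16))
lidar = rng.random((2, 16, 16))

stacked = early_fusion(optical, lidar)                # input-level combination
combined = late_fusion(rng.random((16, 16)),          # stand-in prediction maps
                       rng.random((16, 16)))
```

The choice between these strategies matters because, as the abstract notes, detection performance depends on the fusion method as much as on the data itself.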

Human body learning system using multimodal and user-centric interfaces (멀티모달 사용자 중심 인터페이스를 적용한 인체 학습 시스템)

  • Kim, Ki-Min;Kim, Jae-Il;Park, Jin-Ah
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2008.02a
    • /
    • pp.85-90
    • /
    • 2008
  • This paper describes a human body learning system using a multimodal user interface. Through our learning system, students can study human anatomy interactively. Existing learning methods rely on one-way materials such as images, text, and video; we propose a new learning system that includes 3D organ surface models, a haptic interface, and a hierarchical data structure of human organs to enable enhanced learning that utilizes sensorimotor skills.

Digital Multimodal Storytelling: Understanding Learner Perceptions (디지털 멀티모달 스토리텔링: 학습자 인식에 대한 이해)

  • Chung, Sun Joo
    • Journal of Convergence for Information Technology
    • /
    • v.11 no.3
    • /
    • pp.174-184
    • /
    • 2021
  • The present study intends to understand how multimodality can be implemented in a content course curriculum and how students perceive multimodal tasks. Twenty-eight students majoring in English engaged in a digital storytelling assignment as part of the content curriculum. Findings from the questionnaire and reflective essays investigating students' perceptions of digital storytelling showed that the assignment helped students engage with the task and feel motivated. Compared with traditional writing tasks, students perceived digital storytelling as more engaging and motivating, but felt that it required more mental effort and caused more anxiety. By supporting students in exploring technology and implementing multimodal aspects in the learning process, digital storytelling can encourage engagement and autonomous learning, producing meaningful works that are purposeful and enjoyable.