• Title/Summary/Keyword: multimodal system

Search Results: 226

Multimodal layer surveillance map based on anomaly detection using multi-agents for smart city security

  • Shin, Hochul; Na, Ki-In; Chang, Jiho; Uhm, Taeyoung
    • ETRI Journal / v.44 no.2 / pp.183-193 / 2022
  • Smart cities are expected to provide residents with convenience via various agents such as CCTV, delivery robots, security robots, and unmanned shuttles. Environmental data collected by these agents can be used for various purposes, including advertising and security monitoring. This study suggests a surveillance map data framework for efficient and integrated representation of multimodal data from multiple agents. The suggested surveillance map is a multilayered global information grid integrated from the multimodal data of each agent. To validate it, we collected surveillance map data for four months and analyzed the behavior patterns of humans and vehicles and the distribution changes of elevation and temperature. Moreover, we present an anomaly detection algorithm based on the surveillance map for security services. A two-stage anomaly detection algorithm for unusual situations was developed, which detected abnormal situations such as unusual crowds and pedestrians, vehicle movement, unusual objects, and temperature changes. Because the surveillance map enables efficient and integrated processing of large multimodal data from multiple agents, the suggested data framework can be used for various applications in the smart city.
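
The abstract does not give implementation details, but the idea of a multilayered grid updated by multiple agents, with a simple two-stage anomaly check, can be sketched as follows. All names (e.g., `SurveillanceMap`, the z-score threshold) are illustrative assumptions, not the authors' code.

```python
import numpy as np

class SurveillanceMap:
    """Minimal sketch of a multilayered global information grid (illustrative only)."""

    def __init__(self, height, width,
                 layers=("pedestrian", "vehicle", "elevation", "temperature")):
        self.layers = {name: np.zeros((height, width)) for name in layers}
        # Running history per layer, used later for the anomaly check.
        self.history = {name: [] for name in layers}

    def update(self, layer, cell, value):
        """An agent (CCTV, robot, shuttle) reports an observation for one grid cell."""
        r, c = cell
        self.layers[layer][r, c] = value
        self.history[layer].append(value)

    def is_anomalous(self, layer, cell, z_thresh=3.0, min_samples=30):
        """Two-stage check (assumed form): (1) require enough history for the layer,
        (2) flag values whose z-score exceeds a threshold."""
        obs = np.array(self.history[layer])
        if len(obs) < min_samples:           # stage 1: not enough data yet
            return False
        mean, std = obs.mean(), obs.std() + 1e-8
        r, c = cell
        z = abs(self.layers[layer][r, c] - mean) / std
        return z > z_thresh                  # stage 2: statistical outlier

# Example: a temperature spike reported at one cell
smap = SurveillanceMap(100, 100)
for v in np.random.normal(22.0, 1.0, size=200):   # normal readings
    smap.update("temperature", (10, 10), v)
smap.update("temperature", (10, 10), 40.0)        # unusual reading
print(smap.is_anomalous("temperature", (10, 10)))  # True
```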

Multimodal Biometrics Recognition from Facial Video with Missing Modalities Using Deep Learning

  • Maity, Sayan; Abdel-Mottaleb, Mohamed; Asfour, Shihab S.
    • Journal of Information Processing Systems / v.16 no.1 / pp.6-29 / 2020
  • Biometric identification using multiple modalities has attracted the attention of many researchers because it produces more robust and trustworthy results than single-modality biometrics. In this paper, we present a novel multimodal recognition system that trains a deep learning network to automatically learn features after extracting multiple biometric modalities from a single data source, i.e., facial video clips. Utilizing the different modalities present in the facial video clips, i.e., left ear, left profile face, frontal face, right profile face, and right ear, we train supervised denoising auto-encoders to automatically extract robust and non-redundant features. The automatically learned features are then used to train modality-specific sparse classifiers to perform multimodal recognition. Moreover, the proposed technique remains robust when some of these modalities are missing during testing. The proposed system has three main components: detection, which consists of modality-specific detectors that automatically detect images of the different modalities present in the facial video clips; feature selection, which uses a supervised denoising sparse auto-encoder network to capture discriminative representations that are robust to illumination and pose variations; and classification, which consists of a set of modality-specific sparse representation classifiers for unimodal recognition, followed by score-level fusion of the recognition results of the available modalities. Experiments conducted on the constrained facial video dataset (WVU) and the unconstrained facial video dataset (HONDA/UCSD) yielded Rank-1 recognition rates of 99.17% and 97.14%, respectively. The multimodal recognition accuracy demonstrates the superiority and robustness of the proposed approach irrespective of the illumination, non-planar movement, and pose variations present in the video clips, even when modalities are missing.
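
The fusion step described above, combining unimodal recognition scores only for the modalities actually present at test time, can be illustrated with a minimal sketch; the equal weights, min-max normalization, and score shapes below are assumptions, not the authors' implementation.

```python
import numpy as np

def fuse_scores(modality_scores, weights=None):
    """Score-level fusion over available modalities (illustrative sketch).

    modality_scores: dict mapping modality name -> score vector over enrolled identities,
                     with None for modalities missing at test time.
    """
    available = {m: s for m, s in modality_scores.items() if s is not None}
    if not available:
        raise ValueError("no modality available")
    if weights is None:
        weights = {m: 1.0 for m in available}           # equal weights by default
    fused = np.zeros_like(next(iter(available.values())), dtype=float)
    total = 0.0
    for m, scores in available.items():
        s = np.asarray(scores, dtype=float)
        s = (s - s.min()) / (s.max() - s.min() + 1e-8)  # min-max normalize each modality
        fused += weights[m] * s
        total += weights[m]
    return fused / total

# Example with the right-ear modality missing at test time
scores = {
    "frontal_face": [0.2, 0.9, 0.1],
    "left_ear":     [0.3, 0.8, 0.2],
    "right_ear":    None,               # missing modality
}
fused = fuse_scores(scores)
print("predicted identity:", int(np.argmax(fused)))
```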

Designing a Framework of Multimodal Contents Creation and Playback System for Immersive Textbook (실감형 교과서를 위한 멀티모달 콘텐츠 저작 및 재생 프레임워크 설계)

  • Kim, Seok-Yeol; Park, Jin-Ah
    • The Journal of the Korea Contents Association / v.10 no.8 / pp.1-10 / 2010
  • For virtual education, a multimodal learning environment with haptic feedback, termed an 'immersive textbook', is necessary to enhance learning effectiveness. However, learning contents for the immersive textbook are not widely available because of constraints in the creation and playback environments. To address this problem, we propose a framework for producing and displaying multimodal contents for the immersive textbook. Our framework provides an XML-based meta-language for producing multimodal learning contents in the form of an intuitive script, so users without any prior knowledge of multimodal interaction can produce their own learning contents. The contents are then interpreted by a script engine and delivered to the user through visual and haptic rendering loops. We also implemented a prototype based on these proposals and performed a user evaluation to verify the validity of our framework.
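
The abstract mentions an XML-based meta-language interpreted by a script engine; the schema below is entirely hypothetical, meant only to illustrate how such a script might be parsed and dispatched to visual and haptic rendering loops.

```python
import xml.etree.ElementTree as ET

# Hypothetical script: element and attribute names are assumptions, not the authors' schema.
SCRIPT = """
<lesson title="Heart anatomy">
  <scene id="1">
    <visual model="heart.obj" highlight="left_ventricle"/>
    <haptic effect="pulse" frequency="1.2" stiffness="0.4"/>
    <narration>Feel the rhythm of the left ventricle.</narration>
  </scene>
</lesson>
"""

def play(script_xml):
    """Minimal interpreter sketch: dispatch each scene's elements to the proper loop."""
    root = ET.fromstring(script_xml)
    for scene in root.findall("scene"):
        for node in scene:
            if node.tag == "visual":
                print("visual loop: load", node.get("model"), "highlight", node.get("highlight"))
            elif node.tag == "haptic":
                print("haptic loop: effect", node.get("effect"), "freq", node.get("frequency"))
            elif node.tag == "narration":
                print("audio:", node.text.strip())

play(SCRIPT)
```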

The Effect of AI Agent's Multi Modal Interaction on the Driver Experience in the Semi-autonomous Driving Context : With a Focus on the Existence of Visual Character (반자율주행 맥락에서 AI 에이전트의 멀티모달 인터랙션이 운전자 경험에 미치는 효과 : 시각적 캐릭터 유무를 중심으로)

  • Suh, Min-soo; Hong, Seung-Hye; Lee, Jeong-Myeong
    • The Journal of the Korea Contents Association / v.18 no.8 / pp.92-101 / 2018
  • As interactive AI speakers become popular, voice recognition is regarded as an important vehicle-driver interaction method in autonomous driving situations. The purpose of this study is to confirm whether multimodal interaction, in which feedback is delivered both aurally and through a visual AI character on screen, optimizes the user experience more effectively than the auditory mode alone. Participants performed music selection and adjustment tasks through the AI speaker while driving, and we measured information and system quality, presence, perceived usefulness and ease of use, and continuance intention. The analysis showed no multimodal effect of the visual character on most user experience factors or on continuance intention. Rather, the auditory-only mode was more effective than the multimodal mode for the information quality factor. In the semi-autonomous driving stage, which requires the driver's cognitive effort, multimodal interaction is therefore not more effective than single-mode interaction in optimizing the user experience.

A Full Body Gumdo Game with an Intelligent Cyber Fencer using Multi-modal(3D Vision and Speech) Interface (멀티모달 인터페이스(3차원 시각과 음성 )를 이용한 지능적 가상검객과의 전신 검도게임)

  • 윤정원; 김세환; 류제하; 우운택
    • Journal of KIISE: Computing Practices and Letters / v.9 no.4 / pp.420-430 / 2003
  • This paper presents an immersive multimodal Gumdo simulation game that allows a user to experience whole-body interaction with an intelligent cyber fencer. The proposed system consists of three modules: (i) a nondistracting multimodal interface with 3D vision and speech, (ii) an intelligent cyber fencer, and (iii) immersive feedback through a big screen and sound. First, the multimodal interface with 3D vision and speech allows the user to move around and to shout without distraction. Second, the intelligent cyber fencer provides the user with intelligent interactions through perception and reaction modules created from the analysis of real Gumdo games. Finally, immersive audio-visual feedback from the big screen and sound effects helps the user experience immersive interaction. The proposed system thus provides the user with an immersive Gumdo experience involving whole-body movement and can be applied to various areas such as education, exercise, and art performance.

Multimodal biometrics system using PDA under ubiquitous environments (유비쿼터스 환경에서 PDA를 이용한 다중생체인식 시스템 구현)

  • Kwon Man-Jun; Yang Dong-Hwa; Kim Yong-Sam; Lee Dae-Jong; Chun Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems / v.16 no.4 / pp.430-435 / 2006
  • In this paper, we propose a multimodal biometrics system using the face and signature under ubiquitous computing environments. First, face and signature images are captured on a PDA; these images, together with the user ID and name, are transmitted via WLAN (wireless LAN) to the server, and the PDA finally receives the verification result from the server. The multimodal biometrics recognition system consists of two parts. In the client part, located on the PDA, a user interface program executes the user registration and verification process. The server performs face recognition with PCA and LDA algorithms, which show excellent recognition performance, and signature recognition with Kernel PCA and LDA algorithms applied to signature images projected onto the vertical and horizontal axes by a grid partition method. The proposed algorithm is evaluated with several face and signature images and shows better recognition and verification results than previous unimodal biometric techniques.
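
As a rough sketch of the server-side pipeline described above (PCA followed by LDA for faces, and a kernel variant for signature projections), one could chain scikit-learn transformers as below; the toy data, dimensions, and score-fusion rule are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

# Toy data standing in for face images and signature-projection features (assumed shapes).
rng = np.random.default_rng(0)
X_face = rng.normal(size=(60, 256))        # 60 samples, 256-dim face vectors
X_sign = rng.normal(size=(60, 128))        # 60 samples, 128-dim signature projections
y = np.repeat(np.arange(6), 10)            # 6 users, 10 samples each

# Face branch: PCA then LDA, as named in the abstract.
face_clf = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
# Signature branch: Kernel PCA then LDA on the axis-projection features.
sign_clf = make_pipeline(KernelPCA(n_components=20, kernel="rbf"), LinearDiscriminantAnalysis())

face_clf.fit(X_face, y)
sign_clf.fit(X_sign, y)

# Simple score-level fusion of the two modalities (the fusion rule is an assumption).
probs = 0.5 * face_clf.predict_proba(X_face) + 0.5 * sign_clf.predict_proba(X_sign)
print("verification accuracy on toy data:", (probs.argmax(axis=1) == y).mean())
```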

Design of the Multimodal Input System using Image Processing and Speech Recognition (음성인식 및 영상처리 기반 멀티모달 입력장치의 설계)

  • Choi, Won-Suk; Lee, Dong-Woo; Kim, Moon-Sik; Na, Jong-Whoa
    • Journal of Institute of Control, Robotics and Systems / v.13 no.8 / pp.743-748 / 2007
  • Recently, various types of camera mouse have been developed using image processing. The camera mouse shows limited performance compared to the traditional optical mouse in terms of response time and usability. These problems are caused by the mismatch between the size of the monitor and that of the active pixel area of the CMOS image sensor. To overcome these limitations, we designed a new input device that uses face recognition and speech recognition simultaneously. In the proposed system, the area of the monitor is partitioned into n zones. Face recognition is performed using a web camera, so that the mouse pointer follows the movement of the user's face within a particular zone, and the user can switch zones by speaking the name of the desired zone. The multimodal mouse is analyzed using the Keystroke-Level Model, and initial experiments were performed to evaluate the feasibility and performance of the proposed system.
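
The zone-switching behavior described in this abstract (the pointer follows the face within the active zone, while speech selects the zone) can be sketched abstractly; the zone grid, command format, and coordinate mapping below are illustrative assumptions only.

```python
class ZonedPointer:
    """Sketch of a pointer confined to one of grid x grid monitor zones (illustrative)."""

    def __init__(self, screen_w=1920, screen_h=1080, grid=3):
        self.screen_w, self.screen_h, self.grid = screen_w, screen_h, grid
        self.zone = (0, 0)                       # (row, col) of the active zone

    def on_speech(self, command):
        """Speech-recognition result names the zone, e.g. 'zone 1 2' (assumed format)."""
        parts = command.split()
        if parts[0] == "zone" and len(parts) == 3:
            self.zone = (int(parts[1]), int(parts[2]))

    def on_face(self, dx, dy):
        """Face displacement (dx, dy) in [0, 1] from the camera maps into the active zone."""
        zone_w = self.screen_w / self.grid
        zone_h = self.screen_h / self.grid
        row, col = self.zone
        x = col * zone_w + dx * zone_w
        y = row * zone_h + dy * zone_h
        return int(x), int(y)

p = ZonedPointer()
p.on_speech("zone 1 2")          # switch zones by voice
print(p.on_face(0.5, 0.5))       # pointer centered within that zone -> (1600, 540)
```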

The influence of learning style in understanding analogies and 2D animations in embryology course

  • Narayanan, Suresh; Ananthy, Vimala
    • Anatomy and Cell Biology / v.51 no.4 / pp.260-265 / 2018
  • Undergraduate students struggle to comprehend embryology because of its dynamic nature. Studies have recommended using a combination of teaching methods to match students' learning styles, but no study has described the effect of such a teaching strategy on the different types of learners. In the present study, an attempt was made to teach embryology using a combination of analogies and simple 2D animations made with Microsoft PowerPoint. The objective of the study is to estimate the difference in academic improvement and perception scores between the different types of learners after introducing analogies and 2D animation in a lecture environment. Based on the Visual, Aural, Read/Write, and Kinesthetic (VARK) scoring system, the learners were grouped into unimodal and multimodal learners. There was significant improvement in post-test scores among both the unimodal (P<0.001) and multimodal learners (P<0.001). When the post-test scores were compared between the two groups, the multimodal learners performed better than the unimodal learners (P=0.018). However, there was no difference between the groups in the perception of animations and analogies or in long-term assessment. The multimodal learners thus performed better than unimodal learners in short-term recollection, but learning style did not influence long-term retention of knowledge.
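
The comparisons reported above (within-group pre/post improvement and a between-group comparison of post-test scores) can be reproduced in outline with paired and independent t-tests; the score arrays below are placeholders for illustration, not the study's data.

```python
import numpy as np
from scipy import stats

# Placeholder scores (not the study's data): pre/post test marks for each learner group.
rng = np.random.default_rng(1)
uni_pre,   uni_post   = rng.normal(10, 2, 30), rng.normal(13, 2, 30)   # unimodal learners
multi_pre, multi_post = rng.normal(10, 2, 50), rng.normal(14, 2, 50)   # multimodal learners

# Within-group improvement: paired t-test (pre vs. post), analogous to the P<0.001 results.
print("unimodal improvement p:", stats.ttest_rel(uni_pre, uni_post).pvalue)
print("multimodal improvement p:", stats.ttest_rel(multi_pre, multi_post).pvalue)

# Between-group comparison of post-test scores (the abstract reports P=0.018).
print("post-test, multimodal vs unimodal p:", stats.ttest_ind(multi_post, uni_post).pvalue)
```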

Implementation and Evaluation of Harmful-Media Filtering Techniques using Multimodal-Information Extraction

  • Lee, Yeon-Ji; Oh, Ye-Sol; Park, Na-Eun; Lee, Il-Gu
    • Journal of Information and Communication Convergence Engineering / v.21 no.1 / pp.75-81 / 2023
  • Video platforms, including YouTube, have a structure in which the number of video views is directly related to the publisher's profits. Therefore, video publishers attract viewers by using provocative titles and thumbnails to garner more views. The conventional technique used to limit such harmful videos has low detection accuracy and relies on follow-up measures based on user reports. To address these problems, this study proposes a technique to improve the accuracy of filtering harmful media using the thumbnail, title, and audio data of a video. The three pieces of multimodal information are analyzed, and if the number of harmful determinations exceeds the set threshold, the video is deemed harmful and its upload is restricted. The experimental results showed that the proposed multimodal information extraction technique for harmful-video filtering achieved 9% better detection accuracy than YouTube's Restricted Mode and 41% better performance than the YouTube automation system.
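
The filtering rule described in this abstract, running per-modality harmfulness checks on the thumbnail, title, and audio and restricting the upload when the number of harmful determinations passes a threshold, reduces to a simple voting scheme; the per-modality classifiers below are stubbed-out placeholders, not the paper's models.

```python
def is_harmful_video(thumbnail, title, audio_transcript, threshold=2,
                     thumb_clf=None, title_clf=None, audio_clf=None):
    """Sketch of the multimodal voting rule (per-modality classifiers are placeholders)."""
    # Each classifier returns True if its modality looks harmful; missing stubs vote benign.
    votes = [
        (thumb_clf or (lambda x: False))(thumbnail),
        (title_clf or (lambda x: False))(title),
        (audio_clf or (lambda x: False))(audio_transcript),
    ]
    harmful_count = sum(votes)
    return harmful_count >= threshold   # restrict upload when enough modalities vote harmful

# Example with trivial keyword-based stand-ins for the real classifiers.
banned = {"shocking", "gore"}
title_clf = lambda t: any(w in t.lower() for w in banned)
audio_clf = lambda a: any(w in a.lower() for w in banned)
print(is_harmful_video(None, "SHOCKING footage!!", "... gore ...",
                       title_clf=title_clf, audio_clf=audio_clf))   # True
```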

Automated detection of panic disorder based on multimodal physiological signals using machine learning

  • Eun Hye Jang; Kwan Woo Choi; Ah Young Kim; Han Young Yu; Hong Jin Jeon; Sangwon Byun
    • ETRI Journal / v.45 no.1 / pp.105-118 / 2023
  • We tested the feasibility of automated discrimination of patients with panic disorder (PD) from healthy controls (HCs) based on multimodal physiological responses using machine learning. Electrocardiogram (ECG), electrodermal activity (EDA), respiration (RESP), and peripheral temperature (PT) of the participants were measured during three experimental phases: rest, stress, and recovery. Eleven physiological features were extracted from each phase and used as input data. Logistic regression (LoR), k-nearest neighbor (KNN), support vector machine (SVM), random forest (RF), and multilayer perceptron (MLP) algorithms were implemented with nested cross-validation. Linear regression analysis showed that ECG and PT features obtained in the stress and recovery phases were significant predictors of PD. We achieved the highest accuracy (75.61%) with MLP using all 33 features. With the exception of MLP, applying the significant predictors led to a higher accuracy than using 24 ECG features. These results suggest that combining multimodal physiological signals measured during various states of autonomic arousal has the potential to differentiate patients with PD from HCs.
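
A minimal sketch of the nested cross-validation setup named in this abstract (an inner loop for hyperparameter selection and an outer loop for performance estimation) is shown below with scikit-learn, using the SVM as an example; the feature matrix, labels, and parameter grid are placeholders, not the study's data.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: 33 physiological features per participant (not the study's data).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 33))
y = rng.integers(0, 2, size=100)            # 0 = healthy control, 1 = panic disorder

inner_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
outer_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)

# Inner loop tunes the SVM; the same pattern applies to LoR, KNN, RF, and MLP.
model = GridSearchCV(
    make_pipeline(StandardScaler(), SVC()),
    param_grid={"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01]},
    cv=inner_cv,
)
# Outer loop gives an unbiased accuracy estimate.
scores = cross_val_score(model, X, y, cv=outer_cv)
print("nested-CV accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```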