• Title/Summary/Keyword: Learning Modalities

45 search results

Multimodal Biometrics Recognition from Facial Video with Missing Modalities Using Deep Learning

  • Maity, Sayan; Abdel-Mottaleb, Mohamed; Asfour, Shihab S.
    • Journal of Information Processing Systems / Vol. 16, No. 1 / pp. 6-29 / 2020
  • Biometrics identification using multiple modalities has attracted the attention of many researchers as it produces more robust and trustworthy results than single-modality biometrics. In this paper, we present a novel multimodal recognition system that trains a deep learning network to automatically learn features after extracting multiple biometric modalities from a single data source, i.e., facial video clips. Utilizing the different modalities present in the facial video clips, i.e., left ear, left profile face, frontal face, right profile face, and right ear, we train supervised denoising auto-encoders to automatically extract robust and non-redundant features. The automatically learned features are then used to train modality-specific sparse classifiers to perform the multimodal recognition. Moreover, the proposed technique proved robust when some of the above modalities were missing during testing. The proposed system has three main components: detection, which consists of modality-specific detectors that automatically locate images of the different modalities present in facial video clips; feature selection, which uses a supervised denoising sparse auto-encoder network to capture discriminative representations that are robust to illumination and pose variations; and classification, which consists of a set of modality-specific sparse representation classifiers for unimodal recognition, followed by score-level fusion of the recognition results of the available modalities. Experiments conducted on a constrained facial video dataset (WVU) and an unconstrained facial video dataset (HONDA/UCSD) yielded Rank-1 recognition rates of 99.17% and 97.14%, respectively. The multimodal recognition accuracy demonstrates the superiority and robustness of the proposed approach irrespective of the illumination, non-planar movement, and pose variations present in the video clips, even when some modalities are missing.
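The pipeline above combines per-modality feature learning with score-level fusion that tolerates missing modalities. The sketch below illustrates both ideas in PyTorch; the layer sizes, noise level, and simple mean-fusion rule are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Learns robust per-modality features by reconstructing clean
    inputs from corrupted ones (illustrative sizes)."""
    def __init__(self, in_dim=1024, hid_dim=256, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.decoder = nn.Linear(hid_dim, in_dim)

    def forward(self, x):
        noisy = x + self.noise_std * torch.randn_like(x)  # corrupt input
        code = self.encoder(noisy)                        # robust feature
        return self.decoder(code), code                   # reconstruction, feature

def fuse_scores(modality_scores):
    """Score-level fusion that skips missing modalities: average the
    per-subject match scores of whatever modalities are available."""
    available = [s for s in modality_scores.values() if s is not None]
    return torch.stack(available).mean(dim=0)

# Toy usage: identify a subject when the right-ear modality is missing.
scores = {"frontal_face": torch.rand(10),   # 10 enrolled subjects
          "left_ear": torch.rand(10),
          "right_ear": None}                # missing at test time
predicted_subject = fuse_scores(scores).argmax()
```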

An Analysis of Collaborative Visualization Processing of Text Information for Developing e-Learning Contents

  • SUNG, Eunmo
    • Educational Technology International / Vol. 10, No. 1 / pp. 25-40 / 2009
  • The purpose of this study was to explore the procedures and modalities of collaborative visualization processing of text information for developing e-Learning contents. Two research questions were investigated: 1) what are the procedures of collaborative visualization processing of text information, and 2) what patterns and modalities can be found in each procedure. The study employed a qualitative research approach based on grounded theory. The results showed that collaborative visualization processing of text information emerges in six steps: identifying text, analyzing text, exploring visual clues, creating visuals, discussing visuals, and elaborating visuals. The process exhibited a systemic and systematic character, following a spiral sequence. The modalities of collaborative visualization processing also divided into two dimensions: individual processing through internal representation, and social processing through external representation. This case study suggests that a collaborative visualization strategy can provide effective methods for sharing cognitive and thinking systems by drawing on human visual intelligence.

Human Action Recognition Using Pyramid Histograms of Oriented Gradients and Collaborative Multi-task Learning

  • Gao, Zan; Zhang, Hua; Liu, An-An; Xue, Yan-Bing; Xu, Guang-Ping
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 8, No. 2 / pp. 483-503 / 2014
  • In this paper, human action recognition using pyramid histograms of oriented gradients and collaborative multi-task learning is proposed. First, we accumulate global activities and construct motion history images (MHI) for the RGB and depth channels respectively to encode the dynamics of an action in different modalities, and then extract different action descriptors from the depth and RGB MHIs to represent the global textural and structural characteristics of these actions. Specifically, hierarchical block average values, GIST, and pyramid histograms of oriented gradients descriptors are employed to represent human motion. To demonstrate the superiority of the proposed method, we evaluate these descriptors with KNN, SVM with linear and RBF kernels, SRC, and CRC models on the DHA dataset, a well-known dataset for human action recognition. Large-scale experimental results show that our descriptors are robust, stable, and efficient, and outperform state-of-the-art methods. In addition, we further investigate the performance of our descriptors by combining them on the DHA dataset, and observe that the combined descriptors perform much better than any single descriptor alone. With multimodal features, we also propose a collaborative multi-task learning method for model learning and inference based on transfer learning theory. The main contributions lie in four aspects: 1) the proposed encoding scheme can filter out the stationary parts of the human body and reduce noise interference; 2) different kinds of features and models are assessed, and neighboring gradient information and pyramid layers prove very helpful for representing these actions; 3) the proposed model can fuse features from different modalities regardless of the sensor types, value ranges, and dimensions of the different features; 4) the latent common knowledge among different modalities can be discovered by transfer learning to boost performance.
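The motion history image at the heart of this pipeline is a standard construction: recently moving pixels stay bright and older motion decays frame by frame. Below is a minimal NumPy sketch of that step; the decay constant and motion threshold are illustrative assumptions, and the same routine would be applied to the RGB (grayscale) and depth channels separately.

```python
import numpy as np

def motion_history_image(frames, tau=30, threshold=25):
    """Build an MHI from a (T, H, W) uint8 frame stack: pixels that
    moved in the latest frame are set to tau, everything else decays
    by one per frame, so brightness encodes recency of motion."""
    _, H, W = frames.shape
    mhi = np.zeros((H, W), dtype=np.float32)
    for t in range(1, len(frames)):
        diff = np.abs(frames[t].astype(np.int16) - frames[t - 1].astype(np.int16))
        moving = diff >= threshold
        mhi[moving] = tau
        mhi[~moving] = np.maximum(mhi[~moving] - 1.0, 0.0)
    return mhi / tau  # normalize to [0, 1] before descriptor extraction

# Toy usage on random frames; real input would be the RGB (grayscale)
# or depth channel of one action clip.
clip = (np.random.rand(40, 120, 160) * 255).astype(np.uint8)
mhi = motion_history_image(clip)
```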

Tumor Segmentation in Multimodal Brain MRI Using Deep Learning Approaches

  • Al Shehri, Waleed; Jannah, Najlaa
    • International Journal of Computer Science & Network Security / Vol. 22, No. 8 / pp. 343-351 / 2022
  • A brain tumor forms when old or damaged tissue fails to die as it should, preventing new tissue from forming. Manually finding such masses by analyzing brain MRI images is challenging and time-consuming for experts. The main objective of this study is to detect the tumorous part of the brain, enabling rapid diagnosis so that the primary disease can be treated promptly. Using image processing techniques and deep learning prediction algorithms, we build a system that automatically and accurately locates tumors in brain MRI images. Our tumor segmentation adopts the U-Net deep learning architecture on the standard MICCAI BraTS 2018 dataset, which contains MRI images of different modalities. The proposed approach was evaluated and achieved Dice coefficients of 0.9795, 0.9855, 0.9793, and 0.9950 across several test datasets. These results show that the proposed system achieves excellent tumor segmentation in MRIs using deep learning techniques such as the U-Net architecture.
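The Dice coefficient reported above is the standard overlap metric for evaluating segmentation. A minimal sketch of how it is computed from a predicted mask and a ground-truth mask (the masks here are random stand-ins, not BraTS data):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|P ∩ T| / (|P| + |T|) for binary masks of equal shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy usage: in the paper's setting, `pred` would be the thresholded
# U-Net output and `target` the BraTS ground-truth tumor mask.
pred = np.random.rand(128, 128) > 0.5
target = np.random.rand(128, 128) > 0.5
print(f"Dice: {dice_coefficient(pred, target):.4f}")
```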

U-Learning을 위한 E-Learning에서 M-Learning으로의 교육적 패러다임 전환 (Educational Paradigm Shift from E-Learning to Mobile Learning Toward Ubiquitous Learning)

  • 김혜진
    • 한국산학기술학회논문지 / Vol. 12, No. 11 / pp. 4788-4795 / 2011
  • This study examines and proposes the effects and feasibility of shifting the learning paradigm from traditional methods to U-Learning, and considers the transition of instructional platforms from E-Learning to M-Learning and on to U-Learning. Without adequate research on how the learning environment affects individual learning processes, it will be difficult to provide high-quality education. We are entering an era of new learning environments in which education is available anytime and anywhere, and anyone can receive lifelong education. To maximize the advantages of this trend, the constraints on delivering high-quality education must be identified, and these factors should be discussed together with U-Learning and the technologies that enable it. As pervasive and lifelong education has drawn the attention of many research institutions, this paper also includes a discussion of learning modes and types of learning modalities.

A Survey of Multimodal Systems and Techniques for Motor Learning

  • Tadayon, Ramin; McDaniel, Troy; Panchanathan, Sethuraman
    • Journal of Information Processing Systems / Vol. 13, No. 1 / pp. 8-25 / 2017
  • This survey paper explores the application of multimodal feedback in automated systems for motor learning. We review the findings of recent studies in this field, using rehabilitation and various motor-training scenarios as context. We discuss popular feedback delivery and sensing mechanisms for motion capture and processing in terms of their requirements, benefits, and limitations. Modality selection is addressed by reviewing best-practice approaches for each modality relative to motor-task complexity, with example implementations from recent work. We summarize the advantages and disadvantages of several approaches for integrating modalities in terms of fusion and the frequency of feedback during motor tasks. Finally, we review the limitations of perceptual bandwidth and provide an evaluation of the information transfer of each modality.

DNN 학습을 이용한 퍼스널 비디오 시퀀스의 멀티 모달 기반 이벤트 분류 방법 (A Personal Video Event Classification Method based on Multi-Modalities by DNN-Learning)

  • 이유진; 낭종호
    • 정보과학회 논문지 / Vol. 43, No. 11 / pp. 1281-1297 / 2016
  • With the recent spread of smart devices and of network environments in which video content can be freely created and quickly and conveniently shared, personal videos are increasing rapidly. However, because a personal video by nature comprises multiple modalities whose data change over time, event classification must take this into account. This paper proposes a method for classifying personal video events that extracts high-level features from the multiple modalities in a video, rearranges them in temporal order, and trains a Deep Neural Network (DNN) on the relationships between the modalities. The proposed method extracts the images and audio embedded in a video in temporal synchronization and then extracts high-level information from each using GoogLeNet and a Multi-Layer Perceptron (MLP), respectively. These features are rearranged in the order in which they appear in the video to form a single feature vector per video, and a DNN trained on these vectors classifies personal video events.
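One way to read the "extract per-modality features, rearrange them in time order, and form a single feature per video" step is a synchronized interleave-and-flatten, sketched below. The feature dimensions are hypothetical and the real pipeline's alignment details may differ:

```python
import numpy as np

def build_video_feature(image_feats, audio_feats):
    """Concatenate synchronized per-segment image and audio features,
    keep them in temporal order, and flatten into one fixed-length
    vector per video (the input to the event-classification DNN).
    image_feats: (T, D_img), audio_feats: (T, D_aud), rows synchronized."""
    assert len(image_feats) == len(audio_feats)
    per_segment = np.concatenate([image_feats, audio_feats], axis=1)
    return per_segment.reshape(-1)  # time-ordered flat vector

# Toy usage: 8 synchronized segments with hypothetical 1024-d image
# features (e.g., from GoogLeNet) and 128-d audio features (from an MLP).
img = np.random.rand(8, 1024).astype(np.float32)
aud = np.random.rand(8, 128).astype(np.float32)
video_vector = build_video_feature(img, aud)  # shape: (8 * 1152,)
```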

뇌 종양 등급 분류를 위한 심층 멀티모달 MRI 통합 모델 (Deep Multimodal MRI Fusion Model for Brain Tumor Grading)

  • 나인예; 박현진
    • 한국정보통신학회:학술대회논문집 / 한국정보통신학회 2022 Spring Conference / pp. 416-418 / 2022
  • Glioma is a brain tumor arising from glial cells; it is classified into low-grade glioma and high-grade glioma, which has a poor prognosis. Magnetic resonance imaging (MRI) is a non-invasive modality, and research on glioma diagnosis based on MRI is being actively conducted. To overcome the informational limits of a single modality, studies are also combining multiple modalities to obtain complementary information. This paper proposes a 3D CNN-based model that applies input-level fusion to MRI images of four modalities (T1, T1Gd, T2, T2-FLAIR). On the validation data, the trained model achieved a classification accuracy of 0.8926, sensitivity of 0.9688, specificity of 0.6400, and AUC of 0.9467, confirming that it effectively classifies glioma grade by learning the interrelationships among the modalities.
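Input-level fusion here means stacking the four MRI modalities as input channels of a single 3D volume before the first convolution. A minimal PyTorch sketch of that design, with illustrative (not the paper's) layer sizes:

```python
import torch
import torch.nn as nn

class InputFusion3DCNN(nn.Module):
    """3D CNN whose first convolution sees all four MRI modalities
    stacked as channels (input-level fusion); sizes are illustrative."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(4, 16, kernel_size=3, padding=1),  # 4 = T1, T1Gd, T2, T2-FLAIR
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):  # x: (batch, 4, depth, height, width)
        return self.classifier(self.features(x).flatten(1))

# Toy usage: one subject, four modalities, 64^3 voxels each.
model = InputFusion3DCNN()
logits = model(torch.randn(1, 4, 64, 64, 64))  # low-grade vs. high-grade
```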

Clinical Implementation of Deep Learning in Thoracic Radiology: Potential Applications and Challenges

  • Eui Jin Hwang; Chang Min Park
    • Korean Journal of Radiology / Vol. 21, No. 5 / pp. 511-525 / 2020
  • Chest X-ray radiography and computed tomography, the two mainstay modalities in thoracic radiology, are under active investigation with deep learning technology, which has shown promising performance in various tasks, including detection, classification, segmentation, and image synthesis, outperforming conventional methods and suggesting its potential for clinical implementation. However, the implementation of deep learning in daily clinical practice is in its infancy and faces several challenges, such as the limited ability to explain output results, uncertain benefits regarding patient outcomes, and incomplete integration into daily workflows. In this review article, we introduce the potential clinical applications of deep learning technology in thoracic radiology and discuss several challenges to its implementation in daily clinical practice.

Predicting Session Conversion on E-commerce: A Deep Learning-based Multimodal Fusion Approach

  • Minsu Kim; Woosik Shin; SeongBeom Kim; Hee-Woong Kim
    • Asia Pacific Journal of Information Systems / Vol. 33, No. 3 / pp. 737-767 / 2023
  • With the availability of big customer data and advances in machine learning techniques, the prediction of customer behavior at the session level has attracted considerable attention from marketing practitioners and scholars. This study aims to predict customer purchase conversion at the session level by employing customer profile, transaction, and clickstream data. For this purpose, we develop a multimodal deep learning fusion model with dynamic and static features (i.e., DS-fusion). Specifically, we use page views within the focal visit as dynamic features and recency, frequency, monetary value, and clumpiness (RFMC) as static features to comprehensively capture customer characteristics related to buying behavior. Our model combines these features with deep learning architectures for conversion prediction. We validate the proposed model using real-world e-commerce data. The experimental results reveal that our model outperforms unimodal classifiers built on either feature set, as well as classical machine learning models with dynamic and static features, including random forest and logistic regression. In this regard, this study sheds light on the promise of machine learning approaches that combine complementary modalities for predicting customer behavior.
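A common way to realize such a dynamic-plus-static fusion model is a two-branch network: a recurrent branch summarizes the session's page-view sequence, a dense branch embeds the static RFMC vector, and the two are concatenated for the conversion decision. The sketch below follows that pattern; it is an assumption-laden illustration, not the authors' DS-fusion implementation:

```python
import torch
import torch.nn as nn

class TwoBranchFusion(nn.Module):
    """Dynamic branch (GRU over the page-view sequence) plus static
    branch (MLP over the RFMC vector), fused by concatenation."""
    def __init__(self, page_feat_dim=8, static_dim=4, hidden=32):
        super().__init__()
        self.dynamic = nn.GRU(page_feat_dim, hidden, batch_first=True)
        self.static = nn.Sequential(nn.Linear(static_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, pageviews, rfmc):
        # pageviews: (batch, seq_len, page_feat_dim); rfmc: (batch, static_dim)
        _, h = self.dynamic(pageviews)               # final hidden state
        fused = torch.cat([h[-1], self.static(rfmc)], dim=1)
        return torch.sigmoid(self.head(fused))       # P(conversion)

# Toy usage: 2 sessions, 10 page views each, 4 static RFMC values.
model = TwoBranchFusion()
p_convert = model(torch.randn(2, 10, 8), torch.randn(2, 4))
```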