• Title/Summary/Keyword: 멀티모달정보 (multimodal information)


Gait Type Classification Using Multi-modal Ensemble Deep Learning Network

  • Park, Hee-Chan;Choi, Young-Chan;Choi, Sang-Il
    • Journal of the Korea Society of Computer and Information, v.27 no.11, pp.29-38, 2022
  • This paper proposes a system for classifying gait types with an ensemble deep learning network, using gait data measured by a smart insole equipped with multiple sensors. The gait type classification system consists of a part that normalizes the data measured by the insole, a part that extracts gait features with a deep learning network, and a part that classifies the gait type from the extracted features. Two kinds of gait feature maps were extracted by independently training networks with different characteristics, one based on CNNs and one based on LSTMs, and the final ensemble result was obtained by combining their classification outputs. Multi-sensor data for seven gait types of adults in their 20s and 30s (walking, running, fast walking, going up and down stairs, and going up and down hills) was classified with the proposed ensemble network, and a classification rate higher than 90% was confirmed.
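
The entry above describes two independently trained feature extractors, one CNN-based and one LSTM-based, whose class scores are combined into an ensemble decision. The following is a minimal PyTorch sketch of such a late-fusion ensemble; the sensor count, sequence length, layer sizes, and score averaging are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of a CNN + LSTM late-fusion ensemble for insole sensor sequences.
# Shapes and layer sizes are illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn

NUM_SENSORS, SEQ_LEN, NUM_CLASSES = 8, 100, 7  # assumed dimensions

class CNNBranch(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(NUM_SENSORS, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.fc = nn.Linear(32, NUM_CLASSES)

    def forward(self, x):                      # x: (batch, sensors, time)
        return self.fc(self.conv(x).squeeze(-1))

class LSTMBranch(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(NUM_SENSORS, 32, batch_first=True)
        self.fc = nn.Linear(32, NUM_CLASSES)

    def forward(self, x):                      # x: (batch, sensors, time)
        out, _ = self.lstm(x.transpose(1, 2))  # LSTM expects (batch, time, features)
        return self.fc(out[:, -1])             # use the last hidden state

def ensemble_predict(cnn, lstm, x):
    """Average the two branches' softmax scores and pick the gait class."""
    probs = (torch.softmax(cnn(x), dim=1) + torch.softmax(lstm(x), dim=1)) / 2
    return probs.argmax(dim=1)

x = torch.randn(4, NUM_SENSORS, SEQ_LEN)       # dummy batch of insole sequences
print(ensemble_predict(CNNBranch(), LSTMBranch(), x))
```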

The Implementation of Real-Time Speaker Localization Using Multi-Modality (멀티모달러티를 이용한 실시간 음원추적 시스템 구현)

  • Park, Jeong-Ok;Na, Seung-You;Kim, Jin-Young
    • Proceedings of the KIEE Conference, 2004.11c, pp.459-461, 2004
  • This paper presents an implementation of real-time speaker localization using audio-visual information. Four channels of microphone signals are processed to detect vertical as well as horizontal speaker positions. First, short-time average magnitude difference function (AMDF) signals are used to determine whether the microphone signals are human voices. The orientation and distance of the sound source are then obtained from interaural time differences and interaural level differences. Finally, visual information from a camera provides finer tuning of the speaker orientation. Experimental results of the real-time localization system show that performance improves to 99.6%, compared to 88.8% when only the audio information is used.
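
As a rough illustration of the audio processing described above, the sketch below computes a short-time AMDF as a crude voiced-speech check and estimates the source azimuth from the inter-microphone time difference via cross-correlation. The sampling rate, microphone spacing, and threshold are assumed values, not those used in the paper, and the camera-based refinement is not reproduced.

```python
# Sketch: AMDF-based voice check and azimuth estimation from two microphone channels.
# Sampling rate, mic spacing, and threshold are illustrative assumptions.
import numpy as np

FS = 16000          # sampling rate [Hz] (assumed)
MIC_DIST = 0.20     # microphone spacing [m] (assumed)
C = 343.0           # speed of sound [m/s]

def amdf(frame, max_lag=400):
    """Short-time average magnitude difference function of one frame."""
    return np.array([np.mean(np.abs(frame[:-k] - frame[k:])) for k in range(1, max_lag)])

def looks_voiced(frame, ratio=0.3):
    """Crude voice check: a deep AMDF valley suggests a periodic (voiced) signal."""
    d = amdf(frame)
    return d.min() < ratio * d.mean()

def azimuth_from_itd(left, right):
    """Estimate source azimuth from the inter-channel time difference."""
    corr = np.correlate(left, right, mode="full")
    lag = corr.argmax() - (len(right) - 1)         # delay between channels in samples
    tau = lag / FS
    return np.degrees(np.arcsin(np.clip(C * tau / MIC_DIST, -1.0, 1.0)))

# Dummy example: the same tone delayed by 3 samples on the second channel.
t = np.arange(0, 0.03, 1 / FS)
left = np.sin(2 * np.pi * 200 * t)
right = np.roll(left, 3)
if looks_voiced(left):
    print(f"estimated azimuth: {azimuth_from_itd(left, right):.1f} deg")
```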


Emergency situations Recognition System Using Multimodal Information (멀티모달 정보를 이용한 응급상황 인식 시스템)

  • Kim, Young-Un;Kang, Sun-Kyung;So, In-Mi;Han, Dae-Kyung;Kim, Yoon-Jin;Jung, Sung-Tae
    • Proceedings of the IEEK Conference, 2008.06a, pp.757-758, 2008
  • This paper proposes an emergency recognition system using multimodal information extracted by an image processing module, a voice processing module, and a gravity sensor processing module. Each processing module detects predefined events such as moving, stopping, and fainting, and transfers them to the multimodal integration module. The multimodal integration module recognizes an emergency situation using the transferred events and rechecks it by asking the user a question and recognizing the answer. The experiment was conducted for fainting motions in a living room and a bathroom. The results show that the proposed system is more robust than previous methods and effectively recognizes emergencies in various situations.
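
The integration module described above amounts to rule-based fusion of events from the three modules followed by a spoken confirmation. Below is a hypothetical Python sketch of that decision logic; the event names, the confirmation question, and the silence-means-emergency policy are assumptions, not the authors' actual rules.

```python
# Hypothetical sketch of rule-based multimodal integration for emergency recognition.
# Event names, the confirmation question, and the timeout policy are assumptions.
from dataclasses import dataclass

@dataclass
class Event:
    source: str   # "image", "voice", or "gravity"
    name: str     # e.g. "moving", "stopping", "fainting"

def ask_user(question: str) -> str | None:
    """Placeholder for the speech dialog; returns the recognized answer or None."""
    print(f"[system asks] {question}")
    return None  # no answer recognized in this stub

def integrate(events: list[Event]) -> bool:
    """Return True if the fused events, after re-checking, indicate an emergency."""
    fainting_sources = {e.source for e in events if e.name == "fainting"}
    if not fainting_sources:
        return False
    # Re-check by asking the user; silence or a distress answer confirms the emergency.
    answer = ask_user("Are you all right?")
    return answer is None or answer.lower() in {"help", "no"}

events = [Event("gravity", "fainting"), Event("image", "stopping")]
print("emergency!" if integrate(events) else "normal")
```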


Walking Assistance System for Sight Impaired People Based on a Multimodal Information Transformation Technique (멀티모달 정보변환을 통한 시각장애우 보행 보조 시스템)

  • Yu, Jae-Hyoung;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of Institute of Control, Robotics and Systems, v.15 no.5, pp.465-472, 2009
  • This paper proposes a multimodal information transformation system that converts image information into voice information to inform sight-impaired people about the walking area and obstacles, which are extracted from an image acquired by a single CCD camera. Using a chain-code line detection algorithm, the walking area is found from the vanishing point and the sidewalk boundary in the edge image, and obstacles are detected with a Gabor filter that extracts vertical lines within the walking area. The proposed system voices predefined sentences composed of template words describing the walking area and obstacles, thereby providing useful voice information to sight-impaired users on the way to their destination. Experiments with the proposed algorithm were conducted in indoor and outdoor environments and verified that it accurately delivers sentences describing the walking parameters.
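
The obstacle-detection step above uses a Gabor filter tuned to vertical structures. A minimal OpenCV sketch of that idea follows; the kernel parameters and threshold are assumed values, and the chain-code walking-area detection from the paper is not reproduced.

```python
# Sketch: emphasizing vertical lines (candidate obstacles) with a Gabor filter.
# Kernel parameters and the threshold are illustrative assumptions.
import cv2
import numpy as np

def vertical_line_response(gray: np.ndarray) -> np.ndarray:
    """Filter the image with a Gabor kernel that responds to vertical structures."""
    # theta is the orientation of the normal to the Gabor stripes; theta=0 keeps the
    # stripes vertical, so the filter responds strongly to vertical edges and lines.
    kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=0.0,
                                lambd=10.0, gamma=0.5, psi=0.0)
    response = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel)
    return cv2.normalize(response, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Dummy example: a synthetic image containing one bright vertical bar.
img = np.zeros((120, 160), dtype=np.uint8)
img[:, 80:84] = 255
mask = vertical_line_response(img) > 200   # assumed threshold for obstacle candidates
print("vertical-structure pixels:", int(mask.sum()))
```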

Component Analysis for Constructing an Emotion Ontology (감정 온톨로지의 구축을 위한 구성요소 분석)

  • Yoon, Aesun;Kwon, Hyuk-Chul
    • Annual Conference on Human and Language Technology, 2009.10a, pp.19-24, 2009
  • In communication, understanding the emotions of the interlocutors is as important as the content of the message. Although more information about emotion is conveyed by non-verbal elements, text also contains diverse and rich linguistic markers of the speaker's emotion. The purpose of this study is to design an emotion ontology that can be used in human language technology. Previous work on text-based emotion processing classified emotions, compiled descriptive vocabulary lists for each emotion, and searched for them in text, so the accuracy of the extracted emotions was not high. In contrast, the emotion ontology proposed in this study has the following advantages. First, it classifies categories of emotion expression by described target (linguistic vs. non-linguistic) and by manner (expressive, descriptive, iconic) and establishes correspondences among the six heterogeneous categories, so it can be applied to multimodal environments. Second, 24 emotion specifications were selected so that fine-grained emotions can be classified while remaining distinguishable from one another, and intensity and polarity were defined as attributes for even finer classification. Third, attributes for the experiencer, the described target, the manner, and linguistic features were introduced so that emotion expressions appearing in text can be explicitly distinguished. In addition, extensibility was considered so that the proposed emotion ontology is not limited to Korean processing but can also be used for multilingual processing.
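
To make the described components concrete, the sketch below shows one possible way to encode them as data structures. The class names, the example entry, and the attribute encodings are hypothetical; only the described-target and manner categories, the intensity and polarity attributes, and the 24-entry inventory come from the abstract.

```python
# Hypothetical sketch of the emotion-ontology components described in the abstract.
# Class names and the example entry are illustrative, not the authors' schema.
from dataclasses import dataclass
from enum import Enum

class DescribedTarget(Enum):      # what the expression describes
    LINGUISTIC = "linguistic"
    NON_LINGUISTIC = "non-linguistic"

class Manner(Enum):               # how the emotion is expressed
    EXPRESSIVE = "expressive"
    DESCRIPTIVE = "descriptive"
    ICONIC = "iconic"

@dataclass
class EmotionSpec:
    """One of the 24 emotion specifications, with intensity and polarity attributes."""
    name: str
    intensity: float              # e.g. 0.0 (weak) .. 1.0 (strong)
    polarity: str                 # "positive" or "negative"

@dataclass
class EmotionExpression:
    """A textual emotion marker linked to its category and emotion specification."""
    surface_form: str
    target: DescribedTarget
    manner: Manner
    emotion: EmotionSpec

# Hypothetical example entry
joy = EmotionSpec(name="joy", intensity=0.8, polarity="positive")
entry = EmotionExpression("기쁘다", DescribedTarget.LINGUISTIC, Manner.EXPRESSIVE, joy)
print(entry)
```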


Interaction Human Model and Interface for Increasing User's Presence and Task Performance in Near-body Virtual Space (가상 근신(近身) 공간에서의 작업 성능과 사용자 존재감 향상을 위한 상호작용 신체 모델과 인터페이스)

  • Yang Ungyeon;Kim Yongwan;Son Wookho
    • Proceedings of the Korea Information Processing Society Conference, 2004.11a, pp.975-978, 2004
  • In this paper, we describe the current state of technology and the research and development directions for building virtual reality systems, focusing on the implementation of a virtual human body model that is spatially and perceptually matched to the user, in order to improve the user's presence and task performance in near-body space tasks based on direct interaction. From the viewpoint of multimodal interaction and consistent, realistic interface development methodology, a key factor to consider when implementing an ideal virtual reality system is the agreement between the visual perception model of the virtual space the user encounters and the user's proprioceptive model. Therefore, the user's own body must be visualized in agreement with the user's movements, realistic haptic feedback is important to support natural direct interaction in near-body space, and synchronized, realistic auditory feedback is important when presenting spatial information. Considering the limitations of current, imperfect interface technology, these three main sensory interface methods (sensory channels, modalities) should be studied as complementary to one another, with application methods optimized for the target application environment.


Combining Feature Fusion and Decision Fusion in Multimodal Biometric Authentication (다중 바이오 인증에서 특징 융합과 결정 융합의 결합)

  • Lee, Kyung-Hee
    • Journal of the Korea Institute of Information Security & Cryptology, v.20 no.5, pp.133-138, 2010
  • We present a new multimodal biometric authentication method that performs both feature-level fusion and decision-level fusion. After generating a support vector machine for new features created by integrating face and voice features, the final authentication decision is made by integrating the decisions of the face SVM classifier, the voice SVM classifier, and the integrated-feature SVM classifier. We justify our proposal by comparing our method with the traditional one in experiments with the XM2VTS multimodal database. The experiments show that our multilevel fusion algorithm gives a higher recognition rate than the existing schemes.
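
The method above trains an SVM on fused face-and-voice features and then fuses its decision with those of the individual face and voice SVMs. A compact scikit-learn sketch of this multilevel fusion follows; the synthetic features and the probability-averaging rule are assumptions for illustration, not the paper's exact fusion rule or the XM2VTS data.

```python
# Sketch: feature-level fusion (concatenated face+voice SVM) combined with
# decision-level fusion of three SVM scores. Data and fusion rule are illustrative.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
face = rng.normal(size=(n, 20))             # stand-in face features
voice = rng.normal(size=(n, 12))            # stand-in voice features
y = rng.integers(0, 2, size=n)              # 1 = genuine, 0 = impostor

fused = np.hstack([face, voice])            # feature-level fusion

svm_face = SVC(probability=True).fit(face, y)
svm_voice = SVC(probability=True).fit(voice, y)
svm_fused = SVC(probability=True).fit(fused, y)

def authenticate(face_x, voice_x, threshold=0.5):
    """Decision-level fusion: average the three SVMs' genuine-class probabilities."""
    fused_x = np.hstack([face_x, voice_x])
    score = (svm_face.predict_proba(face_x)[:, 1]
             + svm_voice.predict_proba(voice_x)[:, 1]
             + svm_fused.predict_proba(fused_x)[:, 1]) / 3
    return score >= threshold

print(authenticate(face[:3], voice[:3]))
```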

Multimodal MRI analysis model based on deep neural network for glioma grading classification (신경교종 등급 분류를 위한 심층신경망 기반 멀티모달 MRI 영상 분석 모델)

  • Kim, Jonghun;Park, Hyunjin
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2022.05a, pp.425-427, 2022
  • The grade of a glioma is important information related to survival, so classifying the grade before treatment is important for evaluating tumor progression and planning treatment. Glioma grading is mostly divided into high-grade glioma (HGG) and low-grade glioma (LGG). In this study, image preprocessing techniques are applied to analyze magnetic resonance imaging (MRI) with deep neural network models, and the classification performance of the models is evaluated. The best-performing model, EfficientNet-B6, achieves an accuracy of 0.9046, sensitivity of 0.9570, specificity of 0.7976, AUC of 0.8702, and F1-score of 0.8152 in 5-fold cross-validation.
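
A minimal sketch of the evaluation setup, fine-tuning a torchvision EfficientNet-B6 with a two-class head under stratified 5-fold cross-validation, is given below. The dummy inputs, input size, optimizer, and single training step per fold are placeholder assumptions; the paper's multimodal MRI preprocessing is not reproduced.

```python
# Sketch: EfficientNet-B6 with a 2-class head evaluated by 5-fold cross-validation.
# Dummy tensors stand in for the preprocessed multimodal MRI slices.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b6
from sklearn.model_selection import StratifiedKFold

X = torch.randn(20, 3, 224, 224)             # placeholder images
y = torch.tensor([0, 1] * 10)                # 0 = LGG, 1 = HGG (assumed labels)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (tr, te) in enumerate(skf.split(X.numpy(), y.numpy())):
    model = efficientnet_b6(weights=None)    # weights="IMAGENET1K_V1" to use pretraining
    model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)

    model.train()                            # one placeholder step on a small batch
    batch = tr[:8]
    loss = nn.functional.cross_entropy(model(X[batch]), y[batch])
    opt.zero_grad(); loss.backward(); opt.step()

    model.eval()
    with torch.no_grad():
        acc = (model(X[te]).argmax(1) == y[te]).float().mean()
    print(f"fold {fold}: accuracy {acc:.3f}")
```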


Anomaly Detection Methodology Based on Multimodal Deep Learning (멀티모달 딥 러닝 기반 이상 상황 탐지 방법론)

  • Lee, DongHoon;Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.28 no.2, pp.101-125, 2022
  • Recently, with the development of computing technology and the improvement of cloud environments, deep learning technology has advanced, and attempts to apply deep learning to various fields are increasing. A typical example is anomaly detection, a technique for identifying values or patterns that deviate from normal data. Among the representative types of anomalies, contextual anomalies, which require understanding of the overall situation, are very difficult to detect. In general, anomaly detection in image data is performed using a model pre-trained on large datasets. However, since such pre-trained models are built with a focus on object classification, they have limited applicability to anomaly detection that must understand complex situations created by multiple objects. Therefore, in this study, we propose a new two-step pre-trained model for detecting abnormal situations. Our methodology performs additional learning through image captioning to understand not only individual objects but also the complicated situations they create. Specifically, the proposed methodology transfers the knowledge of a pre-trained model that learned object classification on ImageNet data to an image captioning model and uses the captions that describe the situations represented by the images. Afterwards, the weights obtained by learning situational characteristics from images and captions are extracted and fine-tuned to generate the anomaly detection model. To evaluate the proposed methodology, an anomaly detection experiment was performed on 400 situational images, and the results showed that it was superior to the existing traditional pre-trained model in terms of anomaly detection accuracy and F1-score.
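
The two steps described above can be summarized as (1) moving an ImageNet-pretrained encoder into an image-captioning model so that it also learns situational context, and (2) reusing that encoder with a small head fine-tuned for anomaly detection. The PyTorch sketch below outlines this flow; the ResNet-18 encoder, the LSTM decoder, and all sizes are stand-in assumptions rather than the paper's exact architecture.

```python
# Sketch of the two-step transfer: ImageNet encoder -> captioning model -> anomaly head.
# Encoder choice, decoder design, and sizes are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet18

VOCAB, EMBED, HIDDEN = 1000, 256, 256        # assumed captioning vocabulary and sizes

class CaptioningModel(nn.Module):
    """Step 1: the ImageNet-pretrained encoder is further trained via captioning."""
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)    # weights="IMAGENET1K_V1" for pretraining
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # drop the fc layer
        self.embed = nn.Embedding(VOCAB, EMBED)
        self.lstm = nn.LSTM(EMBED + 512, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB)

    def forward(self, images, captions):
        feat = self.encoder(images).flatten(1)                   # (batch, 512)
        feat = feat.unsqueeze(1).expand(-1, captions.size(1), -1)
        x = torch.cat([self.embed(captions), feat], dim=2)       # word + image context
        h, _ = self.lstm(x)
        return self.out(h)                                       # next-word logits

class AnomalyDetector(nn.Module):
    """Step 2: reuse the caption-trained encoder and fine-tune a small anomaly head."""
    def __init__(self, caption_model: CaptioningModel):
        super().__init__()
        self.encoder = caption_model.encoder                     # situation-aware weights
        self.head = nn.Linear(512, 2)                            # normal vs. abnormal

    def forward(self, images):
        return self.head(self.encoder(images).flatten(1))

cap = CaptioningModel()          # would be trained on image-caption pairs
det = AnomalyDetector(cap)       # then fine-tuned on the situational images
print(det(torch.randn(2, 3, 224, 224)).shape)   # torch.Size([2, 2])
```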

An On-line Speech and Character Combined Recognition System for Multimodal Interfaces (멀티모달 인터페이스를 위한 음성 및 문자 공용 인식시스템의 구현)

  • 석수영;김민정;김광수;정호열;정현열
    • Journal of Korea Multimedia Society, v.6 no.2, pp.216-223, 2003
  • In this paper, we present SCCRS (Speech and Character Combined Recognition System) for speaker/writer-independent, on-line multimodal interfaces. In general, the CHMM (Continuous Hidden Markov Model) is known to be a very useful method for both speech recognition and on-line character recognition. In the proposed method, the same CHMM is applied to both speech and character recognition to construct a combined system. For this purpose, 115 CHMMs, each with 3 states and 9 transitions, are constructed using the MLE (Maximum Likelihood Estimation) algorithm. Different features are extracted for speech and character recognition: MFCC (Mel Frequency Cepstrum Coefficient) features are used for speech in preprocessing, while position parameters are used for cursive characters. At the recognition step, the proposed SCCRS employs OPDP (One Pass Dynamic Programming) so as to be a practical combined recognition system. Experimental results show that the recognition rates for voice phonemes, voice words, cursive character graphemes, and cursive character words are 51.65%, 88.6%, 85.3%, and 85.6%, respectively, when no language models are used, demonstrating the efficiency of the proposed system.
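
A rough sketch of the shared-framework idea, continuous Gaussian HMMs trained and scored on modality-specific features (MFCC frames for speech, pen-position features for handwriting), is given below using librosa and hmmlearn. It only illustrates isolated-unit selection by maximum likelihood over per-unit models; the paper's 115 three-state models and OPDP connected decoding are not reproduced, and all sizes are assumptions.

```python
# Sketch: the same Gaussian-HMM machinery handles speech (MFCC) or handwriting
# (pen-position) feature sequences; the best-scoring model gives the recognized unit.
import numpy as np
import librosa
from hmmlearn.hmm import GaussianHMM

def speech_features(y: np.ndarray, sr: int = 16000) -> np.ndarray:
    """13 MFCCs per frame, shaped (time, 13), as the speech front end."""
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T

def pen_features(points: np.ndarray) -> np.ndarray:
    """Normalized pen positions plus deltas, shaped (time, 4), as the handwriting front end."""
    norm = (points - points.mean(0)) / (points.std(0) + 1e-6)
    return np.hstack([norm, np.vstack([np.zeros((1, 2)), np.diff(norm, axis=0)])])

def train_unit_model(sequences: list[np.ndarray]) -> GaussianHMM:
    """Fit one 3-state continuous HMM to all training sequences of a unit."""
    model = GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
    model.fit(np.vstack(sequences), lengths=[len(s) for s in sequences])
    return model

def recognize(models: dict[str, GaussianHMM], feats: np.ndarray) -> str:
    """Pick the unit whose HMM gives the highest log-likelihood for the sequence."""
    return max(models, key=lambda name: models[name].score(feats))

# Dummy usage: random 13-dim sequences stand in for real MFCC feature sequences.
rng = np.random.default_rng(0)
models = {u: train_unit_model([rng.normal(size=(30, 13)) for _ in range(3)])
          for u in ("a", "b")}
print(recognize(models, rng.normal(size=(25, 13))))
```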
