• Title/Summary/Keyword: Multimodal Information

Search results: 257 items (processing time: 0.026 s)

Emergency Situation Recognition System Using Multimodal Information

  • 김영운;강선경;소인미;한대경;김윤진;정성태
    • Proceedings of the IEEK Conference / IEEK 2008 Summer Conference / pp.757-758 / 2008
  • This paper proposes an emergency recognition system using multimodal information extracted by an image processing module, a voice processing module, and a gravity sensor processing module. Each module detects predefined events such as moving, stopping, and fainting, and transfers them to the multimodal integration module. The integration module recognizes an emergency situation from the transferred events and rechecks it by asking the user a question and recognizing the answer. Experiments were conducted on fainting motions in a living room and a bathroom. The results show that the proposed system is more robust than previous methods and effectively recognizes emergencies in various situations.

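
The event-based integration described above can be sketched as follows. This is a minimal illustration of the idea, assuming invented event labels, function names, and a yes/no confirmation step; it is not the authors' implementation.

```python
# Hedged sketch of event-level fusion for emergency recognition.
# Event labels and the confirmation logic are illustrative assumptions.

EMERGENCY_EVENTS = {"fainting", "falling"}

def integrate(events):
    """Return True if any processing module reported an emergency-class event."""
    return any(e in EMERGENCY_EVENTS for e in events)

def recognize(events, user_answer=None):
    """Fuse module events; recheck with the user before raising an alarm."""
    if not integrate(events):
        return "normal"
    # Recheck step: ask the user a question and recognize the (voice) answer.
    if user_answer is not None and user_answer.lower() in ("yes", "i am ok"):
        return "normal"        # user responded coherently, cancel the alarm
    return "emergency"         # no answer or negative answer

# Example: image module saw "fainting"; the user gives no answer.
print(recognize(["moving", "fainting"]))   # emergency
print(recognize(["moving", "stopping"]))   # normal
```

The recheck step is what makes the system robust: a single module's false positive is cancelled when the user answers the follow-up question.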

Multimodal Context Embedding for Scene Graph Generation

  • Jung, Gayoung;Kim, Incheol
    • Journal of Information Processing Systems / Vol. 16, No. 6 / pp.1250-1260 / 2020
  • This study proposes a novel deep neural network model that can accurately detect objects and their relationships in an image and represent them as a scene graph. The proposed model utilizes several multimodal features, including linguistic features and visual context features, to accurately detect objects and relationships. In addition, in the proposed model, context features are embedded using graph neural networks to depict the dependencies between two related objects in the context feature vector. This study demonstrates the effectiveness of the proposed model through comparative experiments using the Visual Genome benchmark dataset.
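
The context-embedding idea — letting related objects exchange information through a graph neural network — can be sketched with one round of message passing. Dimensions, weights, and the update rule below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

# Hedged sketch of context embedding via one GNN message-passing layer:
# each object's feature vector mixes in the mean of its related objects.

rng = np.random.default_rng(0)
D = 8                                    # feature dimension (assumption)
W_self = rng.standard_normal((D, D)) * 0.1
W_nbr = rng.standard_normal((D, D)) * 0.1

def message_pass(h, edges):
    """One layer of message passing.

    h     : (N, D) per-object context features
    edges : list of (i, j) related-object pairs, treated as undirected
    """
    n = h.shape[0]
    agg = np.zeros_like(h)
    deg = np.zeros(n)
    for i, j in edges:
        agg[i] += h[j]; deg[i] += 1
        agg[j] += h[i]; deg[j] += 1
    deg[deg == 0] = 1                    # isolated nodes get a zero message
    msg = agg / deg[:, None]
    return np.maximum(0.0, h @ W_self.T + msg @ W_nbr.T)   # ReLU

# Three detected objects; a "person-rides-horse" style dependency (0, 1).
h = rng.standard_normal((3, D))
h_ctx = message_pass(h, edges=[(0, 1)])
print(h_ctx.shape)                       # (3, 8)
```

After this step, each related pair's feature vectors encode their mutual context, which is what lets the downstream predicate classifier see the dependency between the two objects.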

AI Multimodal Sensor-based Pedestrian Image Recognition Algorithm

  • 신성윤;조승표;조광현
    • Proceedings of the KSCI Conference / KSCI 2023 67th Winter Conference, Vol. 31, No. 1 / pp.407-408 / 2023
  • In this paper, we aim to develop a multimodal algorithm that achieves recognition performance of over 95% in daytime illumination environments, and over 90% in bad weather (rain and snow) and night-time illumination environments.


Walking Assistance System for Sight-Impaired People Based on a Multimodal Information Transformation Technique

  • 유재형;한영준;한헌수
    • Journal of Institute of Control, Robotics and Systems / Vol. 15, No. 5 / pp.465-472 / 2009
  • This paper proposes a multimodal information transformation system that converts image information into voice information to inform sight-impaired people of the walking area and obstacles, which are extracted from an image acquired by a single CCD camera. Using a chain-code line detection algorithm, the walking area is found from the vanishing point and the sidewalk boundary in the edge image, and obstacles are detected by a Gabor filter that extracts vertical lines within the walking area. The proposed system utters pre-defined sentences composed of template words describing the walking area and obstacles, thereby providing useful voice information to sight-impaired users trying to reach their destination. Experiments in indoor and outdoor environments verified that the proposed algorithm accurately conveys walking parameters in sentence form.
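
The obstacle-detection step — a Gabor filter tuned to vertical lines — can be sketched as below. The kernel parameters (sigma, lambda, gamma, kernel size) are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

# Hedged sketch: a real (cosine) Gabor kernel whose carrier varies along x
# (theta = 0), so it responds strongly to vertical structure such as poles.

def gabor_kernel(ksize=9, sigma=2.0, theta=0.0, lambd=4.0, gamma=0.5):
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr) ** 2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lambd)

def filter_valid(img, k):
    """Plain 'valid' 2-D correlation (no padding), for illustration only."""
    kh, kw = k.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

# A vertical bright line excites the filter far more than a horizontal one.
img_v = np.zeros((21, 21)); img_v[:, 10] = 1.0
img_h = np.zeros((21, 21)); img_h[10, :] = 1.0
k = gabor_kernel()
resp_v = np.abs(filter_valid(img_v, k)).max()
resp_h = np.abs(filter_valid(img_h, k)).max()
print(resp_v > resp_h)                  # True
```

Thresholding the filter response inside the detected walking area then yields candidate obstacle columns, which the system can verbalize with its template sentences.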

Study on Evaluation Criteria for Multimodal Transport Routing Selection

  • 김소연;최형림;김현수;박남규;조재형;박용성;조민제
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / KINPR 2006 Spring Conference and 30th Anniversary Symposium / pp.265-271 / 2006
  • As production, sales, and distribution have spread across the world, the global economy has become globalized, and international transport has shifted toward a system that emphasizes speed and value-added services; accordingly, it is being reorganized around international multimodal transport, which systematically links maritime, air, and rail transport. These changes call for international multimodal transport routes that deliver production, sales, and distribution just in time and provide multidimensional logistics services to consumers across global networks. However, such routes have not been fully utilized, because the information linkage for international transport and the systems connecting transport modes remain inadequate. In particular, while selection criteria for third-party logistics providers and carriers have been proposed in Korea, no systematic evaluation criteria have been offered to the logistics companies that plan and execute transport for selecting international multimodal transport routes. This study therefore reviews the major literature on multimodal transport route selection, derives evaluation criteria through interviews with practitioners, measures them using the analytic hierarchy process (AHP), and presents evaluation criteria for multimodal transport routing selection.

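
The AHP measurement step mentioned above can be sketched as follows: derive criterion weights from a pairwise comparison matrix via its principal eigenvector and check Saaty's consistency ratio. The criteria and judgement values below are invented for illustration; they are not the study's data.

```python
import numpy as np

# Hedged AHP sketch: power iteration for the principal eigenvector of a
# reciprocal pairwise comparison matrix, plus a consistency check.

def ahp_weights(A, iters=100):
    """Priority weights = normalized principal eigenvector of A."""
    w = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(iters):
        w = A @ w
        w /= w.sum()
    return w

def consistency_ratio(A, w, ri={3: 0.58, 4: 0.90, 5: 1.12}):
    """Saaty's CR = CI / RI; CR < 0.1 is conventionally acceptable."""
    n = A.shape[0]
    lam = ((A @ w) / w).mean()          # estimate of the principal eigenvalue
    ci = (lam - n) / (n - 1)
    return ci / ri[n]

# Hypothetical judgements: cost vs. transit time vs. reliability.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w = ahp_weights(A)
print(np.round(w, 3), consistency_ratio(A, w) < 0.1)
```

With consistent judgements the weight vector ranks the criteria directly; each candidate route is then scored against the weighted criteria.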

Implementation and Evaluation of Harmful-Media Filtering Techniques using Multimodal-Information Extraction

  • Yeon-Ji, Lee;Ye-Sol, Oh;Na-Eun, Park;Il-Gu, Lee
    • Journal of Information and Communication Convergence Engineering / Vol. 21, No. 1 / pp.75-81 / 2023
  • Video platforms, including YouTube, have a structure in which the number of video views is directly related to the publisher's profits; video publishers therefore use provocative titles and thumbnails to attract viewers and garner more views. Conventional techniques for limiting such harmful videos have low detection accuracy and rely on follow-up measures based on user reports. To address these problems, this study proposes a technique that improves the accuracy of harmful-media filtering using the thumbnail, title, and audio data of a video. The proposed method analyzes these three pieces of multimodal information; if the number of harmful determinations exceeds a set threshold, the video is deemed harmful and its upload is restricted. The experimental results showed that the proposed multimodal information extraction technique for harmful-video filtering achieved 9% better detection accuracy than YouTube's Restricted Mode and 41% better performance than the YouTube automation system.
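
The thresholded decision rule described in the abstract reduces to majority voting over the three per-modality verdicts. The sketch below stubs out the detectors themselves and shows only the voting logic; the function name and default threshold are assumptions.

```python
# Hedged sketch of the multimodal harmfulness decision: each modality
# (thumbnail, title, audio) contributes one harmful/not-harmful vote.

def is_harmful(thumbnail_flag, title_flag, audio_flag, threshold=2):
    """Restrict the upload when at least `threshold` modalities vote harmful."""
    votes = sum([thumbnail_flag, title_flag, audio_flag])
    return votes >= threshold

print(is_harmful(True, True, False))   # True  -> restrict upload
print(is_harmful(False, True, False))  # False -> allow
```

Requiring agreement between modalities is what lifts accuracy over a single-modality filter: a provocative title alone is not enough to restrict a video whose thumbnail and audio are benign.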

Multimodal System by Data Fusion and Synergetic Neural Network

  • Son, Byung-Jun;Lee, Yill-Byung
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 5, No. 2 / pp.157-163 / 2005
  • In this paper, we present a multimodal system based on the fusion of two user-friendly biometric modalities: iris and face. To achieve robust identification and verification, we combine these two different biometric features. Specifically, we apply the 2-D discrete wavelet transform to extract low-dimensional feature sets from the iris and face images, and then use direct linear discriminant analysis (DLDA) to obtain a reduced joint feature vector (RJFV) from these feature sets. In addition, a synergetic neural network (SNN) computes the matching score of the preprocessed data. The system can operate in two modes: identifying a particular person or verifying a person's claimed identity. Our results for both cases show that the proposed method leads to a reliable person authentication system.
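
The first stage — shrinking each image with a 2-D discrete wavelet transform — can be sketched with a single level of the Haar wavelet, the simplest case. The paper's exact wavelet and the subsequent DLDA/SNN stages are not reproduced; the image size below is an assumption.

```python
import numpy as np

# Hedged sketch: one level of a 2-D Haar DWT. Each 2x2 block of pixels is
# mapped to one approximation coefficient and three detail coefficients,
# halving each spatial dimension.

def haar_dwt2(img):
    """Single-level 2-D Haar DWT; returns the four sub-bands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0      # approximation (low-low) band
    lh = (a - b + c - d) / 2.0      # detail band (naming conventions vary)
    hl = (a + b - c - d) / 2.0      # detail band
    hh = (a - b - c + d) / 2.0      # diagonal detail band
    return ll, lh, hl, hh

# A 320x240 image shrinks to a 160x120 LL band; repeating the transform
# on LL yields the low-dimensional feature set to fuse with the iris.
img = np.random.default_rng(1).random((240, 320))
ll, lh, hl, hh = haar_dwt2(img)
print(ll.shape)                     # (120, 160)
```

Concatenating the iris and face LL-band features gives the joint vector that DLDA would then project down to the RJFV.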

An Experimental Multimodal Command Control Interface for Car Navigation Systems

  • Kim, Kyungnam;Ko, Jong-Gook;Choi, SeungHo;Kim, Jin-Young;Kim, Ki-Jung
    • Proceedings of the IEEK Conference / IEEK 2000 ITC-CSCC, Vol. 1 / pp.249-252 / 2000
  • An experimental multimodal system combining natural input modes such as speech, lip movement, and gaze is proposed in this paper. It benefits from novel human-computer interaction (HCI) modalities and from multimodal integration to tackle the HCI bottleneck. The system allows the user to select menu items on the screen by employing speech recognition, lip reading, and gaze tracking components in parallel; face tracking serves as a supplementary component to gaze tracking and lip movement analysis. These key components are reviewed, and preliminary results are shown for multimodal integration and user testing on the prototype system. Notably, the system equipped with gaze tracking and lip reading is very effective in noisy environments, where the speech recognition rate is low and unstable. Our long-term interest is to build a user interface embedded in a commercial car navigation system (CNS).

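
Running the three recognizers in parallel and fusing their outputs can be sketched as score-level fusion: each modality scores every menu item, and the weighted sum picks the winner. The menu items, scores, and weights below are invented for illustration.

```python
# Hedged sketch of score-level fusion for multimodal menu selection.
# In a noisy cabin, gaze and lip scores compensate for uncertain speech.

def select_item(menu, scores, weights):
    """argmax over a weighted sum of per-modality confidence scores."""
    fused = [
        sum(weights[m] * scores[m][i] for m in scores)
        for i in range(len(menu))
    ]
    return menu[max(range(len(menu)), key=fused.__getitem__)]

menu = ["zoom in", "zoom out", "route home"]
scores = {
    "speech": [0.2, 0.1, 0.7],      # speech recognizer is uncertain
    "lip":    [0.1, 0.2, 0.7],      # lip reading agrees with speech
    "gaze":   [0.0, 0.1, 0.9],      # user is fixating the third item
}
weights = {"speech": 0.3, "lip": 0.3, "gaze": 0.4}
print(select_item(menu, scores, weights))   # route home
```

This parallel combination is why the prototype stays usable when the speech channel alone would fail: no single modality has to be reliable, only the weighted consensus.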

Multimodal Sentiment Analysis for Investigating User Satisfaction

  • 황교엽;송쯔한;박병권
    • The Journal of Information Systems (Korea Society of Information Systems) / Vol. 32, No. 3 / pp.1-17 / 2023
  • Purpose: The proliferation of data on the internet has created a need for innovative methods to analyze user satisfaction data. Traditional survey methods are becoming inadequate for the increasing volume and diversity of data, and new methods using unstructured internet data are being explored. While numerous comment-based user satisfaction studies have been conducted, only a few have explored user satisfaction through video and audio data. Multimodal sentiment analysis, which integrates multiple modalities, has gained attention due to its high accuracy and broad applicability.
    Design/methodology/approach: This study uses multimodal sentiment analysis to analyze user satisfaction with iPhone and Samsung products through online videos. The research reveals that the combination model integrating multiple data sources showed the best performance.
    Findings: The findings indicate that price is a crucial factor influencing user satisfaction, and users tend to exhibit more positive emotions when content with a product's price. The study highlights the importance of considering multiple factors when evaluating user satisfaction and provides valuable insights into the effectiveness of different data sources for sentiment analysis of product reviews.

Development of a Real-Time Verification System by Feature Extraction from Multimodal Biometrics Using a Hybrid Method

  • 조용현
    • Journal of the Korean Society of Industry Convergence / Vol. 9, No. 4 / pp.263-268 / 2006
  • This paper presents a real-time verification system based on feature extraction from multimodal biometrics using a hybrid method that combines moment balance and independent component analysis (ICA). Moment balance reduces the computational load by extracting the valid signal and excluding the needless background of the biometric data. ICA increases verification performance by removing overlapping signals and extracting statistically independent bases of the signals. The multimodal biometrics are faces and fingerprints, acquired by a web camera and a fingerprint acquisition device, respectively. The proposed system was applied to the fusion of 48 face and 48 fingerprint images (24 persons × 2 scenes) of 320×240 pixels. The experimental results show that the proposed system achieves superior verification performance in both speed and accuracy.

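
The moment-based preprocessing idea — locating the signal and discarding the background before ICA — can be sketched with raw image moments: compute the intensity centroid and crop a fixed window around it. The window size and the use of a simple centroid crop are illustrative assumptions, not the paper's exact "moment balance" procedure.

```python
import numpy as np

# Hedged sketch: crop a region of interest around the intensity centroid,
# computed from raw image moments, to exclude needless background.

def centroid(img):
    """Intensity centroid (row, col) from the raw moments m00, m10, m01."""
    m00 = img.sum()
    rows, cols = np.indices(img.shape)
    return (rows * img).sum() / m00, (cols * img).sum() / m00

def crop_around_centroid(img, size):
    """Fixed-size crop centred on the centroid, clamped to the image."""
    r, c = centroid(img)
    r0 = int(round(r)) - size // 2
    c0 = int(round(c)) - size // 2
    r0 = max(0, min(r0, img.shape[0] - size))
    c0 = max(0, min(c0, img.shape[1] - size))
    return img[r0:r0 + size, c0:c0 + size]

# Bright face-like blob off-centre in a 240x320 frame.
img = np.zeros((240, 320))
img[60:100, 200:260] = 1.0
roi = crop_around_centroid(img, 64)
print(roi.shape, roi.mean() > img.mean())   # (64, 64) True
```

Only the cropped region is passed to the ICA stage, which is how the preprocessing cuts the computational load while keeping the informative part of each face or fingerprint image.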