• Title/Abstract/Keyword: Multimodal Information

255 search results (processing time: 0.024 s)

Multimodal 분포 데이터를 위한 Bhattacharyya distance 기반 분류 에러예측 기법 (Estimation of Classification Error Based on the Bhattacharyya Distance for Data with Multimodal Distribution)

  • 최의선;이철희
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2000년도 하계종합학술대회 논문집(4)
    • /
    • pp.85-87
    • /
    • 2000
  • In pattern classification, the Bhattacharyya distance has been used as a class separability measure and provides useful information for feature selection and extraction. In this paper, we propose a method to predict the classification error for multimodal data based on the Bhattacharyya distance. In our approach, we first approximate the pdf of the multimodal distribution with a Gaussian mixture model and then find the Bhattacharyya distance and the classification error. Experimental results showed a strong relationship between the Bhattacharyya distance and the classification error for multimodal data.
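The quantities involved have closed forms for Gaussian densities. As a minimal sketch (not the authors' code), the Bhattacharyya distance between two Gaussians and the error bound it induces can be written as:

```python
import numpy as np

def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
    """Closed-form Bhattacharyya distance between two Gaussian densities."""
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov1, cov2 = np.asarray(cov1, float), np.asarray(cov2, float)
    cov = (cov1 + cov2) / 2.0                        # averaged covariance
    diff = mu2 - mu1
    term1 = diff @ np.linalg.solve(cov, diff) / 8.0  # mean-separation term
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

def bhattacharyya_error_bound(db, p1=0.5, p2=0.5):
    """Upper bound on the Bayes error: sqrt(p1 * p2) * exp(-D_B)."""
    return np.sqrt(p1 * p2) * np.exp(-db)
```

For two identical densities the distance is zero and the bound degenerates to sqrt(p1 * p2); larger distances give exponentially smaller error bounds, which is why the distance tracks the classification error.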


복합운송의 물류경쟁력 강화를 위한 실천적 방안 (The Practical Strength of Logistics Competition Power for Efficiency of Combined Transport Transaction)

  • 이학승
    • 통상정보연구
    • /
    • Vol. 9 No. 2
    • /
    • pp.285-303
    • /
    • 2007
  • As interest in efficient logistics has grown recently, the importance of multimodal transport, which plays a key role in logistics, has emerged. Although there are many issues concerning multimodal transport, the efficiency of multimodal transport seems to be the most important. This paper examines the problems and systems of multimodal transport, including transportation documents and customs clearance for door-to-door services. I hope our country will adopt total logistics automation systems to encourage multimodal transport and will make partial amendments to the commercial code, including the customs clearance regulations. This study should assist the development of logistics and the enlargement of multimodal transport transactions.


ISBP상의 복합운송서류의 일치성에 관한 심사기준 (Examination Criteria on the Compliance of Multimodal Transport Document in the ISBP)

  • 전순환
    • 통상정보연구
    • /
    • Vol. 7 No. 4
    • /
    • pp.219-243
    • /
    • 2005
  • The purpose of this article is to analyze the examination criteria for the compliance of multimodal transport documents in the ISBP. When the goods are taken in charge by the multimodal transport operator, he shall issue a multimodal transport document which, at the option of the consignor, shall be in either negotiable or non-negotiable form. The multimodal transport document shall be signed by the multimodal transport operator or by a person authorized by him. When the multimodal transport document is presented by the beneficiary to the bank in letter of credit operations, the bank should examine the bill of exchange and/or the shipping documents, including the multimodal transport document. There are two sets of rules governing the examination of documents in letter of credit operations: the "Uniform Customs and Practice for Documentary Credits (UCP 500)", approved by the Banking Commission on March 10, 1993, and the "International Standard Banking Practice for the Examination of Documents under Documentary Credits (ISBP)", approved by the ICC Banking Commission in October 2002. Therefore, this article studies multimodal transport documents presented under documentary credits on the basis of the UCP 500 and the ISBP that reflects it.


Adaptive Multimodal In-Vehicle Information System for Safe Driving

  • Park, Hye Sun;Kim, Kyong-Ho
    • ETRI Journal
    • /
    • Vol. 37 No. 3
    • /
    • pp.626-636
    • /
    • 2015
  • This paper proposes an adaptive multimodal in-vehicle information system for safe driving. The proposed system filters input information based on both the priority assigned to the information and the given driving situation, to effectively manage input information and intelligently provide information to the driver. It then interacts with the driver using an adaptive multimodal interface by considering both the driving workload and the driver's cognitive reaction to the information it provides. It is shown experimentally that the proposed system can promote driver safety and enhance a driver's understanding of the information it provides by filtering the input information. In addition, the system can reduce a driver's workload by selecting an appropriate modality and corresponding level with which to communicate. An analysis of subjective questionnaires regarding the proposed system reveals that more than 85% of the respondents are satisfied with it. The proposed system is expected to provide prioritized information through an easily understood modality.
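The priority-based filtering and modality selection described above can be sketched roughly as follows. This is a hypothetical illustration; the workload levels, priority scale, and modality choices are assumptions, not the system's actual design:

```python
# Hypothetical sketch: filter messages by priority against the current
# driving workload, and pick a less visually demanding modality as the
# workload rises. All categories and thresholds are illustrative.
def filter_messages(messages, workload):
    """Keep only messages whose priority clears a workload-dependent bar."""
    threshold = {"low": 1, "medium": 2, "high": 3}[workload]
    return [m for m in messages if m["priority"] >= threshold]

def choose_modality(workload):
    """Higher workload -> briefer, audio-only presentation."""
    return {"low": "visual+audio", "medium": "audio", "high": "audio-brief"}[workload]
```

Under high workload only safety-critical (highest-priority) messages would pass the filter, and they would be delivered in the briefest modality.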

조명을 위한 인간 자세와 다중 모드 이미지 융합 - 인간의 이상 행동에 대한 강력한 탐지 (Multimodal Image Fusion with Human Pose for Illumination-Robust Detection of Human Abnormal Behaviors)

  • ;공성곤
    • 한국정보처리학회:학술대회논문집
    • /
    • 한국정보처리학회 2023년도 추계학술발표대회
    • /
    • pp.637-640
    • /
    • 2023
  • This paper presents multimodal image fusion with human pose for detecting abnormal human behaviors in low-illumination conditions. Detecting human behaviors in low illumination is challenging due to the limited visibility of the objects of interest in the scene. Multimodal image fusion simultaneously combines visual information in the visible spectrum and thermal radiation information in the long-wave infrared spectrum. We propose an abnormal-event detection scheme based on the multimodal fused image and human poses, using keypoints to characterize the action of the human body. Our method assumes that human behaviors are well correlated with body keypoints such as the shoulders, elbows, wrists, and hips. In detail, we extract the human keypoint coordinates from human targets in multimodal fused videos. The coordinate values are used as inputs to train a multilayer perceptron network that classifies human behaviors as normal or abnormal. Our experiments demonstrate significant results on a multimodal imaging dataset. The proposed model can capture the complex distribution patterns of both normal and abnormal behaviors.
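The keypoint-to-classifier pipeline can be sketched as follows. This is an illustrative toy version with a hand-rolled perceptron trained on synthetic data; the keypoint set and network size are assumptions, not the paper's configuration:

```python
import numpy as np

# Illustrative sketch (not the authors' code): flatten pose keypoints into a
# feature vector and classify it with a tiny one-hidden-layer perceptron
# trained by plain gradient descent.
KEYPOINTS = ["shoulder_l", "shoulder_r", "elbow_l", "elbow_r",
             "wrist_l", "wrist_r", "hip_l", "hip_r"]

def keypoints_to_features(pose):
    """pose: dict keypoint -> (x, y); returns a flat feature vector."""
    return np.array([c for k in KEYPOINTS for c in pose[k]], float)

def train_mlp(X, y, hidden=8, lr=0.5, epochs=500, seed=0):
    """Train a 1-hidden-layer MLP (tanh hidden, sigmoid output, log loss)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, hidden); b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                  # hidden activations
        p = 1 / (1 + np.exp(-(h @ W2 + b2)))      # sigmoid output
        g = (p - y) / len(y)                      # logistic-loss gradient
        W2 -= lr * h.T @ g; b2 -= lr * g.sum()
        gh = np.outer(g, W2) * (1 - h ** 2)       # backprop through tanh
        W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(0)
    return W1, b1, W2, b2

def predict(params, X):
    W1, b1, W2, b2 = params
    h = np.tanh(X @ W1 + b1)
    return (1 / (1 + np.exp(-(h @ W2 + b2))) > 0.5).astype(int)
```

In the paper's setting the feature vectors would come from keypoints detected in the fused visible/thermal video rather than from synthetic data.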

Multimodal 데이터에 대한 분류 에러 예측 기법 (Error Estimation Based on the Bhattacharyya Distance for Classifying Multimodal Data)

  • 최의선;김재희;이철희
    • 대한전자공학회논문지SP
    • /
    • Vol. 39 No. 2
    • /
    • pp.147-154
    • /
    • 2002
  • In this paper, we propose an error-estimation technique based on the Bhattacharyya distance for classifying data with multimodal characteristics. The proposed method experimentally obtains the classification error and the Bhattacharyya distance for multimodal data and examines whether the error can be predicted by inferring the relationship between the two. To compute the classification error and the Bhattacharyya distance, we estimate the probability density function of the multimodal data as a combination of subclasses with Gaussian distributions. Experiments with remote sensing data confirmed a close relationship between the classification error of multimodal data and the Bhattacharyya distance, and demonstrated that the error can be predicted using the Bhattacharyya distance.
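The subclass estimation step (approximating a multimodal pdf as a combination of Gaussian subclasses) can be illustrated with a tiny 1-D expectation-maximization fit; this is a sketch, not the authors' code:

```python
import numpy as np

def fit_gaussian_subclasses(x, n_iter=100):
    """Tiny 1-D EM fit of a two-component Gaussian mixture (illustrative)."""
    x = np.asarray(x, float)
    mu = np.array([x.min(), x.max()])      # crude initialization at extremes
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        pdf = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    order = np.argsort(mu)
    return mu[order], var[order], w[order]
```

Once each class pdf is represented as such Gaussian subclasses, the Bhattacharyya distance and the classification error can be computed per subclass pair, as the abstract describes.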

Brain MR Multimodal Medical Image Registration Based on Image Segmentation and Symmetric Self-similarity

  • Yang, Zhenzhen;Kuang, Nan;Yang, Yongpeng;Kang, Bin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 14 No. 3
    • /
    • pp.1167-1187
    • /
    • 2020
  • With the development of medical imaging technology, image registration has been widely used in the field of disease diagnosis. Registration between different modal images of brain magnetic resonance (MR) is particularly important for the diagnosis of brain diseases. However, previous registration methods do not take advantage of the prior knowledge of bilateral brain symmetry. Moreover, the difference in gray-scale information between different modal images increases the difficulty of registration. In this paper, a multimodal medical image registration method based on image segmentation and symmetric self-similarity is proposed. This method uses modality-independent self-similarity information and modality-consistency information to register images. More particularly, we propose two novel symmetric self-similarity constraint operators to constrain the segmented medical images and convert each modal medical image into a unified modality for multimodal image registration. The experimental results show that the proposed method can effectively reduce the error of brain MR multimodal medical image registration under rotation and translation transformations (average errors of 0.43 mm and 0.60 mm, respectively), and its accuracy is better than that of state-of-the-art image registration methods.
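A modality-independent self-similarity descriptor of the general kind used in such registration work can be sketched as follows. This is a rough MIND-style illustration; the patch size, neighborhood, and normalization are assumptions, not the authors' exact operators:

```python
import numpy as np

# Illustrative sketch: describe a pixel by exp-normalized patch distances to
# its 4-neighborhood. Similar local structure yields similar descriptors even
# when the two modalities' gray scales differ, which is the point of
# modality-independent self-similarity.
def self_similarity_descriptor(image, y, x, radius=1):
    def patch(cy, cx):
        return image[cy - radius:cy + radius + 1, cx - radius:cx + radius + 1]
    center = patch(y, x).astype(float)
    dists = []
    for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        d = patch(y + dy, x + dx).astype(float) - center
        dists.append(np.mean(d * d))    # mean squared patch difference
    dists = np.array(dists)
    sigma2 = max(dists.mean(), 1e-12)   # local noise/contrast estimate
    desc = np.exp(-dists / sigma2)
    return desc / desc.max()
```

Because the distances are normalized by a local estimate, the descriptor is invariant to linear gray-scale changes between modalities.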

멀티모달 정보를 이용한 응급상황 인식 시스템의 설계 및 구현 (Design and Implementation of Emergency Recognition System based on Multimodal Information)

  • 김영운;강선경;소인미;권대규;이상설;이용주;정성태
    • 한국컴퓨터정보학회논문지
    • /
    • Vol. 14 No. 2
    • /
    • pp.181-190
    • /
    • 2009
  • This paper proposes a multimodal emergency recognition system based on visual, audio, and gravity-sensor information. The proposed system consists of a video processing module, an audio processing module, a gravity-sensor processing module, and a multimodal integration module. The video and audio processing modules each recognize actions such as moving, stopping, and fainting, and pass the results to the multimodal integration module. The integration module recognizes an emergency from the delivered information and then reconfirms it by asking the user a question over the audio channel and recognizing the answer. In experiments, the recognition rate was 91.5% for video and 94% for the wearable gravity sensor, but integrating them recognized emergencies with 100% accuracy.
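The integration step can be sketched as a simple decision-fusion rule followed by an audio confirmation. The labels and the voting rule below are hypothetical, not the paper's exact logic:

```python
# Hypothetical sketch: fuse per-modality decisions by majority vote on
# "fainted", then require a failed (or negative) audio confirmation before
# raising an alarm.
def fuse_decisions(video, audio, sensor):
    """Each input is one of 'moving', 'stopped', 'fainted'."""
    votes = [video, audio, sensor]
    return "possible_emergency" if votes.count("fainted") >= 2 else "normal"

def confirm_emergency(fused_state, user_answered_ok):
    # The system asks "Are you OK?" over the audio channel; no answer
    # (or a negative one) confirms the emergency.
    if fused_state == "possible_emergency" and not user_answered_ok:
        return "emergency"
    return "normal"
```

The confirmation dialog is what lets the combined system reject the false alarms that any single modality would raise on its own.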

이동환경에서 치열영상과 음성을 이용한 멀티모달 화자인증 시스템 구현 (An Implementation of Multimodal Speaker Verification System using Teeth Image and Voice on Mobile Environment)

  • 김동주;하길람;홍광석
    • 전자공학회논문지CI
    • /
    • Vol. 45 No. 5
    • /
    • pp.162-172
    • /
    • 2008
  • This paper proposes a multimodal speaker verification method that uses teeth images and voice as biometric information to verify a person's identity in a mobile environment. The proposed method acquires the biometric information through the image and audio input devices of a smartphone, one of the terminal devices of the mobile environment, and performs user verification with it. In addition, to improve overall verification performance, the method combines the two single-modality results in a multimodal fashion; considering the limited resources of the system, a weighted-sum rule was chosen as the fusion method, being relatively simple yet effective. The performance of the proposed multimodal speaker verification system was evaluated on a database of 40 users collected on a smartphone. In the experiments, the single-modality results using teeth images and voice showed EERs of 8.59% and 11.73%, respectively, while the multimodal result showed an EER of 4.05%. Thus, combining the two single biometric results with a simple weighted sum yielded a substantial improvement in verification performance.
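The weighted-sum fusion and EER evaluation can be sketched as follows. This is a minimal illustration; the weight value is an assumption, not the one used in the paper:

```python
import numpy as np

# Hedged sketch of weighted-sum score fusion and equal-error-rate (EER)
# computation; the weight w below is illustrative only.
def fuse_scores(teeth_scores, voice_scores, w=0.6):
    """Weighted sum of two per-modality match scores (higher = more genuine)."""
    return w * np.asarray(teeth_scores, float) + (1 - w) * np.asarray(voice_scores, float)

def equal_error_rate(genuine, impostor):
    """EER: the operating point where false-accept and false-reject rates meet."""
    genuine, impostor = np.asarray(genuine, float), np.asarray(impostor, float)
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    eer, gap = 1.0, np.inf
    for t in thresholds:
        far = np.mean(impostor >= t)   # impostors wrongly accepted
        frr = np.mean(genuine < t)     # genuine users wrongly rejected
        if abs(far - frr) < gap:
            gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```

In the paper's setting, fusing the teeth and voice scores this way moves the crossover point of the two error curves, lowering the EER below either single modality's.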

미들웨어 기반의 텔레매틱스용 멀티모달 인터페이스 (A Multimodal Interface for Telematics based on Multimodal middleware)

  • 박성찬;안세열;박성수;구명완
    • 대한음성학회:학술대회논문집
    • /
    • 대한음성학회 2007년도 한국음성과학회 공동학술대회 발표논문집
    • /
    • pp.41-44
    • /
    • 2007
  • In this paper, we introduce a system in which a multimodal interface based on multimodal middleware is plugged into a car navigation scenario. In a map-based system, the combination of speech and pen input/output modalities can offer users greater expressive power. To accomplish multimodal tasks in car environments, we chose SCXML (State Chart XML), a W3C-standard multimodal authoring language, to control modality components such as XHTML, VoiceXML, and GPS. In the Network Manager, GPS signals from the navigation software are converted to the EMMA meta language and sent to the MultiModal Interaction Runtime Framework (MMI). Not only does the MMI handle GPS signals and the user's multimodal I/O, but it also combines them with device information, user preferences, and reasoned RDF to give the user intelligent or personalized services. A self-simulation test has shown that the middleware accomplishes a navigational multimodal task over multiple users in car environments.
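The GPS-to-EMMA conversion step might look roughly like this. The element names follow the W3C EMMA 1.0 vocabulary, but the payload layout and the attribute values here are assumptions, not the paper's actual format:

```python
import xml.etree.ElementTree as ET

EMMA_NS = "http://www.w3.org/2003/04/emma"

# Hedged sketch: wrap a GPS reading in an EMMA-style interpretation before
# handing it to the MMI runtime. The <position> payload element and the
# emma:medium/emma:mode values are illustrative assumptions.
def gps_to_emma(lat, lon, interp_id="gps1"):
    ET.register_namespace("emma", EMMA_NS)
    root = ET.Element(f"{{{EMMA_NS}}}emma", {"version": "1.0"})
    interp = ET.SubElement(root, f"{{{EMMA_NS}}}interpretation",
                           {"id": interp_id,
                            f"{{{EMMA_NS}}}medium": "sensor",
                            f"{{{EMMA_NS}}}mode": "gps"})
    pos = ET.SubElement(interp, "position")   # assumed payload element
    pos.set("lat", str(lat))
    pos.set("lon", str(lon))
    return ET.tostring(root, encoding="unicode")
```

Wrapping every input in a common annotation format like this is what lets the MMI combine GPS readings with speech and pen events uniformly.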
