• Title/Summary/Keyword: Multimodal Information

Estimation of Classification Error Based on the Bhattacharyya Distance for Data with Multimodal Distribution (Multimodal 분포 데이터를 위한 Bhattacharyya distance 기반 분류 에러예측 기법)

  • Choe, Ui-Seon;Lee, Cheol-Hui
    • Proceedings of the IEEK Conference
    • /
    • 2000.06d
    • /
    • pp.85-87
    • /
    • 2000
  • In pattern classification, the Bhattacharyya distance has been used as a class separability measure and provides useful information for feature selection and extraction. In this paper, we propose a method to predict the classification error for multimodal data based on the Bhattacharyya distance. In our approach, we first approximate the probability density function of the multimodal distribution with a Gaussian mixture model and then compute the Bhattacharyya distance and the classification error (the standard definitions are recalled below). Experimental results showed that there is a strong relationship between the Bhattacharyya distance and the classification error for multimodal data.
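
The abstract does not reproduce the paper's equations, but the Bhattacharyya distance between two Gaussian densities and the classical bound it yields on the Bayes error are standard results that this approach builds on:

```latex
% Bhattacharyya distance between N(\mu_1, \Sigma_1) and N(\mu_2, \Sigma_2)
B = \frac{1}{8} (\mu_2 - \mu_1)^{T}
    \left[ \frac{\Sigma_1 + \Sigma_2}{2} \right]^{-1} (\mu_2 - \mu_1)
  + \frac{1}{2} \ln
    \frac{\left| \frac{\Sigma_1 + \Sigma_2}{2} \right|}{\sqrt{|\Sigma_1|\,|\Sigma_2|}}

% Bhattacharyya bound on the Bayes error for class priors P_1 and P_2
\varepsilon \le \sqrt{P_1 P_2}\, e^{-B}
```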

The Practical Strength of Logistics Competition Power for Efficiency of Combined Transport Transaction (복합운송의 물류경쟁력 강화를 위한 실천적 방안)

  • Lee, Hak-Seung
    • International Commerce and Information Review
    • /
    • v.9 no.2
    • /
    • pp.285-303
    • /
    • 2007
  • As interest in seamless logistics has grown in recent years, the importance of multimodal transport, which plays a key role in logistics, has come to the fore. Although there are many issues concerning multimodal transport, the efficiency of multimodal transport appears to be the most important. This paper examines the problems and systems of multimodal transport, including transportation documents and customs clearance for door-to-door services. I hope our country will adopt total logistics automation systems to encourage multimodal transport and make a partial amendment to the commercial code, including the customs clearance regulations. This study should assist in the development of logistics and the expansion of multimodal transport transactions.

Examination Criteria on the Compliance of Multimodal Transport Document in the ISBP (ISBP상의 복합운송서류의 일치성에 관한 심사기준)

  • Jeon, Soon-Hwan
    • International Commerce and Information Review
    • /
    • v.7 no.4
    • /
    • pp.219-243
    • /
    • 2005
  • The purpose of this article is to analyze the examination criteria for the compliance of multimodal transport documents in the ISBP. When the goods are taken in charge by the multimodal transport operator, he shall issue a multimodal transport document which, at the option of the consignor, shall be in either negotiable or non-negotiable form. The multimodal transport document shall be signed by the multimodal transport operator or by a person having authority from him. When the multimodal transport document is presented by the beneficiary to the bank in letter of credit operations, the bank should examine the bill of exchange and/or the shipping documents, including the multimodal transport document. Two sets of rules govern the examination of documents in letter of credit operations: one is the "Uniform Customs and Practice for Documentary Credits" (UCP 500), approved by the ICC Banking Commission on 10 March 1993; the other is the "International Standard Banking Practice for the Examination of Documents under Documentary Credits" (ISBP), approved by the ICC Banking Commission in October 2002. This article therefore studies the multimodal transport document presented under documentary credits on the basis of the UCP 500 and the ISBP that reflects it.

Adaptive Multimodal In-Vehicle Information System for Safe Driving

  • Park, Hye Sun;Kim, Kyong-Ho
    • ETRI Journal
    • /
    • v.37 no.3
    • /
    • pp.626-636
    • /
    • 2015
  • This paper proposes an adaptive multimodal in-vehicle information system for safe driving. The proposed system filters input information based on both the priority assigned to the information and the given driving situation, so as to manage input information effectively and provide it to the driver intelligently. It then interacts with the driver through an adaptive multimodal interface, considering both the driving workload and the driver's cognitive reaction to the information it presents. It is shown experimentally that the proposed system can promote driver safety and enhance a driver's understanding of the information it provides by filtering the input information. In addition, the system can reduce a driver's workload by selecting an appropriate modality, and a corresponding level of detail, with which to communicate. An analysis of subjective questionnaires regarding the proposed system reveals that more than 85% of the respondents are satisfied with it. The proposed system is expected to deliver prioritized information through an easily understood modality; a sketch of this filtering-and-selection logic follows below.
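
The abstract describes two mechanisms: priority-based filtering of incoming information against the driving situation, and workload-aware modality selection. The following is a minimal sketch of that logic, not the paper's implementation; all names, thresholds, and rules are illustrative assumptions.

```python
# Minimal sketch: priority filtering plus workload-aware modality selection.
# Thresholds and the priority scale are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Message:
    text: str
    priority: int  # 1 (critical, e.g. collision warning) .. 5 (ambient info)

def filter_messages(messages, driving_workload):
    """Drop low-priority messages when the driving situation is demanding."""
    # Assumed rule: the busier the driver, the stricter the priority cutoff.
    cutoff = 2 if driving_workload > 0.7 else 4
    return [m for m in messages if m.priority <= cutoff]

def select_modality(driving_workload):
    """Choose how to present information given the current workload."""
    if driving_workload > 0.7:
        return "speech"          # eyes stay on the road
    elif driving_workload > 0.4:
        return "speech+icon"     # redundant cue
    return "visual"              # full on-screen detail

msgs = [Message("Forward collision risk", 1), Message("New podcast episode", 5)]
workload = 0.8  # hypothetical estimate from the driving context
for m in filter_messages(msgs, workload):
    print(f"[{select_modality(workload)}] {m.text}")
```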

Multimodal Image Fusion with Human Pose for Illumination-Robust Detection of Human Abnormal Behaviors

  • Cuong H. Tran;Seong G. Kong
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.11a
    • /
    • pp.637-640
    • /
    • 2023
  • This paper presents multimodal image fusion with human pose for detecting abnormal human behaviors under low-illumination conditions. Detecting human behaviors in low illumination is challenging because of the limited visibility of the objects of interest in the scene. Multimodal image fusion combines visual information in the visible spectrum with thermal radiation information in the long-wave infrared spectrum. We propose an abnormal event detection scheme based on the multimodal fused image and human poses, using keypoints to characterize the action of the human body. Our method assumes that human behaviors are well correlated with body keypoints such as the shoulders, elbows, wrists, and hips. In detail, we extract the keypoint coordinates of human targets from the multimodal fused videos; the coordinate values are then used as inputs to train a multilayer perceptron network that classifies behaviors as normal or abnormal (a sketch of this stage is given below). Our experiments show significant results on a multimodal imaging dataset, and the proposed model can capture the complex distribution patterns of both normal and abnormal behaviors.
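
The classification stage lends itself to a short sketch: flattened 2-D keypoint coordinates are fed to a multilayer perceptron that labels each frame as normal or abnormal. This is a minimal illustration with random placeholder data; the network shape and keypoint count are assumptions, not the paper's configuration.

```python
# Minimal sketch of keypoint-based behavior classification with an MLP.
# The data is random placeholder data, not the paper's dataset.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_keypoints = 8                            # e.g. shoulders, elbows, wrists, hips
X = rng.random((200, n_keypoints * 2))     # 200 frames of flattened (x, y) coords
y = rng.integers(0, 2, size=200)           # placeholder normal(0)/abnormal(1) labels

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X, y)
print("train accuracy:", clf.score(X, y))
```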

Error Estimation Based on the Bhattacharyya Distance for Classifying Multimodal Data (Multimodal 데이터에 대한 분류 에러 예측 기법)

  • Choe, Ui-Seon;Kim, Jae-Hui;Lee, Cheol-Hui
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.2
    • /
    • pp.147-154
    • /
    • 2002
  • In this paper, we propose an error estimation method based on the Bhattacharyya distance for multimodal data. First, we seek an empirical relationship between the classification error and the Bhattacharyya distance. Then, we investigate the possibility of deriving an error estimation equation based on the Bhattacharyya distance for multimodal data. We assume that the distribution of multimodal data can be approximated as a mixture of several Gaussian distributions. Experimental results with remotely sensed data showed that there is a strong relationship between the Bhattacharyya distance and the classification error, and that it is possible to predict the classification error from the Bhattacharyya distance for multimodal data (a numerical sketch follows below).
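
As a numerical companion to the abstract, the sketch below computes the Bhattacharyya distance between two Gaussian components and the classical error bound derived from it; the GMM fitting step is omitted, and the means and covariances are illustrative.

```python
# Minimal sketch: Bhattacharyya distance between two Gaussians and the
# resulting bound on the Bayes error. Parameters are illustrative.

import numpy as np

def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
    cov = (cov1 + cov2) / 2.0
    diff = mu2 - mu1
    term1 = diff @ np.linalg.solve(cov, diff) / 8.0
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

mu1, cov1 = np.array([0.0, 0.0]), np.eye(2)
mu2, cov2 = np.array([2.0, 1.0]), np.eye(2) * 1.5
B = bhattacharyya_gaussian(mu1, cov1, mu2, cov2)
print("Bhattacharyya distance:", B)
# Bound for equal priors: eps <= sqrt(P1 * P2) * exp(-B) = 0.5 * exp(-B)
print("error bound (equal priors):", 0.5 * np.exp(-B))
```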

Brain MR Multimodal Medical Image Registration Based on Image Segmentation and Symmetric Self-similarity

  • Yang, Zhenzhen;Kuang, Nan;Yang, Yongpeng;Kang, Bin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.3
    • /
    • pp.1167-1187
    • /
    • 2020
  • With the development of medical imaging technology, image registration has been widely used in the field of disease diagnosis. Registration between different modal images of brain magnetic resonance (MR) is particularly important for the diagnosis of brain diseases. However, previous registration methods do not take advantage of the prior knowledge of bilateral brain symmetry. Moreover, the difference in gray-scale information between images of different modalities increases the difficulty of registration. In this paper, a multimodal medical image registration method based on image segmentation and symmetric self-similarity is proposed. The method uses modality-independent self-similarity information and modality-consistency information to register images. In particular, we propose two novel symmetric self-similarity constraint operators to constrain the segmented medical images and to convert each modal medical image into a unified representation for multimodal image registration (the idea behind such descriptors is sketched below). The experimental results show that the proposed method effectively reduces the error of brain MR multimodal medical image registration, with average errors of 0.43 mm and 0.60 mm under rotation and translation transformations respectively, an accuracy better than that of state-of-the-art image registration methods.
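
A minimal sketch of a modality-independent self-similarity descriptor in the spirit of the abstract (not the paper's two symmetric operators): each pixel is described by how similar its patch is to the patches of its four neighbors, so the descriptor depends only on structure within one image and can be compared across modalities. The patch size and normalization are assumptions.

```python
# Minimal sketch of a modality-independent self-similarity descriptor.

import numpy as np
from scipy.ndimage import uniform_filter

def self_similarity(img, patch=3):
    """(H, W, 4) descriptor: mean patch SSD between each pixel and its 4 neighbors."""
    img = img.astype(float)
    shifts = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    channels = []
    for dy, dx in shifts:
        shifted = np.roll(img, (dy, dx), axis=(0, 1))
        # Average the squared difference over a patch around each pixel.
        channels.append(uniform_filter((img - shifted) ** 2, size=patch))
    desc = np.stack(channels, axis=-1)
    # Map to (0, 1] so descriptors are comparable across modalities.
    return np.exp(-desc / (desc.mean() + 1e-8))

t1 = np.random.rand(64, 64)  # placeholder slice for one modality
t2 = 1.0 - t1                # inverted contrast mimics a second modality
d1, d2 = self_similarity(t1), self_similarity(t2)
# Inverting the contrast leaves the self-similarity structure unchanged:
print("mean descriptor difference:", np.mean((d1 - d2) ** 2))  # ~0
```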

Design and Implementation of Emergency Recognition System based on Multimodal Information (멀티모달 정보를 이용한 응급상황 인식 시스템의 설계 및 구현)

  • Kim, Eoung-Un;Kang, Sun-Kyung;So, In-Mi;Kwon, Tae-Kyu;Lee, Sang-Seol;Lee, Yong-Ju;Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.2
    • /
    • pp.181-190
    • /
    • 2009
  • This paper presents a multimodal emergency recognition system based on visual information, audio information, and gravity sensor information. It consists of a video processing module, an audio processing module, a gravity sensor processing module, and a multimodal integration module. The video processing module and the gravity sensor processing module detect actions such as moving, stopping, and fainting, and transfer them to the multimodal integration module. The multimodal integration module detects an emergency by fusing the transferred information and verifies it by asking a question and recognizing the answer via the audio channel (a sketch of this decision logic follows below). The experimental results show that the recognition rate is 91.5% with the video processing module alone and 94% with the gravity sensor processing module alone, but reaches 100% when both sources of information are combined.
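
A minimal sketch of the fusion-and-verification rule the abstract describes; the module outputs and the rule itself are assumptions for illustration.

```python
# Minimal sketch: fuse video and gravity-sensor decisions, then verify a
# suspected emergency over the audio channel.

def fuse(video_action, gravity_action, audio_answer):
    """Return True if the fused evidence indicates an emergency."""
    suspected = "fainting" in (video_action, gravity_action)
    if not suspected:
        return False
    # Audio verification: a "yes" to "Are you OK?" cancels the alarm;
    # silence (None) or any other answer confirms the emergency.
    return audio_answer != "yes"

print(fuse("fainting", "stopping", audio_answer=None))   # True  -> emergency
print(fuse("fainting", "fainting", audio_answer="yes"))  # False -> verified OK
```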

An Implementation of Multimodal Speaker Verification System using Teeth Image and Voice on Mobile Environment (이동환경에서 치열영상과 음성을 이용한 멀티모달 화자인증 시스템 구현)

  • Kim, Dong-Ju;Ha, Kil-Ram;Hong, Kwang-Seok
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.45 no.5
    • /
    • pp.162-172
    • /
    • 2008
  • In this paper, we propose a multimodal speaker verification method that uses teeth images and voice as biometric traits for personal verification on mobile terminal equipment. The proposed method acquires the biometric traits through the image and sound input devices of a smart-phone and performs verification with them. In addition, the method combines the two biometric authentication scores in a multimodal fashion for overall performance enhancement; the fusion is carried out by a weighted-summation method, which has a comparatively simple structure and good performance given the limited resources of the system (a sketch of this fusion rule follows below). The performance of the proposed multimodal speaker authentication system was evaluated on a database acquired with a smart-phone from 40 subjects. The experiments yielded an EER of 8.59% for teeth verification and 11.73% for voice verification, while the multimodal speaker authentication achieved an EER of 4.05%. These results show that the simple weighted-summation fusion outperforms either the teeth or the voice modality used alone.
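
The weighted-summation fusion is simple enough to sketch directly. Scores are assumed to be min-max normalized to [0, 1]; the weight and decision threshold below are illustrative, not the paper's tuned values.

```python
# Minimal sketch of weighted-summation score fusion for two matchers.

def fuse_scores(teeth_score, voice_score, w=0.6):
    """Weighted sum of two normalized matcher scores (w favors teeth)."""
    return w * teeth_score + (1.0 - w) * voice_score

def verify(teeth_score, voice_score, threshold=0.5):
    """Accept the claimed identity if the fused score clears the threshold."""
    return fuse_scores(teeth_score, voice_score) >= threshold

print(verify(teeth_score=0.82, voice_score=0.40))  # True:  accepted
print(verify(teeth_score=0.30, voice_score=0.35))  # False: rejected
```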

A Multimodal Interface for Telematics based on Multimodal middleware (미들웨어 기반의 텔레매틱스용 멀티모달 인터페이스)

  • Park, Sung-Chan;Ahn, Se-Yeol;Park, Seong-Soo;Koo, Myoung-Wan
    • Proceedings of the KSPS conference
    • /
    • 2007.05a
    • /
    • pp.41-44
    • /
    • 2007
  • In this paper, we introduce a system in which a car navigation scenario is plugged into a multimodal interface built on multimodal middleware. In a map-based system, the combination of speech and pen input/output modalities can offer users better expressive power. To accomplish multimodal tasks in car environments, we chose SCXML (State Chart XML), a multimodal authoring language of the W3C standard, to control modality components such as XHTML, VoiceXML, and GPS. In the Network Manager, GPS signals from the navigation software are converted to the EMMA meta-language and sent to the Multimodal Interaction Runtime Framework (MMI); a sketch of this conversion step follows below. The MMI not only handles the GPS signals and the user's multimodal inputs and outputs but also combines them with device information, user preferences, and reasoned RDF to give the user intelligent, personalized services. A self-simulation test has shown that the middleware accomplishes navigational multimodal tasks for multiple users in car environments.
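
A minimal sketch of the Network Manager step, wrapping a GPS reading in an EMMA (Extensible MultiModal Annotation) document before handing it to the MMI runtime. The EMMA namespace and the emma:medium/emma:mode annotations come from the W3C EMMA recommendation, but the element layout inside the interpretation is an assumption for illustration.

```python
# Minimal sketch: wrap a GPS reading in an EMMA document.

import xml.etree.ElementTree as ET

EMMA_NS = "http://www.w3.org/2003/04/emma"
ET.register_namespace("emma", EMMA_NS)

def gps_to_emma(lat, lon):
    """Build a small EMMA document carrying one GPS interpretation."""
    root = ET.Element(f"{{{EMMA_NS}}}emma", {"version": "1.0"})
    interp = ET.SubElement(root, f"{{{EMMA_NS}}}interpretation",
                           {"id": "gps1",
                            f"{{{EMMA_NS}}}medium": "sensor",
                            f"{{{EMMA_NS}}}mode": "gps"})
    pos = ET.SubElement(interp, "position")  # assumed payload layout
    ET.SubElement(pos, "latitude").text = str(lat)
    ET.SubElement(pos, "longitude").text = str(lon)
    return ET.tostring(root, encoding="unicode")

print(gps_to_emma(37.5665, 126.9780))  # hypothetical coordinates (Seoul)
```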
