• Title/Summary/Keyword: multimodal system


Empirical Study of Multimodal Transport Route Choice Model in Freight Transport between Mongolia and Korea

  • Ganbat, Enkhtsetseg;Kim, Hwan-Seong
    • Journal of Navigation and Port Research
    • /
    • v.39 no.5
    • /
    • pp.409-415
    • /
    • 2015
  • With the globalization of the world economy in distribution and sales, logistics and transportation play an important role. In particular, freight forwarders must identify the key factors of the route choice model and decide how to choose the right transport route in a multimodal transport system. Considering the key factors in the route choice model for freight forwarders between Mongolia and Korea, this paper proposes four main factors: cost, delivery time, freight, and logistics service, with 13 sub-factors. The importance of the factors is surveyed based on the AHP through interviews with freight forwarders. The results provide empirical insights into the current status of Mongolian forwarders, with different factors across transportation modes. In particular, the time factor is the key factor in choosing a transport route for air transportation forwarders.
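The AHP weighting step this abstract relies on can be sketched as follows; the 4x4 pairwise comparison values for the four main factors are hypothetical, and the geometric-mean approximation is used in place of the full eigenvector method:

```python
import numpy as np

def ahp_weights(pairwise):
    """Approximate AHP priority weights via the geometric-mean method."""
    A = np.asarray(pairwise, dtype=float)
    gm = A.prod(axis=1) ** (1.0 / A.shape[1])  # row geometric means
    return gm / gm.sum()                        # normalize to sum to 1

# Hypothetical pairwise comparisons of Cost, Delivery time,
# Freight, and Logistics service (illustrative values only).
M = [[1,   3,   5,   3],
     [1/3, 1,   3,   1],
     [1/5, 1/3, 1,   1/3],
     [1/3, 1,   3,   1]]
w = ahp_weights(M)
```

In a real study the matrix would come from the forwarder interviews, and a consistency ratio check would precede the weight computation.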

Automatic Human Emotion Recognition from Speech and Face Display - A New Approach (인간의 언어와 얼굴 표정에 통하여 자동적으로 감정 인식 시스템 새로운 접근법)

  • Luong, Dinh Dong;Lee, Young-Koo;Lee, Sung-Young
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2011.06b
    • /
    • pp.231-234
    • /
    • 2011
  • Audiovisual-based human emotion recognition can be considered a good approach for multimodal human-computer interaction. However, optimal multimodal information fusion remains a challenge. To overcome these limitations and bring robustness to the interface, we propose a framework for an automatic human emotion recognition system based on speech and face display. In this paper, we develop a new approach to model-level information fusion based on the relationship between speech and facial expression, which detects temporal segments automatically and performs multimodal information fusion.

Effective vibration control of multimodal structures with low power requirement

  • Loukil, Thamina;Ichchou, Mohamed;Bareille, Olivier;Haddar, Mohamed
    • Smart Structures and Systems
    • /
    • v.13 no.3
    • /
    • pp.435-451
    • /
    • 2014
  • In this paper, we investigate the vibration control of multimodal structures and present an efficient control law that requires less energy than active strategies. This strategy, called modal global semi-active control, is designed to work as effectively as active control while consuming less power, which is the major limitation of active strategies. The proposed law is based on energetic management of the optimal law, such that the controller follows the optimal law only when there is sufficient energy, which is extracted directly from the vibrations of the system itself. The control algorithm is presented and validated on a cantilever beam structure subjected to external perturbations. Comparisons between the performance of the proposed law and that obtained by independent modal space control (IMSC) and semi-active control schemes are offered.
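The energy-gating idea described in this abstract can be illustrated with a minimal sketch, assuming a per-step energy budget harvested from the vibration and a dissipative fallback force; the function name and the budget bookkeeping are illustrative, not the paper's actual formulation:

```python
import numpy as np

def semi_active_force(u_optimal, velocity, energy_budget):
    """Follow the optimal control force only when the harvested energy
    budget can supply the required actuation energy this step;
    otherwise fall back to a purely dissipative (semi-active) force."""
    energy_needed = abs(u_optimal * velocity)  # illustrative per-step cost
    if energy_needed <= energy_budget:
        return u_optimal, energy_budget - energy_needed
    # A dissipative force always opposes the velocity, so it can only
    # remove energy from the structure, never inject it.
    u_dissipative = -np.sign(velocity) * min(abs(u_optimal), energy_budget)
    return u_dissipative, energy_budget
```

The key property is that the controller degrades gracefully: with ample energy it matches the active law, and with none it still damps the structure.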

Multimodal Curvature Discrimination of 3D Objects

  • Kim, Kwang-Taek;Lee, Hyuk-Soo
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.14 no.4
    • /
    • pp.212-216
    • /
    • 2013
  • As virtual reality technologies advance rapidly, how to render 3D objects across modalities is becoming an important issue. This study therefore investigates human discriminability of the curvature of 3D polygonal surfaces, focusing on the visual and haptic senses because they are the most dominant when exploring 3D shapes. For the study, we designed a psychophysical experiment using signal detection theory to determine curvature discrimination under three conditions: haptic only, visual only, and combined haptic and visual. The results show no statistically significant difference among the conditions, although the threshold in the haptic condition is the lowest. The results also indicate that rendering through both visual and haptic channels can degrade discrimination performance for a 3D global shape. These results should be considered when a multimodal rendering system is designed in the near future.
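The signal-detection analysis mentioned in this abstract typically reduces to computing the sensitivity index d' per condition; a minimal sketch, with hypothetical hit and false-alarm rates standing in for the paper's data:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index d' = z(H) - z(FA) from signal detection theory."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical rates for one condition (e.g., haptic-only discrimination).
sensitivity = d_prime(0.85, 0.20)
```

A larger d' means better curvature discrimination; comparing d' (or the derived thresholds) across the haptic, visual, and combined conditions is what the statistical test operates on.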

Multimodal Biometric Using a Hierarchical Fusion of a Person's Face, Voice, and Online Signature

  • Elmir, Youssef;Elberrichi, Zakaria;Adjoudj, Reda
    • Journal of Information Processing Systems
    • /
    • v.10 no.4
    • /
    • pp.555-567
    • /
    • 2014
  • Biometric performance improvement is a challenging task. In this paper, a hierarchical fusion strategy for a multimodal biometric system is presented. This strategy relies on a combination of several biometric traits using a multi-level biometric fusion hierarchy. The multi-level biometric fusion includes a pre-classification fusion with optimal feature selection and a post-classification fusion based on the maximum of the similarity matching scores. The proposed solution enhances biometric recognition performance through suitable feature selection and reduction, such as principal component analysis (PCA) and linear discriminant analysis (LDA), since not all feature vector components contribute to the performance improvement.
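The two fusion levels this abstract names can be sketched as follows; the PCA reduction stands in for the pre-classification feature-reduction step and the max rule for the post-classification score fusion, with all data shapes hypothetical:

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature vectors onto their top-k principal components
    (pre-classification feature reduction)."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data yields the principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def fuse_scores(face, voice, signature):
    """Post-classification fusion: keep the maximum matching score
    across the three modalities for each enrolled identity."""
    return np.maximum.reduce([face, voice, signature])

# Hypothetical per-identity matching scores from three matchers.
fused = fuse_scores(np.array([0.2, 0.9]),
                    np.array([0.5, 0.1]),
                    np.array([0.4, 0.3]))
```

The max rule is only one of several score-fusion choices (sum, product, weighted sum); the abstract indicates a similarity-of-maximum-scores variant, which this sketch approximates.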

Implementation of Multimodal Biometric Embedded System (다중 바이오 인식을 위한 임베디드 시스템 구현)

  • Kim, Ki-Hyun;Yoo, Jang-Hee
    • Proceedings of the IEEK Conference
    • /
    • 2006.06a
    • /
    • pp.875-876
    • /
    • 2006
  • In this paper, we propose a multimodal biometric embedded system. It is designed to support face, iris, fingerprint, and vascular pattern recognition. We use a Samsung S3C2440A processor based on the ARM926T core. The system supports various external device interfaces for multiple biometric sensors as well as an RFID/smart card reader/writer. Additionally, it has a 6" LCD panel and a numeric keypad for an easy-to-use GUI. The embedded system offers a useful environment for developing better biometric algorithms for stand-alone biometric systems and hardware accelerator modules for real-time operation.


Genetic Algorithm Calibration Method and PnP Platform for Multimodal Sensor Systems (멀티모달 센서 시스템용 유전자 알고리즘 보정기 및 PnP 플랫폼)

  • Lee, Jea Hack;Kim, Byung-Soo;Park, Hyun-Moon;Kim, Dong-Sun;Kwon, Jin-San
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.14 no.1
    • /
    • pp.69-80
    • /
    • 2019
  • This paper proposes a multimodal sensor platform that supports plug-and-play (PnP) technology. PnP technology automatically recognizes a connected sensor module so that an application program can easily control the sensor. To verify the multimodal platform's PnP technology, we built firmware and experimented with a sensor system. When a sensor module is connected to the platform, the firmware recognizes the module and reads its sensor data. As a result, the platform provides PnP capability, letting sensors simply be plugged in without any software configuration. Raw sensor measurements suffer from various distortions such as gain, offset, and non-linearity errors, so we introduce a polynomial calculation to compensate for them. To find the optimal coefficients for sensor calibration, we apply a genetic algorithm, which reduces the calibration time. It achieves reasonable performance using only a few data points, reducing the error by 97% in the worst case. The platform supports various protocols for multimodal sensors, i.e., UART, I2C, I2S, SPI, and GPIO.
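The genetic-algorithm search for polynomial calibration coefficients can be sketched as below; the population size, mutation scale, crossover rule, and calibration points are all assumptions for illustration, not the paper's tuned settings:

```python
import random

def calibrate(raw, ref, degree=2, pop=40, gens=200, seed=1):
    """Genetic-algorithm search for polynomial coefficients c such that
    sum(c[i] * x**i) maps raw sensor readings onto reference values."""
    rng = random.Random(seed)
    n = degree + 1
    def err(c):  # sum of squared residuals over the calibration points
        return sum((sum(ci * x**i for i, ci in enumerate(c)) - y) ** 2
                   for x, y in zip(raw, ref))
    population = [[rng.uniform(-2, 2) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=err)
        parents = population[:pop // 2]          # elitist selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)        # averaging crossover
            children.append([(x + y) / 2 + rng.gauss(0, 0.05)
                             for x, y in zip(a, b)])  # gaussian mutation
        population = parents + children
    return min(population, key=err)

# Hypothetical calibration points: a sensor with gain and offset error.
raw = [0.0, 1.0, 2.0, 3.0, 4.0]
ref = [0.5 + 1.2 * x for x in raw]   # true response: offset 0.5, gain 1.2
coeffs = calibrate(raw, ref, degree=1)
```

As the abstract notes, only a handful of calibration points are needed because the polynomial has few coefficients; the GA merely replaces a least-squares fit when the error model is non-linear or non-differentiable.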

Multimodal Bio-signal Measurement System for Sleep Analysis (수면 분석을 위한 다중 모달 생체신호 측정 시스템)

  • Kim, Sang Kyu;Yoo, Sun Kook
    • Journal of Korea Multimedia Society
    • /
    • v.21 no.5
    • /
    • pp.609-616
    • /
    • 2018
  • In this paper, we designed a multimodal bio-signal measurement system to observe changes in the brain's nervous system and vascular system during sleep. Changes in the nervous system and the cerebral blood flow system during sleep exhibit a unique correlation, so both must be observed simultaneously to characterize the sleep state. To measure changes in the nervous system, we designed circuits for the EEG, EOG, and EMG signals used in sleep stage analysis. We also designed a system for measuring cerebral blood flow changes using functional near-infrared spectroscopy (fNIRS); among the various imaging methods for measuring blood flow and metabolism, fNIRS is easy to record simultaneously with the EEG signal and lends itself to equipment miniaturization. Sleep stages were analyzed from the measured data, and changes in cerebral blood flow were confirmed across changes in sleep stage.

Recent Research Trend in Skin-Inspired Soft Sensors with Multimodality (피부 모사형 다기능 유연 센서의 연구 동향)

  • Lee, Seung Goo;Choi, Kyung Ho;Shin, Gyo Jic;Lee, Hyo Sun;Bae, Geun Yeol
    • Journal of Adhesion and Interface
    • /
    • v.21 no.4
    • /
    • pp.162-167
    • /
    • 2020
  • Skin-inspired multimodal soft sensors have been developed through multidisciplinary approaches to mimic the sensing ability, high sensitivity, and mechanical durability of human skin. For practical applications, the ability to discriminate among the components of a complex stimulus composed of the various mechanical and thermal stimuli experienced in daily life is essential, yet it still remains at a low level. In this paper, we first introduce the operating mechanisms and representative studies of unimodal soft sensors, and then discuss recent research trends in multimodal soft sensors and stimulus discriminability.

Multi-modal Representation Learning for Classification of Imported Goods (수입물품의 품목 분류를 위한 멀티모달 표현 학습)

  • Apgil Lee;Keunho Choi;Gunwoo Kim
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.1
    • /
    • pp.203-214
    • /
    • 2023
  • The Korea Customs Service handles its business efficiently with an electronic customs system that provides one-stop processing; nevertheless, a more effective method is still needed. Imports and exports require an HS Code (Harmonized System Code) for the classification and tax rate application of all goods, and item classification, which assigns the HS Code, is a highly difficult task that requires specialized knowledge and experience and is an important part of customs clearance procedures. Therefore, this study uses the various types of information in the item classification request form, such as product name, product description, and product image, to train a deep learning model based on multimodal representation learning that reflects this information well. The model is expected to reduce the burden of customs work by classifying and recommending HS Codes, and to help customs procedures by classifying items promptly.
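The multimodal representation step this abstract describes is, at its simplest, a fusion of per-modality embeddings before classification; a minimal sketch assuming early fusion by normalized concatenation, with toy embedding dimensions standing in for learned text and image encoders:

```python
import numpy as np

def fuse_features(text_vec, image_vec):
    """Early fusion: L2-normalize each modality's embedding, then
    concatenate into one joint representation for the HS-code classifier."""
    t = text_vec / (np.linalg.norm(text_vec) + 1e-8)
    v = image_vec / (np.linalg.norm(image_vec) + 1e-8)
    return np.concatenate([t, v])

# Hypothetical embeddings: 4-d text (product name/description) features
# and 3-d image features; a real system would use learned encoders.
joint = fuse_features(np.array([1.0, 2.0, 2.0, 0.0]),
                      np.array([3.0, 0.0, 4.0]))
```

Normalizing each modality before concatenation keeps one modality's scale from dominating the joint vector; the downstream classifier then maps the joint representation to an HS Code.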