• Title/Summary/Keyword: multimodal transfer


A Study on Finding the K Shortest Paths for the Multimodal Public Transportation Network in the Seoul Metropolitan (수도권 복합 대중교통망의 복수 대안 경로 탐색 알고리즘 고찰)

  • Park, Jong-Hoon;Sohn, Moo-Sung;Oh, Suk-Mun;Min, Jae-Hong
    • Proceedings of the KSR Conference / 2011.10a / pp.607-613 / 2011
  • This paper reviews methods for finding multiple reasonable paths in the multimodal public transportation network of Seoul. For a large-scale multimodal network such as Seoul's, the computation time of the path-finding algorithm is critical, and the resulting paths should reflect the route-choice behavior of public transportation passengers. Alternative-path search methods divide into path-removal methods and path-deviation methods. Previous research is analyzed with respect to algorithmic complexity on large-scale networks. When a path-finding algorithm is applied to a public transportation network, transfer and loop constraints must be included to reflect real behavior. A generalized cost function is constructed from smart card data to reflect public transportation travel behavior. To validate the algorithm, experiments were conducted on the Seoul metropolitan multimodal public transportation network, consisting of 22,109 nodes and 215,859 links, using the deviation path method, which is suitable for large-scale networks.
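The deviation path method reviewed above can be illustrated with a minimal Yen-style sketch on a toy graph. The graph, node names, and costs below are illustrative assumptions, not data from the paper, and the transfer/loop constraints and smart-card cost function it describes are omitted:

```python
import heapq

def dijkstra(graph, source, target, banned_edges=frozenset(), banned_nodes=frozenset()):
    """Plain Dijkstra returning (cost, path) or None; the subroutine the
    deviation method calls repeatedly on restricted copies of the network."""
    pq = [(0, source, [source])]
    settled = {}
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == target:
            return cost, path
        if node in settled and settled[node] <= cost:
            continue
        settled[node] = cost
        for nxt, w in graph.get(node, {}).items():
            if nxt in banned_nodes or (node, nxt) in banned_edges:
                continue
            heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return None

def k_shortest_paths(graph, source, target, k):
    """Yen-style deviation method: branch off each node of the best path
    found so far, with the edges of already-accepted paths removed, to
    generate candidate deviation paths."""
    first = dijkstra(graph, source, target)
    if first is None:
        return []
    accepted, candidates = [first], []
    while len(accepted) < k:
        _, last_path = accepted[-1]
        for i in range(len(last_path) - 1):
            root = last_path[:i + 1]
            banned_edges = {(p[i], p[i + 1]) for _, p in accepted
                            if len(p) > i + 1 and p[:i + 1] == root}
            spur = dijkstra(graph, root[-1], target,
                            frozenset(banned_edges), frozenset(root[:-1]))
            if spur is None:
                continue
            cost = sum(graph[root[j]][root[j + 1]] for j in range(i)) + spur[0]
            path = root[:-1] + spur[1]
            if all(path != p for _, p in accepted) and (cost, path) not in candidates:
                heapq.heappush(candidates, (cost, path))
        if not candidates:
            break
        accepted.append(heapq.heappop(candidates))
    return accepted
```

On the toy graph used in the test below, the two cheapest A-to-D paths come out as A-B-D (cost 2) and the deviation A-C-D (cost 3).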


Emergency situations Recognition System Using Multimodal Information (멀티모달 정보를 이용한 응급상황 인식 시스템)

  • Kim, Young-Un;Kang, Sun-Kyung;So, In-Mi;Han, Dae-Kyung;Kim, Yoon-Jin;Jung, Sung-Tae
    • Proceedings of the IEEK Conference / 2008.06a / pp.757-758 / 2008
  • This paper proposes an emergency recognition system using multimodal information extracted by an image processing module, a voice processing module, and a gravity sensor processing module. Each module detects predefined events such as moving, stopping, and fainting and transfers them to the multimodal integration module. The integration module recognizes an emergency situation from the transferred events and rechecks it by asking the user a question and recognizing the answer. Experiments were conducted for fainting motions in a living room and a bathroom. The results show that the proposed system is more robust than previous methods and effectively recognizes emergency situations in various settings.


A Model for Evaluating the Connectivity of Multimodal Transit Networks (복합수단 대중교통 네트워크의 연계성 평가 모형)

  • Park, Jun-Sik;Gang, Seong-Cheol
    • Journal of Korean Society of Transportation / v.28 no.3 / pp.85-98 / 2010
  • As transit networks become more multimodal, the connectivity of transit networks becomes an important concept. This study develops a quantitative model for measuring the connectivity of multimodal transit networks. To that end, we select the length, capacity, and speed of a transit line as its evaluation measures and define the connecting power of the line as the product of those measures. The degree centrality of a node, a widely used centrality measure in social network analysis, is employed with modifications suited to transit networks. Using the degree centrality of a transit stop and the connecting powers of the transit lines serving it, we develop an index quantifying the stop's level of connectivity. From the connectivity indexes of transit stops, we derive the connectivity index of a transit line as well as of an area of a multimodal transit network. In addition, we present a method for evaluating the connectivity of a transfer center using the connectivity indexes of transit stops and passenger acceptance rate functions. A case study shows that the model takes the characteristics of multimodal transit networks into consideration, adequately measures the connectivity of transit stops, lines, and areas, and can furthermore be used to determine the level of service of transfer centers.
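The paper defines a line's connecting power as the product of its length, capacity, and speed; a minimal sketch of that definition follows. The form of the stop-level index used here (degree equal to the number of serving lines, index as the sum of connecting powers) and the example line data are illustrative assumptions, since the abstract does not give the paper's exact combination of degree centrality and connecting powers:

```python
def connecting_power(length_km, capacity_pass_per_hr, speed_kmh):
    """Connecting power of a transit line: the product of its length,
    capacity, and speed, as defined in the paper."""
    return length_km * capacity_pass_per_hr * speed_kmh

def stop_connectivity(lines_at_stop):
    """Connectivity of a transit stop. The paper combines the stop's degree
    centrality with the connecting powers of the lines serving it; the form
    used here (degree = number of serving lines, index = sum of connecting
    powers) is an illustrative assumption, not the paper's formula."""
    degree = len(lines_at_stop)
    power = sum(connecting_power(*line) for line in lines_at_stop)
    return degree, power

# Hypothetical stop served by a metro line and a bus line.
lines = [(30.0, 40000, 35.0),  # metro: 30 km, 40,000 passengers/h, 35 km/h
         (12.0, 6000, 20.0)]   # bus:   12 km,  6,000 passengers/h, 20 km/h
degree, power = stop_connectivity(lines)
```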

A Methodology of Multimodal Public Transportation Network Building and Path Searching Using Transportation Card Data (교통카드 기반자료를 활용한 복합대중교통망 구축 및 경로탐색 방안 연구)

  • Cheon, Seung-Hoon;Shin, Seong-Il;Lee, Young-Ihn;Lee, Chang-Ju
    • Journal of Korean Society of Transportation / v.26 no.3 / pp.233-243 / 2008
  • Recognition of the importance and role of public transportation is increasing because of traffic problems in many cities. Despite this paradigm shift, previous research on public transportation trip assignment has limitations. In particular, multimodal public transportation networks require the consideration of many characteristics, such as transfers, operational time schedules, waiting time, and travel cost. Since the metropolitan integrated transfer discount system was introduced, transfer trips between modes have increased, changing users' route choices. Moreover, with the advent of the high-technology public transportation card known as the smart card, users' travel information can be recorded automatically, offering researchers a new analytical methodology for multimodal public transportation networks. This paper proposes a methodology for building new multimodal public transportation networks from transportation card data using computer programming methods. First, we propose a method for building integrated transportation networks based on bus and urban railroad stations, in order to make full use of the travel information in transportation card data. Second, we show how to connect broken transfer links with computer-based programming techniques, which helps solve the transfer problems of existing transportation networks. Lastly, we present a methodology for finding users' paths and establishing networks among multiple modes in multimodal public transportation networks. With the proposed methodology, multimodal public transportation networks can be built easily from existing bus and urban railroad station coordinates, and large-scale networks can be constructed without extra work such as manual transfer-link connection. In the end, this study contributes to solving the problem of finding users' paths among multiple modes, which remains unsolved in existing transportation networks.
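The step of reconnecting broken transfer links by program can be sketched as a coordinate-based rule: link every bus stop and urban railroad station that lie within walking distance of each other. The 300 m threshold, planar coordinates in metres, and station data below are illustrative assumptions; the abstract does not give the paper's actual rule or projection:

```python
from math import hypot

def connect_transfer_links(bus_stops, rail_stations, max_walk_m=300.0):
    """Connect every bus stop / rail station pair within walking distance,
    creating the transfer links the integrated network needs."""
    links = []
    for b_id, (bx, by) in bus_stops.items():
        for r_id, (rx, ry) in rail_stations.items():
            d = hypot(bx - rx, by - ry)  # planar distance in metres
            if d <= max_walk_m:
                links.append((b_id, r_id, round(d, 1)))
    return links

# Hypothetical coordinates: one stop within reach, one too far away.
bus = {'B1': (0.0, 0.0), 'B2': (1000.0, 0.0)}
rail = {'R1': (100.0, 0.0)}
new_links = connect_transfer_links(bus, rail)
```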

A Survey of Multimodal Systems and Techniques for Motor Learning

  • Tadayon, Ramin;McDaniel, Troy;Panchanathan, Sethuraman
    • Journal of Information Processing Systems / v.13 no.1 / pp.8-25 / 2017
  • This survey paper explores the application of multimodal feedback in automated systems for motor learning. We review findings from recent studies in this field, using rehabilitation and various motor training scenarios as context. We discuss popular feedback delivery and sensing mechanisms for motion capture and processing in terms of requirements, benefits, and limitations. The selection of modalities is presented through a review of best-practice approaches for each modality relative to motor task complexity, with example implementations from recent work. We summarize the advantages and disadvantages of several approaches for integrating modalities in terms of fusion and frequency of feedback during motor tasks. Finally, we review the limitations of perceptual bandwidth and provide an evaluation of the information transfer of each modality.

Design and Implementation of Emergency Recognition System based on Multimodal Information (멀티모달 정보를 이용한 응급상황 인식 시스템의 설계 및 구현)

  • Kim, Eoung-Un;Kang, Sun-Kyung;So, In-Mi;Kwon, Tae-Kyu;Lee, Sang-Seol;Lee, Yong-Ju;Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information / v.14 no.2 / pp.181-190 / 2009
  • This paper presents a multimodal emergency recognition system based on visual, audio, and gravity sensor information. It consists of a video processing module, an audio processing module, a gravity sensor processing module, and a multimodal integration module. The video processing module and gravity sensor processing module detect actions such as moving, stopping, and fainting and transfer them to the multimodal integration module. The integration module detects an emergency by fusing the transferred information and verifies it by asking a question and recognizing the answer via the audio channel. Experimental results show that the recognition rate is 91.5% with the video processing module alone and 94% with the gravity sensor processing module alone, but reaches 100% when both sources of information are combined.
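The integration module's fusion-then-verify flow described above can be sketched as a small rule. The concrete rule used here (either module reporting 'fainting' raises a suspicion, and a spoken 'yes' to the follow-up question cancels the alarm) is an illustrative assumption, not the paper's exact logic:

```python
def fuse_events(video_event, gravity_event, audio_answer=None):
    """Fusion-then-verification rule for the multimodal integration module.
    The events come from the video and gravity sensor modules; the optional
    audio_answer is the user's recognized reply to a verification question."""
    suspected = 'fainting' in (video_event, gravity_event)
    if not suspected:
        return 'normal'
    # Verification step: the system asks a question via the audio channel.
    if audio_answer is not None and audio_answer.strip().lower() == 'yes':
        return 'normal'      # user responded that they are fine
    return 'emergency'       # no answer (or a negative one): raise the alarm
```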

Magnetic Resonance Imaging Meets Fiber Optics: a Brief Investigation of Multimodal Studies on Fiber Optics-Based Diagnostic / Therapeutic Techniques and Magnetic Resonance Imaging

  • Choi, Jong-ryul;Oh, Sung Suk
    • Investigative Magnetic Resonance Imaging / v.25 no.4 / pp.218-228 / 2021
  • Owing to their high degree of freedom in transmitting and acquiring light, fiber optics can be used in the presence of strong magnetic fields. Hence, optical sensing and imaging based on fiber optics can be integrated with magnetic resonance imaging (MRI) diagnostic systems to acquire valuable information on biological tissues and organs under a magnetic field. In this article, we explore the combination of MRI and optical sensing/imaging techniques under the following topics: 1) functional near-infrared spectroscopy with functional MRI for brain studies and brain disease diagnosis, 2) integration of fiber-optic molecular imaging and optogenetic stimulation with MRI, and 3) optical therapeutic applications with MRI guidance systems. Through these investigations, we believe that combining MRI with optical sensing/imaging techniques can serve both as a research method for multidisciplinary studies and in clinical diagnostic/therapeutic devices.

Design of a Deep Neural Network Model for Image Caption Generation (이미지 캡션 생성을 위한 심층 신경망 모델의 설계)

  • Kim, Dongha;Kim, Incheol
    • KIPS Transactions on Software and Data Engineering / v.6 no.4 / pp.203-210 / 2017
  • In this paper, we propose an effective neural network model for image caption generation and model transfer. The model is a multimodal recurrent neural network consisting of five distinct layers, including a convolutional neural network (CNN) layer for extracting visual information from images, an embedding layer for converting each word into a low-dimensional feature, a recurrent neural network layer for learning caption sentence structure, and a multimodal layer for combining visual and language information. The recurrent neural network layer is built from LSTM units, which are well known to be effective for learning and transferring sequence patterns. Moreover, the model has a unique structure in which the output of the CNN layer is linked not only to the input of the initial state of the recurrent layer but also to the input of the multimodal layer, so that the visual information extracted from the image is used at each recurrent step to generate the corresponding textual caption. Through comparative experiments on open data sets such as Flickr8k, Flickr30k, and MSCOCO, we demonstrate that the proposed model achieves high performance in terms of caption accuracy and model transfer effect.
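The multimodal layer's role, combining the CNN image feature with the LSTM hidden state at each step before a softmax over the vocabulary, can be sketched in miniature. The tiny dimensions, additive combination, and list-of-lists weights are illustrative assumptions, not the paper's architecture details:

```python
from math import exp

def multimodal_step(image_feat, lstm_hidden, W_img, W_lstm, W_out):
    """One decoding step of an assumed multimodal layer: project the CNN
    image feature and the LSTM hidden state, add them, and apply a softmax
    over a tiny vocabulary to score the next caption word."""
    def matvec(W, v):
        return [sum(w * x for w, x in zip(row, v)) for row in W]
    combined = [a + b for a, b in zip(matvec(W_img, image_feat),
                                      matvec(W_lstm, lstm_hidden))]
    logits = matvec(W_out, combined)
    exps = [exp(z) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]  # probabilities over the vocabulary

# Toy example: 2-d features, 3-word vocabulary, identity projections.
probs = multimodal_step([1.0, 0.0], [0.0, 1.0],
                        [[1.0, 0.0], [0.0, 1.0]],              # W_img
                        [[1.0, 0.0], [0.0, 1.0]],              # W_lstm
                        [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # W_out
```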

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.163-177 / 2019
  • As smartphones have become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognizing the simple body movements of an individual user to recognizing low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors, including accelerometer, magnetic field, and gyroscope sensors, are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a method for detecting accompanying status with a deep learning model using only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope) is proposed. Accompanying status is defined as a redefinition of part of the user's interaction behavior: whether the user is accompanied by an acquaintance at close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation is proposed. First, a data preprocessing method is introduced, consisting of time synchronization of multimodal data from different physical sensors, data normalization, and sequence data generation. Nearest interpolation is applied to synchronize the timestamps of data collected from different sensors. Normalization is performed on each x, y, and z axis value of the sensor data, and sequence data are generated with the sliding window method. The sequence data then become the input to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consists of three convolutional layers and has no pooling layer, so as to maintain the temporal information of the sequence data. Next, LSTM recurrent networks receive the feature maps, learn long-term dependencies from them, and extract features. The LSTM recurrent networks consist of two layers, each with 128 cells. Finally, the extracted features are classified by a softmax classifier. The loss function is cross entropy, and the weights are randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model is trained with the adaptive moment estimation (Adam) optimization algorithm with a mini-batch size of 128. Dropout is applied to the input values of the LSTM recurrent networks to prevent overfitting. The initial learning rate is set to 0.001 and decays exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. Using these data, the model classified accompanying and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and the accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network. Future research will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences, and will further study transfer learning methods that allow models tailored to the training data to transfer to evaluation data following a different distribution. A model capable of robust recognition performance against changes in the data that were not considered at the learning stage is expected.
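Two of the preprocessing steps described above, nearest-interpolation time synchronization and sliding-window sequence generation, can be sketched directly. The window and step sizes are left as parameters, since the abstract does not give the values used:

```python
def nearest_sync(times_ref, times_src, values_src):
    """Nearest-neighbour interpolation for time synchronization: resample a
    source sensor stream onto the reference sensor's timestamps by taking,
    for each reference time, the source sample closest in time."""
    return [values_src[min(range(len(times_src)),
                           key=lambda i: abs(times_src[i] - t))]
            for t in times_ref]

def sliding_windows(seq, window, step):
    """Sliding-window sequence generation: fixed-length windows over the
    synchronized stream that become the CNN input sequences."""
    return [seq[i:i + window] for i in range(0, len(seq) - window + 1, step)]
```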

A Link-Based Shortest Path Algorithm for the Urban Intermodal Transportation Network with Time-Schedule Constraints (서비스시간 제약이 존재하는 도시부 복합교통망을 위한 링크기반의 최단경로탐색 알고리즘)

  • 장인성
    • Journal of Korean Society of Transportation / v.18 no.6 / pp.111-124 / 2000
  • This study addresses the problem of finding reasonable shortest paths between an origin and a destination in an urban intermodal transportation network with service-time constraints. Service-time constraints represent urban intermodal networks more realistically, but existing algorithms do not consider them. Under a service-time constraint, the departure times at which a traveler can leave a transfer station on a connecting vehicle are restricted to the pre-planned vehicle operation times: a traveler arriving at a transfer station can travel onward on the connecting vehicle only at its scheduled operation times. Therefore, when service-time constraints are considered, the total travel time includes both the in-vehicle travel time and the transfer waiting time, and the transfer waiting time varies with the traveler's arrival time at the transfer station and the times at which the connecting vehicle is allowed to depart. This paper develops a link-based shortest path algorithm that solves this problem. Whereas traditional search methods such as Dijkstra's algorithm compute the earliest arrival time at each node and set it as the node's label, the proposed algorithm computes the earliest arrival time at each link and the earliest departure time from each link and sets them as the link's labels. The detailed search process of the proposed algorithm is illustrated on a simple intermodal network.
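The key idea, counting transfer waiting time against a departure schedule during the search, can be sketched as follows. The full link-label bookkeeping of the proposed algorithm is simplified here to node labels, and the network, travel times, and timetables are illustrative assumptions:

```python
import heapq

def earliest_departure(arrival, schedule):
    """First allowed departure at or after `arrival`. A schedule of None
    models a continuous-service link (e.g. walking), where departure is
    immediate."""
    if schedule is None:
        return arrival
    return next((t for t in schedule if t >= arrival), float('inf'))

def link_label_shortest_path(links, origin, destination, start_time):
    """Earliest-arrival search with service-time constraints. `links` maps
    (tail, head) -> (travel_time, schedule); the waiting time at a transfer
    is the gap between arrival and the earliest scheduled departure."""
    pq = [(start_time, origin)]
    settled = {}
    while pq:
        arrival, node = heapq.heappop(pq)
        if node == destination:
            return arrival
        if node in settled and settled[node] <= arrival:
            continue
        settled[node] = arrival
        for (tail, head), (tt, schedule) in links.items():
            if tail != node:
                continue
            dep = earliest_departure(arrival, schedule)
            if dep != float('inf'):
                heapq.heappush(pq, (dep + tt, head))
    return None

# Hypothetical two-link network: walk 10 min to a transfer station X, then a
# scheduled line to B (5 min ride) departing only at t=25 and t=40.
links = {('A', 'X'): (10, None),
         ('X', 'B'): (5, [25, 40])}
```

Starting at t=0, the traveler reaches X at t=10, waits 15 minutes for the t=25 departure, and arrives at B at t=30, so the waiting time enters the total exactly as the abstract describes.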
