• Title/Summary/Keyword: multimodal input

Design of Parallel Input Pattern and Synchronization Method for Multimodal Interaction (멀티모달 인터랙션을 위한 사용자 병렬 모달리티 입력방식 및 입력 동기화 방법 설계)

  • Im, Mi-Jeong; Park, Beom
    • Journal of the Ergonomics Society of Korea, v.25 no.2, pp.135-146, 2006
  • Multimodal interfaces are recognition-based technologies that interpret and encode hand gestures, eye gaze, movement patterns, speech, physical location, and other natural human behaviors. A modality is the type of communication channel used for interaction; it also covers the way an idea is expressed or perceived, or the manner in which an action is performed. Multimodal interfaces are the technologies that constitute multimodal interaction processes, which occur consciously or unconsciously while a human and a computer communicate, so their input/output forms differ from those of existing interfaces. Moreover, different people show different cognitive styles, and individual preferences play a role in the selection of one input mode over another. Therefore, to develop an effective design of multimodal user interfaces, the input/output structure needs to be formulated through research on human cognition. This paper analyzes the characteristics of each human modality and suggests combination types of modalities and dual coding for formulating multimodal interaction. It then designs a multimodal language and an input synchronization method according to the granularity of input synchronization (a minimal synchronization sketch follows this entry). To effectively guide the development of next-generation multimodal interfaces, substantial cognitive modeling will be needed to understand the temporal and semantic relations between different modalities, their joint functionality, and their overall potential for supporting computation in different forms. This paper is expected to show multimodal interface designers how to organize and integrate human input modalities while interacting with multimodal interfaces.
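As a concrete illustration of the synchronization-granularity idea above, the following is a minimal Python sketch (not the paper's design): timestamped events from parallel modalities are grouped into one multimodal input when their onsets fall within a synchronization window. The event fields, modality names, and the 1.5-second window are assumptions for illustration.

    # Minimal sketch: group timestamped events from parallel modalities into one
    # multimodal input when they fall within a synchronization window. Field names,
    # modality labels, and the window size are illustrative assumptions.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class InputEvent:
        modality: str      # e.g. "speech", "gesture", "gaze"
        payload: str       # recognized content, e.g. an utterance or a pointed target
        t_start: float     # seconds
        t_end: float

    def synchronize(events: List[InputEvent], window: float = 1.5) -> List[List[InputEvent]]:
        """Group events whose onsets lie within `window` seconds of each other.

        The window acts as the synchronization granularity: a coarse window fuses
        loosely timed inputs, a fine window only fuses near-simultaneous ones.
        """
        groups: List[List[InputEvent]] = []
        for ev in sorted(events, key=lambda e: e.t_start):
            if groups and ev.t_start - groups[-1][0].t_start <= window:
                groups[-1].append(ev)   # same multimodal "utterance"
            else:
                groups.append([ev])     # start a new group
        return groups

    demo = [
        InputEvent("speech", "put that there", 0.0, 1.2),
        InputEvent("gesture", "point:lamp", 0.4, 0.6),
        InputEvent("gesture", "point:table", 1.0, 1.1),
        InputEvent("speech", "zoom in", 4.0, 4.8),
    ]
    for group in synchronize(demo):
        print([(e.modality, e.payload) for e in group])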

Adaptive Multimodal In-Vehicle Information System for Safe Driving

  • Park, Hye Sun; Kim, Kyong-Ho
    • ETRI Journal, v.37 no.3, pp.626-636, 2015
  • This paper proposes an adaptive multimodal in-vehicle information system for safe driving. The proposed system filters input information based on both the priority assigned to the information and the given driving situation, to effectively manage input information and intelligently provide it to the driver (a minimal priority-filtering sketch follows this entry). It then interacts with the driver through an adaptive multimodal interface, considering both the driving workload and the driver's cognitive reaction to the information it provides. It is shown experimentally that the proposed system can promote driver safety and enhance a driver's understanding of the information it provides by filtering the input information. In addition, the system can reduce a driver's workload by selecting an appropriate modality and corresponding level with which to communicate. An analysis of subjective questionnaires regarding the proposed system reveals that more than 85% of the respondents are satisfied with it. The proposed system is expected to provide prioritized information through an easily understood modality.
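The priority-based filtering and workload-dependent modality selection described above could look roughly like the following Python sketch; the priority scale, workload thresholds, and modality names are assumptions for illustration, not the authors' system.

    # Minimal sketch: drop low-priority messages as driving workload rises and
    # pick an output modality accordingly. Thresholds and labels are assumed.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Message:
        text: str
        priority: int          # 1 = safety-critical ... 5 = convenience info

    def filter_messages(messages: List[Message], workload: float) -> List[Message]:
        """Keep messages whose priority clears a workload-dependent cutoff (workload in 0..1)."""
        cutoff = 2 if workload > 0.7 else 4 if workload > 0.4 else 5
        return [m for m in messages if m.priority <= cutoff]

    def choose_modality(workload: float) -> str:
        """Prefer speech output when the visual channel is busy (high workload)."""
        return "speech" if workload > 0.5 else "visual+speech"

    inbox = [Message("Forward collision risk", 1),
             Message("Incoming text message", 4),
             Message("Fuel price nearby", 5)]
    for wl in (0.2, 0.8):
        print(wl, choose_modality(wl), [m.text for m in filter_messages(inbox, wl)])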

Speech-Oriented Multimodal Usage Pattern Analysis for TV Guide Application Scenarios (TV 가이드 영역에서의 음성기반 멀티모달 사용 유형 분석)

  • Kim Ji-Young; Lee Kyong-Nim; Hong Ki-Hyung
    • MALSORI, no.58, pp.101-117, 2006
  • The development of efficient multimodal interfaces and fusion algorithms requires knowledge of usage patterns that show how people use multiple modalities. We analyzed multimodal usage patterns for TV-guide application scenarios (or tasks). In order to collect usage patterns, we implemented a multimodal usage pattern collection system with two input modalities: speech and touch gesture. Fifty-four subjects participated in our study. Analysis of the collected usage patterns shows a positive correlation between the task type and multimodal usage patterns. In addition, we analyzed the timing between speech utterances and their corresponding touch gestures, that is, when a touch gesture occurs relative to the duration of the speech utterance (a minimal timing-analysis sketch follows this entry). We believe that, to develop efficient multimodal fusion algorithms for an application, a multimodal usage pattern analysis for that application, similar to our work for the TV guide application, has to be done in advance.
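The timing analysis mentioned above amounts to normalizing the touch-gesture onset by the duration of the accompanying speech utterance; a minimal Python sketch of that computation (with assumed timestamps, not the study's data) follows.

    # Minimal sketch: express a touch-gesture onset relative to the duration of the
    # accompanying speech utterance. Timestamps in the example are assumed.
    def relative_gesture_timing(speech_start, speech_end, gesture_time):
        """Return the gesture onset normalized by the utterance duration.

        < 0.0 : gesture before the utterance starts
        0..1  : gesture during the utterance
        > 1.0 : gesture after the utterance ends
        """
        duration = speech_end - speech_start
        if duration <= 0:
            raise ValueError("utterance must have positive duration")
        return (gesture_time - speech_start) / duration

    # e.g. a touch 0.9 s into a 1.5 s utterance ("show me programs on *this* channel")
    print(relative_gesture_timing(0.0, 1.5, 0.9))   # 0.6 -> gesture during the utterance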

A Multimodal Interface for Telematics based on Multimodal middleware (미들웨어 기반의 텔레매틱스용 멀티모달 인터페이스)

  • Park, Sung-Chan; Ahn, Se-Yeol; Park, Seong-Soo; Koo, Myoung-Wan
    • Proceedings of the KSPS conference, 2007.05a, pp.41-44, 2007
  • In this paper, we introduce a system in which a car navigation scenario is combined with a multimodal interface based on multimodal middleware. In a map-based system, the combination of speech and pen input/output modalities can offer users better expressive power. To achieve multimodal tasks in car environments, we have chosen SCXML (State Chart XML), a W3C-standard multimodal authoring language, to control modality components such as XHTML, VoiceXML, and GPS. In the Network Manager, GPS signals from the navigation software are converted to the EMMA meta language and sent to the MultiModal Interaction Runtime Framework (MMI); a minimal sketch of such an EMMA wrapping follows this entry. Not only does the MMI handle GPS signals and a user's multimodal I/O, but it also combines them with device information, user preferences, and reasoned RDF to give the user intelligent or personalized services. A self-simulation test has shown that the middleware accomplishes a navigational multimodal task over multiple users in car environments.
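For illustration, wrapping a GPS reading in an EMMA-style annotation before handing it to the interaction manager could look like the Python sketch below; the element and attribute layout is a simplified, assumed serialization, not the paper's middleware nor a verified EMMA-conformant document.

    # Minimal sketch: wrap a GPS reading in an EMMA-style XML annotation.
    # The payload elements (position/latitude/longitude) are assumed.
    import xml.etree.ElementTree as ET

    EMMA_NS = "http://www.w3.org/2003/04/emma"

    def gps_to_emma(lat: float, lon: float, timestamp_ms: int) -> str:
        ET.register_namespace("emma", EMMA_NS)
        emma = ET.Element(f"{{{EMMA_NS}}}emma", {"version": "1.0"})
        interp = ET.SubElement(
            emma, f"{{{EMMA_NS}}}interpretation",
            {"id": "gps1",
             f"{{{EMMA_NS}}}medium": "sensor",
             f"{{{EMMA_NS}}}mode": "gps",
             f"{{{EMMA_NS}}}start": str(timestamp_ms)})
        pos = ET.SubElement(interp, "position")
        ET.SubElement(pos, "latitude").text = str(lat)
        ET.SubElement(pos, "longitude").text = str(lon)
        return ET.tostring(emma, encoding="unicode")

    print(gps_to_emma(37.5665, 126.9780, 1714210000000))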

Design and Implementation of a Multimodal Input Device Using a Web Camera

  • Na, Jong-Whoa; Choi, Won-Suk; Lee, Dong-Woo
    • ETRI Journal, v.30 no.4, pp.621-623, 2008
  • We propose a novel input pointing device called the multimodal mouse (MM), which uses two modalities: face recognition and speech recognition. From an analysis of Microsoft Office workloads, we find that 80% of Microsoft Office Specialist test tasks are compound tasks using the keyboard and the mouse together. When we use the optical mouse (OM), operation is quick, but it requires a hand-exchange delay between the keyboard and the mouse, which takes up a significant amount of the total execution time. The MM operates more slowly than the OM, but it does not consume any hand-exchange time. As a result, the MM shows better performance than the OM in many cases (a back-of-the-envelope comparison follows this entry).
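A back-of-the-envelope Keystroke-Level Model comparison makes the trade-off concrete: the optical mouse pays two homing operations per compound task, while the multimodal mouse pays none but points more slowly. The KLM operator times below are the classic published estimates; the multimodal-mouse pointing time is an assumed value for illustration.

    # Rough KLM comparison of a compound type-then-point task. P_MM is assumed.
    K = 0.2     # keystroke (s), classic KLM estimate
    P = 1.1     # point with a device (s)
    H = 0.4     # home hands between keyboard and mouse (s)
    P_MM = 1.8  # assumed pointing time for the face/speech multimodal mouse (s)

    def compound_task_optical(n_keys: int) -> float:
        # type, move hand to mouse, point, move hand back
        return n_keys * K + H + P + H

    def compound_task_multimodal(n_keys: int) -> float:
        # type, then point hands-free: no homing
        return n_keys * K + P_MM

    for n in (3, 10):
        print(n, round(compound_task_optical(n), 2), round(compound_task_multimodal(n), 2))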

The Status and Research Themes of Speech based Multimodal Interface Technology (음성기반 멀티모달 인터페이스 기술 현황 및 과제)

  • Lee ChiGeun; Lee EunSuk; Lee HaeJung; Kim BongWan; Joung SukTae; Jung SungTae; Lee YongJoo; Han MoonSung
    • Proceedings of the KSPS conference, 2002.11a, pp.111-114, 2002
  • Complementary use of several modalities in human-to-human communication ensures high accuracy, and only a few communication problems occur. Therefore, the multimodal interface is considered the next-generation interface between human and computer. This paper presents the current status and research themes of speech-based multimodal interface technology. It first introduces the concept of a multimodal interface, then surveys recognition technologies for input modalities, synthesis technologies for output modalities, and modality integration technology. Finally, it presents research themes for speech-based multimodal interface technology.

Multimodal Attention-Based Fusion Model for Context-Aware Emotion Recognition

  • Vo, Minh-Cong; Lee, Guee-Sang
    • International Journal of Contents, v.18 no.3, pp.11-20, 2022
  • Human emotion recognition is an exciting topic that has attracted many researchers for a long time. In recent years, there has been increasing interest in exploiting contextual information for emotion recognition. Previous explorations in psychology show that emotional perception is affected by facial expressions as well as by contextual information from the scene, such as human activities, interactions, and body poses. Those explorations initiated a trend in computer vision of exploring the critical role of context by treating it as a modality for inferring the predicted emotion along with facial expressions. However, contextual information has not been fully exploited: the scene emotion created by the surrounding environment can shape how people perceive emotion. Besides, additive fusion in multimodal training is not practical, because the modalities do not contribute equally to the final prediction. The purpose of this paper is to contribute to this growing area of research by exploring the effectiveness of the emotional scene gist in the input image for inferring the emotional state of the primary target. The emotional scene gist includes emotion, emotional feelings, and actions or events that directly trigger emotional reactions in the input image. We also present an attention-based fusion network to combine multimodal features based on their impact on the target emotional state (a minimal fusion sketch follows this entry). We demonstrate the effectiveness of the method through a significant improvement on the EMOTIC dataset.
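The attention-based fusion step can be pictured as scoring each modality embedding, softmax-normalizing the scores, and taking the weighted sum; the NumPy sketch below is an illustrative stand-in (random vectors, a single scoring vector), not the paper's network.

    # Minimal sketch of attention-based fusion over modality features.
    import numpy as np

    def attention_fuse(features: dict, w: np.ndarray) -> np.ndarray:
        """features: modality name -> embedding (d,);  w: scoring vector (d,)."""
        names = list(features)
        scores = np.array([features[n] @ w for n in names])   # one scalar per modality
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                              # softmax
        print(dict(zip(names, np.round(weights, 3))))         # per-modality contribution
        return sum(wi * features[n] for wi, n in zip(weights, names))

    rng = np.random.default_rng(0)
    d = 8
    feats = {"face": rng.normal(size=d), "body": rng.normal(size=d), "scene": rng.normal(size=d)}
    fused = attention_fuse(feats, w=rng.normal(size=d))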

Design of the Multimodal Input System using Image Processing and Speech Recognition (음성인식 및 영상처리 기반 멀티모달 입력장치의 설계)

  • Choi, Won-Suk; Lee, Dong-Woo; Kim, Moon-Sik; Na, Jong-Whoa
    • Journal of Institute of Control, Robotics and Systems, v.13 no.8, pp.743-748, 2007
  • Recently, various types of camera mouse have been developed using image processing. The camera mouse shows limited performance compared to the traditional optical mouse in terms of response time and usability. These problems are caused by the mismatch between the size of the monitor and that of the active pixel area of the CMOS image sensor. To overcome these limitations, we designed a new input device that uses face recognition and speech recognition simultaneously. In the proposed system, the area of the monitor is partitioned into n zones. Face recognition is performed using the web camera, so that the mouse pointer follows the movement of the user's face within a particular zone, and the user can switch zones by speaking the name of the zone (a minimal sketch of this zone-based mapping follows this entry). The multimodal mouse is analyzed using the Keystroke-Level Model, and initial experiments were performed to evaluate the feasibility and performance of the proposed system.
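The zone-partitioned pointing scheme described above could be sketched as follows; the grid size, spoken zone names, and gain factor are assumptions for illustration, not the authors' implementation.

    # Minimal sketch: speech selects one of n screen zones, face movement nudges
    # the pointer within the active zone.
    MONITOR_W, MONITOR_H = 1920, 1080
    GRID_COLS, GRID_ROWS = 3, 2                      # n = 6 zones
    ZONE_NAMES = {"one": 0, "two": 1, "three": 2, "four": 3, "five": 4, "six": 5}

    class ZonePointer:
        def __init__(self):
            self.zone = 0
            self.x, self.y = self._zone_origin(0)

        def _zone_origin(self, zone):
            col, row = zone % GRID_COLS, zone // GRID_COLS
            return (col * MONITOR_W // GRID_COLS, row * MONITOR_H // GRID_ROWS)

        def on_speech(self, word):
            """Speech switches the active zone; the pointer jumps to its corner."""
            if word in ZONE_NAMES:
                self.zone = ZONE_NAMES[word]
                self.x, self.y = self._zone_origin(self.zone)

        def on_face_move(self, dx, dy, gain=4):
            """Face displacement (camera pixels) moves the pointer, clamped to the zone."""
            ox, oy = self._zone_origin(self.zone)
            self.x = min(max(self.x + gain * dx, ox), ox + MONITOR_W // GRID_COLS - 1)
            self.y = min(max(self.y + gain * dy, oy), oy + MONITOR_H // GRID_ROWS - 1)

    p = ZonePointer()
    p.on_speech("five")        # jump to zone 5
    p.on_face_move(12, -3)     # fine positioning with the face
    print(p.zone, p.x, p.y)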

Development of a multimodal interface for mobile phones (휴대폰용 멀티모달 인터페이스 개발 - 키패드, 모션, 음성인식을 결합한 멀티모달 인터페이스)

  • Kim, Won-Woo
    • Proceedings of the HCI Society of Korea Conference, 2008.02a, pp.559-563, 2008
  • The purpose of this paper is to introduce a multimodal interface for mobile phones and to verify its feasibility. The multimodal interface integrates multiple input methods, including speech, keypad, and motion. It can improve the speech recognition rate and time, and shorten the menu depth.

Using Spatial Ontology in the Semantic Integration of Multimodal Object Manipulation in Virtual Reality

  • Irawati, Sylvia; Calderon, Daniela; Ko, Hee-Dong
    • Proceedings of the HCI Society of Korea Conference, 2006.02a, pp.884-892, 2006
  • This paper describes a framework for multimodal object manipulation in virtual environments. The gist of the proposed framework is the semantic integration of multimodal input, using a spatial ontology and user context to combine the interpretation results from the inputs into a single interpretation. The spatial ontology, describing the spatial relationships between objects, is used together with the current user context to resolve ambiguities in the user's commands (a minimal disambiguation sketch follows this entry). These commands are used to reposition objects in the virtual environment. We discuss how the spatial ontology is defined and used to assist the user in performing object placements in the virtual environment as they would be done in the real world.
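To make the disambiguation step concrete, the Python sketch below resolves an ambiguous object reference using a small table of spatial relations plus user context; the object names, relations, and command format are invented for illustration and are not the paper's ontology.

    # Minimal sketch: resolve an ambiguous reference such as "the lamp near the sofa"
    # with spatial relations, falling back to user context (last touched object).
    SPATIAL_RELATIONS = [            # (object instance, relation, reference object)
        ("lamp_1", "on", "desk"),
        ("lamp_2", "near", "sofa"),
        ("book_1", "on", "sofa"),
    ]

    def resolve(object_type, relation=None, reference=None, last_touched=None):
        """Pick one object instance for a command like 'move the lamp near the sofa'."""
        candidates = [o for (o, r, ref) in SPATIAL_RELATIONS if o.startswith(object_type)]
        if relation and reference:
            candidates = [o for (o, r, ref) in SPATIAL_RELATIONS
                          if o.startswith(object_type) and r == relation and ref == reference]
        if len(candidates) > 1 and last_touched in candidates:
            return last_touched      # ambiguity resolved by user context
        return candidates[0] if candidates else None

    print(resolve("lamp", "near", "sofa"))            # -> 'lamp_2' via the spatial relation
    print(resolve("lamp", last_touched="lamp_1"))     # -> 'lamp_1' via user context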
