Title/Summary/Keyword: Multimodal Interaction

Design of Parallel Input Pattern and Synchronization Method for Multimodal Interaction (멀티모달 인터랙션을 위한 사용자 병렬 모달리티 입력방식 및 입력 동기화 방법 설계)

  • Im, Mi-Jeong;Park, Beom
    • Journal of the Ergonomics Society of Korea, v.25 no.2, pp.135-146, 2006
  • Multimodal interfaces are recognition-based technologies that interpret and encode hand gestures, eye gaze, movement patterns, speech, physical location, and other natural human behaviors. A modality is the type of communication channel used for interaction; the term also covers the way an idea is expressed or perceived, and the manner in which an action is performed. Multimodal interfaces embody the multimodal interaction processes that occur, consciously or unconsciously, in communication between human and computer, so their input/output forms differ from those of existing interfaces. Moreover, people differ in cognitive style, and individual preferences play a role in the selection of one input mode over another. Therefore, to design effective multimodal user interfaces, the input/output structure needs to be formulated through research on human cognition. This paper analyzes the characteristics of each human modality and suggests combination types of modalities and dual coding for formulating multimodal interaction. It then designs a multimodal language and an input synchronization method according to the granularity of input synchronization. To effectively guide the development of next-generation multimodal interfaces, substantial cognitive modeling will be needed to understand the temporal and semantic relations between different modalities, their joint functionality, and their overall potential for supporting computation in different forms. This paper is expected to show multimodal interface designers how to organize and integrate human input modalities while interacting with multimodal interfaces.
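
To make the idea of input-synchronization granularity concrete, the sketch below groups parallel modality events that arrive within one synchronization window. This is a minimal Python illustration under assumed names (ModalityEvent, fuse_parallel_inputs, a 300 ms window), not the paper's actual design.

    # Illustrative sketch (not the paper's implementation): fusing parallel
    # modality inputs whose timestamps fall within one synchronization window.
    from dataclasses import dataclass

    @dataclass
    class ModalityEvent:
        modality: str     # e.g. "speech", "gesture", "gaze"
        payload: str      # recognized token, e.g. "point:table"
        timestamp: float  # seconds since session start

    def fuse_parallel_inputs(events, window=0.3):
        """Group events arriving within `window` seconds of the first event
        in a group; each group forms one multimodal command."""
        groups = []
        for ev in sorted(events, key=lambda e: e.timestamp):
            if groups and ev.timestamp - groups[-1][0].timestamp <= window:
                groups[-1].append(ev)
            else:
                groups.append([ev])
        return groups

    # Example: speech "put that there" overlapping a pointing gesture.
    events = [
        ModalityEvent("speech", "put-there", 1.00),
        ModalityEvent("gesture", "point:table", 1.12),
        ModalityEvent("speech", "rotate", 2.50),
    ]
    for group in fuse_parallel_inputs(events):
        print([(e.modality, e.payload) for e in group])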

Multimodal Interaction on Automultiscopic Content with Mobile Surface Haptics

  • Kim, Jin Ryong;Shin, Seunghyup;Choi, Seungho;Yoo, Yeonwoo
    • ETRI Journal, v.38 no.6, pp.1085-1094, 2016
  • In this work, we present interactive automultiscopic content with mobile surface haptics for multimodal interaction. Our system consists of a 40-view automultiscopic display and a tablet supporting surface haptics in an immersive room. Animated graphics are projected onto the walls of the room, and the 40-view automultiscopic display is placed at the center of the front wall. The haptic tablet is installed at a mobile station so that the user can interact through it. Real-time 40-view rendering and multiplexing is achieved by establishing virtual cameras in a convergence layout, and surface-haptics rendering is synchronized with the three-dimensional (3D) objects on the display for real-time haptic interaction. We conducted an experiment to evaluate user experience with the proposed system. The results demonstrate that the system's multimodal interaction provides positive user experiences of immersion, control, user-interface intuitiveness, and 3D effects.
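
The synchronization of surface-haptics rendering with on-screen 3D objects can be pictured as a per-frame mapping from touch-to-object distance to actuator intensity. The Python sketch below is purely illustrative; the function names, normalized coordinates, and linear falloff are assumptions, not the ETRI system's algorithm.

    # Illustrative sketch (not the ETRI system): synchronizing a surface-haptics
    # effect with the on-screen position of a 3D object on each render frame.
    import math

    def haptic_intensity(touch_xy, object_xy, falloff=0.15):
        """Stronger feedback the closer the finger is to the projected object."""
        dist = math.dist(touch_xy, object_xy)  # both in normalized [0,1] coords
        return max(0.0, 1.0 - dist / falloff)

    def render_loop(frames):
        for touch_xy, object_xy in frames:     # one (touch, object) pair per frame
            intensity = haptic_intensity(touch_xy, object_xy)
            # A real system would drive the tablet's actuator here;
            # we only report the value that would be sent.
            print(f"haptic intensity: {intensity:.2f}")

    render_loop([((0.50, 0.50), (0.52, 0.49)),   # finger on the object
                 ((0.10, 0.90), (0.52, 0.49))])  # finger far away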

The Effects of Multimodal Sensory Stimulation Combined with Chiropractic Therapy on Growth and Mother-Infant Interaction in Infants with Low Birth Weight (통합감각자극이 저체중아의 성장 및 모아 상호작용에 미치는 효과)

  • Jang, Gun-Ja
    • Child Health Nursing Research, v.13 no.1, pp.33-42, 2007
  • Purpose: This study was conducted to investigate the effects of multimodal sensory stimulation on growth and mother-infant interaction in infants with low birth weight (LBW). Method: A non-equivalent control group time-series design was used. The participants were 38 LBW infants and their mothers (19 in the intervention group and 19 in the control group). Data were collected from September 1, 2003 to March 31, 2004. The researcher instructed mothers in the intervention group in multimodal sensory stimulation therapy, and these mothers then applied the techniques to their infants once a day during the 4-week research period. The researcher measured the weight, length, and head circumference of the LBW infants once a week for 4 weeks and filmed each mother playing with her infant for 5 minutes during the last week of the research period. Results: Compared to the control group, LBW infants in the intervention group showed significant increases in weekly weight gain (F=3.82, p=.012) and had significantly higher scores for mother-infant interaction (t=3.93, p<.001). Conclusion: The results suggest that multimodal sensory stimulation therapy can be used to increase the growth of LBW infants and improve mother-infant interaction.

The Effect of AI Agent's Multi Modal Interaction on the Driver Experience in the Semi-autonomous Driving Context : With a Focus on the Existence of Visual Character (반자율주행 맥락에서 AI 에이전트의 멀티모달 인터랙션이 운전자 경험에 미치는 효과 : 시각적 캐릭터 유무를 중심으로)

  • Suh, Min-soo;Hong, Seung-Hye;Lee, Jeong-Myeong
    • The Journal of the Korea Contents Association, v.18 no.8, pp.92-101, 2018
  • As interactive AI speakers become popular, voice recognition is regarded as an important vehicle-driver interaction method for autonomous driving situations. The purpose of this study is to determine whether multimodal interaction, in which feedback is delivered both aurally and through a visual AI character on screen, optimizes user experience better than the auditory mode alone. Participants performed music selection and adjustment tasks through the AI speaker while driving, and we measured information and system quality, presence, perceived usefulness and ease of use, and continuance intention. The analysis showed no multimodal effect of the visual character on most user experience factors, nor any effect on continuance intention. Rather, the auditory-only mode proved more effective than the multimodal mode on the information quality factor. In the semi-autonomous driving stage, which demands the driver's cognitive effort, multimodal interaction is therefore not effective for optimizing user experience compared with single-mode interaction.

Using Spatial Ontology in the Semantic Integration of Multimodal Object Manipulation in Virtual Reality

  • Irawati, Sylvia;Calderon, Daniela;Ko, Hee-Dong
    • Proceedings of the HCI Society of Korea Conference, 2006.02a, pp.884-892, 2006
  • This paper describes a framework for multimodal object manipulation in virtual environments. The gist of the proposed framework is the semantic integration of multimodal input using a spatial ontology and user context, which together merge the interpretation results of the individual inputs into a single interpretation. The spatial ontology, describing the spatial relationships between objects, is used together with the current user context to resolve ambiguities in the user's commands. These commands are used to reposition objects in the virtual environment. We discuss how the spatial ontology is defined and used to help the user perform object placements in the virtual environment as they would be performed in the real world.
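
A toy version of the paper's idea, resolving an underspecified manipulation command against stored spatial relations and falling back to user context such as gaze, might look like the Python sketch below; the ontology encoding and all names are assumptions, not the authors' framework.

    # Illustrative sketch (not the authors' framework): simple spatial relations
    # plus user context used to disambiguate an underspecified command.
    SPATIAL_ONTOLOGY = [          # (object instance, relation, reference object)
        ("cup1", "on", "table"),
        ("cup2", "on", "shelf"),
        ("lamp1", "near", "sofa"),
    ]

    def resolve(noun, relation=None, reference=None, gaze_target=None):
        """Find instances of `noun` satisfying the stated relation; fall back
        to user context (here, a gaze target) if the command stays ambiguous."""
        candidates = [o for (o, r, ref) in SPATIAL_ONTOLOGY
                      if o.startswith(noun)
                      and (relation is None or r == relation)
                      and (reference is None or ref == reference)]
        if len(candidates) == 1:
            return candidates[0]
        return gaze_target        # ambiguous: defer to user context

    print(resolve("cup", "on", "table"))        # -> cup1 (ontology decides)
    print(resolve("cup", gaze_target="cup2"))   # -> cup2 (gaze disambiguates)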

W3C based Interoperable Multimodal Communicator (W3C 기반 상호연동 가능한 멀티모달 커뮤니케이터)

  • Park, Daemin;Gwon, Daehyeok;Choi, Jinhuyck;Lee, Injae;Choi, Haechul
    • Journal of Broadcast Engineering, v.20 no.1, pp.140-152, 2015
  • HCI (Human Computer Interaction) enables interaction between people and computers through human-familiar interfaces called modalities. Recently, advanced HCI methods that use multiple modalities have been studied intensively in order to provide an optimal interface for various devices and service environments. However, multimodal interfaces face the difficulty that modalities have different data formats and are hard to coordinate efficiently. To solve this problem, a multimodal communicator is introduced that is based on EMMA (Extensible MultiModal Annotation markup language) and the MMI (Multimodal Interaction Framework), both W3C (World Wide Web Consortium) standards. This standards-based framework, consisting of modality components, an interaction manager, and a presentation component, makes multiple modalities interoperable and provides wide expansion capability toward other modalities. Experimental results show the multimodal communicator operating with eye-tracking and gesture-recognition modalities in a map-browsing scenario.
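
As a rough illustration of how a recognizer result can be wrapped in an EMMA-style annotation so the interaction manager can treat modalities uniformly, consider the Python sketch below. The element and attribute names follow EMMA 1.0, but the document shape is simplified and the helper function is hypothetical.

    # Minimal sketch: wrapping recognizer output in an EMMA 1.0-style annotation
    # so an interaction manager can treat different modalities uniformly.
    # The document shape is simplified relative to the full EMMA specification.
    import xml.etree.ElementTree as ET

    EMMA_NS = "http://www.w3.org/2003/04/emma"
    ET.register_namespace("emma", EMMA_NS)

    def emma_interpretation(mode, medium, value, confidence):
        root = ET.Element(f"{{{EMMA_NS}}}emma", {"version": "1.0"})
        interp = ET.SubElement(root, f"{{{EMMA_NS}}}interpretation", {
            f"{{{EMMA_NS}}}mode": mode,        # e.g. "gaze", "gesture"
            f"{{{EMMA_NS}}}medium": medium,    # e.g. "visual", "tactile"
            f"{{{EMMA_NS}}}confidence": str(confidence),
        })
        interp.text = value
        return ET.tostring(root, encoding="unicode")

    # A gaze fixation on a map region, as in the paper's map-browsing scenario.
    print(emma_interpretation("gaze", "visual", "region:Daejeon", 0.87))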

Using Spatial Ontology in the Semantic Integration of Multimodal Object Manipulation in Virtual Reality

  • Irawati, Sylvia;Calderon, Daniela;Ko, Hee-Dong
    • Journal of the HCI Society of Korea, v.1 no.1, pp.9-20, 2006
  • This paper describes a framework for multimodal object manipulation in virtual environments. The gist of the proposed framework is the semantic integration of multimodal input using a spatial ontology and user context, which together merge the interpretation results of the individual inputs into a single interpretation. The spatial ontology, describing the spatial relationships between objects, is used together with the current user context to resolve ambiguities in the user's commands. These commands are used to reposition objects in the virtual environment. We discuss how the spatial ontology is defined and used to help the user perform object placements in the virtual environment as they would be performed in the real world.

Design and Implementation of Multimodal Middleware for Mobile Environments (모바일 환경을 위한 멀티모달 미들웨어의 설계 및 구현)

  • Park, Seong-Soo;Ahn, Se-Yeol;Kim, Won-Woo;Koo, Myoung-Wan;Park, Sung-Chan
    • MALSORI, no.60, pp.125-144, 2006
  • The W3C announced a standard software architecture for multimodal, context-aware middleware that emphasizes modularity and separates structure, content, and presentation. We implemented a distributed multimodal interface system that follows the W3C architecture and is based on SCXML. SCXML uses parallel states to invoke both XHTML and VoiceXML content as well as to gather composite or sequential multimodal inputs through man-machine interaction. We also employ a Delivery Context Interface (DCI) module and an external service bundle that enable the middleware to support context-awareness services in real-world environments. The provision of personalized user interfaces is expected to serve mobile devices with a wide variety of capabilities and interaction modalities. Our experiments demonstrated that the implemented middleware can maintain multimodal scenarios in a clear, concise, and consistent manner.
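
SCXML's parallel states let a voice dialog and a GUI dialog run as concurrent regions that must both reach a final state. The Python sketch below is a rough analogue of that behavior under assumed state and event names; it is not the paper's middleware.

    # Rough Python analogue of SCXML <parallel>: two modality regions run
    # concurrently, and the composite state completes only when both are done.
    class Region:
        def __init__(self, name, final_state, initial):
            self.name, self.final_state, self.state = name, final_state, initial

        def handle(self, event):
            # Hypothetical one-transition-per-event table, for brevity.
            transitions = {
                ("listening", "speech_done"): "done",
                ("displaying", "form_submitted"): "done",
            }
            self.state = transitions.get((self.state, event), self.state)

        def finished(self):
            return self.state == self.final_state

    voice = Region("voice", "done", initial="listening")   # VoiceXML-like region
    gui = Region("gui", "done", initial="displaying")      # XHTML-like region

    for event in ["speech_done", "form_submitted"]:
        for region in (voice, gui):
            region.handle(event)
        print(event, "->", voice.state, gui.state)

    print("composite complete:", voice.finished() and gui.finished())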

Multimodal Interaction Framework for Collaborative Augmented Reality in Education

  • Asiri, Dalia Mohammed Eissa;Allehaibi, Khalid Hamed;Basori, Ahmad Hoirul
    • International Journal of Computer Science & Network Security, v.22 no.7, pp.268-282, 2022
  • Augmented reality (AR) is one of the most important technologies today; it allows users to experience the real world combined with virtual objects. AR is engaging and has been applied in many sectors, such as shopping and medicine, and it has also entered education, where it is now widely used because of its effectiveness: among its many benefits, it arouses students' interest in learning imaginative concepts that are difficult to understand. Studies have also shown that collaboration between students increases learning opportunities through the exchange of information, an approach known as collaborative learning. Multimodal input creates a distinctive and engaging experience, especially for students, because it increases users' interaction with the technology. This research aims to improve the achievement of 6th graders by designing a framework that integrates collaborative learning with multimodal input (hand gesture and touch), with attention to making the AR framework effective, fun, and easy to use. The framework was applied to reformulate the genetics-and-traits lesson of the 6th-grade science textbook (first semester, second lesson) in an interactive manner, using a video created in consultation with science teachers and a puzzle game into which the lesson images were inserted; the framework also relied on cooperation between students to solve the questions. The findings showed a significant difference between the experimental group's post-test and pre-test mean scores in the science course at the levels of remembering, understanding, and applying, indicating the success of the framework. In addition, 43 students preferred using the framework over traditional instruction.

A Multimodal Interface for Telematics based on Multimodal middleware (미들웨어 기반의 텔레매틱스용 멀티모달 인터페이스)

  • Park, Sung-Chan;Ahn, Se-Yeol;Park, Seong-Soo;Koo, Myoung-Wan
    • Proceedings of the KSPS conference, 2007.05a, pp.41-44, 2007
  • In this paper, we introduce a system in which a car navigation scenario is plugged into a multimodal interface built on multimodal middleware. In a map-based system, combining the speech and pen input/output modalities can offer users better expressive power. To carry out multimodal tasks in car environments, we chose SCXML (State Chart XML), a multimodal authoring language standardized by the W3C, to control modality components such as XHTML, VoiceXML, and GPS. In the Network Manager, GPS signals from the navigation software are converted to the EMMA meta language and sent to the Multimodal Interaction Runtime Framework (MMI). MMI not only handles the GPS signals and the user's multimodal I/O but also combines them with device information, user preferences, and reasoned RDF to give the user intelligent, personalized services. A self-simulation test showed that the middleware accomplishes a navigational multimodal task for multiple users in car environments.
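
A highly simplified picture of the Network Manager's role, converting a GPS fix into an EMMA-like message and combining it with a spoken request in an interaction manager, is sketched below in Python; every name and message shape here is an assumption, not the paper's implementation.

    # Illustrative sketch (not the paper's Network Manager): a GPS fix becomes
    # an EMMA-like message and is fused with speech input by a manager object.
    def gps_to_emma(lat, lon, confidence=1.0):
        return {"mode": "gps", "medium": "sensor",
                "confidence": confidence,
                "value": {"lat": lat, "lon": lon}}

    class InteractionManager:
        """Combines modality messages into one navigation command."""
        def __init__(self):
            self.context = {}

        def receive(self, msg):
            self.context[msg["mode"]] = msg["value"]
            if {"gps", "speech"} <= self.context.keys():
                return f"route from {self.context['gps']} to {self.context['speech']!r}"
            return None             # still waiting for the other modality

    mmi = InteractionManager()
    mmi.receive(gps_to_emma(36.35, 127.38))              # current position
    print(mmi.receive({"mode": "speech", "medium": "acoustic",
                       "value": "nearest gas station"})) # spoken request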
