• Title/Abstract/Keyword: multimodal interaction

58 search results

Design of Parallel Input Pattern and Synchronization Method for Multimodal Interaction

  • 임미정;박범
    • Journal of the Ergonomics Society of Korea / Vol. 25, No. 2 / pp.135-146 / 2006
  • Multimodal interfaces are recognition-based technologies that interpret and encode hand gestures, eye gaze, movement patterns, speech, physical location, and other natural human behaviors. A modality is the type of communication channel used for interaction; it also covers the way an idea is expressed or perceived, or the manner in which an action is performed. Multimodal interfaces constitute the multimodal interaction processes that occur, consciously or unconsciously, while communicating between human and computer, so their input/output forms differ from those of existing interfaces. Moreover, different people show different cognitive styles, and individual preferences play a role in the selection of one input mode over another. Therefore, to develop an effective design for multimodal user interfaces, the input/output structure needs to be formulated through research on human cognition. This paper analyzes the characteristics of each human modality and suggests combination types of modalities and dual coding for formulating multimodal interaction. It then designs a multimodal language and an input synchronization method according to the granularity of input synchronization. To effectively guide the development of next-generation multimodal interfaces, substantial cognitive modeling will be needed to understand the temporal and semantic relations between different modalities, their joint functionality, and their overall potential for supporting computation in different forms. This paper is expected to show multimodal interface designers how to organize and integrate human input modalities when interacting with multimodal interfaces.
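
Although the paper is summarized only at this level, the core idea of synchronizing parallel modality inputs by time granularity lends itself to a small sketch. Below is a minimal, hypothetical Python illustration assuming a simple time-window fusion rule; the event names, window length, and grouping policy are illustrative assumptions, not the paper's actual design.

```python
from dataclasses import dataclass

@dataclass
class InputEvent:
    modality: str     # e.g. "speech", "gesture", "gaze"
    payload: str      # recognized content
    timestamp: float  # seconds

# Hypothetical fusion window: events from different modalities arriving
# within this interval are treated as one parallel (dual-coded) input.
FUSION_WINDOW_S = 1.5

def fuse(events: list[InputEvent]) -> list[list[InputEvent]]:
    """Group events whose timestamps fall within one fusion window."""
    groups: list[list[InputEvent]] = []
    for ev in sorted(events, key=lambda e: e.timestamp):
        if groups and ev.timestamp - groups[-1][0].timestamp <= FUSION_WINDOW_S:
            groups[-1].append(ev)   # parallel input: same window
        else:
            groups.append([ev])     # sequential input: new window
    return groups

if __name__ == "__main__":
    stream = [
        InputEvent("speech", "move that here", 0.2),
        InputEvent("gesture", "point(obj_3)", 0.6),
        InputEvent("speech", "zoom in", 4.0),
    ]
    for group in fuse(stream):
        print([(e.modality, e.payload) for e in group])
```

Under this toy policy, the speech command and the pointing gesture at 0.2 s and 0.6 s fuse into one composite input, while the later utterance starts a new window.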

Multimodal Interaction on Automultiscopic Content with Mobile Surface Haptics

  • Kim, Jin Ryong;Shin, Seunghyup;Choi, Seungho;Yoo, Yeonwoo
    • ETRI Journal / Vol. 38, No. 6 / pp.1085-1094 / 2016
  • In this work, we present interactive automultiscopic content with mobile surface haptics for multimodal interaction. Our system consists of a 40-view automultiscopic display and a tablet supporting surface haptics in an immersive room. Animated graphics are projected onto the walls of the room. The 40-view automultiscopic display is placed at the center of the front wall. The haptic tablet is installed at the mobile station to enable the user to interact with the tablet. The 40-view real-time rendering and multiplexing technology is applied by establishing virtual cameras in the convergence layout. Surface haptics rendering is synchronized with three-dimensional (3D) objects on the display for real-time haptic interaction. We conduct an experiment to evaluate user experiences of the proposed system. The results demonstrate that the system's multimodal interaction provides positive user experiences of immersion, control, user interface intuitiveness, and 3D effects.
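
As a rough illustration of the display-haptics synchronization the abstract describes, the following hypothetical sketch triggers a haptic (friction) effect when the tablet touch point falls inside the screen-space bounds of a rendered 3D object. The object data, bounding boxes, and friction mapping are invented for illustration; the paper's actual rendering and multiplexing pipeline is not shown.

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    # Screen-space bounding box of the rendered 3D object: (x0, y0, x1, y1)
    bounds: tuple[float, float, float, float]
    friction: float  # haptic texture parameter to render on contact

def object_under_touch(objects: list[SceneObject], x: float, y: float):
    """Return the object whose projected bounds contain the touch point."""
    for obj in objects:
        x0, y0, x1, y1 = obj.bounds
        if x0 <= x <= x1 and y0 <= y <= y1:
            return obj
    return None

def on_touch(objects: list[SceneObject], x: float, y: float) -> float:
    """Per-frame haptic update: map the touched object to a friction level."""
    obj = object_under_touch(objects, x, y)
    return obj.friction if obj else 0.0  # 0.0 = no haptic effect

scene = [SceneObject("cube", (100, 100, 300, 300), friction=0.8)]
print(on_touch(scene, 150, 200))  # touching the cube -> 0.8
print(on_touch(scene, 500, 500))  # empty space -> 0.0
```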

The Effects of Multimodal Sensory Stimulation Combined with Chiropractic Therapy on Growth and Mother-Infant Interaction in Infants with Low Birth Weight

  • 장군자
    • Child Health Nursing Research / Vol. 13, No. 1 / pp.33-42 / 2007
  • Purpose: This study was conducted to investigate the effects of multimodal sensory stimulation on growth and mother-infant interaction in infants with low birth weight (LBW). Method: A non-equivalent control group time-series design was used. The participants were 38 LBW infants and their mothers (19 in the intervention group and 19 in the control group). Data were collected from September 1, 2003 to March 31, 2004. The researcher instructed mothers in the intervention group in multimodal sensory stimulation therapy, and the mothers then applied these techniques to their infants once a day during the 4-week research period. The researcher measured the weight, length, and head circumference of the LBW infants once a week for 4 weeks and filmed the mother playing with her infant for 5 minutes in the last week of the research period. Results: Compared to the control group, LBW infants in the intervention group showed significantly greater weekly weight gain (F=3.82, p=.012) and significantly higher scores for mother-infant interaction (t=3.93, p<.001). Conclusion: The results suggest that multimodal sensory stimulation therapy can be used to increase the growth of LBW infants and improve mother-infant interaction.


The Effect of AI Agent's Multimodal Interaction on the Driver Experience in the Semi-autonomous Driving Context: With a Focus on the Existence of Visual Character

  • 서민수;홍승혜;이정명
    • The Journal of the Korea Contents Association / Vol. 18, No. 8 / pp.92-101 / 2018
  • As conversational AI speakers become widespread, voice recognition is regarded as an important vehicle-driver interaction method in autonomous driving situations. The purpose of this study is to determine whether, in a semi-autonomous driving situation, multimodal interaction that delivers an AI character's visual feedback together with voice is more effective for optimizing user experience than voice-only single-mode interaction. Participants performed interaction tasks for selecting and controlling music through an AI speaker and character while driving, and information and system quality, presence, perceived usefulness and ease of use, and intention to continue using were measured. Mean-difference analysis showed no multimodal effect of the visual character for most user-experience factors, nor any effect on intention to continue using. Rather, the voice-only single mode proved more effective than the multimodal mode on the information-quality factor. At the semi-autonomous driving stage, which requires the driver's cognitive effort, multimodal interaction was not more effective than single-mode interaction for optimizing user experience.

Using Spatial Ontology in the Semantic Integration of Multimodal Object Manipulation in Virtual Reality

  • Irawati, Sylvia;Calderon, Daniela;Ko, Hee-Dong
    • Proceedings of the HCI Society of Korea Conference 2006, Part 1 / pp.884-892 / 2006
  • This paper describes a framework for multimodal object manipulation in virtual environments. The gist of the proposed framework is the semantic integration of multimodal input, using a spatial ontology and user context to merge the interpretation results from the individual inputs into a single interpretation. The spatial ontology, which describes the spatial relationships between objects, is used together with the current user context to resolve ambiguities in the user's commands. These commands are used to reposition objects in the virtual environment. We discuss how the spatial ontology is defined and used to help the user perform object placements in the virtual environment as they would be performed in the real world.
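
To make the idea concrete, here is a minimal hypothetical sketch of ontology-assisted disambiguation: a deictic command ("put the cup on it") is resolved against stored spatial relations and the user's current context. The relation names, objects, and resolution rule are illustrative assumptions, not the paper's actual ontology.

```python
# Spatial ontology as (subject, relation, object) triples: an illustrative toy.
ONTOLOGY = {
    ("book", "on", "desk"),
    ("cup", "on", "desk"),
    ("desk", "in", "room"),
}

def supports(obj: str) -> bool:
    """An object can support others if something is already 'on' it."""
    return any(rel == "on" and target == obj for _, rel, target in ONTOLOGY)

def resolve_referent(user_focus: str, candidates: list[str]) -> str:
    """Resolve an ambiguous referent: prefer the object the user last
    looked at or pointed to (user context), then fall back to the first
    candidate that the ontology says can act as a support surface."""
    if user_focus in candidates:
        return user_focus
    for cand in candidates:
        if supports(cand):
            return cand
    return candidates[0]

# "Put the cup on it" -- 'it' could be the desk or the book.
print(resolve_referent(user_focus="desk", candidates=["book", "desk"]))
```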


W3C-based Interoperable Multimodal Communicator

  • 박대민;권대혁;최진혁;이인재;최해철
    • Journal of Broadcast Engineering / Vol. 20, No. 1 / pp.140-152 / 2015
  • Recently, to support HCI (Human Computer Interaction) research enabling two-way interaction between users and computers, interface technologies resembling the human communication system have been developed. A communication channel used in such human communication is called a modality, and multimodal interfaces that exploit two or more modalities are being actively studied in order to provide an optimal user interface across diverse devices and service environments. However, because each modality carries information in a different format, modalities are difficult to interoperate and their complementary performance is limited. Accordingly, this paper proposes a multimodal communicator that can interoperate multiple modalities, based on the W3C (World Wide Web Consortium) EMMA (Extensible Multimodal Annotation Markup language) and MMI (Multimodal Interaction Framework) standards. The multimodal communicator consists of the MC (Modality Component), IM (Interaction Manager), and PC (Presentation Component) defined in the W3C standards; because it is designed on international standards, it easily accommodates and extends various modalities. The experiments present a case in which the multimodal communicator is applied to a map-exploration scenario using eye-tracking and gesture-recognition modalities.
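
For readers unfamiliar with the standards named here, the sketch below shows roughly what an EMMA annotation for one recognized input might look like when built programmatically. It is a simplified illustration: the element and attribute names follow the W3C EMMA 1.0 specification, but the payload, mode, and confidence values are invented.

```python
import xml.etree.ElementTree as ET

EMMA_NS = "http://www.w3.org/2003/04/emma"
ET.register_namespace("emma", EMMA_NS)

def emma_interpretation(mode: str, medium: str, confidence: float, tokens: str) -> str:
    """Build a minimal EMMA 1.0 document for a single recognized input."""
    root = ET.Element(f"{{{EMMA_NS}}}emma", {"version": "1.0"})
    interp = ET.SubElement(
        root,
        f"{{{EMMA_NS}}}interpretation",
        {
            f"{{{EMMA_NS}}}medium": medium,        # e.g. acoustic, visual
            f"{{{EMMA_NS}}}mode": mode,            # e.g. voice, gaze, gesture
            f"{{{EMMA_NS}}}confidence": str(confidence),
            f"{{{EMMA_NS}}}tokens": tokens,
        },
    )
    command = ET.SubElement(interp, "command")    # application payload
    command.text = tokens
    return ET.tostring(root, encoding="unicode")

# A gaze-modality event annotated in the common EMMA format, so the
# Interaction Manager can treat it like any other modality component's output.
print(emma_interpretation("gaze", "visual", 0.92, "fixate map_cell_B4"))
```

Annotating every modality's output in one common format like this is what lets the IM route and combine heterogeneous inputs without per-modality glue code.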

Using Spatial Ontology in the Semantic Integration of Multimodal Object Manipulation in Virtual Reality

  • Irawati, Sylvia;Calderon, Daniela;Ko, Hee-Dong
    • Journal of the HCI Society of Korea / Vol. 1, No. 1 / pp.9-20 / 2006
  • This paper describes a framework for multimodal object manipulation in virtual environments. The gist of the proposed framework is the semantic integration of multimodal input, using a spatial ontology and user context to merge the interpretation results from the individual inputs into a single interpretation. The spatial ontology, which describes the spatial relationships between objects, is used together with the current user context to resolve ambiguities in the user's commands. These commands are used to reposition objects in the virtual environment. We discuss how the spatial ontology is defined and used to help the user perform object placements in the virtual environment as they would be performed in the real world.


Design and Implementation of Multimodal Middleware for Mobile Environments

  • 박성수;안세열;김원우;구명완;박성찬
    • MalSori (말소리), Journal of the Korean Society of Phonetic Sciences and Speech Technology / No. 60 / pp.125-144 / 2006
  • W3C announced a standard software architecture for multimodal, context-aware middleware that emphasizes modularity and separates structure, content, and presentation. We implemented a distributed multimodal interface system that follows the W3C architecture, based on SCXML. SCXML uses parallel states to invoke both XHTML and VoiceXML content and to gather composite or sequential multimodal inputs through man-machine interactions. We also employ a Delivery Context Interface (DCI) module and an external service bundle that enable the middleware to support context-aware services in real-world environments. The provision of personalized user interfaces is expected to serve mobile devices with a wide variety of capabilities and interaction modalities. We demonstrated through experiments that the implemented middleware could maintain multimodal scenarios in a clear, concise, and consistent manner.
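
The parallel-state idea mentioned above, gathering a composite input from several modalities that each complete independently, can be sketched without an SCXML engine. The following hypothetical Python illustration mimics an SCXML `<parallel>` region that finishes only when every child region has reached a final state; the region names and events are invented.

```python
# Toy model of an SCXML <parallel> region: the composite input is ready
# only when every child region (one per modality) reaches its final state.
class ParallelRegion:
    def __init__(self, regions: list[str]):
        self.pending = set(regions)   # regions still waiting for input
        self.results: dict[str, str] = {}

    def handle(self, region: str, event: str) -> None:
        """Deliver an event to one child region, moving it to 'done'."""
        if region in self.pending:
            self.results[region] = event
            self.pending.discard(region)

    @property
    def done(self) -> bool:
        return not self.pending

composite = ParallelRegion(["xhtml_form", "voicexml_dialog"])
composite.handle("voicexml_dialog", "utterance: 'seoul station'")
composite.handle("xhtml_form", "tap: map(127.0, 37.5)")
if composite.done:
    print("composite input:", composite.results)
```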


Multimodal Interaction Framework for Collaborative Augmented Reality in Education

  • Asiri, Dalia Mohammed Eissa;Allehaibi, Khalid Hamed;Basori, Ahmad Hoirul
    • International Journal of Computer Science & Network Security / Vol. 22, No. 7 / pp.268-282 / 2022
  • Augmented reality (AR) is one of the most important technologies today; it allows users to experience the real world combined with virtual objects. The technology has been applied in many sectors, such as shopping and medicine, and has also been adopted in education, where it is widely used because of its effectiveness: it offers benefits such as arousing students' interest in imaginative concepts that are difficult to understand. Studies have also shown that collaboration between students increases learning opportunities through the exchange of information, which is known as collaborative learning. Multimodal input creates a distinctive and engaging experience, especially for students, as it increases users' interaction with the technology. This research aims to improve the achievement of 6th graders by designing a framework that integrates collaborative learning with multimodal input (hand gesture and touch), with attention to making the AR framework effective, fun, and easy to use. The framework was applied to reformulate the genetics and traits lesson from the 6th-grade science textbook (first semester, second lesson) in an interactive manner, by creating a video based on science teachers' consultations and a puzzle game into which the game images were inserted; the framework also relied on cooperation between students to solve the questions. The findings showed a significant difference between the experimental group's post-test and pre-test mean scores in the science course at the levels of remembering, understanding, and applying, indicating the success of the framework; in addition, 43 students preferred using the framework over traditional education.

A Multimodal Interface for Telematics Based on Multimodal Middleware

  • 박성찬;안세열;박성수;구명완
    • Proceedings of the 2007 Joint Conference of the Korean Society of Phonetic Sciences and Speech Technology (대한음성학회) and the Korean Association of Speech Sciences (한국음성과학회) / pp.41-44 / 2007
  • In this paper, we introduce a system in which a car-navigation scenario is plugged into a multimodal interface based on multimodal middleware. In a map-based system, combining speech and pen input/output modalities can offer users greater expressive power. To achieve multimodal tasks in car environments, we chose SCXML (State Chart XML), a W3C-standard multimodal authoring language, to control modality components such as XHTML, VoiceXML, and GPS. In the Network Manager, GPS signals from the navigation software are converted to the EMMA metalanguage and sent to the Multimodal Interaction Runtime Framework (MMI). Not only does the MMI handle GPS signals and the user's multimodal I/O, it also combines them with device information, user preferences, and reasoned RDF to give the user intelligent, personalized services. A self-simulation test showed that the middleware accomplishes navigational multimodal tasks for multiple users in car environments.
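
To illustrate the data flow the abstract outlines (GPS reading → EMMA annotation → fusion in the interaction manager), here is a minimal hypothetical sketch. The dictionary-based EMMA stand-in, field names, and the preference-based routing rule are assumptions for illustration, not the paper's implementation.

```python
import time

def gps_to_emma_dict(lat: float, lon: float, speed_kmh: float) -> dict:
    """Wrap a raw GPS reading in EMMA-style annotations (a simplified,
    dictionary-based stand-in for the XML serialization)."""
    return {
        "emma:medium": "sensor",   # illustrative: EMMA annotations can
        "emma:mode": "gps",        # be extended to sensor-type inputs
        "emma:start": int(time.time() * 1000),
        "interpretation": {"lat": lat, "lon": lon, "speed_kmh": speed_kmh},
    }

def combine(gps_event: dict, user_prefs: dict) -> str:
    """Tiny stand-in for the interaction manager's fusion step: merge the
    GPS interpretation with a user preference to pick a response channel."""
    speed = gps_event["interpretation"]["speed_kmh"]
    if speed > 0 and user_prefs.get("voice_only_while_driving", True):
        return "respond via VoiceXML prompt"   # hands-free while moving
    return "respond via XHTML map view"

event = gps_to_emma_dict(37.5665, 126.9780, speed_kmh=42.0)
print(combine(event, {"voice_only_while_driving": True}))
```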
