• Title/Summary/Keyword: Multimodal Interaction

Search results: 59

A study on AR(Augmented Reality) game platform design using multimodal interaction (멀티모달 인터렉션을 이용한 증강현실 게임 플랫폼 설계에 관한 연구)

  • Kim, Chi-Jung;Hwang, Min-Cheol;Park, Gang-Ryeong;Kim, Jong-Hwa;Lee, Ui-Cheol;U, Jin-Cheol;Kim, Yong-U;Kim, Ji-Hye;Jeong, Yong-Mu
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2009.11a / pp.87-90 / 2009
  • This study aims to design an augmented reality game platform using an HMD (Head Mounted Display), an infrared camera, a web camera, a data glove, and physiological signal sensors. The HMD tracks the user's head movement and presents virtual objects on its display. The infrared camera, mounted on the bottom of the HMD, tracks the user's gaze. The web camera, mounted on top of the HMD, captures the forward view and delivers the real-world image to the user through the HMD display. The data glove captures the user's hand gestures. Autonomic nervous system responses are measured with GSR (Galvanic Skin Response), PPG (PhotoPlethysmoGraphy), and SKT (SKin Temperature) sensors. The measured skin conductance, pulse wave, and skin temperature are analyzed in real time to estimate the user's level of concentration. Head movement, gaze, and hand gestures are used for intuitive interaction, and the concentration level is combined with this intuitive interaction to infer the user's intention. In this way, this study designed a new augmented reality game platform that realizes intuitive interaction through multimodal interaction and infers the user's intention through concentration analysis.

  • PDF
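The intent-inference idea in the abstract above — gating intuitive interaction cues (gaze, gesture) by a physiology-derived concentration level — could be sketched roughly as follows. The equal weighting of the three sensor readings, the threshold value, and all function names are illustrative assumptions, not details from the paper.

```python
def concentration_level(gsr, ppg, skt):
    """Map normalized GSR/PPG/SKT readings (each in [0, 1]) to a single
    concentration score by simple averaging; the real platform performs
    real-time analysis, and this equal weighting is an assumption."""
    return (gsr + ppg + skt) / 3.0

def infer_intention(gaze_on_target, grasp_gesture, concentration, threshold=0.5):
    """Treat a gaze-plus-grasp combination as intentional only when the
    user's concentration exceeds a threshold (value is an assumption)."""
    if gaze_on_target and grasp_gesture and concentration >= threshold:
        return "intentional-grasp"
    return "no-action"

# Concentration of roughly 0.7 exceeds the threshold, so the combined
# gaze-and-grasp cue is accepted as an intentional action.
print(infer_intention(True, True, concentration_level(0.7, 0.6, 0.8)))
```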

Multimodal Emotion Recognition using Face Image and Speech (얼굴영상과 음성을 이용한 멀티모달 감정인식)

  • Lee, Hyeon Gu;Kim, Dong Ju
    • Journal of Korea Society of Digital Industry and Information Management / v.8 no.1 / pp.29-40 / 2012
  • A challenging research issue of growing importance in human-computer interaction is endowing a machine with emotional intelligence. Emotion recognition technology therefore plays an important role in this research area, as it allows more natural, more human-like communication between human and computer. In this paper, we propose a multimodal emotion recognition system using face and speech to improve recognition performance. For face-based emotion recognition, a distance measure is computed from 2D-PCA of the MCS-LBP image together with a nearest-neighbor classifier; for speech-based emotion recognition, a likelihood measure is obtained from a Gaussian mixture model built on pitch and mel-frequency cepstral coefficient features. The individual matching scores obtained from face and speech are combined by weighted summation, and the fused score is used to classify the emotion. Experimental results show that the proposed method improves recognition accuracy by about 11.25% to 19.75% over the uni-modal approaches, confirming that it achieves a significant and effective performance improvement.
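The weighted-summation fusion described in the abstract above can be sketched as follows. The weight value, the emotion labels, and the example scores are illustrative assumptions; only the fusion rule itself (weighted sum of per-modality matching scores, then argmax) comes from the abstract.

```python
def fuse_scores(face_score, speech_score, w=0.6):
    """Weighted-summation fusion of two matching scores in [0, 1].
    w is the face-modality weight (1 - w goes to speech); the value
    0.6 is an illustrative assumption, not from the paper."""
    return w * face_score + (1.0 - w) * speech_score

def classify_emotion(face_scores, speech_scores, w=0.6):
    """Pick the emotion whose fused score is highest.
    face_scores / speech_scores map emotion label -> matching score."""
    fused = {emo: fuse_scores(face_scores[emo], speech_scores[emo], w)
             for emo in face_scores}
    return max(fused, key=fused.get)

# Example: the face modality favors "happy", speech favors "neutral";
# with w=0.6 the fused decision follows the face evidence.
face = {"happy": 0.8, "neutral": 0.5, "sad": 0.1}
speech = {"happy": 0.4, "neutral": 0.7, "sad": 0.2}
print(classify_emotion(face, speech))  # happy
```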

Artificial Life Algorithm for Functions Optimization (함수 최적화를 위한 인공생명 알고리듬)

  • Yang, Bo-Seok;Lee, Yun-Hui;Choe, Byeong-Geun;Kim, Dong-Jo
    • Transactions of the Korean Society of Mechanical Engineers A / v.25 no.2 / pp.173-181 / 2001
  • This paper presents an artificial life algorithm, notable in the engineering field, for function optimization. Because artificial life organisms have a sensing system, they can find and metabolize the resources they seek, and the defining characteristics of artificial life are emergence and dynamic interaction with the environment. In other words, micro-interactions among individuals in the artificial life group give rise to emergent colonization in the whole system. This paper therefore applies an artificial life algorithm exploiting these characteristics to function optimization. The optimizing ability and convergence characteristics of the proposed algorithm are verified using three test functions. The numerical results also show that the proposed algorithm is superior to genetic algorithms and immune algorithms on the multimodal functions.

Multimodal Interface Control Module for Immersive Virtual Education (몰입형 가상교육을 위한 멀티모달 인터페이스 제어모듈)

  • Lee, Jaehyub;Im, SungMin
    • The Journal of Korean Institute for Practical Engineering Education / v.5 no.1 / pp.40-44 / 2013
  • This paper presents a multimodal interface control module that allows a student to interact naturally with educational content in a virtual environment. The module recognizes a user's motion while he/she interacts with the virtual environment and conveys that motion to the environment via wireless communication. Furthermore, a haptic actuator is incorporated into the module to generate haptic information, so the user can feel a virtual object as if it existed in the real world.

  • PDF

Motion-based interaction technique for a camera-tracked laser pointer system (카메라 추적 기반 레이저 포인터 시스템을 위한 동작 기반 상호작용 기술)

  • Ahn, Sang-Mahn;Lim, Jong-Gwan;Kwon, Dong-Soo
    • Proceedings of the HCI Society of Korea Conference / 2008.02b / pp.257-261 / 2008
  • This paper proposes intuitive interactions for a camera-tracked laser pointer system that are compatible with various software and can replace conventional mouse functions. To this end, it designs motion-based interaction using acceleration information from a new laser pointer equipped with a 3-axis accelerometer and demonstrates its usability.

  • PDF

Interaction Intent Analysis of Multiple Persons using Nonverbal Behavior Features (인간의 비언어적 행동 특징을 이용한 다중 사용자의 상호작용 의도 분석)

  • Yun, Sang-Seok;Kim, Munsang;Choi, Mun-Taek;Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems / v.19 no.8 / pp.738-744 / 2013
  • According to cognitive science research, the interaction intent of humans can be estimated through an analysis of their exhibited behaviors. This paper proposes a novel methodology for reliable intention analysis based on this approach. To identify intention, 8 behavioral features are extracted from 4 characteristics of human-human interaction, and we outline a set of core components of nonverbal human behavior. These nonverbal behaviors are associated with various recognition modules spanning multiple sensor modalities: localizing the speaker's sound source in the audition part, recognizing frontal faces and facial expressions in the vision part, and estimating human trajectories, body pose and leaning, and hand gestures in the spatial part. As a post-processing step, temporal confidence reasoning is used to improve recognition performance, and an integrated human model quantitatively classifies intention from the multi-dimensional cues by applying weight factors. Interactive robots can thus make informed engagement decisions to interact effectively with multiple persons. Experimental results show that the proposed scheme works successfully between human users and a robot in human-robot interaction.

A new human-robot interaction method using semantic symbols

  • Park, Sang-Hyun;Hwang, Jung-Hoon;Kwon, Dong-Soo
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2004.08a / pp.2005-2010 / 2004
  • As robots become more prevalent in human daily life, situations requiring interaction between humans and robots will occur more frequently, so human-robot interaction (HRI) is becoming increasingly important. Although robotics researchers have made many technical advances, intuitive and easy ways for ordinary users to interact with robots are still lacking. This paper introduces a new approach to enhancing human-robot interaction using a semantic symbol language and proposes a method for acquiring the intentions of robot users. In the proposed approach, each semantic symbol represents knowledge about either the environment or an action that the robot can perform, and users' intentions are expressed as symbolized multimodal information. To interpret a user's command, a probabilistic approach is used, which is appropriate for free-style user expressions or insufficient input information. A first-order Markov model is therefore constructed as the probabilistic model, and a questionnaire was conducted to obtain its state transition probabilities. Finally, we evaluate how well the model interprets users' commands.

  • PDF
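The first-order Markov interpretation step described in the abstract above could be sketched as below. The symbol set and the transition probabilities are hypothetical placeholders standing in for the questionnaire-derived values in the paper.

```python
# Hypothetical semantic symbols: each stands for an environment object or an
# action the robot can perform. Transition probabilities P(next | current)
# would come from the questionnaire in the paper; these numbers are made up.
TRANSITIONS = {
    "START": {"GO": 0.5, "GRASP": 0.3, "CUP": 0.2},
    "GO":    {"KITCHEN": 0.7, "DOOR": 0.3},
    "GRASP": {"CUP": 0.9, "DOOR": 0.1},
}

def sequence_probability(symbols, transitions=TRANSITIONS):
    """Score a symbol sequence under a first-order Markov model:
    the product of successive transition probabilities from START."""
    prob = 1.0
    prev = "START"
    for sym in symbols:
        prob *= transitions.get(prev, {}).get(sym, 0.0)
        prev = sym
    return prob

def interpret(candidates, transitions=TRANSITIONS):
    """Choose the candidate symbol sequence with the highest probability."""
    return max(candidates, key=lambda seq: sequence_probability(seq, transitions))

# Ambiguous multimodal input might map to either sequence; the Markov model
# prefers the more probable interpretation (0.5*0.7 = 0.35 vs 0.3*0.1 = 0.03).
best = interpret([("GO", "KITCHEN"), ("GRASP", "DOOR")])
```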

Proposal on the Enhancement of Real-time Processing and Interaction in a Camera-tracked Laser Pointer System (카메라 추적 기반 레이저 포인터 시스템의 실시간 처리와 상호작용 개선을 위한 제안)

  • Lim, Jong-Gwan;Sohn, Young-Il;Sharifi, Farrokh;Kwon, Dong-Soo
    • Proceedings of the HCI Society of Korea Conference / 2008.02a / pp.332-336 / 2008
  • This paper proposes a new idea for reliable real-time interaction in a camera-tracked laser pointer system and tests its feasibility. To improve the system's response time and remove needless visual overload, the function of the laser pointer is divided, the Region of Functional Interest is defined, and new interactions based on it are introduced. Finally, experiments measuring reliability, accuracy, latency, and usability are conducted and the results presented.

  • PDF

Real-time Simulation Technique for Visual-Haptic Interaction between SPH-based Fluid Media and Soluble Solids (SPH 기반의 유체 및 용해성 강체에 대한 시각-촉각 융합 상호작용 시뮬레이션)

  • Kim, Seokyeol;Park, Jinah
    • Journal of the Korean Society of Visualization / v.15 no.1 / pp.32-40 / 2017
  • Interaction between fluid and a rigid object is frequently observed in everyday life, yet it is difficult to simulate because the medium and the object have different representations. A particularly challenging issue is handling the object's deformation visually while also rendering haptic feedback. In this paper, we propose a real-time simulation technique for multimodal interaction between particle-based fluids and soluble solids. We develop a dissolution behavior model for solids, discretized following the idea of smoothed particle hydrodynamics, in which the changes in physical properties accompanying dissolution are immediately reflected in the object. The user can intervene in the simulation at any time by manipulating the solid object, with both visual and haptic feedback delivered to the user on the fly. For immersive visualization, we also adopt a screen-space fluid rendering technique that balances realism and performance.
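A minimal sketch of the dissolution update implied by the abstract above: each particle of the solid loses mass in proportion to its contact with nearby fluid particles, and fully dissolved particles are removed. The contact radius, rate constant, timestep, and particle layout are all assumptions for illustration, not values from the paper.

```python
from dataclasses import dataclass

@dataclass
class Particle:
    x: float
    y: float
    z: float
    mass: float

def dist2(a, b):
    """Squared Euclidean distance between two particles."""
    return (a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2

def dissolve_step(solid, fluid, h=0.1, rate=0.05, dt=0.01):
    """One dissolution step: each solid particle loses rate * contacts * dt
    mass, where contacts is the number of fluid particles within radius h.
    Fully dissolved particles are removed. h, rate, and dt are illustrative."""
    survivors = []
    for s in solid:
        contacts = sum(1 for f in fluid if dist2(s, f) < h * h)
        s.mass -= rate * contacts * dt
        if s.mass > 0.0:  # particles with no mass left vanish
            survivors.append(s)
    return survivors

# A solid particle touching two fluid particles loses 0.05 * 2 * 0.01 = 0.001
# mass per step.
solid = [Particle(0.0, 0.0, 0.0, 0.5)]
fluid = [Particle(0.05, 0.0, 0.0, 1.0), Particle(0.0, 0.05, 0.0, 1.0)]
solid = dissolve_step(solid, fluid)
```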

Multimedia Information and Authoring for Personalized Media Networks

  • Choi, Insook;Bargar, Robin
    • Journal of Multimedia Information System / v.4 no.3 / pp.123-144 / 2017
  • Personalized media includes user-targeted and user-generated content (UGC) exchanged through social media and interactive applications. The increased consumption of UGC presents challenges and opportunities to multimedia information systems. We work toward modeling a deep structure for content networks. To gain insight, a hybrid practice with the Media Framework (MF) is presented for creating networks of personalized media, leveraging an authoring methodology with user-generated semantics. The system's vertical integration allows users to audition their personalized media networks in the context of a global system network. A navigation scheme with a dynamic GUI shifts the interaction paradigm for content query and sharing. MF adopts a multimodal architecture anticipating emerging use cases and genres. To model the diversification of platforms, information processing is robust across multiple technology configurations. Physical and virtual networks are integrated with distributed services and transactions, IoT, and semantic networks representing media content. MF applies spatiotemporal and semantic signal processing to differentiate action responsiveness from information responsiveness. Extending multimedia information processing into authoring enables generating interactive and impermanent media on computationally enabled devices. The outcome of this integrated approach with the presented methodologies demonstrates a paradigmatic shift in the concept of UGC as a personalized media network that is dynamic and evolvable.