An Experimental Multimodal Command and Control Interface for Car Navigation Systems

  • Kim, Kyungnam (Department of Information and Communications, Kwangju Institute of Science and Technology (K-JIST)) ;
  • Ko, Jong-Gook (Department of Information and Communications, Kwangju Institute of Science and Technology (K-JIST)) ;
  • Choi, SeungHo (Department of Information and Communications, Dongshin University) ;
  • Kim, Jin-Young (Department of Electronics, Chonnam University) ;
  • Kim, Ki-Jung (Department of Internet Information Technology, Kwangyang College)
  • Published: 2000.07.01

Abstract

An experimental multimodal system combining natural input modes such as speech, lip movement, and gaze is proposed in this paper. It benefits from novel human-computer interaction (HCI) modalities and from multimodal integration to tackle the problem of the HCI bottleneck. The system allows the user to select menu items on the screen by employing speech recognition, lip reading, and gaze tracking components in parallel. Face tracking serves as a supplementary component to gaze tracking and lip movement analysis. These key components are reviewed, and preliminary results of multimodal integration and user testing on the prototype system are presented. It is noteworthy that the system equipped with gaze tracking and lip reading is very effective in noisy environments, where the speech recognition rate is low and unstable. Our long-term goal is to build a user interface embedded in a commercial car navigation system (CNS).
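The abstract does not specify how the parallel modality outputs are combined. A minimal sketch of one plausible scheme, late fusion by a weighted sum of per-modality confidence scores over candidate menu items, is shown below; it also illustrates why lip reading and gaze can carry the decision when speech scores are flat in noise. All names, weights, and menu items here are hypothetical illustrations, not the authors' implementation.

```python
# Hypothetical sketch of parallel multimodal integration for menu selection.
# Not the paper's implementation: the fusion rule below is an assumption.
# Assumption: each modality (speech, lip reading, gaze) emits a confidence
# score per on-screen menu item, and a weighted late fusion picks the winner.

from typing import Dict

MENU_ITEMS = ["zoom in", "zoom out", "route home", "next turn"]  # illustrative

def fuse_scores(
    speech: Dict[str, float],
    lips: Dict[str, float],
    gaze: Dict[str, float],
    weights=(0.5, 0.25, 0.25),  # assumed weights; could adapt to noise level
) -> str:
    """Return the menu item with the highest weighted sum of modality scores."""
    w_speech, w_lips, w_gaze = weights

    def combined(item: str) -> float:
        return (w_speech * speech.get(item, 0.0)
                + w_lips * lips.get(item, 0.0)
                + w_gaze * gaze.get(item, 0.0))

    return max(MENU_ITEMS, key=combined)

if __name__ == "__main__":
    # In a noisy car the speech scores are nearly uniform (unreliable), but
    # lip reading and gaze still single out the intended item, mirroring the
    # abstract's observation about noisy environments.
    speech = {"zoom in": 0.30, "zoom out": 0.28, "route home": 0.27, "next turn": 0.30}
    lips = {"zoom in": 0.10, "zoom out": 0.70, "route home": 0.10, "next turn": 0.10}
    gaze = {"zoom in": 0.05, "zoom out": 0.80, "route home": 0.10, "next turn": 0.05}
    print(fuse_scores(speech, lips, gaze))  # -> "zoom out"
```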

Keywords