POMY: POSTECH Immersive English Study with Haptic Feedback

  • Lee, Jaebong (Department of Computer Science and Engineering, Pohang University of Science and Technology (POSTECH)) ;
  • Lee, Kyusong (Department of Computer Science and Engineering, Pohang University of Science and Technology (POSTECH)) ;
  • Phuong, Hoang Minh (Department of Computer Science and Engineering, Pohang University of Science and Technology (POSTECH)) ;
  • Lee, Hojin (Department of Computer Science and Engineering, Pohang University of Science and Technology (POSTECH)) ;
  • Lee, Gary Geunbae (Department of Computer Science and Engineering, Pohang University of Science and Technology (POSTECH)) ;
  • Choi, Seungmoon (Department of Computer Science and Engineering, Pohang University of Science and Technology (POSTECH))
  • Received : 2014.06.02
  • Accepted : 2014.06.30
  • Published : 2014.08.01

Abstract

In this paper, we propose a novel CALL (Computer-Assisted Language Learning) system called POMY (POSTECH Immersive English Study). In POMY, students study English by conversing with characters in a computer-generated virtual environment. The system also provides haptic feedback, which makes the learning experience more engaging. Haptic feedback is delivered through two platforms: a haptic chair and a force-feedback device. The haptic chair, equipped with an array of vibrotactile actuators, conveys directional cues to the student, while the force-feedback device lets the student feel the physical properties of virtual objects. These haptic cues help students better understand the English conversations and stay focused on learning. We conducted a user experiment whose results showed that haptic-enabled English study contributes to better learning of English.
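To make the chair's directional feedback concrete, the sketch below shows one way a horizontal direction mentioned in the dialogue (for example, "the post office is on your right") could be mapped onto per-actuator vibration intensities. The 4x4 actuator grid, the direction_to_intensities function, and the linear falloff are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical layout: a 4x4 grid of vibrotactile actuators on the chair's
# backrest, indexed (row, col) with column 0 on the student's left.
ROWS, COLS = 4, 4

def direction_to_intensities(angle_deg, spread_deg=60.0):
    """Map a horizontal direction (0 = straight ahead, +90 = right, -90 = left)
    to per-actuator vibration intensities in [0, 1].

    Columns whose center angle is far from the target direction receive
    weaker vibration, falling off linearly over +/- spread_deg.
    """
    intensities = [[0.0] * COLS for _ in range(ROWS)]
    for col in range(COLS):
        # Each column covers an equal slice of the -90..+90 degree frontal range.
        col_angle = -90.0 + (col + 0.5) * (180.0 / COLS)
        falloff = max(0.0, 1.0 - abs(angle_deg - col_angle) / spread_deg)
        for row in range(ROWS):
            intensities[row][col] = falloff
    return intensities

# Example: the virtual character says "the post office is on your right".
if __name__ == "__main__":
    for row in direction_to_intensities(+90.0):
        print(" ".join(f"{level:.2f}" for level in row))
```

In the actual system, each intensity would drive the amplitude of one vibrotactile actuator in synchrony with the dialogue; the grid size and falloff shape here are arbitrary choices made only for illustration.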
