• Title/Summary/Keyword: human and computer interaction

Search results: 602

3DARModeler: a 3D Modeling System in Augmented Reality Environment (3DARModeler : 증강현실 환경 3D 모델링 시스템)

  • Do, Trien Van;Lee, Jeong-Gyu;Lee, Jong-Weon
    • Journal of Korea Game Society
    • /
    • v.9 no.5
    • /
    • pp.127-136
    • /
    • 2009
  • This paper describes a 3D modeling system in an Augmented Reality environment, named 3DARModeler. It can be considered a simple version of 3D Studio Max with the functions necessary for a modeling system, such as creating objects, applying textures, adding animation, estimating real light sources, and casting shadows. The 3DARModeler introduces convenient and effective human-computer interaction for building 3D models. It targets non-technical users, who therefore do not need much knowledge of computer graphics or modeling techniques. All they have to do is select basic objects, customize their attributes, and put them together to build a 3D model in a simple and intuitive way, as if they were doing so in the real world.

Implementation of Emotional Model of Software Robot Using the Sensor Modules for External Environments (외부 환경 감지 센서 모듈을 이용한 소프트웨어 로봇의 감정 모델 구현)

  • Lee, Joon-Yong;Kim, Chang-Hyun;Lee, Ju-Jang
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2005.11a
    • /
    • pp.179-184
    • /
    • 2005
  • Recently, modeling the emotion of a robot has become an active research topic in the fields of humanoid robots and human-robot interaction. In particular, modeling a robot's motivation, emotion, behavior, and so on is difficult and demands originality. In this paper, a new model using mathematical formulations to represent emotion and behavior selection is proposed for a software robot with virtual sensor modules. The various factors that affect six emotional states, such as happy or sad, are formulated as simple exponential equations with various parameters. Several experiments with seven external sensor inputs from the virtual environment and a human were conducted to evaluate this model.

Implementation of Emotional Model of Software Robot Using the Sensor Modules for External Environments (외부 환경 감지 센서 모듈을 이용한 소프트웨어 로봇의 감정 모델 구현)

  • Lee, Joon-Yong;Kim, Chang-Hyun;Lee, Ju-Jang
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.16 no.1
    • /
    • pp.37-42
    • /
    • 2006
  • Recently, modeling the emotion of a robot has become an active research topic in the fields of humanoid robots and human-robot interaction. In particular, modeling a robot's motivation, emotion, behavior, and so on is difficult and demands originality. In this paper, a new model using mathematical formulations to represent emotion and behavior selection is proposed for a software robot with virtual sensor modules. The various factors that affect six emotional states, such as happy or sad, are formulated as simple exponential equations with various parameters. Several experiments with seven external sensor inputs from the virtual environment and a human were conducted to evaluate this model.
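
The abstract's formulation (six emotional states driven by simple exponential equations over sensor inputs) can be sketched as follows. All state names, decay rates, and weights here are illustrative assumptions, not the paper's actual parameters:

```python
import math

# Hypothetical sketch: each of six emotional states decays exponentially
# toward zero and is excited by weighted sensor stimuli. All names and
# numeric values are illustrative assumptions, not the paper's parameters.

EMOTIONS = ["happy", "sad", "angry", "fearful", "surprised", "neutral"]

def update_emotions(state, stimuli, weights, decay=0.3, dt=1.0):
    """One update step: exponential decay plus stimulus-driven excitation."""
    new_state = {}
    for e in EMOTIONS:
        decayed = state[e] * math.exp(-decay * dt)            # exponential decay
        excitation = sum(w * s for w, s in zip(weights[e], stimuli))
        new_state[e] = max(0.0, min(1.0, decayed + excitation))  # clamp to [0, 1]
    return new_state

def select_behavior(state):
    """Pick the behavior linked to the strongest current emotion."""
    return max(state, key=state.get)

# Example: seven virtual sensor inputs, matching the count in the experiments.
stimuli = [0.2, 0.0, 0.5, 0.0, 0.1, 0.0, 0.0]
weights = {e: [0.1] * 7 for e in EMOTIONS}
weights["happy"] = [0.4, 0.0, 0.3, 0.0, 0.2, 0.0, 0.0]

state = {e: 0.0 for e in EMOTIONS}
state = update_emotions(state, stimuli, weights)
```

With these made-up weights, the "happy" state receives the largest excitation, so `select_behavior` would choose it.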

Virtual Human Authoring ToolKit for a Senior Citizen Living Alone (독거노인용 가상 휴먼 제작 툴킷)

  • Shin, Eunji;Jo, Dongsik
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.9
    • /
    • pp.1245-1248
    • /
    • 2020
  • Elderly people living alone need smart care for independent living. Recent advances in artificial intelligence have made interaction with a computer-controlled virtual human easier. This technology can realize services such as a medicine-intake guide for the elderly living alone. In this paper, we suggest an intelligent virtual human and present our toolkit for controlling virtual humans for a senior citizen living alone. To create the virtual human's motions, our authoring toolkit maps the gestures, emotions, and voices of virtual humans. The toolkit, configured to create virtual human interactions, allows a suitable virtual human to respond with facial expressions, gestures, and voice.

Enhancing e-Learning Interactivity via Emotion Recognition Computing Technology (감성 인식 컴퓨팅 기술을 적용한 이러닝 상호작용 기술 연구)

  • Park, Jung-Hyun;Kim, InOk;Jung, SangMok;Song, Ki-Sang;Kim, JongBaek
    • The Journal of Korean Association of Computer Education
    • /
    • v.11 no.2
    • /
    • pp.89-98
    • /
    • 2008
  • Providing appropriate interactions between the learner and the e-Learning system is an essential factor of a successful e-Learning system. Although many interaction functions are employed in multimedia Web-based Instruction content, learners experience a lack of the real-time feedback that human educators provide, due to the limitations of Human-Computer Interaction techniques. In this paper, an emotion recognition system based on learner facial expressions has been developed and applied to a tutoring system. As human educators do, the system observes learners' emotions from facial expressions and provides pertinent feedback. Such feedback can motivate learners and relieve the isolation of studying alone in e-Learning environments. The test results showed that this system may provide significant improvement in terms of interest and educational achievement.

Edge based Interactive Segmentation (경계선 기반의 대화형 영상분할 시스템)

  • Yun, Hyun Joo;Lee, Sang Wook
    • Journal of the Korea Computer Graphics Society
    • /
    • v.8 no.2
    • /
    • pp.15-22
    • /
    • 2002
  • Image segmentation methods partition an image into meaningful regions. For image composition and analysis, it is desirable for the partitioned regions to represent meaningful objects in terms of human perception and manipulation. Despite the recent progress in image understanding, however, most of the segmentation methods mainly employ low-level image features and it is still highly challenging to automatically segment an image based on high-level meaning suitable for human interpretation. The concept of HCI (Human Computer Interaction) can be applied to operator-assisted image segmentation in a manner that a human operator provides guidance to automatic image processing by interactively supplying critical information about object boundaries. Intelligent Scissors and Snakes have demonstrated the effectiveness of human-assisted segmentation [2] [1]. This paper presents a method for interactive image segmentation for more efficient and effective detection and tracking of object boundaries. The presented method is partly based on the concept of Intelligent Scissors, but employs the well-established Canny edge detector for stable edge detection. It also uses "sewing method" for including weak edges in object boundaries, and 5-direction search to promote more efficient and stable linking of neighboring edges than the previous methods.
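
The abstract's combination of Canny edges with a 5-direction search can be illustrated with a small directional edge-tracing routine. The neighbor set and the toy edge map below are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

# Illustrative sketch of linking edge pixels along a preferred direction,
# in the spirit of the abstract's 5-direction search: from the current pixel,
# only the five neighbors roughly "ahead" of the tracking direction are
# considered, which keeps the traced boundary from doubling back.
# The neighbor set below is an assumption, not the paper's exact scheme.

# Neighbors examined when tracking rightward: E, NE, SE, N, S (5 of 8).
FIVE_DIRS_RIGHT = [(0, 1), (-1, 1), (1, 1), (-1, 0), (1, 0)]

def trace_edge(edge_map, start):
    """Follow edge pixels from `start`, checking five forward directions."""
    path = [start]
    visited = {start}
    r, c = start
    while True:
        for dr, dc in FIVE_DIRS_RIGHT:
            nr, nc = r + dr, c + dc
            if (0 <= nr < edge_map.shape[0] and 0 <= nc < edge_map.shape[1]
                    and edge_map[nr, nc] and (nr, nc) not in visited):
                path.append((nr, nc))
                visited.add((nr, nc))
                r, c = nr, nc
                break
        else:
            return path  # no unvisited forward neighbor: stop tracing

# Toy binary edge map (e.g. a thresholded Canny output) with a one-pixel jog.
edges = np.zeros((5, 6), dtype=bool)
edges[2, 0:3] = True   # horizontal run: (2,0) (2,1) (2,2)
edges[1, 3] = True     # jog up one row
edges[1, 4:6] = True   # continues at (1,4) (1,5)
path = trace_edge(edges, (2, 0))
```

The trace follows the horizontal run, takes the NE neighbor at the jog, and stops when no forward edge pixel remains; in a full system this linking would run over real Canny output in both directions.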

Design of Parallel Processing System for Face Tracking (얼굴 추적을 위한 병렬처리 시스템의 설계)

  • ;;;;R.S.Ramakrishna
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 1998.10a
    • /
    • pp.765-767
    • /
    • 1998
  • Many applications in human-computer interaction (HCI) require tracking a human face and facial features. In this paper, we propose an efficient parallel processing system for face tracking in a heterogeneous networked environment. To track a face in the video image, we use skin color information and connected components. For parallelism, we choose the master-slave model, in which each process (master and slaves) has threads responsible for the real computation. By placing queues between the threads, we give flexibility to the data flow.
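
The master-slave scheme with queues between threads that the abstract describes can be sketched with standard-library primitives. The frame format and the skin-color test here are placeholders, not the paper's actual pipeline:

```python
import threading
import queue

# Sketch of a master-slave model: the master feeds frames into a shared work
# queue; slave threads do the real computation (here a stand-in "skin pixel
# count") and push results back through a result queue. The frame format and
# the RGB skin rule are illustrative assumptions, not the paper's pipeline.

def slave(in_q, out_q):
    while True:
        frame = in_q.get()
        if frame is None:              # sentinel: shut this worker down
            break
        frame_id, pixels = frame
        # Stand-in for skin-color segmentation + connected components:
        skin = sum(1 for (r, g, b) in pixels
                   if r > 95 and g > 40 and b > 20 and r > g > b)
        out_q.put((frame_id, skin))

in_q, out_q = queue.Queue(), queue.Queue()
workers = [threading.Thread(target=slave, args=(in_q, out_q)) for _ in range(2)]
for w in workers:
    w.start()

# Master: distribute three tiny "frames" (lists of RGB pixels), then sentinels.
frames = [
    (0, [(200, 120, 80), (10, 10, 10)]),
    (1, [(5, 5, 5)]),
    (2, [(180, 110, 70), (190, 130, 90), (0, 0, 0)]),
]
for f in frames:
    in_q.put(f)
for _ in workers:
    in_q.put(None)
for w in workers:
    w.join()

results = dict(out_q.get() for _ in frames)
```

The queues decouple the master's frame rate from the slaves' processing rate, which is the flexibility in data flow the abstract refers to.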

A Study on the Cognitive Process of Supervisory control in Human-Computer Interaction (인간-컴퓨터 작업에서 감시체계의 상황인지과정에 관한 연구)

  • 오영진;이근희
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.16 no.27
    • /
    • pp.105-111
    • /
    • 1993
  • Human work has shifted its role from physical tasks to system supervisory control. In this paper, a safety-presentation configuration is discussed instead of the well-known fault-warning configuration. Of particular interest was the personal factor, which includes the cognitive process. Through performance comparisons between individuals, information processing (d') and the decision criterion ($\beta$) were used to explain the sensitivity of the personal cognitive process. Uncertainty affects a supervisor facing doubtful situations. These effects are modeled with a flat fuzzy number for $\beta$ and its learning rate R.
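
The d' and $\beta$ quantities used in the abstract come from signal detection theory; a minimal computation from an operator's hit and false-alarm rates (the rates below are made-up illustration values):

```python
from statistics import NormalDist
import math

# Signal detection theory quantities: d' measures an operator's sensitivity,
# beta the decision criterion (likelihood ratio at the criterion point).
# The hit and false-alarm rates below are made-up illustration values.

def dprime_beta(hit_rate, fa_rate):
    z = NormalDist().inv_cdf                 # inverse standard-normal CDF
    zh, zf = z(hit_rate), z(fa_rate)
    d_prime = zh - zf                        # sensitivity
    beta = math.exp((zf ** 2 - zh ** 2) / 2) # decision criterion
    return d_prime, beta

d, b = dprime_beta(0.85, 0.20)  # 85% hits, 20% false alarms
```

A d' near 1.9 indicates fairly good discrimination, while beta below 1 indicates a slightly liberal criterion; the paper's fuzzy treatment of $\beta$ builds on this crisp value.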

A Study on Smart Touch Projector System Technology Using Infrared (IR) Imaging Sensor (적외선 영상센서를 이용한 스마트 터치 프로젝터 시스템 기술 연구)

  • Lee, Kuk-Seon;Oh, Sang-Heon;Jeon, Kuk-Hui;Kang, Seong-Soo;Ryu, Dong-Hee;Kim, Byung-Gyu
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.7
    • /
    • pp.870-878
    • /
    • 2012
  • Recently, the very rapid development of computer and sensor technologies has induced various kinds of user interface (UI) technologies based on user experience (UX). In this study, we investigate and develop a smart touch projector system technology based on an IR sensor and image processing. In the proposed system, a user can control the computer through control events based on the gestures of an IR pen used as an input device. In the IR image, we extract the movement (or gesture) of the devised pen and track it to recognize the gesture pattern. Also, to correct the error between the coordinates of the input image sensor and the display device (projector), we propose a coordinate correction algorithm that improves the accuracy of operation. Through this next-generation human-computer interaction technology, we can control events of the equipped computer on the projected image screen without manipulating the computer directly.
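
Correcting between IR-sensor coordinates and projector coordinates is commonly done with a plane homography fitted from calibration points; the paper's own correction algorithm is not reproduced here. A minimal direct-linear-transform sketch in numpy, with made-up calibration points:

```python
import numpy as np

# Sketch of a coordinate-correction step: a 3x3 plane homography H maps
# IR-sensor pixel coordinates to projector/display coordinates, estimated
# from four known correspondences via the direct linear transform (DLT).
# The calibration points below are made-up illustration values.

def fit_homography(src, dst):
    """Solve for H (3x3, with h33 fixed to 1) from >= 4 point pairs."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)[0]
    return np.append(h, 1.0).reshape(3, 3)

def apply_h(H, pt):
    """Map one sensor point to screen coordinates (projective division)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Four sensor corners mapped to a 1024x768 projected screen (illustrative).
sensor = [(40, 30), (600, 25), (610, 470), (35, 460)]
screen = [(0, 0), (1024, 0), (1024, 768), (0, 768)]
H = fit_homography(sensor, screen)
u, v = apply_h(H, (40, 30))   # a calibration corner should map exactly
```

With exactly four correspondences the system is exactly determined, so calibration corners map back with only numerical error; extra points would turn the same solve into a least-squares fit.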