• Title/Abstract/Keywords: Human computer interactions

Search results: 93 items (processing time 0.028 s)

Fine-Motion Estimation Using Ego/Exo-Cameras

  • Uhm, Taeyoung;Ryu, Minsoo;Park, Jong-Il
    • ETRI Journal / Vol. 37, No. 4 / pp. 766-771 / 2015
  • Robust motion estimation for human-computer interaction plays an important role in novel methods of interacting with electronic devices. Existing pose estimation using a monocular camera employs either ego-motion or exo-motion, neither of which is sufficiently accurate for estimating fine motion due to the ambiguity between rotation and translation. This paper presents a hybrid vision-based pose estimation method for fine-motion estimation that is specifically capable of extracting human body motion accurately. The method uses an ego-camera attached to a point of interest and exo-cameras located in the immediate surroundings of that point. The exo-cameras can easily track the exact position of the point of interest by triangulation. Once the position is given, the ego-camera can accurately obtain the point of interest's orientation. In this way, any ambiguity between rotation and translation is eliminated, and the exact motion of a target point (that is, the ego-camera) can be obtained. The proposed method is expected to provide a practical solution for robustly estimating fine motion in a non-contact manner, such as in interactive games designed for special purposes (for example, remote rehabilitation care systems).
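The triangulation step the abstract mentions — recovering the ego-camera's position from two exo-cameras — can be sketched as a least-squares intersection of two viewing rays. This is a generic illustration, not the paper's implementation; the camera centers and ray directions below are hypothetical.

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Least-squares triangulation of a point seen by two cameras.

    Each camera contributes a viewing ray p = c + t * d (center c,
    direction d). We solve for the ray parameters that bring the two
    rays closest together and return the midpoint of closest approach.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve [d1  -d2] [t1, t2]^T = c2 - c1 in the least-squares sense.
    A = np.stack([d1, -d2], axis=1)
    t, *_ = np.linalg.lstsq(A, c2 - c1, rcond=None)
    p1 = c1 + t[0] * d1
    p2 = c2 + t[1] * d2
    return 0.5 * (p1 + p2)  # midpoint of the closest-approach segment
```

With the position fixed this way, the rotation/translation ambiguity the abstract describes disappears: the ego-camera only has to supply orientation.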

독거노인용 가상 휴먼 제작 툴킷 (Virtual Human Authoring ToolKit for a Senior Citizen Living Alone)

  • Shin, Eunji;Jo, Dongsik
    • 한국정보통신학회논문지 / Vol. 24, No. 9 / pp. 1245-1248 / 2020
  • Elderly people living alone need smart care to support independent living. Recent advances in artificial intelligence have made interaction with a computer-controlled virtual human easier. This technology can realize services such as a medicine-intake guide for the elderly living alone. In this paper, we propose an intelligent virtual human and present our authoring toolkit for controlling virtual humans for a senior citizen living alone. To create virtual human motion, the toolkit maps gestures, emotions, and voices onto the virtual human. Configured to create virtual human interactions, the toolkit allows a suitable virtual human to respond with facial expressions, gestures, and voice.

제품의 유지보수를 위한 시각 기반 증강현실 기술 개발 (Development Technology of Vision Based Augmented Reality for the Maintenance of Products)

  • 이경호;이정민;김동근;한영수;이재준
    • 한국CDE학회논문집 / Vol. 13, No. 4 / pp. 265-272 / 2008
  • The flow of technology has moved in a human-oriented direction, from the time the computer was first invented to today, when new computing environments built on mobile devices and global networks are everywhere. In this flow, ubiquitous computing has been suggested as a new paradigm for the computing environment. Augmented reality is one of the ubiquitous technologies that provide interaction between human and computer. By adding computer-generated information to real information and supporting their interaction, the user can obtain improved and more knowledgeable information about the real world. The purpose of this paper is to show the possibility of applying vision-based augmented reality to the maintenance of product systems.

Human Centered Robot for Mutual Interaction in Intelligent Space

  • Jin Tae-Seok;Hashimoto Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 5, No. 3 / pp. 246-252 / 2005
  • Intelligent Space is a space in which many sensors and intelligent devices are distributed. Mobile robots exist in this space as physical agents that provide humans with services. To realize this, humans and mobile robots have to approach each other as closely as possible, and it is necessary for them to interact naturally; it is desirable for a mobile robot to move in a human-friendly way. In this research, a mobile robot is controlled by the Intelligent Space through its resources. The mobile robot is controlled to follow a walking human as stably and precisely as possible. To follow a human, the control law is derived from the assumption that the human and the mobile robot are connected by a virtual spring model. The input velocity to the mobile robot is generated on the basis of the elastic force from the virtual spring in this model. The performance is verified by computer simulation and experiment.
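The virtual-spring control law described above can be illustrated in a few lines: the robot's commanded velocity is proportional to the elastic force of a spring stretched between robot and human. The gains, rest length, and the damping term below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def spring_velocity(robot_pos, human_pos,
                    rest_length=1.0, stiffness=0.8,
                    damping=0.2, robot_vel=None):
    """Velocity command from a virtual spring linking robot and human.

    The spring pulls (or pushes) the robot until the human sits at
    `rest_length` away; the elastic force is used directly as the
    commanded velocity.
    """
    offset = np.asarray(human_pos, float) - np.asarray(robot_pos, float)
    dist = np.linalg.norm(offset)
    if dist < 1e-9:
        return np.zeros_like(offset)
    direction = offset / dist
    force = stiffness * (dist - rest_length) * direction  # Hooke's law
    if robot_vel is not None:
        force -= damping * np.asarray(robot_vel, float)   # optional damping
    return force
```

Note how the command vanishes exactly at the rest length, so the robot settles at a fixed following distance rather than colliding with the person.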

Human-Computer Interaction Based Only on Auditory and Visual Information

  • Sha, Hui;Agah, Arvin
    • Transactions on Control, Automation and Systems Engineering / Vol. 2, No. 4 / pp. 285-297 / 2000
  • One of the research objectives in the area of multimedia human-computer interaction is the application of artificial intelligence and robotics technologies to the development of computer interfaces. This involves utilizing many forms of media, integrating speech input, natural language, graphics, hand-pointing gestures, and other methods for interactive dialogues. Although current human-computer communication methods include computer keyboards, mice, and other traditional devices, the two basic ways by which people communicate with each other are voice and gesture. This paper reports on research focusing on the development of an intelligent multimedia interface system modeled on the manner in which people communicate. The work explores interaction between humans and computers based only on the processing of speech (words uttered by the person) and the processing of images (hand-pointing gestures). The purpose of the interface is to control a pan/tilt camera so that it points to a location specified by the user through uttered words and hand pointing. The system utilizes another, stationary camera to capture images of the user's hand and a microphone to capture the user's words. Upon processing the images and sounds, the system responds by pointing the camera. Initially, the interface uses hand pointing to locate the general position the user is referring to; it then uses voice commands provided by the user to fine-tune the location and to change the zoom of the camera, if requested. The image of the location is captured by the pan/tilt camera and sent to a color TV monitor to be displayed. This type of system has applications in tele-conferencing and other remote operations, where the system must respond to the user's commands in a manner similar to how the user would communicate with another person. The advantage of this approach is the elimination of the traditional input devices that the user must otherwise utilize to control a pan/tilt camera, replacing them with more "natural" means of interaction. A number of experiments were performed to evaluate the interface system with respect to its accuracy, efficiency, reliability, and limitations.

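The final stage of such an interface — turning an estimated 3D target location into pan/tilt commands for the camera — reduces to two arctangents. A minimal sketch, assuming a camera at the origin with z forward, x right, and y up (the paper does not specify its convention):

```python
import math

def pan_tilt_to(x, y, z):
    """Pan/tilt angles (radians) that aim a camera at the origin toward (x, y, z).

    Assumed convention: z forward, x right, y up. Pan rotates about the
    vertical axis first; tilt then elevates toward the target.
    """
    pan = math.atan2(x, z)                   # horizontal rotation
    tilt = math.atan2(y, math.hypot(x, z))   # elevation above the ground plane
    return pan, tilt
```

A target straight ahead yields (0, 0); a target 45 degrees to the right yields a pan of pi/4 with zero tilt.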

Prediction of Protein-Protein Interactions from Sequences using a Correlation Matrix of the Physicochemical Properties of Amino Acids

  • Kopoin, Charlemagne N'Diffon;Atiampo, Armand Kodjo;N'Guessan, Behou Gerard;Babri, Michel
    • International Journal of Computer Science & Network Security / Vol. 21, No. 3 / pp. 41-47 / 2021
  • Detection of protein-protein interactions (PPIs) remains essential for the development of therapies against diseases. Experimental studies to detect PPIs are time-consuming and expensive. Today, with the availability of PPI data, several computational models for predicting PPIs have been proposed. One of the big challenges in this task is feature extraction, and the relevance of the information extracted by some techniques remains limited. In this work, we first propose an extraction method based on correlation relationships between the physicochemical properties of amino acids. The proposed method uses a correlation matrix obtained from the hydrophobicity and hydrophilicity properties, which it then integrates into the calculation of the bigram. We then use the SVM algorithm to detect the presence of an interaction between two given proteins. Experimental results show that the proposed method performs better than the approaches in the literature, achieving 94.75% accuracy, 95.12% precision, and 96% sensitivity on human HPRD protein data.
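One plausible reading of the property-based bigram descriptor is sketched below: each adjacent residue pair contributes the product of its two property values, accumulated per property pair. This is a loose illustration, not the paper's exact formulation, and the abbreviated property tables are illustrative placeholders (real values would come from standard hydrophobicity/hydrophilicity scales); the resulting matrix would be flattened into the feature vector handed to the SVM.

```python
import numpy as np

# Illustrative placeholder values for a subset of residues (not a real scale).
HYDROPHOBICITY = {'A': 0.62, 'R': -2.53, 'N': -0.78, 'D': -0.90, 'C': 0.29,
                  'G': 0.48, 'H': -0.40, 'I': 1.38, 'L': 1.06, 'K': -1.50}
HYDROPHILICITY = {'A': -0.5, 'R': 3.0, 'N': 0.2, 'D': 3.0, 'C': -1.0,
                  'G': 0.0, 'H': -0.5, 'I': -1.8, 'L': -1.8, 'K': 3.0}

def bigram_features(seq):
    """Property-pair bigram descriptor for a protein sequence.

    For every adjacent residue pair (a, b), accumulate the product of
    property i at a and property j at b into cell (i, j), then normalise
    by the number of bigrams.
    """
    props = [HYDROPHOBICITY, HYDROPHILICITY]
    F = np.zeros((2, 2))
    for a, b in zip(seq, seq[1:]):
        for i, prop_i in enumerate(props):
            for j, prop_j in enumerate(props):
                F[i, j] += prop_i[a] * prop_j[b]
    return F / max(len(seq) - 1, 1)
```

Two such descriptors (one per protein) concatenated give a fixed-length input for a binary SVM classifier, regardless of sequence length.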

An Improved Approach for 3D Hand Pose Estimation Based on a Single Depth Image and Haar Random Forest

  • Kim, Wonggi;Chun, Junchul
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 9, No. 8 / pp. 3136-3150 / 2015
  • Vision-based 3D tracking of the articulated human hand is one of the major issues in human-computer interaction applications and in understanding the control of a robot hand. This paper presents an improved approach for tracking and recovering the 3D position and orientation of a human hand using the Kinect sensor. The basic idea of the proposed method is to solve an optimization problem that minimizes the discrepancy in 3D shape between the actual hand observed by the Kinect and a hypothesized 3D hand model. Since each 3D hand pose has 23 degrees of freedom, tracking hand articulation imposes an excessive computational burden when minimizing the 3D shape discrepancy between an observed hand and a 3D hand model. For this, we first created a 3D hand model that represents the hand with 17 different parts. Secondly, a Random Forest classifier was trained on synthetic depth images generated by animating the developed 3D hand model; it was then used for Haar-like feature-based classification rather than per-pixel classification. The classification results were used to estimate the joint positions of the hand skeleton. Through experiments, we show that the proposed method improves hand-part recognition rates and runs at 20-30 fps. The results confirm its practical use in classifying the hand area and in tracking and recovering the 3D hand pose in real time.
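The speed advantage of Haar-like features over per-pixel classification comes from the integral image: once a summed-area table is built, any rectangle sum — and hence any rectangle-difference feature — costs constant time. A generic sketch on a depth image (the paper's actual feature layout is not specified here; the two-rectangle split below is an assumption):

```python
import numpy as np

def integral_image(depth):
    """Summed-area table: any rectangle sum then costs O(1)."""
    return depth.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of depth[r0:r1, c0:c1] (exclusive ends) from the integral image."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

def haar_two_rect(ii, r, c, h, w):
    """Two-rectangle Haar-like response: left half-window minus right half."""
    half = w // 2
    left = rect_sum(ii, r, c, r + h, c + half)
    right = rect_sum(ii, r, c + half, r + h, c + w)
    return left - right
```

Each tree in the Random Forest can then branch on a handful of such responses per window instead of touching every pixel.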

Behavior Decision Model Based on Emotion and Dynamic Personality

  • Yu, Chan-Woo;Choi, Jin-Young
    • 제어로봇시스템학회:학술대회논문집 / 2005 ICCAS / pp. 101-106 / 2005
  • In this paper, we propose a behavior decision model for a robot based on artificial emotion, various motivations, and a dynamic personality. Our goal is to make a robot that can express its emotions in a human-like way. To achieve this goal, we applied several emotion and personality theories from psychology. In particular, we introduced the concept of a dynamic personality model for a robot. Drawing on this concept, we built a behavior decision model so that the robot's emotional expression adapts to various environments through interactions between the human and the robot.


B-COV:Bio-inspired Virtual Interaction for 3D Articulated Robotic Arm for Post-stroke Rehabilitation during Pandemic of COVID-19

  • Allehaibi, Khalid Hamid Salman;Basori, Ahmad Hoirul;Albaqami, Nasser Nammas
    • International Journal of Computer Science & Network Security / Vol. 21, No. 2 / pp. 110-119 / 2021
  • COVID-19 is a contagious virus that has infected almost every part of the world. The pandemic forced many countries to impose lockdown and stay-at-home policies to reduce the spread of the virus and the number of victims. Interactions between humans and robots form a popular subject of research worldwide. In medical robotics, the primary challenge is to implement natural interactions between robots and human users. Human communication consists of dynamic processes that involve joint attention and mutual engagement; coordinated care involves agents sharing behaviours, events, interests, and contexts over time. Because a robotic arm is an expensive and complicated system, robot simulators are widely used instead for rehabilitation purposes in medicine, and natural interaction is necessary for disabled persons to work with a robot simulator. This article proposes a low-cost rehabilitation system built on an arm-gesture tracking system based on a depth camera, which can capture and interpret human gestures and use them as interactive commands for a robot simulator to perform specific tasks on a 3D block. The results show that the proposed system can help patients control the rotation and movement of the 3D arm using their hands. Pilot testing with healthy subjects yielded encouraging results: they could synchronize their actions with the 3D robotic arm to perform several repetitive tasks, exerting 19,920 J of energy (kg·m²·s⁻²). This average consumed energy is in the medium range; we therefore relate it to rehabilitation performance as an initial stage, which can be improved further with extra repetitive exercise to speed up the recovery process.

모델휴먼프로세서를 활용한 인지과정 시뮬레이터 구축에 관한 연구 (A Study on Development of a Cognitive Process Simulator Based on Model Human Processor)

  • 이동하;나윤균
    • 한국안전학회지 / Vol. 13, No. 4 / pp. 230-239 / 1998
  • Though limited, the Model Human Processor (MHP) has been used to explain, in a simplified manner, the complex behavior of users during human-computer interaction. The MHP consists of perceptual, cognitive, and motor systems, each with processors and memories interacting with each other in serial or parallel mode. The important parameters of a memory are its storage capacity, its decay time, and the code type of a memorized item; the important parameter of a processor is its cycle time. Using these features of the model, this study developed a computerized cognitive process simulator to predict the cognitive process time of a class-match task. A validation experiment showed that the mean prediction time for the cognitive process of the class-match task, simulated 50 times by the simulator, was consistent with the mean cognitive process time of the same task performed by 37 subjects. Animation of the data flow during the class-match task simulation helps in understanding the invisible human cognitive process.

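The MHP's cycle-time accounting described above can be illustrated with a minimal serial-stage estimator. The nominal cycle times are the MHP's published values from Card, Moran & Newell (roughly 100 ms perceptual, 70 ms cognitive, 70 ms motor); the function name and the simple additive structure are this sketch's own simplification of the model.

```python
# Nominal MHP processor cycle times in milliseconds (Card, Moran & Newell).
PERCEPTUAL_MS = 100   # tau_p: one perceptual processor cycle
COGNITIVE_MS = 70     # tau_c: one cognitive processor cycle
MOTOR_MS = 70         # tau_m: one motor processor cycle

def reaction_time_ms(perceptual=1, cognitive=1, motor=1):
    """Serial-stage MHP estimate: total response time is the sum of the
    cycles each subsystem spends on the task."""
    return (perceptual * PERCEPTUAL_MS
            + cognitive * COGNITIVE_MS
            + motor * MOTOR_MS)
```

A simple reaction (one cycle per stage) comes out at 240 ms; a class-match task that needs an extra cognitive cycle adds 70 ms, which is the kind of prediction the simulator above compares against measured subject times.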