• Title/Abstract/Keyword: Computer interaction

Search results: 1,957 items (processing time: 0.029 seconds)

3D Pose Estimation of a Human Arm for Human-Computer Interaction - Application of Mechanical Modeling Techniques to Computer Vision

  • 한영모
    • 전자공학회논문지SC / Vol. 42, No. 4 / pp.11-18 / 2005
  • Humans use body language extensively, in addition to spoken language, to express their intentions; the most representative form of body language is, of course, the use of the hands and arms. Motion analysis of the human arm is therefore very important for human-computer interaction. From this viewpoint, this paper proposes a method for 3D pose estimation of the human arm using computer vision, as follows. First, noting that arm motion is produced mostly by revolute joints, we propose a technique for 3D motion analysis of a revolute joint using a computer vision system. To this end, we combine kinematic modeling techniques for revolute joints with the perspective projection model of computer vision. Next, we apply this 3D motion analysis technique for revolute joints to the problem of 3D pose estimation of the human arm. The basic idea is to apply the 3D motion recovery algorithm for a revolute joint to each joint of the human arm in sequence. Targeting human-computer interaction for ubiquitous computing and virtual reality in particular, the algorithm focuses on obtaining a closed-form solution with a high level of accuracy.
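The closed-form recovery described in this abstract rests on two standard ingredients: a kinematic model of a revolute joint and the pinhole perspective projection model. The following is a minimal toy sketch of how the two combine, not the paper's algorithm; the joint axis, link length, camera frame, and focal length are all illustrative assumptions chosen so the joint angle admits an obvious closed-form recovery.

```python
# Minimal sketch: a point on a link rotating about a revolute joint,
# observed through a pinhole (perspective projection) camera.
import numpy as np

def revolute_joint_point(theta, link_length=0.3, center=np.array([0.0, 0.0, 2.0])):
    """3D position of a point on a link rotating about a Z-axis revolute joint."""
    return center + link_length * np.array([np.cos(theta), np.sin(theta), 0.0])

def project(point_3d, focal_length=800.0):
    """Pinhole perspective projection onto the image plane (pixel coordinates)."""
    x, y, z = point_3d
    return np.array([focal_length * x / z, focal_length * y / z])

# Forward model: image trajectory traced by the wrist as the elbow rotates.
thetas = np.linspace(0.0, np.pi / 2, 10)
image_points = np.array([project(revolute_joint_point(t)) for t in thetas])

# A closed-form-style inverse for this toy geometry: depth is constant here,
# so the joint angle can be read back directly from the projected coordinates.
recovered = np.arctan2(image_points[:, 1], image_points[:, 0])
print(np.allclose(recovered, thetas))  # True for this simplified setup
```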

Collocated Wearable Interaction for Audio Book Application on Smartwatch and Hearables

  • Yoon, Hyoseok;Son, Jangmi
    • Journal of Multimedia Information System / Vol. 7, No. 2 / pp.107-114 / 2020
  • This paper proposes a wearable audio book application using two wearable devices, a smartwatch and hearables. We review the requirements for what could be a killer wearable application and design our application based on these elicited requirements. To distinguish our application, we present 7 scenarios and introduce several wearable interaction modalities. To show the feasibility of our approach, we design and implement a proof-of-concept prototype on an Android emulator as well as on a commercial smartwatch. We thoroughly address how the different interaction modalities are designed and implemented on the Android platform. Lastly, we show the latency of the multi-modal and alternative interaction modalities, which can be gracefully handled in wearable audio application use cases.

A Brain-Computer Interface Based Human-Robot Interaction Platform

  • 윤중선
    • 한국산학기술학회논문지 / Vol. 16, No. 11 / pp.7508-7512 / 2015
  • We propose a Brain-Computer Interface (BCI) based Human-Robot Interaction (HRI) platform that operates machines by interfacing intentions through brainwaves. We describe the design, operation, and implementation of a platform that performs capture, processing, and execution: capturing a person's intent as brainwave signals, extracting or associating the intent from the captured signals, and operating a device according to the extracted intent. As implementation examples of the proposed platform, an interactive game running on the processing unit and the control of external devices through the processing unit are described. We also introduce various attempts to secure reliability between intent and sensing in the BCI-based platform. The proposed platform and its implementation examples are expected to extend to the realization of new BCI-based ways of controlling and operating devices.
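The capture-processing-execution structure described in the abstract can be pictured as a simple loop. The sketch below is a hypothetical stand-in, not the paper's platform; the signal source, the power feature, the threshold, and the actuation hook are all assumptions.

```python
# Minimal sketch of a capture -> processing -> execution BCI loop.
import numpy as np

def capture_eeg_window(n_samples=256):
    """Stand-in for an EEG acquisition call; returns one window of samples."""
    return np.random.randn(n_samples)  # replace with a real device read

def extract_intent(window, threshold=1.5):
    """Map a crude signal feature (mean power) to an intent label."""
    power = float(np.mean(window ** 2))
    return "activate" if power > threshold else "idle"

def execute(intent):
    """Dispatch the extracted intent to a device or an interactive game."""
    if intent == "activate":
        print("command sent to external device")

for _ in range(10):  # platform main loop
    window = capture_eeg_window()
    execute(extract_intent(window))
```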

Robot Vision to Audio Description Based on Deep Learning for Effective Human-Robot Interaction

  • 박동건;강경민;배진우;한지형
    • 로봇학회논문지 / Vol. 14, No. 1 / pp.22-30 / 2019
  • For effective human-robot interaction, robots not only need to understand the current situational context well but also need to convey that understanding to the human participant in an efficient way. The most convenient way to deliver the robot's understanding to the human participant is for the robot to express its understanding using voice and natural language. Recently, artificial intelligence for video understanding and natural language processing has developed very rapidly, especially based on deep learning. This paper therefore proposes a deep-learning-based method for turning robot vision into an audio description. The applied deep learning model is a pipeline of two models: one that generates a natural language sentence from the robot's vision, and one that generates voice from the generated sentence. We also conduct a real robot experiment to show the effectiveness of our method in human-robot interaction.
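The described pipeline chains two models: image-to-sentence, then sentence-to-speech. Below is a minimal structural sketch under that assumption; `caption_model` and `tts_model` are hypothetical placeholders, not the paper's trained networks.

```python
# Minimal sketch of a two-stage vision-to-audio-description pipeline.
import numpy as np

def caption_model(image):
    """Placeholder image-captioning model (e.g., CNN encoder + RNN decoder)."""
    return "a person is standing in front of the robot"  # hypothetical output

def tts_model(sentence):
    """Placeholder text-to-speech model returning an audio waveform."""
    return np.zeros(16000)  # hypothetical 1 s of audio at 16 kHz

def describe_scene(image):
    sentence = caption_model(image)  # robot vision -> natural language
    audio = tts_model(sentence)      # natural language -> voice
    return sentence, audio

sentence, audio = describe_scene(np.zeros((224, 224, 3)))
print(sentence, audio.shape)
```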

Collective Interaction Filtering Approach for Detection of Group in Diverse Crowded Scenes

  • Wong, Pei Voon;Mustapha, Norwati;Affendey, Lilly Suriani;Khalid, Fatimah
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 2 / pp.912-928 / 2019
  • Crowd behavior analysis plays a central role in helping people identify safety hazards and forecast crime, so it is significant for future video surveillance systems. Recently, the growing demand for safety monitoring has shifted the focus of video surveillance studies from the analysis of individual behavior to group behavior. Group detection is the step that precedes crowd behavior analysis: it separates the individuals in a crowded scene into their respective groups by understanding their complex relations. Most existing studies on group detection are scene-specific. Crowds with various densities, structures, and mutual occlusion are the main challenges for group detection in diverse crowded scenes. We therefore propose a group detection approach called Collective Interaction Filtering, which discovers people's motion interactions from trajectories. The approach deduces interactions among people with the Expectation-Maximization algorithm and accurately identifies groups by clustering trajectories in crowds with various densities, structures, and mutual occlusion. It also addresses grouping consistency between frames. Experiments on the CUHK Crowd Dataset demonstrate that the approach achieves better results than previous methods, yielding state-of-the-art performance.
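As a rough illustration of EM-based trajectory grouping, consider generic Gaussian-mixture clustering on simple motion features; this is a sketch of the general idea, not the paper's Collective Interaction Filtering, and the features and cluster count are assumptions.

```python
# Minimal sketch: EM (Gaussian mixture) clustering of trajectory features.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def trajectory_features(traj):
    """Summarize a (T, 2) trajectory by mean position and mean velocity."""
    velocity = np.diff(traj, axis=0).mean(axis=0)
    return np.concatenate([traj.mean(axis=0), velocity])

# Two synthetic pedestrian groups moving in different directions.
group_a = [np.cumsum(rng.normal([0.5, 0.0], 0.1, (20, 2)), axis=0) for _ in range(10)]
group_b = [np.cumsum(rng.normal([0.0, 0.5], 0.1, (20, 2)), axis=0) + 10 for _ in range(10)]
X = np.array([trajectory_features(t) for t in group_a + group_b])

# EM fits the mixture; trajectories with similar position/motion share a label.
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X)
print(labels)
```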

Types of Computer Game and Abilities of Children's Visual Perception

  • 이은주;이소은
    • 아동학회지 / Vol. 24, No. 5 / pp.43-58 / 2003
  • This research was conducted to examine the effects of computer games on the development of children's visual perception. First, the effect of computer game experience on children's visual perception abilities was analyzed. Second, the effects of different types of computer games on children's visual perception abilities were examined. Third, the interaction effects of sex and computer game type were examined. The subjects of this study were 78 five-year-olds attending a public kindergarten located in Cheung-Ju. To analyze the data, percentages, means, standard deviations, and ANCOVA were used. The results showed that children's visual perception abilities improved significantly with computer game experience, and that the improvement varied significantly according to the type of computer game. No interaction effects were found between a child's sex and the type of computer game.
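For readers unfamiliar with the analysis, an ANCOVA of this design could be set up as below. The data frame is synthetic and the column names (pretest, game_type, sex, posttest) are assumptions, not the study's variables.

```python
# Minimal sketch of an ANCOVA: posttest scores by game type and sex,
# with pretest scores as the covariate.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 78
df = pd.DataFrame({
    "pretest": rng.normal(50, 10, n),
    "game_type": rng.choice(["action", "puzzle", "none"], n),
    "sex": rng.choice(["girl", "boy"], n),
})
df["posttest"] = df["pretest"] + rng.normal(5, 5, n)  # synthetic improvement

# Main effects, the sex-by-game-type interaction, and the pretest covariate.
model = smf.ols("posttest ~ pretest + game_type * sex", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```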

Computer Graphics: Theoretical Study of Antibacterial Quinolone Derivatives as DNA-Intercalator

  • 서명은
    • 약학회지
    • /
    • 제39권1호
    • /
    • pp.78-84
    • /
    • 1995
  • Based on a computer graphics molecular modeling method, quinolone derivatives acting as DNA-gyrase inhibitors formed a stable DNA-intercalation complex with the deoxycytidylyl-3',5'-deoxyguanosine [$d(C_pG)_2$] dinucleotide. When $d(C_pG)_2$ and $d(A_pT)_2$ were compared to determine which DNA forms the more stable DNA-drug complex, based on the interaction energy ($\Delta E$) and the DNA-drug complex energy, $d(C_pG)_2$ resulted in a lower energy than $d(A_pT)_2$.
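The stability comparison boils down to the usual interaction-energy difference: the complex energy minus the energies of the isolated parts. A minimal sketch, with illustrative (not the paper's) energy values:

```python
# Minimal sketch of the interaction-energy comparison.
def interaction_energy(e_complex, e_dna, e_drug):
    """Delta E = E(complex) - [E(DNA) + E(drug)]; more negative = more stable."""
    return e_complex - (e_dna + e_drug)

# Hypothetical single-point energies (kcal/mol) for the two dinucleotides.
de_cpg = interaction_energy(e_complex=-120.0, e_dna=-80.0, e_drug=-20.0)  # -20.0
de_apt = interaction_energy(e_complex=-110.0, e_dna=-78.0, e_drug=-20.0)  # -12.0
print("d(CpG)2 forms the more stable complex:", de_cpg < de_apt)  # True
```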

Human-Computer Interaction Based Only on Auditory and Visual Information

  • Sha, Hui;Agah, Arvin
    • Transactions on Control, Automation and Systems Engineering / Vol. 2, No. 4 / pp.285-297 / 2000
  • One of the research objectives in the area of multimedia human-computer interaction is the application of artificial intelligence and robotics technologies to the development of computer interfaces. This involves utilizing many forms of media, integrating speech input, natural language, graphics, hand pointing gestures, and other methods for interactive dialogues. Although current human-computer communication methods include computer keyboards, mice, and other traditional devices, the two basic ways by which people communicate with each other are voice and gesture. This paper reports on research focusing on the development of an intelligent multimedia interface system modeled on the manner in which people communicate. The work explores interaction between humans and computers based only on the processing of speech (words uttered by the person) and the processing of images (hand pointing gestures). The purpose of the interface is to control a pan/tilt camera to point it at a location specified by the user through the utterance of words and the pointing of the hand. The system utilizes another stationary camera to capture images of the user's hand and a microphone to capture the user's words. Upon processing the images and sounds, the system responds by pointing the camera. Initially, the interface uses hand pointing to locate the general position the user is referring to; it then uses voice commands provided by the user to fine-tune the location and to change the zoom of the camera, if requested. The image of the location is captured by the pan/tilt camera and sent to a color TV monitor to be displayed. This type of system has applications in tele-conferencing and other remote operations, where the system must respond to the user's commands in a manner similar to how the user would communicate with another person. The advantage of this approach is the elimination of the traditional input devices that the user must otherwise utilize to control a pan/tilt camera, replacing them with more "natural" means of interaction. A number of experiments were performed to evaluate the interface system with respect to its accuracy, efficiency, reliability, and limitations.
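The final control step the abstract describes, aiming the pan/tilt camera at a located target, reduces to converting a 3D position into pan and tilt angles. The sketch below assumes an illustrative camera frame and angle convention, and the voice-command refinement step size is hypothetical.

```python
# Minimal sketch: convert a located 3D target into pan/tilt camera angles.
import numpy as np

def pan_tilt_angles(target, camera=np.zeros(3)):
    """Pan (azimuth) and tilt (elevation) in degrees to aim at a 3D target."""
    dx, dy, dz = target - camera
    pan = np.degrees(np.arctan2(dx, dz))                 # left/right
    tilt = np.degrees(np.arctan2(dy, np.hypot(dx, dz)))  # up/down
    return pan, tilt

# Target first coarsely located by the hand-pointing gesture, then refined
# by a voice command such as "left" nudging it by a fixed step.
target = np.array([1.0, 0.5, 3.0])
target[0] -= 0.2  # hypothetical "left" refinement step
print(pan_tilt_angles(target))
```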
