• Title/Summary/Keyword: Human-Computer Interaction (인간과 컴퓨터의 상호작용)


Real-time Avatar Animation using Component-based Human Body Tracking (구성요소 기반 인체 추적을 이용한 실시간 아바타 애니메이션)

  • Lee Kyoung-Mi
    • Journal of Internet Computing and Services
    • /
    • v.7 no.1
    • /
    • pp.65-74
    • /
    • 2006
  • Human tracking is a requirement for advanced human-computer interfaces (HCI). This paper proposes a method which uses a component-based human model, detects body parts, estimates human postures, and animates an avatar. Each body part consists of color, connection, and location information and is matched to the corresponding component of the human model. For human tracking, the 2D information of human posture is used for body tracking by computing similarities between frames. The depth information is decided by the relative location between components and is transferred to a moving direction to build a 2-1/2D human model. While each body part is modeled by posture and direction, the corresponding component of a 3D avatar is rotated in 3D using the information transferred from the human model. We achieved a 90% tracking rate on a test video containing a variety of postures, and the rate increased as the proposed system processed more frames.
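The frame-to-frame matching the abstract describes, comparing a tracked body part's color and location against candidate regions in the next frame, can be sketched as follows. The component representation, the field names, and the similarity weights are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical sketch of component-based body-part matching: each part
# carries a mean color and an image location, and the candidate in the
# next frame with the highest combined similarity is taken as the match.
# Weights and the similarity form are assumptions for illustration.

def similarity(comp_a, comp_b, w_color=0.6, w_loc=0.4):
    """Combined color/location similarity of two components (higher = more similar)."""
    # color term: normalized agreement of mean RGB values in [0, 255]
    color_sim = 1.0 - sum(abs(a - b) for a, b in zip(comp_a["color"], comp_b["color"])) / (3 * 255)
    # location term: Euclidean distance squashed into (0, 1]
    dx = comp_a["loc"][0] - comp_b["loc"][0]
    dy = comp_a["loc"][1] - comp_b["loc"][1]
    loc_sim = 1.0 / (1.0 + (dx * dx + dy * dy) ** 0.5)
    return w_color * color_sim + w_loc * loc_sim

def track(model_part, candidates):
    """Pick the candidate region most similar to the tracked body part."""
    return max(candidates, key=lambda c: similarity(model_part, c))
```

A nearby, similarly colored candidate wins over a distant, differently colored one, which mirrors how a stable posture keeps its component assignments across frames.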


A Study on the Development of a Video Protocol Analysis Tool for User Interface Design (사용자 인터페이스 디자인을 위한 Video Protocol 분석 도구 개발에 관한 연구)

  • 김병욱;이건표
    • Proceedings of the ESK Conference
    • /
    • 1997.10a
    • /
    • pp.456-459
    • /
    • 1997
  • As the nature of products changes, the information designers need changes with it, and so does the flow of information gathering. Design research has been shifting from simple questionnaires toward more substantive investigation that observes users in actual situations of use, i.e., contextual design (design that considers the environment of use). However, existing survey and analysis methods focus on collecting data biased toward one particular aspect (cognitive or behavioral), which makes it difficult to gather diverse information and manage it in an integrated way. Looking more closely at the causes of this difficulty: first, there is the problem of the data collected. Because the data focus either on the individual user's operating behavior or on aspects such as the task situation and the user's character and preferences, it becomes hard to analyze these two types of data together, even though they interact and strongly influence the user interface. Second, there is the lack of an adequate data logger. Existing data loggers extract information as letters, numbers, or special characters, so the data are already processed once by the extractor at the moment of collection. This means that important data, which could support a more substantive and detailed analysis at the analysis stage, may be lost. Finally, there is no method for analyzing the two types of data simultaneously, and no tool for processing raw data into information usable in design. While design work is a synthetic process of visualization, analysis results are non-visual and, as noted above, tend to be fragmented, making them difficult to apply directly to design. This study therefore aims to solve the problems of existing analysis tools for usability evaluation and to develop an evaluation and analysis tool that can be used in the product user interface design process. To this end, we first selected video data containing various types of information. Second, we developed a data logger that can extract data from various perspectives. Third, we propose a tool for organizing and analyzing data visually. Finally, we develop a synthesis process that can be used to derive multiple design alternatives in interface design. A program is developed so that this whole series of steps takes place within an integrated computer system, increasing the usefulness of the information.


Robust Estimation of Hand Poses Based on Learning (학습을 이용한 손 자세의 강인한 추정)

  • Kim, Sul-Ho;Jang, Seok-Woo;Kim, Gye-Young
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.23 no.12
    • /
    • pp.1528-1534
    • /
    • 2019
  • Recently, due to the popularization of 3D depth cameras, new research directions and opportunities have opened up beyond work conducted on RGB images, but estimation of human hand pose is still classified as one of the difficult topics. In this paper, we propose a robust method for estimating human hand pose from various input 3D depth images using a learning algorithm. The proposed approach first generates a skeleton-based hand model and then aligns the generated hand model with three-dimensional point cloud data. Then, using a random forest-based learning algorithm, the hand pose is robustly estimated from the aligned hand model. Experimental results show that the proposed hierarchical approach achieves robust and fast estimation of human hand posture from input depth images captured in various indoor and outdoor environments.
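The model-to-point-cloud alignment step mentioned in the abstract can be sketched, under assumptions, as a centroid-translation first pass; a full alignment would also solve for rotation (e.g. via ICP), and the joint representation here is illustrative, not the paper's.

```python
# Minimal sketch of aligning a skeleton-based hand model with a 3D point
# cloud: translate the model's joints so its centroid coincides with the
# cloud's centroid. This is only the coarse translation component of a
# real alignment and is an assumption, not the paper's exact method.

def centroid(points):
    """Mean position of a list of (x, y, z) points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def align_by_centroid(model_joints, cloud):
    """Translate model joints so their centroid matches the cloud's."""
    mc, cc = centroid(model_joints), centroid(cloud)
    shift = tuple(cc[i] - mc[i] for i in range(3))
    return [tuple(j[i] + shift[i] for i in range(3)) for j in model_joints]
```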

Interactive Cultural Content Using Finger Motion and HMD VR (Finger Motion과 HMD VR을 이용한 인터렉티브 문화재 콘텐츠)

  • Lee, Byungseok;Jung, Jonghee;Back, Chanyeol;Son, Youngro;Chin, Seongah
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.6 no.11
    • /
    • pp.519-528
    • /
    • 2016
  • Most cultural heritage content we currently encounter simply provides one-sided learning and is therefore poorly suited to combining with the state of the art in high technology. Pictures and films of cultural assets are also used for the sake of educational efficacy, but there are limits to how much interest one-sided, knowledge-delivery-only learning can draw from users. In this paper, we propose interactive HMD VR cultural heritage content that offers a more experiential approach to overcome these limitations. To this end, we first selected engaging and well-known cultural assets from around the world to draw more attention and effect. To increase immersion, presence, and interactivity, we used an HMD VR headset and a Leap Motion controller. The content also incorporates augmented information as well as puzzle-game components. To verify the approach, we carried out a user study.

Design of Vision-based Interaction Tool for 3D Interaction in Desktop Environment (데스크탑 환경에서의 3차원 상호작용을 위한 비전기반 인터랙션 도구의 설계)

  • Choi, Yoo-Joo;Rhee, Seon-Min;You, Hyo-Sun;Roh, Young-Sub
    • The KIPS Transactions:PartB
    • /
    • v.15B no.5
    • /
    • pp.421-434
    • /
    • 2008
  • As computer graphics, virtual reality, and augmented reality technologies have developed, many applications based on them require interaction in 3D space, such as selection and manipulation of a 3D object. In this paper, we propose a framework for vision-based 3D interaction which simulates the functions of an expensive 3D mouse in a desktop environment. The proposed framework includes a specially manufactured interaction device using three-color LEDs. By recognizing the position and color of the LEDs from video sequences, various mouse events and 6-DOF interactions are supported. Since the proposed device is more intuitive and easier to use than an existing 3D mouse, which is expensive and requires skilled manipulation, it can be used without additional learning or training. In this paper, we explain methods for making the three-color LED pointing device, one of the components of the proposed framework, for calculating the 3D position and orientation of the pointer, and for analyzing the LED color from video sequences. We verify the accuracy and usefulness of the proposed device by reporting the measured errors in 3D position and orientation.
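The LED color analysis step can be sketched as nearest-reference-color classification of a pixel; the reference colors and the plain RGB distance used here are illustrative assumptions, not the paper's calibration procedure.

```python
# Hypothetical sketch of classifying an observed LED pixel by its nearest
# reference color in RGB space. A real system would calibrate references
# per camera and lighting; these values are assumptions for illustration.

def classify_led(pixel, refs=None):
    """Return the name of the reference LED color nearest to the pixel (RGB)."""
    if refs is None:
        refs = {"red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255)}

    def dist(a, b):
        # squared Euclidean distance in RGB space
        return sum((x - y) ** 2 for x, y in zip(a, b))

    return min(refs, key=lambda name: dist(pixel, refs[name]))
```

Once each frame's LED color is identified, it can be mapped to a mouse event (e.g. one color per button), while the LED's image position drives the pointer.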

A Study on the Design and Implementation of a Camera-Based 6DoF Tracking and Pose Estimation System (카메라 기반 6DoF 추적 및 포즈 추정 시스템의 설계 및 구현에 관한 연구)

  • Do-Yoon Jeong;Hee-Ja Jeong;Nam-Ho Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.24 no.5
    • /
    • pp.53-59
    • /
    • 2024
  • This study presents the design and implementation of a camera-based 6DoF (6 Degrees of Freedom) tracking and pose estimation system. In particular, we propose a method for accurately estimating the positions and orientations of all fingers of a user utilizing a 6DoF robotic arm. The system is developed using the Python programming language, leveraging the Mediapipe and OpenCV libraries. Mediapipe is employed to extract keypoints of the fingers in real-time, allowing for precise recognition of the joint positions of each finger. OpenCV processes the image data collected from the camera to analyze the finger positions, thereby enabling pose estimation. This approach is designed to maintain high accuracy despite varying lighting conditions and changes in hand position. The proposed system's performance has been validated through experiments, evaluating the accuracy of hand gesture recognition and the control capabilities of the robotic arm. The experimental results demonstrate that the system can estimate finger positions in real-time, facilitating precise movements of the 6DoF robotic arm. This research is expected to make significant contributions to the fields of robotic control and human-robot interaction, opening up various possibilities for future applications. The findings of this study will aid in advancing robotic technology and promoting natural interactions between humans and robots.
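Since the abstract names Mediapipe, whose hand model returns 21 landmarks with coordinates normalized to [0, 1], two small helper steps such a pipeline needs can be sketched as follows. The landmark indices follow Mediapipe's published numbering (8 is the index fingertip, 6 its PIP joint); the extension heuristic itself is an illustrative assumption, not the paper's method.

```python
# Hedged sketch of two post-processing helpers for Mediapipe-style hand
# landmarks: converting normalized coordinates to pixels, and a crude
# "finger extended" test comparing tip and PIP joint heights in image
# coordinates (smaller y is higher in the frame).

def to_pixel(landmark, width, height):
    """Map a normalized (x, y) landmark to integer pixel coordinates."""
    x, y = landmark
    return (int(x * width), int(y * height))

def finger_extended(landmarks, tip_idx, pip_idx):
    """Heuristic: a finger counts as extended if its tip lies above its
    PIP joint in the image (i.e. has a smaller normalized y)."""
    return landmarks[tip_idx][1] < landmarks[pip_idx][1]
```

In a full pipeline, OpenCV would supply camera frames, Mediapipe would produce the landmark list per frame, and outputs like these would be translated into robotic-arm joint commands.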

Technology Development for Non-Contact Interface of Multi-Region Classifier based on Context-Aware (상황 인식 기반 다중 영역 분류기 비접촉 인터페이스기술 개발)

  • Jin, Songguo;Rhee, Phill-Kyu
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.20 no.6
    • /
    • pp.175-182
    • /
    • 2020
  • Non-contact eye tracking is a nonintrusive human-computer interface that provides hands-free communication for people with severe disabilities. Recently, it has also been expected to play an important role in non-contact systems, for example due to the COVID-19 coronavirus. This paper proposes a novel approach to an eye mouse using an eye tracking method based on a context-aware AdaBoost multi-region classifier and an ASSL (active and semi-supervised learning) algorithm. The conventional AdaBoost algorithm cannot provide sufficiently reliable performance in face tracking for eye cursor pointing estimation, because it cannot take advantage of the spatial context relations among facial features. We therefore propose an eye-region-context-based AdaBoost multiple classifier for efficient non-contact gaze tracking and mouse implementation. The proposed method detects, tracks, and aggregates various eye features to evaluate the gaze and adjusts active and semi-supervised learning based on the on-screen cursor. The proposed system has been successfully employed for eye localization, and it can also be used to detect and track eye features. The system moves the computer cursor along the user's gaze; to prevent shaking during real-time tracking with a Kalman filter, the output is post-processed using Gaussian modeling. Target objects were generated randomly, and the real-time eye tracking performance was analyzed according to Fitts' law. It is expected that the utilization of non-contact interfaces will increase.
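The cursor smoothing described above can be sketched as a minimal 1-D Kalman filter with a constant-position model: each noisy gaze measurement pulls the estimate toward it by the Kalman gain, damping jitter. The noise parameters here are illustrative assumptions, not the paper's tuning; a real eye mouse would run one such filter per screen axis.

```python
# Minimal 1-D Kalman filter sketch for cursor smoothing, assuming a
# constant-position state model. q and r are assumed noise values.

class Kalman1D:
    def __init__(self, q=1e-3, r=0.25):
        self.x = None   # state estimate (cursor coordinate)
        self.p = 1.0    # estimate variance
        self.q = q      # process noise variance
        self.r = r      # measurement noise variance

    def update(self, z):
        """Fold one noisy measurement z into the running estimate."""
        if self.x is None:
            self.x = z                   # initialize on first measurement
            return self.x
        self.p += self.q                 # predict: uncertainty grows
        k = self.p / (self.p + self.r)   # Kalman gain in [0, 1]
        self.x += k * (z - self.x)       # correct toward the measurement
        self.p *= (1.0 - k)              # uncertainty shrinks after update
        return self.x
```

Fed a jittery sequence around a fixation point, the filtered output varies noticeably less than the raw measurements while still converging to the true gaze position.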

W3C based Interoperable Multimodal Communicator (W3C 기반 상호연동 가능한 멀티모달 커뮤니케이터)

  • Park, Daemin;Gwon, Daehyeok;Choi, Jinhuyck;Lee, Injae;Choi, Haechul
    • Journal of Broadcast Engineering
    • /
    • v.20 no.1
    • /
    • pp.140-152
    • /
    • 2015
  • HCI (Human-Computer Interaction) enables interaction between people and computers through a human-familiar interface known as a modality. Recently, to provide an optimal interface across various devices and service environments, advanced HCI methods using multiple modalities have been intensively studied. However, a multimodal interface faces the difficulties that modalities have different data formats and are hard to coordinate efficiently. To solve this problem, a multimodal communicator is introduced, based on the EMMA (Extensible Multimodal Annotation Markup language) and MMI (Multimodal Interaction Framework) standards of the W3C (World Wide Web Consortium). This standards-based framework, consisting of modality components, an interaction manager, and a presentation component, makes multiple modalities interoperable and provides wide expansion capability for other modalities. Experimental results show the multimodal communicator working with the eye tracking and gesture recognition modalities in a map browsing scenario.
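EMMA's role is to wrap each modality's result in a common XML envelope so the interaction manager can treat gaze, gesture, or speech uniformly. A minimal EMMA-style document can be built with the standard library as sketched below; the payload text and attribute values are illustrative assumptions, not the paper's actual messages.

```python
# Hedged sketch of building a minimal EMMA-style envelope: one
# emma:interpretation element carrying a modality's result, tagged with
# its mode and medium. Payload and values are illustrative assumptions.
import xml.etree.ElementTree as ET

EMMA_NS = "http://www.w3.org/2003/04/emma"

def emma_envelope(mode, medium, payload_text):
    """Serialize a single-interpretation EMMA document as a string."""
    ET.register_namespace("emma", EMMA_NS)
    root = ET.Element(f"{{{EMMA_NS}}}emma", {"version": "1.0"})
    interp = ET.SubElement(root, f"{{{EMMA_NS}}}interpretation", {
        f"{{{EMMA_NS}}}mode": mode,      # e.g. "gaze" or "gesture"
        f"{{{EMMA_NS}}}medium": medium,  # e.g. "visual"
    })
    interp.text = payload_text
    return ET.tostring(root, encoding="unicode")
```

Because every modality component emits the same envelope shape, adding a new modality only requires producing another interpretation, which is the expansion capability the abstract highlights.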

A Study on the Key Factors in User Acceptance of the Smart Clothing (스마트웨어의 수용 요인에 대한 연구)

  • Hong, Ji-Young;Chae, Haeng-Suk;Han, Kwang-Hee
    • Science of Emotion and Sensibility
    • /
    • v.9 no.spc3
    • /
    • pp.235-241
    • /
    • 2006
  • This paper predicts user acceptance of smart clothing. The present research develops and validates new products for smart clothing. The studies suggest that further analysis of the process be undertaken to better establish the properties of smart clothing, its underlying structures, and its stability across innovative technologies. The findings reported in this paper should be useful as methods for identifying user needs; such findings now provide a way to explain technology acceptance. Both qualitative and quantitative methods were applied in this study in order to identify user needs for smart clothing. We wrote scenarios and conducted both focus group interviews and a survey to assess users' interest. The purpose of the survey was to evaluate the importance of the functions and the degree of the participants' feelings and attitudes. Furthermore, we explore the nature and specific influences of factors that may affect user perception and usage.


Fast Hand-Gesture Recognition Algorithm For Embedded System (임베디드 시스템을 위한 고속의 손동작 인식 알고리즘)

  • Hwang, Dong-Hyun;Jang, Kyung-Sik
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.21 no.7
    • /
    • pp.1349-1354
    • /
    • 2017
  • In this paper, we propose a fast hand-gesture recognition algorithm for embedded systems. Existing hand-gesture recognition algorithms are difficult to use in low-performance systems such as embedded systems and mobile devices because of the high computational complexity of the contour tracing methods that extract every point of the hand contour. Instead of contour tracing, the proposed algorithm uses a concentric-circle tracing method to estimate an abstracted contour of the fingers, then classifies hand gestures by extracting features. The proposed algorithm has an average recognition rate of 95% and an average execution time of 1.29 ms, a performance improvement of up to 44% compared with an algorithm using the existing contour tracing method. This confirms that the algorithm can be used in low-performance systems such as embedded systems and mobile devices.
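The concentric-circle idea can be sketched as follows: sample a binary hand mask along one circle around the palm center and count the runs of foreground pixels crossed, since each run roughly corresponds to one finger. Only one circle is sampled here, and the mask, center, and radius are illustrative assumptions; the paper's method traces several concentric circles and extracts richer features.

```python
# Hedged sketch of concentric-circle tracing over a binary hand mask:
# sample the mask along a circle and count 0 -> 1 transitions, treating
# the sample sequence as circular. Each transition marks entering one
# foreground run (roughly, one finger crossing the circle).
import math

def count_crossings(mask, center, radius, samples=360):
    """Count foreground runs along a circle over a binary 2D mask."""
    cx, cy = center
    values = []
    for i in range(samples):
        a = 2 * math.pi * i / samples
        x = int(round(cx + radius * math.cos(a)))
        y = int(round(cy + radius * math.sin(a)))
        inside = 0 <= y < len(mask) and 0 <= x < len(mask[0])
        values.append(mask[y][x] if inside else 0)
    # circular count of 0 -> 1 transitions (values[-1] wraps to values[0])
    return sum(1 for i in range(samples)
               if values[i - 1] == 0 and values[i] == 1)
```

Because only the sampled circle pixels are visited rather than the full contour, the cost is fixed per circle, which is what makes this style of method attractive on embedded hardware.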