• Title/Summary/Keyword: 모션 제어 (motion control)

Facial Expression Control of 3D Avatar by Hierarchical Visualization of Motion Data (모션 데이터의 계층적 가시화에 의한 3차원 아바타의 표정 제어)

  • Kim, Sung-Ho; Jung, Moon-Ryul
    • The KIPS Transactions:PartA / v.11A no.4 / pp.277-284 / 2004
  • This paper presents a facial expression control method for a 3D avatar that enables the user to select a sequence of facial frames from a facial expression space whose level of detail the user can choose hierarchically. Our system creates the facial expression space from about 2,400 captured facial frames. Because there are too many facial expressions to select from, the user has difficulty navigating the space, so we visualize the space hierarchically. To partition the space into a hierarchy of subspaces, we use fuzzy clustering. Initially, the system creates about 11 clusters from the space of 2,400 facial expressions. The cluster centers are displayed on a 2D screen and are used as candidate key frames for key-frame animation. When the user zooms in (zoom is discrete), it means that the user wants to see more details, so the system creates more clusters for the new zoom level: every time the zoom level increases, the system doubles the number of clusters. The user selects new key frames along the navigation path of the previous level. At the maximum zoom-in, the user completes the facial expression control specification. At any level, the user can go back to the previous level by zooming out and update the navigation path. We let users control the facial expressions of a 3D avatar with the system and evaluate it based on the results.
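The hierarchical partitioning the abstract describes can be sketched as follows: a plain NumPy fuzzy c-means over stand-in frame vectors, with the cluster count doubling at each zoom level (11, 22, 44, ...). The frame data and parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, eps=1e-5, seed=0):
    """Plain fuzzy c-means: returns (cluster centers, membership matrix)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < eps:
            U = U_new
            break
        U = U_new
    return centers, U

# Stand-in for ~2,400 captured facial frames, one feature vector per frame.
frames = np.random.rand(2400, 30)

# Level 0 starts at 11 clusters; every zoom-in doubles the cluster count.
for level in range(3):
    c = 11 * 2 ** level
    centers, U = fuzzy_cmeans(frames, c)
    print(f"zoom level {level}: {c} candidate key frames")
```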

A Study on the Ubiquitous Home Network Interface System by Application of User's Gesture Recognition Method (사용자 제스처 인식을 활용한 유비쿼터스 홈 네트워크 인터페이스 체계에 대한 연구)

  • Park, In-Chan; Kim, Sun-Chul
    • Science of Emotion and Sensibility / v.8 no.3 / pp.265-276 / 2005
  • Home network products in today's ubiquitous environment are used not by a single user but by multiple users on a network. Changing usage environments and systems bring requirements different from the present ones, and research on user-centered design and product interface systems is accordingly active both in Korea and abroad. As various mobile devices and home network products spread rapidly, diverse control methods for operating them easily are being studied; among these, voice recognition and facial expression recognition technologies are under active development. Gesture control systems based on motion-sensing sensors are still at an early stage, but natural interactive interfaces are expected to play a growing role in product control in the near future. This study therefore presents a method for developing a natural gesture-based usage-language system for effective device control, together with its results, and summarizes findings from experiments on user mental models and metaphors. By analyzing users' existing natural gesture vocabulary, we examined its applicability as a device control method, and through research on the process of developing a new device control method using motion-sensing cameras and sensors, we established a development method and process for a natural gesture-based language system.

Fingertip Tracking Robust to Local Illumination Changes and Cluttered Background (국부적인 조명변화와 복잡한 배경에 강인한 손 끝 좌표 추적)

  • 김유호; 김종선; 이준호
    • Proceedings of the IEEK Conference / 2000.09a / pp.439-442 / 2000
  • This study proposes a finger-mouse system that controls the mouse pointer by reliably detecting and tracking the index-fingertip coordinates of the hand region under the local illumination changes caused by hand motion and against cluttered backgrounds. We propose an adaptive on-line learning method for hand region detection that is robust to the local illumination changes caused by hand motion, and we combine Kalman tracking with motion segmentation based on frame differencing so that the hand region can be tracked reliably even against a cluttered background. Experimental results show that the fingertip coordinates were tracked reliably regardless of background clutter and hand motion.
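A minimal sketch of such a tracking loop, assuming OpenCV: frame differencing segments the moving hand from the static cluttered background, and a constant-velocity Kalman filter smooths the fingertip coordinate. The threshold value and the crude topmost-pixel fingertip detector are illustrative stand-ins, not the paper's adaptive on-line learning method.

```python
import cv2
import numpy as np

# Constant-velocity Kalman filter: state (x, y, vx, vy), measurement (x, y).
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Motion segmentation by frame differencing: static background drops out.
    moving = cv2.threshold(cv2.absdiff(gray, prev), 25, 255, cv2.THRESH_BINARY)[1]
    prev = gray
    pred = kf.predict()
    pts = cv2.findNonZero(moving)
    if pts is not None:
        # Crude fingertip stand-in: the topmost moving pixel.
        tip = min(pts[:, 0, :], key=lambda p: p[1])
        kf.correct(np.array([[tip[0]], [tip[1]]], dtype=np.float32))
    cv2.circle(frame, (int(pred[0, 0]), int(pred[1, 0])), 5, (0, 255, 0), -1)
    cv2.imshow("fingertip", frame)
    if cv2.waitKey(1) == 27:      # Esc to quit
        break
```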

Real-time Motion Retargetting (실시간 동작 변환)

  • Choe, Gwang-Jin; Go, Hyeong-Seok
    • Journal of the Korea Computer Graphics Society / v.5 no.2 / pp.25-32 / 1999
  • This paper presents an algorithm that retargets the motion of one character to another in real time so that the motion can be reused. The algorithm is based on closed-loop inverse rate control with task priorities. As the highest-priority task, the difference between the end-effector trajectories of the two characters is reduced; as the next-priority task, the differences between their joint angles are minimized using the redundant degrees of freedom. Because the retargeting is performed online, the converted motion can be viewed on screen in real time during motion capture, so the performer can adjust the motion while watching the screen until the desired result is obtained, which enables more effective interaction than offline algorithms.
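The task-priority scheme the paper builds on can be written compactly: the Jacobian pseudoinverse serves the primary end-effector task, and the null-space projector spends the remaining redundancy pulling the target character's joint angles toward the source character's. A minimal sketch, with an assumed toy Jacobian and error values:

```python
import numpy as np

def prioritized_rates(J, x_err, q, q_src, k_task=1.0, k_null=0.1):
    """One step of closed-loop inverse rate control with task priority.
    J     : end-effector Jacobian (m x n), m < n for a redundant character
    x_err : end-effector trajectory error of the target character (m,)
    q     : current joint angles of the target character (n,)
    q_src : corresponding joint angles of the source character (n,)
    """
    J_pinv = np.linalg.pinv(J)
    primary = J_pinv @ (k_task * x_err)        # highest priority: end-effector
    N = np.eye(len(q)) - J_pinv @ J            # null-space projector
    secondary = N @ (k_null * (q_src - q))     # redundancy: match joint angles
    return primary + secondary                 # joint rates q_dot

# Toy example: a 3-DOF planar arm tracking a 2D end-effector target.
J = np.array([[-0.5, -0.3, -0.1],
              [ 0.9,  0.6,  0.2]])
q_dot = prioritized_rates(J, x_err=np.array([0.02, -0.01]),
                          q=np.zeros(3), q_src=np.array([0.1, -0.2, 0.3]))
print(q_dot)
```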

Dragging Body Parts in 3D Space to Direct Animated Characters (3차원 공간 상의 신체 부위 드래깅을 통한 캐릭터 애니메이션 제어)

  • Lee, Kang Hoon; Choi, Myung Geol
    • Journal of the Korea Computer Graphics Society / v.21 no.2 / pp.11-20 / 2015
  • We present a new interactive technique for directing the motion sequences of an animated character by dragging a specific body part to a desired location in a three-dimensional virtual environment via a hand motion tracking device. The motion sequences of our character are synthesized by reordering subsequences of captured motion data based on a well-known graph representation. For each new input location, our system samples the space of possible future states by unrolling the graph into a spatial search tree and retrieves one of the states at which the dragged body part of the character gets closer to the input location. We minimize the difference between each pair of successively retrieved states, so that the user is able to anticipate which states will be found by varying the input location and, as a result, to quickly reach the desired states. The usefulness of our method is demonstrated through experiments with breakdance, boxing, and basketball motion data.
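A minimal sketch of the retrieval step described above, assuming a toy motion graph: the graph is unrolled a few transitions deep, and the future state whose dragged body part lands closest to the input location is kept. The graph, positions, and depth are illustrative assumptions.

```python
import numpy as np

def search_future_state(graph, start, target, body_part_pos, depth=4):
    """Unroll the motion graph (node -> successor nodes) into a search tree
    rooted at `start`; return the path whose final state puts the dragged
    body part closest to `target`."""
    best_dist, best_path = np.inf, [start]
    frontier = [(start, [start])]
    for _ in range(depth):
        next_frontier = []
        for node, path in frontier:
            for succ in graph[node]:
                d = np.linalg.norm(body_part_pos(succ) - target)
                new_path = path + [succ]
                if d < best_dist:
                    best_dist, best_path = d, new_path
                next_frontier.append((succ, new_path))
        frontier = next_frontier
    return best_path

# Toy motion graph over three clips, with a hand position for each node.
pos = {"idle": [0.0, 0.0, 0.0], "step": [0.4, 0.0, 0.2], "reach": [0.8, 1.1, 0.3]}
graph = {"idle": ["step"], "step": ["idle", "reach"], "reach": ["idle"]}
path = search_future_state(graph, "idle", np.array([0.9, 1.0, 0.3]),
                           lambda n: np.array(pos[n]))
print(path)   # ['idle', 'step', 'reach']
```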

Real-time Tele-operated Drone System with LTE Communication (LTE 통신을 이용한 실시간 원격주행 드론 시스템)

  • Kang, Byoung Hun
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.6 / pp.35-40 / 2019
  • In this research, we propose a real-time tele-driving system for unmanned drone operations using LTE communication. The drone operator is located 180 km away and controls the altitude and position of the drone with a 50 ms time delay. The motion data and video from the drone are streamed to the operator: the video is played on the operator's head-mounted display (HMD), and the motion data drives a drone emulation on the operator's simulator. In general, a drone is operated using an RF signal, and the maximum distance for direct control is limited to 2 km. For long-range drone control beyond 2 km, an autonomous flight mode is used, following a mission plan along with GPS data; in an emergency, the autopilot is stopped and the "return home" function is executed. In this research, an immersive tele-driving system with a 50 ms time delay over LTE communication is proposed for drone operation. A successful test run of the proposed system has already been performed between an operator in Daejeon and a drone in Inje (Gangwon-do), approximately 180 km apart.
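A minimal sketch of the telemetry leg of such a system: the drone side stamps each motion packet so the operator side can check it against the 50 ms delay budget the paper cites. The address, packet layout, and the assumption of roughly synchronized clocks are all illustrative, not the paper's actual protocol.

```python
import json
import socket
import time

OPERATOR = ("203.0.113.10", 9000)    # placeholder operator address

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_motion(attitude, position):
    """Drone side: stamp and stream one motion sample to the operator."""
    packet = json.dumps({"t": time.time(),        # send timestamp
                         "attitude": attitude,     # roll, pitch, yaw (deg)
                         "position": position}).encode()
    sock.sendto(packet, OPERATOR)

def fresh(packet, budget=0.050):
    """Operator side: accept a sample only if it is within the 50 ms delay
    budget (assumes the two clocks are roughly synchronized, e.g. via NTP)."""
    return time.time() - json.loads(packet)["t"] <= budget

send_motion(attitude=[0.5, -1.2, 87.0], position=[37.42, 127.99, 120.0])
```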

Motion Patches (모션 패치)

  • Choi, Myung-Geol; Lee, Kang-Hoon; Lee, Je-Hee
    • Journal of KIISE:Computer Systems and Theory / v.33 no.1_2 / pp.119-127 / 2006
  • Real-time animation of human figures in virtual environments is an important problem in the context of computer games and virtual environments. Recently, the use of large collections of captured motion data has added realism to character animation. However, when the virtual environment is large and complex, the effort of capturing motion data in a physical environment and adapting it to an extended virtual environment becomes the bottleneck for achieving interactive character animation and control. We present a new technique for allowing our animated characters to navigate through a large virtual environment that is constructed from a small set of building blocks. The building blocks can be tiled or aligned in a repeating pattern to create a large environment. We annotate each block with a motion patch, which describes what motions are available to animated characters within the block. We demonstrate the versatility and flexibility of our approach through examples in which multiple characters are animated and controlled at interactive rates in large, complex virtual environments.
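The annotation the paper names can be sketched as a simple data structure: each building block carries a motion patch listing the motions available inside it, and a large environment is a grid of tiled blocks. The class names and motion lists below are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class MotionPatch:
    block_type: str
    motions: list                 # motion clips usable within this block

@dataclass
class Environment:
    tiles: dict = field(default_factory=dict)    # (row, col) -> MotionPatch

    def available_motions(self, row, col):
        patch = self.tiles.get((row, col))
        return patch.motions if patch else []

# Tile a small environment from two block types in a repeating pattern.
floor = MotionPatch("floor", ["walk", "run", "turn"])
stairs = MotionPatch("stairs", ["climb_up", "climb_down"])
env = Environment()
for r in range(4):
    for c in range(4):
        env.tiles[(r, c)] = stairs if (r + c) % 4 == 0 else floor
print(env.available_motions(0, 0))   # ['climb_up', 'climb_down']
```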

Development of Frozen Shoulder Rehabilitation Robot Based On Motion Capture Data (모션 캡쳐 데이터 기반의 오십견 재활 보조용 로봇의 개발)

  • Yang, Un-Je; Kim, Jung-Yup
    • Transactions of the Korean Society of Mechanical Engineers A / v.36 no.9 / pp.1017-1026 / 2012
  • In this study, an exoskeleton-type robot is developed to assist frozen-shoulder rehabilitation in a systematic and efficient manner. The developed robot has two main features. The first is structural: the robot is designed to rehabilitate both shoulders of a patient, and the three axes of the shoulder meet at one point to generate human-like ball-joint motions. The second is functional and is divided into two rehabilitation modes: a joint rehabilitation mode, which helps to recover the shoulder's original range of motion by moving the patient's shoulder according to patterns obtained by motion capture, and a muscle rehabilitation mode, which strengthens the shoulder muscles by suitably resisting the patient's motion. Through these two modes, frozen-shoulder rehabilitation can be performed systematically according to the patient's condition. The development procedure is described in detail.
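The two modes can be summarized as one control law: joint mode tracks a motion-captured trajectory with PD control, while muscle mode commands a torque opposing the patient's velocity. A minimal sketch with assumed gains and signal names, not the developed robot's actual controller:

```python
import math

def control_torque(mode, t, q, q_dot, captured_traj,
                   kp=40.0, kd=4.0, k_resist=2.0):
    """Shoulder-joint torque command for the two rehabilitation modes."""
    if mode == "joint":
        # Joint rehabilitation: track the motion-capture pattern (PD control).
        return kp * (captured_traj(t) - q) - kd * q_dot
    if mode == "muscle":
        # Muscle rehabilitation: resist the patient's own motion.
        return -k_resist * q_dot
    raise ValueError(f"unknown mode: {mode}")

# Example: follow a slow sinusoidal shoulder pattern in joint mode.
tau = control_torque("joint", t=0.5, q=0.2, q_dot=0.1,
                     captured_traj=lambda t: 0.5 * math.sin(t))
print(tau)
```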

Realtime Facial Expression Control and Projection of Facial Motion Data using Locally Linear Embedding (LLE 알고리즘을 사용한 얼굴 모션 데이터의 투영 및 실시간 표정제어)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association / v.7 no.2 / pp.117-124 / 2007
  • This paper describes a methodology that enables animators to create facial expression animations and control facial expressions in real time by reusing motion capture data. To achieve this, we define a facial expression state representation that expresses facial states based on facial motion data. In addition, by distributing the facial expressions in an intuitive space using the LLE algorithm, it is possible to create animations or control expressions in real time from the facial expression space through a user interface. In this paper, approximately 2,400 facial expression frames are used to generate the facial expression space. By navigating the facial expression space projected onto a 2D plane, animations can be created and the expressions of 3D avatars can be controlled in real time by selecting a series of expressions from the space. To distribute the approximately 2,400 facial expression frames in an intuitive space, the state of each expression must be represented; for this, a distance matrix that holds the distances between pairs of feature points on the face is used. The LLE algorithm is then applied to this data for visualization on the 2D plane. Animators control facial expressions and create animations through the system's user interface, and the paper evaluates the results of the experiment.
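A minimal sketch of the pipeline described above, assuming scikit-learn: each frame's state is the flattened upper triangle of the pairwise distance matrix over facial feature points, and LLE embeds the roughly 2,400 states into a 2D plane for navigation. The feature-point data and neighbor count are stand-ins, not the paper's values.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# Stand-in for the captured data: 2400 frames x 20 facial feature points (3D).
frames = np.random.rand(2400, 20, 3)

def distance_state(points):
    """Per-frame state: flattened upper triangle of the pairwise
    distance matrix between facial feature points."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    return d[np.triu_indices(len(points), k=1)]

states = np.array([distance_state(f) for f in frames])

# Project the expression states onto a 2D plane for interactive navigation.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
space2d = lle.fit_transform(states)
print(space2d.shape)   # (2400, 2)
```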