• Title/Summary/Keyword: Motion Capture Data (모션 캡처 데이터)


A Schema Definition for Exchanging Character Animation Data (캐릭터 애니메이션 데이터 교환을 위한 스키마 정의)

  • Park, Jong-Hyun;Jung, Chul-Hee;Park, Chang-Sup;Lee, Myeong-Won
    • Proceedings of the Korean Information Science Society Conference / 2011.06a / pp.430-433 / 2011
  • This study defines a schema for exchanging character animation using H-Anim, an international standard established by ISO/IEC JTC1 SC24 and the Web3D Consortium. The existing H-Anim standard defines, on an X3D basis, the hierarchical data structure required to transmit and store humanoid character structures, but it is not designed so that a generated animation can be applied as-is to other characters. This study describes a method of defining animation in H-Anim so that an animation can be generated for an arbitrary H-Anim character model from arbitrary motion capture data, together with a schema extension for this animation capability that conforms to the existing H-Anim structure. The character animation data format in this study aims to let different applications share animation data and remain compatible with one another.
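
As a rough illustration of the retargeting idea in this abstract, the following Python sketch maps capture-skeleton joint rotations onto the standard H-Anim joint names so one clip can drive any conforming model. The data layout and the `retarget` helper are assumptions for illustration, not the authors' schema:

```python
import numpy as np

# Hypothetical sketch: retarget per-frame motion capture rotations onto
# standard H-Anim joint names so the same clip can drive any H-Anim model.
# Joint names follow the H-Anim convention; everything else is assumed.
HANIM_JOINTS = ["HumanoidRoot", "sacroiliac", "l_hip", "l_knee", "l_ankle",
                "r_hip", "r_knee", "r_ankle", "vl5", "skullbase",
                "l_shoulder", "l_elbow", "l_wrist",
                "r_shoulder", "r_elbow", "r_wrist"]

def retarget(mocap_frames, name_map):
    """Map each capture-skeleton joint rotation to its H-Anim joint.

    mocap_frames: list of {capture_joint_name: quaternion (4,)} dicts.
    name_map: capture joint name -> H-Anim joint name.
    Returns a list of frames keyed by H-Anim joint names only.
    """
    out = []
    for frame in mocap_frames:
        pose = {}
        for src, quat in frame.items():
            tgt = name_map.get(src)
            if tgt in HANIM_JOINTS:      # drop joints H-Anim does not define
                pose[tgt] = np.asarray(quat, dtype=float)
        out.append(pose)
    return out
```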

The digital transformation of mask dance movement in intangible cultural asset based on human pose recognition (휴먼포즈 인식을 적용한 무형문화재 탈춤 동작 디지털전환)

  • SooHyuong Kang;SungGeon Park;KwangYoung Park
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.678-680 / 2023
  • This study aims to digitize the mask dance (talchum) movements inscribed on the UNESCO Representative List of the Intangible Cultural Heritage of Humanity in 2022 and to hand that information down to future generations. Data were collected from 39 intangible cultural asset holders and transmitters belonging to 13 mask dance organizations designated as national intangible cultural assets and 5 designated as city/provincial intangible cultural assets, who wore inertial motion capture equipment while being recorded with 8 cameras. Bounding boxes were applied during data processing, YOLO v8 was used for mask dance pose estimation, and 130 mask dance movements were classified by combining a CNN model with YOLO v8. The results achieved an mAP50 of 0.953, an mAP50-95 of 0.596, and 70% accuracy. We expect classification performance to improve further as the training dataset grows and data quality improves.
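
A minimal sketch of the two-stage pipeline the abstract describes, assuming the `ultralytics` package and pretrained `yolov8n-pose.pt` weights; the window size, CNN head, and class handling are illustrative guesses, not the authors' setup:

```python
import torch.nn as nn
from ultralytics import YOLO

pose_model = YOLO("yolov8n-pose.pt")  # pretrained pose weights (assumed)

class DanceClassifier(nn.Module):
    """Small CNN over a sliding window of keypoint coordinates (assumed head)."""
    def __init__(self, n_classes=130, n_keypoints=17, window=32):
        super().__init__()
        # Treat a window of (x, y) keypoint pairs as a 1-D sequence.
        self.net = nn.Sequential(
            nn.Conv1d(n_keypoints * 2, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(128, n_classes))

    def forward(self, x):            # x: (batch, keypoints*2, window)
        return self.net(x)

def keypoints_of(frame):
    """Run pose estimation and return (17, 2) keypoints of the first person."""
    result = pose_model(frame, verbose=False)[0]
    return result.keypoints.xy[0]
```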

Comparative Analysis of Linear and Nonlinear Projection Techniques for the Best Visualization of Facial Expression Data (얼굴 표정 데이터의 최적의 가시화를 위한 선형 및 비선형 투영 기법의 비교 분석)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association / v.9 no.9 / pp.97-104 / 2009
  • This paper compares and analyzes methodologies for finding the optimal technique for projecting high-dimensional facial motion capture data onto a plane. The per-frame facial expression data are projected with PCA, a linear technique, and with the nonlinear techniques Isomap, MDS, CCA, Sammon's Mapping, and LLE. For each technique, we first compute the pairwise distances between the original high-dimensional facial expression frames, then distribute the frames in a two-dimensional plane so as to preserve those distance relationships as far as the technique allows. By comparing the distances among the projected two-dimensional points with the original distances, we determine which projection technique best preserves the distance relationships between frames. In this way the paper compares linear and nonlinear techniques for projecting high-dimensional facial expression data into a low-dimensional space and identifies the optimal one.
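
The comparison protocol can be sketched with scikit-learn (assumed available). Sammon's Mapping and CCA are omitted because scikit-learn does not ship them, so this is only a partial stand-in for the paper's method:

```python
# Embed high-dimensional facial-expression frames into 2-D with several
# techniques, then score how well each embedding preserves the original
# pairwise frame distances.
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr
from sklearn.decomposition import PCA
from sklearn.manifold import MDS, Isomap, LocallyLinearEmbedding

def distance_preservation(frames):
    """frames: (n_frames, n_features) facial motion capture data."""
    d_orig = pdist(frames)                  # original pairwise distances
    methods = {
        "PCA": PCA(n_components=2),
        "Isomap": Isomap(n_components=2),
        "MDS": MDS(n_components=2),
        "LLE": LocallyLinearEmbedding(n_components=2),
    }
    # Correlation of pairwise distances: closer to 1.0 = better preserved.
    return {name: pearsonr(d_orig, pdist(m.fit_transform(frames)))[0]
            for name, m in methods.items()}
```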

Dragging Body Parts in 3D Space to Direct Animated Characters (3차원 공간 상의 신체 부위 드래깅을 통한 캐릭터 애니메이션 제어)

  • Lee, Kang Hoon;Choi, Myung Geol
    • Journal of the Korea Computer Graphics Society / v.21 no.2 / pp.11-20 / 2015
  • We present a new interactive technique for directing the motion sequences of an animated character by dragging a specific body part to a desired location in a three-dimensional virtual environment via a hand motion tracking device. The motion sequences of our character are synthesized by reordering subsequences of captured motion data based on a well-known graph representation. For each new input location, our system samples the space of possible future states by unrolling the graph into a spatial search tree, and retrieves one of the states at which the dragged body part of the character gets closer to the input location. We minimize the difference between each pair of successively retrieved states, so that the user can anticipate which states will be found by varying the input location and thus quickly reach the desired states. The usefulness of our method is demonstrated through experiments with breakdance, boxing, and basketball motion data.
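
A hypothetical sketch of the search step: unroll the motion graph to a bounded depth and pick the state that brings the dragged part nearest the target, with a penalty that keeps successive picks coherent. The graph and state structure here are assumptions, not the paper's implementation:

```python
from collections import deque

def dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def pose_difference(a, b):
    # Placeholder pose metric: distance between the two states' root positions.
    return dist(a.positions["root"], b.positions["root"])

def search_states(graph, start, part, target, prev_choice=None,
                  depth=4, w_continuity=0.5):
    """graph: node -> list of successor nodes (the motion graph);
    each node carries node.positions[name] -> (x, y, z)."""
    best, best_cost = None, float("inf")
    frontier = deque([(start, 0)])
    while frontier:                   # breadth-first unrolling to `depth`
        node, d = frontier.popleft()
        cost = dist(node.positions[part], target)
        if prev_choice is not None:   # keep successive picks coherent
            cost += w_continuity * pose_difference(node, prev_choice)
        if cost < best_cost:
            best, best_cost = node, cost
        if d < depth:
            frontier.extend((nxt, d + 1) for nxt in graph[node])
    return best
```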

Development of Smart Phone App. Contents for 3D Sign Language Education (3D 수화교육 스마트폰 앱콘텐츠 개발)

  • Jung, Young Kee
    • Smart Media Journal / v.1 no.3 / pp.8-14 / 2012
  • In this paper, we develop smartphone app contents for 3D sign language to widen access to Korean sign language education for both hearing-impaired and hearing people. In particular, we propose a sign language conversion algorithm that automatically transforms the structure of Korean phrases into the structure of sign language. We also build a 3D sign language animation DB, using a motion capture system and data gloves to acquire natural motions. Finally, the UNITY 3D engine is used for real-time 3D rendering of the sign language motion. The proposed app, with a 3D sign language DB of 1,300 words, is being distributed through the iPhone and Android app stores.
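
The phrase-structure conversion might look roughly like the sketch below, which drops Korean particles and endings and falls back to fingerspelling for unknown words. The tag set (Sejong-style POS tags) and the lookup table are illustrative assumptions, not the authors' algorithm:

```python
# Korean Sign Language glosses generally drop Korean particles and verb
# endings, so a minimal converter filters tagged tokens and looks each
# remaining stem up in the sign DB.
DROP_TAGS = {"JKS", "JKO", "JX", "EF", "EC"}   # particles / endings

def to_sign_glosses(tagged_tokens, sign_db):
    """tagged_tokens: list of (stem, pos_tag); sign_db: word -> animation id."""
    glosses = []
    for stem, tag in tagged_tokens:
        if tag in DROP_TAGS:
            continue
        if stem in sign_db:                    # known sign: use its animation
            glosses.append(sign_db[stem])
        else:                                  # unknown word: fingerspell it
            glosses.extend(sign_db.get(ch, ch) for ch in stem)
    return glosses
```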


Interactive Locomotion Controller using Inverted Pendulum Model with Low-Dimensional Data (역진자 모델-저차원 모션 캡처 데이터를 이용한 보행 모션 제어기)

  • Han, KuHyun;Kim, YoungBeom;Park, Byung-Ha;Jung, Kwang-Mo;Han, JungHyun
    • Journal of Korea Multimedia Society / v.19 no.8 / pp.1587-1596 / 2016
  • This paper presents an interactive locomotion controller using motion capture data and an inverted pendulum model. Most data-driven character controllers based on motion capture data have two limitations. First, they need many example motion clips to generate realistic motion. Second, it is difficult to produce natural-looking motion when characters navigate dynamic terrain. In this paper, we present a technique that applies dimension reduction to motion capture data with the Gaussian process dynamical model (GPDM) and interpolates the low-dimensional data to produce the final motion. With the low-dimensional data, we can create realistic walking motion from only a few example motion clips. In addition, we apply the inverted pendulum model (IPM) to calculate the root trajectory, taking real-time user input on dynamic terrain into account. Our method can be used in games, virtual training, and many other real-time applications.
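
A minimal sketch of a linear inverted pendulum update for the root, under assumed pendulum height and frame time; foothold selection and the GPDM motion synthesis are not shown:

```python
# The centre of mass is accelerated away from the support point with
# magnitude g/z0 * (x - p); integrating this gives the root trajectory.
G = 9.81  # gravity, m/s^2

def ipm_step(com, com_vel, support, z0=0.9, dt=1 / 30):
    """Advance the centre of mass one frame in the ground plane.

    com, com_vel, support: (x, z) tuples in metres / metres-per-second.
    z0: constant pendulum (hip) height in metres (assumed).
    """
    acc = tuple((G / z0) * (c - s) for c, s in zip(com, support))
    vel = tuple(v + a * dt for v, a in zip(com_vel, acc))
    pos = tuple(c + v * dt for c, v in zip(com, vel))
    return pos, vel
```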

Real-Time Motion Generation Method of Humanoid Robots based on RGB-D Camera for Interactive Performance and Exhibition (인터렉티브 공연·전시를 위한 RGB-D 카메라 기반 휴머노이드 로봇의 실시간 로봇 동작 생성 방법)

  • Seo, Bohyeong;Lee, Duk-Yeon;Choi, Dongwoon;Lee, Dong-Wook
    • Journal of Broadcast Engineering / v.25 no.4 / pp.528-536 / 2020
  • As humanoid robot technology advances, the use of robots in performances is increasing, and studies are being conducted to widen that use by making robots move as naturally as humans. Motion capture technology is often employed for this, but preparing a capture session is inconvenient: IMU sensors or markers must be attached to each part of the body, and precise high-performance cameras are required. In addition, robots used in performances must respond in real time to unexpected situations or audience reactions. To address these problems, this paper proposes a real-time motion capture system that uses a number of RGB-D cameras and generates natural, human-like robot motion from the captured data.
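
One mapping step could be sketched as below: derive a joint angle from three RGB-D skeleton points and clamp it to the robot's mechanical limits. The joint names and the limit values are illustrative assumptions, not the authors' robot:

```python
import numpy as np

ELBOW_LIMITS = (0.0, 2.6)   # assumed actuator range in radians

def joint_angle(a, b, c):
    """Interior angle at b (radians) formed by 3-D points a, b, c."""
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(cosang, -1.0, 1.0)))

def left_elbow_command(skeleton):
    """skeleton: joint name -> (x, y, z) position from the RGB-D camera."""
    raw = joint_angle(skeleton["l_shoulder"], skeleton["l_elbow"],
                      skeleton["l_wrist"])
    lo, hi = ELBOW_LIMITS
    return min(max(raw, lo), hi)   # clamp to the robot's mechanical limits
```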

Development of Virtual Reality Contents for Korean Sign Language Interpretation (수화 통역을 위한 VR 콘텐츠 개발)

  • Na, Kil-Hang;Lee, Byung-Ho;Kim, Jong-Hun;Kim, Jong-Nam;Jung, Young-Kee
    • Proceedings of the HCI Society of Korea Conference / 2009.02a / pp.690-695 / 2009
  • This paper proposes a VR content system for sign language interpretation that composites sign language animation into video contents such as films, broadcasts, and animations, so that hearing- and speech-impaired viewers can understand them. To build a DB of 3D animations for the signs in a sign language dictionary, the proposed system uses a motion capture system and data gloves to generate animation as natural as a real person's. Finally, after syntactic analysis of the video content's captions or script, the system retrieves the corresponding sign language animations from the DB via sign-word captions, composites them in real time in synchronization with the existing video content, and thereby provides sign language interpretation content. The implemented VR content system was applied to an animation for children.
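
The caption-synchronized playback could be sketched as follows; the fixed nominal gloss length, data shapes, and `schedule_signs` helper are assumptions for illustration:

```python
# Each caption carries a start/end time, so the gloss animations retrieved
# from the DB are scheduled to begin with their caption and time-scaled to
# fit exactly inside the caption window.
def schedule_signs(captions, sign_db, gloss_len=0.8):
    """captions: list of (start_sec, end_sec, [gloss, ...]).
    Returns (start_sec, animation_id, playback_rate) triples."""
    timeline = []
    for start, end, glosses in captions:
        clips = [sign_db[g] for g in glosses if g in sign_db]
        if not clips:
            continue
        # Stretch or squeeze the glosses to exactly fill the caption window.
        rate = (len(clips) * gloss_len) / max(end - start, 1e-6)
        for i, clip in enumerate(clips):
            timeline.append((start + i * gloss_len / rate, clip, rate))
    return timeline
```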


Development of Frozen Shoulder Rehabilitation Robot Based On Motion Capture Data (모션 캡쳐 데이터 기반의 오십견 재활 보조용 로봇의 개발)

  • Yang, Un-Je;Kim, Jung-Yup
    • Transactions of the Korean Society of Mechanical Engineers A / v.36 no.9 / pp.1017-1026 / 2012
  • In this study, an exoskeleton-type robot is developed to assist frozen shoulder rehabilitation in a systematic and efficient manner. The robot has two main features. The first is structural: it is designed to rehabilitate both shoulders of a patient, and the three axes of the shoulder mechanism meet at one point to generate human-like ball joint motions. The second is functional and is divided into two rehabilitation modes: a joint rehabilitation mode that helps recover the shoulder's original range of motion by moving the patient's shoulder according to patterns obtained by motion capture, and a muscle rehabilitation mode that strengthens the shoulder muscles by suitably resisting the patient's motion. Through these two modes, frozen shoulder rehabilitation can be performed systematically according to the patient's condition. The development procedure is described in detail.
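
The two modes reduce to two torque laws, sketched here with illustrative gains rather than the authors' controller:

```python
# Joint rehabilitation replays a motion-captured angle trajectory with PD
# position control; muscle rehabilitation applies a torque opposing the
# patient's own motion. Gains and limits are assumptions.
def joint_rehab_torque(q, qd, q_ref, kp=40.0, kd=4.0):
    """PD control toward the captured reference angle q_ref (rad)."""
    return kp * (q_ref - q) - kd * qd

def muscle_rehab_torque(qd, damping=8.0, torque_max=15.0):
    """Resist the patient's angular velocity qd (rad/s), capped for safety."""
    torque = -damping * qd
    return max(-torque_max, min(torque_max, torque))
```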

Real-time Interactive Animation System for Low-Priced Motion Capture Sensors (저가형 모션 캡처 장비를 이용한 실시간 상호작용 애니메이션 시스템)

  • Kim, Jeongho;Kang, Daeun;Lee, Yoonsang;Kwon, Taesoo
    • Journal of the Korea Computer Graphics Society / v.28 no.2 / pp.29-41 / 2022
  • In this paper, we introduce a novel real-time interactive animation system that uses real-time motion input from Kinect, a low-cost motion-sensing device. Our system generates interaction motions between a user character and a counterpart character in real time: the user character's motion mimics the user's input motion, while the other character's motion is decided as a reaction to it. During a pre-processing step, the system analyzes the reference motion data and builds a mapping model in advance. At run time, it first generates initial poses for the two characters and then modifies them to produce plausible interacting behavior. Our experimental results show plausible interaction animations in which the user character performs a modified version of the user's input motion and the counterpart character reacts properly against it. The proposed method will be useful for developing real-time interactive animation systems that provide a more immersive experience for users.
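
A hypothetical sketch of the run-time mapping: a nearest-neighbor lookup from the user's input pose into the pre-analyzed reference data, returning paired poses for both characters. Feature extraction and the data layout are assumptions, not the paper's mapping model:

```python
import numpy as np

class InteractionMapper:
    def __init__(self, user_feats, user_poses, partner_poses):
        # Pre-processing step: store reference features and the paired poses.
        self.feats = np.asarray(user_feats)    # (n, d) pose features
        self.user_poses = user_poses           # n poses for the user character
        self.partner_poses = partner_poses     # n reacting counterpart poses

    def map(self, input_feat):
        """Return (user_pose, partner_pose) nearest to the Kinect input."""
        i = int(np.argmin(np.linalg.norm(self.feats - input_feat, axis=1)))
        return self.user_poses[i], self.partner_poses[i]
```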