• Title/Summary/Keyword: 3D skeleton data


Denoising 3D Skeleton Frames using Intersection Over Union

  • Chuluunsaikhan, Tserenpurev;Kim, Jeong-Hun;Choi, Jong-Hyeok;Nasridinov, Aziz
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.474-475 / 2021
  • The accuracy of a real-time video analysis system based on 3D skeleton data depends heavily on the quality of the data. This study proposes a methodology to distinguish noise in 3D skeleton frames using the Intersection Over Union (IOU) method. IOU is a metric that measures how much two rectangles (i.e., bounding boxes) overlap. Simply put, the method decides whether a frame is noise by comparing it with a set of valid frames. Our proposed method distinguished noise in 3D skeleton frames with an accuracy of 99%. According to these results, our proposed method can be used to track noise in 3D skeleton frames.
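
The frame-versus-valid-frames comparison described above can be sketched as follows; the box representation (x1, y1, x2, y2) and the noise-decision threshold are assumptions, since the abstract does not specify them:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_noise(frame_box, valid_boxes, threshold=0.5):
    """Hypothetical decision rule: flag the frame as noise if its skeleton
    bounding box overlaps poorly with every valid frame's box."""
    return all(iou(frame_box, vb) < threshold for vb in valid_boxes)
```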

A Study for Animation Using 3D Laser Scanned Body Data (인체 전신 레이저 스캔 데이터를 대상으로 한 인체 애니메이션 연구)

  • Yoon, Geun-Ho;Cho, Chang-Suk
    • Journal of Korea Multimedia Society / v.15 no.10 / pp.1257-1263 / 2012
  • This paper reports an implementation of an animation module using 3D body data scanned by a laser scanner. Characteristic points of the skeleton in the human body were picked up as pivot points for 3D rotation. The body data set was reconstructed as objects built in a hierarchical tree structure based on a skeleton model. To implement 3D animation of the laser-scanned body data, the vertices of the objects were connected into a skeleton structure and animated to follow dynamic patterns input by the user.
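
The hierarchical skeleton tree can be animated by accumulating each joint's rotation down the tree. A minimal 2D forward-kinematics sketch, with the node fields ("name", "length", "angle", "children") chosen for illustration rather than taken from the paper:

```python
import math

def world_positions(skeleton, parent_pos=(0.0, 0.0), parent_angle=0.0):
    """Recursively place each joint by accumulating rotations down the tree.
    skeleton: {"name": str, "length": float, "angle": radians, "children": [...]}"""
    angle = parent_angle + skeleton["angle"]
    length = skeleton["length"]
    pos = (parent_pos[0] + length * math.cos(angle),
           parent_pos[1] + length * math.sin(angle))
    positions = {skeleton["name"]: pos}
    for child in skeleton.get("children", []):
        positions.update(world_positions(child, pos, angle))
    return positions
```

Rotating a parent joint moves all of its descendants, which is the point of storing the body objects in a skeleton-based tree.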

A Research on Efficient Skeleton Retargeting Method Suitable for MetaHuman

  • Shijie Sun;Ki-Hong Kim;David-Junesok Lee
    • International Journal of Internet, Broadcasting and Communication / v.16 no.1 / pp.47-54 / 2024
  • With the rapid development of 3D animation, MetaHuman is widely used as a virtual-human creation platform in film production, game development, and VR production. Motion capture is usually used in the animation production of virtual humans. Since different motion capture solutions use different skeletons for motion recording, when the skeleton hierarchy of the recorded animation data differs from that of MetaHuman, the animation data recorded by motion capture cannot be used directly on MetaHuman; the two skeletons must be retargeted. This study explores an efficient skeleton retargeting method that maintains the accuracy of animation data by reducing the number of bone chains. In the experiments, three skeleton structures, Rokoko, Mixamo, and Xsens, were used for retargeting, to compare and analyze how well different skeleton structures adapt to the MetaHuman skeleton and to determine which structure is most compatible with it. This research provides an efficient skeleton retargeting approach for production teams creating 3D animated video content, which can significantly reduce time costs and improve work efficiency.
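
At its core, retargeting with a reduced bone chain maps each source joint to a target joint and drops joints with no counterpart. A sketch under that assumption; the joint names below are illustrative placeholders, not the actual Mixamo or MetaHuman bone names:

```python
# Hypothetical joint-name mapping from a source skeleton to a target
# skeleton; real rigs define their own bone names.
JOINT_MAP = {
    "Hips": "pelvis",
    "Spine": "spine_01",
    "LeftArm": "upperarm_l",
    "RightArm": "upperarm_r",
}

def retarget(animation_frame, joint_map=JOINT_MAP):
    """Copy each source joint's rotation onto the mapped target joint,
    dropping unmapped joints (the 'reduced bone chain' idea)."""
    return {joint_map[j]: rot for j, rot in animation_frame.items()
            if j in joint_map}
```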

Dual-Stream Fusion and Graph Convolutional Network for Skeleton-Based Action Recognition

  • Hu, Zeyuan;Feng, Yiran;Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.24 no.3 / pp.423-430 / 2021
  • Graph convolutional networks (GCNs) have achieved outstanding performance on skeleton-based action recognition. However, several problems remain in existing GCN-based methods; in particular, the low recognition rate caused by relying on a single input modality has not been effectively solved. In this article, we propose a dual-stream fusion method that combines video data and skeleton data. The two networks recognize skeleton data and video data respectively, and the probabilities of the two outputs are fused to achieve information fusion. Experiments on two large datasets, Kinetics and the NTU-RGB+D Human Action Dataset, show that our proposed method achieves state-of-the-art performance. Compared with traditional methods, the recognition accuracy is improved.
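
Fusing the output probabilities of the two streams is a late-fusion step that can be sketched as a weighted average; the equal weighting (`alpha=0.5`) is an assumption, as the abstract does not give the fusion weights:

```python
import numpy as np

def fuse_probabilities(p_skeleton, p_video, alpha=0.5):
    """Late fusion: weighted average of the two streams' class
    probabilities, renormalized to sum to 1."""
    p = alpha * np.asarray(p_skeleton) + (1 - alpha) * np.asarray(p_video)
    return p / p.sum()
```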

Realtime 3D Human Full-Body Convergence Motion Capture using a Kinect Sensor (Kinect Sensor를 이용한 실시간 3D 인체 전신 융합 모션 캡처)

  • Kim, Sung-Ho
    • Journal of Digital Convergence / v.14 no.1 / pp.189-194 / 2016
  • Recently, demand for image processing technology has been increasing as the use of equipment such as cameras, camcorders, and CCTV has become widespread. In particular, research and development on 3D imaging technology using depth cameras such as the Kinect sensor has become more active. The Kinect sensor is a high-performance camera that can acquire a 3D human skeleton structure via RGB, skeleton, and depth images in real time, frame by frame. In this paper, we develop a system that captures the motion of a 3D human skeleton structure using the Kinect sensor and stores it in the general-purpose TRC and BVH motion file formats. The system also has a function that converts captured TRC files into the BVH format. Finally, this paper confirms visually, through a motion capture data viewer, that motion data captured using the Kinect sensor is recorded correctly.
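
TRC is essentially a tab-separated table of marker positions per frame. A simplified sketch of serializing captured joint positions into TRC-style rows; the full TRC header block is omitted, and the 30 Hz rate is an assumption:

```python
def to_trc_rows(frames, marker_names, rate=30.0):
    """Serialize captured joint positions into simplified TRC-style rows
    (Frame#, Time, then X/Y/Z per marker). Data section only; the real
    TRC format also requires several header lines."""
    header = ["Frame#", "Time"] + [f"{m}_{ax}" for m in marker_names
                                   for ax in ("X", "Y", "Z")]
    rows = [header]
    for i, frame in enumerate(frames):
        row = [str(i + 1), f"{i / rate:.4f}"]
        for joint in frame:  # frame: list of (x, y, z) tuples, one per marker
            row.extend(f"{c:.3f}" for c in joint)
        rows.append(row)
    return rows
```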

Rotation Invariant 3D Star Skeleton Feature Extraction (회전무관 3D Star Skeleton 특징 추출)

  • Chun, Sung-Kuk;Hong, Kwang-Jin;Jung, Kee-Chul
    • Journal of KIISE: Software and Applications / v.36 no.10 / pp.836-850 / 2009
  • Human posture recognition has attracted tremendous attention in ubiquitous environments, the performing arts, and robot control, so many researchers in pattern recognition and computer vision have recently worked on efficient posture recognition systems. However, most existing studies are very sensitive to human variations such as rotation or translation of the body, because the feature extracted in the first step of a general posture recognition system is influenced by these variations. To alleviate these variations and improve posture recognition results, this paper presents feature extraction methods based on the 3D Star Skeleton and Principal Component Analysis (PCA) in a multi-view environment. The proposed system uses eight projection maps, a kind of depth map, as input data; the projection maps are extracted during the visual hull generation process. From these data, the system constructs the 3D Star Skeleton and extracts rotation-invariant features using PCA. In the experiments, we extract features from the 3D Star Skeleton, recognize human postures using them, and show that the proposed method is robust to human variations.
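
The PCA step can be sketched as projecting the feature vectors onto their top principal components; this is a generic PCA sketch, not the paper's specific pipeline:

```python
import numpy as np

def pca_features(X, k=2):
    """Project feature vectors (rows of X) onto the top-k principal
    components of their covariance matrix."""
    Xc = X - X.mean(axis=0)                 # center the data
    cov = np.cov(Xc, rowvar=False)          # feature covariance
    vals, vecs = np.linalg.eigh(cov)        # eigh: ascending eigenvalues
    order = np.argsort(vals)[::-1][:k]      # pick the k largest
    return Xc @ vecs[:, order]
```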

Motion classification using distributional features of 3D skeleton data

  • Woohyun Kim;Daeun Kim;Kyoung Shin Park;Sungim Lee
    • Communications for Statistical Applications and Methods / v.30 no.6 / pp.551-560 / 2023
  • Recently, there has been significant research into recognizing human activities using three-dimensional sequential skeleton data captured by the Kinect depth sensor, much of it employing deep learning models. This study introduces a novel feature selection method for such data and analyzes it using machine learning models. Due to the high-dimensional nature of the original Kinect data, effective feature extraction methods are required to address the classification challenge. In this research, we propose using the first four moments as predictors to represent the distribution of joint sequences and evaluate their effectiveness using two datasets: the exergame dataset, consisting of three activities, and the MSR daily activity dataset, composed of ten activities. The results show that the accuracy of our approach outperforms existing methods on average across different classifiers.
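
Summarizing each joint-coordinate sequence by its first four moments can be sketched as below; the exact moment definitions (population variance, standardized skewness and kurtosis) are assumptions, since the abstract does not spell them out:

```python
import numpy as np

def moment_features(joint_seq):
    """Summarize each column of a joint sequence (frames x dims) by its
    first four moments: mean, variance, skewness, kurtosis."""
    x = np.asarray(joint_seq, dtype=float)
    mean = x.mean(axis=0)
    var = x.var(axis=0)                       # population variance
    std = np.sqrt(var)
    z = (x - mean) / np.where(std > 0, std, 1)  # standardized values
    skew = (z ** 3).mean(axis=0)
    kurt = (z ** 4).mean(axis=0)              # non-excess kurtosis
    return np.concatenate([mean, var, skew, kurt])
```

The fixed-length moment vector can then be fed to any standard classifier regardless of how many frames the original sequence had.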

Optimised ML-based System Model for Adult-Child Actions Recognition

  • Alhammami, Muhammad;Hammami, Samir Marwan;Ooi, Chee-Pun;Tan, Wooi-Haw
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.2 / pp.929-944 / 2019
  • Many critical applications require accurate real-time human action recognition. However, there are many hurdles associated with capturing and pre-processing image data, calculating features, and classification, because these steps consume significant storage and computational resources. To circumvent these hurdles, this paper presents a machine learning (ML) based recognition system model that uses reduced data-structure features obtained by projecting the real 3D skeleton modality onto a virtual 2D space. The MMU VAAC dataset is used to test the proposed ML model. The results show a high accuracy rate of 97.88%, which is only slightly lower than the accuracy obtained with the original 3D modality-based features, but with a 75% reduction ratio compared to using the RGB modality. These results motivate implementing the proposed recognition model on an embedded system platform in the future.
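
One simple way to project a 3D skeleton onto a virtual 2D space is a perspective divide; this is a generic illustration under that assumption, not the paper's specific projection:

```python
import numpy as np

def project_to_2d(joints_3d, focal=1.0):
    """Perspective-project 3D joints (N, 3) onto a virtual 2D image
    plane: (x, y) scaled by focal length over depth z."""
    j = np.asarray(joints_3d, dtype=float)
    z = np.where(j[:, 2] != 0, j[:, 2], 1.0)  # guard against zero depth
    return focal * j[:, :2] / z[:, None]
```

Halving the feature dimensionality this way is what enables the storage and computation savings the abstract reports.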

User classification and location tracking algorithm using deep learning (딥러닝을 이용한 사용자 구분 및 위치추적 알고리즘)

  • Park, Jung-tak;Lee, Sol;Park, Byung-Seo;Seo, Young-ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.78-79 / 2022
  • In this paper, we propose a technique for classifying and tracking the location of each user through body-proportion analysis of the normalized skeletons of multiple users obtained using RGB-D cameras. To this end, each user's 3D skeleton is extracted from the 3D point cloud and their body-proportion information is stored. The stored body-proportion information is then compared with the body-proportion data computed from each frame, yielding a user classification and location tracking algorithm over the entire image.
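
Matching a detected skeleton to a stored user profile by body proportions can be sketched as a nearest-profile lookup; the normalized bone-length descriptor and Euclidean distance are assumptions, since the paper does not detail them here:

```python
import numpy as np

def proportion_vector(bone_lengths):
    """Normalize bone lengths so the descriptor is scale-invariant
    (insensitive to the user's distance from the camera)."""
    v = np.asarray(bone_lengths, dtype=float)
    return v / v.sum()

def identify_user(measured, stored_profiles):
    """Return the index of the stored user whose proportions are
    closest to the measured skeleton's proportions."""
    p = proportion_vector(measured)
    dists = [np.linalg.norm(p - proportion_vector(s)) for s in stored_profiles]
    return int(np.argmin(dists))
```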


Robot System Design Capable of Motion Recognition and Tracking the Operator's Motion (사용자의 동작인식 및 모사를 구현하는 로봇시스템 설계)

  • Choi, Yonguk;Yoon, Sanghyun;Kim, Junsik;Ahn, YoungSeok;Kim, Dong Hwan
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.24 no.6 / pp.605-612 / 2015
  • Three-dimensional (3D) position determination and motion recognition using a 3D depth-sensor camera are applied to a developed penguin-shaped robot, and their validity and accuracy are investigated. The robot is equipped with an Asus Xtion Pro Live as the 3D depth camera, and a sound module. Using skeleton information from the motion recognition data extracted by the camera, the robot is controlled to follow three typical mode-reactions formed by the operator's gestures. In this study, the extraction of skeleton joint information using the 3D depth camera is introduced, and the tracking performance for the operator's motions is explained.
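
Mapping operator gestures to a small set of robot reaction modes can be sketched as simple rules over skeleton joint positions; the joint names, the y-up convention, and the mode labels below are all illustrative assumptions:

```python
def select_mode(skeleton):
    """Hypothetical gesture rules: pick one of three reaction modes by
    comparing hand height (y coordinate) to head height.
    skeleton: dict of joint name -> (x, y, z) position."""
    if skeleton["right_hand"][1] > skeleton["head"][1]:
        return "follow"
    if skeleton["left_hand"][1] > skeleton["head"][1]:
        return "speak"
    return "idle"
```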