• Title/Summary/Keyword: Kinect camera

Search Results: 106

The Development of Interactive Ski-Simulation Motion Recognition System by Physics-Based Analysis (물리 모델 분석을 통한 상호 작용형 스키시뮬레이터 동작인식 시스템 개발)

  • Jin, Moon-Sub;Choi, Chun-Ho;Chung, Kyung-Ryul
    • Transactions of the KSME C: Technology and Education
    • /
    • v.1 no.2
    • /
    • pp.205-210
    • /
    • 2013
  • In this research, we developed a ski-simulation system based on a physics-based simulation model using Newton's second law of motion. Key parameters of the model, which estimates changes in the skier's trajectory, speed, and acceleration due to the skier's control of the ski plate and posture changes, were derived from a field test performed on a real ski slope. The skier's posture and motion were measured by a motion capture system composed of 13 high-speed IR cameras, and the skier's control and the pressure distribution on the ski plate were measured by acceleration and pressure sensors attached to the ski plate and ski boots. The developed ski-simulation model analyzes the user's full body and center of mass in real time using a depth camera (Microsoft Kinect) and provides feedback on force, velocity, and acceleration to the user. Through the development of this interactive ski-simulation motion recognition system, we accumulated experience and skills in physics-model-based development of sports simulators.
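
A minimal sketch of the kind of Newton's-second-law update such a simulator could run each frame is shown below; the state layout, the simple drag term, and the `edge_force` control input estimated from the Kinect posture are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def step(pos, vel, mass, slope_deg, edge_force, dt=1/30, drag=0.4):
        """One Euler step of a Newton's-second-law skier model (illustrative only).

        pos, vel   : 2D position/velocity on the slope plane [m], [m/s]
        mass       : skier mass [kg]
        slope_deg  : slope inclination [degrees]
        edge_force : lateral force from the skier's edging/posture control [N]
        """
        g = 9.81
        # Gravity component along the fall line, plus the control force across it.
        gravity = np.array([0.0, mass * g * np.sin(np.radians(slope_deg))])
        control = np.array([edge_force, 0.0])
        drag_force = -drag * np.linalg.norm(vel) * vel     # simple aerodynamic drag
        acc = (gravity + control + drag_force) / mass      # F = m * a
        vel = vel + acc * dt
        pos = pos + vel * dt
        return pos, vel, acc

In a real-time loop, `edge_force` would be re-estimated from the depth-camera posture every frame before calling `step`, and `acc` and `vel` would drive the feedback shown to the user.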

A Study on Modeling Automation of Human Engineering Simulation Using Multi Kinect Depth Cameras (여러 대의 키넥트 뎁스 카메라를 이용한 인간공학 시뮬레이션 모델링 자동화에 관한 연구)

  • Jun, Chanmo;Lee, Ju Yeon;Noh, Sang Do
    • Korean Journal of Computational Design and Engineering
    • /
    • v.21 no.1
    • /
    • pp.9-19
    • /
    • 2016
  • Applying human engineering simulation to the analysis of operators' work capability and movements during manufacturing is in high demand. However, the difficulty of modeling the digital human required for simulation makes engineers reluctant to use human simulation in their tasks. This paper addresses this problem by developing a technology that automates human modeling with multiple Kinect depth cameras. The Kinects allow us to acquire the movements of the digital human, which are essential data for implementing human engineering simulation. In this paper, we present a system for the automated modeling of the digital human. In particular, the system provides a way of generating a digital model of workers' movements and positions using multiple Kinects that cannot be generated by a single Kinect. Lastly, we verify the effects of the developed system in terms of modeling time and accuracy by applying it to four different scenarios. In conclusion, the proposed system makes it possible to generate the digital human model easily and to reduce the cost and time of human engineering simulation.
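
As a rough illustration of the multi-sensor fusion step such a system needs, the sketch below transforms each Kinect's skeleton joints into a common frame and averages them; the calibration transforms and the plain averaging are assumptions for illustration, not the paper's actual pipeline.

    import numpy as np

    def fuse_skeletons(skeletons, extrinsics):
        """Fuse the same joint set seen by several Kinects (illustrative sketch).

        skeletons  : list of (J, 3) arrays of joint positions, one per sensor,
                     each in that sensor's camera coordinates
        extrinsics : list of 4x4 transforms from each sensor frame to a common
                     world frame (assumed to come from a prior calibration step)
        """
        fused = []
        for joints, T in zip(skeletons, extrinsics):
            homog = np.hstack([joints, np.ones((joints.shape[0], 1))])  # (J, 4)
            fused.append((homog @ T.T)[:, :3])                          # into world frame
        # Average the per-sensor estimates; sensors that miss a joint could be
        # masked out here instead of averaged.
        return np.mean(np.stack(fused), axis=0)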

MultiView-Based Hand Posture Recognition Method Based on Point Cloud

  • Xu, Wenkai;Lee, Ick-Soo;Lee, Suk-Kwan;Lu, Bo;Lee, Eung-Joo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.7
    • /
    • pp.2585-2598
    • /
    • 2015
  • Hand posture recognition has played a very important role in Human Computer Interaction (HCI) and Computer Vision (CV) for many years. The challenge arises mainly from self-occlusions caused by the limited view of the camera. In this paper, a robust hand posture recognition approach based on 3D point clouds from two RGB-D sensors (Kinect) is proposed to make maximum use of the 3D information in the depth maps. Through noise reduction and registration of the two point sets obtained from the two designed views, a multi-view hand posture point cloud containing most of the 3D information can be acquired. Moreover, we use this accurate reconstruction to classify each point cloud by directly matching the normalized point set against the templates of the different classes in the dataset, which reduces training time and computation. Experimental results on a posture dataset captured by Kinect sensors (digits 1 to 10) demonstrate the effectiveness of the proposed method.
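
The sketch below illustrates the normalization and direct template-matching idea under stated assumptions: a symmetric nearest-neighbour (Chamfer-style) distance stands in for the paper's matching criterion, which the abstract does not specify.

    import numpy as np

    def normalize(cloud):
        """Center a point cloud and scale it to unit extent (illustrative)."""
        cloud = cloud - cloud.mean(axis=0)
        return cloud / np.linalg.norm(cloud, axis=1).max()

    def chamfer(a, b):
        """Symmetric nearest-neighbour distance between (N,3) and (M,3) clouds."""
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return d.min(axis=1).mean() + d.min(axis=0).mean()

    def classify(query, templates):
        """Return the label of the template cloud closest to the query cloud.

        templates : dict mapping class label -> (M, 3) template point cloud
        """
        q = normalize(query)
        return min(templates, key=lambda label: chamfer(q, normalize(templates[label])))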

Face Detection based Real-time Eye Gaze Correction Method Using a Depth Camera (거리 카메라를 이용한 얼굴 검출 기반 실시간 시선 보정 방법)

  • Jo, Hoon;Ra, Moon-Soo;Kim, Whoi-Yul;Kim, Deuk-Hwa
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2012.11a
    • /
    • pp.151-154
    • /
    • 2012
  • This paper proposes an eye-contact system between speakers that can enhance the realism of video communication. The proposed method extracts the speaker's face region from the image captured by a Kinect depth camera, transforms the extracted region so that the speaker's gaze is directed at the camera, and then composites it with the original image. Because the face region acquired from the Kinect depth camera contains various kinds of noise, a median filter and morphological operations are applied to remove the noise in the face region. To generate an image in which the speaker looks at the camera regardless of the speaker's position, the gaze-correction angle and rotation axis are obtained from the depth information provided by the Kinect. Because the gaze-corrected face region includes areas that do not exist in the original image, each pixel of the original image is organized into a triangular mesh and the corresponding areas are interpolated to produce the final gaze-corrected image. Since the proposed method selects and transforms only the eyes and the surrounding facial region, which are essential for generating an eye-contact image, it has the advantages of low image distortion and real-time processing. In addition, using the distance between the camera and the speaker, it can generate an eye-contact image adapted to the speaker's position. Experiments confirmed that, on a PC with an Intel i5 CPU, the method generates corrected images at about 35 frames per second for 320×240 images, showing that real-time processing is possible.
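
A small sketch of the two numeric steps mentioned in the abstract, denoising the depth face region and deriving the correction angle from depth, is given below; the filter sizes, the hole-filling step, and the `camera_offset_mm` parameter are assumptions, not values from the paper.

    import numpy as np
    from scipy.ndimage import median_filter, binary_closing

    def clean_face_depth(depth_roi):
        """Denoise a Kinect face-region depth patch (illustrative sketch)."""
        smoothed = median_filter(depth_roi, size=5)          # suppress speckle noise
        mask = binary_closing(smoothed > 0, iterations=2)    # fill small holes
        return np.where(mask, smoothed, 0)

    def gaze_correction_angle(eye_depth_mm, camera_offset_mm):
        """Angle (degrees) to rotate the face so the gaze hits the camera.

        eye_depth_mm     : distance from the camera to the eyes, from the depth map
        camera_offset_mm : offset between the camera and the point on the display
                           the speaker is actually looking at
        """
        return np.degrees(np.arctan2(camera_offset_mm, eye_depth_mm))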


Person Tracking by Detection of Mobile Robot using RGB-D Cameras

  • Kim, Young-Ju
    • Journal of the Korea Society of Computer and Information
    • /
    • v.22 no.12
    • /
    • pp.17-25
    • /
    • 2017
  • In this paper, we implemented a low-cost mobile robot supporting tracking-by-detection of a person using RGB-D cameras and the ROS (Robot Operating System) framework. The mobile robot was developed on the Kobuki mobile base equipped with two Kinect devices and a high-performance controller. One Kinect device was used to detect and track a single person among people in the constrained working area by successively combining point cloud filtering and clustering, a HOG classifier, and Kalman filter-based estimation; the other was used to perform SLAM-based navigation as supported by the ROS framework. In the performance evaluation, tracking-by-detection of the person was shown to run robustly in real time, and the navigation function achieved a mean distance error of less than 50 mm. The implemented mobile robot is significant in that it takes an open-source, general-purpose, and low-cost approach.
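
The abstract combines detection (point cloud clustering plus a HOG classifier) with Kalman filter-based estimation. The sketch below shows a generic constant-velocity Kalman filter for a detected person centroid; the state model and noise values are illustrative assumptions, since the paper's exact filter design is not given here.

    import numpy as np

    def make_cv_kalman(dt=1/30):
        """Constant-velocity Kalman filter matrices for a 2D person position."""
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1,  0],
                      [0, 0, 0,  1]], float)     # state transition: x, y, vx, vy
        H = np.array([[1, 0, 0, 0],
                      [0, 1, 0, 0]], float)      # we only observe x, y
        Q = np.eye(4) * 1e-2                     # process noise
        R = np.eye(2) * 5e-2                     # measurement noise (cluster centroid)
        return F, H, Q, R

    def kalman_step(x, P, z, F, H, Q, R):
        """One predict/update cycle given a detected person centroid z = [x, y]."""
        x, P = F @ x, F @ P @ F.T + Q                  # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
        x = x + K @ (z - H @ x)                        # update with the detection
        P = (np.eye(4) - K @ H) @ P
        return x, P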

Magic Mirror Fashion Coordination System using Kinect (키넥트를 이용한 매직미러 패션코디네이션 시스템)

  • Kim, Cheeyong;Kim, Mi-Ri;Kim, Jong-Chan
    • Journal of Korea Multimedia Society
    • /
    • v.17 no.11
    • /
    • pp.1374-1381
    • /
    • 2014
  • With the popularization of computers and the development of IT, digital technology is causing dramatic changes across human life. Increased profits in the fashion industry have a significant impact on the industry as a whole, and studies abroad have used digital technology to develop consumer-oriented, higher value-added fashion products, including clothing. In this paper, we propose a system in which a user standing in front of a display sees his or her body, captured by a depth camera, coordinated with a variety of costumes and fashion concepts through a magic mirror. The system improves user convenience and can be used as a suitable way to shop for clothing in the shortest time, and it is expected to advance the personalized fashion content industry through enhanced interaction.
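
One small piece of such a magic-mirror overlay, anchoring and scaling a garment image to the shoulder joints reported by the depth camera, might look like the sketch below; the joint names, margin factor, and pixel-space placement are illustrative assumptions rather than the paper's design.

    import numpy as np

    def place_garment(left_shoulder, right_shoulder, garment_w, garment_h):
        """Compute where to draw a garment image over the mirrored body (sketch).

        left_shoulder, right_shoulder : (x, y) joint positions in screen pixels,
                                        e.g. projected from Kinect skeleton joints
        garment_w, garment_h          : source garment image size in pixels
        Returns the top-left corner and scale so the garment spans the shoulders.
        """
        left_shoulder = np.asarray(left_shoulder, float)
        right_shoulder = np.asarray(right_shoulder, float)
        shoulder_px = np.linalg.norm(right_shoulder - left_shoulder)
        scale = (shoulder_px * 1.2) / garment_w      # 20% margin around the shoulders
        center = (left_shoulder + right_shoulder) / 2
        top_left = center - np.array([garment_w, 0.0]) * scale / 2
        return top_left, scale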

Jitter Correction of the Face Motion Capture Data for 3D Animation

  • Lee, Junsang;Han, Soowhan;Lee, Imgeun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.20 no.9
    • /
    • pp.39-45
    • /
    • 2015
  • Along with advances in digital technology, various methods have been adopted for capturing 3D animation data. In the 3D animation production market in particular, motion capture systems are widely used to make films, games, and animation content. The technique quickly tracks the movements of an actor and translates the data for use as the animated character's motion. Thus animated characters are able to mimic natural motion, gestures, and even facial expressions. However, conventional motion capture systems require demanding conditions in terms of space, lighting, the number of cameras, and so on. Furthermore, the data acquired from a motion capture system is frequently corrupted by noise, drift, and the surrounding environment. In this paper, we introduce post-production techniques for stabilizing the jitter of motion capture data from a low-cost, handy system based on Kinect.
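
As one plausible post-production smoother for such data, the sketch below low-pass filters a joint trajectory with a Savitzky-Golay filter; this is an assumption standing in for the paper's actual jitter-correction method, which the abstract does not detail.

    import numpy as np
    from scipy.signal import savgol_filter

    def smooth_joint_track(track, window=9, order=2):
        """Suppress frame-to-frame jitter in one joint's capture data (sketch).

        track : (T, 3) array of a joint's positions over T frames from Kinect.
        """
        if len(track) < window:
            return track
        # Fit a low-order polynomial in a sliding window along the time axis.
        return savgol_filter(track, window_length=window, polyorder=order, axis=0)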

Human Action Recognition Using Deep Data: A Fine-Grained Study

  • Rao, D. Surendra;Potturu, Sudharsana Rao;Bhagyaraju, V
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.6
    • /
    • pp.97-108
    • /
    • 2022
  • Video-assisted human action recognition [1] is one of the most active fields in computer vision research. Since the depth data [2] obtained by Kinect cameras has more benefits than traditional RGB data, research on human action detection has recently increased because of the Kinect camera. In this article, we conducted a systematic study of strategies for recognizing human activity based on depth data. All methods are grouped into depth map-based and skeleton-based strategies, and a comparison with some of the more traditional strategies is also covered. We then examine the specifics of different depth action databases and provide a straightforward distinction between them. Finally, we discuss the advantages and disadvantages of depth-based and skeleton-based techniques.

Development non-smoking billboard using augmented reality function (증강현실기능을 이용한 금연 광고판 개발)

  • Hong, Jeong-Soo;Lee, Jin-Dong;Yun, Yong-Gyu;Yoo, Jeong-Ki
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2016.10a
    • /
    • pp.274-276
    • /
    • 2016
  • Recently, due to the increase in tobacco users, many problems have arisen. Smoking not only in public places but also indoors harms non-smokers. Smoking booths have been installed, but their quality is considerably low and purification devices are often not installed correctly, which harms the people around the booths. In this paper, we introduce an "Augmented Reality Billboard" that helps smokers effectively recognize anti-smoking warning images and health warning messages; a Kinect camera sensor and augmented reality (AR) functions are used to recognize a person's motion and map it to the corresponding coordinate values.
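
A minimal sketch of mapping a Kinect skeleton joint to billboard screen coordinates, so an AR overlay can follow the person, is given below; the pinhole projection and the approximate Kinect field of view are assumptions, as the abstract does not describe the actual mapping.

    import math

    def joint_to_screen(joint_xyz, screen_w, screen_h, fov_x_deg=57.0, fov_y_deg=43.0):
        """Project a Kinect skeleton joint (metres, camera frame) to screen pixels.

        An approximate Kinect field of view (~57 x 43 degrees) is assumed here.
        """
        x, y, z = joint_xyz
        u = 0.5 + x / (2.0 * z * math.tan(math.radians(fov_x_deg) / 2.0))
        v = 0.5 - y / (2.0 * z * math.tan(math.radians(fov_y_deg) / 2.0))
        return int(u * screen_w), int(v * screen_h)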


Fall Detection Based on 2-Stacked Bi-LSTM and Human-Skeleton Keypoints of RGBD Camera (RGBD 카메라 기반의 Human-Skeleton Keypoints와 2-Stacked Bi-LSTM 모델을 이용한 낙상 탐지)

  • Shin, Byung Geun;Kim, Uung Ho;Lee, Sang Woo;Yang, Jae Young;Kim, Wongyum
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.11
    • /
    • pp.491-500
    • /
    • 2021
  • In this study, we propose a method for detecting fall behavior using MS Kinect v2 RGBD camera-based human-skeleton keypoints and a 2-Stacked Bi-LSTM model. In previous studies, skeletal information was extracted from RGB images using a deep learning model such as OpenPose, and recognition was then performed using a recurrent neural network model such as an LSTM or GRU. The proposed method receives skeletal information directly from the camera, extracts two time-series features, acceleration and distance, and then recognizes fall behavior using the 2-Stacked Bi-LSTM model. A central joint was obtained from major skeleton points such as the shoulders, spine, and pelvis, and the movement acceleration and the distance from the floor were proposed as features of the central joint. The extracted features were evaluated with models such as Stacked LSTM and Bi-LSTM, and experiments demonstrated improved detection performance compared to existing GRU- and LSTM-based studies.
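
A sketch of the two proposed features, the central joint's movement acceleration and its distance from the floor, is given below; the joint averaging, the up-axis convention, and the frame rate are assumptions made for illustration, not details confirmed by the paper.

    import numpy as np

    def central_joint_features(joints, floor_y=0.0, dt=1/30):
        """Extract the two time-series features described above (sketch only).

        joints  : (T, K, 3) array of skeleton keypoints over T frames, where the
                  K selected joints are e.g. the shoulders, spine, and pelvis
        floor_y : height of the floor plane in the camera/world frame (assumed)
        """
        center = joints.mean(axis=1)                     # central joint per frame, (T, 3)
        velocity = np.gradient(center, dt, axis=0)
        acceleration = np.linalg.norm(np.gradient(velocity, dt, axis=0), axis=1)
        floor_distance = center[:, 1] - floor_y          # assumes y is the up axis
        # (T, 2) feature sequence that a 2-Stacked Bi-LSTM could consume.
        return np.stack([acceleration, floor_distance], axis=1)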