• Title/Summary/Keyword: Kinect V2

Search Results: 9

Online Monitoring System based notifications on Mobile devices with Kinect V2 (키넥트와 모바일 장치 알림 기반 온라인 모니터링 시스템)

  • Niyonsaba, Eric; Jang, Jong-Wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.6 / pp.1183-1188 / 2016
  • The Kinect sensor version 2 is a camera released by Microsoft as a computer vision device and natural user interface for game consoles such as the Xbox One. It acquires color images, depth images, audio input, and skeletal data at a high frame rate. In this paper, using the depth image, we present a surveillance system for a designated area within the Kinect's field of view. Using the computer vision library Emgu CV, an object detected in the target area is tracked, and the Kinect camera captures an RGB image and sends it to a database server. A mobile application on the Android platform was developed to notify the user that the Kinect has sensed unusual motion in the target region and to display the RGB image of the scene. The user receives the notification in real time and can react appropriately, for example when valuables are kept in the monitored area or when the area is otherwise restricted.
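
The paper implements this detection step with Emgu CV in C# on the Kinect v2 SDK; the abstract gives no code, so what follows is only a minimal Python/NumPy sketch of the underlying idea. The capture helpers get_depth_frame() and get_color_frame() are hypothetical, and the region and thresholds are assumed values.

```python
# Minimal sketch of depth-based intrusion detection in a fixed target area,
# loosely following the paper's idea (the original uses Emgu CV / C# and the
# Kinect v2 SDK). Thresholds and the ROI are illustrative assumptions.
import numpy as np

TARGET_ROI = (slice(200, 360), slice(150, 400))   # rows, cols of the monitored area
DEPTH_DELTA_MM = 80                               # depth change treated as "an object"
MIN_PIXELS = 500                                  # ignore tiny noise blobs

def detect_intrusion(background_depth, depth_frame):
    """Return True if the ROI depth differs enough from the empty-scene background."""
    roi_bg = background_depth[TARGET_ROI].astype(np.int32)
    roi_now = depth_frame[TARGET_ROI].astype(np.int32)
    valid = (roi_bg > 0) & (roi_now > 0)          # Kinect reports 0 for invalid depth
    changed = valid & (np.abs(roi_now - roi_bg) > DEPTH_DELTA_MM)
    return int(changed.sum()) > MIN_PIXELS

# usage sketch (get_depth_frame / get_color_frame are placeholder capture helpers):
# background = get_depth_frame()                  # captured once, empty scene
# while True:
#     if detect_intrusion(background, get_depth_frame()):
#         rgb = get_color_frame()                 # in the paper this image is sent to a
#         ...                                     # database server and the Android app is notified
```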

Study on object detection and distance measurement functions with Kinect for windows version 2 (키넥트(Kinect) 윈도우 V2를 통한 사물감지 및 거리측정 기능에 관한 연구)

  • Niyonsaba, Eric; Jang, Jong-Wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.6 / pp.1237-1242 / 2017
  • Computer vision is becoming more capable as new imaging sensors allow a system to better understand its surrounding environment, imitating the human visual system with artificial intelligence techniques. In this paper, we conducted experiments with the Kinect camera, a recent depth sensor, for object detection and distance measurement, two functions essential in computer vision for unmanned or manned vehicles, robots, drones, etc. The Kinect camera is used to estimate the position of objects in its field of view and to measure the distance from them to its depth sensor accurately, checking whether a detected object is a real object and ignoring pixels that are not part of it to reduce processing time. Tests showed promising results with this low-cost range sensor, which can serve object detection and distance measurement as fundamental functions for further computer vision processing.
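
The abstract describes two functions, locating an object in the depth image and measuring its distance to the sensor while discarding pixels that do not belong to the object. A minimal Python/OpenCV sketch of that idea follows; the depth range, minimum area, and connected-component approach are assumptions, not the paper's exact method.

```python
# Sketch: find the largest in-range blob in a Kinect depth frame and report its
# bounding box and median distance, ignoring pixels outside the detected object.
import cv2
import numpy as np

def locate_and_measure(depth_mm, near=500, far=4500, min_area=800):
    """Return ((x, y, w, h), distance_mm) of the largest object in range, or None."""
    in_range = ((depth_mm > near) & (depth_mm < far)).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(in_range, connectivity=8)
    if n < 2:
        return None
    # skip label 0 (background); keep the largest blob as the "real object"
    best = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    if stats[best, cv2.CC_STAT_AREA] < min_area:
        return None
    x, y = stats[best, cv2.CC_STAT_LEFT], stats[best, cv2.CC_STAT_TOP]
    w, h = stats[best, cv2.CC_STAT_WIDTH], stats[best, cv2.CC_STAT_HEIGHT]
    distance = float(np.median(depth_mm[labels == best]))   # robust object-to-sensor distance
    return (x, y, w, h), distance
```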

RGB-Depth Camera for Dynamic Measurement of Liquid Sloshing (RGB-Depth 카메라를 활용한 유체 표면의 거동 계측분석)

  • Kim, Junhee; Yoo, Sae-Woung; Min, Kyung-Won
    • Journal of the Computational Structural Engineering Institute of Korea / v.32 no.1 / pp.29-35 / 2019
  • In this paper, a low-cost dynamic measurement system using the RGB-depth camera Microsoft Kinect v2 is proposed for measuring the time-varying free-surface motion of liquid dampers used for building vibration mitigation. A series of experimental studies is conducted: performance evaluation and validation of the Kinect v2, real-time monitoring using the Kinect v2 SDK (software development kit), point cloud acquisition of the liquid free surface in 3D space, and comparison with existing video sensing technology. Using the proposed Kinect v2-based measurement system, the dynamic behavior of liquid in a laboratory-scale small tank under a wide frequency range of input excitation is experimentally analyzed.
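
One of the listed steps is point cloud acquisition of the liquid free surface in 3D space. A brief sketch of depth-to-point-cloud back-projection follows; the intrinsic parameters are typical published Kinect v2 depth-camera values used here only as placeholders, not the calibration results of the paper.

```python
# Sketch: back-project a Kinect v2 depth frame (512x424, mm) into a 3D point cloud.
import numpy as np

FX, FY = 365.0, 365.0      # focal lengths in pixels (assumed typical values)
CX, CY = 256.0, 212.0      # principal point of the 512x424 depth image (assumed)

def depth_to_point_cloud(depth_mm):
    """Return an (N, 3) array of XYZ points in metres, dropping invalid pixels."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float32) / 1000.0
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # zero depth means no measurement
```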

Face Detection Algorithm using Kinect-based Skin Color and Depth Information for Multiple Faces Detection (Kinect 디바이스에서 피부색과 깊이 정보를 융합한 여러 명의 얼굴 검출 알고리즘)

  • Yun, Young-Ji; Chien, Sung-Il
    • The Journal of the Korea Contents Association / v.17 no.1 / pp.137-144 / 2017
  • Face detection remains a challenging task under severe face pose variations and complex backgrounds. This paper proposes an effective algorithm that can detect single or multiple faces based on skin color detection and depth information. We introduce a Gaussian mixture model (GMM) for skin color detection in a color image. The depth information comes from the three-dimensional depth sensor of the Kinect V2 device and is useful for segmenting a human body from the background. A labeling process then removes non-face regions using several features. Experimental results show that the proposed face detection algorithm provides robust detection performance even under variable conditions and complex backgrounds.
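
As a rough illustration of the proposed fusion, the sketch below scores pixels with a skin-color GMM and keeps only those on the near foreground of the depth map. The color space, thresholds, and the use of scikit-learn are assumptions; it also assumes the color image has already been registered to the depth resolution.

```python
# Sketch: combine a GMM skin-colour likelihood (colour image) with a depth mask
# that keeps only the foreground person, as candidate face regions for labeling.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_skin_gmm(skin_pixels_crcb, n_components=3):
    """Fit a GMM to (N, 2) CrCb samples taken from labelled skin regions."""
    return GaussianMixture(n_components=n_components).fit(skin_pixels_crcb)

def skin_and_depth_mask(gmm, ycrcb_image, depth_mm, max_person_depth=2500, log_thresh=-8.0):
    """Boolean mask of pixels that are both skin-coloured and on the near person.

    Assumes ycrcb_image has been registered to the same resolution as depth_mm.
    """
    crcb = ycrcb_image[:, :, 1:3].reshape(-1, 2).astype(np.float64)
    skin = gmm.score_samples(crcb).reshape(depth_mm.shape) > log_thresh
    person = (depth_mm > 0) & (depth_mm < max_person_depth)
    return skin & person
```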

Specification and Limitation of ToF Cameras (ToF 카메라의 특성과 그 한계)

  • Hong, Su-Min; Ho, Yo-Sung
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2016.06a / pp.12-15 / 2016
  • Recently, the demand for 3D content has been steadily increasing. Because the quality of 3D content is strongly affected by the depth information of the scene, obtaining accurate depth information is very important. Depth acquisition methods are broadly divided into passive and active approaches; passive methods require complex computation and do not guarantee depth-map quality, so active methods are widely used. Active methods acquire depth information directly with a depth camera, typically using time-of-flight (ToF) technology. In this paper, to analyze the characteristics of depth maps captured with ToF depth cameras, we compared depth-map quality using the SR4000 depth camera and the Kinect v2 sensor over various capture environments and objects. The experiments showed that accurate depth information was difficult to obtain for materials or surfaces that reflect infrared light poorly, boundary regions, dark regions, and hair regions, and that accurate depth information could not be acquired in outdoor environments.
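
The abstract compares depth-map quality across sensors and scenes but does not state a metric. One simple, illustrative way such a comparison could be quantified is the fraction of invalid depth pixels and the temporal noise over a static scene, sketched below; this is an assumption, not the authors' actual procedure.

```python
# Sketch of two simple depth-quality indicators for a static scene:
# the ratio of invalid (zero) pixels and the per-pixel temporal noise.
import numpy as np

def depth_quality(frames_mm):
    """frames_mm: (T, H, W) stack of depth frames of a static scene, in mm."""
    frames = np.asarray(frames_mm, dtype=np.float32)
    invalid_ratio = float((frames == 0).mean())                    # holes: no depth returned
    valid = np.where(frames > 0, frames, np.nan)
    temporal_noise = float(np.nanmean(np.nanstd(valid, axis=0)))   # mean per-pixel std over time
    return invalid_ratio, temporal_noise
```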

Development on Multi-view synthesis system for producing 3D image (3D 영상 제작을 위한 다시점 영상 획득 시스템 개발)

  • Lee, Sang-Ha; Yoo, Jisang
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2016.11a / pp.89-91 / 2016
  • In this paper, we propose a system that efficiently acquires multi-view images for generating 3D video from real-world footage. Most existing systems acquire multi-view images with multiple cameras; in that case, calibration between the cameras must be performed, and depth information must be extracted through stereo matching. In the proposed system, the camera stays fixed while the object to be captured is placed on a turntable and rotated during acquisition. The camera is the Microsoft Kinect v2, which provides color and depth information simultaneously. Experiments confirmed that the proposed system generates multi-view images more efficiently than existing systems.
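
A minimal sketch of the acquisition loop described above follows; the device functions capture_color, capture_depth, and rotate_turntable are hypothetical callbacks standing in for the Kinect v2 and turntable drivers, and the number of views is an assumed value.

```python
# Sketch of the single-camera, rotating-object acquisition loop: one colour+depth
# pair is stored per turntable step while the Kinect v2 stays fixed.
def acquire_views(capture_color, capture_depth, rotate_turntable, n_views=36):
    """Capture n_views colour/depth pairs at equal turntable angles (360/n_views degrees)."""
    views = []
    step_deg = 360.0 / n_views
    for i in range(n_views):
        color = capture_color()                 # e.g. 1920x1080 RGB frame from the Kinect v2
        depth = capture_depth()                 # e.g. 512x424 depth frame in mm
        views.append({"angle": i * step_deg, "color": color, "depth": depth})
        rotate_turntable(step_deg)              # advance the object to the next viewpoint
    return views
```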

Development and Evaluation of the V-Catch Vision System

  • Kim, Dong Keun; Cho, Yongjoo; Park, Kyoung Shin
    • Journal of the Korea Society of Computer and Information / v.27 no.3 / pp.45-52 / 2022
  • A tangible sports game is an exercise game that uses sensors or cameras to track the user's body movements and create a sense of reality. Recently, VR indoor sports room systems have been installed in schools to use tangible sports games for physical activity. However, these systems primarily rely on screen-touch interaction. In this research, we developed the V-Catch Vision system, which uses AI image recognition to track user movements in three-dimensional space rather than through two-dimensional wall-touch interaction. We also conducted a usability evaluation experiment to investigate the exercise effects of the system. We evaluated quantitative exercise effects by measuring blood oxygen saturation, real-time ECG heart rate variability, and the user's body movement and joint-angle changes from the Kinect skeleton. The results showed a statistically significant increase in heart rate and an increase in the amount of body movement when using the V-Catch Vision system. In the subjective evaluation, most subjects found exercising with the system fun and satisfactory.
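
The abstract measures the amount of body movement and joint-angle change from the Kinect skeleton without giving formulas; the sketch below shows one plausible way to compute both, and is an illustrative assumption rather than the study's actual metric.

```python
# Sketch: total body movement as summed frame-to-frame joint displacement, and a
# joint angle (e.g. shoulder-elbow-wrist) from three Kinect skeleton points.
import numpy as np

def movement_amount(joint_positions):
    """joint_positions: (T, J, 3) joint XYZ per frame; returns total displacement in metres."""
    diffs = np.diff(joint_positions, axis=0)           # (T-1, J, 3) per-frame joint motion
    return float(np.linalg.norm(diffs, axis=2).sum())  # summed over joints and frames

def joint_angle(a, b, c):
    """Angle in degrees at joint b formed by the points a-b-c."""
    v1, v2 = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```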

Microsoft Kinect-based Indoor Building Information Model Acquisition (Kinect(RGB-Depth Camera)를 활용한 실내 공간 정보 모델(BIM) 획득)

  • Kim, Junhee; Yoo, Sae-Woung; Min, Kyung-Won
    • Journal of the Computational Structural Engineering Institute of Korea / v.31 no.4 / pp.207-213 / 2018
  • This paper investigates the applicability of the Microsoft Kinect, an RGB-depth camera, for constructing a 3D image and spatial information of a sensed target. The relationship between the Kinect camera image and the pixel coordinate system is formulated. Calibration of the camera provides the depth and RGB information of the target. The intrinsic parameters are calculated through a checkerboard experiment, yielding the focal length, principal point, and distortion coefficients. The extrinsic parameters, describing the relationship between the two Kinect cameras, consist of a rotation matrix and a translation vector. The 2D projection-space images are converted to 3D images, yielding spatial information based on the depth and RGB data. The measurement is verified through comparison with the length and location of the target structure in the 2D images.
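
The intrinsic-calibration step (a checkerboard experiment yielding focal length, principal point, and distortion coefficients) can be reproduced with standard tools. A short OpenCV sketch follows; the board geometry, square size, and image file names are assumptions, not the paper's setup.

```python
# Sketch of checker-board intrinsic calibration with OpenCV: K holds the focal
# length and principal point, dist holds the distortion coefficients.
import glob
import cv2
import numpy as np

BOARD = (9, 6)            # inner corners of the checker board (assumed)
SQUARE_MM = 25.0          # square size in mm (assumed)

objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE_MM

obj_pts, img_pts, size = [], [], None
for path in glob.glob("checkerboard_*.png"):          # images of the board (assumed names)
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]

if obj_pts:
    ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    print("intrinsic matrix:\n", K, "\ndistortion:", dist.ravel())
```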

An Extraction Method of Meaningful Hand Gesture for a Robot Control (로봇 제어를 위한 의미 있는 손동작 추출 방법)

  • Kim, Aram; Rhee, Sang-Yong
    • Journal of the Korean Institute of Intelligent Systems / v.27 no.2 / pp.126-131 / 2017
  • In this paper, we propose a method to extract the meaningful motion from among the various hand gestures made when giving commands to a robot. When commanding a robot, a person's hand gestures can be divided into a preparation motion, a main motion, and a finishing motion. The main motion is the meaningful one that transmits the command to the robot; the other motions are meaningless auxiliary motions needed to perform the main motion. Therefore, only the main motion must be extracted from the continuous hand gestures. In addition, people may move their hands unconsciously, and the robot must also judge such actions as meaningless. In this study, we extract human skeleton data from a depth image obtained with a Kinect v2 sensor and derive the hand-position data from the skeleton. Using a Kalman filter, we track the hand position and distinguish whether a hand motion is meaningful or meaningless, and we recognize the gesture with a hidden Markov model.
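
The tracking stage pairs a Kalman filter over the hand position with an HMM classifier. The sketch below shows only the Kalman-filter part with a constant-velocity model; the state layout and noise covariances are assumptions, and the HMM stage is omitted.

```python
# Sketch: constant-velocity Kalman filter over the 3D hand-joint position taken
# from the Kinect v2 skeleton; the smoothed velocity can feed later gesture logic.
import cv2
import numpy as np

def make_hand_kf(dt=1 / 30.0):
    """Kalman filter with state [x, y, z, vx, vy, vz] and 3D position measurements."""
    kf = cv2.KalmanFilter(6, 3)
    kf.transitionMatrix = np.eye(6, dtype=np.float32)
    kf.transitionMatrix[0, 3] = kf.transitionMatrix[1, 4] = kf.transitionMatrix[2, 5] = dt
    kf.measurementMatrix = np.hstack([np.eye(3), np.zeros((3, 3))]).astype(np.float32)
    kf.processNoiseCov = np.eye(6, dtype=np.float32) * 1e-3
    kf.measurementNoiseCov = np.eye(3, dtype=np.float32) * 1e-2
    kf.errorCovPost = np.eye(6, dtype=np.float32)
    return kf

def track(kf, hand_xyz):
    """Predict, then correct with the measured hand position; returns the smoothed velocity."""
    kf.predict()
    state = kf.correct(np.asarray(hand_xyz, dtype=np.float32).reshape(3, 1))
    return state[3:6].ravel()        # speed can help separate deliberate from idle motion
```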