• Title/Summary/Keyword: Kinect camera

Development of Sign Language Translation System using Motion Recognition of Kinect (키넥트의 모션 인식 기능을 이용한 수화번역 시스템 개발)

  • Lee, Hyun-Suk;Kim, Seung-Pil;Chung, Wan-Young
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.14 no.4
    • /
    • pp.235-242
    • /
    • 2013
  • In this paper, a system that translates sign language through motion recognition with the Kinect camera is developed for communication between hearing-impaired or speech-impaired people and hearing people. The proposed translation algorithm is built on the core functions of Kinect, and two normalization methods, length normalization and elbow normalization, are introduced to improve the accuracy of sign language translation across different users. The sign language data are then compared in charts to show how effective these normalization methods are. The accuracy of the program is demonstrated by entering 10 databases and translating sign language ranging from simple to complex signs. In addition, the reliability of the translation is improved by applying the program to people with various body shapes and correcting the measurement errors caused by differences in body shape.
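
A minimal sketch of what length normalization of Kinect skeleton joints could look like, written in Python with NumPy. The joint names, arm-length scaling, and sample coordinates are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def length_normalize(shoulder, elbow, hand):
    """Return the hand position expressed in arm-length units relative to the shoulder."""
    shoulder, elbow, hand = map(np.asarray, (shoulder, elbow, hand))
    arm_length = np.linalg.norm(elbow - shoulder) + np.linalg.norm(hand - elbow)
    if arm_length == 0:
        return np.zeros(3)
    return (hand - shoulder) / arm_length

# Two users signing the same gesture with different arm lengths (made-up coordinates)
short_arm = length_normalize([0, 0, 0], [0.20, 0, 0], [0.40, 0.10, 0])
long_arm  = length_normalize([0, 0, 0], [0.30, 0, 0], [0.60, 0.15, 0])
print(short_arm, long_arm)   # nearly identical after normalization
```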

Hand shape recognition based on geometric feature using the convex-hull (Convex-hull을 이용한 기하학적 특징 기반의 손 모양 인식 기법)

  • Choi, In-Kyu;Yoo, Jisang
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.8
    • /
    • pp.1931-1940
    • /
    • 2014
  • In this paper, we propose a new hand shape recognition algorithm based on geometric features extracted with the convex hull from a depth image acquired by Kinect. Kinect is a camera that provides a depth image and the user's skeleton information, and it is used here to detect the hand region. In the proposed algorithm, the hand region is detected in the depth image acquired by Kinect and the convex hull of the region is computed. Boundary points caused by noise, and points unnecessary for recognition, are eliminated from the convex hull, which changes depending on the hand shape. The hand shape is then recognized from the sum of the internal angles of the polygon matched to the convex hull reconstructed from the selected boundary points. Experiments confirm that the proposed algorithm achieves a high recognition rate not only for the five hand-shape models but also for rotated cases.
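
An illustrative sketch, not the authors' pipeline: segment a hand candidate from a depth frame by a simple range threshold, compute its convex hull with OpenCV, and simplify the hull boundary. The depth range, simplification tolerance, and synthetic test frame are assumptions.

```python
import cv2          # OpenCV 4.x
import numpy as np

def hand_convex_hull(depth_mm, near_mm=400, far_mm=800):
    # Assume the hand is the object lying within a near depth range.
    mask = cv2.inRange(depth_mm, near_mm, far_mm)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)        # largest blob as the hand
    hull = cv2.convexHull(hand)
    # Simplify the hull to drop closely spaced boundary points caused by noise.
    hull = cv2.approxPolyDP(hull, 5.0, True)
    return hull.reshape(-1, 2)

# Quick smoke test on a synthetic depth frame with a square "hand" at 600 mm
depth = np.full((240, 320), 2000, np.uint16)
depth[90:150, 130:190] = 600
print(hand_convex_hull(depth))
```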

Development of Wave Height Field Measurement System Using a Depth Camera (깊이카메라를 이용한 파고장 계측 시스템의 구축)

  • Kim, Hoyong;Jeon, Chanil;Seo, Jeonghwa
    • Journal of the Society of Naval Architects of Korea
    • /
    • v.58 no.6
    • /
    • pp.382-390
    • /
    • 2021
  • The present study suggests the application of a depth camera for wave height field measurement, focusing on the calibration procedure and test setup. An Azure Kinect is used to measure the water surface elevation, with a field of view of 800 mm × 800 mm and a repetition rate of 30 Hz. In the optimal optical setup, the spatial resolution of the field of view is 288 × 320 pixels. To let the depth camera detect the water surface, tracer particles that float on the water and reflect infrared light are added. The calibration consists of wave height scaling and correction of the barrel distortion, and a polynomial regression model for image correction is established using machine learning. The measurements from the depth camera are compared with those of a capacitance-type wave height gauge and show good agreement.
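
A minimal sketch of the kind of polynomial-regression calibration described above, assuming paired readings from the depth camera and a reference wave gauge are available. The sample values and polynomial order are placeholders, not data from the paper.

```python
import numpy as np

# Placeholder paired samples: raw depth-camera readings vs. reference gauge heights (mm)
raw_depth_mm = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
gauge_mm     = np.array([11.2, 21.8, 31.9, 43.1, 54.0])

# Fit a 2nd-order polynomial mapping raw camera readings to calibrated heights.
coeffs = np.polyfit(raw_depth_mm, gauge_mm, deg=2)
calibrate = np.poly1d(coeffs)

print(calibrate(35.0))   # calibrated wave height for a new camera reading
```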

Head Tracking System Implementation Using a Depth Camera (깊이 카메라를 이용한 머리 추적 시스템 구현)

  • Ahn, Yang-Keun;Jung, Kwang-Mo
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2015.10a
    • /
    • pp.1673-1674
    • /
    • 2015
  • In this paper, we propose a method for tracking users' heads with a depth camera, regardless of the number of users. The proposed method tracks the head using only depth information, without any color information, and handles the fact that the depth-image shape of the head differs from user to user by relying on experimental data. The proposed method also has the advantage that it can track the head regardless of the type of camera used. In this paper, experiments were conducted with Microsoft's Kinect for Windows and SoftKinetic's DS311.
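
A hedged, depth-only baseline sketch in the spirit of this approach: segment foreground users by a depth threshold and take the top portion of each silhouette as a head candidate. The thresholds, blob-size filter, and head fraction are assumptions; the paper's per-user experimental modelling is not reproduced.

```python
import cv2
import numpy as np

def head_candidates(depth_mm, max_range_mm=3000, head_frac=0.25):
    # Foreground = valid depth pixels closer than max_range_mm.
    fg = ((depth_mm > 0) & (depth_mm < max_range_mm)).astype(np.uint8) * 255
    n, labels, stats, _ = cv2.connectedComponentsWithStats(fg)
    heads = []
    for i in range(1, n):                    # label 0 is the background
        x, y, w, h, area = stats[i]
        if area < 2000:                      # skip small noise blobs
            continue
        # Take the top slice of the silhouette as the head candidate box.
        heads.append((int(x), int(y), int(w), int(h * head_frac)))
    return heads
```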

Fusion System of Time-of-Flight Sensor and Stereo Cameras Considering Single Photon Avalanche Diode and Convolutional Neural Network (SPAD과 CNN의 특성을 반영한 ToF 센서와 스테레오 카메라 융합 시스템)

  • Kim, Dong Yeop;Lee, Jae Min;Jun, Sewoong
    • The Journal of Korea Robotics Society
    • /
    • v.13 no.4
    • /
    • pp.230-236
    • /
    • 2018
  • 3D depth perception has played an important role in robotics, and many sensing methods have been proposed for it. As a photodetector for 3D sensing, the single photon avalanche diode (SPAD) is attractive due to its sensitivity and accuracy. We have investigated applying a SPAD chip in our fusion system of a time-of-flight (ToF) sensor and a stereo camera. Our goal is to upsample the SPAD resolution using an RGB stereo camera. Our current SPAD ToF sensor has a resolution of only 64 x 32, whereas higher-resolution depth sensors such as Kinect V2 and Cube-Eye are available. This could be a weak point of our system, but we exploit the gap with a change of perspective: a convolutional neural network (CNN) is designed to upsample our low-resolution depth map, using the higher-resolution depth data as label data. The upsampled depth from the CNN and the stereo camera depth data are then fused using the semi-global matching (SGM) algorithm. We propose a simplified fusion method designed for embedded systems.
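
A hypothetical PyTorch sketch of a depth-upsampling CNN along these lines: a 64 x 32 SPAD depth map is upsampled by a fixed factor and refined by a few convolutions, with higher-resolution depth serving as the training label. The layer sizes, scale factor, and architecture are assumptions, not the network from the paper.

```python
import torch
import torch.nn as nn

class DepthUpsampler(nn.Module):
    def __init__(self, scale=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=False),
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),  # refined depth output
        )

    def forward(self, low_res_depth):
        return self.net(low_res_depth)

model = DepthUpsampler(scale=4)
spad_depth = torch.rand(1, 1, 32, 64)          # batch of one 64 x 32 SPAD frame
print(model(spad_depth).shape)                 # torch.Size([1, 1, 128, 256])
```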

A Method for Generation of Contour lines and 3D Modeling using Depth Sensor (깊이 센서를 이용한 등고선 레이어 생성 및 모델링 방법)

  • Jung, Hunjo;Lee, Dongeun
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.12 no.1
    • /
    • pp.27-33
    • /
    • 2016
  • In this study we propose a method for 3D landform reconstruction and object modeling that generates contour lines on the map using a depth sensor, extracting the characteristics of geological layers from the depth map. Unlike a common visual camera, the depth sensor is not affected by the intensity of illumination, so more robust contours and objects can be extracted. The algorithm suggested in this paper first extracts the characteristics of each geological layer from the depth map image and rearranges them into the proper order, then creates contour lines using Bezier curves. Using the created contour lines, 3D images are reconstructed through rendering by mapping RGB images from the visual camera. Experimental results show that the proposed method using a depth sensor can reconstruct the contour map and 3D model in real time, and that generating contours from depth data is more efficient and economical in terms of quality and accuracy.
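
A minimal sketch of the Bezier-curve step mentioned above: evaluating a cubic Bezier segment from four control points taken from one depth layer's boundary. The control points here are arbitrary illustrative values; the layer extraction and rendering stages are not reproduced.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=50):
    """Evaluate a cubic Bezier curve at n points."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Example control points for one smooth contour segment (arbitrary values)
segment = cubic_bezier([0, 0], [10, 30], [40, 30], [50, 0])
print(segment[:3])
```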

Active Shape Model-based Object Tracking using Depth Sensor (깊이 센서를 이용한 능동형태모델 기반의 객체 추적 방법)

  • Jung, Hun Jo;Lee, Dong Eun
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.9 no.1
    • /
    • pp.141-150
    • /
    • 2013
  • This study proposes a technique that separates an object using a depth sensor and tracks it with an Active Shape Model (ASM). Unlike a common visual camera, the depth sensor is not affected by the intensity of illumination, so the object can be extracted more robustly. The proposed algorithm removes the horizontal component from the initial depth map information and separates the object using the vertical component. It then applies morphological operations and labeling to perform image correction and object extraction more efficiently. By fitting an Active Shape Model to the extracted object information, the object can be tracked more robustly; the ASM is, in particular, robust to object occlusion. Compared with visual camera-based object tracking algorithms, the proposed technique using the depth sensor is more efficient and more robust at object tracking. Experimental results show that the proposed ASM-based algorithm using a depth sensor can robustly track objects in real time.
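
An illustrative sketch of the depth pre-processing portion only (thresholding, morphological clean-up, and labeling to isolate the largest object); the depth range and kernel size are assumptions, and the Active Shape Model fitting itself is not shown.

```python
import cv2
import numpy as np

def extract_object_mask(depth_mm, near_mm=500, far_mm=1500):
    mask = cv2.inRange(depth_mm, near_mm, far_mm)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)       # remove speckle noise
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n <= 1:
        return None
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])        # skip background label 0
    return (labels == largest).astype(np.uint8) * 255           # mask to initialize the ASM
```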

Infrared Image Based Human Victim Recognition for a Search and Rescue Robot (수색 구조 로봇을 위한 적외선 영상 기반 인명 인식)

  • Park, Jungkil;Lee, Geunjae;Park, Jaebyung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.4
    • /
    • pp.288-292
    • /
    • 2016
  • In this paper, we propose an infrared image based human victim recognition method for a search and rescue robot in dark environments, such as general disaster situations. To recognize a human victim, the infrared camera of an RGB-D camera, the Microsoft Kinect, is used. The contrast and brightness of the infrared image are first improved by histogram equalization, and noise in the image is removed by morphological operations and Gaussian filtering. Binarization and blob labeling are then applied to the improved image to recognize a human victim. Finally, to verify the effectiveness and feasibility of the proposed method, an experiment on human victim recognition is carried out in a dark environment.
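
A rough OpenCV sketch of the pipeline the abstract lists (histogram equalization, morphology, Gaussian filtering, binarization, blob labeling), assuming an 8-bit single-channel infrared frame; the thresholds, kernel sizes, and minimum blob area are placeholder assumptions.

```python
import cv2
import numpy as np

def find_warm_blobs(ir_frame_8u):
    eq = cv2.equalizeHist(ir_frame_8u)                      # improve contrast and brightness
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    cleaned = cv2.morphologyEx(eq, cv2.MORPH_OPEN, kernel)  # remove small bright speckles
    blurred = cv2.GaussianBlur(cleaned, (5, 5), 0)
    _, binary = cv2.threshold(blurred, 200, 255, cv2.THRESH_BINARY)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    # Keep blobs large enough to plausibly be a person; return their bounding boxes.
    return [tuple(stats[i][:4]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] > 500]
```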

A Study on User Interface for Quiz Game Contents using Gesture Recognition (제스처인식을 이용한 퀴즈게임 콘텐츠의 사용자 인터페이스에 대한 연구)

  • Ahn, Jung-Ho
    • Journal of Digital Contents Society
    • /
    • v.13 no.1
    • /
    • pp.91-99
    • /
    • 2012
  • In this paper we introduce a quiz application program that digitizes the analogue quiz game. We automate the quiz components that are normally performed manually, such as quiz proceedings, participant recognition, problem presentation, recognition of the first volunteer to raise a hand, answer judgement, score addition, and winner decision. For this automation, we obtained depth images from the Kinect camera, which has recently come into the spotlight, located the quiz participants, and recognized user-friendly predefined gestures. By analyzing the depth distribution, we detected and segmented the upper body parts and located the hand areas. We also extracted hand features and designed a decision function that classifies the hand pose as palm, fist, or other, so that a participant can select the desired choice among the presented options. The implemented quiz application was tested in real time and showed very satisfactory gesture recognition results.
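
A hypothetical sketch of a simple palm/fist decision function of the kind the abstract mentions, classifying a segmented hand mask by its solidity (contour area over convex-hull area). The thresholds and class labels are assumptions, not the paper's actual decision function.

```python
import cv2          # OpenCV 4.x
import numpy as np

def classify_hand(hand_mask_8u):
    contours, _ = cv2.findContours(hand_mask_8u, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return "none"
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand)
    solidity = cv2.contourArea(hand) / max(cv2.contourArea(hull), 1e-6)
    if solidity > 0.85:
        return "fist"       # compact blob, few concavities
    elif solidity < 0.75:
        return "palm"       # spread fingers create concavities
    return "else"
```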

People Counting System by Facial Age Group (얼굴 나이 그룹별 피플 카운팅 시스템)

  • Ko, Ginam;Lee, YongSub;Moon, Nammee
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.2
    • /
    • pp.69-75
    • /
    • 2014
  • An existing people counting system using a single overhead camera has limitations in object recognition and counting in various environments. These limitations are attributable to overlapping, occlusion, and external factors such as over-sized belongings and dramatic light changes. This paper therefore proposes a new People Counting System by Facial Age Group using two depth cameras, at overhead and frontal viewpoints, in order to improve object recognition accuracy and make people counting robust to external factors. The proposed system counts pedestrians through five processes: overhead image processing, frontal image processing, identical object recognition, facial age group classification, and in-coming/out-going counting. The proposed system was developed with C++, OpenCV, and the Kinect SDK, and a target group of 40 people (10 per age group) was set up to evaluate people counting and facial age group classification performance. The experimental results indicated approximately 98% accuracy in people counting and 74.23% accuracy in facial age group classification.
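
A schematic sketch of only the in-coming/out-going counting step, written in Python for brevity although the paper's system uses C++. It assumes overhead tracking already provides per-frame centroid positions for each person; the virtual counting line and sample track are illustrative assumptions, and the other four processes are not reproduced.

```python
def update_counts(prev_y, curr_y, line_y, counts):
    """counts is a dict like {'in': 0, 'out': 0}; y is the overhead centroid row."""
    if prev_y < line_y <= curr_y:
        counts["in"] += 1       # crossed the virtual line moving "down" (entering)
    elif curr_y <= line_y < prev_y:
        counts["out"] += 1      # crossed the virtual line moving "up" (leaving)
    return counts

counts = {"in": 0, "out": 0}
track = [100, 140, 180, 220]        # one person's centroid y positions over four frames
for prev, curr in zip(track, track[1:]):
    update_counts(prev, curr, line_y=160, counts=counts)
print(counts)                       # {'in': 1, 'out': 0}
```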