• Title/Summary/Keyword: RGB-depth camera

82 search results

Physical Function Monitoring Systems for Community-Dwelling Elderly Living Alone: A Comprehensive Review

  • Jo, Sungbae;Song, Changho
    • Physical Therapy Rehabilitation Science
    • /
    • v.11 no.1
    • /
    • pp.49-57
    • /
    • 2022
  • Objective: This study aims to conduct a comprehensive review of monitoring systems used to monitor and manage the physical function of community-dwelling elderly living alone, and to suggest future directions for unobtrusive monitoring. Design: Literature review. Methods: The importance of health-related monitoring has been emphasized due to the aging population and the novel coronavirus (COVID-19) outbreak. As the population ages and because of cultural changes, the number of single-person households among the elderly is expected to continue to increase. Elders are staying home longer, and their physical function may decline rapidly, which can hinder successful aging. Therefore, systematic elderly management must be considered. Results: Frequently used technologies to monitor elders at home include the red, green, blue (RGB) camera, accelerometer, passive infrared (PIR) sensor, wearable devices, and depth camera. Of these, considering privacy concerns and ease of use for elders, the depth camera is a technology that could be adopted at home to unobtrusively monitor the physical function of elderly living alone. The depth camera has been used to evaluate physical function during rehabilitation and has proven its efficiency. Conclusions: Unobtrusive physical monitoring systems should therefore be studied and developed in the future to monitor the physical function of community-dwelling elderly living alone.

Real-time 3D Pose Estimation of Both Human Hands via RGB-Depth Camera and Deep Convolutional Neural Networks (RGB-Depth 카메라와 Deep Convolution Neural Networks 기반의 실시간 사람 양손 3D 포즈 추정)

  • Park, Na Hyeon;Ji, Yong Bin;Gi, Geon;Kim, Tae Yeon;Park, Hye Min;Kim, Tae-Seong
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2018.10a
    • /
    • pp.686-689
    • /
    • 2018
  • 3D hand pose estimation (HPE) is a key technology for smart human-computer interfaces. This study presents a deep-learning-based hand pose estimation system that recognizes the 3D poses of both hands in real time from a single RGB-Depth camera. The system consists of four stages. First, both hands are detected and extracted from the RGB and depth images using skin detection and depth-cutting algorithms. Second, a convolutional neural network (CNN) classifier is used to distinguish the right hand from the left; it consists of three convolution layers and two fully connected layers and takes the extracted depth images as input. Third, a trained CNN regressor, composed of multiple convolutional, pooling, and fully connected layers, estimates the hand joints from the extracted left- and right-hand depth images. The CNN classifier and regressor are trained on a dataset of 22,000 depth images. Finally, the 3D pose of each hand is reconstructed from the estimated joint information. In tests, the CNN classifier distinguished the right and left hands with 96.9% accuracy, and the CNN regressor estimated the 3D hand joint positions with an average error of 8.48 mm. The proposed hand pose estimation system can be used in a variety of applications, including virtual reality (VR), augmented reality (AR), and mixed reality (MR).
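The first stage of the pipeline above, extracting the hands by depth cutting, can be sketched with NumPy; the synthetic depth image and the near/far thresholds are illustrative assumptions, not values from the paper:

```python
import numpy as np

def depth_cut(depth_mm, near=400, far=700):
    """Keep only pixels within an assumed hand depth range (in mm).

    Returns a binary mask and the bounding box of the region, mimicking
    the hand-extraction stage that precedes the CNN classifier.
    """
    mask = (depth_mm >= near) & (depth_mm <= far)
    if not mask.any():
        return mask, None
    ys, xs = np.nonzero(mask)
    box = (ys.min(), ys.max() + 1, xs.min(), xs.max() + 1)
    return mask, box

# Synthetic 8x8 depth image: background at 2000 mm, a "hand" patch at 500 mm.
depth = np.full((8, 8), 2000, dtype=np.int32)
depth[2:5, 3:6] = 500
mask, box = depth_cut(depth)
print(mask.sum(), box)  # 9 pixels inside the hand region, box (2, 5, 3, 6)
```

The cropped box would then be resized to the CNN's input resolution before classification.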

3D feature point extraction technique using a mobile device (모바일 디바이스를 이용한 3차원 특징점 추출 기법)

  • Kim, Jin-Kyum;Seo, Young-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.256-257
    • /
    • 2022
  • In this paper, we introduce a method of extracting three-dimensional feature points through the movement of a single mobile device. Using a monocular camera, 2D images are acquired as the camera moves and a baseline is estimated. Stereo matching is then performed based on feature points: feature points and descriptors are acquired and the feature points are matched. From the matched feature points, the disparity is calculated and depth values are generated. The 3D feature points are updated as the camera moves. Finally, the feature points are reset on scene changes using scene change detection. Through this process, an average of 73.5% of additional storage space can be secured in the keypoint database. Applying the proposed algorithm to the depth ground truth and RGB images of the TUM dataset, we confirmed an average distance difference of 26.88 mm compared with the 3D feature point results.

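The disparity-to-depth step in this pipeline follows standard stereo triangulation. A minimal sketch, with a hypothetical focal length and baseline standing in for the values estimated from the device's motion:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate depth Z = f * B / d for matched feature points,
    where d is the disparity in pixels, f the focal length in pixels,
    and B the baseline in meters."""
    d = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / d

# Hypothetical focal length (pixels) and baseline (meters) from device motion.
f, B = 500.0, 0.10
z = depth_from_disparity([10.0, 25.0, 50.0], f, B)
print(z)  # [5. 2. 1.] meters: larger disparity means a closer point
```

In the paper's setting the baseline comes from the estimated movement of the single mobile camera rather than from a fixed stereo rig.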

Presentation Method Using Depth Information (깊이 정보를 이용한 프레젠테이션 방법)

  • Kim, Ho-Seung;Kwon, Soon-Kak
    • Journal of Broadcast Engineering
    • /
    • v.18 no.3
    • /
    • pp.409-415
    • /
    • 2013
  • Recently, various devices have been developed for the convenience of presentations. Presentation devices that add keyboard and mouse functions to a laser pointer have become the main method. However, these devices have the drawbacks of limited actions and few supported events. In this paper, we propose a method that increases the degrees of freedom of a presentation through hand control using a depth camera. The proposed method recognizes the horizontal and vertical positions of the hand pointer and the distance between the hand and the camera from both the depth and RGB cameras, then performs a presentation event according to the location and pattern with which the hand touches the screen. Simulation results, with a camera fixed on the left side of the screen, show that nine presentation events are performed correctly.
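One way to read the touch-to-event mapping described above is as a depth test against the screen plane plus a spatial lookup. This sketch assumes a 3x3 event grid and a depth tolerance, both hypothetical interpretations of the paper's nine events:

```python
def touch_event(hand_xy, hand_depth_mm, screen_depth_mm,
                tol_mm=30, grid=(3, 3), size=(640, 480)):
    """Map a hand 'touch' (hand depth close to the screen plane) to one of
    nine presentation events on a 3x3 grid. The tolerance, grid layout,
    and image size are illustrative assumptions."""
    if abs(hand_depth_mm - screen_depth_mm) > tol_mm:
        return None  # hand is not touching the screen
    col = min(hand_xy[0] * grid[0] // size[0], grid[0] - 1)
    row = min(hand_xy[1] * grid[1] // size[1], grid[1] - 1)
    return row * grid[0] + col  # event index 0..8

print(touch_event((600, 400), 1495, 1500))  # bottom-right cell -> event 8
print(touch_event((600, 400), 1300, 1500))  # too far from the screen -> None
```

The depth tolerance is what distinguishes a deliberate touch from a hand merely hovering in front of the screen.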

Human-Computer Natural User Interface Based on Hand Motion Detection and Tracking

  • Xu, Wenkai;Lee, Eung-Joo
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.4
    • /
    • pp.501-507
    • /
    • 2012
  • Human body motion is a non-verbal channel of interaction and movement that can link the real world and the virtual world. In this paper, we present a study of a natural user interface (NUI) for human hand motion recognition using RGB color information and depth information from a Microsoft Kinect camera. To ensure that hand tracking and gesture recognition have no major dependence on the work environment, lighting, or the user's skin color, libraries for natural interaction and the Kinect device, which provides RGB images of the environment and the depth map of the scene, were used. An improved CamShift tracking algorithm is used to track hand motion; experimental results show that it outperforms the standard CamShift algorithm, with higher stability and accuracy.
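At the core of CamShift is a mean-shift iteration that moves a search window toward the centroid of a skin-probability back-projection; CamShift additionally adapts the window size, which this minimal NumPy sketch omits. The probability map here is synthetic:

```python
import numpy as np

def mean_shift(prob, window, iters=10):
    """Move a (x, y, w, h) search window to the centroid of the
    back-projection probabilities inside it, repeating until convergence.
    This is the mean-shift core that CamShift builds on."""
    x, y, w, h = window
    for _ in range(iters):
        roi = prob[y:y + h, x:x + w]
        m = roi.sum()
        if m == 0:
            break
        ys, xs = np.mgrid[0:h, 0:w]
        cx = int(round(float((xs * roi).sum() / m)))
        cy = int(round(float((ys * roi).sum() / m)))
        nx = min(max(x + cx - w // 2, 0), prob.shape[1] - w)
        ny = min(max(y + cy - h // 2, 0), prob.shape[0] - h)
        if (nx, ny) == (x, y):
            break
        x, y = nx, ny
    return (x, y, w, h)

# Skin-probability map with a bright blob; start the window off-target.
prob = np.zeros((32, 32))
prob[12:18, 14:20] = 1.0
tracked = mean_shift(prob, (8, 8, 8, 8))
print(tracked)  # window has shifted to cover the blob
```

In the paper's setting, `prob` would be the back-projection of a hand color histogram combined with the Kinect depth cue, which is what reduces sensitivity to lighting and skin color.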

Vertically Structured Camera System Implementation for Digital Holographic Service (디지털 홀로그램 서비스를 위한 수직구조 카메라 시스템 구현)

  • Koo, Ja-Myung;Lee, Yoon-Hyuk;Kim, Woo-Youl;Seo, Young-Ho;Kim, Dong-Wook
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2012.07a
    • /
    • pp.309-311
    • /
    • 2012
  • This paper proposes a method of building a system that can simply generate scene-matched RGB and depth images, which provide the object coordinates and color information needed to generate a digital hologram, the ultimate goal of 3D video processing. Using a cold mirror, whose transmittance differs between visible and infrared wavelengths, RGB and depth images are obtained from the same viewpoint. In preprocessing, a lens correction step compensates for camera distortion, a resolution resize step matches the resolutions of the two images, and the object to be rendered as a digital hologram is extracted. The extracted object is then converted into a computer-generated hologram (CGH) using a CGH algorithm.

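The two preprocessing steps named in this pipeline, lens correction and resolution matching, can be sketched as follows. The single-coefficient radial distortion model and nearest-neighbor resize are simplifying assumptions; a real pipeline would use the camera's full calibrated distortion model:

```python
import numpy as np

def radial_distort(x, y, k1):
    """One-coefficient radial model on normalized image coordinates:
    r_d = r * (1 + k1 * r^2). Lens correction inverts this mapping;
    here the forward model is shown for illustration."""
    r2 = x * x + y * y
    s = 1.0 + k1 * r2
    return x * s, y * s

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbor resize to match the RGB and depth resolutions."""
    h, w = img.shape[:2]
    ys = np.arange(out_h) * h // out_h
    xs = np.arange(out_w) * w // out_w
    return img[ys][:, xs]

xd, yd = radial_distort(0.5, 0.0, -0.1)   # barrel distortion pulls points inward
print(xd, yd)
depth = np.arange(16).reshape(4, 4)
print(resize_nearest(depth, 2, 2))        # 4x4 depth map downsampled to 2x2
```

After these steps the RGB and depth images are pixel-aligned, so the extracted object carries both the coordinates and the color needed by the CGH algorithm.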

Contactless Chroma Key System Using Gesture Recognition (제스처 인식을 이용한 비 접촉식 크로마키 시스템)

  • Jeong, Jongmyeon;Jo, HongLae;Kim, Hoyoung;Song, Sion;Lee, Junseo
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2015.07a
    • /
    • pp.159-160
    • /
    • 2015
  • This paper proposes a contactless chroma key system operated by recognizing user gestures. Depth and RGB images are captured from a Kinect camera. First, the disparity caused by the positional difference between the depth and RGB cameras is corrected; morphological operations are then applied to the depth image to remove noise, and the result is combined with the RGB image to extract the object region. The extracted object region is analyzed to recognize the position and shape of the user's hand, which is treated as a pointing device to control the chroma key system. Experiments confirmed that the contactless chroma key system operates in real time.

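The morphological noise-removal step can be sketched in NumPy as a 3x3 binary opening (erosion followed by dilation), which deletes isolated depth-noise pixels while preserving the object region. The synthetic mask is illustrative:

```python
import numpy as np

def erode(m):
    """3x3 binary erosion: a pixel survives only if its full 3x3
    neighborhood is set (zero-padded borders)."""
    h, w = m.shape
    p = np.pad(m, 1)
    out = np.ones_like(m)
    for dy in range(3):
        for dx in range(3):
            out &= p[dy:dy + h, dx:dx + w]
    return out

def dilate(m):
    """3x3 binary dilation: a pixel is set if any neighbor is set."""
    h, w = m.shape
    p = np.pad(m, 1)
    out = np.zeros_like(m)
    for dy in range(3):
        for dx in range(3):
            out |= p[dy:dy + h, dx:dx + w]
    return out

mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 2:5] = True          # object region from the depth image
mask[6, 6] = True              # isolated depth-noise pixel
opened = dilate(erode(mask))   # opening = erosion then dilation
print(opened.sum())            # noise removed, 3x3 object restored
```

The cleaned depth mask is then intersected with the RGB image to cut out the foreground for compositing.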

Depth and RGB-based Camera Pose Estimation for Capturing Volumetric Object (체적형 객체의 촬영을 위한 깊이 및 RGB 카메라 기반의 카메라 자세 추정 알고리즘)

  • Kim, Kyung-Jin;Kim, Dong-Wook;Seo, Young-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2019.06a
    • /
    • pp.123-124
    • /
    • 2019
  • This paper proposes a calibration optimization algorithm for multiple depth and RGB cameras. Estimating the pose and position of a camera is an essential step in computer vision. Existing methods compute camera parameters using the pinhole camera model, which introduces error. To address this, the extrinsic camera parameters are optimized using the actual object distances obtained from the depth camera together with a function optimization method. Registering the cameras with this algorithm yields higher-quality 3D models.

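A minimal sketch of the idea of refining an extrinsic parameter against depth-camera measurements, reduced here to a single z-translation with a closed-form least-squares solution. The paper optimizes the full extrinsics with a general function optimizer; the points and measurements below are hypothetical:

```python
import numpy as np

def refine_translation(points_cam, measured_depth):
    """Refine the camera's z-translation so that predicted point depths
    match the distances measured by the depth camera.

    Residual per point: (z_i + t) - d_i. Minimizing the sum of squares
    gives the closed form t = mean(d - z)."""
    z = np.asarray(points_cam)[:, 2]
    return float(np.mean(np.asarray(measured_depth) - z))

pts = np.array([[0.0, 0.0, 1.0],
                [0.1, 0.0, 2.0],
                [0.0, 0.1, 3.0]])      # points in the camera frame (meters)
meas = np.array([1.05, 2.05, 3.05])    # depth camera reads 5 cm farther
t = refine_translation(pts, meas)
print(t)  # ~0.05: the systematic offset the pinhole model missed
```

The same residual, fed to a nonlinear least-squares solver over all six extrinsic parameters, is the general form of the optimization the paper describes.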

Height Estimation using Kinect in the Indoor (키넥트를 이용한 실내에서의 키 추정 방법)

  • Kim, Sung-Min;Song, Jong-Kwan;Yoon, Byung-Woo;Park, Jang-Sik
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.9 no.3
    • /
    • pp.343-350
    • /
    • 2014
  • Object recognition is one of the key technologies of intelligent surveillance systems for crime prevention. Height is a piece of physical information about a person and can be important for confirming identity based on physical characteristics. In this paper, we present a method of measuring height using an RGB-depth camera, the Kinect. Given the height of the Kinect, the height of a person is estimated using the Kinect's depth information, namely the distances from the Kinect to the person's head and feet. Experiments confirm that the proposed method is effective for estimating a person's height indoors.
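The geometry implied above can be sketched with the Pythagorean theorem, assuming the person stands upright directly in front of the camera and the two depth readings are straight-line distances. The specific heights and distances are illustrative:

```python
import math

def estimate_height(cam_h, d_head, d_foot):
    """Estimate a person's height from the camera's mounting height and
    the straight-line distances to the head and the feet.

    Horizontal distance: x^2 = d_foot^2 - cam_h^2 (right triangle to the
    feet); then height = cam_h + sqrt(d_head^2 - x^2) (triangle to the head).
    """
    x2 = d_foot ** 2 - cam_h ** 2
    return cam_h + math.sqrt(d_head ** 2 - x2)

# Camera at 1.0 m; a 1.8 m person standing 2.0 m away horizontally.
d_foot = math.hypot(2.0, 1.0)   # distance from camera to the feet
d_head = math.hypot(2.0, 0.8)   # distance to the head (0.8 m above camera)
print(round(estimate_height(1.0, d_head, d_foot), 3))  # 1.8
```

This form assumes the head is above the camera; the symmetric case (head below the mounting height) subtracts the square root instead.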

Deep learning based Person Re-identification with RGB-D sensors

  • Kim, Min;Park, Dong-Hyun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.3
    • /
    • pp.35-42
    • /
    • 2021
  • In this paper, we propose a deep learning-based person re-identification method using a three-dimensional RGB-Depth Xtion2 camera, considering joint coordinates and dynamic features (velocity, acceleration). The main idea of the proposed identification methodology is to easily extract gait data such as joint coordinates and dynamic features with an RGB-D camera, and to automatically identify gait patterns through a self-designed one-dimensional convolutional neural network classifier (1D-ConvNet). Accuracy was measured by F1 score, and the influence of the dynamic features was measured by comparing against a classifier model (JC) that did not consider them. As a result, the proposed classifier model that considers the dynamic characteristics (JCSpeed) showed an F1 score about 8% higher than JC.
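The dynamic features described above, velocity and acceleration derived from joint coordinates, can be computed with finite differences over the tracked joint sequence. This sketch uses a synthetic single-joint track; frame-rate scaling is omitted for simplicity:

```python
import numpy as np

def dynamic_features(joints):
    """Derive velocity and acceleration from a (T, J, 3) array of joint
    coordinates via finite differences: the dynamic features fed to the
    1D-ConvNet alongside the raw coordinates."""
    vel = np.diff(joints, axis=0)   # (T-1, J, 3) per-frame displacement
    acc = np.diff(vel, axis=0)      # (T-2, J, 3) change of velocity
    return vel, acc

# Synthetic track: one joint moving with constant acceleration along x.
t = np.arange(5.0)
joints = np.zeros((5, 1, 3))
joints[:, 0, 0] = 0.5 * t ** 2      # x = 0.5 * a * t^2 with a = 1
vel, acc = dynamic_features(joints)
print(vel[:, 0, 0])  # velocity grows linearly
print(acc[:, 0, 0])  # acceleration is constant
```

Stacking the coordinate, velocity, and acceleration channels over time yields the 1D sequence that a 1D convolutional classifier can slide its kernels over.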