• Title/Summary/Keyword: Human keypoint detection

Convolutional GRU and Attention based Fall Detection Integrating with Human Body Keypoints and DensePose

  • Yi Zheng;Cunyi Liao;Ruifeng Xiao;Qiang He
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.9 / pp.2782-2804 / 2024
  • The integration of artificial intelligence technology with medicine has evolved rapidly alongside increasing demands for quality of life. Falls remain a significant risk leading to severe injuries and fatalities, especially among the elderly, so the development and application of computer vision-based fall detection technologies have become increasingly important. In this paper, the keypoint detection algorithm ViTPose++ is first used to obtain the coordinates of human body keypoints from camera images, from which human skeletal feature maps are generated. Meanwhile, human dense feature maps are produced with the DensePose algorithm. These two types of feature maps are then fused as dual-channel inputs to the model. The convolutional gated recurrent unit (ConvGRU) is introduced to extract the frame-to-frame relevance of the falling process. To further integrate features across the spatial, temporal, and channel dimensions, a dual-channel fall detection algorithm for video streams is proposed by combining the Convolutional Block Attention Module (CBAM) with the ConvGRU. Finally, experiments on the public UR Fall Detection Dataset demonstrate that the improved ConvGRU-CBAM achieves an F1 score of 92.86% and an AUC of 95.34%.
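To make the attention step concrete, below is a minimal PyTorch sketch of a CBAM block of the kind the abstract combines with the ConvGRU; the reduction ratio, kernel size, and where the block is applied are illustrative assumptions rather than the authors' exact settings.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):                        # x: (B, C, H, W)
        avg = self.mlp(x.mean(dim=(2, 3)))       # average-pooled channel descriptor
        mx = self.mlp(x.amax(dim=(2, 3)))        # max-pooled channel descriptor
        w = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * w

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)        # (B, 1, H, W)
        mx = x.amax(dim=1, keepdim=True)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w

class CBAM(nn.Module):
    """Channel attention followed by spatial attention; in the described pipeline
    a block like this would refine the ConvGRU hidden states (an assumption here)."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))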

A Method for Body Keypoint Localization based on Object Detection using the RGB-D information (RGB-D 정보를 이용한 객체 탐지 기반의 신체 키포인트 검출 방법)

  • Park, Seohee;Chun, Junchul
    • Journal of Internet Computing and Services / v.18 no.6 / pp.85-92 / 2017
  • Recently, in the field of video surveillance, deep learning based methods have been applied to detecting a moving person in video and analyzing the behavior of the detected person. Human activity recognition, one of the fields of this intelligent image analysis technology, detects the object and then localizes body keypoints in order to recognize the behavior of the detected object. In this paper, we propose a method for body keypoint localization based on object detection using RGB-D information. First, the moving object is segmented and detected from the background using the color and depth information generated by the two cameras. The input image, generated by rescaling the detected object region using the RGB-D information, is applied to Convolutional Pose Machines (CPM) for single-person pose estimation. CPM is used to generate belief maps for 14 body parts per person and to detect body keypoints from those belief maps. The method provides an accurate object region for keypoint detection and can be extended from single-person to multi-person body keypoint localization by integrating the individual results. In the future, the detected keypoints can be used to build a human pose estimation model and contribute to the field of human activity recognition.
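As an illustration of the belief-map step described above (not the paper's code), the following sketch picks one keypoint per belief map by taking the per-channel argmax and maps it back into the coordinates of the detected person region; the array shapes and helper name are assumptions.

```python
import numpy as np

def keypoints_from_belief_maps(belief_maps, region_box):
    """belief_maps: (14, H, W) array from CPM for one person.
    region_box: (x0, y0, x1, y1) of the detected person in the full image."""
    x0, y0, x1, y1 = region_box
    n_parts, h, w = belief_maps.shape
    scale_x, scale_y = (x1 - x0) / w, (y1 - y0) / h
    keypoints = []
    for part in range(n_parts):
        flat_idx = np.argmax(belief_maps[part])
        py, px = np.unravel_index(flat_idx, (h, w))
        confidence = belief_maps[part, py, px]
        # Map belief-map coordinates back into full-image coordinates.
        keypoints.append((x0 + px * scale_x, y0 + py * scale_y, float(confidence)))
    return keypoints
```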

Hierarchical Graph Based Segmentation and Consensus based Human Tracking Technique

  • Ramachandra, Sunitha Madasi;Jayanna, Haradagere Siddaramaiah;Ramegowda, Ramegowda
    • Journal of Information Processing Systems / v.15 no.1 / pp.67-90 / 2019
  • Accurate detection, tracking, and analysis of human movement using robots and other visual surveillance systems is still a challenge, and efforts continue to make such systems robust against variation in shape, size, pose, and occlusion. Traditional detection methods used the sliding window approach, which involved scanning windows of various sizes across an image. This paper instead employs a state-of-the-art hierarchical graph based method for segmentation. It has two stages: part-level segmentation for color-consistent segments and object-level segmentation for category-consistent regions. The tracking phase is achieved with a SIFT keypoint descriptor based technique in a combined matching and tracking scheme with a validation phase. The human region in each frame is localized by having the keypoints cast votes for the center of the detected human region. Since incorrect keypoints are difficult to avoid, a consensus-based framework is used to analyze the voting behavior. The designed methodology is tested on video sequences containing 3 to 4 persons.
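A rough sketch of the center-voting idea follows: each matched SIFT keypoint casts a vote for the object center using an offset stored when the track was initialized, and the densest voting cell defines the consensus set. The bin size and the offset bookkeeping are assumptions, not the paper's implementation.

```python
import numpy as np

def vote_for_center(matched_keypoints, stored_offsets, image_shape, bin_size=16):
    """matched_keypoints: (N, 2) keypoint positions in the current frame.
    stored_offsets: (N, 2) offsets from each keypoint to the object center,
    recorded when the track was initialized."""
    h, w = image_shape
    votes = np.zeros((h // bin_size + 1, w // bin_size + 1), dtype=np.int32)
    centers = matched_keypoints + stored_offsets   # each keypoint's predicted center
    for cx, cy in centers:
        if 0 <= cx < w and 0 <= cy < h:
            votes[int(cy) // bin_size, int(cx) // bin_size] += 1
    by, bx = np.unravel_index(np.argmax(votes), votes.shape)
    # Keypoints whose vote falls in the winning cell form the consensus set.
    consensus = (centers[:, 0] // bin_size == bx) & (centers[:, 1] // bin_size == by)
    center_estimate = (bx * bin_size + bin_size / 2, by * bin_size + bin_size / 2)
    return center_estimate, consensus
```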

Research Trends and Case Study on Keypoint Recognition and Tracking for Augmented Reality in Mobile Devices (모바일 증강현실을 위한 특징점 인식, 추적 기술 및 사례 연구)

  • Choi, Heeseung;Ahn, Sang Chul;Kim, Ig-Jae
    • Journal of the HCI Society of Korea / v.10 no.2 / pp.45-55 / 2015
  • In recent years, keypoint recognition and tracking have been considered crucial tasks in many practical systems for markerless augmented reality. These technologies are widely studied in many research areas, including computer vision, robot navigation, and human-computer interaction. Moreover, due to the rapid growth of the mobile market for augmented reality applications, several effective keypoint-based matching and tracking methods have been introduced with mobile embedded systems in mind. In this paper, we therefore analyze recent research trends in keypoint-based recognition and tracking in terms of their core components: keypoint detection, description, matching, and tracking. We then present one of our own projects related to mobile augmented reality, a mobile tour guide system that recognizes and tracks tour maps in real time on mobile devices.
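As a small illustration of the detection-description-matching pipeline the survey covers, the following OpenCV sketch recognizes a registered tour-map image in a camera frame using ORB features and a RANSAC homography; ORB, the thresholds, and the file name are assumptions chosen for brevity, not the specific methods compared in the paper.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Register the reference image once (placeholder file name).
reference = cv2.imread("tour_map.png", cv2.IMREAD_GRAYSCALE)
ref_kp, ref_desc = orb.detectAndCompute(reference, None)

def recognize(frame_gray, min_matches=30):
    """Return the homography from the reference map to the camera frame,
    or None if the map is not recognized."""
    kp, desc = orb.detectAndCompute(frame_gray, None)
    if desc is None:
        return None
    matches = bf.match(ref_desc, desc)
    if len(matches) < min_matches:
        return None
    src = np.float32([ref_kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```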

2D Human Pose Estimation based on Object Detection using RGB-D information

  • Park, Seohee;Ji, Myunggeun;Chun, Junchul
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.2 / pp.800-816 / 2018
  • In recent years, video surveillance research has been able to recognize various behaviors of pedestrians and analyze the overall situation of objects by combining image analysis technology with deep learning methods. Human Activity Recognition (HAR), an important issue in video surveillance research, is the field of detecting abnormal pedestrian behavior in CCTV environments. In order to recognize human behavior, it is necessary to detect the human in the image and to estimate the pose of the detected human. In this paper, we propose a novel approach to 2D human pose estimation based on object detection using RGB-D information. By adding depth information to RGB information, which has limitations in detecting objects due to the lack of topological information, we can improve detection accuracy. Subsequently, the rescaled region of the detected object is applied to Convolutional Pose Machines (CPM), a sequential prediction structure based on a Convolutional Neural Network. We utilize CPM to generate belief maps predicting the positions of keypoints representing human body parts and to estimate the human pose by detecting 14 key body points. The experimental results show that the proposed method detects target objects robustly even under occlusion, and that 2D human pose estimation can be performed by providing an accurately detected region as the input to the CPM. As future work, we will estimate the 3D human pose by mapping the 2D coordinates of the body parts onto 3D space, which can provide useful human behavior information for HAR research.
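A hypothetical helper for the crop-and-rescale step described above is sketched below: the detected person box is cut out of the RGB frame and resized to a fixed CPM input resolution. The 368x368 input size and the normalization are common CPM conventions assumed here, not details taken from the paper.

```python
import cv2
import numpy as np

def prepare_cpm_input(rgb_frame, box, input_size=368):
    """rgb_frame: HxWx3 image; box: (x0, y0, x1, y1) from RGB-D object detection."""
    x0, y0, x1, y1 = [int(v) for v in box]
    crop = rgb_frame[y0:y1, x0:x1]
    resized = cv2.resize(crop, (input_size, input_size))
    # Normalization convention varies; centering around 0.5 is a common choice.
    return resized.astype(np.float32) / 255.0 - 0.5
```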

A Method for 3D Human Pose Estimation based on 2D Keypoint Detection using RGB-D information (RGB-D 정보를 이용한 2차원 키포인트 탐지 기반 3차원 인간 자세 추정 방법)

  • Park, Seohee;Ji, Myunggeun;Chun, Junchul
    • Journal of Internet Computing and Services / v.19 no.6 / pp.41-51 / 2018
  • Recently, in the field of video surveillance, deep learning based methods have been applied to intelligent video surveillance systems, and various events such as crime, fire, and other abnormal phenomena can be detected robustly. However, since occlusion occurs due to the loss of 3D information when the 3D real world is projected onto a 2D image, the occlusion problem must be considered in order to detect the object accurately and to estimate its pose. In this paper, we therefore detect moving objects while addressing the occlusion problem in the object detection process by adding depth information to the existing RGB information. Then, using a convolutional neural network on the detected region, the positions of the 14 keypoints of the human joint regions are predicted. Finally, to handle the self-occlusion problem that arises during pose estimation, a method for 3D human pose estimation is described that extends the estimation to 3D space using the predicted 2D keypoints and a deep neural network. In the future, the 2D and 3D pose estimation results of this research can be used as data for human behavior recognition and contribute to the development of industrial technology.
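The 2D-to-3D extension could look roughly like the following PyTorch sketch, in which a small fully connected network lifts the 14 predicted 2D keypoints to 3D joint positions; the layer widths and the choice of a plain MLP are assumptions, since the abstract only states that a deep neural network is used.

```python
import torch
import torch.nn as nn

class Lift2Dto3D(nn.Module):
    """Maps a flattened set of 2D keypoints to 3D joint coordinates."""
    def __init__(self, n_joints=14, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_joints * 2, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, n_joints * 3),
        )

    def forward(self, keypoints_2d):            # (B, 14, 2) pixel coordinates
        b = keypoints_2d.shape[0]
        out = self.net(keypoints_2d.reshape(b, -1))
        return out.reshape(b, -1, 3)            # (B, 14, 3) estimated 3D joints
```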

Fall Detection Based on 2-Stacked Bi-LSTM and Human-Skeleton Keypoints of RGBD Camera (RGBD 카메라 기반의 Human-Skeleton Keypoints와 2-Stacked Bi-LSTM 모델을 이용한 낙상 탐지)

  • Shin, Byung Geun;Kim, Uung Ho;Lee, Sang Woo;Yang, Jae Young;Kim, Wongyum
    • KIPS Transactions on Software and Data Engineering / v.10 no.11 / pp.491-500 / 2021
  • In this study, we propose a method for detecting fall behavior using Human-Skeleton Keypoints from an MS Kinect v2 RGBD camera and a 2-Stacked Bi-LSTM model. In previous studies, skeletal information was extracted from RGB images using a deep learning model such as OpenPose, and recognition was then performed using a recurrent neural network model such as LSTM or GRU. The proposed method receives skeletal information directly from the camera, extracts two time-series features, acceleration and distance, and then recognizes fall behavior using the 2-Stacked Bi-LSTM model. A central joint was obtained from major skeletal joints such as the shoulder, spine, and pelvis, and the movement acceleration and the distance from the floor of this central joint were proposed as features. The extracted features were compared across models such as Stacked LSTM and Bi-LSTM, and experiments demonstrated improved detection performance compared to existing studies based on GRU and LSTM.
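The two time-series features and the classifier might be sketched as follows; the joint indices for the central joint, the vertical axis convention, the frame rate, and the hidden size are placeholders, not the study's actual settings.

```python
import numpy as np
import torch
import torch.nn as nn

def central_joint_features(skeleton_seq, floor_y, dt=1 / 30):
    """skeleton_seq: (T, J, 3) joint positions over time. The central joint is
    taken as the mean of shoulder, spine, and pelvis joints (placeholder indices)."""
    central = skeleton_seq[:, [0, 1, 2], :].mean(axis=1)         # (T, 3)
    velocity = np.gradient(central, dt, axis=0)
    acceleration = np.linalg.norm(np.gradient(velocity, dt, axis=0), axis=1)
    distance_to_floor = np.abs(central[:, 1] - floor_y)          # assumes y is vertical
    return np.stack([acceleration, distance_to_floor], axis=1)   # (T, 2)

class StackedBiLSTM(nn.Module):
    """Two stacked bidirectional LSTM layers over the (acceleration, distance) series."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(hidden * 2, 2)      # fall / no-fall

    def forward(self, x):                       # x: (B, T, 2)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])              # classify from the last time step
```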

Multimodal Image Fusion with Human Pose for Illumination-Robust Detection of Human Abnormal Behaviors (조명을 위한 인간 자세와 다중 모드 이미지 융합 - 인간의 이상 행동에 대한 강력한 탐지)

  • Cuong H. Tran;Seong G. Kong
    • Annual Conference of KIPS / 2023.11a / pp.637-640 / 2023
  • This paper presents multimodal image fusion with human pose for detecting abnormal human behaviors under low illumination. Detecting human behaviors in low illumination conditions is challenging because of the limited visibility of the objects of interest in the scene. Multimodal image fusion combines visual information in the visible spectrum with thermal radiation information in the long-wave infrared spectrum. We propose an abnormal event detection scheme based on the multimodal fused image and on human poses, using keypoints to characterize the action of the human body. Our method assumes that human behaviors are well correlated with body keypoints such as the shoulders, elbows, wrists, and hips. In detail, we extract the human keypoint coordinates of human targets from multimodal fused videos. The coordinate values are used as inputs to train a multilayer perceptron network that classifies human behaviors as normal or abnormal. Our experiments demonstrate significant results on a multimodal imaging dataset, and the proposed model captures the complex distribution patterns of both normal and abnormal behaviors.
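A minimal sketch of the described multilayer perceptron classifier is given below: the flattened keypoint coordinates of one frame are mapped to a normal/abnormal score. The number of keypoints and the layer sizes are assumptions.

```python
import torch
import torch.nn as nn

n_keypoints = 17                                 # e.g., a COCO-style skeleton (assumed)
model = nn.Sequential(
    nn.Linear(n_keypoints * 2, 128), nn.ReLU(inplace=True),
    nn.Linear(128, 64), nn.ReLU(inplace=True),
    nn.Linear(64, 2),                            # normal vs. abnormal
)

coords = torch.rand(8, n_keypoints * 2)          # batch of flattened (x, y) pairs
logits = model(coords)                           # (8, 2) class scores
```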

Multi-resolution Fusion Network for Human Pose Estimation in Low-resolution Images

  • Kim, Boeun;Choo, YeonSeung;Jeong, Hea In;Kim, Chung-Il;Shin, Saim;Kim, Jungho
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.7 / pp.2328-2344 / 2022
  • 2D human pose estimation still faces difficulty in low-resolution images. Most existing top-down approaches scale the target human bounding box image up to a large size and feed the scaled image into the network. Due to the up-sampling, artifacts occur in low-resolution target images, and the degraded images adversely affect the accurate estimation of joint positions. To address this issue, we propose a multi-resolution input feature fusion network for human pose estimation. Specifically, the bounding box image of the target human is rescaled to multiple input images of various sizes, and the features extracted from these images are fused in the network. Moreover, we introduce a guiding channel that induces the multi-resolution input features to affect the network differently according to the resolution of the target image. We conduct experiments on the MS COCO dataset, a representative dataset for 2D human pose estimation, where our method achieves superior performance compared to the strong HRNet baseline and previous state-of-the-art methods.
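Conceptually, the multi-resolution fusion could be sketched as below: features extracted from several rescaled copies of the person crop are fused, with an extra channel encoding the input resolution standing in for the guiding channel. The backbone, fusion by summation, and the scalar resolution cue are simplifications, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiResolutionFusion(nn.Module):
    def __init__(self, out_channels=64):
        super().__init__()
        # 3 color channels plus 1 guiding channel encoding the input resolution.
        self.stem = nn.Conv2d(4, out_channels, kernel_size=3, padding=1)

    def forward(self, crop, sizes=(64, 128, 256)):
        """crop: (B, 3, H, W) person bounding-box image."""
        fused = None
        for s in sizes:
            x = F.interpolate(crop, size=(s, s), mode="bilinear", align_corners=False)
            guide = torch.full_like(x[:, :1], s / max(sizes))   # resolution cue
            feat = self.stem(torch.cat([x, guide], dim=1))
            feat = F.interpolate(feat, size=(max(sizes), max(sizes)),
                                 mode="bilinear", align_corners=False)
            fused = feat if fused is None else fused + feat      # fuse by summation
        return fused
```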

3D Human Keypoint Detection With RGB and Depth Image (RGB 이미지와 Depth 이미지를 이용한 3D 휴먼 키포인트 탐지)

  • Jeong, Keunseok;Lee, Yegi;Yoon, Kyoungro
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2021.06a / pp.239-241 / 2021
  • As COVID-19, which emerged in 2019, restricted the leisure activities of people around the world, interest in home training for health management has grown. In addition, with recent advances in computing technology, much research is under way in which tasks that previously required judging human actions by eye are handled by computers that understand human behavior through keypoint detection. Accordingly, this paper estimates 3D keypoints from RGB and depth images captured with an Azure Kinect. A 2D keypoint detector is applied to the RGB image to detect coordinates in 2D space, and these 2D coordinates are then projected onto the depth image so that the extracted depth values can be used to detect 3D keypoints.
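A small sketch of the described 3D lifting is shown below: the depth value under each 2D keypoint is looked up in the aligned depth image and back-projected with the pinhole camera model. The intrinsic parameters are placeholders for the values provided by the Azure Kinect SDK.

```python
import numpy as np

def backproject_keypoints(keypoints_2d, depth_image, fx, fy, cx, cy):
    """keypoints_2d: (N, 2) pixel coordinates; depth_image: (H, W) depth in meters,
    aligned to the RGB image."""
    points_3d = []
    for u, v in keypoints_2d:
        z = depth_image[int(round(v)), int(round(u))]   # depth under the keypoint
        x = (u - cx) * z / fx                            # pinhole back-projection
        y = (v - cy) * z / fy
        points_3d.append((x, y, z))
    return np.asarray(points_3d)
```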
