• Title/Summary/Keyword: DensePose


Convolutional GRU and Attention based Fall Detection Integrating with Human Body Keypoints and DensePose

  • Yi Zheng;Cunyi Liao;Ruifeng Xiao;Qiang He
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.9
    • /
    • pp.2782-2804
    • /
    • 2024
  • The integration of artificial intelligence technology with medicine has evolved rapidly, alongside increasing demands for quality of life. However, falls remain a significant risk leading to severe injuries and fatalities, especially among the elderly. The development and application of computer vision-based fall detection technologies have therefore become increasingly important. In this paper, the keypoint detection algorithm ViTPose++ is first used to obtain the coordinates of human body keypoints from camera images, and human skeletal feature maps are generated from this keypoint coordinate information. Meanwhile, human dense feature maps are produced based on the DensePose algorithm. These two types of feature maps are then fused as dual-channel inputs to the model. A convolutional gated recurrent unit (ConvGRU) is introduced to capture frame-to-frame correlations during a fall. To further integrate features across three dimensions (spatio-temporal-channel), a dual-channel fall detection algorithm based on video streams is proposed by combining the Convolutional Block Attention Module (CBAM) with the ConvGRU. Finally, experiments on the public UR Fall Detection Dataset demonstrate that the improved ConvGRU-CBAM achieves an F1 score of 92.86% and an AUC of 95.34%.
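The ConvGRU recurrence at the core of the method above can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the gate convolutions are simplified to 1×1 kernels (per-pixel linear maps), and the weight shapes, variable names, and two-channel toy input (standing in for the skeletal and DensePose maps) are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def convgru_step(x, h, Wz, Wr, Wh):
    """One ConvGRU update. x, h: (C, H, W) feature maps.
    Wz, Wr, Wh: (C, 2C) weights applied per pixel, i.e. the
    convolutions are reduced to 1x1 kernels for this sketch."""
    xh = np.concatenate([x, h], axis=0)               # (2C, H, W)
    z = sigmoid(np.einsum('oc,chw->ohw', Wz, xh))     # update gate
    r = sigmoid(np.einsum('oc,chw->ohw', Wr, xh))     # reset gate
    xrh = np.concatenate([x, r * h], axis=0)
    h_tilde = np.tanh(np.einsum('oc,chw->ohw', Wh, xrh))
    return (1.0 - z) * h + z * h_tilde                # new hidden state

# toy run: dual-channel input (skeleton map + dense map), 4x4 frames
rng = np.random.default_rng(0)
C, H, W = 2, 4, 4
Wz, Wr, Wh = (rng.standard_normal((C, 2 * C)) * 0.1 for _ in range(3))
h = np.zeros((C, H, W))
for _ in range(5):                                    # 5 video frames
    h = convgru_step(rng.standard_normal((C, H, W)), h, Wz, Wr, Wh)
```

The hidden state carries frame-to-frame context through the gates; in the paper this state would then pass through CBAM-style channel and spatial attention before classification.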

Enhancing 3D Excavator Pose Estimation through Realism-Centric Image Synthetization and Labeling Technique

  • Tianyu Liang;Hongyang Zhao;Seyedeh Fatemeh Saffari;Daeho Kim
    • International conference on construction engineering and project management
    • /
    • 2024.07a
    • /
    • pp.1065-1072
    • /
    • 2024
  • Previous approaches to 3D excavator pose estimation via synthetic data training utilized a single virtual excavator model, low polygon objects, relatively poor textures, and few background objects, which led to reduced accuracy when the resulting models were tested on differing excavator types and more complex backgrounds. To address these limitations, the authors present a realism-centric synthetization and labeling approach that synthesizes results with improved image quality, more detailed excavator models, additional excavator types, and complex background conditions. Additionally, the data generated includes dense pose labels and depth maps for the excavator models. Utilizing the realism-centric generation method, the authors achieved significantly greater image detail, excavator variety, and background complexity for potentially improved labeling accuracy. The dense pose labels, featuring fifty points instead of the conventional four to six, could allow inferences to be made from unclear excavator pose estimates. The synthesized depth maps could be utilized in a variety of DNN applications, including multi-modal data integration and object detection. Our next step involves training and testing DNN models that would quantify the degree of accuracy enhancement achieved by increased image quality, excavator diversity, and background complexity, helping lay the groundwork for broader application of synthetic models in construction robotics and automated project management.

Photorealistic Real-Time Dense 3D Mesh Mapping for AUV (자율 수중 로봇을 위한 사실적인 실시간 고밀도 3차원 Mesh 지도 작성)

  • Jungwoo Lee;Younggun Cho
    • The Journal of Korea Robotics Society
    • /
    • v.19 no.2
    • /
    • pp.188-195
    • /
    • 2024
  • This paper proposes a photorealistic real-time dense 3D mapping system that utilizes a neural network-based image enhancement method and a mesh-based map representation. Because of the characteristics of the underwater environment, where problems such as haze and low contrast occur, conventional simultaneous localization and mapping (SLAM) methods are hard to apply. At the same time, Autonomous Underwater Vehicles (AUVs) are computationally constrained. In this paper, we utilize a neural network-based image enhancement method to improve pose estimation and mapping quality, and apply a sliding window-based mesh expansion method to enable lightweight, fast, and photorealistic mapping. To validate our results, we utilize real-world and indoor synthetic datasets. We performed qualitative validation with the real-world dataset and quantitative validation by modeling images from the indoor synthetic dataset as underwater scenes.

Onboard dynamic RGB-D simultaneous localization and mapping for mobile robot navigation

  • Canovas, Bruce;Negre, Amaury;Rombaut, Michele
    • ETRI Journal
    • /
    • v.43 no.4
    • /
    • pp.617-629
    • /
    • 2021
  • Although current visual simultaneous localization and mapping (SLAM) algorithms provide highly accurate tracking and mapping, most are too heavy to run live on embedded devices. In addition, the maps they produce are often unsuitable for path planning. To mitigate these issues, we propose a completely closed-loop online dense RGB-D SLAM algorithm targeting autonomous indoor mobile robot navigation tasks. The proposed algorithm runs live on an NVIDIA Jetson board embedded on a two-wheel differential-drive robot. It exhibits lightweight three-dimensional mapping, room-scale consistency, accurate pose tracking, and robustness to moving objects. Further, we introduce a navigation strategy based on the proposed algorithm. Experimental results demonstrate the robustness of the proposed SLAM algorithm, its computational efficiency, and its benefits for on-the-fly navigation while mapping.

Building a 3D Morphable Face Model using Finding Semi-automatic Dense Correspondence (반자동적인 대응점 찾기를 이용한 3차원 얼굴 모델 생성)

  • Choi, In-Ho;Cho, Sun-Young;Kim, Dai-Jin
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.14 no.7
    • /
    • pp.723-727
    • /
    • 2008
  • 2D face analysis has inherent limitations: it is sensitive to pose and illumination. For this reason, although many researchers have tried to study 3D face analysis and processing, much of that research could not proceed because of low computing performance and the absence of high-speed 3D scanners. Owing to recent improvements in computing performance, however, advanced 3D face research is now underway. In this paper, we propose a method of building a 3D morphable face model that successfully deals with the dense correspondence problem.

Keypoints-Based 2D Virtual Try-on Network System

  • Pham, Duy Lai;Ngyuen, Nhat Tan;Chung, Sun-Tae
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.2
    • /
    • pp.186-203
    • /
    • 2020
  • Image-based virtual try-on systems, which fit a target clothes image onto a model person image, are among the most promising solutions for virtual fitting and have thus attracted considerable research effort. In many cases, current solutions fail to achieve a natural-looking virtual fitted image in which the target clothes are transferred onto the body area of a model person of any shape and pose while keeping clothes context such as texture, text, and logos free of distortion and artifacts. In this paper, we propose a new, improved image-based virtual try-on network system based on keypoints, which we name KP-VTON. The proposed KP-VTON first detects keypoints in the target clothes and reliably predicts keypoints in the clothes of a model person image by utilizing dense human pose estimation. Then, through a TPS transformation calculated by utilizing the keypoints as control points, the warped target clothes image, matched to the body area where the target clothes are worn, is obtained. Finally, a new try-on module adopting Attention U-Net is applied to handle more detailed synthesis of the virtual fitted image. Extensive experiments on a well-known dataset show that the proposed KP-VTON outperforms state-of-the-art virtual try-on systems.
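The TPS warping step described above, fitting a thin-plate spline that maps garment keypoints onto the keypoints predicted on the person, can be sketched in NumPy. The kernel scaling, the small constant inside the log, and the toy control points are illustrative assumptions; the paper's actual image-warping pipeline is not reproduced here.

```python
import numpy as np

def tps_fit(src, dst):
    """Fit a 2D thin-plate spline mapping src control points to dst.
    src, dst: (N, 2) arrays. Returns kernel weights w and affine part a."""
    n = len(src)
    d = np.linalg.norm(src[:, None] - src[None, :], axis=-1)
    K = np.where(d > 0, d**2 * np.log(d**2 + 1e-12), 0.0)  # U(r) ~ r^2 log r^2
    P = np.hstack([np.ones((n, 1)), src])                   # affine terms
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.vstack([dst, np.zeros((3, 2))])
    sol = np.linalg.solve(A, b)                             # interpolation system
    return sol[:n], sol[n:]                                 # w: (N,2), a: (3,2)

def tps_apply(pts, src, w, a):
    """Warp arbitrary points with the fitted spline."""
    d = np.linalg.norm(pts[:, None] - src[None, :], axis=-1)
    U = np.where(d > 0, d**2 * np.log(d**2 + 1e-12), 0.0)
    return U @ w + np.hstack([np.ones((len(pts), 1)), pts]) @ a

# made-up control points: garment keypoints -> predicted person keypoints
src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.5]])
dst = src + np.array([[0.1, 0.], [0., 0.1], [0.05, 0.], [0., 0.], [0.02, 0.03]])
w, a = tps_fit(src, dst)
warped = tps_apply(src, src, w, a)   # control points should map exactly
```

Because TPS is an exact interpolant, the fitted spline reproduces the control-point correspondences; every other garment pixel is warped smoothly by the same `tps_apply` call.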

UV Mapping Based Pose Estimation of Furniture Parts in Assembly Manuals (UV-map 기반의 신경망 학습을 이용한 조립 설명서에서의 부품의 자세 추정)

  • Kang, Isaac;Cho, Nam Ik
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2020.07a
    • /
    • pp.667-670
    • /
    • 2020
  • Recently, fields such as augmented reality and robotics have come to require not only object localization but also estimation of object pose. Because datasets containing object pose information are relatively scarce compared with those containing only location information, it is difficult to exploit neural network architectures, but several machine learning-based pose estimation algorithms have recently appeared. In this paper, based on the structure of the Dense 6D Pose Object detector (DPOD) [11], we estimate the poses of furniture parts drawn in furniture assembly manuals. DPOD [11] takes an RGB image as input, estimates the pixels belonging to the region of the object whose pose is to be estimated, and, for each pixel in that region, estimates the UV-map value of the object's 3D model. Once as many 2D-3D correspondences as pixels have been generated, the transformation matrix between the object in the RGB image and the object's 3D model is obtained through the RANSAC and PnP algorithms. In this paper, the network was trained on RGB images obtained by projecting 3D models of furniture parts into 2D for 24 predefined pose candidates, and at evaluation time the poses of furniture parts in actual assembly manuals were estimated. Experiments on the IKEA Stefan chair assembly manual yielded an ADD score of 100%, and when an estimate was counted as correct if it was closest to the ground-truth pose among the pose candidates, an accuracy of 100% was obtained. Using the proposed network together with an object detection network that locates furniture parts in the assembly manual and a retrieval network that identifies the kind of each part, the pose of a furniture part can ultimately be estimated.
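The final step of the DPOD pipeline described above recovers a pose from per-pixel 2D-3D correspondences. A plain DLT estimate of the 3×4 projection matrix illustrates the core of that stage; this is a simplified sketch, with RANSAC, camera calibration, and the decomposition into rotation and translation omitted, and the synthetic camera and points made up for the check.

```python
import numpy as np

def dlt_projection(X, x):
    """Estimate a 3x4 projection matrix P from n >= 6 correspondences.
    X: (n, 3) 3D model points, x: (n, 2) image points (DLT, no RANSAC)."""
    n = len(X)
    Xh = np.hstack([X, np.ones((n, 1))])          # homogeneous 3D points
    A = np.zeros((2 * n, 12))
    for i in range(n):
        A[2 * i, 0:4] = Xh[i]                     # P1.Xh - u * P3.Xh = 0
        A[2 * i, 8:12] = -x[i, 0] * Xh[i]
        A[2 * i + 1, 4:8] = Xh[i]                 # P2.Xh - v * P3.Xh = 0
        A[2 * i + 1, 8:12] = -x[i, 1] * Xh[i]
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)                   # right null-vector of A

def project(P, X):
    Xh = np.hstack([X, np.ones((len(X), 1))])
    x = Xh @ P.T
    return x[:, :2] / x[:, 2:3]                   # perspective division

# synthetic check: recover a known camera from 8 noiseless correspondences
rng = np.random.default_rng(1)
P_true = np.hstack([np.eye(3), np.array([[0.1], [0.2], [5.0]])])
X = rng.uniform(-1, 1, size=(8, 3))
x = project(P_true, X)
P_est = dlt_projection(X, x)
err = np.abs(project(P_est, X) - x).max()         # reprojection error
```

With noisy per-pixel correspondences from a UV map, the same linear system would be solved repeatedly inside a RANSAC loop, keeping the hypothesis with the largest inlier set.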


Transfer Learning Models for Enhanced Prediction of Cracked Tires

  • Candra Zonyfar;Taek Lee;Jung-Been Lee;Jeong-Dong Kim
    • Journal of Platform Technology
    • /
    • v.11 no.6
    • /
    • pp.13-20
    • /
    • 2023
  • Regularly inspecting the condition of vehicle tires is imperative for driving safety and comfort. Poorly maintained tires can pose fatal risks, leading to accidents. Unfortunately, manual visual tire inspection is often considered laborious, whereas an automated tire inspection method can significantly enhance driver compliance and awareness, encouraging routine checks. There is therefore an urgent need for automated tire inspection solutions. Here, we focus on developing a deep learning (DL) model to predict cracked tires. The main idea of this study is to present a comparative analysis of transfer learning (TL) with the DenseNet121, VGG-19, and EfficientNet convolutional neural networks (CNNs) and to suggest which model is more suitable for cracked-tire classification tasks. To measure the models' effectiveness, we experimented on a publicly accessible dataset of 1028 images categorized into two classes. Our experiments achieve good performance, with an accuracy of 0.9515. This shows that the model is reliable even though it works on a dataset of tire images characterized by homogeneous color intensity.


Dancing Avatar: You can dance like PSY too (춤추는 아바타: 당신도 싸이처럼 춤을 출 수 있다.)

  • Gu, Dongjun;Joo, Youngdon;Vu, Van Manh;Lee, Jungwoo;Ahn, Heejune
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • fall
    • /
    • pp.256-259
    • /
    • 2021
  • In this paper, we design and implement a technique that captures a person with a Kinect, reconstructs the person as a 3D avatar, and makes the avatar dance like a celebrity. Unlike existing purely deep learning-based methods, this technique uses a 3D human body model and thus obtains stable and flexible results. First, the geometric information of the body model is estimated using 3D joints, and a detailed texture is recovered through DensePose. Clothing model information is then recovered using a 3D point cloud and ICP matching. The avatar built from the resulting body and clothing models retains the rigged property of the body model, making it well suited to animation, and naturally reproduces dances such as PSY's "Gangnam Style". Remaining improvements include more accurate segmentation of the body and clothing and removal of the noise that can arise during segmentation.
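The ICP matching used above for the clothing model can be sketched as a minimal point-to-point ICP in NumPy: brute-force nearest neighbours plus a Kabsch (orthogonal Procrustes) rigid update. The toy clouds, iteration count, and lack of outlier rejection are illustrative assumptions, not the system's actual implementation.

```python
import numpy as np

def kabsch(P, Q):
    """Best rigid transform (R, t) aligning P onto Q; P, Q: (n, 3)."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP with brute-force nearest neighbours."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None] - dst[None, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(1)]               # closest dst per src point
        R, t = kabsch(cur, matched)
        cur = cur @ R.T + t                       # apply rigid update
    return cur

# toy example: align a slightly rotated/translated copy of the same cloud
rng = np.random.default_rng(2)
dst = rng.standard_normal((30, 3))
theta = 0.1
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
src = dst @ Rz.T + np.array([0.1, -0.05, 0.08])
aligned = icp(src, dst)
resid = np.abs(aligned - dst).max()               # residual misalignment
```

Point-to-point ICP of this kind only converges from a close initial guess, which is why the system would first pose the clothing cloud roughly with the estimated body model before refining with ICP.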


A study on hand gesture recognition using 3D hand feature (3차원 손 특징을 이용한 손 동작 인식에 관한 연구)

  • Bae Cheol-Soo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.4
    • /
    • pp.674-679
    • /
    • 2006
  • In this paper, a gesture recognition system using 3D feature data is described. The system relies on a novel 3D sensor that generates a dense range image of the scene. The main novelty of the proposed system, with respect to other 3D gesture recognition techniques, is its capability for robust recognition of complex hand postures such as those encountered in sign language alphabets. This is achieved by explicitly employing 3D hand features. Moreover, the proposed approach does not rely on colour information and guarantees robust segmentation of the hand under various illumination conditions and scene content. Several novel 3D image analysis algorithms are presented, covering the complete processing chain: 3D image acquisition, arm segmentation, hand-forearm segmentation, hand pose estimation, 3D feature extraction, and gesture classification. The proposed system is tested in an application scenario involving the recognition of sign-language postures.