• Title/Summary/Keyword: Pose Recognition


Pose-Normalized 3D Face Modeling (포즈 정규화된 3D 얼굴 모델링 기법)

  • Yu, Sun-Jin;Kim, Sang-Ki;Kim, Il-Do;Lee, Sang-Youn
    • Proceedings of the IEEK Conference / 2006.06a / pp.455-456 / 2006
  • This paper presents an automatic pose-normalized 3D face data acquisition method that performs 3D face modeling and pose normalization at once, using 2D information from an AAM (Active Appearance Model) together with 3D normal-vector information. The 3D face modeling system consists of two cameras and one projector. To verify the proposed pose-normalized modeling method, we conducted a 2.5D face recognition experiment; the results show that the method is robust against pose variation. (A minimal normal-vector sketch follows below.)
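
The normal-vector step can be illustrated with a small, self-contained sketch (not the authors' implementation): estimate a face normal from a 3D point cloud by a plane fit and rotate the cloud so the normal points along the camera's optical axis, which is the essence of pose normalization.

```python
# Sketch of pose normalization by aligning an estimated face normal with the z-axis.
import numpy as np

def rotation_between(a, b):
    """Rotation matrix that maps unit vector a onto unit vector b (Rodrigues formula)."""
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    v = np.cross(a, b)
    c = np.dot(a, b)
    if np.isclose(c, -1.0):               # opposite vectors: rotate 180 deg about an orthogonal axis
        axis = np.eye(3)[np.argmin(np.abs(a))]
        v = np.cross(a, axis)
        v /= np.linalg.norm(v)
        return 2 * np.outer(v, v) - np.eye(3)
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx / (1 + c)

def normalize_pose(points):
    """points: (N, 3) face point cloud. Rotate so the mean surface normal
    (estimated from a plane fit) points along +z, i.e. a frontal pose."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]                       # smallest singular direction ~ plane normal
    if normal[2] < 0:                     # make the normal face the camera
        normal = -normal
    R = rotation_between(normal, np.array([0.0, 0.0, 1.0]))
    return centered @ R.T

if __name__ == "__main__":
    cloud = np.random.rand(500, 3) * [1.0, 1.2, 0.1]   # flat, roughly planar "face"
    print(normalize_pose(cloud).shape)
```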

Optimal Camera Arrangement for Automatic Recognition of Steel Material based on Augmented Reality in Outdoor Environment (실외 환경에서의 증강 현실 기반의 자재 인식을 위한 최적의 카메라 배치)

  • Do, Hyun-Min;Kim, Bong-Keun
    • The Journal of Korea Robotics Society / v.5 no.2 / pp.143-151 / 2010
  • Automation and robotization have been pursued in construction for several decades, and the construction industry has become an important application area for service robotics. In steel construction in particular, automatic recognition of structural steel members in the stockyard is emphasized. However, since the pose of a steel frame in the stockyard is site-dependent and the stockyard is usually outdoors, it is difficult to determine the pose automatically. This paper adopts an augmented-reality-based recognition method to cope with this problem. Focusing on the lighting conditions of the outdoor environment, we formulate a constrained optimization problem and suggest a methodology to evaluate the optimal camera arrangement; simulation results show that a sub-optimal camera position can be obtained. (A toy camera-placement search is sketched below.)
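
As a loose illustration of a constrained camera-placement search (the objective and constraint below are stand-ins, not the paper's formulation), one can score candidate positions with a toy back-lighting penalty subject to a minimum-distance constraint:

```python
# Hypothetical camera-placement search: maximize a toy score under a distance constraint.
import numpy as np

def recognition_score(cam_pos, target, sun_dir):
    """Toy objective: prefer short distance to the target and a viewing direction
    that avoids looking into the sun (back-lighting)."""
    view = target - cam_pos
    dist = np.linalg.norm(view)
    view = view / dist
    backlight = max(0.0, np.dot(view, -sun_dir))   # 1.0 when looking straight at the sun
    return -dist - 5.0 * backlight                 # higher is better

def best_camera_position(target, sun_dir, candidates, min_dist=2.0):
    feasible = [c for c in candidates if np.linalg.norm(target - c) >= min_dist]  # constraint
    return max(feasible, key=lambda c: recognition_score(c, target, sun_dir))

if __name__ == "__main__":
    target = np.array([0.0, 0.0, 0.0])
    sun = np.array([0.0, -1.0, -1.0]) / np.sqrt(2)
    grid = [np.array([x, y, 1.5])
            for x in np.linspace(-5, 5, 21) for y in np.linspace(-5, 5, 21)]
    print(best_camera_position(target, sun, grid))
```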

A Dangerous Situation Recognition System Using Human Behavior Analysis (인간 행동 분석을 이용한 위험 상황 인식 시스템 구현)

  • Park, Jun-Tae;Han, Kyu-Phil;Park, Yang-Woo
    • Journal of Korea Multimedia Society / v.24 no.3 / pp.345-354 / 2021
  • Recently, deep learning-based image recognition systems have been adopted in various surveillance environments, but most of them are still single-frame object recognition methods, which are insufficient for long-term temporal analysis and higher-level situation management. We therefore propose a method that recognizes specific dangerous situations caused by humans in real time using deep learning-based object analysis techniques. The proposed method uses deep learning-based object detection and tracking algorithms to recognize situations such as 'trespassing' and 'loitering'. In addition, human joint pose data are extracted and analyzed for emergency-awareness functions such as 'falling down', so that the system can issue notifications in both security and emergency applications. (A simple keypoint-based fall rule is sketched below.)
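
A rule of the kind used for a 'falling down' function can be sketched over 2D joint keypoints; the keypoint names and thresholds below are illustrative assumptions, not the paper's parameters:

```python
# Toy fall rule on 2D keypoints from any pose estimator: flag a fall when the torso
# stays close to horizontal for a sustained number of frames.
import math

def torso_angle_deg(keypoints):
    """Angle of the neck-to-hip vector from the vertical, in degrees.
    keypoints: dict of (x, y) in image coordinates, y growing downward."""
    nx, ny = keypoints["neck"]
    hx, hy = keypoints["mid_hip"]
    return math.degrees(math.atan2(abs(hx - nx), abs(hy - ny)))

def is_fallen(track, angle_thresh=60.0, frames_required=15):
    """track: list of per-frame keypoint dicts."""
    recent = track[-frames_required:]
    return len(recent) == frames_required and all(
        torso_angle_deg(kp) > angle_thresh for kp in recent)

if __name__ == "__main__":
    upright = {"neck": (100, 50), "mid_hip": (102, 150)}
    lying = {"neck": (50, 200), "mid_hip": (160, 210)}
    print(is_fallen([upright] * 15), is_fallen([lying] * 15))   # False True
```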

The Object 3D Pose Recognition Using Stereo Camera (스테레오 카메라를 이용한 물체의 3D 포즈 인식)

  • Yoo, Sung-Hoon;Kang, Hyo-Seok;Cho, Young-Wan;Kim, Eun-Tai;Park, Mig-Non
    • Proceedings of the IEEK Conference / 2008.06a / pp.1123-1124 / 2008
  • In this paper, we develop a program that recognizes the 3D pose of an object using a stereo camera. The object is detected with the Canny edge detection algorithm, the stereo camera is used to obtain 3D points on the object, and the object's pose is recognized with the iterative closest point (ICP) algorithm (a minimal ICP sketch follows below).
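
A minimal point-to-point ICP sketch (not the authors' code) shows the alignment step: find nearest model points, solve for the best rigid transform, and iterate.

```python
# Minimal point-to-point ICP: estimate the rigid transform aligning a measured cloud to a model.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares R, t with R @ src[i] + t ~= dst[i] (Kabsch algorithm)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, iters=30):
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iters):
        _, idx = tree.query(src)                  # closest model point for each source point
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
    return src

if __name__ == "__main__":
    model = np.random.rand(200, 3)
    a = np.deg2rad(20)
    R = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
    measured = model @ R.T + np.array([0.1, -0.05, 0.2])
    aligned = icp(measured, model)
    print(np.abs(aligned - model).max())          # small residual after alignment
```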

Exercise posture correction system based on image recognition (영상인식 기반 운동 자세 교정 시스템)

  • Dong-uk Kim;Gi-beom Ham;Gang-min Lee;Tae-ho Lim;Hyeon-hyeok Lim;Sang-ho Yeom;Tae-jin Yun
    • Proceedings of the Korean Society of Computer Information Conference / 2023.07a / pp.489-490 / 2023
  • This paper proposes and develops an exercise posture correction system based on body image recognition. Using Google's open-source MediaPipe Pose, the system recognizes the user's exercise motion in real time from a webcam, and uses the 33 pose landmarks of the recognized body structure to count repetitions and measure the accuracy of the exercise motion. This makes exercise more accessible to people who work out alone or are exercising for the first time, and guides them toward exercising with correct posture. (A minimal rep-counting sketch follows below.)
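
A minimal sketch of this kind of pipeline, assuming a webcam and a squat-style exercise (the thresholds and rep logic are illustrative, not taken from the paper), uses MediaPipe Pose landmarks to compute a knee angle and count repetitions:

```python
# Sketch: MediaPipe Pose on a webcam stream, a joint angle from three landmarks,
# and a simple squat rep counter.
import cv2
import mediapipe as mp
import numpy as np

mp_pose = mp.solutions.pose

def joint_angle(a, b, c):
    """Angle at b (degrees) formed by landmarks a-b-c, using (x, y) only."""
    a, b, c = (np.array([p.x, p.y]) for p in (a, b, c))
    cosang = np.dot(a - b, c - b) / (np.linalg.norm(a - b) * np.linalg.norm(c - b) + 1e-9)
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

cap = cv2.VideoCapture(0)
reps, down = 0, False
with mp_pose.Pose() as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            lm = results.pose_landmarks.landmark
            knee = joint_angle(lm[mp_pose.PoseLandmark.LEFT_HIP],
                               lm[mp_pose.PoseLandmark.LEFT_KNEE],
                               lm[mp_pose.PoseLandmark.LEFT_ANKLE])
            if knee < 90:                      # deep squat position
                down = True
            elif down and knee > 160:          # back to standing: count one rep
                reps, down = reps + 1, False
        cv2.putText(frame, f"reps: {reps}", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.imshow("pose", frame)
        if cv2.waitKey(1) & 0xFF == 27:        # Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```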

A Distributed Real-time 3D Pose Estimation Framework based on Asynchronous Multiviews

  • Hwang, Taemin;Kim, Jieun;Kim, Minjoon
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.2 / pp.559-575 / 2023
  • 3D human pose estimation is widely applied in various fields, including action recognition, sports analysis, and human-computer interaction, and has achieved significant progress with the introduction of convolutional neural networks (CNNs). Recently, several studies have proposed multiview approaches to avoid the occlusions that affect single-view approaches. However, as the number of cameras increases, a CNN-based 3D pose estimation system may run short of computational resources, and when a single host system uses multiple cameras, the data transmission speed becomes inadequate owing to bandwidth limitations. To address this problem, we propose a distributed real-time 3D pose estimation framework based on asynchronous multiple cameras. The proposed framework comprises a central server and multiple edge devices. Each edge device estimates a 2D human pose from its view and sends it to the central server. The central server synchronizes the received 2D pose data based on timestamps and then reconstructs the 3D human pose using geometric triangulation (a minimal triangulation sketch follows below). We demonstrate that the proposed framework increases the percentage of detected joints and successfully estimates 3D human poses in real time.
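
The server-side reconstruction step can be sketched with linear (DLT) triangulation over timestamp-matched 2D detections; the camera matrices and tolerance below are illustrative assumptions, not the paper's configuration:

```python
# Sketch: match per-camera detections by timestamp and triangulate a joint with DLT.
import numpy as np

def triangulate(proj_mats, points_2d):
    """proj_mats: list of 3x4 camera matrices; points_2d: matching (u, v) observations.
    Returns the 3D joint position via the direct linear transform."""
    A = []
    for P, (u, v) in zip(proj_mats, points_2d):
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(A))
    X = vt[-1]
    return X[:3] / X[3]

def synchronize(streams, tol=0.02):
    """streams: per-camera lists of (timestamp, joints_2d). Keep the latest detections
    whose timestamps differ from the first camera's by less than tol seconds."""
    latest = [s[-1] for s in streams if s]
    t_ref = latest[0][0]
    return [obs for (t, obs) in latest if abs(t - t_ref) < tol]

if __name__ == "__main__":
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])   # 1 m baseline
    X_true = np.array([0.2, -0.1, 3.0])
    project = lambda P, X: (P @ np.append(X, 1.0))[:2] / (P @ np.append(X, 1.0))[2]
    print(triangulate([P1, P2], [project(P1, X_true), project(P2, X_true)]))  # ~X_true
```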

Robust 2D human upper-body pose estimation with fully convolutional network

  • Lee, Seunghee;Koo, Jungmo;Kim, Jinki;Myung, Hyun
    • Advances in robotics research / v.2 no.2 / pp.129-140 / 2018
  • With the increasing demand for human pose estimation in areas such as human-computer interaction and human activity recognition, there have been numerous approaches to detecting the 2D poses of people in images more efficiently. Despite many years of research, however, estimating human poses from images still struggles to produce satisfactory results. In this study, we propose a robust 2D human upper-body pose estimation method using an RGB camera, which is efficient and cost-effective since an RGB camera is economical compared with the high-priced sensors more commonly used. For the estimation of upper-body joint positions, semantic segmentation with a fully convolutional network is exploited: joint heatmaps estimated from the acquired RGB images give the coordinates of each joint, and the network architecture is designed to learn and detect joint locations via sequential prediction (a minimal heatmap-to-coordinate sketch follows below). The proposed method was tested and validated for efficient estimation of the upper-body pose, and the results reveal the potential of a simple RGB camera for human pose estimation applications.
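
The heatmap-to-coordinate step common to such networks can be sketched as follows (the network itself is omitted; sizes are illustrative):

```python
# Sketch: take the peak of each joint heatmap and rescale it to image coordinates.
import numpy as np

def heatmaps_to_joints(heatmaps, image_size):
    """heatmaps: (J, H, W) per-joint score maps. Returns (J, 2) (x, y) joint positions."""
    J, H, W = heatmaps.shape
    img_h, img_w = image_size
    joints = np.zeros((J, 2))
    for j in range(J):
        idx = np.argmax(heatmaps[j])
        y, x = divmod(idx, W)
        joints[j] = (x * img_w / W, y * img_h / H)   # rescale to the input image
    return joints

if __name__ == "__main__":
    maps = np.zeros((3, 64, 64))
    maps[0, 10, 20] = maps[1, 30, 30] = maps[2, 50, 5] = 1.0
    print(heatmaps_to_joints(maps, (256, 256)))
```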

Recognition method using stereo images-based 3D information for improvement of face recognition (얼굴인식의 향상을 위한 스테레오 영상기반의 3차원 정보를 이용한 인식)

  • Park Chang-Han;Paik Joon-Ki
    • Journal of the Institute of Electronics Engineers of Korea CI / v.43 no.3 s.309 / pp.30-38 / 2006
  • In this paper, we improve the face recognition rate, which drops with distance, by using distance and depth information obtained from stereo face images. A monocular face image suffers a drop in recognition rate because of uncertainty in information such as object distance, size, movement, rotation, and depth, and performance degrades further when rotation, illumination, or pose changes are not accounted for. The proposed method consists of an eye detection algorithm, face pose analysis, and principal component analysis (PCA). We convert from RGB to the YCbCr color space to detect faces quickly within a limited region, create a multi-layered relative intensity map in the face candidate region, and decide whether it is a face from facial geometry. Because the depth information of the distance, eyes, and mouth can be acquired from the stereo face images, faces can be detected robustly under scaling, movement, and rotation. We train the detected left-camera face and the estimated direction difference using PCA. Simulation results show a face recognition rate of 95.83% at 100 cm for frontal faces and 98.3% under pose change. The proposed method can therefore achieve a high recognition rate with appropriate scaling and pose change according to the distance. (A minimal PCA recognition sketch follows below.)
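
The PCA recognition stage can be illustrated with a small eigenface-style sketch (the stereo depth and face-detection stages are omitted, and the data below is synthetic):

```python
# Sketch: PCA (eigenface-style) projection and nearest-neighbour matching.
import numpy as np

def train_pca(faces, n_components=20):
    """faces: (N, D) flattened face images. Returns (mean, basis) of the PCA subspace."""
    mean = faces.mean(axis=0)
    _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(face, mean, basis):
    return basis @ (face - mean)

def recognize(face, mean, basis, gallery_codes, gallery_labels):
    """Nearest neighbour in PCA space."""
    code = project(face, mean, basis)
    dists = np.linalg.norm(gallery_codes - code, axis=1)
    return gallery_labels[int(np.argmin(dists))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gallery = rng.random((10, 32 * 32))              # 10 enrolled face vectors
    labels = [f"person_{i}" for i in range(10)]
    mean, basis = train_pca(gallery, n_components=5)
    codes = np.array([project(f, mean, basis) for f in gallery])
    probe = gallery[3] + 0.01 * rng.random(32 * 32)  # slightly perturbed probe image
    print(recognize(probe, mean, basis, codes, labels))   # person_3
```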

Development of Human Following Method of Mobile Robot Using TRT Pose (TRT Pose를 이용한 모바일 로봇의 사람 추종 기법)

  • Choi, Jun-Hyeon;Joo, Kyeong-Jin;Yun, Sang-Seok;Kim, Jong-Wook
    • IEMEK Journal of Embedded Systems and Applications / v.15 no.6 / pp.281-287 / 2020
  • In this paper, we propose a method for estimating the walking direction with which a mobile robot follows a person, using TRT (TensorRT) Pose, a deep learning-based motion recognition model. The mobile robot measures the person's movement by recognizing key points on the person's pelvis and determines the direction in which the person is trying to move. Using this information and the distance between the robot and the human, the mobile robot can follow the person stably while keeping a safe distance. TRT Pose extracts only key-point information, which avoids the privacy issues that would arise if the camera on the mobile robot recorded video. To validate the proposed technique, experiments were carried out in which a human walks away from or toward the mobile robot in a zigzag pattern, and the robot continuously follows the human at the prescribed distance. (A minimal follow-controller sketch is given below.)
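
A follow-behaviour controller of this kind can be sketched as a simple proportional law on the pelvis keypoint and the measured distance; the gains, sign conventions, and distance source below are illustrative assumptions, not the paper's values:

```python
# Sketch: turn toward the pelvis keypoint and hold a fixed following distance.
def follow_command(pelvis_x, image_width, distance_m,
                   target_dist=1.0, k_ang=1.5, k_lin=0.8, max_speed=0.5):
    """Return (linear, angular) velocity commands for a differential-drive robot.
    pelvis_x: pelvis keypoint x in pixels; distance_m: measured person distance."""
    # steer so the person stays in the horizontal centre of the image
    offset = (pelvis_x - image_width / 2) / (image_width / 2)   # -1 .. 1
    angular = -k_ang * offset
    # move forward/backward to hold the target following distance
    linear = k_lin * (distance_m - target_dist)
    linear = max(-max_speed, min(max_speed, linear))
    return linear, angular

if __name__ == "__main__":
    # person to the right of centre and 2 m away: turn right, move forward
    print(follow_command(pelvis_x=400, image_width=640, distance_m=2.0))
```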

HMM-based Upper-body Gesture Recognition for Virtual Playing Ground Interface (가상 놀이 공간 인터페이스를 위한 HMM 기반 상반신 제스처 인식)

  • Park, Jae-Wan;Oh, Chi-Min;Lee, Chil-Woo
    • The Journal of the Korea Contents Association / v.10 no.8 / pp.11-17 / 2010
  • In this paper, we propose HMM-based upper-body gesture recognition. To recognize gestures in space, the poses that compose each gesture must first be classified. To classify the poses used by the interface, we use two IR cameras installed at the front and the side, so that for each pose a front view and a side view are acquired from the respective cameras. The acquired IR pose images are classified with an SVM using the non-linear RBF kernel, which reduces misclassification between poses that are not linearly separable. The resulting sequences of classified poses are then recognized as gestures using the HMM's state transition matrix (a minimal HMM recognition sketch follows below). The recognized gestures can be applied to existing applications by mapping them to OS input values.
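
The recognition stage can be sketched with per-gesture discrete HMMs evaluated by the forward algorithm; the toy gesture models below are illustrative, not the paper's parameters, and the pose labels stand in for the SVM classifier's output:

```python
# Sketch: assign a pose-label sequence to the gesture whose discrete HMM gives it
# the highest likelihood (forward algorithm).
import numpy as np

def sequence_likelihood(obs, start, trans, emit):
    """Forward algorithm for a discrete HMM. obs: sequence of pose-label indices."""
    alpha = start * emit[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
    return alpha.sum()

def recognize_gesture(obs, gesture_models):
    return max(gesture_models,
               key=lambda name: sequence_likelihood(obs, *gesture_models[name]))

if __name__ == "__main__":
    # 2 hidden states, 3 observable pose labels (e.g. arms-down / arm-up / arms-out)
    raise_arm = (np.array([0.9, 0.1]),
                 np.array([[0.7, 0.3], [0.1, 0.9]]),
                 np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1]]))
    spread_arms = (np.array([0.9, 0.1]),
                   np.array([[0.7, 0.3], [0.1, 0.9]]),
                   np.array([[0.8, 0.1, 0.1], [0.1, 0.1, 0.8]]))
    models = {"raise_arm": raise_arm, "spread_arms": spread_arms}
    print(recognize_gesture([0, 0, 1, 1, 1], models))   # raise_arm
```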