• Title/Summary/Keyword: human pose estimation (사람 자세 추정)


Real-time Avatar Animation using Component-based Human Body Tracking (구성요소 기반 인체 추적을 이용한 실시간 아바타 애니메이션)

  • Lee Kyoung-Mi
    • Journal of Internet Computing and Services / v.7 no.1 / pp.65-74 / 2006
  • Human tracking is a requirement for advanced human-computer interfaces (HCI). This paper proposes a method that uses a component-based human model to detect body parts, estimate human postures, and animate an avatar. Each body part consists of color, connection, and location information and is matched to the corresponding component of the human model. For human tracking, the 2D posture information is used for body tracking by computing similarities between frames. The depth information is determined from the relative locations of the components and converted into a movement direction to build a 2-1/2D human model. While each body part is modeled by posture and direction, the corresponding component of a 3D avatar is rotated in 3D using the information transferred from the human model. We achieved a 90% tracking rate on a test video containing a variety of postures, and the rate increased as the proposed system processed more frames.
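
The abstract does not spell out how the frame-to-frame similarity is computed; as a minimal sketch, component matching under assumed color and location features might look like this (all field names, weights, and the normalization are illustrative, not the paper's method):

```python
import numpy as np

# Hypothetical per-part descriptor: mean color (RGB) and image-plane centroid.
# The paper's exact features and weights are not given in the abstract; this
# only illustrates frame-to-frame component matching.

def component_similarity(part_prev, part_curr, img_diag, w_color=0.5, w_loc=0.5):
    """Similarity in [0, 1] between one body-part component in two frames."""
    color_dist = np.linalg.norm(part_prev["color"] - part_curr["color"]) / (np.sqrt(3) * 255)
    loc_dist = np.linalg.norm(part_prev["centroid"] - part_curr["centroid"]) / img_diag
    return 1.0 - (w_color * color_dist + w_loc * min(loc_dist, 1.0))

def track_parts(parts_prev, candidates_curr, img_diag):
    """Assign each tracked part to its best-matching candidate in the new frame."""
    return {
        name: max(candidates_curr, key=lambda c: component_similarity(p, c, img_diag))
        for name, p in parts_prev.items()
    }
```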


SAAnnot-C3Pap: Ground Truth Collection Technique of Playing Posture Using Semi Automatic Annotation Method (SAAnnot-C3Pap: 반자동 주석화 방법을 적용한 연주 자세의 그라운드 트루스 수집 기법)

  • Park, So-Hyun;Kim, Seo-Yeon;Park, Young-Ho
    • KIPS Transactions on Software and Data Engineering / v.11 no.10 / pp.409-418 / 2022
  • In this paper, we propose SAAnnot-C3Pap, a semi-automatic annotation method for obtaining ground truth of a player's posture. In the existing music domain, ground truth for two-dimensional joint positions has been obtained either with OpenPose, a two-dimensional pose estimation method, or by manual labeling. However, automatic annotation methods such as OpenPose are fast but have the disadvantage of producing inaccurate results. Therefore, this paper proposes SAAnnot-C3Pap, a semi-automatic annotation method that is a compromise between the two. The proposed approach consists of three main steps: extracting postures with OpenPose, correcting the erroneous parts with Supervisely, and then synchronizing and analyzing the OpenPose and Supervisely results. Through the proposed method, it was possible to correct the incorrect 2D joint positions produced by OpenPose, solve the problem of detecting two or more people, and obtain ground truth for playing postures. In the experiments, we compare and analyze the results of the automatic annotation method OpenPose and the proposed SAAnnot-C3Pap. The comparison shows that the proposed method improves the posture information incorrectly collected through OpenPose.
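
The synchronization between the OpenPose output and the Supervisely corrections is only described at a high level in the abstract; a hedged sketch of merging the two sources, assuming simple JSON layouts that are not the paper's actual formats, could look like this:

```python
import json

# Illustrative sketch of the "synchronize and analyze" step: merge OpenPose
# keypoints with manually corrected ones (e.g. exported from Supervisely),
# preferring a correction whenever one exists for a joint. File layouts and
# field names here are assumptions, not the paper's actual schema.

def merge_annotations(openpose_json, corrections_json, conf_threshold=0.3):
    auto = json.load(open(openpose_json))        # {frame_id: {joint: [x, y, conf]}}
    fixed = json.load(open(corrections_json))    # {frame_id: {joint: [x, y]}}
    ground_truth = {}
    for frame_id, joints in auto.items():
        merged = {}
        for joint, (x, y, conf) in joints.items():
            if frame_id in fixed and joint in fixed[frame_id]:
                merged[joint] = fixed[frame_id][joint]   # manual correction wins
            elif conf >= conf_threshold:
                merged[joint] = [x, y]                   # keep confident detection
        ground_truth[frame_id] = merged
    return ground_truth
```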

73.4% of Korean Adults Have Experienced Insomnia (우리 나라 성인 중 73.4%가 불면증 경험)

  • KOREA ASSOCIATION OF HEALTH PROMOTION
    • 건강소식 / v.24 no.4 s.257 / pp.11-11 / 2000
  • Sleep is an important domain of life that occupies a large part of our lives. However, for various reasons, many modern people are unable to properly enjoy the pleasure of sleep. In the United States, the social and personal losses caused by drowsiness due to sleep disorders are estimated at about 15 billion dollars per year. In Korea, the proportion of adults who have experienced insomnia, one of the sleep disorders, is as high as 73.4%, yet most have not found a solution. Sleep disorders are broadly classified into those presenting as insomnia, those causing excessive daytime sleepiness, and those involving abnormal behavior during sleep, but the sleep disorder most people recognize in everyday life is usually limited to insomnia. Moreover, due to a lack of understanding of how to resolve such sleep disorders, people tend to rely on sleeping pills, tobacco, or alcohol, and the obsession with simply having to sleep often worsens the symptoms. However, if sleep disorders are properly understood, the solution is surprisingly simple. It is important to keep the cycle of the body's biological clock regular. In addition, because sleep is influenced more by mental factors than by physical ones, a relaxed state of mind and unconscious induction of sleep are needed above all. (Details of "This Month's Health Guide" are available on the Health Guide website (http://healthguide.hihasa.re.kr).)


Multi-Attitude Heading Reference System-based Motion-Tracking and Localization of a Person/Walking Robot (다중 자세방위기준장치 기반 사람/보행로봇의 동작추적 및 위치추정)

  • Cho, Seong Yun
    • Journal of Institute of Control, Robotics and Systems / v.22 no.1 / pp.66-73 / 2016
  • An Inertial Measurement Unit (IMU)-based Attitude and Heading Reference System (AHRS) can calculate attitude and heading information with long-term accuracy and stability by combining gyro, accelerometer, and magnetic compass signals. Motivated by this characteristic of the AHRS, this paper presents a Motion-Tracking and Localization (MTL) method for a person or walking robot using multiple AHRSs. Five AHRSs are attached to the two calves, two thighs, and waist of a person/walking robot, and joints, links, and coordinate frames are defined on the body. The outputs of the AHRSs are integrated with the link data, and the supporting foot is distinguished from the moving foot. With this information, the locations of the joints in the local coordinate frame are calculated. The experimental results show that the presented MTL method can track the motion of and localize a person/walking robot with long-term accuracy in an infrastructure-free environment.
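
The abstract outlines the joint-localization idea without equations; the sketch below illustrates one plausible way to chain from the supporting foot upward using per-segment AHRS attitudes and link lengths (the rotation convention, link axis, and segment names are assumptions, not the paper's formulation):

```python
import numpy as np

# Illustrative forward-kinematics sketch: starting from the supporting foot,
# each lower-limb link's AHRS attitude rotates its link vector, and joint
# positions are accumulated along the chain.

def rot_from_euler(roll, pitch, yaw):
    """Z-Y-X Euler angles (rad) to a rotation matrix."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def leg_joint_positions(support_foot_pos, attitudes, link_lengths):
    """Chain ankle -> knee -> hip upward from the supporting foot."""
    p = np.asarray(support_foot_pos, dtype=float)
    positions = {"ankle": p}
    for joint, segment in [("knee", "calf"), ("hip", "thigh")]:
        R = rot_from_euler(*attitudes[segment])                  # AHRS attitude of that segment
        p = p + R @ np.array([0.0, 0.0, link_lengths[segment]])  # link modeled along body z-axis
        positions[joint] = p
    return positions
```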

Four-legged walking robot for school security using Lidar SLAM (라이다 SLAM을 이용한 교내경비용 4족 로봇)

  • Lee, Ki-Hyeon;Chung, Chang-Hyun;Ahn, Seung-Hyun
    • Proceedings of the Korea Information Processing Society Conference / 2022.11a / pp.740-742 / 2022
  • In this project, to build a robot that can operate in all conditions regardless of terrain, we adopt a quadruped walking robot instead of a wheeled robot, since it is better suited to overcoming terrain and can perform stable posture control and walking. At the same time, we aim to develop a remotely operated exploration robot that performs SLAM (Simultaneous Localization and Mapping) using a LiDAR sensor and a camera module and can remotely identify objects and people.

Facial Behavior Recognition for Driver's Fatigue Detection (운전자 피로 감지를 위한 얼굴 동작 인식)

  • Park, Ho-Sik;Bae, Cheol-Soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.9C / pp.756-760 / 2010
  • This paper proposes a novel facial behavior recognition system for driver fatigue detection. Facial behavior is reflected in various facial features such as facial expression, head pose, gaze, and wrinkles, but it is very difficult to clearly discriminate a certain behavior from the obtained features alone, because human behavior is complicated and the face is often too ambiguous to provide enough information. The proposed system first detects facial features through eye tracking, facial feature tracking, furrow detection, head orientation estimation, and head motion detection, and represents the obtained features as Action Units (AUs) of the Facial Action Coding System (FACS). On the basis of the obtained AUs, it infers the probability of each state through a Bayesian network.
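
The structure and parameters of the Bayesian network are not given in the abstract; the toy sketch below only illustrates the inference idea, treating a few AUs as conditionally independent evidence for a fatigue state, with made-up probabilities:

```python
# Toy sketch: given binary AU observations (e.g. AU43 "eyes closed"), compute
# P(fatigued | observations) assuming the AUs are conditionally independent
# given the fatigue state. The priors and likelihoods below are made-up
# numbers for illustration, not values from the paper.

PRIOR_FATIGUED = 0.2
LIKELIHOOD = {                                  # P(AU observed | state)
    "AU43": {"fatigued": 0.7, "alert": 0.1},    # prolonged eye closure
    "AU45": {"fatigued": 0.6, "alert": 0.3},    # frequent blinking
    "head_nod": {"fatigued": 0.5, "alert": 0.05},
}

def posterior_fatigued(observed_aus):
    p_f, p_a = PRIOR_FATIGUED, 1.0 - PRIOR_FATIGUED
    for au, present in observed_aus.items():
        lf, la = LIKELIHOOD[au]["fatigued"], LIKELIHOOD[au]["alert"]
        p_f *= lf if present else (1.0 - lf)
        p_a *= la if present else (1.0 - la)
    return p_f / (p_f + p_a)

print(posterior_fatigued({"AU43": True, "AU45": True, "head_nod": False}))
```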

Object detection within the region of interest based on gaze estimation (응시점 추정 기반 관심 영역 내 객체 탐지)

  • Seok-Ho Han;Hoon-Seok Jang
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.16 no.3 / pp.117-122 / 2023
  • Gaze estimation, which automatically recognizes where a user is currently looking, and object detection based on the estimated gaze point can be a more accurate and efficient way to understand human visual behavior. In this paper, we propose a method to detect objects within a region of interest around the gaze point. Specifically, after estimating the 3D gaze point, a region of interest centered on the estimated gaze point is created so that object detection runs only within that region. In our experiments, we compared general object detection with the proposed region-of-interest-based object detection and found that the processing times per frame were 1.4 ms and 1.1 ms, respectively, indicating that the proposed method is faster.
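
As a rough illustration of restricting detection to the gaze region, the sketch below crops a fixed-size window around the projected gaze point and runs a placeholder detector on the crop only; the window size and the detector interface are assumptions rather than the paper's settings:

```python
def detect_around_gaze(frame, gaze_xy, detector, roi_size=256):
    """Run `detector` only inside a square ROI centered on the gaze point.

    frame: H x W x C image array, gaze_xy: estimated gaze point in pixels,
    detector: any callable returning [(x, y, w, h, label), ...] for an image.
    """
    h, w = frame.shape[:2]
    gx, gy = int(gaze_xy[0]), int(gaze_xy[1])
    half = roi_size // 2
    x0, y0 = max(gx - half, 0), max(gy - half, 0)
    x1, y1 = min(gx + half, w), min(gy + half, h)
    roi = frame[y0:y1, x0:x1]
    # shift detections back into full-frame coordinates
    return [(x + x0, y + y0, bw, bh, label) for (x, y, bw, bh, label) in detector(roi)]
```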

Performance Enhancement of the Attitude Estimation using Small Quadrotor by Vision-based Marker Tracking (영상기반 물체추적에 의한 소형 쿼드로터의 자세추정 성능향상)

  • Kang, Seokyong;Choi, Jongwhan;Jin, Taeseok
    • Journal of the Korean Institute of Intelligent Systems / v.25 no.5 / pp.444-450 / 2015
  • The accuracy of a small, low-cost CCD camera is insufficient to provide data for precisely tracking unmanned aerial vehicles (UAVs). This study shows how a UAV can hover over a human-designated tracking target by using a CCD camera rather than imprecise GPS data. To realize this, UAVs need to recognize their attitude and position in both known and unknown environments, and it is desirable for this localization to occur naturally. Estimating the attitude of a UAV through environment recognition is therefore one of the most important problems for UAV hovering. In this paper, we describe a method for estimating the attitude of a UAV using image information of a marker on the floor. The method combines the position observed from GPS sensors and the attitude estimated from images captured by a fixed camera to estimate the UAV's state. Using the a priori known path of the UAV in world coordinates and a perspective camera model, we derive geometric constraint equations that relate the image-frame coordinates of the floor marker to the estimated UAV attitude. Since the equations are based on the estimated position, measurement error may exist at all times. The proposed method utilizes the error between the observed and estimated image coordinates to localize the UAV, and a Kalman filter scheme is applied for this purpose. Its performance is verified by the image processing results and the experiment.
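
The state and measurement models are not specified in the abstract; the generic Kalman-filter sketch below only illustrates the predict/update cycle driven by the residual between observed and predicted marker image coordinates (all matrices are placeholders to be filled in from the actual models):

```python
import numpy as np

# Generic Kalman-filter sketch of the correction idea: predict the state,
# project it to expected marker image coordinates, and correct the state with
# the residual between observed and predicted coordinates. The state layout,
# matrices, and noise values are illustrative assumptions.

def kalman_step(x, P, z_obs, F, H, Q, R):
    """One predict/update cycle.

    x: state estimate, P: covariance, z_obs: observed marker image coords,
    F: state transition, H: state-to-image measurement model, Q/R: noise covariances.
    """
    # predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # update with the image-coordinate residual
    residual = z_obs - H @ x_pred
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ residual
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```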

LSTM(Long Short-Term Memory)-Based Abnormal Behavior Recognition Using AlphaPose (AlphaPose를 활용한 LSTM(Long Short-Term Memory) 기반 이상행동인식)

  • Bae, Hyun-Jae;Jang, Gyu-Jin;Kim, Young-Hun;Kim, Jin-Pyung
    • KIPS Transactions on Software and Data Engineering / v.10 no.5 / pp.187-194 / 2021
  • Human behavior recognition is the recognition of what a person is doing based on joint movements, and it relies on computer vision techniques used in image processing. Combined with CCTV and deep learning, behavior recognition can serve as a safety-accident response service at safety-management sites. However, existing research on behavior recognition through deep-learning-based human joint keypoint extraction is relatively scarce, and it has been difficult to monitor workers continuously and systematically at such sites. In this paper, to address these problems, we propose a method to recognize risk behavior using only joint keypoints and joint motion information. AlphaPose, one of the pose estimation methods, is used to extract the joint keypoints of the body. The extracted joint keypoints are fed sequentially into a Long Short-Term Memory (LSTM) model so that it learns from continuous data. An evaluation of the behavioral recognition accuracy confirmed that the accuracy for the "Lying Down" behavior was high.
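
A minimal sketch of this keypoint-sequence classification idea, assuming a COCO-style 17-joint AlphaPose output and arbitrary layer sizes rather than the paper's configuration, might look like this in PyTorch:

```python
import torch
import torch.nn as nn

# Minimal sketch: a stack of per-frame AlphaPose keypoints (T frames x 2*K
# coordinates) goes through an LSTM and a classifier head. Layer sizes,
# sequence length, and the class set are assumptions.

class KeypointLSTM(nn.Module):
    def __init__(self, num_joints=17, hidden=128, num_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2 * num_joints, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):                 # x: (batch, T, 2*num_joints)
        _, (h_n, _) = self.lstm(x)        # last hidden state summarizes the sequence
        return self.head(h_n[-1])         # class logits, e.g. "lying down" vs. others

model = KeypointLSTM()
dummy = torch.randn(8, 30, 34)            # 8 clips, 30 frames, 17 joints x (x, y)
logits = model(dummy)                     # shape (8, 4)
```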

Performance Comparison for Exercise Motion Classification using Deep Learning-based OpenPose (OpenPose기반 딥러닝을 이용한 운동동작분류 성능 비교)

  • Nam Rye Son;Min A Jung
    • Smart Media Journal / v.12 no.7 / pp.59-67 / 2023
  • Recently, research on behavior analysis that tracks human posture and movement has been actively conducted. In particular, OpenPose, open-source software developed by CMU in 2017, is a representative method for estimating human pose and behavior. OpenPose can detect and estimate various parts of a person, such as the body, face, and hands, in real time, making it applicable to fields such as smart healthcare, exercise training, security systems, and medicine. In this paper, we propose a method for classifying the four exercise movements most commonly performed by users in the gym (Squat, Walk, Wave, and Fall-down) using OpenPose-based deep learning models, a DNN and a CNN. The training data are collected by capturing the user's movements through recorded videos and real-time camera captures, and the collected dataset is preprocessed with OpenPose. The preprocessed dataset is then used to train the proposed DNN and CNN models for exercise movement classification. The errors of the proposed models are evaluated using MSE, RMSE, and MAE. The evaluation results show that the proposed DNN model outperforms the proposed CNN model.
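
As a rough illustration of the DNN variant, the sketch below defines a small fully connected classifier over a flattened OpenPose BODY_25 keypoint vector; apart from the four classes named in the abstract, every architectural choice here is an assumption:

```python
import torch
import torch.nn as nn

# Illustrative sketch of the DNN variant: a small fully connected classifier
# over a flattened OpenPose keypoint vector (25 BODY_25 joints x (x, y)).
# Hidden sizes, activation, and dropout are assumptions, not the paper's setup.

class ExerciseDNN(nn.Module):
    def __init__(self, num_joints=25, num_classes=4):   # Squat, Walk, Wave, Fall-down
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * num_joints, 128), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, keypoints):          # keypoints: (batch, 2*num_joints)
        return self.net(keypoints)

model = ExerciseDNN()
sample = torch.randn(16, 50)               # 16 poses, 25 joints x (x, y)
print(model(sample).shape)                 # torch.Size([16, 4])
```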