• Title/Summary/Keyword: Human joints

Search Results: 295

A study on an improvement of the robot motion control by the robot ergonomics (Robot Ergonomic에 의한 로보트의 동작제어 개선에 관한 연구)

  • 이순요;권규식
    • Journal of the Ergonomics Society of Korea
    • /
    • v.8 no.2
    • /
    • pp.19-26
    • /
    • 1989
  • This study, as part of integrated human-robot ergonomics, improves robot motion control for robot tasks in the TOES/WCS, whose purpose is to improve the teaching task constructed in the previous study. First, an updated combined fuzzy process using a new membership function based on Weber's law is constructed for coordinate reading of the end points in macro motion control. Second, an algorithm using geometric analysis is designed to calculate the position and posture values of the robot joints (a minimal sketch follows below). Third, the MGSLM method is designed to remove unnecessary robot motion caused by the GSLM method in micro motion control. Consequently, the methods proposed in this study lessen the burden on the human operator in improving robot motion control and reduce the teaching time and the inaccuracy of the teaching task, which contributes to integrated human-robot ergonomics.
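
The abstract does not reproduce the geometric analysis itself; the following is a minimal sketch of how joint angles could be computed geometrically from an end-point coordinate for a simple two-link planar arm, assuming known link lengths. All names and values are illustrative, not taken from the paper.

```python
import math

def two_link_ik(x, y, l1, l2, elbow_up=True):
    """Geometric inverse kinematics for a planar two-link arm.

    Given an end point (x, y) and link lengths l1, l2, return the shoulder
    and elbow joint angles in radians. Illustrative only; the paper's actual
    geometric analysis is not reproduced here.
    """
    r2 = x * x + y * y
    # Law of cosines gives the elbow angle.
    c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target point is out of reach")
    s2 = math.sqrt(1.0 - c2 * c2)
    if not elbow_up:
        s2 = -s2
    theta2 = math.atan2(s2, c2)
    # Shoulder angle: direction to the target minus the offset due to the elbow.
    theta1 = math.atan2(y, x) - math.atan2(l2 * s2, l1 + l2 * c2)
    return theta1, theta2

# Example: reach the point (1.2, 0.5) with two unit-length links.
print(two_link_ik(1.2, 0.5, 1.0, 1.0))
```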


Gait Pattern Generation of 5-link Biped Robot Based on Trajectory Images of Human's Center of Gravity (인간의 COG 궤적의 분석을 통한 5-link 이족 로봇의 보행 패턴 생성)

  • Kim, Byoung-Hyun;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.2
    • /
    • pp.131-143
    • /
    • 2009
  • Based on the fact that a human being walks naturally and stably while consuming minimum energy, this paper proposes a new method of generating a natural, human-like gait for a 5-link biped robot by analyzing the COG (Center Of Gravity) trajectory of human gait. To generate a natural gait pattern for the 5-link biped robot, the method uses the COG trajectory measured from images of human gait on the sagittal and frontal planes. Although the human and the 5-link biped robot have similar kinematic structures, their numbers of DOFs (Degrees Of Freedom) differ, so the torques of the human's joints cannot be applied directly to the robot's joints. The proposed method therefore generates the gait pattern of the 5-link biped robot with a genetic algorithm (GA) that utilizes the human's ZMP trajectory and the torques of all joints (a toy GA sketch follows below). Since the gait pattern of the 5-link biped robot model is generated from the human's, the proposed method creates a natural gait pattern for the biped robot that minimizes energy consumption as humans do. In terms of visual naturalness and energy efficiency, the superiority of the proposed method is demonstrated by comparative experiments against a conventional method based on inverse kinematics.
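
The abstract does not specify the GA encoding or cost function, so the following is only a toy sketch of the idea: a genetic algorithm searching gait parameters that track a human reference ZMP trajectory while penalizing an energy-like smoothness term. The reference trajectory, parameterization and weights below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical human reference ZMP trajectory over one gait cycle (toy data).
t = np.linspace(0.0, 1.0, 100)
zmp_human = 0.04 * np.sin(2.0 * np.pi * t)

def decode(params):
    """Map a parameter vector (sine coefficients) to a candidate ZMP trajectory."""
    k = np.arange(1, params.size + 1)[:, None]
    return (params[:, None] * np.sin(2.0 * np.pi * k * t[None, :])).sum(axis=0)

def cost(params, w_energy=0.1):
    """Tracking error against the human ZMP plus a smoothness term as an energy proxy."""
    zmp = decode(params)
    tracking = np.mean((zmp - zmp_human) ** 2)
    effort = np.mean(np.diff(zmp, 2) ** 2)  # crude stand-in for joint-torque effort
    return tracking + w_energy * effort

def ga(pop_size=60, n_params=4, n_gen=200, mut_std=0.01):
    """Minimal GA: truncation selection, uniform crossover, Gaussian mutation."""
    pop = rng.normal(0.0, 0.05, size=(pop_size, n_params))
    for _ in range(n_gen):
        order = np.argsort([cost(p) for p in pop])
        parents = pop[order[: pop_size // 2]]
        n_children = pop_size - len(parents)
        a = parents[rng.integers(0, len(parents), n_children)]
        b = parents[rng.integers(0, len(parents), n_children)]
        mask = rng.random((n_children, n_params)) < 0.5
        children = np.where(mask, a, b) + rng.normal(0.0, mut_std, (n_children, n_params))
        pop = np.vstack([parents, children])
    return pop[int(np.argmin([cost(p) for p in pop]))]

best = ga()
print("best parameters:", best, "cost:", cost(best))
```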

Depth Images-based Human Detection, Tracking and Activity Recognition Using Spatiotemporal Features and Modified HMM

  • Kamal, Shaharyar;Jalal, Ahmad;Kim, Daijin
    • Journal of Electrical Engineering and Technology
    • /
    • v.11 no.6
    • /
    • pp.1857-1862
    • /
    • 2016
  • Human activity recognition using depth information is an emerging and challenging technology in computer vision, owing to the attention it receives from many practical applications such as smart home/office systems, personal health care and 3D video games. This paper presents a novel framework for 3D human body detection, tracking and recognition from depth video sequences using spatiotemporal features and a modified HMM. To detect the human silhouette, the raw depth data are examined with spatial continuity and constraints on human motion taken into account, while frame differencing is used to track human movements. The feature extraction mechanism combines spatial depth-shape features and temporal joint features to improve classification performance. Both kinds of features are fused and used to recognize different activities with the modified hidden Markov model (M-HMM); a simplified per-class HMM sketch follows below. The proposed approach is evaluated on two challenging depth video datasets. Moreover, the system handles rotation and missing body parts of the subject, which is a major contribution to human activity recognition.
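
The details of the modified HMM are not given in the abstract. The sketch below shows only the standard pattern of fusing per-frame features and classifying a sequence with one Gaussian HMM per activity, assuming the hmmlearn package; it is not the paper's M-HMM, and the toy data are random.

```python
import numpy as np
from hmmlearn import hmm  # assumed available; standard HMMs, not the paper's M-HMM

def fuse(shape_feats, joint_feats):
    """Concatenate per-frame spatial depth-shape and temporal joint features."""
    return np.hstack([shape_feats, joint_feats])

def train_models(sequences_by_activity, n_states=3):
    """Train one Gaussian HMM per activity from lists of fused feature sequences."""
    models = {}
    for activity, seqs in sequences_by_activity.items():
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[activity] = m
    return models

def classify(models, seq):
    """Label a fused feature sequence with the activity whose HMM scores it highest."""
    return max(models, key=lambda activity: models[activity].score(seq))

# Toy example with random features: two activities, 10-dimensional fused features.
rng = np.random.default_rng(1)
data = {
    "walk": [fuse(rng.normal(0, 1, (60, 6)), rng.normal(0, 1, (60, 4))) for _ in range(5)],
    "sit":  [fuse(rng.normal(2, 1, (60, 6)), rng.normal(2, 1, (60, 4))) for _ in range(5)],
}
models = train_models(data)
test = fuse(rng.normal(2, 1, (60, 6)), rng.normal(2, 1, (60, 4)))
print(classify(models, test))
```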

A Kidnapping Detection Using Human Pose Estimation in Intelligent Video Surveillance Systems

  • Park, Ju Hyun;Song, KwangHo;Kim, Yoo-Sung
    • Journal of the Korea Society of Computer and Information
    • /
    • v.23 no.8
    • /
    • pp.9-16
    • /
    • 2018
  • In this paper, a kidnapping detection scheme is proposed in which human pose estimation is used to classify accurately between kidnapping cases and normal ones. To estimate human poses from the input video, information on 10 human joints is extracted with the OpenPose library. In addition to the features used in the previous study to represent the size change rates and the regularities of human activities, pose estimation features computed from the locations of the detected joints are used to distinguish kidnapping situations from normal accompanying ones. A frame-based kidnapping detection scheme is built on the J48 decision tree model, selected after comparison with several representative classification models. When, after two people meet in the video, the proportion of frames classified as a kidnapping situation exceeds a threshold ratio, the proposed scheme detects and reports the occurrence of a kidnapping event; a sketch of this frame-ratio decision follows below. To check the feasibility of the proposed scheme, its detection accuracy is compared with that of the previous scheme. According to the experimental results, the proposed scheme detects kidnapping situations 4.73% more accurately than the previous scheme.
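
A minimal sketch of the frame-ratio decision described above, assuming per-frame labels already produced by a classifier (the paper uses a J48 decision tree; scikit-learn's DecisionTreeClassifier would be the closest common substitute). The threshold value is illustrative since the abstract does not state it.

```python
def detect_kidnapping(frame_labels, meet_index, threshold=0.5):
    """Declare a kidnapping event if, after the two people meet, the fraction
    of frames classified as kidnapping exceeds the threshold ratio.

    frame_labels : per-frame classifier output (1 = kidnapping, 0 = normal),
                   e.g. from a decision tree over pose-based features.
    meet_index   : index of the frame at which the two people first meet.
    threshold    : hypothetical ratio; the paper's actual value is not given.
    """
    after = frame_labels[meet_index:]
    if not after:
        return False
    return sum(after) / len(after) > threshold

# Example: most post-meeting frames are classified as kidnapping.
print(detect_kidnapping([0, 0, 1, 1, 1, 0, 1, 1, 1, 1], meet_index=2))
```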

Motion Visualization of a Vehicle Driver Based on Virtual Reality (가상현실 기반에서 차량 운전자 거동의 가시화)

  • Jeong, Yun-Seok;Son, Kwon;Choi, Kyung-Hyun
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.11 no.5
    • /
    • pp.201-209
    • /
    • 2003
  • Virtual human models are widely used to save time and expense in vehicle safety studies. A human model is an essential tool for visualizing and simulating a vehicle driver in virtual environments. This research is focused on the creation and application of a human model for virtual reality. Published Korean anthropometric data are used to determine the basic dimensions of the human model. These data are fed to GEBOD, a human body data generation program, which computes the body segment geometry, mass properties, joint locations and mechanical properties. The human model was then built in MADYMO from the GEBOD data. A frontal crash and a bump-passing test were simulated, and the computed driver motion data were transferred into the virtual environment. The human model was organized into scene graphs and its motion was visualized with virtual reality techniques, including OpenGL Performer. The human model can also be controlled by an arm master to test the driver's behavior in the virtual environment.

Human Action Recognition Based on 3D Human Modeling and Cyclic HMMs

  • Ke, Shian-Ru;Thuc, Hoang Le Uyen;Hwang, Jenq-Neng;Yoo, Jang-Hee;Choi, Kyoung-Ho
    • ETRI Journal
    • /
    • v.36 no.4
    • /
    • pp.662-672
    • /
    • 2014
  • Human action recognition is used in areas such as surveillance, entertainment, and healthcare. This paper proposes a system to recognize both single and continuous human actions from monocular video sequences, based on 3D human modeling and cyclic hidden Markov models (CHMMs). First, for each frame in a monocular video sequence, the 3D coordinates of the joints belonging to a human object, through actions of multiple cycles, are extracted using 3D human modeling techniques. The 3D coordinates are then converted into a set of geometrical relational features (GRFs) for dimensionality reduction and increased discrimination. For further dimensionality reduction, k-means clustering is applied to the GRFs to generate clustered feature vectors (see the sketch below). These vectors are used to train CHMMs separately for different types of actions, based on the Baum-Welch re-estimation algorithm. For recognition of continuous actions that are concatenated from several distinct types of actions, a designed graphical model is used to systematically concatenate the separately trained CHMMs. The experimental results show the effective performance of the proposed system on both single and continuous action recognition problems.
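
A small sketch of the k-means quantization step, assuming scikit-learn is available; the number of clusters and the toy GRF data are illustrative, and the downstream CHMM training is not shown.

```python
import numpy as np
from sklearn.cluster import KMeans  # assumed available

def cluster_grfs(grf_sequences, n_clusters=8, seed=0):
    """Quantize geometrical relational feature (GRF) vectors with k-means.

    Returns the fitted k-means model and, for each input sequence, the
    sequence of cluster indices that would feed the per-action CHMM training.
    """
    all_frames = np.vstack(grf_sequences)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(all_frames)
    return km, [km.predict(seq) for seq in grf_sequences]

# Toy example: three sequences of 30 frames with 10-dimensional GRFs.
rng = np.random.default_rng(2)
seqs = [rng.normal(size=(30, 10)) for _ in range(3)]
km, symbol_seqs = cluster_grfs(seqs)
print(symbol_seqs[0][:10])
```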

Statistical Analysis of Major Joint Motions During Level Walking for Men and Women (보행에서 남성과 여성에 대한 주요 관절 운동의 통계학적 분석)

  • Kim, Min-Kyoung;Park, Jung-Hong;Son, Kwon;Seo, Kuk-Woong
    • Proceedings of the KSME Conference
    • /
    • 2007.05a
    • /
    • pp.786-791
    • /
    • 2007
  • Statistical differences between men and women are investigated for a total of eleven joint motions during level walking. Human locomotion, which exhibits nonlinear dynamical behavior, is quantified by chaos analysis. Time series of the joint motions were obtained from gait experiments with ten young males and ten young females. Body motions were captured using eight video cameras, and the corresponding angular displacements of the neck, upper body and lower extremities were computed with motion analysis software. The maximal Lyapunov exponents for the eleven joints were calculated from the reconstructed attractors and then analyzed by one-way ANOVA to find any difference between the sexes (a minimal sketch of this test is given below). The study shows that sex differences in joint motion were statistically significant at the shoulder, knee and hip joints.
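
A minimal sketch of the per-joint statistical test, assuming SciPy is available; the exponent values below are made up and do not come from the paper.

```python
import numpy as np
from scipy.stats import f_oneway  # assumed available

def compare_joint(lyap_male, lyap_female, alpha=0.05):
    """One-way ANOVA on per-subject maximal Lyapunov exponents for one joint.

    Returns the F statistic, the p-value and whether the sex difference is
    significant at the chosen level.
    """
    f_stat, p_value = f_oneway(lyap_male, lyap_female)
    return f_stat, p_value, p_value < alpha

# Hypothetical exponents for ten male and ten female subjects at one joint.
male = np.array([0.11, 0.13, 0.12, 0.10, 0.14, 0.12, 0.11, 0.13, 0.12, 0.10])
female = np.array([0.15, 0.16, 0.14, 0.17, 0.15, 0.16, 0.14, 0.15, 0.17, 0.16])
print(compare_joint(male, female))
```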


Development of a Simulator for the biped-walking robot using the open inventor (Open Inventor를 이용한 이족보행로봇의 시뮬레이터의 개발)

  • 최형식;김영식;전대원;우정재;김명훈
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2001.04a
    • /
    • pp.296-299
    • /
    • 2001
  • We developed a motion capture system to obtain angle data of the human joints in walking mode. The motion capture system is a pair of leg-shaped devices, each composed of three links with ankle, knee and pelvis joints. The joint angles are measured with potentiometers. An A/D converter is used to digitize the joint angles (a conversion sketch is given below), and the data are used to simulate and coordinate the biped-walking robot developed in our laboratory. To simulate and analyze the walking motion, three-dimensional animation is performed using the Open Inventor software.
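
The abstract does not give the converter resolution or the potentiometer calibration; the sketch below assumes a 12-bit A/D converter and a linear potentiometer, with all constants invented for illustration.

```python
def adc_to_angle(counts, counts_min=0, counts_max=4095, angle_min=-120.0, angle_max=120.0):
    """Convert a raw A/D converter reading from a joint potentiometer to a joint
    angle in degrees, assuming a linear potentiometer and a 12-bit converter.
    All calibration constants are illustrative, not taken from the paper.
    """
    span = counts_max - counts_min
    return angle_min + (counts - counts_min) * (angle_max - angle_min) / span

# Example: a mid-scale reading maps to roughly the middle of the joint range.
print(adc_to_angle(2048))  # ~0 degrees
```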


Walking motion capture system for the biped-walking robot (이족 보행로봇의 걸음새구현을 위한 모셔냅쳐 시스템)

  • 최형식;김영식;전대원;김명훈
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2000.11a
    • /
    • pp.114-117
    • /
    • 2000
  • We developed a motion capture system to obtain angle data of the human joints in walking mode. The data are used to coordinate the biped-walking robot developed in our laboratory. The motion capture system is a pair of leg-shaped devices, each composed of three links with ankle, knee and pelvis joints; the system has six axes instrumented with potentiometers. An A/D converter was used to digitize the joint angles. We filtered the data with a fourth-order Butterworth digital filter (a filtering sketch is given below) and simulated the walking motion from the filtered data in MATLAB.
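
The abstract states only the filter order, so the sampling rate and cutoff frequency in the sketch below are assumed values; SciPy is used here in place of the MATLAB workflow mentioned in the abstract.

```python
import numpy as np
from scipy.signal import butter, filtfilt  # assumed available

def smooth_joint_angles(angles, fs=100.0, cutoff=6.0, order=4):
    """Low-pass filter captured joint angles with a 4th-order Butterworth filter.

    fs and cutoff are illustrative values (sampling and cutoff frequency in Hz).
    filtfilt applies the filter forward and backward for zero phase lag, which
    is common practice for gait data.
    """
    b, a = butter(order, cutoff / (fs / 2.0))
    return filtfilt(b, a, angles)

# Toy example: a noisy knee-angle trajectory sampled for two seconds.
t = np.linspace(0.0, 2.0, 200)
noisy = 30.0 * np.sin(2.0 * np.pi * t) + np.random.default_rng(3).normal(0.0, 2.0, t.size)
print(smooth_joint_angles(noisy)[:5])
```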


Quantitative Discomfort Evaluation for Car Ingress/Egress Motions (승용차 승하차 동작의 정량적인 불편도 평가 방법)

  • Choi, Nam-Chul;Shim, Ji-Sung;Kim, Jae-Ho;Lee, Sang-Muk;Lee, Sang-Hun
    • Korean Journal of Computational Design and Engineering
    • /
    • v.15 no.5
    • /
    • pp.333-342
    • /
    • 2010
  • This paper describes a novel quantitative discomfort evaluation method based on motion data and its application to the discomfort analysis of car ingress/egress motions. To develop the discomfort evaluation model, we introduced a discomfort regression curve and the range of motion for each degree of freedom of the joints of the whole human body. The maximum discomfort value over the joints at a given time is selected to represent the discomfort of the whole body at that time (see the sketch below). The results of the experiments and questionnaires support the claim that our discomfort measure matches the subjective discomfort levels obtained experimentally.
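
A minimal sketch of the max-over-joints rule described above; the per-joint discomfort curves and angle data are invented for illustration, not the paper's fitted regression curves.

```python
import numpy as np

def whole_body_discomfort(joint_angles, discomfort_curves):
    """Evaluate whole-body discomfort frame by frame.

    joint_angles      : array of shape (n_frames, n_joints), angles in degrees.
    discomfort_curves : one callable per joint mapping its angle to a discomfort
                        score (e.g. a regression curve over the joint's range of
                        motion). The quadratic curves below are made up.
    The whole-body value at each frame is the maximum over all joints.
    """
    per_joint = np.column_stack(
        [curve(joint_angles[:, j]) for j, curve in enumerate(discomfort_curves)]
    )
    return per_joint.max(axis=1)

# Toy example: two joints whose discomfort grows toward the ends of their ranges.
curves = [
    lambda a: (a / 90.0) ** 2,           # e.g. knee flexion normalized to 90 degrees
    lambda a: ((a - 20.0) / 60.0) ** 2,  # e.g. hip flexion centred at 20 degrees
]
angles = np.array([[30.0, 25.0], [80.0, 70.0], [10.0, 90.0]])
print(whole_body_discomfort(angles, curves))
```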