• Title/Summary/Keyword: Robot Joint Angle Estimation


A Study on the Effect of Weighting Matrix of Robot Vision Control Algorithm in Robot Point Placement Task (점 배치 작업 시 제시된 로봇 비젼 제어알고리즘의 가중행렬의 영향에 관한 연구)

  • Son, Jae-Kyung;Jang, Wan-Shik;Sung, Yoon-Gyung
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.29 no.9
    • /
    • pp.986-994
    • /
    • 2012
  • This paper concerns the application of a vision control algorithm with a weighting matrix to a robot point-placement task. The proposed algorithm involves four components: the robot kinematic model, the vision system model, the parameter estimation scheme, and the robot joint angle estimation scheme. It allows the robot to move actively even when the relative position between camera and robot and the camera's focal length are unknown. Both the parameter estimation scheme and the joint angle estimation scheme take the form of nonlinear equations; in particular, the joint angle estimation model includes several restrictive conditions. In this study, a weighting matrix that assigns varying weights to points near the target was applied to the parameter estimation scheme, and the effect of changing this weighting matrix on the presented vision control algorithm was investigated. Finally, the effect of the weighting matrix is demonstrated experimentally by performing the robot point-placement task.
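The role of a weighting matrix in a least-squares parameter estimation scheme can be sketched generically as follows. This is an illustrative sketch only: the Gaussian weight profile, the scalar linear model, and all numbers are assumptions, not the paper's actual camera-space formulation.

```python
import math

def weighted_least_squares(xs, ys, weights):
    """Solve min sum w_i * (y_i - a*x_i - b)^2 via the 2x2 weighted normal equations."""
    sw   = sum(weights)
    swx  = sum(w * x for w, x in zip(weights, xs))
    swxx = sum(w * x * x for w, x in zip(weights, xs))
    swy  = sum(w * y for w, y in zip(weights, ys))
    swxy = sum(w * x * y for w, x, y in zip(weights, xs, ys))
    det = swxx * sw - swx * swx
    a = (swxy * sw - swx * swy) / det
    b = (swxx * swy - swx * swxy) / det
    return a, b

def gaussian_weights(xs, x_target, sigma):
    """Heavier weights for samples near the target position."""
    return [math.exp(-((x - x_target) ** 2) / (2.0 * sigma ** 2)) for x in xs]

# Hypothetical samples of a camera-space feature vs. a joint coordinate;
# the weighting makes the fit track the region near the target (x = 4.0)
# more closely than distant samples.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 1.2, 1.9, 3.1, 4.0]
w = gaussian_weights(xs, x_target=4.0, sigma=1.0)
a, b = weighted_least_squares(xs, ys, w)
```

Because the weights decay away from the target, residuals near x = 4.0 dominate the cost, which is the qualitative effect the paper investigates.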

An Experimental Study on the Optimal Arrangement of Cameras Used for the Robot's Vision Control Scheme (로봇 비젼 제어기법에 사용된 카메라의 최적 배치에 대한 실험적 연구)

  • Min, Kwan-Ung;Jang, Wan-Shik
    • Journal of the Korean Society of Manufacturing Technology Engineers
    • /
    • v.19 no.1
    • /
    • pp.15-25
    • /
    • 2010
  • The objective of this study is to investigate the optimal arrangement of cameras used for the robot's vision control scheme. The scheme involves two estimation models: the parameter estimation model and the robot joint angle estimation model. For this study, the robot's working region is divided into three work spaces (left, central, and right), and cameras are positioned on circular arcs with radii of 1.5 m, 2.0 m, and 2.5 m, with seven cameras placed on each arc. For the experiment, nine camera arrangements are selected in each work space, each using three cameras. Six parameters are estimated for each camera using the developed parameter estimation model in order to verify the suitability of the vision system model for the nine arrangements in each work space. Finally, the robot's joint angles are estimated with the joint angle estimation model for each camera arrangement, and the effect of camera arrangement on the robot's point-position control is demonstrated experimentally.

Estimation of the Frictional Coefficient of Contact Point between the Terrain and the Wheel-Legged Robot with Hip Joint Actuation (고관절 구동 방식을 갖는 바퀴-다리형 로봇과 지면 간 접촉점에서의 마찰계수 추정)

  • Shin, Dong-Hwan;An, Jin-Ung;Moon, Jeon-Il
    • The Journal of Korea Robotics Society
    • /
    • v.6 no.3
    • /
    • pp.284-291
    • /
    • 2011
  • This paper presents the estimation of the friction coefficient for a wheel-legged robot with hip joint actuation producing maximum tractive force. The slip behavior of the wheel-legged robot is explored analytically and understood physically by identifying the non-slip condition and deriving the torque limits that satisfy it. Using the results of this slip analysis, the friction coefficients of the wheel-legged robot during the stance phase are estimated numerically. Finally, the paper suggests a pseudo-algorithm that not only estimates the friction coefficients but also produces a candidate touch-down angle for the next stance.
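The non-slip condition at a wheel-terrain contact can be sketched with a minimal point-contact friction model: the wheel does not slip as long as the tractive force stays inside the friction cone, so the traction-to-load ratio at the verge of slip bounds the friction coefficient from below. The numbers and the simple model below are illustrative assumptions, not the paper's derivation.

```python
def required_friction_coefficient(tractive_force, normal_force):
    """Lower bound on the friction coefficient needed to transmit the
    given tractive force without slip: |F_t| <= mu * N."""
    if normal_force <= 0.0:
        raise ValueError("contact requires a positive normal force")
    return abs(tractive_force) / normal_force

def max_wheel_torque(mu, normal_force, wheel_radius):
    """Wheel torque limit satisfying the non-slip condition: tau <= mu * N * r."""
    return mu * normal_force * wheel_radius

# Illustrative stance-phase numbers (assumed, not from the paper):
# 30 N of traction under a 100 N load implies mu >= 0.3.
mu_est = required_friction_coefficient(tractive_force=30.0, normal_force=100.0)
tau_max = max_wheel_torque(mu=0.6, normal_force=100.0, wheel_radius=0.1)
```

Estimating `mu_est` over a stance phase, as the abstract describes, amounts to evaluating this ratio from measured or computed contact forces at each contact point.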

Estimation of Attitude Control for Quadruped Walking Robot Using Load Cell (로드셀을 이용한 4족 보행로봇의 자세제어 평가)

  • Eom, Han-Sung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.16 no.6
    • /
    • pp.1235-1241
    • /
    • 2012
  • In this paper, each driving motor of the robot's leg joints is controlled by estimating the direction of the legs from measurements of each joint angle and the robot's attitude angle. The quadruped walking robot TITAN-VIII was used for this experimental study. Four load cells were installed under the feet of the four legs to measure the force pressed on each leg while walking. The walking experiments were performed under eight different conditions combining duty factor, stride length, foot trajectory height, and walking period. The validity of the attitude control of the quadruped walking robot is evaluated by comparing the force pressed on each leg with the power consumption of the joint driving motors. As a result, a slip condition was confirmed in which a foot leaves the ground late at the beginning of a new walking period; this means that the attitude control of the robot during walking was not perfect when only the joint and attitude angles were measured to estimate the direction of the feet.

External Force Estimation by Modifying RLS using Joint Torque Sensor for Peg-in-Hole Assembly Operation (수정된 RLS 기반으로 관절 토크 센서를 이용한 로봇에 가해진 외부 힘 예측 및 펙인홀 작업 구현)

  • Jeong, Yoo-Seok;Lee, Cheol-Soo
    • The Journal of Korea Robotics Society
    • /
    • v.13 no.1
    • /
    • pp.55-62
    • /
    • 2018
  • In this paper, a method for estimating the external force on an end-effector using joint torque sensors is proposed. The method is based on the portion of the measured torque caused by the external force. Because of noise in the torque sensor data, a recursive least-squares (RLS) estimation algorithm is used to obtain a smoother estimate of the external force; however, this inevitably introduces a delay in detecting the external force. To reduce this delay, a modified RLS is proposed. The performance of the proposed estimation method is evaluated in experiments on a developed six-degree-of-freedom robot, in which an NI DAQ device and LabVIEW process the robot control, data acquisition, and output in real time. With the proposed modified RLS, the delay in estimating the external force is reduced by 54.9% compared with standard RLS. Experimentally, the difference between the actual and estimated external force is 4.11%, with an included angle of $5.04^{\circ}$, in the dynamic state. These results show that joint torque sensors can be used in place of commonly used external sensory systems such as F/T sensors.
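A scalar recursive least-squares estimator with a forgetting factor illustrates the trade-off the abstract describes: smoothing suppresses sensor noise, but the estimate lags a step change in the external force. This is a textbook RLS sketch, not the authors' modified algorithm; the forgetting factor, step size, and sample counts are assumptions.

```python
class ForgettingRLS:
    """Recursive least squares for a slowly varying scalar signal with
    exponential forgetting; smaller lam discards old data faster."""
    def __init__(self, lam=0.9, p0=1000.0):
        self.lam = lam   # forgetting factor in (0, 1]
        self.p = p0      # estimate covariance (large -> trust first samples)
        self.x = 0.0     # current estimate

    def update(self, measurement):
        # Gain, estimate, and covariance update for the model y = x + noise.
        k = self.p / (self.lam + self.p)
        self.x = self.x + k * (measurement - self.x)
        self.p = (1.0 - k) * self.p / self.lam
        return self.x

# Simulated step in "external force" from 0 N to 10 N at sample 50
# (noise-free for clarity); the estimate converges toward 10 N with a lag,
# which is the delay the modified RLS in the paper aims to reduce.
rls = ForgettingRLS(lam=0.9)
est = [rls.update(0.0 if i < 50 else 10.0) for i in range(100)]
```

Lowering `lam` shortens the lag at the price of less noise suppression; the paper's modification targets exactly this delay.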

A Study on Rigid body Placement Task of based on Robot Vision System (로봇 비젼시스템을 이용한 강체 배치 실험에 대한 연구)

  • 장완식;신광수;안철봉
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.15 no.11
    • /
    • pp.100-107
    • /
    • 1998
  • This paper presents the development of an estimation model and a control method based on a new robot vision approach. The proposed control method uses a sequential estimation scheme that permits placement of a rigid body in each of the two-dimensional image planes of the monitoring cameras. An estimation model with six parameters is developed from a model that generalizes the known 4-axis SCARA robot kinematics to accommodate an unknown relative camera position and orientation. Based on the parameters estimated for each camera, the robot joint angles are estimated by an iteration method. The method is tested experimentally in two ways: an estimation model test and a three-dimensional rigid-body placement task. The results show that the control scheme is precise and robust, which opens the door to a range of multi-axis robot applications such as assembly and welding.
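The iterative joint-angle estimation step can be illustrated with a generic Newton iteration that inverts the forward kinematics of a 2-link planar arm. This is a sketch under assumptions: the link lengths, target, and Cartesian formulation are invented for illustration, whereas the paper's scheme iterates in camera space using its six estimated parameters.

```python
import math

L1, L2 = 1.0, 0.8  # assumed link lengths [m]

def forward(q1, q2):
    """End-effector position of a 2-link planar arm."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

def jacobian(q1, q2):
    """2x2 manipulator Jacobian d(x, y)/d(q1, q2)."""
    j11 = -L1 * math.sin(q1) - L2 * math.sin(q1 + q2)
    j12 = -L2 * math.sin(q1 + q2)
    j21 = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    j22 = L2 * math.cos(q1 + q2)
    return j11, j12, j21, j22

def estimate_joint_angles(target, q=(0.3, 0.5), iters=20):
    """Newton iteration: solve forward(q) = target for the joint angles."""
    q1, q2 = q
    for _ in range(iters):
        x, y = forward(q1, q2)
        ex, ey = target[0] - x, target[1] - y
        j11, j12, j21, j22 = jacobian(q1, q2)
        det = j11 * j22 - j12 * j21   # 2x2 inverse applied to the error
        q1 += (j22 * ex - j12 * ey) / det
        q2 += (-j21 * ex + j11 * ey) / det
    return q1, q2

q1, q2 = estimate_joint_angles(target=(1.2, 0.9))
```

For a reachable, well-conditioned target like this one, the iteration converges in a handful of steps; a production version would add a convergence test and singularity handling.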


A study on the rigid body placement task of a robot system based on the computer vision system (컴퓨터 비젼시스템을 이용한 로봇시스템의 강체 배치 실험에 대한 연구)

  • 장완식;유창규;신광수;김호윤
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 1995.10a
    • /
    • pp.1114-1119
    • /
    • 1995
  • This paper presents the development of an estimation model and a control method based on a new computer vision approach. The proposed control method uses a sequential estimation scheme that permits placement of a rigid body in each of the two-dimensional image planes of the monitoring cameras. An estimation model with six parameters is developed from a model that generalizes the known 4-axis SCARA robot kinematics to accommodate an unknown relative camera position and orientation. Based on the parameters estimated for each camera, the robot joint angles are estimated by an iteration method. The method is tested experimentally in two ways: an estimation model test and a three-dimensional rigid-body placement task. The results show that the control scheme is precise and robust, which opens the door to a range of multi-axis robot applications such as assembly and welding.


On the Estimation of the Center of Mass of an Autonomous Bipedal Robot (이족보행 로봇의 무게중심 실시간 추정에 관한 연구)

  • Kwon, Sang-Joo;Oh, Yong-Hwan
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.14 no.9
    • /
    • pp.886-892
    • /
    • 2008
  • In this paper, a closed-loop observer that extracts the center of mass (CoM) of a bipedal robot is suggested. Compared with the simple conversion method that uses only joint angle measurements, it yields more reliable estimates by fusing joint angle measurements with F/T sensor outputs at the ankle joints. First, a nonlinear observer is constructed in the extended Kalman filter framework to estimate the flexible rotational motion of the biped. It adopts the flexible inverted pendulum model, which is appropriate for addressing the flexible motion of bipeds, specifically in the single-support phase. The CoM estimates predicted by the flexible motion observer are combined with measurements (that is, the output of the CoM conversion equation with joint angles). The final CoM estimates then depend on the weighting values that penalize the flexible motion model and the CoM conversion equation. Simulation results show the effectiveness of the proposed algorithm.
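The final fusion step, blending the model-based prediction with the joint-angle conversion output according to weighting values, can be sketched as a simple inverse-variance weighted combination of two scalar estimates. The variances and positions below are assumed numbers, and the paper's actual observer runs in an extended Kalman filter framework rather than this one-shot fusion.

```python
def fuse_estimates(model_pred, model_var, meas, meas_var):
    """Inverse-variance weighted fusion of two scalar CoM estimates;
    the lower-variance source receives the higher weight."""
    w_model = meas_var / (model_var + meas_var)
    fused = w_model * model_pred + (1.0 - w_model) * meas
    fused_var = (model_var * meas_var) / (model_var + meas_var)
    return fused, fused_var

# Assumed numbers: the flexible-motion observer predicts a CoM x-position
# of 0.02 m (variance 1e-4); the joint-angle conversion gives 0.05 m
# (variance 4e-4). The fused estimate leans toward the observer.
fused, var = fuse_estimates(0.02, 1e-4, 0.05, 4e-4)
```

The fused variance is always below either input variance, which is why fusing the F/T-based observer with the joint-angle conversion beats the conversion alone.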

Self-Calibration of a Robot Manipulator by Using the Moving Pattern of an Object (물체의 운동패턴을 이용한 로보트 팔의 자기보정)

  • Young Chul Kay
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.32B no.5
    • /
    • pp.777-787
    • /
    • 1995
  • This paper presents a new method for automatically calibrating robot link (kinematic) parameters during the process of estimating the motion parameters of a moving object. The motion estimation is performed with stereo cameras mounted on the end-effector of a robot manipulator. This approach differs significantly from other calibration approaches in that calibration is achieved simply by observing the motion of the moving object, without resorting to any external calibration tools, at numerous and widely varying joint-angle configurations. A differential error model is developed that expresses the measurement errors of the robot in terms of the robot link parameter errors and the motion parameters, and a measurement equation representing the true measurement values is derived. By estimating these two kinds of parameters so as to minimize the difference between the measurement equations and the true moving pattern, the calibration of the robot link parameters and the estimation of the motion parameters are accomplished at the same time.
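The differential error model idea, expressing measurement residuals as a linear function of link parameter errors and solving for those errors over many joint configurations, can be sketched for a single unknown link-length error on a 1-link arm. The true error value, the noise-free measurements, and the configurations are all assumptions for illustration; the paper calibrates a full set of link parameters jointly with the object's motion parameters.

```python
import math

NOMINAL_L = 1.0
TRUE_L = 1.03  # assumed: the real link is 3 cm longer than the nominal model

def measured_position(q):
    """Camera measurement of the arm's end point (noise-free for clarity)."""
    return (TRUE_L * math.cos(q), TRUE_L * math.sin(q))

def nominal_position(q):
    """End point predicted by the nominal kinematic model."""
    return (NOMINAL_L * math.cos(q), NOMINAL_L * math.sin(q))

def estimate_link_error(joint_angles):
    """Linear least squares on the differential error model:
    residual r(q) = d(position)/dL * dL, with d/dL = (cos q, sin q)."""
    num = den = 0.0
    for q in joint_angles:
        mx, my = measured_position(q)
        nx, ny = nominal_position(q)
        rx, ry = mx - nx, my - ny
        jx, jy = math.cos(q), math.sin(q)   # sensitivity of position to dL
        num += jx * rx + jy * ry
        den += jx * jx + jy * jy
    return num / den  # least-squares estimate of dL

# Widely varying joint configurations, as the abstract recommends.
dL = estimate_link_error([0.2, 0.7, 1.2, 1.9, 2.5])
```

Because the residual here is exactly linear in the length error, the least-squares solve recovers it in one step; with noisy measurements and several parameters, the same normal-equation structure is solved over many configurations.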


An Experimental Study on the Optimal Number of Cameras used for Vision Control System (비젼 제어시스템에 사용된 카메라의 최적개수에 대한 실험적 연구)

  • 장완식;김경석;김기영;안힘찬
    • Transactions of the Korean Society of Machine Tool Engineers
    • /
    • v.13 no.2
    • /
    • pp.94-103
    • /
    • 2004
  • The vision system model used for this study involves the six parameters that permits a kind of adaptability in that relationship between the camera space location of manipulable visual cues and the vector of robot joint coordinates is estimated in real time. Also this vision control method requires the number of cameras to transform 2-D camera plane from 3-D physical space, and be used irrespective of location of cameras, if visual cues are displayed in the same camera plane. Thus, this study is to investigate the optimal number of cameras used for the developed vision control system according to the change of the number of cameras. This study is processed in the two ways : a) effectiveness of vision system model b) optimal number of cameras. These results show the evidence of the adaptability of the developed vision control method using the optimal number of cameras.