• Title/Summary/Keyword: image-based visual servoing

Posture Stabilization Control for Mobile Robot using Marker Recognition and Hybrid Visual Servoing (마커인식과 혼합 비주얼 서보잉 기법을 통한 이동로봇의 자세 안정화 제어)

  • Lee, Sung-Goo;Kwon, Ji-Wook;Hong, Suk-Kyo;Chwa, Dong-Kyoung
    • The Transactions of The Korean Institute of Electrical Engineers, v.60 no.8, pp.1577-1585, 2011
  • This paper proposes a posture stabilization control algorithm for a wheeled mobile robot that uses a hybrid visual servo control method combining position-based and image-based visual servoing (PBVS and IBVS). To overcome the chattering observed in previous work, which switched between the two modes with a simple threshold-based switching function, the proposed hybrid control law introduces a fusion term based on a blending function, which eliminates chattering and abrupt motion of the mobile robot. Unlike previous visual servo control laws built on linear control methods, the proposed law also accounts for the nonlinearity of the wheeled mobile robot to improve performance. The proposed posture stabilization law is verified by theoretical analysis, simulation, and experimental results.
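
As a rough illustration of the blending idea described in the abstract above, the sketch below fuses PBVS and IBVS velocity commands with a smooth weight instead of a hard threshold switch. The sigmoid weight, its parameters, and the two velocity inputs are illustrative assumptions, not the paper's actual fusion function.

```python
import numpy as np

# Hypothetical smooth blending between a PBVS and an IBVS velocity command.
def blending_weight(error_norm, center=0.5, slope=10.0):
    """Sigmoid weight in (0, 1): ~1 far from the goal (favoring PBVS),
    ~0 near the goal (favoring IBVS), avoiding a hard, chattering switch."""
    return 1.0 / (1.0 + np.exp(-slope * (error_norm - center)))

def hybrid_velocity(v_pbvs, v_ibvs, error_norm):
    """Fuse the two velocity commands with the blending weight."""
    w = blending_weight(error_norm)
    return w * np.asarray(v_pbvs) + (1.0 - w) * np.asarray(v_ibvs)
```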

A Study on the Visual Servoing of Autonomous Mobile Inverted Pendulum (자율주행 모바일 역진자의 비주얼서보잉에 대한 연구)

  • Lee, Junmin;Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems, v.19 no.3, pp.240-247, 2013
  • This paper proposes an optimal three-dimensional coordinate implementation of a vision sensor using two CCD cameras. PBVS (position-based visual servoing) is implemented using positional information obtained from the images; with every frame enhanced using calibration parameters, the stereo PBVS method is effective for distance calculation. IBVS (image-based visual servoing) is implemented using the difference between the reference and acquired images; the stereo IBVS method calculates distance from the rotation angles of the motors corresponding to the eyes and neck, without image enhancement. The two methods are compared in terms of advantages, disadvantages, computing time, and performance. Finally, the IBVS method is applied to the dual-arm manipulator on the mobile inverted pendulum, and the autonomous mobile inverted pendulum is successfully demonstrated using the center of mass of the manipulator.
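
For reference alongside the PBVS/IBVS comparison above, this is a minimal sketch of the classical IBVS control law that drives the image-feature error to zero; the interaction (image Jacobian) matrix L, the feature vectors, and the gain are assumed inputs and are not taken from the paper.

```python
import numpy as np

# Classical IBVS law: camera velocity v = -lambda * pinv(L) * (s - s_star).
def ibvs_camera_velocity(s, s_star, L, gain=0.5):
    """s: current feature vector, s_star: desired features, L: interaction matrix."""
    error = np.asarray(s) - np.asarray(s_star)
    return -gain * np.linalg.pinv(L) @ error
```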

Robot Manipulator Visual Servoing via Kalman Filter- Optimized Extreme Learning Machine and Fuzzy Logic

  • Zhou, Zhiyu;Hu, Yanjun;Ji, Jiangfei;Wang, Yaming;Zhu, Zefei;Yang, Donghe;Chen, Ji
    • KSII Transactions on Internet and Information Systems (TIIS), v.16 no.8, pp.2529-2551, 2022
  • Visual servoing (VS) based on the Kalman filter (KF), as in KF-based image-based visual servoing (IBVS) systems, suffers from three problems in uncalibrated environments: perturbation noise in the robot system, errors in the noise statistics, and slow convergence. To solve these three problems, this paper uses an IBVS scheme based on the KF, an African vultures optimization algorithm-enhanced extreme learning machine (AVOA-ELM), and fuzzy logic (FL). First, the KF estimates the image Jacobian matrix online, and an AVOA-ELM error compensation model is proposed to compensate for the sub-optimal KF estimate, addressing the disturbance-noise and noise-statistics-error problems. Next, an FL controller is designed for gain adaptation, which addresses the slow convergence of the KF-based IBVS system. These components are combined into a visual servoing scheme (FL-KF-AVOA-ELM). Finally, the algorithm is verified on the 6-DOF PUMA 560 robotic manipulator. Compared with existing methods, the proposed algorithm solves the three problems above without camera parameters, a robot kinematics model, or target depth information, and in comparisons with other KF-based IBVS methods under different disturbance-noise environments it achieves the best results on all three evaluation metrics.
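
A minimal sketch of the Kalman-filter estimation of the image Jacobian that the compensation model above builds on. The random-walk state model, noise covariances, and dimensions are illustrative assumptions; the AVOA-ELM compensation and fuzzy-logic gain adaptation are not reproduced here.

```python
import numpy as np

# KF estimation of a vectorized image Jacobian for an uncalibrated IBVS loop.
class JacobianKF:
    def __init__(self, n_feat, n_dof, q=1e-4, r=1e-2):
        self.n_feat, self.n_dof = n_feat, n_dof
        n = n_feat * n_dof
        self.x = np.zeros(n)                 # vectorized Jacobian estimate (row-major)
        self.P = np.eye(n)                   # state covariance
        self.Q = q * np.eye(n)               # process noise (random-walk model)
        self.R = r * np.eye(n_feat)          # measurement noise

    def update(self, delta_q, delta_s):
        """delta_q: camera/joint displacement; delta_s: observed feature change."""
        # Measurement model: delta_s = H @ x, with H built from delta_q.
        H = np.kron(np.eye(self.n_feat), np.asarray(delta_q, dtype=float))
        self.P = self.P + self.Q                             # predict (random walk)
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)                  # Kalman gain
        self.x = self.x + K @ (np.asarray(delta_s, dtype=float) - H @ self.x)
        self.P = (np.eye(self.x.size) - K @ H) @ self.P
        return self.x.reshape(self.n_feat, self.n_dof)       # current Jacobian estimate
```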

Visual Servoing of a Robot Using Camera Defocusing (카메라 디포커싱을 이용한 로보트의 시각 서보)

  • 신진우;고국현;조형석
    • Proceedings of the Korean Society of Precision Engineering Conference, 1994.10a, pp.559-564, 1994
  • Visual servoing for an eye-in-hand robot has recently become an interesting problem. The distance between the camera and the task object is very useful information for visual servoing. In previous work, this distance was obtained from the difference between a reference and a measured feature value of the object, such as its area on the image plane. However, since such a feature depends on the object, the reference feature value must be changed whenever a different task object is used. To overcome this difficulty, this paper presents a novel visual servoing method in which image blur is used to obtain the distance. The blur depends on the focal length of the camera and is not affected by a change of object, so the reference feature value does not need to be changed when a different task object is used. The paper derives the relationship between distance and blur and defines a feature Jacobian matrix based on camera defocusing to operate the robot. A series of experiments is performed to verify the proposed method.
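
A thin-lens sketch of the blur-to-distance relationship that this abstract refers to; the focal length, aperture, and focused-distance values are arbitrary assumptions, and the paper's feature Jacobian based on defocus is not reproduced.

```python
# Thin-lens defocus model: blur-circle diameter as a function of object distance.
def blur_diameter(z, focal_length=0.016, aperture=0.008, z_focus=0.5):
    """Blur-circle diameter on the image plane for an object at distance z [m]."""
    return aperture * focal_length * abs(z - z_focus) / (z * (z_focus - focal_length))

def distance_from_blur(blur, focal_length=0.016, aperture=0.008, z_focus=0.5, far_side=True):
    """Invert the blur model for depth; the near/far ambiguity is resolved by far_side."""
    k = aperture * focal_length / (z_focus - focal_length)
    # Far side:  blur = k*(z - z_focus)/z  =>  z = k*z_focus/(k - blur)
    # Near side: blur = k*(z_focus - z)/z  =>  z = k*z_focus/(k + blur)
    return k * z_focus / (k - blur) if far_side else k * z_focus / (k + blur)
```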

A Study on Feature-Based Visual Servoing Control of Robot System by Utilizing Redundant Feature

  • Han, Sung-Hyun;Hashimoto, Hideki
    • Journal of Mechanical Science and Technology, v.16 no.6, pp.762-769, 2002
  • This paper shows how effective the use of many features is for improving the speed and accuracy of visual servo systems. Rank conditions that relate the image Jacobian to control performance are derived. The focus is on showing that the accuracy of camera position control in the world coordinate system increases when redundant features are utilized, and it is also proven that accuracy improves as the number of features increases. The effectiveness of the redundant features is evaluated by the smallest singular value of the image Jacobian, which is closely related to positioning accuracy with respect to the world coordinate system. The usefulness of redundant features is verified by real-time experiments on a dual-arm robot manipulator made by Samsung Electronics Co., Ltd.
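
The sketch below illustrates the evaluation criterion mentioned above: stack the interaction matrices of all (possibly redundant) point features and take the smallest singular value of the stacked image Jacobian. The point-feature interaction matrix and the depth inputs follow the standard textbook form and are assumptions, not necessarily the paper's exact formulation.

```python
import numpy as np

def point_jacobian(x, y, Z):
    """Interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def smallest_singular_value(points, depths):
    """Stack per-point Jacobians (redundant features) and return sigma_min."""
    L = np.vstack([point_jacobian(x, y, Z) for (x, y), Z in zip(points, depths)])
    return np.linalg.svd(L, compute_uv=False)[-1]
```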

A Study on Visual Servoing Application for Robot OLP Compensation (로봇 OLP 보상을 위한 시각 서보잉 응용에 관한 연구)

  • 김진대;신찬배;이재원
    • Journal of the Korean Society for Precision Engineering, v.21 no.4, pp.95-102, 2004
  • Intelligent robot systems need to improve their accuracy and their adaptation to the working environment, and vision sensors have long been studied for this reason. However, camera and robot calibration is very difficult to perform in practice because it requires three-dimensional reconstruction and many additional processes. This paper suggests image-based visual servoing to avoid the conventional calibration procedure and to support OLP (off-line programming) path compensation. A virtual camera is modeled from the real camera parameters, and the virtual images obtained from it make the perception process easier. The initial path generated from OLP is then compensated using the pixel-level differences between the real and virtual images. Consequently, the proposed visually assisted OLP teaching removes the calibration and reconstruction processes in the real workspace. In virtual simulation, better performance is observed, and the robot path error is corrected from the image differences.
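
A loose sketch of the real-versus-virtual pixel comparison described above; the feature inputs, the pixel-to-metric gain, and the way the offset is applied to the OLP waypoints are all illustrative assumptions rather than the paper's method.

```python
import numpy as np

def olp_correction(real_pixels, virtual_pixels, pixel_to_metric_gain=1e-3):
    """Mean pixel discrepancy between real and virtual features, scaled to a
    Cartesian offset to be applied to the off-line-programmed path."""
    offset_px = np.mean(np.asarray(real_pixels) - np.asarray(virtual_pixels), axis=0)
    return pixel_to_metric_gain * offset_px

def compensate_path(olp_path, correction_xy):
    """Shift each OLP waypoint (x, y, z) by the in-plane correction (x, y)."""
    path = np.asarray(olp_path, dtype=float).copy()
    path[:, :2] += correction_xy
    return path
```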

A Novel Visual Servoing Technique Considering Robot Dynamics (로봇의 운동특성을 고려한 새로운 시각구동 방법)

  • 이준수;서일홍;김태원
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference, 1996.10b, pp.410-414, 1996
  • A visual servoing algorithm is proposed for a robot with a camera in hand. Specifically, novel image features are suggested by employing a perspective-projection viewing model to estimate the relative pitch and yaw angles between the object and the camera. To compensate for the dynamic characteristics of the robot, desired feature trajectories for learning visually guided line-of-sight robot motion are obtained by measuring features with the in-hand camera not over the entire workspace, but along a single linear path on which the robot moves under a commercially provided linear-motion function. Control actions of the camera that follow these desired feature trajectories are then approximated by fuzzy-neural networks. Experimental results on a four-axis SCARA robot with a B/W CCD camera illustrate the validity of the proposed algorithm.

Controlling robot by image-based visual servoing with stereo cameras

  • Fan, Jun-Min;Won, Sang-Chul
    • Proceedings of the Korea Society of Information Technology Applications Conference, 2005.11a, pp.229-232, 2005
  • In this paper, an image-based "approach-align-grasp" visual servo control design is proposed for object grasping with a stand-alone binocular system. The basic idea is to treat the vision system as a task-dedicated sensor included in the servo control loop, and automatic grasping follows the classical approach of splitting the task into preparation and execution stages. During the execution stage, once the image-based control model is established, the control task can be performed automatically. The proposed visual servoing scheme ensures the convergence of the image features to their desired trajectories using the image Jacobian matrix, as proved by Lyapunov stability theory. The importance of projectively invariant object/gripper alignment is also stressed: the alignment between two solids in 3-D projective space can be represented in a view-invariant manner and mapped to an image set-point without any knowledge of the camera parameters. The main feature of this method is that the accuracy of the task is not affected by discrepancies between the Euclidean setups of the preparation and execution stages. The set-point is computed from the projective alignment, and the robot gripper moves to the desired position under the image-based control law, which adopts a constant Jacobian. The method integrates vision, robotics, and automatic control, overcoming the disadvantages of discrepancies between different Euclidean setups, and proposes a control law for the binocular stand-alone case. Simulated experiments show that this image-based approach is effective for precise alignment between the robot end-effector and the object.
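
A compact sketch of the constant-Jacobian stereo servo step suggested by the abstract: features from the left and right cameras are stacked and servoed with a Jacobian held fixed (for example, one computed at the desired grasp pose). L_star, the feature vectors, and the gain are assumed inputs.

```python
import numpy as np

def stereo_ibvs_velocity(s_left, s_right, s_star_left, s_star_right, L_star, gain=0.4):
    """v = -lambda * pinv(L*) * (s - s*), with left/right features stacked."""
    s = np.concatenate([s_left, s_right])
    s_star = np.concatenate([s_star_left, s_star_right])
    return -gain * np.linalg.pinv(L_star) @ (s - s_star)
```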

Survey on Visual Navigation Technology for Unmanned Systems (무인 시스템의 자율 주행을 위한 영상기반 항법기술 동향)

  • Kim, Hyoun-Jin;Seo, Hoseong;Kim, Pyojin;Lee, Chung-Keun
    • Journal of Advanced Navigation Technology, v.19 no.2, pp.133-139, 2015
  • This paper surveys vision-based autonomous navigation technologies for unmanned systems. The main branches of visual navigation are visual servoing, visual odometry, and visual simultaneous localization and mapping (SLAM). Visual servoing provides a velocity input that guides the mobile system to a desired pose, calculated from the feature difference between the desired and acquired images. Visual odometry estimates the relative pose between consecutive image frames and can improve accuracy compared with existing dead-reckoning methods. Visual SLAM aims to construct a map of an unknown environment while simultaneously determining the mobile system's location, which is essential for operating unmanned systems in unknown environments. Trends in visual navigation are identified by examining international research on these technologies.