• Title/Summary/Keyword: robot's visual system

Search Results: 75

Development of a Robot's Visual System for Measuring Distance and Width of Object Algorithm (로봇의 시각시스템을 위한 물체의 거리 및 크기측정 알고리즘 개발)

  • Kim, Hoi-In; Kim, Gab-Soon
    • Journal of Institute of Control, Robotics and Systems / v.17 no.2 / pp.88-92 / 2011
  • This paper describes the development of a visual system for robots and of an image processing algorithm that measures the size of an object and the distance from the robot to the object. Robot visual systems usually use a camera to measure object size and distance, but such systems cannot measure them accurately when the location of the system changes or the object is not on the ground. In this paper, we therefore developed a robot visual system that measures the size of an object and the distance to it using two cameras mounted on a two-degree-of-freedom mechanism, developed the corresponding image processing algorithm, and finally carried out characterization tests of the developed system. The results indicate that the developed system can measure object size and distance accurately.
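
The abstract does not give the measurement equations, but the two-camera arrangement it describes is consistent with ordinary rectified-stereo triangulation. The following is a minimal sketch under that assumption; the focal length, baseline, and disparity values are illustrative, not taken from the paper.

```python
# Minimal pinhole-stereo sketch (not the paper's algorithm): two parallel
# cameras separated by a baseline B; the disparity d of a matched point
# gives depth, and the object's pixel width back-projects to metres.
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point for a rectified stereo pair: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def object_width(depth_m: float, width_px: float, focal_px: float) -> float:
    """Metric width of an object of pixel width w at depth Z: W = w * Z / f."""
    return width_px * depth_m / focal_px

# Example with assumed numbers: f = 800 px, B = 0.12 m, disparity = 32 px
z = stereo_depth(800.0, 0.12, 32.0)   # 3.0 m to the object
w = object_width(z, 64.0, 800.0)      # 0.24 m wide
print(f"distance = {z:.2f} m, width = {w:.2f} m")
```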

Development of Visual Servo Control System for the Tracking and Grabbing of Moving Object (이동 물체 포착을 위한 비젼 서보 제어 시스템 개발)

  • Choi, G.J.; Cho, W.S.; Ahn, D.S.
    • Journal of Power System Engineering / v.6 no.1 / pp.96-101 / 2002
  • In this paper, we address the problem of controlling an end-effector to track and grab a moving target using visual servoing. A visual servo mechanism based on the image-based servoing principle is proposed, using visual feedback to control the end-effector without calibrated robot and camera models. First, we formulate the control problem as a nonlinear least-squares optimization and update the joint angles through a Taylor series expansion. Then, to track a moving target in real time, a Jacobian estimation scheme (dynamic Broyden's method) is used to estimate the combined robot and image Jacobian. Using this algorithm, the objective function value can be driven to a neighborhood of zero. To show the effectiveness of the proposed algorithm, simulation results for a six-degree-of-freedom robot are presented.
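
As a rough sketch of the uncalibrated image-based servoing loop the abstract describes (nonlinear least squares with a Broyden-estimated combined robot-image Jacobian); the dynamic variant additionally predicts target motion, which is omitted here, and all dimensions and gains below are illustrative.

```python
# Sketch of image-based visual servoing with a Broyden-updated Jacobian.
# Generic illustration of the technique named in the abstract, not the
# authors' implementation; feature extraction and robot I/O are omitted.
import numpy as np

def broyden_update(J, dq, de):
    """Rank-one Broyden update: J <- J + ((de - J dq) dq^T) / (dq^T dq)."""
    dq = dq.reshape(-1, 1)
    de = de.reshape(-1, 1)
    return J + (de - J @ dq) @ dq.T / float(dq.T @ dq)

def servo_step(J, error, gain=0.5):
    """Gauss-Newton style joint update: dq = -gain * pinv(J) @ error."""
    return -gain * np.linalg.pinv(J) @ error

# Loop structure (pseudo-usage):
#   dq   = servo_step(J, e_k)                 # command the robot
#   e_k1 = newly observed image-feature error
#   J    = broyden_update(J, dq, e_k1 - e_k)  # refine the Jacobian estimate
```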


A Study on the Visual Odometer using Ground Feature Point (지면 특징점을 이용한 영상 주행기록계에 관한 연구)

  • Lee, Yoon-Sub; Noh, Gyung-Gon; Kim, Jin-Geol
    • Journal of the Korean Society for Precision Engineering / v.28 no.3 / pp.330-338 / 2011
  • Odometry is a critical factor in estimating the location of a robot. In a wheeled mobile robot, odometry can be computed from encoder information, but the resulting position estimate is inaccurate because of errors caused by wheel misalignment or slip. Visual odometry is commonly used to compensate for these kinematic errors, but applying conventional visual odometry to a particular robot system requires a kinematic analysis for error compensation, so it cannot easily be transferred to other robot platforms. In this paper, a novel visual odometry method is proposed that uses only a single camera facing the ground, mounted at the center of the bottom of the mobile robot. Feature points of the ground image are extracted with a median filter and a color contrast filter, the linear and angular motion vectors of the mobile robot are calculated by matching these feature points, and visual odometry is performed from those vectors. The proposed method is verified through driving tests that compare the encoder with the new visual odometry.
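
The abstract states that linear and angular motion is recovered from matched ground feature points; one standard way to do that, shown below only as a sketch, is a least-squares 2D rigid alignment (Kabsch/Procrustes) between consecutive point sets. The feature extraction itself (median and color-contrast filters in the paper) is assumed to have been done already.

```python
# Sketch: recover the planar motion (rotation + translation) of a downward-
# facing camera from matched ground feature points in consecutive frames.
# prev_pts and curr_pts are Nx2 arrays of already-matched points.
import numpy as np

def planar_motion(prev_pts, curr_pts):
    """Least-squares 2D rigid transform between two matched point sets."""
    p_mean, c_mean = prev_pts.mean(axis=0), curr_pts.mean(axis=0)
    P, C = prev_pts - p_mean, curr_pts - c_mean
    U, _, Vt = np.linalg.svd(P.T @ C)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_mean - R @ p_mean
    theta = np.arctan2(R[1, 0], R[0, 0])
    return theta, t                   # per-frame rotation and translation increments

# Integrating (theta, t) frame by frame yields the visual-odometry pose.
```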

Human-Robot Interaction in Real Environments by Audio-Visual Integration

  • Kim, Hyun-Don; Choi, Jong-Suk; Kim, Mun-Sang
    • International Journal of Control, Automation, and Systems / v.5 no.1 / pp.61-69 / 2007
  • In this paper, we developed a reliable sound localization system, including a VAD (Voice Activity Detection) component, using three microphones, as well as a face tracking system using a vision camera. Moreover, we proposed a way to integrate the three systems for human-robot interaction so as to compensate for errors in localizing a speaker and to effectively reject unnecessary speech or noise signals arriving from undesired directions. To verify the system's performance, we installed the proposed audio-visual system on a prototype robot called IROBAA (Intelligent ROBot for Active Audition) and demonstrated how the audio-visual components are integrated.
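
The abstract does not detail the integration rule, but the behaviour it describes (rejecting speech or noise from directions that disagree with the tracked face) can be sketched as a simple directional gate; the angle threshold below is an illustrative assumption, not the paper's value.

```python
# Sketch of audio-visual gating: accept a speech event only when the acoustic
# direction of arrival agrees with the azimuth of the visually tracked face.
def accept_speech(voice_active: bool, sound_azimuth_deg: float,
                  face_azimuth_deg: float, tol_deg: float = 15.0) -> bool:
    if not voice_active:                    # VAD gate: ignore non-speech frames
        return False
    diff = abs(sound_azimuth_deg - face_azimuth_deg) % 360.0
    diff = min(diff, 360.0 - diff)          # wrap the difference to [0, 180]
    return diff <= tol_deg                  # reject sound from undesired directions

print(accept_speech(True, 42.0, 50.0))      # True: audio and face directions agree
print(accept_speech(True, 42.0, 170.0))     # False: noise from another direction
```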

Development of a 3D Graphic Simulator for Assembling Robot (조립용 로봇이 3차원 그래픽 시뮬레이터 개발)

  • 장영희
    • Proceedings of the Korean Society of Machine Tool Engineers Conference / 1998.03a / pp.227-232 / 1998
  • We developed an off-line graphic simulator for Windows 95 that can simulate a robot model in 3D graphics space. A four-axis SCARA robot was adopted as the target model. Forward kinematics, inverse kinematics, and robot dynamics modeling were included in the developed program. The interface between users and the off-line programming system in the Windows 95 graphical user interface environment was also studied. The development language is Microsoft Visual C++, and the OpenGL graphics library by Silicon Graphics, Inc. was used for 3D graphics.
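
The forward kinematics mentioned in the abstract has a simple closed form for a generic 4-axis SCARA arm; the sketch below uses illustrative link lengths and joint values rather than the simulated robot's parameters.

```python
# Sketch of forward kinematics for a generic 4-axis SCARA arm: two revolute
# link joints, a prismatic z-axis, and a tool rotation.
import math

def scara_fk(theta1, theta2, d3, theta4, l1=0.4, l2=0.3):
    """Return end-effector (x, y, z, yaw) for joint values (rad, rad, m, rad)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    z = -d3                           # prismatic joint lowers the tool
    yaw = theta1 + theta2 + theta4    # planar orientation of the tool
    return x, y, z, yaw

print(scara_fk(math.radians(30), math.radians(45), 0.05, 0.0))
```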


Dynamic Visual Servo Control of Robot Manipulators Using Neural Networks (신경 회로망을 이용한 로보트의 동력학적 시각 서보 제어)

  • 박재석; 오세영
    • Journal of the Korean Institute of Telematics and Electronics B / v.29B no.10 / pp.37-45 / 1992
  • For precise manipulator control in the presence of environmental uncertainties, it has long been recognized that the robot should be controlled in a task-referenced space. In this respect, an effective visual servo control system for robot manipulators based on neural networks is proposed. In the proposed control system, a backpropagation neural network is first used to learn the mapping between the robot's joint space and the video image space. In the actual control loop, however, this network is not used directly; rather, its first and second derivatives are used to generate servo commands for the robot. Second, an Adaline neural network is used to identify the approximately linear dynamics of the robot and to generate the appropriate joint torque commands. Computer simulations demonstrate the superior performance of the proposed method. Furthermore, the proposed scheme can be utilized effectively in a robot skill acquisition system in which the robot is taught by watching a human perform a task.
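
A compact sketch of the core idea in the abstract: a learned joint-to-image-feature mapping whose derivative generates servo commands. The two-layer network, its dimensions, and the finite-difference Jacobian below are illustrative assumptions, and the network is assumed to be already trained.

```python
# Sketch: use the derivative of a learned joint -> image-feature mapping to
# generate visual servo commands. The tiny MLP and its random weights are
# placeholders; training is assumed to have been done.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 3)), np.zeros(16)   # 3 joints -> 16 hidden units
W2, b2 = rng.normal(size=(4, 16)), np.zeros(4)    # 16 hidden -> 4 image features

def features(q):
    """Forward pass of the (assumed pre-trained) joint-to-feature network."""
    return W2 @ np.tanh(W1 @ q + b1) + b2

def jacobian(q, eps=1e-5):
    """First derivative of the network, estimated by central differences."""
    J = np.zeros((4, 3))
    for i in range(3):
        dq = np.zeros(3); dq[i] = eps
        J[:, i] = (features(q + dq) - features(q - dq)) / (2 * eps)
    return J

def servo_command(q, target_features, gain=0.1):
    """Joint update dq = -gain * pinv(J) * (s - s*)."""
    return -gain * np.linalg.pinv(jacobian(q)) @ (features(q) - target_features)

print(servo_command(np.array([0.1, -0.2, 0.3]), target_features=np.zeros(4)))
```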


Vision Navigation System by Autonomous Mobile Robot

  • Shin, S.Y.; Lee, J.H.; Kang, H.
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2001.10a / pp.146.3-146 / 2001
  • This paper presents a vision navigation system that recognizes difficult indoor roads and open areas without any specific markings such as painted guide lines or tape. The robot navigates with visual sensors, using visual information to guide itself along the road, and an artificial neural network decides where to move. The system is designed with a USB web camera as the visual sensor and has been integrated into several navigation systems.
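
The abstract gives no architecture, but the decision stage it describes (a neural network mapping camera input to a movement choice) could be sketched roughly as follows; the frame size, command classes, and untrained weights are placeholders, not the authors' model.

```python
# Sketch of the decision stage: a small network classifies a downsampled
# camera frame into a steering command. Shapes and weights are illustrative.
import numpy as np

CLASSES = ["left", "straight", "right"]
rng = np.random.default_rng(1)
W = rng.normal(size=(3, 32 * 24)) * 0.01      # 32x24 grayscale frame -> 3 commands
b = np.zeros(3)

def steering_command(frame_gray_24x32: np.ndarray) -> str:
    x = frame_gray_24x32.reshape(-1) / 255.0  # flatten and normalize pixels
    return CLASSES[int(np.argmax(W @ x + b))]

print(steering_command(rng.integers(0, 256, size=(24, 32))))
```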


UGR Detection and Tracking in Aerial Images from UFR for Remote Control (비행로봇의 항공 영상 온라인 학습을 통한 지상로봇 검출 및 추적)

  • Kim, Seung-Hun; Jung, Il-Kyun
    • The Journal of Korea Robotics Society / v.10 no.2 / pp.104-111 / 2015
  • In this paper, we propose a way of providing visual information that gives a tele-operator a highly maneuverable system. The visual information is a bird's-eye-view image from a UFR (Unmanned Flying Robot) that shows the area around a UGR (Unmanned Ground Robot). A UGR detection and tracking method is required so that the UFR can always follow the UGR. The proposed system uses TLD (Tracking-Learning-Detection) to rapidly and robustly estimate the motion of the newly detected UGR between consecutive frames, and the TLD system trains an on-line UGR detector for the tracked target. The system also uses an extended Kalman filter to enhance tracking performance. As a result, the tele-operator is provided with visual information for convenient control.
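
A simplified sketch of the filtering stage: the paper applies an extended Kalman filter to the tracked UGR, whereas the constant-velocity model below is linear and purely illustrative (the frame rate and noise covariances are assumptions).

```python
# Sketch: constant-velocity Kalman filter smoothing the image position of the
# tracked UGR between frames (a simplified stand-in for the paper's EKF).
import numpy as np

dt = 1.0 / 30.0                                   # assumed frame period
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0], [0, 0, 0, 1]], float) # state: [x, y, vx, vy]
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float) # only position is measured
Q = np.eye(4) * 1e-2                              # process noise (tuning assumption)
R = np.eye(2) * 4.0                               # measurement noise (pixels^2)

x, P = np.zeros(4), np.eye(4) * 100.0             # initial state and covariance

def kf_step(x, P, z):
    """One predict/update cycle given a detected UGR position z = (u, v)."""
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    y = z - H @ x                                 # innovation
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P     # update

x, P = kf_step(x, P, np.array([320.0, 240.0]))
print(x[:2])                                      # smoothed UGR position estimate
```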

Design of HCI System of Museum Guide Robot Based on Visual Communication Skill

  • Qingqing Liang
    • Journal of Information Processing Systems / v.20 no.3 / pp.328-336 / 2024
  • Visual communication is widely used and enhanced in modern society, where there is an increasing demand for spirituality. Museum robots are one of many service robots that can replace humans to provide services such as display, interpretation and dialogue. For the improvement of museum guide robots, the paper proposes a human-robot interaction system based on visual communication skills. The system is based on a deep neural mesh structure and utilizes theoretical analysis of computer vision to introduce a Tiny+CBAM mesh structure in the gesture recognition component. This combines basic gestures and gesture states to design and evaluate gesture actions. The test results indicated that the improved Tiny+CBAM mesh structure could enhance the mean average precision value by 13.56% while maintaining a loss of less than 3 frames per second during static basic gesture recognition. After testing the system's dynamic gesture performance, it was found to be over 95% accurate for all items except double click. Additionally, it was 100% accurate for the action displayed on the current page.
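
The abstract names a Tiny+CBAM structure without defining it; the standard CBAM block (channel attention followed by spatial attention) that the name refers to looks roughly like the sketch below, with the reduction ratio and 7x7 spatial kernel as conventional defaults rather than the paper's settings.

```python
# Sketch of a standard CBAM block (channel attention then spatial attention),
# as commonly inserted into a Tiny-YOLO style backbone.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(                  # shared MLP for channel attention
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3, bias=False)

    def forward(self, x):
        # Channel attention: pooled descriptors -> shared MLP -> sigmoid gate
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention: channel-wise mean/max maps -> 7x7 conv -> sigmoid gate
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

feat = torch.randn(1, 64, 52, 52)                  # e.g. a Tiny backbone feature map
print(CBAM(64)(feat).shape)                        # torch.Size([1, 64, 52, 52])
```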

Integrated Control System Design of SCARA Robot Based on Off-Line Programming (오프라인 프로그래밍을 이용한 스카라 로봇의 통합제어시스템 설계)

  • 한성현; 정동연
    • Transactions of the Korean Society of Machine Tool Engineers / v.11 no.3 / pp.21-27 / 2002
  • In this paper, we have developed a Windows 98 off-line programming system that can simulate a robot model in 3D graphics space. A SCARA robot with four joints (FARA SM5) was adopted as the target model. Forward kinematics, inverse kinematics, and robot dynamics modeling were included in the developed program. The interface between users and the OLP system in the Windows 98 GUI environment was also studied. The development language is Microsoft Visual C++, and the OpenGL graphics library by Silicon Graphics, Inc. was used for 3D graphics.
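
The inverse kinematics mentioned in the abstract has the familiar closed form for the planar two-link part of a SCARA arm, sketched below; the link lengths and elbow-branch choice are illustrative assumptions, not FARA SM5 parameters.

```python
# Sketch of closed-form planar inverse kinematics for a two-link SCARA arm
# (elbow-up/down branch selected by `elbow_up`). Link lengths are illustrative.
import math

def scara_ik(x, y, l1=0.4, l2=0.3, elbow_up=True):
    """Return (theta1, theta2) in radians reaching planar position (x, y)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2) if elbow_up else -math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

t1, t2 = scara_ik(0.5, 0.2)
print(math.degrees(t1), math.degrees(t2))
```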