• Title/Summary/Keyword: robotic vision

Search results: 127

Dynamic tracking control of robot manipulators using vision system (비전 시스템을 이용한 로봇 머니퓰레이터의 동력학 추적 제어)

  • 한웅기;국태용
    • 제어로봇시스템학회:학술대회논문집 / 1997.10a / pp.1816-1819 / 1997
  • Using a vision system, robotic tasks in unstructured environments can be accomplished, which greatly reduces the cost and setup time otherwise needed to fit the robotic system to well-defined, structured working environments. This paper proposes a dynamic control scheme for a robot manipulator with an eye-in-hand camera configuration. To perform tasks defined in the image plane, the camera motion Jacobian (image Jacobian) matrix is used to relate camera motion to the resulting change in the object's position in the image (a minimal sketch of this image-Jacobian step follows this entry). In addition, a dynamic learning controller is designed to improve the tracking performance of the robotic system. The proposed control scheme is implemented for tasks of tracking moving objects and is shown to outperform a conventional visual servo system in convergence and in robustness to parameter uncertainty, disturbances, low sampling rate, etc.

  • PDF
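
The image-Jacobian step in the abstract above can be illustrated with a minimal sketch of classical image-based visual servoing. This is a hypothetical reconstruction, not the authors' controller: the standard point-feature interaction matrix is used, and the feature depths and control gain are assumed values.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Standard interaction (image Jacobian) matrix of one normalized image
    point (x, y) at depth Z, mapping the camera twist to the feature velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x ** 2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y ** 2, -x * y, -x],
    ])

def visual_servo_twist(features, goals, depths, gain=0.5):
    """Classical IBVS law v = -lambda * L^+ * (s - s*): drive the image error
    toward zero with a camera velocity command."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features, float) - np.asarray(goals, float)).reshape(-1)
    return -gain * np.linalg.pinv(L) @ error

# Two tracked image points with assumed depths of 0.8 m.
twist = visual_servo_twist([(0.10, -0.05), (-0.12, 0.08)],
                           [(0.00, 0.00), (-0.02, 0.02)],
                           [0.8, 0.8])
print(twist)  # 6-vector: translational and angular camera velocity command
```

The resulting 6-vector would serve as the camera velocity command; the paper's dynamic learning controller, which refines tracking over repeated trials, is not reproduced here.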

A vision-based robotic assembly system

  • Oh, Sang-Rok;Lim, Joonhong;Shin, You-Shik;Bien, Zeungnam
    • 제어로봇시스템학회:학술대회논문집 / 1987.10a / pp.770-775 / 1987
  • In this paper, the design and development of a vision-based robotic assembly system for electronic components are described. The overall system consists of three subsystems, each built around a 16-bit MC68000 microprocessor: a supervisory controller, a real-time vision system, and a servo system. The three microprocessors are interconnected over a time-shared common memory bus with a hardwired bus-arbitration scheme and operate in a master-slave arrangement in which each slave's function is fixed in software. With this architecture, the following were developed and implemented: (i) a system programming language, called 'CLRC', for the man-machine interface, including robot motion and vision primitives; (ii) a real-time vision system using a hardwired chain coder; and (iii) high-precision servo techniques for high-speed DC motors and high-speed stepping motors. The proposed control system was implemented and tested successfully in real time.

  • PDF

Tele-operating System of Field Robot for Cultivation Management - Vision based Tele-operating System of Robotic Smart Farming for Fruit Harvesting and Cultivation Management

  • Ryuh, Youngsun;Noh, Kwang Mo;Park, Joon Gul
    • Journal of Biosystems Engineering / v.39 no.2 / pp.134-141 / 2014
  • Purpose: This study validated the Robotic Smart Work System, which can provide better working conditions and higher productivity in unstructured environments such as the bio-industry, based on a tele-operation system for fruit harvesting with a low-cost 3-D positioning system at the laboratory level. Methods: For a Robotic Smart Work System for fruit harvesting and cultivation management in agriculture, a vision-based tele-operating system and 3-D position information are key elements. This study proposed Robotic Smart Farming, an agricultural version of the Robotic Smart Work System, and validated a 3-D position information system built from a low-cost omni camera and a laser marker system in the lab environment, in order to obtain a vision-based tele-operating system with 3-D position information. Results: Tasks such as harvesting a fixed target and cultivation management were accomplished even with a short time delay (30 ms ~ 100 ms). Although automatic conveyor work requiring accurate timing and positioning yields high productivity, tele-operation guided by the user's intuition is more efficient in unstructured environments that require target selection and judgment. Conclusions: The system increased work efficiency and stability by combining ancillary intelligence with the user's experience and know-how. In addition, senior and female workers can operate the system easily because it reduces labor and minimizes user fatigue.

Study of Intelligent Vision Sensor for the Robotic Laser Welding

  • Kim, Chang-Hyun;Choi, Tae-Yong;Lee, Ju-Jang;Suh, Jeong;Park, Kyoung-Taik;Kang, Hee-Shin
    • Journal of the Korean Society of Industry Convergence / v.22 no.4 / pp.447-457 / 2019
  • An intelligent sensory system is required to ensure accurate welding performance. This paper describes the development of an intelligent vision sensor for robotic laser welding. The sensor system consists of a PC-based vision camera and a stripe-type laser diode, and a set of robust image-processing algorithms is implemented. The laser-stripe sensor can measure the profile of the welding object and extract the seam line; moreover, the working distance of the sensor can be changed, with the rest of the configuration adjusted accordingly. A robot, the seam-tracking system, and a CW Nd:YAG laser make up the laser welding robot system, and a simple, efficient control scheme for the whole system is also presented. Profile-measurement and seam-tracking experiments were carried out to validate the operation of the system.
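
As a rough illustration of the stripe-based profile measurement described above, the sketch below locates the laser stripe in each image column and takes the largest height discontinuity as the seam. It is an assumption-laden toy (synthetic image, simple per-column peak detection, hand-picked threshold), not the paper's image-processing pipeline.

```python
import numpy as np

def stripe_profile(image):
    """Per-column row index of the brightest pixel, i.e. the laser stripe."""
    return np.argmax(image, axis=0).astype(float)

def seam_column(profile, min_jump=3.0):
    """Return the column with the largest height discontinuity in the stripe
    profile, or None if no jump exceeds the (assumed) threshold."""
    jumps = np.abs(np.diff(profile))
    col = int(np.argmax(jumps))
    return col if jumps[col] >= min_jump else None

# Synthetic 120x160 frame: a stripe with a 10-pixel step at column 80.
img = np.zeros((120, 160))
rows = np.where(np.arange(160) < 80, 40, 50)
img[rows, np.arange(160)] = 255.0
print(seam_column(stripe_profile(img)))  # -> 79, the column just before the step
```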

An Automatic Teaching Method by Vision Information for A Robotic Assembly System

  • Ahn, Cheol-Ki;Lee, Min-Cheol;Kim, Jong-Hyung
    • 제어로봇시스템학회:학술대회논문집 / 1999.10a / pp.65-68 / 1999
  • In this study, an off-line automatic teaching method using vision information for a robotic assembly task is proposed. Many industrial robots are still taught and programmed with a teaching pendant: the robot is guided by a human operator to the desired application locations, and these motions are recorded, later edited in the robot language used by the robot controller, and played back repetitively to perform the robot task. This conventional teaching method is time-consuming and somewhat dangerous. In the proposed method, the operator teaches the desired locations on an image acquired through a CCD camera mounted on the robot hand, and the robot-language program is automatically generated and transferred to the robot controller. This teaching process is implemented through off-line programming (OLP) software developed for the robotic assembly system used in this study. A calibration process is established to transform locations in image coordinates into robot coordinates (a hypothetical sketch of such a calibration follows this entry). The proposed teaching method is implemented and evaluated on an assembly system for soldering electronic parts onto a circuit board, where a six-axis articulated robot executes the assembly task according to the off-line automatic teaching.

  • PDF
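
The image-to-robot calibration mentioned above can be sketched, for a planar workspace, as fitting a homography from pixel coordinates to robot base coordinates using a few taught correspondences. This is an illustrative assumption (the paper does not state its calibration model here), and all correspondence values below are made up.

```python
import numpy as np

def fit_homography(pixels, robot_xy):
    """Estimate a planar homography H with [X, Y, 1]^T ~ H [u, v, 1]^T from
    four or more taught pixel <-> robot-plane correspondences (DLT)."""
    A = []
    for (u, v), (X, Y) in zip(pixels, robot_xy):
        A.append([u, v, 1, 0, 0, 0, -u * X, -v * X, -X])
        A.append([0, 0, 0, u, v, 1, -u * Y, -v * Y, -Y])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def pixel_to_robot(H, u, v):
    """Map an image pixel to planar robot coordinates using the homography."""
    X, Y, w = H @ np.array([u, v, 1.0])
    return X / w, Y / w

# Four taught correspondences (pixel -> robot mm); all values are made up.
px = [(100, 80), (500, 90), (510, 400), (110, 390)]
xy = [(250.0, -120.0), (250.0, 120.0), (60.0, 125.0), (60.0, -118.0)]
H = fit_homography(px, xy)
print(pixel_to_robot(H, 300, 240))  # robot-plane (X, Y) for an image pixel
```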

Development of Automotive Position Measuring Vision System

  • Lee, Chan-Ho;Oh, Jong-Kyu;Hur, Jong-Sung;Han, Chul-Hi;Kim, Young-Su;Lee, Kyu-Ho;Hur, Jin
    • 제어로봇시스템학회:학술대회논문집 / 2004.08a / pp.1511-1515 / 2004
  • Machine vision systems play an important role in factory automation, with many applications in the automobile manufacturing industry as the eyes of robotic automation systems. In this paper, an automobile position measuring vision system (APMVS), applicable to a manufacturing line for underbody sealing and painting of a car, is introduced. The APMVS measures the position and orientation of the car body to be sealed or painted by the robots. The configuration of the overall robotic sealing/painting system, the design and application procedure, and application examples are described.

  • PDF

Computer simulation for seam tracking algorithm using laser vision sensor in robotic welding (레이저 비전 센서를 이용한 용접선 추적에 관한 시뮬레이션)

  • Jung, Taik-Min;Sung, Ki-Eun;Rhee, Se-Hun
    • Laser Solutions / v.13 no.2 / pp.17-23 / 2010
  • Tracking a complicated weld seam is very important for welding automation, and the laser vision sensor has recently become a useful sensing tool for finding seams. Until now, however, studies of welding automation using a laser vision sensor have focused on either image processing or feature recognition from the CCD camera. Even though a simple algorithm suffices for tracking a simple seam, it is extremely difficult to develop a seam-tracking algorithm when the seam is more complex. To overcome these difficulties, this study introduces a simulation system for developing the seam-tracking algorithm (a toy tracking simulation in this spirit follows this entry). The method was verified experimentally to reduce the time and effort needed to develop the seam-tracking algorithm and to implement the sensing device.

  • PDF
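
In the spirit of the simulation approach above, a toy seam-tracking loop can be written in a few lines: a "sensor" reads the seam's lateral position slightly ahead of the torch, and a proportional correction steers the torch toward it. The seam shape, look-ahead distance, and gain are all assumed values, not taken from the paper.

```python
import numpy as np

def simulate_tracking(seam, look_ahead=5, gain=0.6):
    """Toy seam-tracking loop: at each travel step the sensor reads the seam's
    lateral position a few samples ahead of the torch, and the torch's lateral
    position is corrected proportionally toward that reading."""
    torch_y, history = 0.0, []
    for k in range(len(seam)):
        target = seam[min(k + look_ahead, len(seam) - 1)]
        torch_y += gain * (target - torch_y)   # proportional correction
        history.append(torch_y)
    return np.array(history)

# Assumed seam: a gentle weave plus a step change along the travel axis.
x = np.arange(200)
seam = 2.0 * np.sin(x / 15.0) + np.where(x > 120, 3.0, 0.0)
err = np.abs(simulate_tracking(seam) - seam)
print(f"max lateral error {err.max():.2f}, final error {err[-1]:.2f}")
```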

3D Vision Implementation for Robotic Handling System of Automotive Parts (자동차 부품의 로봇 처리 시스템을 위한 3D 비전 구현)

  • Nam, Ji Hun;Yang, Won Ock;Park, Su Hyeon;Kim, Nam Guk;Song, Chul Ki;Lee, Ho Seong
    • Journal of the Korean Society of Manufacturing Process Engineers / v.21 no.4 / pp.60-69 / 2022
  • To keep pace with Industry 4.0, it is imperative for companies to redesign their working environments by adopting robotic automation systems. Automation lines incorporate the latest cutting-edge technologies, such as 3D vision and industrial robots, to outdo competitors by reducing costs. Given the nature of the manufacturing industry, a time-saving workflow and smooth linkage between processes are vital. At Dellics, without any additional installation in the automation lines, a few improvements to the working process alone could raise productivity. The three requirements are the development of gripping technology that uses a 3D vision system to recognize the shape and location of materials, research on lighting projectors targeting long distances and high illumination, and the testing of algorithms and software to improve measurement accuracy and identify products. With these functional requisites, the improved robotic automation system should provide a better working environment and maximize overall production efficiency. In this article, the ways in which such a system can become the groundwork for establishing an unmanned working infrastructure are discussed.

A User Interface for Vision Sensor based Indirect Teaching of a Robotic Manipulator (시각 센서 기반의 다 관절 매니퓰레이터 간접교시를 위한 유저 인터페이스 설계)

  • Kim, Tae-Woo;Lee, Hoo-Man;Kim, Joong-Bae
    • Journal of Institute of Control, Robotics and Systems / v.19 no.10 / pp.921-927 / 2013
  • This paper presents a user interface for vision-based indirect teaching of a robotic manipulator with Kinect and IMU (Inertial Measurement Unit) sensors. The user interface is designed to control the manipulator more easily in joint space, Cartesian space, and the tool frame. The user's skeleton data from the Kinect and wrist-mounted IMU sensors are used to calculate the user's joint angles and wrist movement for robot control. The proposed interface allows the user to teach the manipulator without a pre-programming process, which shortens the robot teaching time and eventually enables increased productivity. Simulation and experimental results are presented to verify the performance of the robot control and interface system.
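
The joint-angle computation hinted at above can be sketched by taking the angle at a joint from three Kinect skeleton points, e.g. shoulder, elbow, and wrist. The point coordinates below are hypothetical, and the paper's full kinematic mapping (including the IMU-based wrist orientation) is not reproduced.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (radians) at skeleton point b formed by the segments b->a and
    b->c, e.g. the elbow angle from shoulder, elbow, and wrist points."""
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

# Hypothetical skeleton points in metres (shoulder, elbow, wrist).
shoulder, elbow, wrist = (0.0, 1.40, 0.0), (0.25, 1.15, 0.05), (0.45, 1.30, 0.10)
print(np.degrees(joint_angle(shoulder, elbow, wrist)))  # elbow flexion angle in degrees
```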

Machine Vision for Distributed Autonomous Robotic System (자율 분산 이동 로봇 시스템을 위한 머신비젼)

  • 김대욱;박창현;심귀보
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2004.10a / pp.94-97 / 2004
  • To run machine vision on an independent autonomous robot, this paper studies digital image processing for a DARS (Distributed Autonomous Robotic System) and the embedding of that processing in each individual robot of the DARS. The data required to drive a robot are received from a CMOS camera and the image is scanned; the raw image is then clustered with a neural-network algorithm to extract the necessary data. The result is also displayed on the user's computer terminal, and finally the autonomous mobile robots of the DARS recognize the image data and perform a specific selected action (a small clustering sketch in this spirit follows this entry).

  • PDF
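
The neural-network clustering step mentioned in the abstract above can be illustrated with a tiny winner-take-all (competitive learning) clustering of pixel colors. This is a generic sketch under assumed parameters, not the clustering network used in the paper.

```python
import numpy as np

def competitive_cluster(pixels, k=4, lr=0.1, epochs=3, seed=0):
    """Tiny winner-take-all (competitive learning) clustering of pixel colors:
    the closest of k units wins each sample and is pulled toward it."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(epochs):
        for p in pixels[rng.permutation(len(pixels))]:
            w = int(np.argmin(np.linalg.norm(centers - p, axis=1)))  # winner unit
            centers[w] += lr * (p - centers[w])                      # move toward sample
    return centers

# Random RGB values standing in for one CMOS camera frame (H*W x 3).
frame = np.random.default_rng(1).integers(0, 256, size=(64 * 64, 3)).astype(float)
print(competitive_cluster(frame))  # k cluster centers in color space
```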