• Title/Summary/Keyword: vision camera


Camera Modeling for Kinematic Calibration of an Industrial Robot (산업용 로봇의 자세 보정을 위한 카메라 모델링)

  • 왕한흥;장영희;김종수;이종붕;한성현
    • Proceedings of the Korean Society of Machine Tool Engineers Conference / 2001.10a / pp.117-121 / 2001
  • This paper presents a new approach to the calibration of a SCARA robot's orientation using a camera model that accounts for the major sources of camera distortion, namely radial, decentering, and thin-prism distortion. Radial distortion causes an inward or outward displacement of a given image point from its ideal location. Actual optical systems are subject to various degrees of decentering; that is, the optical centers of the lens elements are not strictly collinear. Thin-prism distortion arises from imperfections in lens design and manufacturing as well as camera assembly. Our purpose is to develop a vision system for pattern recognition and automatic part inspection, and to apply it to the manufacturing line.
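The three distortion terms named in the abstract can be sketched as a correction applied to an ideal image point. The sketch below follows the common radial + decentering + thin-prism form; the coefficient names and values are illustrative, not taken from the paper:

```python
def distort(x, y, k1=0.0, k2=0.0, p1=0.0, p2=0.0, s1=0.0, s2=0.0):
    """Shift an ideal normalized image point (x, y) by radial (k1, k2),
    decentering/tangential (p1, p2), and thin-prism (s1, s2) distortion.
    All coefficient values are illustrative, not taken from the paper."""
    r2 = x * x + y * y
    radial = k1 * r2 + k2 * r2 * r2     # inward/outward shift along the radius
    dx = x * radial + p1 * (r2 + 2 * x * x) + 2 * p2 * x * y + s1 * r2
    dy = y * radial + p2 * (r2 + 2 * y * y) + 2 * p1 * x * y + s2 * r2
    return x + dx, y + dy

# The principal point is unaffected; off-axis points shift outward for k1 > 0.
print(distort(0.0, 0.0, k1=0.1))   # -> (0.0, 0.0)
print(distort(1.0, 0.0, k1=0.1))   # -> (1.1, 0.0)
```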


Motion Analysis of a Moving Object using one Camera and Tracking Method (단일 카메라와 Tracking 기법을 이용한 이동 물체의 모션 분석)

  • Shin, Myong-Jun;Son, Young-Ik;Kim, Kab-Il
    • Proceedings of the KIEE Conference / 2005.07d / pp.2821-2823 / 2005
  • When we deal with image data acquired through a camera lens, much work is required to remove image distortion and obtain accurate information from the raw data, and the calibration process is very complicated and requires many trials and errors. In this paper, a new approach to image processing is presented by developing a hardware vision system with a tracking camera. Using motor control with encoders, the proposed tracking method yields the exact displacement of a moving object, so it requires no calibration process for pincushion distortion. Owing to its mobility, a single camera covers a wide range and, by lowering its height, also obtains a high-resolution image. We first introduce the structure of the motion analysis system; the constructed vision system is then investigated through experiments.
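The idea of reading displacement from the tracking motors rather than from calibrated pixels can be sketched as follows; the encoder resolution, camera height, and pan-only geometry are all hypothetical values for illustration, not the paper's hardware:

```python
import math

COUNTS_PER_REV = 4096        # assumed quadrature-encoder resolution
CAMERA_HEIGHT_MM = 500.0     # assumed height of the camera above the plane

def displacement_from_encoder(delta_counts):
    """Lateral displacement of the tracked object on the ground plane,
    derived from the pan-motor encoder change.  Because the tracking loop
    keeps the object centered in the image, displacement is read from the
    motor encoder rather than from distortion-prone pixel positions."""
    angle = 2.0 * math.pi * delta_counts / COUNTS_PER_REV
    return CAMERA_HEIGHT_MM * math.tan(angle)

print(round(displacement_from_encoder(64), 2))   # 64 counts of pan at 500 mm height
```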


A New Hand-eye Calibration Technique to Compensate for the Lens Distortion Effect (렌즈왜곡효과를 보상하는 새로운 hand-eye 보정기법)

  • Chung, Hoi-Bum
    • Journal of the Korean Society for Precision Engineering / v.19 no.1 / pp.172-179 / 2002
  • In a robot/vision system, the vision sensor, typically a CCD array sensor, is mounted on the robot hand. The problem of determining the relationship between the camera frame and the robot hand frame is referred to as hand-eye calibration. In the literature, various methods have been suggested for camera calibration and sensor registration. Recently, a one-step approach that combines camera calibration and sensor registration was suggested by Horaud & Dornaika. In this approach, the camera's extrinsic parameters need not be determined at every robot configuration. In this paper, by modifying the camera model and including the lens distortion effect in the perspective transformation matrix, a new one-step approach to hand-eye calibration is proposed.
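The frame relationship that hand-eye calibration pins down can be illustrated in the plane: for a hand-eye transform X, the product of hand pose, hand-eye transform, and camera observation reproduces the same base-frame target pose at every robot configuration, and that invariance is what one-step methods exploit to solve for X. A minimal planar sketch with hypothetical poses, not the paper's formulation:

```python
import math

def rt(theta, tx, ty):
    """Planar homogeneous transform: rotation by theta, translation (tx, ty)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inv(T):
    """Inverse of a rigid planar transform [R t; 0 1] is [R^T  -R^T t; 0 1]."""
    (a, b, tx), (c, d, ty), _ = T
    return [[a, c, -(a * tx + c * ty)],
            [b, d, -(b * tx + d * ty)],
            [0.0, 0.0, 1.0]]

# Hypothetical ground truth: hand-eye transform X and a fixed target pose.
X = rt(0.3, 0.05, -0.02)       # camera frame expressed in the hand frame
target = rt(1.0, 0.8, 0.4)     # target pose in the robot base frame

def camera_view(hand_pose):
    """Target pose as the camera observes it: B = inv(A * X) * target."""
    return mul(inv(mul(hand_pose, X)), target)

# Two robot configurations observe the same target; A_i * X * B_i then
# reconstructs the identical base-frame target pose at both of them.
A1, A2 = rt(0.1, 0.2, 0.0), rt(-0.4, 0.5, 0.3)
P1 = mul(mul(A1, X), camera_view(A1))
P2 = mul(mul(A2, X), camera_view(A2))
assert all(abs(P1[i][j] - P2[i][j]) < 1e-12 for i in range(3) for j in range(3))
```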

Estimation of Rotation of Camera Direction and Distance Between Two Camera Positions by Using Fisheye Lens System

  • Aregawi, Tewodros A.;Kwon, Oh-Yeol;Park, Soon-Yong;Chien, Sung-Il
    • Journal of Sensor Science and Technology / v.22 no.6 / pp.393-399 / 2013
  • We propose a method of sensing the rotation and distance of a camera by using a fisheye lens system as a vision sensor. We estimate the rotation angle of a camera with a modified correlation method by clipping similar regions to avoid symmetry problems and suppressing highlight areas. In order to eliminate the rectification process of the distorted points of a fisheye lens image, we introduce an offline process using the normalized focal length, which does not require the image sensor size. We also formulate an equation for calculating the distance of a camera movement by matching the feature points of the test image with those of the reference image.
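The role of the normalized (pixel-unit) focal length can be sketched with an equidistant fisheye model, under which a distorted image radius maps to a ray angle without any physical sensor dimensions. The equidistant form and the numbers are assumptions for illustration; the paper's exact lens model may differ:

```python
import math

def fisheye_to_angle(r_px, f_px):
    """Equidistant fisheye model r = f * theta: an image radius in pixels
    maps to a ray angle using only the focal length in pixels, so no
    physical sensor size is needed."""
    return r_px / f_px

def perspective_radius(theta, f_px):
    """Radius the same ray would have under an ideal pinhole projection."""
    return f_px * math.tan(theta)

f = 300.0                                      # hypothetical focal length, in pixels
theta = fisheye_to_angle(150.0, f)             # 150 px from center -> 0.5 rad
print(round(perspective_radius(theta, f), 1))  # rectified radius -> 163.9
```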

Development of an FPGA-based Sealer Coating Inspection Vision System for Automotive Glass Assembly Automation Equipment (자동차 글라스 조립 자동화설비를 위한 FPGA기반 실러 도포검사 비전시스템 개발)

  • Ju-Young Kim;Jae-Ryul Park
    • Journal of Sensor Science and Technology / v.32 no.5 / pp.320-327 / 2023
  • In this study, an FPGA-based sealer inspection system was developed to inspect the sealer applied to install vehicle glass on a car body. The sealer is a liquid or paste-like material that promotes adhesion, such as sealing and waterproofing, for mounting and assembling vehicle parts on a car body. The system installed on the existing vehicle design parts line does not detect the sealer in the glass rotation section and takes a long time to process. This study developed a line laser camera sensor and an FPGA vision signal processing module to solve this problem. The line laser camera sensor was developed such that the resolution and speed of the camera for data acquisition could be modified according to the irradiation angle of the laser. Furthermore, it was designed with the mountability of the entire system in mind, to prevent interference with the sealer ejection machine. In addition, a vision signal processing module was developed using the Zynq-7020 FPGA chip to improve the processing speed of the algorithm that converts the sealer shape image acquired from a 2D camera into a profile and calculates the width and height of the sealer from that profile. The performance of the developed sealer application inspection system was verified by establishing an experimental environment identical to that of an actual automobile production line. The experimental results confirmed that the sealer application inspection performs at a level satisfying automotive field standards.
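Computing bead width and height from a single triangulation profile, the final step the abstract describes, can be sketched as below; the threshold and sampling pitch are illustrative values, not the paper's parameters:

```python
def bead_width_height(profile, mm_per_sample, base_height=0.0, threshold=0.2):
    """Width and peak height of the sealer bead from one laser-triangulation
    height profile (heights in mm, one sample per transverse position)."""
    above = [i for i, h in enumerate(profile) if h - base_height > threshold]
    if not above:
        return 0.0, 0.0
    width = (above[-1] - above[0] + 1) * mm_per_sample
    height = max(profile) - base_height
    return width, height

# A synthetic profile across the bead, sampled every 0.5 mm:
profile = [0.0, 0.1, 1.2, 3.0, 3.1, 2.8, 0.9, 0.0]
print(bead_width_height(profile, mm_per_sample=0.5))   # -> (2.5, 3.1)
```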

Real-time Robotic Vision Control Scheme Using Optimal Weighting Matrix for Slender Bar Placement Task (얇은 막대 배치작업을 위한 최적의 가중치 행렬을 사용한 실시간 로봇 비젼 제어기법)

  • Jang, Min Woo;Kim, Jae Myung;Jang, Wan Shik
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.26 no.1 / pp.50-58 / 2017
  • This paper proposes a real-time robotic vision control scheme that uses a weighting matrix to efficiently process the vision data obtained while the robot moves to a target. Unlike previous studies, the scheme is based on a vision system model that can actively accommodate changes in camera parameters and robot position. The vision control algorithm involves parameter estimation, joint angle estimation, and weighting matrix models. To demonstrate the effectiveness of the proposed control scheme, this study considers two cases: applying and not applying the weighting matrix to the vision data obtained while the camera moves toward the target. Finally, the position accuracy of the two cases is compared experimentally by performing the slender-bar placement task.
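The effect of a weighting matrix on estimation can be sketched in the scalar case of weighted least squares; the data and weights below are illustrative, not the paper's estimation model:

```python
def weighted_least_squares(A, b, w):
    """Scalar weighted least squares: minimize sum_i w_i * (A_i*x - b_i)^2,
    whose closed form is x = (A^T W A)^(-1) A^T W b, written out for one
    unknown parameter."""
    num = sum(wi * ai * bi for wi, ai, bi in zip(w, A, b))
    den = sum(wi * ai * ai for wi, ai in zip(w, A))
    return num / den

measurements = [1.0, 1.1, 2.0]   # e.g. vision data gathered along the camera path
ones = [1, 1, 1]
print(weighted_least_squares(ones, measurements, [1, 1, 1]))    # uniform: plain average
print(weighted_least_squares(ones, measurements, [1, 1, 10]))   # near-target sample dominates
```

Weighting the samples taken near the target pulls the estimate toward them, which is the behavior the weighting-matrix scheme is designed to exploit.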

A Study on the Effect of Weighting Matrix of Robot Vision Control Algorithm in Robot Point Placement Task (점 배치 작업 시 제시된 로봇 비젼 제어알고리즘의 가중행렬의 영향에 관한 연구)

  • Son, Jae-Kyung;Jang, Wan-Shik;Sung, Yoon-Gyung
    • Journal of the Korean Society for Precision Engineering / v.29 no.9 / pp.986-994 / 2012
  • This paper is concerned with the application of a vision control algorithm with a weighting matrix to a robot point-placement task. The proposed vision control algorithm involves four models: the robot kinematic model, the vision system model, the parameter estimation scheme, and the robot joint angle estimation scheme. The algorithm enables the robot to move actively even if the relative position between the camera and the robot and the camera's focal length are unknown. The parameter estimation and joint angle estimation schemes take the form of nonlinear equations; in particular, the joint angle estimation model includes several restrictive conditions. In this study, a weighting matrix that assigns varying weights near the target was applied to the parameter estimation scheme, and the study investigates how this change of the weighting matrix affects the presented vision control algorithm. Finally, the effect of the weighting matrix on the robot vision control algorithm is demonstrated experimentally by performing the robot point-placement task.

A VISION SYSTEM IN ROBOTIC WELDING

  • Absi Alfaro, S. C.
    • Proceedings of the KWS Conference / 2002.10a / pp.314-319 / 2002
  • The Automation and Control Group at the University of Brasilia is developing an automatic welding station based on an industrial robot and a controllable welding machine. Several techniques were applied to improve the quality of the welding joints. This paper deals with the implementation of a laser-based computer vision system to guide the robotic manipulator during the welding process. Currently, the robot is taught to follow a prescribed trajectory, which is recorded and repeated over and over, relying on the repeatability specification from the robot manufacturer. The objective of the computer vision system is to monitor the actual trajectory followed by the welding torch and to evaluate deviations from the desired trajectory. The position errors are then transferred to a control algorithm in order to actuate the robotic manipulator and cancel the trajectory errors. The computer vision system consists of a CCD camera attached to the welding torch, a laser-emitting diode circuit, a PC-based frame grabber card, and a computer vision algorithm. The laser circuit establishes a sharp luminous reference line whose images are captured through the video camera. The raw image data are then digitized and stored in the frame grabber card for further processing using specifically written algorithms. These image-processing algorithms give the actual welding path, the relative position between the pieces, and the required corrections. Two case studies are considered: the first is the joining of two flat metal pieces; the second is concerned with joining a cylindrical piece to a flat surface. An implementation of this computer vision system using parallel computer processing is being studied.
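Extracting the stripe position and its deviation from the taught trajectory can be sketched as follows; the single-pixel argmax peak detector and the synthetic image are simplified stand-ins for the paper's image-processing algorithms:

```python
def stripe_deviation(columns, taught_row):
    """Deviation of the laser stripe from the taught trajectory, per image
    column.  Each entry of `columns` is one column's intensity profile, and
    the brightest pixel marks the stripe."""
    peaks = [max(range(len(c)), key=c.__getitem__) for c in columns]
    return [p - taught_row for p in peaks]

# Three columns of a synthetic stripe image: the stripe drifts off the
# taught row, which is the error the control algorithm must cancel.
cols = [[0, 9, 1, 0], [0, 1, 9, 0], [0, 0, 1, 9]]
print(stripe_deviation(cols, taught_row=1))   # -> [0, 1, 2]
```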


Vision-Based Displacement Measurement System Operable at Arbitrary Positions (임의의 위치에서 사용 가능한 영상 기반 변위 계측 시스템)

  • Lee, Jun-Hwa;Cho, Soo-Jin;Sim, Sung-Han
    • Journal of the Korea institute for structural maintenance and inspection / v.18 no.6 / pp.123-130 / 2014
  • In this study, a vision-based displacement measurement system is developed to accurately measure the displacement of a structure with the camera located at an arbitrary position. Previous vision-based systems introduce error when the optical axis of the camera is at an angle to the measured structure, which limits their applicability to large structures. The developed system measures displacement by processing images of a target plate attached at the measured position on a structure. To measure displacement regardless of the angle between the camera's optical axis and the target plate, planar homography is employed to match the two planes in the image and world coordinate systems. To validate the performance of the present system, a laboratory test is carried out using a small two-story shear building model. The results show that the present system measures the displacement of the structure accurately even when the camera is at a significant angle to the target plate.
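The planar-homography step can be sketched as mapping an image point into target-plate coordinates; in practice the homography is estimated from at least four point correspondences on the plate, and the matrix below is illustrative:

```python
def apply_homography(H, x, y):
    """Map an image point (x, y) through a 3x3 planar homography into
    coordinates on the target plate, dividing by the projective scale w."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return u, v

# The nonzero projective term H[2][0] models the oblique camera view that
# a plain similarity transform cannot represent.
H = [[1.0, 0.0, 10.0],
     [0.0, 1.0, -5.0],
     [0.001, 0.0, 1.0]]
print(apply_homography(H, 100.0, 40.0))
```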

Development of a Vision System for Defect Inspection of a Back Light Unit (백라이트 유닛의 결함 검사를 위한 비전 시스템 개발)

  • Han, Chang-Ho;Oh, Choon-Suk;Ryu, Young-Kee;Cho, Sang-Hee
    • The Transactions of the Korean Institute of Electrical Engineers D / v.55 no.4 / pp.161-164 / 2006
  • In this paper, we design a vision system to inspect a back light unit of a flat panel display device for defects. The vision system is divided into hardware and a defect inspection algorithm. The hardware consists of an illumination part, a robot-arm controller part, and an image-acquisition part. The illumination part is made of an acrylic panel for light diffusion, five 36 W FPLs (Fluorescent Parallel Lamps), and an electronic ballast with low-frequency harmonics. The CCD (Charge-Coupled Device) camera of the image-acquisition part can acquire a bright image from the light coming from the lamps. The image-acquisition part is composed of the CCD camera and a frame grabber. The robot-arm controller part moves the CCD camera to the desired position; to inspect surface images of the flat panel display, the camera can be controlled and positioned over every part of the panel. Images obtained through the robot arm and the image-acquisition board are saved to the hard disk by a Windows program and tested for defects using image-processing algorithms.
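A minimal form of such a defect test is brightness thresholding against the uniform illumination the backlight should produce; the threshold values and the tiny patch below are illustrative, not the paper's algorithm:

```python
def find_defects(image, low=50, high=200):
    """Return (row, col) coordinates of pixels whose brightness falls
    outside the range expected of a uniformly lit back light unit: both
    dark spots and bright spots count as defects."""
    return [(r, c) for r, row in enumerate(image)
                   for c, v in enumerate(row) if v < low or v > high]

# A 3x3 patch of a back light image with one dark and one bright defect:
patch = [[120, 118, 121],
         [119,  30, 122],
         [121, 120, 250]]
print(find_defects(patch))   # -> [(1, 1), (2, 2)]
```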