• Title/Summary/Keyword: Vision Based System

Search results: 1,683

Development of Welding Quality Vision Inspection System for Sinking Seat (차량용 싱킹시트의 용접품질 비젼 검사 시스템 개발)

  • Yun, Sang-Hwan;Kim, Han-Jong;Moon, Sang-In;Kim, Sung-Gaun
    • Proceedings of the KSME Conference / 2007.05a / pp.1553-1558 / 2007
  • This paper presents a vision-based autonomous inspection system for welding quality control of a car sinking seat. To overcome the precision errors that arise from visual inspection by an operator during the manufacturing process of a car sinking seat, this paper proposes the MVWQC (machine vision based welding quality control) system. The system consists of a CMOS camera and an NI machine vision system, and its image processing software has been developed using the NI Vision Builder. The geometry of the welding bead, which is the welding quality criterion, is measured from the captured image after a median filter has been applied to it. Experiments have been performed to verify the proposed MVWQC system for the car sinking seat. (An illustrative sketch of the bead-measurement step follows this entry.)

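A minimal sketch of the kind of bead measurement described above, under stated assumptions: an OpenCV stand-in for the NI Vision Builder pipeline, with median filtering followed by Otsu thresholding and a per-row width count. The file name, kernel size, and pixel-to-mm scale are illustrative, not values from the paper.

```python
# Hedged sketch of median-filter-based weld bead width measurement (OpenCV used
# here in place of the paper's NI Vision Builder tooling).
import cv2
import numpy as np

PIXELS_PER_MM = 20.0  # assumed camera calibration factor

def bead_width_mm(image_path: str) -> float:
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    # Median filter suppresses weld-spatter speckle while keeping bead edges.
    smoothed = cv2.medianBlur(gray, 5)
    # Otsu threshold separates the bright bead from the darker base metal.
    _, mask = cv2.threshold(smoothed, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Bead width per image row = count of bead pixels; report the median row.
    widths_px = mask.astype(bool).sum(axis=1)
    return float(np.median(widths_px[widths_px > 0])) / PIXELS_PER_MM

if __name__ == "__main__":
    print(f"estimated bead width: {bead_width_mm('bead.png'):.2f} mm")  # hypothetical file
```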

Design and Implementation of Vision Box Based on Embedded Platform (Embedded Platform 기반 Vision Box 설계 및 구현)

  • Kim, Pan-Kyu;Lee, Jong-Hyeok
    • Journal of the Korea Institute of Information and Communication Engineering / v.11 no.1 / pp.191-197 / 2007
  • A vision system is an object recognition system that analyzes image information captured through a camera. Vision systems can be applied to various fields, and vehicle recognition is one of them. Many vehicle recognition algorithms have been proposed, but they involve complex computation, so they need long processing times and sometimes cause problems. In this research we propose a vehicle type recognition system using a vision box based on an embedded platform. In testing, the system achieves a 100% recognition rate under optimal conditions; however, when the conditions are changed by lighting, noise, or viewing angle, the recognition rate decreases as the pattern score drops, and the recognition speed slows down. (A pattern-matching sketch follows this entry.)
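
A minimal sketch of template ("pattern") matching with a score threshold, the general mechanism the abstract describes; OpenCV stands in for the vision-box firmware, and the template set, file names, and threshold are illustrative assumptions.

```python
# Hedged sketch of vehicle-type recognition by normalized template matching.
import cv2

MATCH_THRESHOLD = 0.8  # assumed minimum acceptable pattern score

def classify_vehicle(frame_path, templates):
    """templates: dict mapping a vehicle-type label to a template image path."""
    frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
    best_label, best_score = None, MATCH_THRESHOLD
    for label, tmpl_path in templates.items():
        tmpl = cv2.imread(tmpl_path, cv2.IMREAD_GRAYSCALE)
        # Normalized cross-correlation tolerates uniform brightness changes, but
        # the score still drops with lighting, noise, and viewing angle.
        scores = cv2.matchTemplate(frame, tmpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, _ = cv2.minMaxLoc(scores)
        if score > best_score:
            best_label, best_score = label, score
    return best_label  # None means no template scored above the threshold

if __name__ == "__main__":
    print(classify_vehicle("frame.png", {"sedan": "sedan.png", "truck": "truck.png"}))
```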

Position Control of Robot Manipulator based on stereo vision system (스테레오 비젼에 기반한 6축 로봇의 위치 결정에 관한 연구)

  • 조환진;박광호;기창두
    • Proceedings of the Korean Society of Precision Engineering Conference / 2001.04a / pp.590-593 / 2001
  • In this paper we describe position determination for a 6-axis robot using stereo vision and an image-based control method. A stereo vision system needs additional processing time compared with a mono vision system, so to reduce the time required we use the stereo pair for depth estimation rather than for image Jacobian matrix estimation. Because it uses an image Jacobian, image-based control does not require high-precision camera calibration. The experiment is divided into two parts: depth estimation by stereo vision, and positioning of the robot manipulator. (A depth-from-disparity sketch follows this entry.)

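A minimal sketch of the depth-estimation role the stereo pair plays here: block matching for disparity, then the pinhole relation Z = f·B/d. The focal length, baseline, matcher settings, and use of OpenCV are illustrative assumptions.

```python
# Hedged sketch of depth from stereo disparity.
import cv2
import numpy as np

FOCAL_PX = 800.0    # assumed focal length in pixels
BASELINE_M = 0.12   # assumed stereo baseline in metres

def depth_map(left_path: str, right_path: str) -> np.ndarray:
    left = cv2.imread(left_path, cv2.IMREAD_GRAYSCALE)
    right = cv2.imread(right_path, cv2.IMREAD_GRAYSCALE)
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # StereoBM returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan  # mask invalid or unmatched pixels
    # Pinhole stereo geometry: Z = f * B / d.
    return FOCAL_PX * BASELINE_M / disparity

if __name__ == "__main__":
    Z = depth_map("left.png", "right.png")  # hypothetical image pair
    print("median scene depth:", np.nanmedian(Z), "m")
```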

Force monitoring of steel cables using vision-based sensing technology: methodology and experimental verification

  • Ye, X.W.;Dong, C.Z.;Liu, T.
    • Smart Structures and Systems / v.18 no.3 / pp.585-599 / 2016
  • Steel cables serve as the key structural components in long-span bridges, and the force state of the steel cable is deemed to be one of the most important factors representing the safety condition of bridge structures. The disadvantages of traditional cable force measurement methods are well recognized, and an effective alternative is still desired. In the last decade, vision-based sensing technology has been rapidly developed and broadly applied in the field of structural health monitoring (SHM). With the aid of a vision-based multi-point structural displacement measurement method, monitoring of the tensile force of the steel cable can be realized. In this paper, a novel cable force monitoring system integrated with a multi-point pattern matching algorithm is developed. The feasibility and accuracy of the developed vision-based force monitoring system have been validated by conducting uniaxial tensile tests of steel bars, steel wire ropes, and parallel strand cables on a universal testing machine (UTM) as well as a series of moving-load experiments on a scale arch bridge model. The comparative study of the experimental outcomes indicates that the results obtained by the vision-based system are consistent with those measured by the traditional method for cable force measurement. (A displacement-to-force sketch follows this entry.)
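
A hedged sketch of one common route from a vision-measured displacement history to a cable force estimate: pick the fundamental vibration frequency from an FFT and apply the taut-string relation T = 4·m·L²·f₁². This illustrates the general idea only; the paper's own multi-point pattern-matching formulation may differ, and the cable mass, length, and sampling rate below are assumptions.

```python
# Hedged sketch: cable force from a vision-tracked displacement time history.
import numpy as np

def cable_force(displacement: np.ndarray, fs_hz: float,
                mass_per_m: float, length_m: float) -> float:
    # Dominant spectral peak of the detrended displacement signal.
    signal = displacement - displacement.mean()
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs_hz)
    f1 = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    # Taut-string theory, first mode, bending stiffness neglected.
    return 4.0 * mass_per_m * length_m ** 2 * f1 ** 2

if __name__ == "__main__":
    fs = 100.0
    t = np.arange(0.0, 30.0, 1.0 / fs)
    synthetic_track = 0.002 * np.sin(2 * np.pi * 2.5 * t)  # made-up 2.5 Hz sway
    print(f"{cable_force(synthetic_track, fs, mass_per_m=60.0, length_m=100.0):.0f} N")
```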

SLAM Aided GPS/INS/Vision Navigation System for Helicopter (SLAM 기반 GPS/INS/영상센서를 결합한 헬리콥터 항법시스템의 구성)

  • Kim, Jae-Hyung;Lyou, Joon;Kwak, Hwy-Kuen
    • Journal of Institute of Control, Robotics and Systems / v.14 no.8 / pp.745-751 / 2008
  • This paper presents a framework for a GPS/INS/Vision based navigation system for helicopters. A coupled GPS/INS algorithm is vulnerable to GPS blockage and jamming, and a helicopter is a fast, highly dynamic vehicle that is prone to losing the GPS signal. A vision sensor, by contrast, is not affected by signal jamming and does not accumulate navigation error. We have therefore implemented a GPS/INS/Vision aided navigation system that provides robust localization suitable for helicopters operating in various environments. The core algorithm is a vision-based simultaneous localization and mapping (SLAM) technique. Flight tests were performed to verify the SLAM algorithm, and they confirm that the developed system remains robust under GPS blockage. The system design, software algorithm, and flight test results are described. (A simplified sensor-fusion sketch follows this entry.)
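
A deliberately simplified sketch of the loosely coupled idea behind GPS/INS/Vision integration: the inertial solution propagates the state, and either a GPS fix or a vision/SLAM-derived position corrects it, so corrections continue during GPS blockage. The two-state model and noise values are assumptions and are far simpler than the paper's SLAM formulation.

```python
# Hedged sketch of a one-axis position/velocity Kalman filter fused from
# INS prediction and GPS or vision position fixes.
import numpy as np

class PositionFilter:
    def __init__(self, dt: float):
        self.x = np.zeros(2)                       # [position, velocity]
        self.P = np.eye(2)
        self.F = np.array([[1.0, dt], [0.0, 1.0]])
        self.Q = np.diag([0.01, 0.1])              # assumed INS process noise
        self.H = np.array([[1.0, 0.0]])            # both fixes observe position

    def predict(self, accel: float, dt: float):
        # INS mechanization step: integrate the measured acceleration.
        self.x = self.F @ self.x + np.array([0.5 * dt * dt, dt]) * accel
        self.P = self.F @ self.P @ self.F.T + self.Q

    def correct(self, measured_pos: float, meas_var: float):
        # meas_var would be small for a GPS fix, larger for the vision/SLAM fix.
        S = self.H @ self.P @ self.H.T + meas_var
        K = self.P @ self.H.T / S
        self.x = self.x + (K * (measured_pos - self.H @ self.x)).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
```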

A vision-based robotic assembly system

  • Oh, Sang-Rok;Lim, Joonhong;Shin, You-Shik;Bien, Zeungnam
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 1987.10a / pp.770-775 / 1987
  • In this paper, design and development experiences with a vision-based robotic assembly system for electronic components are described. Specifically, the overall system consists of three subsystems, each of which employs a 16-bit MC68000 microprocessor: a supervisory controller, a real-time vision system, and a servo system. The three microprocessors are interconnected through a time-shared common memory bus with a hardwired bus arbitration scheme and operate in a master-slave configuration in which each slave has a fixed software function. With this system architecture, the following are developed and implemented in this research: (i) a system programming language, called 'CLRC', for the man-machine interface, including robot motion and vision primitives; (ii) a real-time vision system using a hardwired chain coder; and (iii) high-precision servo techniques for high-speed DC motors and high-speed stepping motors. The proposed control system was implemented and tested successfully in real time. (A chain-coding sketch follows this entry.)

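A small sketch of Freeman chain coding, the contour encoding that the system's hardwired "chain coder" produced in hardware; here it is reconstructed in software with OpenCV purely for illustration, and the 8-direction convention is an assumption about the original encoding.

```python
# Hedged sketch: Freeman chain code of a binary object's outer boundary.
import cv2
import numpy as np

# Unit step (dx, dy) in image coordinates (y grows downward) -> code 0..7.
FREEMAN = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
           (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

def freeman_chain_code(mask: np.ndarray) -> list:
    # CHAIN_APPROX_NONE keeps every boundary pixel, so successive points
    # are always 8-connected neighbours.
    contours, _ = cv2.findContours(mask.astype(np.uint8), cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return []
    pts = contours[0][:, 0, :]          # (N, 2) array of (x, y) boundary points
    if len(pts) < 2:
        return []
    codes = []
    for (x0, y0), (x1, y1) in zip(pts, np.roll(pts, -1, axis=0)):
        codes.append(FREEMAN[(int(x1 - x0), int(y1 - y0))])
    return codes
```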

A VISION SYSTEM IN ROBOTIC WELDING

  • Absi Alfaro, S. C.
    • Proceedings of the KWS Conference / 2002.10a / pp.314-319 / 2002
  • The Automation and Control Group at the University of Brasilia is developing an automatic welding station based on an industrial robot and a controllable welding machine. Several techniques were applied in order to improve the quality of the welding joints. This paper deals with the implementation of a laser-based computer vision system to guide the robotic manipulator during the welding process. Currently the robot is taught to follow a prescribed trajectory, which is recorded and repeated over and over, relying on the repeatability specification from the robot manufacturer. The objective of the computer vision system is to monitor the actual trajectory followed by the welding torch and to evaluate deviations from the desired trajectory. The position errors are then transferred to a control algorithm that actuates the robotic manipulator and cancels the trajectory errors. The computer vision system consists of a CCD camera attached to the welding torch, a laser-emitting diode circuit, a PC-based frame grabber card, and a computer vision algorithm. The laser circuit projects a sharp luminous reference line whose images are captured through the video camera. The raw image data are then digitized and stored in the frame grabber card for further processing using specifically written algorithms. These image-processing algorithms give the actual welding path, the relative position between the pieces, and the required corrections. Two case studies are considered: the first is the joining of two flat metal pieces, and the second concerns joining a cylindrical piece to a flat surface. An implementation of this computer vision system using parallel processing is being studied. (A seam-tracking sketch follows this entry.)

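A hedged sketch of laser-stripe seam tracking in the spirit of the system above: locate the bright stripe in each image column and convert the seam position into a lateral correction for the torch. The brightest-peak extraction, the "seam = deepest point of the stripe" rule, the reference column, and the pixel-to-mm scale are illustrative assumptions.

```python
# Hedged sketch: lateral torch correction from a single laser-stripe frame.
import cv2
import numpy as np

PIXELS_PER_MM = 10.0   # assumed calibration of the camera/laser geometry
REFERENCE_COL = 320    # assumed image column where the seam should sit

def lateral_correction_mm(frame_path: str) -> float:
    gray = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE).astype(np.float32)
    # The laser stripe is the brightest structure: take the peak row per column.
    stripe_rows = gray.argmax(axis=0)
    # On a grooved joint the stripe dips at the seam; the deepest point of the
    # profile is taken here as the seam location (a simplifying assumption).
    seam_col = int(stripe_rows.argmax())
    deviation_px = seam_col - REFERENCE_COL
    return deviation_px / PIXELS_PER_MM  # sign gives the correction direction

if __name__ == "__main__":
    print(f"move torch by {lateral_correction_mm('stripe.png'):+.2f} mm")  # hypothetical frame
```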

Hybrid Inertial and Vision-Based Tracking for VR applications (가상 현실 어플리케이션을 위한 관성과 시각기반 하이브리드 트래킹)

  • Gu, Jae-Pil;An, Sang-Cheol;Kim, Hyeong-Gon;Kim, Ik-Jae;Gu, Yeol-Hoe
    • Proceedings of the KIEE Conference / 2003.11b / pp.103-106 / 2003
  • In this paper, we present a hybrid inertial and vision-based tracking system for VR applications. One of the most important aspects of VR (Virtual Reality) is providing a correspondence between the physical and virtual worlds. As a result, accurate and real-time tracking of an object's position and orientation is a prerequisite for many applications in virtual environments. Pure vision-based tracking has low jitter and high accuracy but cannot guarantee real-time pose recovery under all circumstances. Pure inertial tracking has high update rates and full 6DOF recovery but lacks long-term stability due to sensor noise. In order to overcome the individual drawbacks and to build a better tracking system, we introduce the fusion of vision-based and inertial tracking. Sensor fusion makes the proposed tracking system robust, fast, and accurate, with low jitter and noise. Hybrid tracking is implemented with a Kalman filter that operates in a predictor-corrector manner. Adding a Bluetooth serial communication module gives the system full mobility and makes it affordable, lightweight, energy-efficient, and practical. Full 6DOF recovery and the full mobility of the proposed system enable the user to interact with mobile devices such as PDAs and provide the user with a natural interface. (A predictor-corrector sketch follows this entry.)

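A minimal sketch of the predictor-corrector pattern described above, reduced to a single orientation angle: high-rate gyro (inertial) prediction with low-rate vision correction of the accumulated drift. The noise values and update rates are assumptions; the paper's filter recovers the full 6DOF pose.

```python
# Hedged sketch: scalar Kalman-style predictor-corrector fusing gyro and vision.
class HybridAngleTracker:
    def __init__(self):
        self.angle = 0.0   # estimated orientation (rad)
        self.P = 1.0       # estimate variance
        self.q = 1e-4      # assumed gyro (process) noise added per prediction
        self.r = 1e-2      # assumed vision measurement noise

    def predict(self, gyro_rate: float, dt: float):
        # Inertial prediction: integrate the angular rate (fast, but drifts).
        self.angle += gyro_rate * dt
        self.P += self.q

    def correct(self, vision_angle: float):
        # Vision correction: low-jitter absolute measurement at a lower rate.
        K = self.P / (self.P + self.r)
        self.angle += K * (vision_angle - self.angle)
        self.P *= (1.0 - K)

tracker = HybridAngleTracker()
for step in range(1000):
    tracker.predict(gyro_rate=0.01, dt=0.005)    # e.g. 200 Hz inertial updates
    if step % 20 == 0:                           # e.g. 10 Hz vision updates
        tracker.correct(vision_angle=0.01 * 0.005 * step)  # synthetic measurement
```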

An embedded vision system based on an analog VLSI Optical Flow vision sensor

  • Becanovic, Vlatako;Matsuo, Takayuki;Stocker, Alan A.
    • Proceedings of the Korea Society of Information Technology Applications Conference / 2005.11a / pp.285-288 / 2005
  • We propose a novel programmable miniature vision module based on a custom-designed analog VLSI (aVLSI) chip. The vision module consists of the optical flow vision sensor combined with commercial off-the-shelf digital hardware; in our case, an Intel XScale PXA270 processor augmented with a programmable gate array device. The aVLSI sensor provides gray-scale image data as well as smooth optical flow estimates; thus each pixel gives a triplet of information that can be continuously read out as three independent images. The particular computational architecture of the custom-designed sensor, which is fully parallel and analog, allows for efficient real-time estimation of the smooth optical flow. The Intel XScale PXA270 controls the sensor read-out and, together with the programmable gate array, allows additional higher-level processing of the intensity image and optical flow data. It also provides the necessary standard interfaces so that the module can be easily programmed and integrated into different vision systems, or even form a complete stand-alone vision system itself. The low power consumption, small size, and flexible interface of the proposed vision module suggest that it is particularly well suited as a vision system for an autonomous robotics platform and especially for educational projects in the robotic sciences. (A read-out processing sketch follows this entry.)

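A small sketch of what the host side might do with the sensor's per-pixel triplet (intensity, flow-u, flow-v) once the three images have been read out: reduce them to a simple motion summary. The arrays and the motion threshold are assumptions; the actual read-out path through the PXA270 and gate array is not modelled here.

```python
# Hedged sketch: summarizing an (intensity, flow-u, flow-v) triplet of images.
import numpy as np

MOTION_THRESHOLD = 0.5  # assumed flow magnitude (pixels/frame) meaning "moving"

def summarize(intensity: np.ndarray, flow_u: np.ndarray, flow_v: np.ndarray) -> str:
    magnitude = np.hypot(flow_u, flow_v)
    moving = magnitude > MOTION_THRESHOLD
    if not moving.any():
        return "static scene"
    # Dominant motion direction, weighted by local image brightness.
    weights = intensity[moving] - intensity[moving].min() + 1e-6
    direction = np.arctan2(np.average(flow_v[moving], weights=weights),
                           np.average(flow_u[moving], weights=weights))
    return f"motion toward {np.degrees(direction):.0f} deg"

if __name__ == "__main__":
    h, w = 64, 64
    print(summarize(np.full((h, w), 128.0),              # flat synthetic intensity
                    np.ones((h, w)), np.zeros((h, w))))  # uniform rightward flow
```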

Compensation of Installation Errors in a Laser Vision System and Dimensional Inspection of Automobile Chassis

  • Barkovski Igor Dunin;Samuel G.L.;Yang Seung-Han
    • Journal of Mechanical Science and Technology / v.20 no.4 / pp.437-446 / 2006
  • Laser vision inspection systems are becoming popular for automated inspection of manufactured components. The performance of such systems can be enhanced by improving the accuracy of the hardware and the robustness of the software used in the system. This paper presents a new approach for enhancing the capability of a laser vision system by applying hardware compensation and using efficient analysis software. A 3D geometrical model is developed to study and compensate for possible distortions in the installation of the gantry robot on which the vision system is mounted. Appropriate compensation is applied to the inspection data obtained from the laser vision system based on the parameters in the 3D model. The present laser vision system is used for dimensional inspection of a car chassis sub-frame and lower-arm assembly module. An algorithm based on simplex search techniques is used for analyzing the compensated inspection data. The details of the 3D model, the parameters used for compensation, and the measurement data obtained from the system are presented in this paper. The details of the search algorithm used for analyzing the measurement data and the results obtained are also presented. It is observed from the results that, by applying compensation and using appropriate analysis algorithms, the error in evaluating the inspection data can be significantly reduced, thus lowering the risk of rejecting good parts. (A simplex-search fitting sketch follows this entry.)
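
A hedged sketch of the simplex-search step: fitting a geometric feature (here a circle, such as a hole centre on the sub-frame) to compensated laser-vision points with the Nelder-Mead simplex method via SciPy. The choice of feature and cost function are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: Nelder-Mead (simplex) fit of a circle to compensated points.
import numpy as np
from scipy.optimize import minimize

def fit_circle(points: np.ndarray):
    """points: (N, 2) array of compensated x-y measurements."""
    def cost(params):
        cx, cy, r = params
        radial = np.hypot(points[:, 0] - cx, points[:, 1] - cy)
        return np.sum((radial - r) ** 2)  # sum of squared radial errors

    guess = [points[:, 0].mean(), points[:, 1].mean(), points.std()]
    result = minimize(cost, guess, method="Nelder-Mead")
    cx, cy, r = result.x
    return (cx, cy), r

if __name__ == "__main__":
    theta = np.linspace(0.0, 2.0 * np.pi, 40)
    noisy = np.c_[10 + 5 * np.cos(theta), -3 + 5 * np.sin(theta)]
    noisy += np.random.normal(0.0, 0.05, noisy.shape)
    print(fit_circle(noisy))  # expect a centre near (10, -3) and radius near 5
```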