• Title/Summary/Keyword: Machine vision inspection

Search results: 241

An Automatic Weight Measurement of Rope Using Computer Vision

  • Joo, Ki-See
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.2 no.1
    • /
    • pp.141-146
    • /
    • 1998
  • Recently, computer vision applications such as part measurement and product inspection have become popular for factory automation, since labor costs are rising sharply. In this paper, the diameter and length of a rope are measured by a CCD camera mounted orthogonally on the ceiling. These two parameters, diameter and length, are used to estimate the weight of the rope. When the weight of the rope reaches a predetermined value, the information is transmitted to a PLC (programmable logic controller) to cut the rope on the wheel, and the cutting machine cuts the rope according to the information obtained from the CCD camera. To measure the diameter and length of the rope in real time, the search space for image segmentation is restricted to a predetermined area based on the camera calibration position. Finally, to estimate the weight of the rope, a knowledge base relating diameter, length, and weight is constructed for each rope diameter. This method contributes to factory automation and reduces production costs, since operators no longer need to determine the rope weight by trial and error.
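
The weight estimate itself follows directly from the two measured parameters. Below is a minimal sketch of the idea, treating the rope as a cylinder of assumed density; the paper instead builds a knowledge base per rope diameter, so the function name and density value here are purely illustrative.

    import math

    # Hypothetical helper: estimate rope weight (grams) from the
    # camera-measured diameter and length, treating the rope as a solid
    # cylinder. The density value is a placeholder assumption.
    def estimate_rope_weight(diameter_mm, length_mm, density_g_per_mm3=0.95e-3):
        radius = diameter_mm / 2.0
        volume = math.pi * radius ** 2 * length_mm      # mm^3
        return volume * density_g_per_mm3               # grams

    # When the estimate reaches the target, the cut signal would go to the PLC.
    TARGET_WEIGHT_G = 5000.0
    if estimate_rope_weight(12.0, 48000.0) >= TARGET_WEIGHT_G:
        pass  # send cut command to the PLC here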


An Image Processing Algorithm for a Visual Weld Defects Detection on Weld Joint in Steel Structure (강구조물 용접이음부 외부결함의 자동검출 알고리즘)

  • Seo, Won Chan;Lee, Dong Uk
    • Journal of Korean Society of Steel Construction
    • /
    • v.11 no.1 s.38
    • /
    • pp.1-11
    • /
    • 1999
  • The aim of this study is to construct a machine vision monitoring system for automatic visual inspection of weld joints in steel structures. An image processing algorithm for detecting visual weld defects on the weld bead is developed using intensity images. An optical system for acquiring four intensity images was set up with a fixed camera position and four different illumination directions. The input images were thresholded and segmented after suitable preprocessing, and the features of each region were defined and calculated. These features were used in the detection and classification of the visual weld defects. It is confirmed that the developed algorithm can detect weld defects that could not be detected by previously developed techniques. The recognized results were evaluated and compared with expert inspectors' results.
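
The detection pipeline summarized above (thresholding, segmentation, per-region features) can be sketched with OpenCV as below; Otsu thresholding and the particular features shown are generic stand-ins, not the paper's exact definitions.

    import cv2
    import numpy as np

    def region_features(gray):
        """Threshold an intensity image of a weld bead and return simple
        per-region features (area, bounding box, mean intensity)."""
        _, mask = cv2.threshold(gray, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
        feats = []
        for i in range(1, num):                 # label 0 is the background
            x, y, w, h, area = stats[i]
            feats.append({"area": int(area),
                          "bbox": (int(x), int(y), int(w), int(h)),
                          "mean": float(gray[labels == i].mean())})
        return feats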


A Study on the Visualization of Suzi Mura Defect of FPD Color Filter (FPD용 컬러 필터의 수지 얼룩 결함 형상화에 관한 연구)

  • Kwon, Oh-Min;Lee, Jung-Seob;Park, Duck-Chun;Joo, Hyo-Nam;Kim, Joon-Seek
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.8
    • /
    • pp.761-771
    • /
    • 2009
  • Detecting defects on an FPD (Flat Panel Display) color filter before the full panel is made is important for reducing manufacturing cost. Among the many types of defects, low-contrast blemishes such as Suzi Mura are difficult to detect with standard CCD cameras. Even skilled inspectors on the inspection line can hardly identify such defects with the bare eye. To overcome this difficulty, a point spectrometer has been used to analyze the spectrum and differentiate such defects from normal color filters. However, scanning ever larger color filters with a point spectrometer takes too long to be usable on a real production line. We propose a system using a spectral camera, which can be viewed as a line-scan camera composed of an array of point spectrometers. Three types of lighting systems with different illumination spectra are devised, together with a calibration method for the proposed spectral camera system. To visualize the defect areas, various processing algorithms that identify and enhance the small spectral differences between defective and normal areas are developed. Experiments show 85% successful visualization of real samples using the proposed system.
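
A common way to expose small per-pixel spectral differences against a reference spectrum is the spectral angle; the sketch below is a generic example of that idea, not the paper's specific visualization pipeline, and the cube layout and variable names are assumptions.

    import numpy as np

    def spectral_deviation_map(cube, ref_spectrum):
        """Per-pixel spectral angle between each spectrum in a (H, W, B)
        cube from the line-scan spectral camera and a reference spectrum
        taken from a defect-free region; larger angles suggest blemishes."""
        flat = cube.reshape(-1, cube.shape[-1]).astype(np.float64)
        ref = ref_spectrum.astype(np.float64)
        cos = flat @ ref / (np.linalg.norm(flat, axis=1) *
                            np.linalg.norm(ref) + 1e-12)
        angles = np.arccos(np.clip(cos, -1.0, 1.0))
        return angles.reshape(cube.shape[:2])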

Automatic Detection of Surface Region for Color Fastener Inspection (칼라 나사 검사를 위한 표면 영역 자동 검출)

  • Song, Tae-Hoon;Ha, Jong-Eun
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.11 no.1
    • /
    • pp.107-112
    • /
    • 2016
  • Fasteners are very important components used in various areas of industry. Recently, various color fasteners have been introduced, and accordingly online inspection is required in this area. In this paper, an algorithm for the automatic extraction of the surface of a color fastener using color information and dynamic programming is presented. The outer boundary of the fastener is found using color differences, which enables robust processing. The inner boundary of the fastener is found by dynamic programming that uses differences of brightness values within a fixed area after converting the image to polar coordinates. All experiments are done using the same parameters.
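
The inner-boundary step (polar conversion followed by dynamic programming on brightness differences) can be sketched roughly as follows; the cost function and the ±1 smoothness constraint are generic choices rather than the paper's exact formulation.

    import cv2
    import numpy as np

    def inner_boundary_polar(gray, center, max_radius):
        """Trace a closed boundary by dynamic programming in polar space.
        The cost rewards strong radial brightness changes while keeping the
        chosen radius smooth from one angle to the next."""
        polar = cv2.warpPolar(gray, (max_radius, 360), center, max_radius,
                              cv2.WARP_POLAR_LINEAR)   # rows: angle, cols: radius
        grad = np.abs(np.diff(polar.astype(np.float32), axis=1))
        acc = -grad                                    # minimize negative gradient
        n_ang, n_rad = acc.shape
        for a in range(1, n_ang):
            prev = np.minimum.reduce([np.roll(acc[a - 1], 1),
                                      acc[a - 1],
                                      np.roll(acc[a - 1], -1)])
            acc[a] += prev
        radius = np.empty(n_ang, dtype=int)
        radius[-1] = int(np.argmin(acc[-1]))
        for a in range(n_ang - 2, -1, -1):             # backtrack within +-1
            lo = max(radius[a + 1] - 1, 0)
            hi = min(radius[a + 1] + 2, n_rad)
            radius[a] = lo + int(np.argmin(acc[a, lo:hi]))
        return radius                                  # one radius per angle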

Development of Real-Time TCP/COF Inspection System using Differential Image (차영상을 이용한 실시간 TCP/COF 검사 시스템 개발)

  • Lee, Sang-Won;Choi, Hwan-Yong;Lee, Dae-Jong;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.22 no.1
    • /
    • pp.87-93
    • /
    • 2012
  • In this paper, we propose a faulty-pattern detection algorithm for TCP (Tape Carrier Package)/COF (Chip On Film) and implement a real-time system for inspecting TCP/COF. Since TCP/COF patterns have very fine features on the order of a few micrometers, a human operator otherwise has to visually inspect all the parts through a microscope. In this work, we implement an inspection system that detects faulty patterns, so the operator visually checks on the monitor only the parts designated by the inspection system. The proposed defect detection algorithm for TCP/COF packages is implemented by a pattern matching method based on subtracting the reference image from the test image. To evaluate the performance of the proposed system, we performed various experiments according to the type of CCD camera and light source as well as the illumination projection method. The experimental results confirm that the proposed system can effectively detect defective TCP/COF film.
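
In its simplest form, the subtraction-based matching described above is a registered absolute difference followed by a threshold; the threshold value and the morphological cleanup below are assumed placeholders.

    import cv2
    import numpy as np

    def defect_mask(test_gray, ref_gray, diff_thresh=30):
        """Flag defect candidates by subtracting a registered reference
        image from the test image and thresholding the difference."""
        diff = cv2.absdiff(test_gray, ref_gray)
        _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
        # Remove isolated noise pixels left by minor misregistration.
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                                np.ones((3, 3), np.uint8))
        return mask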

Vibration Control of Working Booms on Articulated Bridge Inspection Robots (교량검사 굴절로봇 작업붐의 진동제어)

  • Hwang, In-Ho;Lee, Hu-Seok;Lee, Jong-Seh
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.21 no.5
    • /
    • pp.421-427
    • /
    • 2008
  • A robot crane truck is developed by the Bridge Inspection Robot Development Interface (BRIDI) for automated and/or teleoperated bridge inspection. This crane truck looks similar to a conventional bucket crane, but is much smaller and lighter. At the end of the 12 m telescoping boom, a robot platform is mounted that allows the operator to scan the bridge structure under the deck through a camera. Boom vibration induced by wind and deck movement can cause serious problems in this scanning system. This paper presents a control system to mitigate such vibration of the robot boom. In the proposed control system, an actuator is installed at the end of the working boom. The control system is studied using a mathematical model analysis with an LQ control algorithm and a scaled model test in the laboratory. The study indicates that the proposed system is effective for vibration control of the robot boom, demonstrating its immediate applicability in the field.
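
The LQ part of the study can be illustrated with a generic continuous-time LQR gain computation; the two-state oscillator below is a hypothetical stand-in for the boom model, not the matrices used in the paper.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    def lqr_gain(A, B, Q, R):
        """Continuous-time LQ regulator gain K, for the control law u = -K x."""
        P = solve_continuous_are(A, B, Q, R)
        return np.linalg.solve(R, B.T @ P)

    # Hypothetical single vibration mode of the boom tip, modeled as a
    # lightly damped oscillator with state x = [displacement, velocity].
    wn, zeta = 2.0 * np.pi * 1.5, 0.02        # assumed natural frequency / damping
    A = np.array([[0.0, 1.0], [-wn ** 2, -2.0 * zeta * wn]])
    B = np.array([[0.0], [1.0]])
    K = lqr_gain(A, B, Q=np.diag([10.0, 1.0]), R=np.array([[0.1]]))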

Algorithm for Discrimination of Brown Rice Kernels Using Machine Vision (기계시각을 이용한 현미의 개체 품위 판별 알고리즘 개발)

  • 노상하;황창선;이종환
    • Journal of Biosystems Engineering
    • /
    • v.22 no.3
    • /
    • pp.295-302
    • /
    • 1997
  • The ultimate purpose of this study was to develop an automatic system for brown rice quality inspection using image processing techniques. In this study, emphasis was put on developing an algorithm for discriminating brown rice kernels by their external quality, using a color image processing system equipped with an adaptor magnifying the input image and optical fibers for oblique lighting. First, the geometrical and optical features of the images were analyzed for paddy and various brown rice kernel samples: sound, cracked, green-transparent, green-opaque, colored, white-opaque, and broken kernels. Second, the geometrical and optical parameters significant for identifying each kernel type were screened by statistical analysis (STEPWISE and DISCRIM procedures, SAS ver. 6), an algorithm for on-line discrimination of rice kernels in a static state was developed, and finally its performance was evaluated. The results are summarized as follows. 1) It was ascertained that cracked kernels can be detected when the incident angle of the oblique light is less than 20°, but detectability was significantly affected by the angle between the direction of the oblique light and the longitudinal axis of the rice kernel, and also by the location of the embryo with respect to the oblique light. 2) The most significant parameters for discriminating brown rice kernels are area, length, and the R, B, and r values, among the several geometrical and optical parameters. 3) The discrimination accuracies of the algorithm ranged from 90% to 96% for sound, cracked, colored, broken, and unhulled kernels, about 81% for green-transparent and white-opaque kernels, and 75% for green-opaque kernels.
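
The screening-plus-discrimination step maps naturally onto a linear discriminant classifier over the features the study found most useful (area, length, R, B, and r). The sketch below uses random placeholder data in place of measured kernel features.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Feature order assumed here: area, length, mean R, mean B, r = R/(R+G+B).
    # Random rows stand in for features extracted from segmented kernel images.
    X_train = np.random.rand(60, 5)
    y_train = np.random.randint(0, 5, size=60)    # kernel classes (sound, cracked, ...)

    clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
    predicted_class = clf.predict(np.random.rand(1, 5))   # classify a new kernel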


Efficient Eye Location for Biomedical Imaging using Two-level Classifier Scheme

  • Nam, Mi-Young;Wang, Xi;Rhee, Phill-Kyu
    • International Journal of Control, Automation, and Systems
    • /
    • v.6 no.6
    • /
    • pp.828-835
    • /
    • 2008
  • We present a novel method for eye location by means of a two-level classifier scheme. Locating the eye by machine inspection of an image or video is an important problem for computer vision and is of particular value to applications in biomedical imaging. Our method aims to maintain high eye-location accuracy despite highly variable changes in the environment. A first level of computational analysis processes the image context. This is followed by object detection by means of a two-class discrimination classifier (the second algorithmic level). We have tested our eye location system using the FERET and BioID databases. We compared the performance of the two-level classifier with that of a single-level classifier and found that the two-level classifier performs better.
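
A two-level arrangement of this kind, context analysis first and eye/non-eye discrimination second, can be sketched as below; the brightness-based context bins, the SVM choice, and the random training data are all assumptions rather than the paper's actual components.

    import numpy as np
    from sklearn.svm import SVC

    def image_context(gray_patch):
        """Level 1: a coarse context label from global brightness (a placeholder
        for the paper's richer context analysis)."""
        m = float(gray_patch.mean())
        return "dark" if m < 85 else "normal" if m < 170 else "bright"

    # Level 2: one eye / non-eye discriminator per context, trained on
    # hypothetical 10x10 candidate patches flattened to 100 values.
    classifiers = {}
    for ctx in ("dark", "normal", "bright"):
        X = np.random.rand(40, 100)
        y = np.random.randint(0, 2, size=40)      # 1 = eye, 0 = non-eye
        classifiers[ctx] = SVC().fit(X, y)

    def is_eye(candidate_patch):
        ctx = image_context(candidate_patch)
        return bool(classifiers[ctx].predict(candidate_patch.reshape(1, -1))[0])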

Detection of Calibration Patterns for Camera Calibration with Irregular Lighting and Complicated Backgrounds

  • Kang, Dong-Joong;Ha, Jong-Eun;Jeong, Mun-Ho
    • International Journal of Control, Automation, and Systems
    • /
    • v.6 no.5
    • /
    • pp.746-754
    • /
    • 2008
  • This paper proposes a method to detect calibration patterns for accurate camera calibration under the complicated backgrounds and uneven lighting conditions of industrial fields. To measure object dimensions, the preprocessing for camera calibration must be able to extract calibration points from a calibration pattern. However, industrial sites requiring visual inspection rarely provide proper lighting conditions for camera calibration of a measurement system. In this paper, a probabilistic criterion is proposed to detect a local set of calibration points, which then guides the extraction of the remaining calibration points in a cluttered background under irregular lighting conditions. Even if only a local part of the calibration pattern is visible, input data for camera calibration can be extracted. In an experiment using real images, we verified that the method can be applied to camera calibration for poor-quality images obtained under uneven illumination and with cluttered backgrounds.
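
For comparison, a common baseline for extracting chessboard calibration points under uneven lighting is OpenCV's built-in detector with adaptive thresholding and image normalization enabled; this is not the probabilistic local-point criterion proposed in the paper.

    import cv2

    def find_calibration_points(gray, pattern_size=(9, 6)):
        """Detect chessboard corners on an unevenly lit grayscale image and
        refine them to sub-pixel accuracy for camera calibration."""
        found, corners = cv2.findChessboardCorners(
            gray, pattern_size,
            flags=cv2.CALIB_CB_ADAPTIVE_THRESH | cv2.CALIB_CB_NORMALIZE_IMAGE)
        if found:
            corners = cv2.cornerSubPix(
                gray, corners, (5, 5), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01))
        return found, corners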

Visual Bean Inspection Using a Neural Network

  • Kim, Taeho;Yongtae Do
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2003.09a
    • /
    • pp.644-647
    • /
    • 2003
  • This paper describes a neural network based machine vision system designed for inspecting yellow beans in real time. The system consists of a camera, lights, a belt conveyor, air ejectors, and a computer. Beans are conveyed in four lines on a belt, and their images are taken by a monochrome line scan camera as they fall from the end of the belt. Beans are easily separated from the background in the images by back-lighting. After analyzing the image, a decision is made by a multilayer artificial neural network (ANN) trained with the error back-propagation (EBP) algorithm. We use the global mean, the variance, and the local change of gray levels of a bean as the input nodes of the network. In our experiment, the designed system could process about 520 kg/hour.
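
The feature extraction and network decision described above can be sketched as follows; the local-change definition, the network size, and the random training data are assumptions standing in for the paper's setup.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def bean_features(gray_bean):
        """Global mean, variance, and a simple local-change measure of the
        bean's gray levels (the exact local-change definition is assumed)."""
        local_change = np.abs(np.diff(gray_bean.astype(np.float32), axis=1)).mean()
        return [float(gray_bean.mean()), float(gray_bean.var()), float(local_change)]

    # A small multilayer network trained with a back-propagation-based
    # optimizer; the training data below is a random placeholder.
    X = np.random.rand(200, 3)
    y = np.random.randint(0, 2, size=200)          # 1 = accept, 0 = eject with air
    net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500).fit(X, y)
    decision = net.predict([bean_features(np.random.rand(20, 30) * 255)])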
