• Title/Summary/Keyword: line CCD sensor

39 search results

A Study on Automatic Seam Tracking using Vision Sensor (비전센서를 이용한 자동추적장치에 관한 연구)

  • 전진환;조택동;양상민
    • Proceedings of the Korean Society of Precision Engineering Conference / 1995.10a / pp.1105-1109 / 1995
  • A CCD camera integrated into the vision system was used to realize an automatic seam-tracking system, and the 3-D information needed to generate the torch path was obtained using a laser slit beam. To extract the laser stripe and obtain the welding-specific point, the adaptive Hough transformation was used. Although the basic Hough transformation takes too much time for on-line image processing, it tends to be robust to noise such as spatter. For that reason, it was complemented with the adaptive Hough transformation to gain on-line processing capability for scanning a welding-specific point. The dead zone, where sensing of the weld line is impossible, is eliminated by rotating the camera about an axis centered at the welding torch. The camera angle is controlled so as to acquire the minimum image data needed for sensing the weld line, hence the image processing time is reduced. A fuzzy controller is adopted to control the camera angle.
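
The adaptive restriction of the Hough search described above can be sketched as follows. This is a minimal numpy illustration, not the authors' implementation; the function names, the accumulator resolution and the ±10° window are assumptions:

```python
import numpy as np

def hough_line_votes(points, thetas, n_rho=128, rho_max=200.0):
    """Accumulate Hough votes for lines rho = x*cos(t) + y*sin(t)."""
    acc = np.zeros((len(thetas), n_rho))
    for x, y in points:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.clip(((rhos + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int),
                      0, n_rho - 1)
        acc[np.arange(len(thetas)), idx] += 1
    return acc

def adaptive_hough(points, prev_theta=None, window=np.deg2rad(10), n_theta=180):
    # Adaptive step: search only a narrow angle window around the
    # previous frame's detection instead of the full [0, pi) range,
    # which is what makes on-line processing feasible.
    if prev_theta is None:
        thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    else:
        thetas = np.linspace(prev_theta - window, prev_theta + window, n_theta)
    acc = hough_line_votes(points, thetas)
    t_i, _ = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas[t_i]
```

After the first full-range detection, each subsequent frame only evaluates the narrow window, so the per-frame cost drops by roughly the ratio of window size to the full angle range.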


A Study on the Vision Sensor System for Tracking the I-Butt Weld Joints (I형 맞대기 용접선 추적용 시각센서 시스템에 관한 연구)

  • Bae, Hee-Soo;Kim, Jae-Woong
    • Journal of the Korean Society for Precision Engineering / v.18 no.9 / pp.179-185 / 2001
  • In this study, a visual sensor system for tracking the weld seam of I-butt weld joints in GMA welding was constructed. The sensor system consists of a CCD camera, a diode laser with a cylindrical lens, and a band-pass filter to overcome image degradation due to spatter and arc light. In order to obtain an enhanced image, the quantitative relationship between laser intensity and iris number was investigated. Through repeated experiments, the shutter speed was set at 1 millisecond to minimize the effect of spatter on the image, and most of the spatter traces in the image were thereby reduced. A region of interest was defined in the entire image, and the gray level of the searched laser line was compared to that of the weld line. The differences between these gray levels were used to locate the position of the weld joint by the central difference method. The results showed that, as long as the weld line was within $\pm15^{\circ}$ of the longitudinal straight line, the system constructed in this study could track the weld line successfully. Since the processing time was reduced to 0.05 sec, the developed method is expected to be applicable to high-speed welding such as laser welding.
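
The central-difference localization mentioned above can be illustrated with a short sketch: along the laser stripe the gray level drops inside the I-butt gap, and the two extrema of the central-difference derivative bracket the joint. This is a toy reconstruction, not the paper's code:

```python
import numpy as np

def find_joint(profile):
    """Locate an I-butt joint as the intensity gap in a laser-stripe
    gray-level profile, using the central-difference derivative."""
    p = np.asarray(profile, dtype=float)
    # central difference: d[i] = (p[i+1] - p[i-1]) / 2
    d = np.zeros_like(p)
    d[1:-1] = (p[2:] - p[:-2]) / 2.0
    fall = int(np.argmin(d))    # bright stripe -> dark gap
    rise = int(np.argmax(d))    # dark gap -> bright stripe
    return (fall + rise) / 2.0  # joint center (pixel index)
```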


A Study on a Vision Sensor System for Tracking the I-Butt Weld Joints

  • Kim Jae-Woong;Bae Hee-Soo
    • Journal of Mechanical Science and Technology / v.19 no.10 / pp.1856-1863 / 2005
  • In this study, a visual sensor system for tracking the weld seam of I-butt weld joints in GMA welding was constructed. The sensor system consists of a CCD camera, a diode laser with a cylindrical lens, and a band-pass filter to overcome image degradation due to spatter and arc light. In order to obtain an enhanced image, the quantitative relationship between laser intensity and iris opening was investigated. Through repeated experiments, the shutter speed was set at 1/1000 second to minimize the effect of spatter on the image, so that an image without spatter traces could be obtained. A region of interest was defined in the entire image, and the gray level of the searched laser stripe was compared to that of the weld line. The differences between these gray levels were used to locate the position of the weld joint by the central difference method. The results showed that, as long as the weld line is within $\pm15^{\circ}$ of the longitudinal straight line, the system constructed in this study could track the weld line successfully. Since the processing time is no longer than 0.05 sec, the developed method is expected to be applicable to high-speed welding such as laser welding.
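
The stripe-search step that precedes the joint localization can be sketched as a per-column peak pick inside the region of interest; columns whose peak stays dark (e.g. inside the joint gap or killed by the band-pass filter) are masked out. Function and parameter names here are my own, not the paper's:

```python
import numpy as np

def stripe_centers(roi, min_level=50):
    """For each column of a grayscale ROI, take the row of peak
    intensity as the laser-stripe center; columns whose peak is
    below min_level are flagged as invalid."""
    rows = np.argmax(roi, axis=0)
    ok = roi[rows, np.arange(roi.shape[1])] >= min_level
    return rows, ok
```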

A Study for Detecting AGV Driving Information using Vision Sensor (비전 센서를 이용한 AGV의 주행정보 획득에 관한 연구)

  • Lee, Jin-Woo;Sohn, Ju-Han;Choi, Sung-Uk;Lee, Young-Jin;Lee, Kwon-Soon
    • Proceedings of the KIEE Conference / 2000.07d / pp.2575-2577 / 2000
  • We carried out AGV driving tests with a color CCD camera mounted on the vehicle. This paper can be divided into two parts. One is the image processing part, which measures the state of the guideline and the AGV. The other is the part that obtains the reference steering angle from the image processing results. First, the 2-D image information derived from the vision sensor is converted to 3-D information using the angle and position of the CCD camera. Through these processes, the AGV knows its driving conditions. Using this information, the AGV then calculates the reference steering angle, which changes with the speed of the AGV. At low speed, it focuses on the left/right error values of the guideline. As the speed of the AGV increases, it focuses on the slope of the guideline. Lastly, we model the above behavior as a PID controller and regulate its coefficients according to the speed of the AGV.
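
The speed-dependent blend of lateral error and guideline slope feeding a PID law can be sketched as below. This is a minimal illustration under assumed gains and a linear blending weight; the paper does not give the actual scheduling rule:

```python
def steering_angle(err, slope, speed, state, dt=0.05,
                   kp=1.2, ki=0.1, kd=0.3, v_max=2.0):
    """Blend lateral error (dominant at low speed) and guideline
    slope (dominant at high speed), then apply a PID law.
    `state` carries the integral and previous error between calls."""
    w = min(speed / v_max, 1.0)      # speed-dependent weight
    e = (1.0 - w) * err + w * slope  # blended tracking error
    state["i"] += e * dt
    d = (e - state["e"]) / dt
    state["e"] = e
    return kp * e + ki * state["i"] + kd * d
```

At standstill the command responds only to the lateral offset; at `v_max` and above it responds only to the line slope, which is what the abstract describes qualitatively.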


A Study for AGV Steering Control using Evolution Strategy (진화전략 알고리즘을 이용한 AGV 조향제어에 관한 연구)

  • 이진우;손주한;최성욱;이영진;이권순
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2000.10a / pp.149-149 / 2000
  • We carried out AGV driving tests with a color CCD camera mounted on the vehicle. This paper can be divided into two parts. One is the image processing part, which measures the state of the guideline and the AGV. The other is the part that obtains the reference steering angle from the image processing results. First, the 2-D image information derived from the vision sensor is converted to 3-D information using the angle and position of the CCD camera. Through these processes, the AGV knows its driving conditions. Using this information, the AGV then calculates the reference steering angle, which changes with the speed of the AGV. At low speed, it focuses on the left/right error values of the guideline. As the speed of the AGV increases, it focuses on the slope of the guideline. Lastly, we model the above behavior as a PID controller and regulate its coefficients according to the speed of the AGV.
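
The title names an evolution strategy for the steering control, though the abstract only describes the PID structure; a plausible use is tuning the controller gains by a (1+1)-ES, sketched below on a toy cost. The cost function and optimum here are fictitious placeholders, not the paper's experiment:

```python
import random

def one_plus_one_es(cost, x0, sigma=0.3, iters=200, seed=1):
    """(1+1) evolution strategy: mutate the gain vector with Gaussian
    noise and keep the child only when it lowers the cost."""
    rng = random.Random(seed)
    best, best_c = list(x0), cost(x0)
    for _ in range(iters):
        child = [g + rng.gauss(0.0, sigma) for g in best]
        c = cost(child)
        if c < best_c:
            best, best_c = child, c
    return best, best_c

# toy cost: distance of (kp, kd) from a fictitious optimum (2.0, 0.5);
# in practice the cost would be a tracking-error measure from a run
cost = lambda g: (g[0] - 2.0) ** 2 + (g[1] - 0.5) ** 2
gains, c = one_plus_one_es(cost, [0.0, 0.0])
```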


3D Map Building of The Mobile Robot Using Structured Light

  • Lee, Oon-Kyu;Kim, Min-Young;Cho, Hyung-Suck;Kim, Jae-Hoon
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2001.10a / pp.123.1-123 / 2001
  • For autonomous navigation of mobile robots, the capability to recognize the 3D environment is necessary. In this paper, an on-line 3D map building method for autonomous mobile robots is proposed. To get range data on the environment, we use a sensor system composed of a structured light source and a CCD camera based on optimal triangulation. The structured laser is projected as a horizontal strip on the scene. The sensor system can rotate $\pm30^{\circ}$ with a goniometer. By scanning the system, we obtain the laser strip image of the environment and update the planes composing the environment through several image processing steps. From the laser strip in the captured image, we find a center point in each column and make line segments by blobbing these center points. Then the planes of the environment are updated. These steps are done on-line during the scanning phase. With the proposed method, we can efficiently build a 3D map of a structured environment.
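
The triangulation and blobbing steps can be sketched as follows: the stripe's row offset from a reference row is inverted into range via the camera-laser baseline, and neighbouring columns with near-constant range are grouped into segments (one plane face each). The geometry constants and tolerances here are assumed values for illustration:

```python
import numpy as np

def stripe_to_range(rows, f=500.0, b=0.1, r0=240.0):
    """Convert the laser-stripe row found in each image column into a
    range by triangulation: the stripe shifts from the reference row
    r0 in proportion to b*f/z (baseline b in m, focal length f in px)."""
    disp = np.asarray(rows, dtype=float) - r0
    return np.where(disp > 0, b * f / np.maximum(disp, 1e-9), np.inf)

def blob_segments(z, tol=0.02):
    """Group neighbouring columns whose range is nearly constant into
    segments (start_col, end_col, mean_range) -- one plane each."""
    segs, start = [], 0
    for i in range(1, len(z) + 1):
        if i == len(z) or abs(z[i] - z[start]) > tol:
            segs.append((start, i - 1, float(np.mean(z[start:i]))))
            start = i
    return segs
```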


3D Map Building of the Mobile Robot Using Structured Light

  • Lee, Oon-Kyu;Kim, Min-Young;Cho, Hyung-Suck;Kim, Jae-Hoon
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2001.10a / pp.123.5-123 / 2001
  • For autonomous navigation of mobile robots, the capability to recognize the 3D environment is necessary. In this paper, an on-line 3D map building method for autonomous mobile robots is proposed. To get range data on the environment, we use a sensor system composed of a structured light source and a CCD camera based on optimal triangulation. The structured laser is projected as a horizontal strip on the scene. The sensor system can rotate $\pm30^{\circ}$ with a goniometer. By scanning the system, we obtain the laser strip image of the environment and update the planes composing the environment through several image processing steps. From the laser strip in the captured image, we find a center point in each column and make line segments by blobbing these center points. Then the planes of the environment are updated. These steps are done on-line during the scanning phase. With the proposed method, we can efficiently build a 3D map of a structured environment.


A Defect Inspection Method in TFT-LCD Panel Using LS-SVM (LS-SVM을 이용한 TFT-LCD 패널 내의 결함 검사 방법)

  • Choi, Ho-Hyung;Lee, Gun-Hee;Kim, Ja-Geun;Joo, Young-Bok;Choi, Byung-Jae;Park, Kil-Houm;Yun, Byoung-Ju
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.6 / pp.852-859 / 2009
  • Normally, to detect defects in a TFT-LCD inspection system, the image is obtained using a line-scan or area-scan camera based on a CCD or CMOS sensor. Because of the limited dynamic range of CCD or CMOS sensors, as well as illumination effects, these images are frequently degraded and the important features are hard to discern for a human viewer. In order to overcome this problem, feature vectors are obtained from the average intensity difference between defect and background, based on Weber's law, and from the standard deviation of the background region. The defect detection method applies a non-linear SVM (Support Vector Machine) to the extracted feature vectors. The experimental results show that the proposed method yields better defect classification performance than conventional methods.
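
The Weber's-law feature pair described above can be sketched as below: the contrast of the candidate patch is normalized by the background mean (Weber contrast), and the background standard deviation captures the local texture level. The exact feature definitions in the paper may differ; this is an illustrative reconstruction:

```python
import numpy as np

def weber_features(patch, bg):
    """Two features per candidate region: the Weber contrast of the
    defect patch against its background, and the background's
    standard deviation (texture level)."""
    bg_mean = float(np.mean(bg))
    contrast = (float(np.mean(patch)) - bg_mean) / max(bg_mean, 1e-9)
    return contrast, float(np.std(bg))
```

These two numbers per region would then be fed to the (LS-)SVM classifier for the defect/non-defect decision.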

A Study on Weld Line Detection and Wire Feeding Rate Control in GMAW with Vision Sensor (GMAW에서 시각센서를 이용한 용접선 정보의 추출과 와이어 승급속도의 제어에 관한 연구)

  • 조택동;김옥현;양상민;조만호
    • Journal of Welding and Joining / v.19 no.6 / pp.600-607 / 2001
  • A CCD camera with a laser stripe was applied to realize automatic weld seam tracking in GMAW. On-line image processing with the basic Hough transformation takes relatively long, but it tends to be robust to noise such as spatter and arc light. For this reason, it was complemented with the adaptive Hough transformation to gain on-line processing capability for scanning specific weld points. The adaptive Hough transformation was used to extract laser stripes and to obtain specific weld points. The 3-dimensional information obtained from the vision system made it possible to generate the weld torch path and to obtain information such as the width and depth of the weld line. We controlled the wire feeding rate using this weld line information.
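
One natural way to turn the measured width and depth into a wire feed rate is a volume balance: deposit as much wire per second as groove cross-section is swept at the travel speed. The paper does not state its control law, so the sketch below is only an assumed first-order model (losses and reinforcement ignored):

```python
import math

def wire_feed_rate(width_mm, depth_mm, travel_mm_s, wire_dia_mm=1.2):
    """Wire feed rate (mm/s of wire) that matches deposited volume to
    the groove cross-section (width x depth) swept at the travel
    speed.  Pure volume balance; spatter losses are ignored."""
    wire_area = math.pi * (wire_dia_mm / 2.0) ** 2  # mm^2
    groove_area = width_mm * depth_mm               # mm^2
    return groove_area * travel_mm_s / wire_area
```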


Minimization of Motion Blur and Dynamic MTF Analysis in the Electro-Optical TDI CMOS Camera on a Satellite (TDI CMOS 센서를 이용한 인공위성 탑재용 전자광학 카메라의 Motion Blur 최소화 방법 및 Dynamic MTF 성능 분석)

  • Heo, HaengPal;Ra, SungWoong
    • Korean Journal of Remote Sensing / v.31 no.2 / pp.85-99 / 2015
  • TDI CCD sensors are used in most electro-optical cameras mounted on low-earth-orbit satellites to meet high performance requirements such as SNR and MTF. However, CMOS sensors, which have many implementation advantages over CCDs, are being upgraded to provide the TDI function. A few methods for mitigating motion blur, which is more apparent in CMOS sensors than in CCD sensors, are introduced. Each pixel can be divided into a few sub-pixels to be read more than once, as in three- or four-phase CCDs. The fill factor can be reduced intentionally, or a mask can be implemented at the edge of the pixels to reduce blur. Motion blur can also be reduced in a TDI CMOS sensor by making the integration time shorter than the full line scan time. Because the integration time is easily controlled by the versatile control electronics, either of the two performance parameters, MTF and SNR, can be emphasized dynamically depending on the aim of the target imaging. A MATLAB simulation has been performed and the results are presented in this paper. The goal of the simulation is to compare the dynamic MTFs obtained with the different methods for reducing motion blur in the TDI CMOS sensor.
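
The paper's simulation is in MATLAB; the core trade-off it examines can be illustrated with the standard linear-motion-blur MTF, |sinc(f·d)|, where the blur length d shrinks in proportion to the fraction of the line time used for integration. The numbers below (1 px of image motion per full line time) are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def blur_mtf(f_cyc_per_px, blur_px):
    """MTF of uniform linear motion blur over blur_px pixels:
    |sinc(f * blur_px)|  (numpy's sinc is sin(pi*x)/(pi*x))."""
    return np.abs(np.sinc(f_cyc_per_px * blur_px))

# Blur length scales with the integration fraction of the line time:
# integrating the full line time smears the scene by ~1 px per stage.
f_nyq = 0.5                       # Nyquist frequency, cycles/pixel
full = blur_mtf(f_nyq, 1.0)       # integrate the whole line time
short = blur_mtf(f_nyq, 0.5)      # integrate half the line time
```

Halving the integration time raises the Nyquist MTF (here from about 0.64 to 0.90) at the cost of collecting half the signal, which is exactly the dynamic MTF-versus-SNR choice the abstract describes.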