• Title/Summary/Keyword: vision board

138 search results

The Recognition of Crack Detection Using Difference Image Analysis Method based on Morphology (모폴로지 기반의 차영상 분석기법을 이용한 균열검출의 인식)

  • Byun Tae-bo;Kim Jang-hyung;Kim Hyung-soo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.1
    • /
    • pp.197-205
    • /
    • 2006
  • This paper presents a moving-object tracking method using a vision system. To track an object in real time, the image of the moving object has to be kept at the origin of the image coordinate axes. Accordingly, a fuzzy control system that drives the camera module through a pan/tilt mechanism is investigated for tracking the moving object. So that the system can later be applied to a mobile robot, we design and implement an image processing board for the vision system, and the fuzzy controller is implemented on a StrongARM board. Experiments show that the proposed fuzzy controller is useful for a real-time moving-object tracking system.
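To make the control idea above concrete, here is a minimal, hypothetical sketch of a fuzzy pan correction step that keeps a tracked object near the image center; the membership functions, rule outputs, and scaling are illustrative assumptions, not the controller from the paper.

```python
# Hypothetical fuzzy pan correction: horizontal pixel error -> pan command.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_pan_command(err_x, half_width=160.0):
    """Map the object's horizontal offset from the image center to a pan command in [-1, 1]."""
    e = float(np.clip(err_x / half_width, -1.0, 1.0))
    mu_neg = tri(e, -2.0, -1.0, 0.0)   # object left of center
    mu_zero = tri(e, -1.0, 0.0, 1.0)   # object near center
    mu_pos = tri(e, 0.0, 1.0, 2.0)     # object right of center
    # Singleton rule outputs: pan left (-1), hold (0), pan right (+1);
    # defuzzify with a weighted average.
    num = -1.0 * mu_neg + 0.0 * mu_zero + 1.0 * mu_pos
    den = mu_neg + mu_zero + mu_pos
    return num / den if den > 0 else 0.0

# Example: object detected 80 px to the right of center -> moderate pan-right command.
print(fuzzy_pan_command(80.0))  # 0.5
```

A tilt command would be computed the same way from the vertical error, and both commands sent to the pan/tilt module each frame.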

A Study on a Visual Sensor System for Weld Seam Tracking in Robotic GMA Welding (GMA 용접로봇용 용접선 시각 추적 시스템에 관한 연구)

  • 김재웅;김동호
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2000.11a
    • /
    • pp.643-646
    • /
    • 2000
  • In this study, we constructed a preview-sensing visual sensor system for real-time weld seam tracking in GMA welding. The sensor part consists of a CCD camera, a band-pass filter, a diode laser with a cylindrical lens, and a vision board for inter-frame processing. We used a commercial robot system that includes a GMA welding machine. To extract the weld seam, we applied inter-frame processing on the vision board, which removes the noise caused by spatter and fume in the image. Because the inter-frame processing yields a clean image, the weld seam could be extracted with the simplest methods, namely a first derivative computed by central differences. We also applied a moving average to the successive weld-seam position data to reduce fluctuation. In the experiments, the developed robot system with the visual sensor was able to track the most common weld seams, such as a fillet joint, a V-groove, and a lap joint, whose seams include both planar and height variation.
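The processing chain in the abstract (inter-frame differencing, a central-difference derivative along each row, and a moving average over successive seam positions) can be sketched as follows; this is an assumed illustration, not the authors' vision-board implementation, and the array shapes and window size are placeholders.

```python
# Assumed sketch of inter-frame differencing + central-difference seam
# localization + moving-average smoothing, on grayscale NumPy arrays.
import numpy as np

def interframe_difference(frame_prev, frame_curr):
    """Absolute difference of two consecutive grayscale frames (suppresses static noise)."""
    return np.abs(frame_curr.astype(np.int16) - frame_prev.astype(np.int16)).astype(np.uint8)

def seam_column(diff_row):
    """Column with the strongest central-difference response in one image row."""
    row = diff_row.astype(float)
    grad = np.zeros_like(row)
    grad[1:-1] = (row[2:] - row[:-2]) / 2.0   # central difference
    return int(np.argmax(np.abs(grad)))

def smooth_positions(positions, window=5):
    """Moving average over successive seam positions to reduce fluctuation."""
    kernel = np.ones(window) / window
    return np.convolve(positions, kernel, mode="valid")
```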

Development of PCB board vision inspection system using image recognition based on deep learning (딥러닝 영상인식을 이용한 PCB 기판 비전 검사 시스템 개발)

  • Chang-hoon Lee;Min-sung Lee;Jeong-min Sim;Dong-won Kang;Tae-jin Yun
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2024.01a
    • /
    • pp.289-290
    • /
    • 2024
  • The performance of the vision inspection systems that play an important role in PCB (printed circuit board) production has improved continuously. Conventional machine vision inspection systems are difficult to apply when the images are irregular and unstructured, and they depend on expert experience. Moreover, if a defect occurs that differs from the criteria defined when the inspection system was developed, it cannot be detected or is detected with low accuracy. To address this, this paper implements a PCB vision inspection system using deep-learning image recognition. YOLOv4 is used as the recognition algorithm, and the inspection system is built by training on warped PCB images. To compensate for the processing speed of deep-learning recognition, the PCB type is first identified from a QR code, and missing components are then found by comparing the detections against the bounding-box coordinates of a known-good reference image; boards with missing parts are marked as defective. The board data for component recognition were collected by photographing the boards directly. With this approach, the performance of the vision inspection system in the PCB production process was improved, and production of various PCB types can be handled quickly.
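The missing-component check described above (comparing detections against reference bounding boxes) can be illustrated with a short sketch; the IoU threshold and box format are assumptions, and the detections would come from the trained YOLOv4 model.

```python
# Hypothetical missing-part check: reference boxes with no overlapping detection
# are reported as missing. Boxes are (x1, y1, x2, y2) in pixels.
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter) if inter else 0.0

def missing_components(reference_boxes, detected_boxes, iou_thresh=0.5):
    """Return the reference boxes that have no sufficiently overlapping detection."""
    return [ref for ref in reference_boxes
            if not any(iou(ref, det) >= iou_thresh for det in detected_boxes)]

# Example: the second reference part has no detection -> flagged as missing.
ref = [(10, 10, 50, 50), (100, 10, 140, 50)]
det = [(12, 11, 49, 52)]
print(missing_components(ref, det))  # [(100, 10, 140, 50)]
```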

Development of Vision system for Back Light Unit of Defect (백라이트 유닛의 결함 검사를 위한 비전 시스템 개발)

  • Han, Chang-Ho;Oh, Choon-Suk;Ryu, Young-Kee;Cho, Sang-Hee
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.55 no.4
    • /
    • pp.161-164
    • /
    • 2006
  • In this paper, we designed a vision system to inspect defects in the back light unit of a flat panel display. The vision system is divided into the hardware and the defect inspection algorithm. The hardware consists of an illumination part, a robot-arm controller part, and an image acquisition part. The illumination part is made of an acrylic panel for light diffusion, five 36 W FPLs (fluorescent parallel lamps), and an electronic ballast with low-frequency harmonics. The CCD (charge-coupled device) camera of the image acquisition part acquires bright images under the light coming from the lamps; the image acquisition part is composed of the CCD camera and a frame grabber. The robot-arm controller part moves the CCD camera to the desired position so that every corner of the flat panel surface can be inspected. Images obtained through the robot arm and the image acquisition board are saved to the hard disk by a Windows program and are tested for defects using the image processing algorithms.
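As a rough illustration of the kind of defect test such a system can run (an assumption on my part, not the algorithm from the paper), a uniformly lit back-light image can be compared against a heavily smoothed version of itself, and pixels that deviate strongly are marked as defect candidates.

```python
# Assumed defect-candidate mask for a back-light-unit image (OpenCV >= 4).
import cv2
import numpy as np

def defect_mask(gray, blur_ksize=31, deviation_thresh=30):
    """Binary mask of pixels deviating from the smoothed background brightness."""
    background = cv2.GaussianBlur(gray, (blur_ksize, blur_ksize), 0)
    deviation = cv2.absdiff(gray, background)
    _, mask = cv2.threshold(deviation, deviation_thresh, 255, cv2.THRESH_BINARY)
    return mask

# Usage (file name is a placeholder):
# gray = cv2.imread("blu_image.png", cv2.IMREAD_GRAYSCALE)
# mask = defect_mask(gray)
```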

OnBoard Vision Based Object Tracking Control Stabilization Using PID Controller

  • Mariappan, Vinayagam;Lee, Minwoo;Cho, Juphil;Cha, Jaesang
    • International Journal of Advanced Culture Technology
    • /
    • v.4 no.4
    • /
    • pp.81-86
    • /
    • 2016
  • In this paper, we propose a simple and effective vision-based tracking controller design for autonomous object tracking with a multicopter. A multicopter-based automatic tracking system is usually unstable when the object moves: the tracking process cannot determine the object's position exactly, so when the object moves suddenly the system cannot follow it along the direction of motion and instead keeps searching for the object from the starting point or its home position. In this paper, PID control is used to improve the stability of the tracking system, so that object tracking becomes more stable than before, as can be seen from the tracking error. A computer vision and control strategy is applied to detect a diverse set of moving objects on a Raspberry Pi based platform, and a software-defined PID controller is designed to control the yaw, throttle, and pitch of the multicopter in real time. Finally, based on a series of experimental results, we conclude that the PID controller makes the tracking system more stable in real time.
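A minimal PID sketch in the spirit of the controller described above is shown below; the gains, loop rate, and the mapping from pixel offsets to yaw/pitch channels are illustrative assumptions, not values from the paper.

```python
# Minimal PID controller; one instance per controlled channel (yaw, pitch, throttle).
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        """Return the control output for the current error and time step."""
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One loop iteration: pixel offsets of the tracked object from the image
# center drive the yaw and pitch channels (example numbers, 20 Hz loop).
yaw_pid = PID(0.004, 0.0005, 0.001)
pitch_pid = PID(0.004, 0.0005, 0.001)
err_x, err_y, dt = 35.0, -12.0, 0.05
yaw_cmd = yaw_pid.update(err_x, dt)
pitch_cmd = pitch_pid.update(err_y, dt)
```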

Simultaneous Tracking of Multiple Construction Workers Using Stereo-Vision (다수의 건설인력 위치 추적을 위한 스테레오 비전의 활용)

  • Lee, Yong-Ju;Park, Man-Woo
    • Journal of KIBIM
    • /
    • v.7 no.1
    • /
    • pp.45-53
    • /
    • 2017
  • Continuous research efforts have been made on acquiring location data on construction sites. As a result, GPS and RFID are increasingly employed on site to track the location of equipment and materials. However, these systems are based on radio frequency technologies, which require attaching tags to every target entity, and implementing them incurs the time and cost of attaching, detaching, and managing the tags or sensors. For this reason, efforts are currently being made to track construction entities using only cameras. Vision-based 3D tracking was presented in a previous research work in which the locations of construction manpower, vehicles, and materials were successfully tracked. However, that system is still in its infancy and has yet to be implemented in practical applications for two reasons. First, it does not involve entity matching across the two views, and thus cannot be used to track multiple entities simultaneously. Second, the use of a checkerboard in the camera calibration process entails a focus-related problem when the baseline is long and the target entities are located far from the cameras. This paper proposes a vision-based method to track multiple workers simultaneously. An entity matching procedure is added to acquire matching pairs of the same entities across the two views, which is necessary for tracking multiple entities. In addition, the proposed method simplifies the calibration process by avoiding the use of a checkerboard, making it more suitable for realistic deployment on construction sites.
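Once a worker detection in the left view has been matched to the corresponding detection in the right view, the pair of image points is triangulated into a 3D location using the two calibrated projection matrices. The sketch below uses standard DLT triangulation as a stand-in for the paper's reconstruction step; the variable names and matrices are placeholders.

```python
# DLT triangulation of one matched point pair (illustrative, not the authors' code).
import numpy as np

def triangulate_worker(P_left, P_right, pt_left, pt_right):
    """P_* are 3x4 projection matrices; pt_* are (x, y) pixels of the same worker."""
    x1, y1 = pt_left
    x2, y2 = pt_right
    A = np.array([
        x1 * P_left[2] - P_left[0],
        y1 * P_left[2] - P_left[1],
        x2 * P_right[2] - P_right[0],
        y2 * P_right[2] - P_right[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # least-squares solution is the last right singular vector
    X = Vt[-1]
    return X[:3] / X[3]           # 3D point in the calibrated world frame
```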

Detection of Surface Cracks in Eggshell by Machine Vision and Artificial Neural Network (기계 시각과 인공 신경망을 이용한 파란의 판별)

  • 이수환;조한근;최완규
    • Journal of Biosystems Engineering
    • /
    • v.25 no.5
    • /
    • pp.409-414
    • /
    • 2000
  • A machine vision system was built to obtain a single stationary image of an egg. The system includes a CCD camera, an image processing board, and a lighting system. A computer program was written to acquire and enhance the image and to compute its histogram. To minimize the evaluation time, an artificial neural network fed with the image histogram was used for eggshell evaluation. Various artificial neural networks with different parameters were trained and tested. The best networks (64-50-1 and 128-10-1) showed an accuracy of 87.5% in evaluating eggshells. A comparison of the processing time per egg between this method (image processing and artificial neural network) and the previous method (image processing only) showed that it was reduced by about half (from 10.6 s to 5.5 s) for cracked eggs and to about one fifth (from 21.1 s to 5.5 s) for normal eggs. This indicates that a fast eggshell evaluation system can be developed using machine vision and an artificial neural network.
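The classification step described above can be sketched as follows; scikit-learn's MLPClassifier stands in here for the original 64-50-1 / 128-10-1 networks, and the bin count and labels are assumptions.

```python
# Assumed sketch: grayscale histogram as the feature vector for a small MLP.
import numpy as np
from sklearn.neural_network import MLPClassifier

def histogram_feature(gray_image, bins=64):
    """Normalized intensity histogram of an 8-bit grayscale egg image."""
    hist, _ = np.histogram(gray_image, bins=bins, range=(0, 256))
    return hist / hist.sum()

# Training and prediction (placeholder data):
# X = np.stack([histogram_feature(img) for img in training_images])
# y = labels  # 1 = cracked shell, 0 = normal shell
# clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=2000).fit(X, y)
# prediction = clf.predict([histogram_feature(test_image)])
```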

Implementation of a High-speed Template Matching System for Wafer-vision Alignment Using FPGA

  • Jae-Hyuk So;Minjoon Kim
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.8
    • /
    • pp.2366-2380
    • /
    • 2024
  • In this study, a high-speed template matching system is proposed for wafer-vision alignment. The proposed system is designed to rapidly locate markers in semiconductor equipment used for wafer-vision alignment. We optimized and implemented a template-matching algorithm for the high-speed processing of high-resolution wafer images. Owing to the simplicity of wafer markers, we removed unnecessary components in the algorithm and designed the system using a field-programmable gate array (FPGA) to implement high-speed processing. The hardware blocks were designed using the Xilinx ZCU104 board, and the pyramid and matching blocks were designed using programmable logic for accelerated operations. To validate the proposed system, we established a verification environment using stage equipment commonly used in industrial settings and reference-software-based validation frameworks. The output results from the FPGA were transmitted to the wafer-alignment controller for system verification. The proposed system reduced the data-processing time by approximately 30% and achieved a level of accuracy in detecting wafer markers that was comparable to that achieved by reference software, with minimal deviation. This system can be used to increase precision and productivity during semiconductor manufacturing processes.
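A software analogue of the coarse-to-fine matching flow (assumed here for illustration; the paper's implementation is an FPGA pipeline, not OpenCV) first locates the marker on a downscaled image and then refines the match at full resolution around that estimate.

```python
# Illustrative coarse-to-fine template matching with an image pyramid (OpenCV >= 4).
import cv2

def pyramid_match(image, template, levels=2, margin=16):
    """Return the top-left corner of the best match and its correlation score."""
    scale = 2 ** levels
    small_img = cv2.resize(image, None, fx=1 / scale, fy=1 / scale)
    small_tpl = cv2.resize(template, None, fx=1 / scale, fy=1 / scale)
    res = cv2.matchTemplate(small_img, small_tpl, cv2.TM_CCOEFF_NORMED)
    _, _, _, coarse = cv2.minMaxLoc(res)
    # Refine at full resolution in a window around the coarse estimate.
    h, w = template.shape[:2]
    x0, y0 = coarse[0] * scale, coarse[1] * scale
    x1, y1 = max(0, x0 - margin), max(0, y0 - margin)
    x2, y2 = min(image.shape[1], x0 + w + margin), min(image.shape[0], y0 + h + margin)
    roi = image[y1:y2, x1:x2]
    res = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, fine = cv2.minMaxLoc(res)
    return (x1 + fine[0], y1 + fine[1]), score
```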

Three Degrees of Freedom Global Calibration Method for Measurement Systems with Binocular Vision

  • Xu, Guan;Zhang, Xinyuan;Li, Xiaotao;Su, Jian;Lu, Xue;Liu, Huanping;Hao, Zhaobing
    • Journal of the Optical Society of Korea
    • /
    • v.20 no.1
    • /
    • pp.107-117
    • /
    • 2016
  • We develop a new method to globally calibrate the feature points that are derived from the binocular systems at different positions. A three-DOF (degree of freedom) global calibration system is established to move and rotate the 3D calibration board to an arbitrary position. A three-DOF global calibration model is constructed for the binocular systems at different positions. The three-DOF calibration model unifies the 3D coordinates of the feature points from different binocular systems into a unique world coordinate system that is determined by the initial position of the calibration board. Experiments are conducted on the binocular systems at the coaxial and diagonal positions. The experimental root-mean-square errors between the true and reconstructed 3D coordinates of the feature points are 0.573 mm, 0.520 mm and 0.528 mm at the coaxial positions, and 0.495 mm, 0.556 mm and 0.627 mm at the diagonal positions. This method provides a global and accurate calibration that unifies the measurement points of different binocular vision systems into the same world coordinate system.
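The two quantities the abstract reports can be made concrete with a short sketch: a rigid transform (R, t) maps each binocular system's points into the common world frame fixed by the initial board pose, and the RMSE is computed between the true and reconstructed 3D feature points; R, t and the point arrays below are placeholders, not the paper's data.

```python
# Illustrative world-frame transform and reconstruction RMSE (units, e.g., mm).
import numpy as np

def to_world(points_cam, R, t):
    """Transform Nx3 camera-frame points into the world frame: X_w = R @ X_c + t."""
    return points_cam @ R.T + t

def rmse(points_true, points_est):
    """Root-mean-square of the Euclidean errors between matched Nx3 point sets."""
    d = np.linalg.norm(points_true - points_est, axis=1)
    return float(np.sqrt(np.mean(d ** 2)))
```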

An Vision System for Traffic sign Recognition (교통표지판 인식을 위한 비젼시스템)

  • Kim, Tae-Woo;Kang, Yong-Seok;Cha, Sam;Bae, Cheol-Soo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.2 no.2
    • /
    • pp.45-50
    • /
    • 2009
  • This paper presents an active vision system for on-line traffic sign recognition. The system is composed of two cameras, one equipped with a wide-angle lens and the other with a telephoto lens, and a PC with an image processing board. The system first detects candidates for traffic signs in the wide-angle image using color, intensity, and shape information. For each candidate, the telephoto camera is directed to its predicted position to capture the candidate at a large size in the image. The recognition algorithm is designed by intensively using the built-in functions of an off-the-shelf image processing board to realize both easy implementation and fast recognition. The results of on-road experiments show the feasibility of the system.
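The candidate-detection step can be illustrated with a simple color-and-shape filter; the HSV thresholds, area cutoff, and compactness test below are assumptions for a red-sign example, not the paper's algorithm or its image-processing-board functions.

```python
# Assumed red-sign candidate detector: HSV color segmentation + a loose shape filter (OpenCV >= 4).
import cv2
import numpy as np

def red_sign_candidates(bgr, min_area=200):
    """Return bounding rectangles (x, y, w, h) of compact red regions."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so two ranges are combined.
    lower = cv2.inRange(hsv, np.array([0, 80, 60]), np.array([10, 255, 255]))
    upper = cv2.inRange(hsv, np.array([170, 80, 60]), np.array([180, 255, 255]))
    mask = lower | upper
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < min_area:
            continue
        perimeter = cv2.arcLength(c, True)
        compactness = 4 * np.pi * area / (perimeter ** 2 + 1e-6)
        if compactness > 0.3:                 # keeps roughly circular/triangular blobs
            candidates.append(cv2.boundingRect(c))
    return candidates
```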
