• Title/Summary/Keyword: camera vision

Development of a Vision System for Defect Inspection of a Back Light Unit (백라이트 유닛의 결함 검사를 위한 비전 시스템 개발)

  • Han, Chang-Ho; Oh, Choon-Suk; Ryu, Young-Kee; Cho, Sang-Hee
    • The Transactions of the Korean Institute of Electrical Engineers D, v.55 no.4, pp.161-164, 2006
  • In this thesis we designed a vision system to inspect defects in the back light unit of a flat panel display device. The system is divided into hardware and a defect inspection algorithm. The hardware consists of an illumination part, a robot-arm controller part, and an image-acquisition part. The illumination part is built from an acrylic panel for light diffusion, five 36 W FPLs (Fluorescent Parallel Lamps), and an electronic ballast with low-frequency harmonics. The image-acquisition part is composed of a CCD (Charge-Coupled Device) camera, which acquires bright images under the lamp illumination, and a frame grabber. The robot-arm controller part moves the CCD camera to the desired position, so that every corner of the flat panel surface can be reached and inspected. Images obtained through the robot arm and the image-acquisition board are saved to disk by a Windows program and tested for defects using image processing algorithms.
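
The abstract does not detail the inspection algorithm itself. As a rough illustration of this kind of surface-defect check (not the authors' implementation), the sketch below estimates the slowly varying backlight brightness, thresholds local deviations from it, and reports blob-like defect candidates; the file name and thresholds are assumptions.

```python
import cv2
import numpy as np

def find_blu_defects(gray, blur_ksize=51, diff_thresh=25, min_area=20):
    """Flag regions that deviate from the locally expected backlight brightness."""
    # Estimate the slowly varying background illumination of the back light unit.
    background = cv2.GaussianBlur(gray, (blur_ksize, blur_ksize), 0)
    # Defects (dark spots, bright spots, scratches) show up as local deviations.
    diff = cv2.absdiff(gray, background)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep only blobs large enough to be real defects rather than sensor noise.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

image = cv2.imread("blu_frame.png", cv2.IMREAD_GRAYSCALE)  # frame from the grabber
for x, y, w, h in find_blu_defects(image):
    print(f"defect candidate at ({x}, {y}), size {w}x{h}")
```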

A Study on a Dynamic Visual System Fusing Vision and the Sense of Equilibrium (시각과 평형각이 융합된 다이나믹한 시각 시스템에 관한 연구)

  • 문용선; 정남채
    • Journal of the Korea Institute of Information and Communication Engineering, v.5 no.7, pp.1354-1360, 2001
  • The velocity distribution calculated from camera images is used as visual information, and the visual velocity of an object obtained from this information is fused with the sense of equilibrium in experiments. That is, to obtain a stable image in an environment subject to external disturbances, eye motion is needed that compensates for the head motion caused by the disturbance or by the movement of the camera itself. In this paper, a gaze control algorithm fusing vision and the sense of equilibrium under external disturbance is proposed; the experimental results confirm that it produces less position deviation than gaze control by vision alone. This is because the motion of the camera mount is compensated by the angular velocity sensor, so the apparent image velocity, and with it the tracking error, is reduced.
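
As a loose illustration of this style of vision/vestibular gaze stabilization (an assumption, not the authors' controller), the toy sketch below combines a gyro-measured mount rate with a vision-measured retinal slip into a compensating camera rate; the gains and the disturbance model are made up.

```python
import numpy as np

def gaze_compensation(gyro_rate, retinal_slip, k_vor=1.0, k_vis=0.5):
    """Return the camera pan rate that stabilizes gaze against disturbances.

    gyro_rate:    mount angular velocity from the equilibrium sense [rad/s]
    retinal_slip: residual image motion of the target from vision [rad/s]
    """
    # Vestibulo-ocular term: counter-rotate against the measured mount motion.
    # Visual term: trim the residual slip that the gyro path did not cancel.
    return -k_vor * gyro_rate - k_vis * retinal_slip

# Toy simulation: a sinusoidal disturbance shakes the camera mount.
dt, gaze_error = 0.01, 0.0
for step in range(500):
    t = step * dt
    disturbance = 0.8 * np.cos(2 * np.pi * t)   # mount angular velocity
    slip = disturbance + gaze_error             # crude visual slip model
    cam_rate = gaze_compensation(disturbance, slip)
    gaze_error += (disturbance + cam_rate) * dt # integrate residual motion
print(f"final gaze error: {gaze_error:.4f} rad")
```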

3D Omni-directional Vision SLAM Using a Fisheye Lens and Laser Scanner (어안 렌즈와 레이저 스캐너를 이용한 3차원 전방향 영상 SLAM)

  • Choi, Yun Won; Choi, Jeong Won; Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems, v.21 no.7, pp.634-640, 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on a fisheye image and laser scanner data. The performance of SLAM has been improved by various estimation methods, sensors with multiple functions, and sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile robot applications because an RGB-D system with multiple cameras is bulky and computes depth too slowly for omni-directional images. In this paper, we use a fisheye camera installed facing downwards and a two-dimensional laser scanner mounted at a fixed distance from the camera. Fusion points are calculated from the plane coordinates of obstacles obtained from the two-dimensional laser scanner and the outlines of obstacles obtained from the omni-directional image sensor, which acquires a surround view at the same time. The effectiveness of the proposed method is confirmed by comparing maps built with the proposed algorithm against real maps.
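
A minimal sketch of the geometry behind such fusion points, under assumptions the abstract does not confirm (an ideal equidistant fisheye model r = f·θ and hand-picked intrinsics): laser returns are mapped to plane coordinates and then projected to the pixels where the obstacle outline should appear in the downward fisheye image.

```python
import numpy as np

def laser_to_plane(ranges, angles):
    """2-D laser scan (range, bearing) -> obstacle points in the robot plane."""
    return np.stack([ranges * np.cos(angles), ranges * np.sin(angles)], axis=1)

def plane_to_fisheye(points_xy, cx, cy, f, cam_height):
    """Project plane points into a downward fisheye image (equidistant model).

    Assumes the ideal r = f * theta mapping; a real lens needs calibration.
    """
    d = np.linalg.norm(points_xy, axis=1)
    theta = np.arctan2(d, cam_height)            # angle from the optical axis
    phi = np.arctan2(points_xy[:, 1], points_xy[:, 0])
    r = f * theta                                # equidistant fisheye radius
    return np.stack([cx + r * np.cos(phi), cy + r * np.sin(phi)], axis=1)

# Fusion point: pair each laser obstacle with its expected image location,
# then search the obstacle outline near that pixel in the fisheye image.
scan_r = np.array([1.2, 1.5, 2.0])
scan_a = np.deg2rad([0.0, 30.0, 60.0])
plane_pts = laser_to_plane(scan_r, scan_a)
pix = plane_to_fisheye(plane_pts, cx=320, cy=240, f=150.0, cam_height=0.5)
print(np.round(pix, 1))
```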

A Study on Improving Dead-Reckoning Localization Using an INS Calibrated by Fused Sensor Network Information (융합 센서 네트워크 정보로 보정된 관성항법센서를 이용한 추측항법의 위치추정 향상에 관한 연구)

  • Choi, Jae-Young; Kim, Sung-Gaun
    • Journal of Institute of Control, Robotics and Systems, v.18 no.8, pp.744-749, 2012
  • In this paper, we suggest how to improve the accuracy of a mobile robot's localization by using sensor network information that fuses a machine vision camera, an encoder, and an IMU sensor. The heading of the IMU is measured by a geomagnetic sensor, which is constantly affected by its surroundings. To increase accuracy, we therefore isolate a template on the ceiling with the vision camera, measure the heading angle with a pattern matching algorithm, and calibrate the IMU by comparing this angle with the IMU value and computing an offset. The encoder values, IMU values, and camera heading angle used to estimate the robot's position are transferred to a host PC over a wireless network, and the host PC estimates the robot's position from all of them. As a result, the estimated positions were more accurate than those obtained with the IMU sensor alone.
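
The abstract does not spell out the pattern matching step. One plausible sketch, assuming an OpenCV pipeline, a stored ceiling template, and a coarse angular grid (all illustrative), rotates the template over candidate headings and keeps the angle with the best normalized correlation, which then yields an offset for the IMU heading:

```python
import cv2
import numpy as np

def heading_from_ceiling(frame, template, step_deg=2.0):
    """Estimate heading by matching rotated ceiling templates (coarse grid)."""
    h, w = template.shape
    center = (w / 2, h / 2)
    best_angle, best_score = 0.0, -1.0
    for angle in np.arange(0.0, 360.0, step_deg):
        rot = cv2.getRotationMatrix2D(center, angle, 1.0)
        rotated = cv2.warpAffine(template, rot, (w, h))
        score = cv2.matchTemplate(frame, rotated, cv2.TM_CCOEFF_NORMED).max()
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle, best_score

frame = cv2.imread("ceiling.png", cv2.IMREAD_GRAYSCALE)      # camera frame
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)  # reference patch
vision_heading, _ = heading_from_ceiling(frame, template)
imu_heading = 87.5                       # current geomagnetic/IMU heading [deg]
offset = vision_heading - imu_heading    # correction applied to later IMU reads
print(f"vision {vision_heading:.1f} deg, IMU offset {offset:.1f} deg")
```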

Vision-Based Self-Localization Using a Ceiling Artificial Landmark for a Ubiquitous Mobile Robot (유비쿼터스 이동로봇용 천장 인공표식을 이용한 비젼기반 자기위치인식법)

  • Lee Ju-Sang; Lim Young-Cheol; Ryoo Young-Jae
    • Journal of the Korean Institute of Intelligent Systems, v.15 no.5, pp.560-566, 2005
  • In this paper, a practical technique for correcting a distorted image is presented for the vision-based localization of a ubiquitous mobile robot. Localization is essential for a mobile robot and is realized here with a camera vision system. To widen the camera's viewing angle, the vision system uses a fish-eye lens, which distorts the image. Because a mobile robot moves rapidly, the image processing must be fast enough for localization, so we propose a practical correction technique for the distorted image and verify its performance by experiment.
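
The paper's own correction technique is not given in the abstract. As a generic reference point, OpenCV's fisheye model performs this kind of correction once the camera is calibrated; the intrinsics below are placeholders, and the one-time map precomputation is what keeps the per-frame cost low for a fast-moving robot.

```python
import cv2
import numpy as np

# Placeholder intrinsics K and fisheye distortion coefficients D; in practice
# both come from a calibration of the actual fish-eye camera.
K = np.array([[300.0, 0.0, 320.0],
              [0.0, 300.0, 240.0],
              [0.0, 0.0, 1.0]])
D = np.array([-0.05, 0.01, 0.0, 0.0])  # k1..k4 of the equidistant model
size = (640, 480)

# Precompute the remap once; per-frame correction is then a fast cv2.remap.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, size, cv2.CV_16SC2)

frame = cv2.imread("fisheye_frame.png")            # hypothetical input frame
undistorted = cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
cv2.imwrite("undistorted.png", undistorted)
```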

Vision Inspection Module for Dimensional Measurement in a CMM with a Vision Probe (비젼프로브를 가지는 3차원 측정기를 위한 형상 측정 시스템 모듈 개발)

  • 이일환; 박희재; 김구영
    • Proceedings of the Korean Society of Precision Engineering Conference, 1995.10a, pp.379-383, 1995
  • In this paper, a vision inspection module for dimensional measurement has been developed. To achieve high accuracy on the CMM, camera calibration and edge detection with subpixel accuracy have been implemented. During measurement, the position of the vision probe is read into the PC over a serial link with the CMM controller. The developed vision inspection module can be widely applied to practical measurement processes.
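
Subpixel edge detection can be done several ways; the sketch below uses one common approach (assumed here, not taken from the paper): fit a parabola to the gradient magnitude around its integer peak and read the refined edge position off the vertex.

```python
import numpy as np

def subpixel_edge(profile):
    """Locate an edge in a 1-D intensity profile with subpixel accuracy.

    Fits a parabola to the gradient magnitude at its peak and its two
    neighbours; the vertex of the parabola is the refined edge position.
    """
    grad = np.abs(np.diff(profile.astype(float)))
    i = int(np.argmax(grad))
    if i == 0 or i == len(grad) - 1:
        return float(i)                # no neighbours to refine with
    y0, y1, y2 = grad[i - 1], grad[i], grad[i + 1]
    denom = y0 - 2.0 * y1 + y2
    shift = 0.0 if denom == 0 else 0.5 * (y0 - y2) / denom
    return i + shift + 0.5             # +0.5: gradient lies between pixels

# A smooth synthetic edge around x = 10.3
x = np.arange(20)
profile = 100.0 / (1.0 + np.exp(-(x - 10.3) * 2.0))
print(f"edge at {subpixel_edge(profile):.2f} px")   # close to 10.3
```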

Conversion of Camera Lens Distortions between Photogrammetry and Computer Vision (사진측량과 컴퓨터비전 간의 카메라 렌즈왜곡 변환)

  • Hong, Song Pyo; Choi, Han Seung; Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.37 no.4, pp.267-277, 2019
  • Photogrammetry and computer vision are identical in determining the three-dimensional coordinates of images taken with a camera, but the two fields are not directly compatible with each other due to differences in camera lens distortion modeling and camera coordinate systems. In general, drone images are processed by bundle block adjustment in computer-vision-based software, and the plotting of the imagery is then performed in photogrammetry-based software for mapping. In this case, the camera lens distortion model must be converted into the formulation used in photogrammetry. Therefore, this study describes the differences between the coordinate systems and lens distortion models used in photogrammetry and computer vision, and proposes a methodology for converting between them. To verify the conversion of the distortion models, lens distortions were first added to distortion-free virtual coordinates using computer-vision-based lens distortion models. The distortion coefficients were then determined using photogrammetry-based lens distortion models, the lens distortions were removed from the photo coordinates, and the result was compared with the original distortion-free virtual coordinates. The root-mean-square distance was within 0.5 pixels. In addition, epipolar images were generated to assess the accuracy of the converted photogrammetric lens distortion coefficients; the root-mean-square y-parallax was within 0.3 pixels.
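
A minimal numeric sketch of the verification idea, under simplifying assumptions the paper goes beyond (radial terms only, normalized coordinates, no tangential or coordinate-axis handling): distort points with a computer-vision model, fit photogrammetric odd-power coefficients by least squares, remove the fitted distortion, and measure the residual.

```python
import numpy as np

rng = np.random.default_rng(0)

# Distortion-free virtual coordinates on the normalized image plane.
xy = rng.uniform(-0.4, 0.4, size=(500, 2))
r = np.linalg.norm(xy, axis=1)

# Step 1: add distortion with a computer-vision (OpenCV-style) radial model,
# x_d = x * (1 + k1*r^2 + k2*r^4), using made-up coefficients.
k1, k2 = -0.10, 0.02
xy_d = xy * (1.0 + k1 * r**2 + k2 * r**4)[:, None]
r_d = np.linalg.norm(xy_d, axis=1)

# Step 2: fit a photogrammetric radial model, dr = K1*r_d^3 + K2*r_d^5,
# to the same displacements by linear least squares.
A = np.stack([r_d**3, r_d**5], axis=1)
(K1, K2), *_ = np.linalg.lstsq(A, r_d - r, rcond=None)

# Step 3: remove the fitted distortion from the "measured" coordinates and
# compare against the original distortion-free points.
dr = K1 * r_d**3 + K2 * r_d**5
safe_r = np.where(r_d > 0, r_d, 1.0)
xy_u = xy_d * ((r_d - dr) / safe_r)[:, None]
rms = np.sqrt(np.mean(np.sum((xy_u - xy) ** 2, axis=1)))
print(f"RMS residual after the round trip: {rms:.2e} (normalized units)")
```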

An Implementation of an FPGA-based Pattern Matching System for PCB Pattern Detection (PCB 패턴 검출을 위한 FPGA 기반 패턴 매칭 시스템 구현)

  • Jung, Kwang-Sung; Moon, Cheol-Hong
    • The Journal of the Korea institute of electronic communication sciences, v.11 no.5, pp.465-472, 2016
  • This study implemented an FPGA-based system to extract PCB patterns. The printed circuit boards produced today are increasingly fine and complex, so a vision system that can extract defects from fine patterns is increasingly important. This study built an FPGA-based system with high-speed processing for vision automation of the PCB production line, and the vision library used to extract defect patterns was also implemented as IPs to optimize the system. The implemented IPs are a Camera Link IP, pattern matching IP, VGA IP, edge extraction IP, and memory IP.
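
The pattern-matching IP itself is hardware, but a short software reference model conveys what such an IP computes. The sketch below assumes normalized cross-correlation against a golden pattern, a common choice for PCB inspection rather than the paper's documented method; file names are hypothetical.

```python
import cv2
import numpy as np

def match_pcb_pattern(board_img, pattern, threshold=0.9):
    """Software reference for a pattern-matching step: normalized
    cross-correlation of a golden pattern against the board image."""
    result = cv2.matchTemplate(board_img, pattern, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(result >= threshold)
    # Each (x, y) is a top-left corner where the pattern matches well;
    # locations where an expected pattern is missing indicate defects.
    return list(zip(xs.tolist(), ys.tolist()))

board = cv2.imread("pcb_board.png", cv2.IMREAD_GRAYSCALE)    # hypothetical files
golden = cv2.imread("pcb_pattern.png", cv2.IMREAD_GRAYSCALE)
hits = match_pcb_pattern(board, golden)
print(f"{len(hits)} pattern instances found")
```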

Dimension Measurement for Large-scale Moving Objects Using Stereo Camera with 2-DOF Mechanism (스테레오 카메라와 2축 회전기구를 이용한 대형 이동물체의 치수측정)

  • Cuong, Nguyen Huu; Lee, Byung Ryong
    • Journal of the Korean Society for Precision Engineering, v.32 no.6, pp.543-551, 2015
  • In this study, a novel method for the dimension measurement of large-scale moving objects using a stereo camera with a 2-degree-of-freedom (2-DOF) mechanism is presented. The proposed method combines the advantages of stereo vision with the enlarged visibility range that the 2-DOF rotary mechanism gives the camera. The measurement system employs a stereo camera on a 2-DOF rotary mechanism that can capture the separate corners of the measured object. The measuring algorithm has two main stages. First, the three-dimensional (3-D) positions of the corners are determined with stereo vision algorithms. Then, using the rotary angles of the 2-DOF mechanism, the object's dimensions are calculated via a coordinate transformation. The proposed system can measure the dimensions of objects moving at a relatively slow and steady speed, and experiments showed that it achieves high measuring accuracy.
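
A minimal sketch of the second stage under stated assumptions (pan about the base z-axis, tilt about the rotated y-axis, and no lever-arm offset between the rotation axes and the camera, which a real system must calibrate): corners triangulated in the camera frame are rotated into the base frame, where the dimension is a simple distance.

```python
import numpy as np

def pan_tilt_rotation(pan_rad, tilt_rad):
    """Rotation from the camera frame to the base frame for a pan-tilt unit
    (pan about the base z-axis, then tilt about the rotated y-axis)."""
    cp, sp = np.cos(pan_rad), np.sin(pan_rad)
    ct, st = np.cos(tilt_rad), np.sin(tilt_rad)
    Rz = np.array([[cp, -sp, 0], [sp, cp, 0], [0, 0, 1]])
    Ry = np.array([[ct, 0, st], [0, 1, 0], [-st, 0, ct]])
    return Rz @ Ry

def corner_in_base(p_cam, pan_rad, tilt_rad):
    """Transform a stereo-triangulated corner from camera to base coordinates."""
    return pan_tilt_rotation(pan_rad, tilt_rad) @ np.asarray(p_cam)

# Two corners of the object, each triangulated by the stereo pair while the
# mechanism pointed at it with different pan/tilt angles (made-up numbers).
pA = corner_in_base([0.1, 0.0, 4.0], np.deg2rad(-25), np.deg2rad(5))
pB = corner_in_base([-0.2, 0.1, 4.2], np.deg2rad(30), np.deg2rad(5))
print(f"measured dimension: {np.linalg.norm(pA - pB):.3f} m")
```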

Pose Estimation of Ground Test Bed using Ceiling Landmark and Optical Flow Based on Single Camera/IMU Fusion (천정부착 랜드마크와 광류를 이용한 단일 카메라/관성 센서 융합 기반의 인공위성 지상시험장치의 위치 및 자세 추정)

  • Shin, Ok-Shik; Park, Chan-Gook
    • Journal of Institute of Control, Robotics and Systems, v.18 no.1, pp.54-61, 2012
  • In this paper, a pose estimation method for a satellite GTB (Ground Test Bed) using a vision/MEMS IMU (Inertial Measurement Unit) integrated system is presented. The GTB, used to verify a satellite system on the ground, is similar to a mobile robot that has thrusters and a reaction wheel as actuators and floats on the floor on compressed air. An EKF (Extended Kalman Filter) fuses the MEMS IMU with a vision system consisting of a single camera and infrared LED ceiling landmarks. A fusion filter usually takes the image positions of feature points as its measurement, but this can cause position error due to the IMU bias when no camera image is available and the bias has not been properly estimated by the filter. Therefore, a fusion method is proposed that uses both the positions of the feature points and the camera velocity determined from their optical flow. Experiments verify that the proposed method is more robust to IMU bias than the method using feature-point positions alone.
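
A full camera/IMU EKF is beyond an abstract, but a one-axis toy filter (an illustration with assumed noise values, not the paper's filter) shows why adding an optical-flow velocity measurement helps keep the accelerometer bias observable:

```python
import numpy as np

dt = 0.01
# One-axis state: [position, velocity, accelerometer bias].
F = np.array([[1, dt, -0.5 * dt**2],
              [0, 1, -dt],
              [0, 0, 1]])
B = np.array([0.5 * dt**2, dt, 0.0])
Q = np.diag([1e-6, 1e-5, 1e-8])
H = np.array([[1.0, 0.0, 0.0],    # landmark position from the camera
              [0.0, 1.0, 0.0]])   # camera velocity from optical flow
R = np.diag([1e-4, 1e-4])

x = np.zeros(3)
P = np.eye(3)
rng = np.random.default_rng(1)
true_bias = 0.05                   # constant accelerometer bias [m/s^2]
pos = vel = 0.0

for _ in range(2000):
    accel_true = 0.1               # commanded thruster acceleration
    vel += accel_true * dt
    pos += vel * dt
    imu = accel_true + true_bias + rng.normal(0, 0.01)

    # EKF predict with the biased IMU as control input.
    x = F @ x + B * imu
    P = F @ P @ F.T + Q

    # EKF update with vision position and optical-flow velocity.
    z = np.array([pos + rng.normal(0, 0.01), vel + rng.normal(0, 0.01)])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(3) - K @ H) @ P

print(f"estimated bias: {x[2]:.3f} (true {true_bias})")
```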