• Title/Summary/Keyword: camera vision


Implementation of a system for detecting defects on optical fiber coating (Vision System을 이용한 광섬유 코팅 결함 검출 System 구현)

  • 서상일;최우창;김학일
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1996.10b / pp.796-799 / 1996
  • An optical fiber consists of a core, a cladding, and primary and secondary coatings. In this work we implemented a vision system that determines whether defects are present in the fiber coating and classifies their type and size. As preprocessing, edges are extracted with the Sobel operator from images acquired by a CCD camera, and a threshold value is applied to produce a binary image. To extract outer-diameter information, projection and mathematical morphology operations are performed, and a tree classifier is designed to classify defect type and size efficiently. Experimental results report the error rate for each defect type and the total error rate. (An illustrative preprocessing sketch follows this entry.)

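As a rough illustration of the preprocessing chain in this abstract (Sobel edge extraction, fixed-threshold binarization, and morphological cleanup), the following Python/OpenCV sketch can be used; the threshold value, kernel sizes, and the synthetic test frame are assumptions, not values from the paper.

```python
# Minimal sketch of the preprocessing steps described in the abstract:
# Sobel edge extraction, fixed-threshold binarization, and a morphological
# opening before defect classification. Parameters are illustrative only.
import cv2
import numpy as np

def preprocess_coating_image(gray, thresh_value=60):
    # Gradient magnitude from horizontal and vertical Sobel responses
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    mag = cv2.convertScaleAbs(cv2.magnitude(gx, gy))

    # Fixed threshold -> binary edge image
    _, binary = cv2.threshold(mag, thresh_value, 255, cv2.THRESH_BINARY)

    # Morphological opening to suppress isolated noise pixels
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    return cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

if __name__ == "__main__":
    # Synthetic stand-in for a CCD frame of the coated fiber
    frame = np.full((240, 320), 30, dtype=np.uint8)
    cv2.line(frame, (0, 120), (319, 120), 200, 5)   # bright fiber edge
    print(preprocess_coating_image(frame).sum())
```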

Vision-based Line Tracking and Steering Control of AGVs

  • Lee, Hyeon-Ho;Lee, Chang-Goo
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2001.10a / pp.180.4-180 / 2001
  • This paper describes a vision-based line-tracking system and a steering control scheme for an AGV. To detect the guideline quickly and accurately, we use four line points that complement and predict one another. The low-cost line-tracking system runs efficiently on PC-based real-time vision processing, and steering is handled by a controller driven by the guide-line angle and the line-point error. The method is tested on a typical AGV with a single camera in a laboratory environment. (An illustrative steering sketch follows this entry.)

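The steering idea above (a controller driven by the guide-line angle plus a line-point error) might be sketched as below; the gains, the choice of nearest and farthest line points, and the sign conventions are all assumptions.

```python
# Illustrative steering law combining guide-line angle and lateral
# line-point error, as outlined in the abstract. Gains are assumed.
import math

def steering_command(line_points, image_center_x, k_angle=1.0, k_offset=0.01):
    """line_points: (x, y) guide-line points ordered bottom-to-top in the image."""
    (x0, y0), (x1, y1) = line_points[0], line_points[-1]
    # Guide-line angle relative to the image vertical axis
    angle = math.atan2(x1 - x0, y0 - y1)
    # Lateral error of the nearest line point from the image centre
    offset = x0 - image_center_x
    return k_angle * angle + k_offset * offset

if __name__ == "__main__":
    pts = [(168, 230), (166, 180), (163, 130), (160, 80)]  # four line points
    print(steering_command(pts, image_center_x=160))
```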

A Study on DGPS Data Compensation Using a Vision System for an Autonomous Land Vehicle (자율 주행을 위한 비젼 시스템을 이용한 DGPS 데이터 보정에 관한 연구)

  • 문성룡;박장훈;정준익;장홍석;노도환
    • Proceedings of the IEEK Conference / 2002.06d / pp.279-282 / 2002
  • This paper uses a vision system of the kind employed in autonomous driving so that, even when DGPS information is missing or degraded by the surrounding environment, position data can still be obtained accurately in real time. Observations expressed in camera coordinates are converted to world coordinates, the DGPS coordinate error is corrected, and the vehicle position is determined accurately. (An illustrative camera-to-world conversion sketch follows this entry.)

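The abstract only outlines the idea of converting camera-frame observations into world coordinates to correct the DGPS fix. One simple, assumption-laden way to express such a camera-to-ground conversion is a planar homography, sketched here with OpenCV; the point correspondences and units are hypothetical.

```python
# Sketch of converting an image-plane observation to ground-plane (world)
# coordinates via a homography, the kind of camera-to-world conversion the
# abstract alludes to. All correspondences and values here are hypothetical.
import cv2
import numpy as np

# Four image points (pixels) and their known ground-plane positions (metres)
img_pts = np.array([[100, 400], [540, 400], [420, 250], [220, 250]], dtype=np.float32)
world_pts = np.array([[-1.0, 2.0], [1.0, 2.0], [1.0, 6.0], [-1.0, 6.0]], dtype=np.float32)

H, _ = cv2.findHomography(img_pts, world_pts)

def image_to_world(u, v):
    pt = np.array([[[u, v]]], dtype=np.float32)
    return cv2.perspectiveTransform(pt, H)[0, 0]   # (x, y) on the ground plane

if __name__ == "__main__":
    # A landmark seen at pixel (320, 300); its world-frame offset could be used
    # to correct a drifting DGPS position estimate.
    print(image_to_world(320, 300))
```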

A Study on Vision-based Steering Control for a Dual Motor Drive AGV (영상시스템을 이용한 이륜속도차방식 AGV 조향제어)

  • Lee, Hyeon-Ho;Lee, Chang-Goo;Kim, Sung-Jong
    • Proceedings of the KIEE Conference / 2001.07d / pp.2277-2279 / 2001
  • This paper describes a vision-based steering control method for an AGV that uses a dual motor drive. We propose an algorithm that detects the guideline quickly and accurately with real-time vision processing and controls the steering by assigning a control point (CP) in the input image. The method is tested on a dual-motor-drive AGV with a single camera in a laboratory environment. (An illustrative wheel-speed sketch follows this entry.)

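A minimal sketch of turning a control-point error into left and right wheel speeds for a dual-motor (differential-drive) AGV is given below; the gains, base speed, and error scaling are assumptions and are not taken from the paper.

```python
# Sketch of mapping a steering command to left/right wheel speeds for a
# dual-motor (differential-drive) AGV. The control-point error source and
# all gains/speeds here are assumptions for illustration.
def wheel_speeds(steer, base_speed=0.5, gain=0.3, max_speed=1.0):
    """steer > 0 turns right, steer < 0 turns left (arbitrary units)."""
    left = base_speed + gain * steer
    right = base_speed - gain * steer
    clamp = lambda v: max(-max_speed, min(max_speed, v))
    return clamp(left), clamp(right)

if __name__ == "__main__":
    # Control point displaced to the right of the image centre -> steer right
    cp_error_px = 24
    steer_cmd = 0.01 * cp_error_px
    print(wheel_speeds(steer_cmd))
```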

3D Feature Based Tracking using SVM

  • Kim, Se-Hoon;Choi, Seung-Joon;Kim, Sung-Jin;Won, Sang-Chul
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2004.08a / pp.1458-1463 / 2004
  • Tracking is one of the most important prerequisite tasks for many applications such as human-computer interaction through gesture and face recognition, motion analysis, visual servoing, augmented reality, industrial assembly, and robot obstacle avoidance. Many of these applications now require 3D information about objects in real time. 3D tracking is a difficult problem because explicit 3D information about objects in the scene is lost during the camera's image formation process. Many vision systems therefore use a stereo camera, especially for 3D tracking. 3D feature-based tracking (3DFBT), one of the 3D tracking approaches based on stereo vision, has many advantages over other tracking methods. Assuming the correspondence problem, a subproblem of 3DFBT, is solved, the tracking accuracy depends on the accuracy of camera calibration. Existing calibration methods, however, rely on an accurate camera model, so modelling error and sensitivity to lens distortion are embedded. This paper therefore proposes a 3D feature-based tracking method that uses an SVM to solve the reconstruction problem. (An illustrative learned-reconstruction sketch follows this entry.)

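The reconstruction idea (learning the mapping from stereo pixel coordinates to a 3D point instead of relying on an explicit calibrated camera model) can be illustrated with a generic support vector regressor; the scikit-learn SVR below and the synthetic pinhole data are stand-ins, not the paper's formulation.

```python
# Sketch of learning the stereo-pixel -> 3D-point mapping with a support
# vector machine instead of an explicit calibrated camera model.
# Uses scikit-learn SVR on synthetic data; not the paper's formulation.
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)

# Synthetic training set: (uL, v, uR, v) stereo pixels -> (X, Y, Z)
XYZ = rng.uniform([-1, -1, 2], [1, 1, 6], size=(500, 3))
f, b = 400.0, 0.12                     # assumed focal length (px) and baseline (m)
uL = f * (XYZ[:, 0] + b / 2) / XYZ[:, 2]
uR = f * (XYZ[:, 0] - b / 2) / XYZ[:, 2]
v = f * XYZ[:, 1] / XYZ[:, 2]
pixels = np.column_stack([uL, v, uR, v]) + rng.normal(0, 0.5, (500, 4))

model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0))
model.fit(pixels, XYZ)

print(model.predict(pixels[:1]), XYZ[0])   # reconstructed vs. true 3D point
```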

Human Detection in the Images of a Single Camera for a Corridor Navigation Robot (복도 주행 로봇을 위한 단일 카메라 영상에서의 사람 검출)

  • Kim, Jeongdae;Do, Yongtae
    • The Journal of Korea Robotics Society / v.8 no.4 / pp.238-246 / 2013
  • In this paper, a robot vision technique is presented to detect obstacles, particularly approaching humans, in the images acquired by a mobile robot that autonomously navigates a narrow building corridor. A single low-cost color camera is attached to the robot, and a trapezoidal area in front of the robot is set as a region of interest (ROI) in the camera image. The lower parts of a human, such as feet and legs, are first detected in the ROI from their appearance in real time as the distance between the robot and the human decreases. The human detection is then confirmed by detecting his or her face within a small search region specified above the part detected in the trapezoidal ROI. To increase the credibility of detection, a final decision about human detection is made only when a face is detected in two consecutive image frames. We tested the proposed method using images of various people in corridor scenes and obtained promising results. This method can be used by a vision-guided mobile robot to make a detour and avoid collision with a human during indoor navigation. (An illustrative confirmation-rule sketch follows this entry.)
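
The confirmation rule in this abstract (accept a human detection only when a face is found in two consecutive frames within a search region above the detected legs) might look like the following sketch; the Haar cascade face detector is a generic stand-in, and the region values are hypothetical.

```python
# Sketch of the confirmation rule described above: a detection is only
# accepted when a face is found in two consecutive frames inside a search
# region. The Haar cascade is a generic stand-in, not the paper's detector.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_in_region(gray_frame, region):
    x, y, w, h = region                    # search region above detected legs
    roi = gray_frame[y:y + h, x:x + w]
    return len(face_cascade.detectMultiScale(roi, 1.1, 4)) > 0

def confirm_human(frames, region):
    """Return True when a face is seen in two consecutive frames."""
    prev = False
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        cur = face_in_region(gray, region)
        if prev and cur:
            return True
        prev = cur
    return False

if __name__ == "__main__":
    blank = np.zeros((240, 320, 3), dtype=np.uint8)
    print(confirm_human([blank, blank], region=(100, 40, 120, 120)))  # False
```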

Inspection of combination quality for automobile steel balance weight using laser line projector and USB camera (레이저 선 프로젝터와 USB 카메라를 이용한 자동차용 철 밸런스 웨이트의 결합상태 검사)

  • Choi, Kyung Jin;Park, Se Je;Lim, Ho;Park, Chong Kug
    • Journal of the Semiconductor & Display Technology / v.12 no.1 / pp.15-21 / 2013
  • In this paper, a sensor system and an inspection algorithm for inspecting automobile steel balance weights are described. A steel balance weight is composed of a clip and a weight, which are joined by a press process; a defective part has a gap between the clip and the weight. To detect whether such a gap exists, the sensor system is simply configured with a laser line projector and a USB camera, which makes it possible to measure the height difference between the clip and weight areas. The laser line pattern projected onto the surface of a balance weight is captured by the USB camera. When a USB camera is used for machine vision, the wide-angle lens introduces barrel distortion into the captured image, so an image warping function is applied to correct the distortion. A simple image processing algorithm then extracts the laser line information, and the part is judged good or defective from the extracted information. (An illustrative undistortion and line-extraction sketch follows this entry.)
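
Two of the steps described above, undistorting the wide-angle USB-camera image and extracting the laser line, can be sketched as follows; the camera matrix, distortion coefficients, and brightest-pixel-per-column line extraction are placeholders rather than the paper's calibration or algorithm.

```python
# Sketch of undistorting the USB-camera image (correcting barrel distortion)
# and extracting the laser-line row for each column as the brightest pixel.
# Camera matrix and distortion values are placeholders, not calibration data.
import cv2
import numpy as np

K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])   # assumed intrinsics
dist = np.array([-0.25, 0.05, 0.0, 0.0, 0.0])                  # assumed barrel distortion

def extract_laser_line(image_bgr):
    undistorted = cv2.undistort(image_bgr, K, dist)
    gray = cv2.cvtColor(undistorted, cv2.COLOR_BGR2GRAY)
    # Row index of the brightest pixel in each column approximates the line
    return np.argmax(gray, axis=0)

if __name__ == "__main__":
    img = np.zeros((480, 640, 3), dtype=np.uint8)
    cv2.line(img, (0, 200), (639, 210), (255, 255, 255), 2)   # synthetic laser line
    profile = extract_laser_line(img)
    # A step or jump in this profile would indicate a height gap between clip and weight
    print(profile.min(), profile.max())
```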

Efficient Tracking of a Moving Object using Optimal Representative Blocks

  • Kim, Wan-Cheol;Hwang, Cheol-Ho;Lee, Jang-Myung
    • International Journal of Control, Automation, and Systems / v.1 no.4 / pp.495-502 / 2003
  • This paper focuses on the implementation of an efficient method for tracking a moving object with optimal representative blocks and a pan-tilt camera. The key idea derives from the fact that, as the image of a moving object shrinks in the frame according to the distance between the mobile robot's camera and the object, the tracking performance can be improved by reducing the size of the representative blocks according to the object's image size. Motion estimation using Edge Detection (ED) and the Block-Matching Algorithm (BMA) is commonly employed to track objects with vision sensors, but these methods often cannot keep up with real-time vision data because of their heavy computational load. In this paper, a representative block that significantly reduces the amount of data to be computed is defined, and it is optimized by changing its size according to the size of the object in the image frame in order to improve tracking performance. The proposed algorithm is verified experimentally using a two-degree-of-freedom active camera mounted on a mobile robot. (An illustrative block-matching sketch follows this entry.)
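
The core idea, shrinking the representative (template) block as the object's image size shrinks and then locating it by block matching, might be sketched as below; the block sizes, scaling rule, and matching criterion are illustrative assumptions.

```python
# Sketch of the core idea: shrink the representative (template) block as the
# tracked object gets smaller in the frame, then locate it by block matching.
# Sizes, scales and the matching criterion are illustrative assumptions.
import cv2
import numpy as np

def representative_block_size(object_size_px, base_block=16, base_object=120):
    # Scale the block with the apparent object size, with a lower bound
    return max(4, int(base_block * object_size_px / base_object))

def track_block(prev_frame, cur_frame, top_left, block):
    x, y = top_left
    template = prev_frame[y:y + block, x:x + block]
    result = cv2.matchTemplate(cur_frame, template, cv2.TM_SQDIFF_NORMED)
    _, _, min_loc, _ = cv2.minMaxLoc(result)
    return min_loc   # best-matching position in the current frame

if __name__ == "__main__":
    prev = np.random.randint(0, 255, (120, 160), dtype=np.uint8)
    cur = np.roll(prev, (3, 5), axis=(0, 1))          # scene shifted between frames
    blk = representative_block_size(object_size_px=60)
    print(track_block(prev, cur, (40, 40), blk))
```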

Omni Camera Vision-Based Localization for Mobile Robots Navigation Using Omni-Directional Images (옴니 카메라의 전방향 영상을 이용한 이동 로봇의 위치 인식 시스템)

  • Kim, Jong-Rok;Lim, Mee-Seub;Lim, Joon-Hong
    • Journal of Institute of Control, Robotics and Systems / v.17 no.3 / pp.206-210 / 2011
  • Vision-based robot localization is challenging because of the vast amount of visual information involved, which requires extensive storage and processing time. To deal with these challenges, we propose using features extracted from omni-directional panoramic images and present a localization method for a mobile robot equipped with an omni-directional camera. The core of the proposed scheme can be summarized as follows. First, we use an omni-directional camera that captures instantaneous 360° panoramic images around the robot. Second, nodes around the robot are identified from the correlation coefficients of the circular horizontal line between the landmark image and the currently captured image. Third, the robot position is determined from these locations by the proposed correlation-based landmark image matching. To accelerate the computation, node candidates are assigned using color information, and the correlation values are calculated with Fast Fourier Transforms. Experiments show that the proposed method is effective for global localization of mobile robots and robust to lighting variations. (An illustrative FFT-correlation sketch follows this entry.)
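
The landmark matching step, correlating circular horizontal-line signatures with FFT-based computation, can be sketched as follows; the signatures here are synthetic sinusoids and the normalization is an assumption.

```python
# Sketch of comparing two circular-horizon signatures (1-D intensity profiles
# sampled over 360 degrees) by circular cross-correlation computed with FFTs,
# as the abstract describes for landmark matching. Data are synthetic.
import numpy as np

def circular_correlation(signature_a, signature_b):
    """Normalized circular cross-correlation via FFT; returns best score and shift."""
    a = (signature_a - signature_a.mean()) / (signature_a.std() + 1e-9)
    b = (signature_b - signature_b.mean()) / (signature_b.std() + 1e-9)
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real / len(a)
    shift = int(np.argmax(corr))
    return corr[shift], shift   # score in [-1, 1], rotation offset in samples

if __name__ == "__main__":
    angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    stored = np.sin(3 * angles)                    # landmark node signature
    current = np.roll(stored, 40)                  # robot rotated by 40 samples (degrees)
    print(circular_correlation(current, stored))   # high score, shift = 40
```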

A Vision-based Position Estimation Method Using a Horizon (지평선을 이용한 영상기반 위치 추정 방법 및 위치 추정 오차)

  • Shin, Jong-Jin;Nam, Hwa-Jin;Kim, Byung-Ju
    • Journal of the Korea Institute of Military Science and Technology / v.15 no.2 / pp.169-176 / 2012
  • GPS (Global Positioning System) is widely used for estimating the position of an aerial vehicle. However, GPS may not be available due to hostile jamming or for strategic reasons. A vision-based position estimation method can be effective when GPS does not work properly. In mountainous areas without man-made landmarks, the horizon is a good feature for estimating the position of an aerial vehicle. In this paper, we present a new method to estimate the position of an aerial vehicle equipped with a forward-looking infrared camera. It is assumed that an INS (Inertial Navigation System) provides the attitudes of the aerial vehicle and the camera. The horizon extracted from an infrared image is compared with horizon models generated from a DEM (Digital Elevation Map). Because of the camera's narrow field of view, two images with different camera views are used to estimate a position. The algorithm is tested using real infrared images acquired on the ground, and the experimental results show that the method can be used to estimate the position of an aerial vehicle. (An illustrative horizon-matching sketch follows this entry.)
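
The matching step, scoring candidate positions by comparing the horizon profile extracted from the infrared image with horizon profiles generated from the DEM, might look like the sketch below; the profiles, candidate positions, and the mean-squared-error score are all illustrative assumptions.

```python
# Sketch of scoring candidate positions by comparing the horizon profile
# extracted from the infrared image with horizon profiles generated from a
# DEM at each candidate. Profiles and the error metric are illustrative.
import numpy as np

def horizon_error(observed, model):
    """Mean squared difference between two horizon elevation profiles."""
    return float(np.mean((observed - model) ** 2))

def best_candidate(observed, candidate_models):
    errors = {pos: horizon_error(observed, prof) for pos, prof in candidate_models.items()}
    return min(errors, key=errors.get), errors

if __name__ == "__main__":
    az = np.linspace(0, np.pi / 6, 64)              # narrow field of view (azimuth samples)
    observed = np.sin(az * 4) * 0.05 + 0.1           # extracted horizon elevations (radians)
    candidates = {
        (127.1, 37.5): observed + np.random.normal(0, 0.002, 64),   # near the true position
        (127.2, 37.6): observed + 0.03,                              # offset candidate
    }
    pos, errs = best_candidate(observed, candidates)
    print(pos)
```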