• Title/Summary/Keyword: camera vision


Development of Camera Calibration Technique Using Neural-Network (신경회로망을 이용한 카메라 보정기법 개발)

  • 한성현;왕한홍;장영희
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 1997.10a
    • /
    • pp.1617-1620
    • /
    • 1997
  • This paper describes neural-network-based camera calibration with a camera model that accounts for the major sources of camera distortion, namely radial, decentering, and thin prism distortion. Radial distortion causes an inward or outward displacement of a given image point from its ideal location. Actual optical systems are subject to various degrees of decentering; that is, the optical centers of the lens elements are not strictly collinear. Thin prism distortion arises from imperfections in lens design and manufacturing as well as in camera assembly. Our purpose is to develop a vision system for pattern recognition and the automatic testing of parts, and to apply it to the manufacturing line. The performance of the proposed camera calibration is illustrated by simulation and experiment.
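The distortion terms listed in this abstract follow the standard Brown-style lens model. A minimal sketch of that model (the coefficient names k1, k2, p1, p2, s1, s2 and their values are illustrative, not taken from the paper):

```python
# Sketch of the distortion model described above: radial, decentering
# (tangential), and thin-prism terms applied to an ideal normalized point.

def distort(x, y, k1=0.0, k2=0.0, p1=0.0, p2=0.0, s1=0.0, s2=0.0):
    """Map an ideal normalized image point (x, y) to its distorted location."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2           # radial: inward/outward shift
    dx = p1 * (r2 + 2 * x * x) + 2 * p2 * x * y     # decentering (tangential)
    dy = 2 * p1 * x * y + p2 * (r2 + 2 * y * y)
    xd = x * radial + dx + s1 * r2                  # thin prism adds s*r^2
    yd = y * radial + dy + s2 * r2
    return xd, yd

# A point on the optical axis is unaffected by radial distortion:
print(distort(0.0, 0.0, k1=0.1))   # (0.0, 0.0)
```

A neural-network calibration such as the one above would, in effect, learn this ideal-to-distorted mapping from data instead of fitting the coefficients directly.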


Detection of Calibration Patterns for Camera Calibration with Irregular Lighting and Complicated Backgrounds

  • Kang, Dong-Joong;Ha, Jong-Eun;Jeong, Mun-Ho
    • International Journal of Control, Automation, and Systems
    • /
    • v.6 no.5
    • /
    • pp.746-754
    • /
    • 2008
  • This paper proposes a method to detect calibration patterns for accurate camera calibration under the complicated backgrounds and uneven lighting conditions of industrial fields. To measure object dimensions, the preprocessing step of camera calibration must be able to extract calibration points from a calibration pattern. However, industrial sites requiring visual inspection rarely provide proper lighting conditions for calibrating the camera of a measurement system. In this paper, a probabilistic criterion is proposed to detect a local set of calibration points, which then guides the extraction of the remaining calibration points in a cluttered background under irregular lighting. Even if only a local part of the calibration pattern is visible, input data can be extracted for camera calibration. In experiments using real images, we verified that the method can be applied to camera calibration on poor-quality images obtained under uneven illumination and cluttered backgrounds.
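As a rough illustration of validating a local set of calibration points before growing the pattern, one can score a candidate neighborhood by the consistency of its point spacing. The paper's actual criterion is probabilistic; the score and threshold below are illustrative stand-ins:

```python
# Accept a candidate run of detected corners only if its spacing looks
# grid-like (low coefficient of variation), rejecting background clutter.
import math

def spacing_consistency(points):
    """Coefficient of variation of consecutive spacings (lower = more grid-like)."""
    d = [math.dist(points[i], points[i + 1]) for i in range(len(points) - 1)]
    mean = sum(d) / len(d)
    var = sum((x - mean) ** 2 for x in d) / len(d)
    return math.sqrt(var) / mean

def looks_like_pattern_row(points, tol=0.1):
    return spacing_consistency(points) < tol

grid_row = [(0, 0), (10, 0), (20, 1), (30, 0)]   # near-uniform spacing
clutter  = [(0, 0), (3, 7), (25, 2), (60, 40)]   # random background clutter
print(looks_like_pattern_row(grid_row), looks_like_pattern_row(clutter))  # True False
```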

A Camera Pose Estimation Method for Rectangle Feature based Visual SLAM (사각형 특징 기반 Visual SLAM을 위한 자세 추정 방법)

  • Lee, Jae-Min;Kim, Gon-Woo
    • The Journal of Korea Robotics Society
    • /
    • v.11 no.1
    • /
    • pp.33-40
    • /
    • 2016
  • In this paper, we propose a method for estimating the pose of the camera using a rectangle feature utilized in visual SLAM. A rectangle feature, warped into a quadrilateral in the image by the perspective transformation, is reconstructed by the Coupled Line Camera algorithm. To fully reconstruct the rectangle in real-world coordinates, the distance between the feature and the camera is needed; this distance can be measured using a stereo camera. Using properties of the line camera, the physical size of the rectangle feature can be deduced from that distance. The correspondence between the quadrilateral in the image and the rectangle in real-world coordinates then yields the relative pose between the camera and the feature through the homography. To evaluate the performance, we analyzed the results of the proposed method against reference poses in the Gazebo robot simulator.
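The pose recovery described above passes through the homography between a rectangle and its imaged quadrilateral. A minimal closed-form sketch of that mapping (a Heckbert-style unit-square-to-quadrilateral homography; this is not the Coupled Line Camera algorithm itself):

```python
# Homography H mapping unit-square corners (0,0),(1,0),(1,1),(0,1)
# to an arbitrary quadrilateral q, in closed form.

def square_to_quad(q):
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = q
    sx = x0 - x1 + x2 - x3
    sy = y0 - y1 + y2 - y3
    if sx == 0 and sy == 0:                      # parallelogram -> affine map
        return [[x1 - x0, x3 - x0, x0],
                [y1 - y0, y3 - y0, y0],
                [0.0, 0.0, 1.0]]
    dx1, dy1 = x1 - x2, y1 - y2
    dx2, dy2 = x3 - x2, y3 - y2
    den = dx1 * dy2 - dy1 * dx2
    g = (sx * dy2 - sy * dx2) / den
    h = (dx1 * sy - dy1 * sx) / den
    return [[x1 - x0 + g * x1, x3 - x0 + h * x3, x0],
            [y1 - y0 + g * y1, y3 - y0 + h * y3, y0],
            [g, h, 1.0]]

def apply(H, u, v):
    x, y, w = (H[i][0] * u + H[i][1] * v + H[i][2] for i in range(3))
    return x / w, y / w

H = square_to_quad([(0, 0), (1, 0), (2, 2), (0, 1)])
print(apply(H, 1, 1))   # ≈ (2.0, 2.0), the third quadrilateral corner
```

Decomposing such an H with known camera intrinsics is what recovers the relative pose between camera and rectangle.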

Landmark Initialization for Unscented Kalman Filter Sensor Fusion in Monocular Camera Localization

  • Hartmann, Gabriel;Huang, Fay;Klette, Reinhard
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.13 no.1
    • /
    • pp.1-11
    • /
    • 2013
  • The determination of the pose of the imaging camera is a fundamental problem in computer vision. In the monocular case, difficulties in determining the scene scale and the limitation to bearing-only measurements increase the difficulty in estimating camera pose accurately. Many mobile phones now contain inertial measurement devices, which may lend some aid to the task of determining camera pose. In this study, by means of simulation and real-world experimentation, we explore an approach to monocular camera localization that incorporates both observations of the environment and measurements from accelerometers and gyroscopes. The unscented Kalman filter was implemented for this task. Our main contribution is a novel approach to landmark initialization in a Kalman filter; we characterize the tolerance to noise that this approach allows.
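The unscented Kalman filter mentioned here rests on the unscented transform, which propagates a mean and variance through a nonlinear function via sigma points. A minimal one-dimensional sketch (standard sigma-point weights; the parameter values are illustrative, not from the paper):

```python
# Unscented transform in 1D: propagate (mean, var) through f via sigma points.
import math

def unscented_transform_1d(mean, var, f, alpha=1e-1, beta=2.0, kappa=0.0):
    n = 1
    lam = alpha ** 2 * (n + kappa) - n
    s = math.sqrt((n + lam) * var)
    sigmas = [mean, mean + s, mean - s]
    wm = [lam / (n + lam)] + [1 / (2 * (n + lam))] * 2           # mean weights
    wc = [wm[0] + (1 - alpha ** 2 + beta)] + wm[1:]              # cov weights
    ys = [f(x) for x in sigmas]
    y_mean = sum(w * y for w, y in zip(wm, ys))
    y_var = sum(w * (y - y_mean) ** 2 for w, y in zip(wc, ys))
    return y_mean, y_var

# For a linear function the transform is exact: y = 2x + 1
m, v = unscented_transform_1d(3.0, 0.5, lambda x: 2 * x + 1)
print(m, v)   # ~7.0, ~2.0  (mean 2*3+1, variance 2**2 * 0.5)
```

In the paper's setting the nonlinear function would be the bearing-only camera measurement model, and landmark initialization supplies the starting mean and covariance.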

Development of Camera Calibration Technique Using Neural-Network (뉴럴네트워크를 이용한 카메라 보정기법 개발)

  • 장영희
    • Proceedings of the Korean Society of Machine Tool Engineers Conference
    • /
    • 1997.10a
    • /
    • pp.225-229
    • /
    • 1997
  • This paper describes neural-network-based camera calibration with a camera model that accounts for the major sources of camera distortion, namely radial, decentering, and thin prism distortion. Radial distortion causes an inward or outward displacement of a given image point from its ideal location. Actual optical systems are subject to various degrees of decentering; that is, the optical centers of the lens elements are not strictly collinear. Thin prism distortion arises from imperfections in lens design and manufacturing as well as in camera assembly. Our purpose is to develop a vision system for pattern recognition and the automatic testing of parts, and to apply it to the manufacturing line. The performance of the proposed camera calibration is illustrated by simulation and experiment.


A New Linear Explicit Camera Calibration Method (새로운 선형의 외형적 카메라 보정 기법)

  • Do, Yongtae
    • Journal of Sensor Science and Technology
    • /
    • v.23 no.1
    • /
    • pp.66-71
    • /
    • 2014
  • Vision is the most important sensing capability for both humans and sensory smart machines, such as intelligent robots. The sensed real 3D world and its 2D camera image can be related mathematically by a process called camera calibration. In this paper, we present a novel linear solution for camera calibration. Unlike most existing linear calibration methods, the proposed technique can identify the camera parameters explicitly. Through its step-by-step procedure, the real physical elements of the perspective projection transformation matrix between 3D points and the corresponding 2D image points can be identified. This explicit solution will be useful for many practical 3D sensing applications, including robotics. We verified the proposed method using various cameras under different conditions.
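The "physical elements" of the perspective projection matrix referred to above are the intrinsics K and the extrinsics R, t in P = K[R | t]. A minimal forward sketch of that factorization (the numeric values are illustrative; the paper solves the inverse problem of recovering them):

```python
# Build P = K [R | t] and project a 3D point to pixel coordinates.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def project(K, R, t, X):
    """Project 3D point X to pixels via the 3x4 matrix P = K [R | t]."""
    Rt = [R[i] + [t[i]] for i in range(3)]       # 3x4 extrinsic matrix
    P = matmul(K, Rt)
    x, y, w = (sum(P[i][j] * v for j, v in enumerate(X + [1.0])) for i in range(3))
    return x / w, y / w

K = [[800.0, 0.0, 320.0],   # focal lengths fx, fy and principal point (cx, cy)
     [0.0, 800.0, 240.0],
     [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]   # identity rotation
t = [0.0, 0.0, 5.0]                                        # scene 5 units ahead

print(project(K, R, t, [1.0, 0.0, 0.0]))   # (480.0, 240.0): 320 + 800*1/5
```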

Measurement of two-dimensional vibration and calibration using the low-cost machine vision camera (저가의 머신 비전 카메라를 이용한 2차원 진동의 측정 및 교정)

  • Kim, Seo Woo;Ih, Jeong-Guon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.37 no.2
    • /
    • pp.99-109
    • /
    • 2018
  • The precision of vibration sensors, whether contact or non-contact, is usually satisfactory for practical measurement applications, but a single sensor is confined to measuring one point or one direction. Although the precision and frequency span of a low-cost camera are inferior to those of such sensors, it has merits in cost and in its capability of simultaneously measuring a large vibrating area; furthermore, a camera can measure multiple degrees of freedom of a vibrating object at once. In this study, the calibration method and the dynamic characteristics of a low-cost machine vision camera used as a sensor are studied, with the two-dimensional vibration of a cantilever beam as a demonstrating example. The planar image of the camera shot reveals two rectilinear motions and one rotational motion. The rectilinear vibration of a single point is first measured using the camera, and the camera is experimentally calibrated by computing the error with reference to LDV (Laser Doppler Vibrometer) measurements. Then, by measuring the motion of multiple points at once, the rotational vibration and the whole vibration motion of the cantilever beam are measured. The whole vibration motion of the beam is analyzed in both the time and frequency domains.
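One simple form of the camera-versus-LDV calibration described above is fitting a single scale factor (mm per pixel) between simultaneous displacement readings by least squares; the data below are synthetic, and the real procedure also characterizes the residual error:

```python
# Fit the gain s (mm per pixel) minimizing sum (mm - s*px)**2 over
# simultaneous camera and reference-sensor displacement readings.

def fit_scale(pixels, millimeters):
    num = sum(p * m for p, m in zip(pixels, millimeters))
    den = sum(p * p for p in pixels)
    return num / den

px = [0.0, 1.0, 2.0, -1.0, -2.0]        # camera-tracked displacement (pixels)
mm = [0.0, 0.05, 0.10, -0.05, -0.10]    # LDV reference displacement (mm)
s = fit_scale(px, mm)
print(s)   # ≈ 0.05 mm per pixel
```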

Development of Welding Quality Vision Inspection System for Sinking Seat (차량용 싱킹시트의 용접품질 비젼 검사 시스템 개발)

  • Yun, Sang-Hwan;Kim, Han-Jong;Moon, Sang-In;Kim, Sung-Gaun
    • Proceedings of the KSME Conference
    • /
    • 2007.05a
    • /
    • pp.1553-1558
    • /
    • 2007
  • This paper presents a vision-based autonomous inspection system for the welding quality control of car sinking seats. To overcome the precision errors that arise from visual inspection by an operator in the manufacturing process of a car sinking seat, this paper proposes the MVWQC (machine vision based welding quality control) system. The system consists of a CMOS camera and an NI machine vision system, and its image processing software was developed using the NI Vision Builder. The geometry of the welding bead, which is the welding quality criterion, is measured from the captured image with a median filter applied to it. Experiments were performed to verify the proposed MVWQC system for the car sinking seat.
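The abstract mentions measuring bead geometry on a median-filtered image. A minimal one-dimensional sketch of that denoising step (real systems filter the 2D image; the window size here is illustrative):

```python
# Sliding-window median filter: suppresses impulse noise while
# preserving edges better than a mean filter.
from statistics import median

def median_filter(signal, k=3):
    """Replace each sample by the median of its k-sample neighborhood."""
    r = k // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - r), min(len(signal), i + r + 1)
        out.append(median(signal[lo:hi]))
    return out

row = [10, 10, 200, 10, 10, 10]   # one image row with an impulse-noise spike
print(median_filter(row))         # the spike at index 2 is suppressed
```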


The development of a micro robot system for robot soccer game (로봇 축구 대회를 위한 마이크로 로봇 시스템의 개발)

  • 이수호;김경훈;김주곤;조형석
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 1996.10b
    • /
    • pp.507-510
    • /
    • 1996
  • In this paper we present a multi-agent robot system developed to participate in a micro-robot soccer tournament. The system consists of micro robots, a vision system, a host computer, and a communication module. The micro robots are equipped with two mini DC motors with encoders and gearboxes, an R/F receiver, a CPU, and infrared sensors for obstacle detection. The vision system, composed of a color CCD camera and a vision processing unit, recognizes the position of the ball and the opponent robots, and the position and orientation of our robots. The host computer, a Pentium PC, receives information from the vision system, generates commands for each robot using a robot-management algorithm, and transmits the commands to the robots through the R/F communication module. To achieve a given mission in a micro-robot soccer game, cooperative behavior among the robots is essential; this cooperation between individual agents is achieved through the commands of the host computer.
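The host computer described above turns vision data into per-robot wheel commands. A minimal sketch of one such command generator for a differential-drive robot (the gains and speeds are illustrative, not taken from the paper):

```python
# Steer a differential-drive robot toward the ball: forward speed plus a
# turn term proportional to the heading error toward the ball.
import math

def drive_command(robot_xy, robot_heading, ball_xy, v=0.3, k_turn=1.0):
    """Return (left, right) wheel speeds steering the robot toward the ball."""
    dx = ball_xy[0] - robot_xy[0]
    dy = ball_xy[1] - robot_xy[1]
    bearing = math.atan2(dy, dx)
    err = math.atan2(math.sin(bearing - robot_heading),
                     math.cos(bearing - robot_heading))   # wrap to [-pi, pi]
    turn = k_turn * err
    return v - turn, v + turn

left, right = drive_command((0, 0), 0.0, (1, 1))
print(left < right)   # ball is to the left, so the right wheel spins faster
```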


Effective Real Time Tracking System using Stereo Vision

  • Lee, Hyun-Jin;Kuc, Tae-Young
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2001.10a
    • /
    • pp.70.1-70
    • /
    • 2001
  • Recently, research on visual control has become more essential in robotic applications, and acquiring 3D information from 2D images has grown more important with the development of vision systems. For this application, we propose an effective way of controlling a stereo vision tracking system that tracks a target and calculates the distance between the target and the camera. In this paper we present an improved controller using a dual-loop visual servo, which is more effective than a single-loop visual servo for a stereo vision tracking system. Both speed and accuracy are important for realizing real-time tracking. However, vision processing alone is too slow to track an object in real time, so we use additional feedback data from the controller, which provides state feedback ...
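The dual-loop structure described here lets a fast inner loop track a setpoint from state feedback while the slow vision loop only refreshes that setpoint every few inner-loop ticks. A toy simulation sketch (the plant model, rates, and gains are illustrative, not from the paper):

```python
# Dual-loop servo toy model: the inner state-feedback loop runs every tick,
# while the vision loop updates the setpoint only once per `vision_every` ticks.

def run(target=1.0, ticks=200, vision_every=20, kp_inner=0.5):
    pos = 0.0
    setpoint = 0.0
    for t in range(ticks):
        if t % vision_every == 0:            # slow vision feedback: new setpoint
            setpoint = target
        pos += kp_inner * (setpoint - pos)   # fast inner loop: state feedback
    return pos

print(abs(run() - 1.0) < 1e-3)   # converges to the visually measured target
```

The design point is that the fast inner loop keeps the plant stable between the infrequent vision measurements, which is what makes real-time tracking feasible despite slow vision processing.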
