• Title/Summary/Keyword: Frame Camera

Camera calibration parameters estimation using perspective variation ratio of grid type line widths (격자형 선폭들의 투영변화비를 이용한 카메라 교정 파라메터 추정)

  • Jeong, Jun-Ik;Choi, Seong-Gu;Rho, Do-Hwan
    • Proceedings of the KIEE Conference / 2004.11c / pp.30-32 / 2004
  • In 3-D vision measurement, camera calibration is necessary to calculate parameters accurately. Camera calibration methods have developed broadly along two lines: the first establishes reference points in space, and the second uses a grid-type frame and a statistical method. However, the former makes it difficult to set up the reference points and the latter has low accuracy. In this paper we present an algorithm for camera calibration that uses the perspective ratio of a grid-type frame with different line widths. It can easily estimate camera calibration parameters such as lens distortion, focal length, scale factor, pose, orientation, and distance. The advantage of this algorithm is that it can estimate the distance to the object, so the proposed calibration method can also estimate distance in dynamic environments such as autonomous navigation. To validate the proposed method, we set up experiments with the frame on a rotator at distances of 1, 2, 3, and 4 m from the camera and rotated the frame from -60 to 60 degrees. Both computer simulation and real data were used to test the proposed method, and very good results were obtained. We investigated the distance error caused by the scale factor or by the different line widths and experimentally found an average scale factor that gives the least distance error for each image; this average scale factor fluctuates only slightly and reduces the distance error. Compared with classical methods that use a stereo camera or two or three orthogonal planes, the proposed method is easy to use and flexible. It advances camera calibration one step further from static environments toward real-world applications such as autonomous land vehicles.
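
The core relation behind the method, that a grid line of known physical width appears narrower the farther the frame is from the camera, can be illustrated with a minimal pinhole-model sketch in Python. The focal length, line width, and function names below are illustrative assumptions, not the authors' full algorithm (which also handles lens distortion, pose, and the scale factor):

```python
import numpy as np

def projected_width_px(focal_px, real_width_m, distance_m):
    """Pinhole model: a fronto-parallel line of physical width W at depth Z
    projects to roughly focal_px * W / Z pixels."""
    return focal_px * real_width_m / distance_m

def distance_from_width(focal_px, real_width_m, measured_width_px):
    """Invert the same relation to recover the distance to the grid frame."""
    return focal_px * real_width_m / measured_width_px

# Hypothetical numbers: 800 px focal length, 2 cm wide grid line imaged 16 px wide.
print(distance_from_width(800.0, 0.02, 16.0))   # -> 1.0 (metres)
```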

The Camera Calibration Parameters Estimation using The Projection Variations of Line Widths (선폭들의 투영변화율을 이용한 카메라 교정 파라메터 추정)

  • Jeong, Jun-Ik;Moon, Sung-Young;Rho, Do-Hwan
    • Proceedings of the KIEE Conference / 2003.07d / pp.2372-2374 / 2003
  • In 3-D vision measurement, camera calibration is necessary to calculate parameters accurately. Camera calibration methods have developed broadly along two lines: the first establishes reference points in space, and the second uses a grid-type frame and a statistical method. However, the former makes it difficult to set up the reference points and the latter has low accuracy. In this paper we present an algorithm for camera calibration that uses the perspective ratio of a grid-type frame with different line widths. It can easily estimate camera calibration parameters such as focal length, scale factor, pose, orientation, and distance, although radial lens distortion is not modeled. The advantage of this algorithm is that it can estimate the distance to the object, so the proposed calibration method can also estimate distance in dynamic environments such as autonomous navigation. To validate the proposed method, we set up experiments with the frame on a rotator at distances of 1, 2, 3, and 4 m from the camera and rotated the frame from -60 to 60 degrees. Both computer simulation and real data were used to test the proposed method, and very good results were obtained. We investigated the distance error caused by the scale factor or by the different line widths and experimentally found an average scale factor that gives the least distance error for each image. It advances camera calibration one step further from static environments toward real-world applications such as autonomous land vehicles.
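
Because the frame in both papers is rotated on a turntable, the perspective variation of two identical line widths can also hint at the frame's rotation angle. The sketch below reuses the pinhole relation above and is only an illustration under simplifying assumptions (equal stroke widths, rotation about a vertical axis, stroke-width foreshortening ignored); it is not the authors' estimator:

```python
import numpy as np

def depth_from_width(focal_px, real_width_m, measured_width_px):
    # Same pinhole relation as before: Z = f * W / w.
    return focal_px * real_width_m / measured_width_px

def frame_rotation_deg(focal_px, real_width_m, w_left_px, w_right_px, spacing_m):
    """Two identical grid lines spaced `spacing_m` apart on the frame plane:
    rotating the frame moves one line closer and the other farther, so their
    projected widths differ; the depth difference yields the rotation angle."""
    z_left = depth_from_width(focal_px, real_width_m, w_left_px)
    z_right = depth_from_width(focal_px, real_width_m, w_right_px)
    ratio = np.clip((z_right - z_left) / spacing_m, -1.0, 1.0)
    return float(np.degrees(np.arcsin(ratio)))

# Hypothetical widths of 16.4 px and 15.6 px for 2 cm strokes 0.5 m apart.
print(frame_rotation_deg(800.0, 0.02, 16.4, 15.6, 0.5))   # ~5.7 degrees
```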

Development of Single-Frame PIV Velocity Field Measurement Technique Using a High Resolution CCD Camera (고해상도 CCD카메라를 이용한 Single-Frame PIV 속도장 측정기법 개발)

  • Lee, Sang-Joon;Shin, Dae-Sig
    • Transactions of the Korean Society of Mechanical Engineers B / v.24 no.1 / pp.21-28 / 2000
  • Although commercial PIV systems have been widely used for non-intrusive velocity field measurement of fluid flows, they are still under development and have considerable room for improvement. In this study, a single-frame double-exposure PIV system using a high-resolution CCD camera was developed. A pulsed Nd:YAG laser and the high-resolution CCD camera were synchronized by an in-house control circuit. In order to resolve the directional ambiguity problem encountered in the single-frame PIV technique, the second particle image was shifted in the CCD sensor array during the time interval dt. The velocity vector field was determined by calculating the displacement vector at each interrogation window using cross-correlation with 50% overlap. In order to check the effect of the spatial resolution of the CCD camera on the accuracy of PIV velocity field measurement, the developed PIV system was applied with three resolution modes of the CCD camera (512 ${\times}$ 512, 1K ${\times}$ 1K, 2K ${\times}$ 2K) to a turbulent flow that simulates the Zn plating process of a steel strip. The experimental model consists of a snout and a moving belt. Aluminum flakes about $1{\mu}m$ in diameter were used as scattering particles for the liquid flow in the zinc pot, and the gas flow above the zinc surface was seeded with atomized olive oil with an average diameter of 1-$3{\mu}m$. Velocity field measurements were carried out at the strip speed $V_s$=1.0 m/s. The 2K ${\times}$ 2K high-resolution PIV technique was significantly superior to the lower-resolution PIV systems. With the 512 ${\times}$ 512 and 1K ${\times}$ 1K pixel-resolution PIV systems, it was difficult to resolve the viscous flow structure near the wall and the small vortex structures in regions of large velocity gradient.
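
The interrogation-window processing described above can be sketched with a plain FFT-based cross-correlation in Python/NumPy. This is a minimal single-pass version under assumed window sizes; the actual system adds the image-shifting step for directional ambiguity, sub-pixel peak fitting, and vector validation:

```python
import numpy as np

def piv_displacement(window_a, window_b):
    """Estimate the dominant particle displacement between two interrogation
    windows via FFT-based cross-correlation (integer-pixel accuracy only)."""
    a = window_a - window_a.mean()
    b = window_b - window_b.mean()
    corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b), s=a.shape)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap circular indices so displacements can be negative.
    dy = peak[0] if peak[0] <= a.shape[0] // 2 else peak[0] - a.shape[0]
    dx = peak[1] if peak[1] <= a.shape[1] // 2 else peak[1] - a.shape[1]
    return dx, dy

def piv_field(img_a, img_b, win=32):
    """Scan both frames with 50%-overlapping interrogation windows and return
    a list of (x, y, dx, dy) velocity vectors in pixel units."""
    step = win // 2
    vectors = []
    for y in range(0, img_a.shape[0] - win + 1, step):
        for x in range(0, img_a.shape[1] - win + 1, step):
            dx, dy = piv_displacement(img_a[y:y+win, x:x+win].astype(float),
                                      img_b[y:y+win, x:x+win].astype(float))
            vectors.append((x + win // 2, y + win // 2, dx, dy))
    return vectors
```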

Golf Green Slope Estimation Using a Cross Laser Structured Light System and an Accelerometer

  • Pham, Duy Duong;Dang, Quoc Khanh;Suh, Young Soo
    • Journal of Electrical Engineering and Technology / v.11 no.2 / pp.508-518 / 2016
  • In this paper, we propose a method combining an accelerometer with a cross structured light system to estimate the golf green slope. The cross-line laser provides two laser planes whose equations are computed with respect to the camera coordinate frame using a least-squares optimization. By capturing the projections of the cross-line laser on the golf slope in a static pose with a camera, two 3D curves are approximated as high-order polynomials in the camera coordinate frame. The curve functions are then expressed in the world coordinate frame using a rotation matrix estimated from the accelerometer's output. The curves provide important information about the green, such as its height and slope angle. The accuracy of the curve estimation is verified through experiments that use an OptiTrack camera system as a ground-truth reference.
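
The two rotation-and-fit steps, aligning the camera frame with gravity using the accelerometer and fitting a polynomial to the laser curve, can be sketched as below. The gravity-alignment construction and the function names are assumptions for illustration (yaw is unobservable from an accelerometer alone), not the paper's exact estimator:

```python
import numpy as np

def rotation_from_accel(accel):
    """Build a rotation that maps the measured gravity direction onto the
    world -Z axis (yaw is left at zero)."""
    g = -accel / np.linalg.norm(accel)          # gravity direction in sensor frame
    z_world = np.array([0.0, 0.0, -1.0])
    v = np.cross(g, z_world)
    c = float(np.dot(g, z_world))
    if np.isclose(c, -1.0):                     # 180-degree corner case
        return np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)  # Rodrigues alignment formula

def slope_profile(points_cam, accel, order=4):
    """Rotate 3D laser-curve points into the gravity-aligned world frame and
    fit a polynomial height profile z(x) along the slope."""
    pts_w = (rotation_from_accel(accel) @ points_cam.T).T
    return np.poly1d(np.polyfit(pts_w[:, 0], pts_w[:, 2], order))
```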

Offline Camera Movement Tracking from Video Sequences

  • Dewi, Primastuti;Choi, Yeon-Seok;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2011.05a / pp.69-72 / 2011
  • In this paper, we propose a method to track the movement of the camera from a video sequence. This method is useful for video analysis and can be applied as a pre-processing step in applications such as video stabilization and marker-less augmented reality. First, we extract features in each frame using corner point detection. The features in the current frame are then compared with the features in adjacent frames to calculate the optical flow, which represents the relative movement of the camera. The optical flow is then analyzed to obtain the camera movement parameters. The final step is camera movement estimation and correction to increase the accuracy. The performance of the method is verified by generating a 3D map of the camera movement and embedding a 3D object into the video. The examples demonstrated in this paper show that the method has high accuracy and rarely produces jitter.
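
A rough equivalent of the corner-plus-optical-flow pipeline can be written with OpenCV as below; the paper predates some of these OpenCV functions, so treat this as an illustrative substitute rather than the authors' implementation:

```python
import cv2
import numpy as np

def camera_motion_between(prev_gray, curr_gray):
    """Track corner features with pyramidal Lucas-Kanade optical flow and fit
    a similarity transform describing the camera motion between two frames."""
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                      qualityLevel=0.01, minDistance=8)
    moved, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, corners, None)
    good = status.ravel() == 1
    # RANSAC rejects features on independently moving foreground objects.
    matrix, _ = cv2.estimateAffinePartial2D(corners[good], moved[good])
    dx, dy = matrix[0, 2], matrix[1, 2]
    angle = float(np.degrees(np.arctan2(matrix[1, 0], matrix[0, 0])))
    return dx, dy, angle   # per-frame camera translation (px) and rotation (deg)
```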

Calibration of Structured Light Vision System using Multiple Vertical Planes

  • Ha, Jong Eun
    • Journal of Electrical Engineering and Technology / v.13 no.1 / pp.438-444 / 2018
  • Structured light vision systems have been widely used in 3D surface profiling. Usually, such a system is composed of a camera and a laser that projects a line onto the target. Calibration is necessary to acquire 3D information with a structured light stripe vision system. Conventional calibration algorithms find the pose of the camera and the equation of the laser's stripe plane in the same camera coordinate system, so 3D reconstruction is only possible in the camera frame. In most cases this is sufficient for the given task, but these algorithms require multiple images acquired under different poses. In this paper, we propose a calibration algorithm that works from just one shot. The proposed algorithm also provides 3D reconstruction in both the camera and laser frames, using a newly designed calibration structure with multiple vertical planes on the ground plane. The ability to reconstruct in both the camera and laser frames gives more flexibility for applications, and the proposed algorithm also improves the accuracy of the 3D reconstruction.
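
The reconstruction step common to structured light stripe systems, intersecting the back-projected pixel ray with the calibrated laser plane, can be sketched as follows. The plane is written as aX + bY + cZ + d = 0 in the camera frame; the calibration of that plane and of the intrinsic matrix K is exactly what the calibration procedure provides and is assumed given here:

```python
import numpy as np

def triangulate_stripe_point(pixel, K, plane):
    """Intersect the camera ray through `pixel` with the calibrated laser
    plane a*X + b*Y + c*Z + d = 0 (all expressed in the camera frame)."""
    u, v = pixel
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # back-projected ray direction
    a, b, c, d = plane
    t = -d / (a * ray[0] + b * ray[1] + c * ray[2])  # ray parameter at the plane
    return t * ray                                   # 3D point on the laser stripe
```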

Face Detection Algorithm for Video Conference Camera Control (화상회의 카메라 제어를 위한 안면 검출 알고리듬)

  • 온승엽;박재현;박규식;이준희
    • Proceedings of the IEEK Conference / 2000.06d / pp.218-221 / 2000
  • In this paper, we propose a new algorithm to detect human faces for controlling a camera used in video conferencing. We model the distribution of skin color and set up a standard skin color in the YIQ color space. An input video frame is segmented into skin and non-skin segments by comparing the standard skin color with each pixel of the input frame. Then, a shape filter is applied to select face segments from the skin segments. Our algorithm detects human faces in real time so that the camera can be controlled to capture a face at a proper size and position.
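
A minimal version of the skin-segmentation step might look like the sketch below: convert RGB to YIQ with the standard NTSC matrix and threshold the I channel. The numeric range is a placeholder; the paper derives its standard skin color from a modelled skin-color distribution:

```python
import numpy as np

# Standard NTSC RGB -> YIQ conversion matrix.
RGB_TO_YIQ = np.array([[0.299,  0.587,  0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523,  0.312]])

def skin_mask_yiq(rgb_image, i_range=(15.0, 90.0)):
    """Convert an RGB frame (H x W x 3, 0-255) to YIQ and threshold the
    I (in-phase) channel, where skin tones tend to cluster; the range here
    is a guess rather than the paper's modelled values."""
    yiq = rgb_image.astype(float) @ RGB_TO_YIQ.T
    i_chan = yiq[..., 1]
    return (i_chan >= i_range[0]) & (i_chan <= i_range[1])
```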

Dynamic Mosaic based Compression (동적 모자이크 기반의 압축)

  • 박동진;김동규;정영기
    • Proceedings of the IEEK Conference / 2003.07e / pp.1944-1947 / 2003
  • In this paper, we propose a dynamic mosaic-based compression system that builds a mosaic background and transmits only the change information. A dynamic mosaic of the background is progressively integrated into a single image using the camera motion information. For camera motion estimation, we calculate affine motion parameters for each frame sequentially with respect to its previous frame. The camera motion is robustly estimated on the background by discriminating between background and foreground regions; a modified block-based motion estimation is used to separate the background region.
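
A compressed sketch of the frame-to-frame affine estimation and mosaic integration is given below, using OpenCV feature tracking in place of the paper's modified block-based motion estimation; composing the per-frame affines into the cumulative mosaic transform is left to the caller:

```python
import cv2
import numpy as np

def frame_to_frame_affine(prev_gray, curr_gray):
    """Affine motion of the current frame relative to the previous one,
    estimated from tracked background features with RANSAC."""
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, 300, 0.01, 10)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts_prev, None)
    ok = status.ravel() == 1
    affine, _ = cv2.estimateAffine2D(pts_curr[ok], pts_prev[ok], method=cv2.RANSAC)
    return affine   # 2x3 matrix mapping current-frame pixels into previous-frame coords

def integrate_into_mosaic(mosaic, curr_gray, cumulative_affine):
    """Warp the current frame through the accumulated motion so it lands in
    mosaic coordinates, then fill only the pixels the mosaic does not yet cover."""
    warped = cv2.warpAffine(curr_gray, cumulative_affine,
                            (mosaic.shape[1], mosaic.shape[0]))
    return np.where(mosaic == 0, warped, mosaic)
```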

Design and Implementation of Motion Tracking Based on Double Difference with PTZ Control (PTZ 제어에 의한 이중차영상 기반의 움직임 추적 시스템의 설계 및 구현)

  • Yang Geum-Seok;Yang Seung Min
    • The KIPS Transactions:PartB / v.12B no.3 s.99 / pp.301-312 / 2005
  • Three different cases should be considered for motion tracking: a moving object with a fixed camera, a fixed object with a moving camera, and a moving object with a moving camera. Two methods are widely used for motion tracking: the optical flow method and the frame difference method. The optical flow method is mainly used when either the object or the camera is fixed. It tracks the object using a time-space vector that compares the object position frame by frame. This method requires heavy computation and is not suitable for real-time monitoring systems such as a DVR (Digital Video Recorder). The frame difference method is used for a moving object with a fixed camera. It tracks the object by comparing the differences between background images. This method is good for real-time applications because the computation is light; however, it is not applicable if the camera is moving. This thesis proposes and implements a motion tracking system using the frame difference method with PTZ (Pan-Tilt-Zoom) control, so it can be used for a moving object with a moving camera. Since the frame difference method is used, the system is suitable for real-time applications such as a DVR.
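
The double-difference idea named in the title, combined with a proportional PTZ command, can be sketched as follows; thresholds, gain, and function names are illustrative placeholders rather than the thesis' implementation:

```python
import cv2
import numpy as np

def double_difference_mask(prev_gray, curr_gray, next_gray, thresh=25):
    """Double (three-frame) differencing: a pixel is flagged as moving only if
    it differs both from the previous frame and from the next one, which
    suppresses the ghost left behind by a moving object."""
    d1 = cv2.absdiff(curr_gray, prev_gray)
    d2 = cv2.absdiff(next_gray, curr_gray)
    return cv2.bitwise_and((d1 > thresh).astype(np.uint8),
                           (d2 > thresh).astype(np.uint8))

def pan_tilt_command(mask, frame_shape, gain=0.05):
    """Turn the motion centroid into a proportional pan/tilt command that
    re-centres the object (gain and units are placeholders)."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return 0.0, 0.0
    err_x = xs.mean() - frame_shape[1] / 2.0
    err_y = ys.mean() - frame_shape[0] / 2.0
    return gain * err_x, gain * err_y
```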

REAL-TIME DETECTION OF MOVING OBJECTS IN A ROTATING AND ZOOMING CAMERA

  • Li, Ying-Bo;Cho, Won-Ho;Hong, Ki-Sang
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.71-75 / 2009
  • In this paper, we present a real-time method to detect moving objects with a rotating and zooming camera. It is useful for surveillance with a fixed but rotating camera, a camera on a moving car, and so on. We first compensate the global motion and then exploit the displaced frame difference (DFD) to find block-wise boundaries. For robust detection, we propose an image that combines the detections from consecutive frames. We use block-wise detection to achieve real-time speed, except for the pixel-wise DFD. In addition, a fast block-matching algorithm is proposed to obtain local motions and then the global affine motion. In the experimental results, we demonstrate that the proposed algorithm can handle real-time detection of common objects, small objects, multiple objects, objects in low-contrast environments, and objects under a zooming camera.
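
A simplified version of the global-compensation-plus-DFD detection might look like the sketch below. It substitutes feature tracking and a RANSAC homography for the paper's fast block matching and affine model, so it illustrates the structure rather than the exact algorithm:

```python
import cv2
import numpy as np

def block_dfd_detection(prev_gray, curr_gray, block=16, thresh=20.0):
    """Compensate the global (rotation/zoom) camera motion, then flag blocks
    whose displaced frame difference remains large; those blocks belong to
    independently moving objects."""
    pts = cv2.goodFeaturesToTrack(prev_gray, 500, 0.01, 8)
    moved, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = st.ravel() == 1
    H, _ = cv2.findHomography(pts[ok], moved[ok], cv2.RANSAC, 3.0)
    h, w = prev_gray.shape
    compensated = cv2.warpPerspective(prev_gray, H, (w, h))
    dfd = cv2.absdiff(curr_gray, compensated).astype(float)
    mask = np.zeros((h // block, w // block), bool)
    for by in range(mask.shape[0]):
        for bx in range(mask.shape[1]):
            mask[by, bx] = dfd[by*block:(by+1)*block,
                               bx*block:(bx+1)*block].mean() > thresh
    return mask   # True marks blocks containing independently moving objects
```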
