• Title/Summary/Keyword: Frame Camera


A Study on a Development of a Measurement Technique for Diffusion of Oil Spill in the Ocean (디지털 화상처리에 의한 해양유출기름확산 계측기법개발에 관한 연구)

  • 이중우;강신영;도덕희;김기철
    • Journal of Korean Port Research
    • /
    • v.12 no.2
    • /
    • pp.291-302
    • /
    • 1998
  • A digital image processing technique is presented that can obtain the velocity vector distribution on the surface of oil spilt in the ocean without contacting the flow itself. The technique is based on PIV (Particle Image Velocimetry), and its system mainly consists of a highly sensitive camera, a CCD camera, an image grabber, and a host computer in which an image processing algorithm is adopted for velocity vector acquisition. For acquiring the advective velocity vectors of floating matter on the ocean, a new multi-frame tracking algorithm is proposed, and for acquiring the diffusion velocity vector distribution of the oil spilt onto the water surface, a highly sensitive gray-level cross-correlation algorithm is proposed.
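
The gray-level cross-correlation step mentioned in this abstract can be illustrated with a minimal sketch (not the authors' implementation): the displacement of one interrogation window between two consecutive gray-level frames is found at the peak of a normalized cross-correlation surface. The function name, window size, and search radius below are assumptions for illustration.

```python
import numpy as np

def window_displacement(frame_a, frame_b, top, left, win=32, search=8):
    """Estimate the (dy, dx) shift of a win x win window between two
    gray-level frames by normalized cross-correlation over a +/- search
    pixel neighborhood (a simplified PIV-style interrogation)."""
    a = frame_a[top:top + win, left:left + win].astype(float)
    a = (a - a.mean()) / (a.std() + 1e-9)
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            b = frame_b[top + dy:top + dy + win,
                        left + dx:left + dx + win].astype(float)
            if b.shape != a.shape:
                continue  # shifted window fell outside the frame
            b = (b - b.mean()) / (b.std() + 1e-9)
            score = float((a * b).mean())  # correlation coefficient
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best  # pixel displacement; scale by frame interval for velocity
```

Repeating this over a grid of windows yields a surface velocity vector field once pixel displacements are converted with the image scale and frame interval.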


A Fast Motion Detection and Tracking Algorithm for Automatic Control of an Object Tracking Camera (객체 추적 카메라 제어를 위한 고속의 움직임 검출 및 추적 알고리즘)

  • 강동구;나종범
    • Journal of Broadcast Engineering
    • /
    • v.7 no.2
    • /
    • pp.181-191
    • /
    • 2002
  • Video surveillance systems based on an active camera require a fast algorithm for real-time detection and tracking of local motion in the presence of global motion. This paper presents a new, fast, and efficient motion detection and tracking algorithm using the displaced frame difference (DFD). In the proposed algorithm, first, a previous frame is adaptively selected according to the magnitude of object motion, and the global motion is estimated by using only a few confident matching blocks for a fast and accurate result. Then, a DFD is obtained between the current frame and the selected previous frame displaced by the global motion. Finally, a moving object is extracted from the noisy DFD by utilizing the correlation between the DFD and the current frame. We implemented this algorithm in an active camera system consisting of a pan-tilt unit and a standard PC equipped with an AMD 800 MHz processor. The system can perform an exhaustive search over a search range of 120 and achieves a processing speed of about 50 frames/sec for video sequences of 320×240, thereby providing satisfactory tracking results.
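
The displaced-frame-difference step can be sketched as follows. This is a hedged, simplified example: the global motion is assumed to be a known pure translation passed in as `(gy, gx)` (the paper's confident-block estimator is omitted), and a fixed threshold stands in for the paper's correlation-based extraction.

```python
import numpy as np

def dfd_object_mask(prev_frame, cur_frame, global_motion, thresh=20):
    """Displaced frame difference: shift the previous frame by the estimated
    global (camera) motion, difference it with the current frame, and
    threshold the result to obtain a coarse moving-object mask."""
    gy, gx = global_motion                      # assumed translational model
    displaced = np.roll(prev_frame, shift=(gy, gx), axis=(0, 1)).astype(int)
    dfd = np.abs(cur_frame.astype(int) - displaced)  # noisy DFD
    return dfd > thresh                         # binary moving-object mask
```

Note that `np.roll` wraps pixels around the border, which is acceptable only for this illustration; a real implementation would crop or pad the displaced frame.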

Robust Camera Calibration using TSK Fuzzy Modeling

  • Lee, Hee-Sung;Hong, Sung-Jun;Kim, Eun-Tai
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.7 no.3
    • /
    • pp.216-220
    • /
    • 2007
  • Camera calibration in machine vision is the process of determining the intrinsic camera parameters and the three-dimensional (3D) position and orientation of the camera frame relative to a certain world coordinate system. The Takagi-Sugeno-Kang (TSK) fuzzy system, on the other hand, is a very popular fuzzy model that approximates any nonlinear function to arbitrary accuracy with only a small number of fuzzy rules, and it exhibits not only nonlinear behavior but also a transparent structure. In this paper, we present a novel and simple technique for camera calibration in machine vision using the TSK fuzzy model. The proposed method divides the world into several regions according to the camera view and uses the clustered 3D geometric knowledge. A TSK fuzzy system is employed to estimate the camera parameters by combining the partial information into complete 3D information. Experiments are performed to verify the proposed camera calibration method.
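
For readers unfamiliar with the model class named in the abstract, the following is a generic first-order TSK evaluation sketch (Gaussian memberships, linear consequents), not the authors' calibration pipeline; the rule parameters are assumed to come from clustering the 3D geometric data, and all array shapes are illustrative.

```python
import numpy as np

def tsk_predict(x, centers, sigmas, weights, biases):
    """First-order TSK fuzzy model: each rule has a Gaussian membership over
    the input and a linear consequent; the output is the membership-weighted
    average of the rule consequents.

    centers, sigmas, weights: (rules, dims); biases: (rules,); x: (dims,)
    """
    x = np.asarray(x, dtype=float)
    # rule firing strengths: product of per-dimension Gaussian memberships
    mu = np.exp(-0.5 * np.sum(((x - centers) / sigmas) ** 2, axis=1))
    y = weights @ x + biases          # linear consequent of each rule
    return float(np.sum(mu * y) / (np.sum(mu) + 1e-12))
```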

Reliable Camera Pose Estimation from a Single Frame with Applications for Virtual Object Insertion (가상 객체 합성을 위한 단일 프레임에서의 안정된 카메라 자세 추정)

  • Park, Jong-Seung;Lee, Bum-Jong
    • The KIPS Transactions:PartB
    • /
    • v.13B no.5 s.108
    • /
    • pp.499-506
    • /
    • 2006
  • This paper describes a fast and stable camera pose estimation method for real-time augmented reality systems. From the feature tracking results of a marker in a single frame, we estimate the camera rotation matrix and the translation vector. For the camera pose estimation, we use the shape factorization method based on the scaled orthographic projection model. In the scaled orthographic factorization method, all feature points of an object are assumed to be at roughly the same distance from the camera, which means the selected reference point and the object shape affect the accuracy of the estimation. This paper proposes a flexible and stable selection method for the reference point. Based on the proposed method, we implemented a video augmentation system that inserts virtual 3D objects into the input video frames. Experimental results show that the proposed camera pose estimation method is fast and robust compared with previous methods and that it is applicable to various augmented reality applications.
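
A minimal sketch of pose recovery under the scaled orthographic (weak-perspective) model is given below. It is not the paper's factorization method or its reference-point selection rule: the reference index is simply taken as an argument, non-coplanar model points are assumed, and the recovered rotation is left unorthonormalized.

```python
import numpy as np

def weak_perspective_pose(model_pts, image_pts, ref=0):
    """Rotation and scale under a scaled orthographic projection model.

    model_pts: (N, 3) known 3D marker points; image_pts: (N, 2) pixel points.
    `ref` is the chosen reference point (assumed near the mean object depth).
    Requires non-coplanar model points; planar markers need extra handling.
    """
    M = model_pts - model_pts[ref]          # 3D offsets from the reference
    m = image_pts - image_pts[ref]          # 2D offsets from its image
    # Solve M @ [p q] = m in the least-squares sense, with p = s*r1, q = s*r2
    pq, *_ = np.linalg.lstsq(M, m, rcond=None)
    p, q = pq[:, 0], pq[:, 1]
    s = 0.5 * (np.linalg.norm(p) + np.linalg.norm(q))   # weak-perspective scale
    r1, r2 = p / np.linalg.norm(p), q / np.linalg.norm(q)
    r3 = np.cross(r1, r2)
    return np.vstack([r1, r2, r3]), s       # rows approximate the camera axes
```

In practice the rows would be re-orthonormalized (e.g., via an SVD) and the translation recovered from the reference point's image position and the scale.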

Multi-camera Calibration Method for Optical Motion Capture System (광학식 모션캡처를 위한 다중 카메라 보정 방법)

  • Shin, Ki-Young;Mun, Joung-H.
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.6
    • /
    • pp.41-49
    • /
    • 2009
  • In this paper, a multi-camera calibration algorithm for optical motion capture systems is proposed. The algorithm first performs a camera calibration using the DLT (Direct Linear Transformation) method and a 3-axis calibration frame with 7 optical markers. A second calibration is then performed by waving a wand of known length (a so-called wand dance) throughout the desired calibration volume. The first calibration yields not only the camera parameters but also the radial lens distortion parameters, and these serve as the initial solution for the optimization in the second calibration. That optimization minimizes the difference between the real inter-marker distances and the distances between the reconstructed markers. To verify the proposed algorithm, re-projection errors are calculated, the distances among markers on the 3-axis frame and on the wand are computed, and the proposed algorithm is compared with a commercial motion capture system. For the 3D reconstruction error of the 3-axis frame, the average error is 1.7042 mm for the commercial system and 0.8765 mm for the proposed algorithm, a reduction to 51.4 percent of the commercial system's error. For the distance between markers on the wand, the average error is 1.8897 mm for the commercial system and 2.0183 mm for the proposed algorithm.
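
The first (DLT) stage of such a pipeline can be sketched as the classic linear estimate of a 3x4 projection matrix from world/image correspondences; the wand-dance refinement and the radial distortion terms described in the abstract are omitted here, and this is a generic textbook formulation rather than the authors' code.

```python
import numpy as np

def dlt_projection_matrix(world_pts, image_pts):
    """Direct Linear Transformation: estimate the 3x4 projection matrix P
    from >= 6 world/image point pairs by solving A p = 0 with the SVD."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 4)   # right singular vector of the smallest value
```

The resulting parameters would then seed a nonlinear optimization whose cost compares reconstructed inter-marker distances against the known wand length.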

Efficient Tracking of a Moving Object using Optimal Representative Blocks

  • Kim, Wan-Cheol;Hwang, Cheol-Ho;Lee, Jang-Myung
    • International Journal of Control, Automation, and Systems
    • /
    • v.1 no.4
    • /
    • pp.495-502
    • /
    • 2003
  • This paper focuses on the implementation of an efficient method for tracking a moving object with a pan-tilt camera using optimal representative blocks. The key idea derives from the fact that, as the image of a moving object shrinks in the image frame with increasing distance between the mobile robot's camera and the object, tracking performance can be improved by reducing the size of the representative blocks according to the object's image size. Motion estimation using edge detection (ED) and the block-matching algorithm (BMA) is commonly employed to track objects with vision sensors; however, these methods often cannot keep up with real-time vision data because of their heavy computational load. In this paper, a representative block, which greatly reduces the amount of data to be computed, is defined and optimized by changing its size according to the size of the object in the image frame in order to improve tracking performance. The proposed algorithm is verified experimentally using a two-degree-of-freedom active camera mounted on a mobile robot.
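
A simplified block-matching tracker with a block size tied to the apparent object size is sketched below; it uses a plain SAD search over a single representative block and is only a hedged illustration of the idea, not the paper's optimization of representative blocks.

```python
import numpy as np

def track_with_representative_block(prev, cur, center, obj_size, search=16):
    """Track an object with one representative block whose size is scaled to
    the object's apparent size, using a SAD block-matching search.
    Assumes the block and search window stay inside the frame."""
    half = max(4, obj_size // 4)            # block shrinks with the object
    cy, cx = center
    ref = prev[cy - half:cy + half, cx - half:cx + half].astype(int)
    best, best_sad = center, np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = cy + dy, cx + dx
            cand = cur[y - half:y + half, x - half:x + half].astype(int)
            if cand.shape != ref.shape:
                continue                    # candidate left the frame
            sad = np.abs(cand - ref).sum()  # sum of absolute differences
            if sad < best_sad:
                best_sad, best = sad, (y, x)
    return best                             # new object center estimate
```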

Neural Network Based Camera Calibration and 2-D Range Finding (신경회로망을 이용한 카메라 교정과 2차원 거리 측정에 관한 연구)

  • 정우태;고국원;조형석
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 1994.10a
    • /
    • pp.510-514
    • /
    • 1994
  • This paper deals with an application of neural networks to camera calibration with a wide-angle lens and to 2-D range finding. A wide-angle lens has the advantage of a wide view angle for mobile environment recognition and robot eye-in-hand systems, but it suffers from severe radial distortion. A multilayer perceptron (MLP) is used to calibrate the camera while accounting for the lens distortion, and it is trained by the error back-propagation method. The MLP can map between the camera image plane and the plane formed by the structured light. In the experiments, camera calibration was carried out with a calibration chart printed on a laser printer at 300 dpi resolution. A high-distortion lens, a COSMICAR 4.2 mm, was used to test whether the neural network could effectively calibrate the camera distortion. The 2-D range of several objects was measured with a laser range-finding system composed of a camera, a frame grabber, and laser structured light. The performance of the range-finding system was evaluated through experiments and analysis of the results.
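
The learned pixel-to-plane mapping described here can be sketched with a small off-the-shelf MLP regressor; the correspondences below are placeholders, the network size is an assumption, and this is not the authors' network or training setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Pixel coordinates of calibration-chart corners (inputs) and the matching
# coordinates on the structured-light / chart plane (targets, e.g. in mm).
# These four pairs are placeholders for real measured correspondences.
pixels = np.array([[120.0, 85.0], [410.0, 92.0], [398.0, 300.0], [131.0, 310.0]])
plane_xy = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 80.0], [0.0, 80.0]])

# A small multilayer perceptron learns the distorted pixel -> plane mapping
# directly, absorbing the radial lens distortion into its weights.
mlp = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
mlp.fit(pixels, plane_xy)

print(mlp.predict(np.array([[265.0, 195.0]])))  # 2-D position of a new pixel
```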


SPECIFIC ANALYSIS OF WEB CAMERA AND HIGH RESOLUTION PLANETARY IMAGING (웹 카메라의 특성 분석 및 고해상도 행성촬영)

  • Park, Young-Sik;Lee, Dong-Ju;Jin, Ho;Han, Won-Yong;Park, Jang-Hyun
    • Journal of Astronomy and Space Sciences
    • /
    • v.23 no.4
    • /
    • pp.453-464
    • /
    • 2006
  • A web camera is usually used for video communication between PCs; it has a small sensing area and cannot take long exposures, so it is generally insufficient for astronomical applications. However, a web camera is suitable for bright targets such as planets and the Moon, which do not require long exposure times, and many amateur astronomers therefore use web cameras for planetary imaging. We used a ToUcam manufactured by Philips for planetary imaging and the commercial program RegiStax for combining the video frames. We then measured properties of the web camera, such as linearity and gain, which are usually used to analyze CCD performance. Because the combining technique selects high-quality images from the video frames, this method can produce higher-resolution planetary images than single-shot images taken with film, a digital camera, or a CCD. We describe a planetary observing method and a video frame combining method.
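
The frame selection and combination idea can be illustrated with a short sketch: score each video frame by sharpness (variance of the Laplacian) and average the best ones. This is a simplified stand-in for what RegiStax does; in particular, the per-frame alignment step is omitted, and the keep ratio is an assumed parameter.

```python
import cv2
import numpy as np

def stack_best_frames(video_path, keep_ratio=0.2):
    """Select the sharpest frames of a planetary video (variance of the
    Laplacian as a focus/seeing metric) and average them into one image."""
    cap = cv2.VideoCapture(video_path)
    frames, scores = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        scores.append(cv2.Laplacian(gray, cv2.CV_64F).var())
        frames.append(gray.astype(np.float64))
    cap.release()
    keep = max(1, int(len(frames) * keep_ratio))
    best = np.argsort(scores)[-keep:]          # indices of the sharpest frames
    return (sum(frames[i] for i in best) / keep).astype(np.uint8)
```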

A Guideline for Motion-Image-Quality Improvement of LCD-TVs

  • Kurita, Taiichiro
    • Korean Information Display Society: Conference Proceedings
    • /
    • 2009.10a
    • /
    • pp.1164-1167
    • /
    • 2009
  • The motion image quality of LCD TVs is discussed in terms of the dynamic spatial frequency response. A smaller temporal aperture or a higher frame rate can improve the dynamic response, but an increase in motion velocity easily cancels the improvement. A guideline for choosing the desirable temporal aperture and frame rate of LCD TVs is described, under the condition that the camera and the display have the same parameters. From the viewpoint of the "limit of acceptance" of motion-image-quality deterioration for critical picture material, two candidate parameter sets are (240 or 300 Hz, 50 to 100% aperture) and (120 Hz, 25 to 50% aperture).
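
As a rough illustration of why temporal aperture and frame rate trade off against motion velocity, the snippet below uses a simple hold-blur model (blur extent ≈ velocity × aperture / frame rate) for the candidate parameter sets; the velocity value is assumed, and this model is far simpler than the paper's dynamic spatial frequency response analysis.

```python
# Hold-type blur extent for the candidate parameter sets, simple model only.
velocity = 960.0  # assumed object motion in pixels per second

for rate_hz, aperture in [(240, 0.5), (240, 1.0), (300, 0.5), (300, 1.0),
                          (120, 0.25), (120, 0.5)]:
    blur_px = velocity * aperture / rate_hz
    print(f"{rate_hz:3d} Hz, {int(aperture * 100):3d}% aperture -> "
          f"{blur_px:4.1f} px of hold blur")
```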


Object Tracking for Elimination using LOD Edge Maps Generated from Canny Edge Maps (캐니 에지 맵을 LOD로 변환한 맵을 이용하여 객체 소거를 위한 추적)

  • Jang, Young-Dae;Park, Ji-Hun
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2007.05a
    • /
    • pp.333-336
    • /
    • 2007
  • We propose a simple method for tracking a non-parameterized subject contour in a single video stream with a moving camera and a changing background, and we then present a method to eliminate the tracked contour object by replacing it with background obtained from other frames. Our method consists of two parts: first we track the object using LOD (level-of-detail) Canny edge maps; then we generate the background of each image frame and replace the tracked object in a scene with a background image from another frame that is not occluded by the tracked object. Our tracking method is based on level-of-detail modified Canny edge maps and graph-based routing operations on the LOD maps. To reduce side effects caused by irrelevant edges, we start the basic tracking from strong Canny edges generated by large image intensity gradients of the input image and obtain more edge pixels as we move along the LOD hierarchy. LOD Canny edge pixels become nodes in the routing, and the LOD values of adjacent edge pixels determine the routing costs between the nodes. We find the best route along the Canny edge pixels, favoring stronger ones. Accurate tracking is achieved by reducing the effects of irrelevant edges through selection of the stronger edge pixels, thereby relying on the current-frame edge pixels as much as possible; this approach is based on computing the camera motion. Our experimental results show that the method works well for moderate camera movement with small changes in object shape.
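
Building a level-of-detail hierarchy of Canny edge maps can be sketched as below: strong edges appear at level 0 and progressively weaker edges at later levels, and each edge pixel's LOD value (the first level at which it appears) can serve as a routing cost. The threshold schedule is an assumption for illustration, and the graph routing itself is omitted.

```python
import cv2
import numpy as np

def lod_canny_maps(gray, levels=4, high=300, low_ratio=0.4):
    """Build a level-of-detail hierarchy of Canny edge maps: level 0 keeps
    only edges from strong intensity gradients; later levels lower the
    thresholds so progressively weaker edges appear."""
    maps = []
    for level in range(levels):
        hi = int(high / (level + 1))          # relax the thresholds per level
        lo = int(hi * low_ratio)
        maps.append(cv2.Canny(gray, lo, hi))
    # An edge pixel's LOD value is the first (strongest) level it appears in;
    # LOD values of adjacent edge pixels can then define routing costs.
    lod = np.full(gray.shape, levels, dtype=np.uint8)
    for level, edge_map in enumerate(maps):
        lod[(edge_map > 0) & (lod == levels)] = level
    return maps, lod
```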