• Title/Summary/Keyword: single-view camera


Investigation on the Applicability of Defocus Blur Variations to Depth Calculation Using Target Sheet Images Captured by a DSLR Camera

  • Seo, Suyoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.38 no.2
    • /
    • pp.109-121
    • /
    • 2020
  • Calculating the depth of objects in a scene from images is one of the most studied problems in image processing, computer vision, and photogrammetry. Conventionally, depth is calculated from a pair of overlapping images captured at different viewpoints, but there have also been studies on calculating depth from a single image. Theoretically, depth can be computed from the diameter of the CoC (Circle of Confusion) caused by defocus under the assumption of a thin-lens model. This study therefore aims to verify the validity of the thin-lens model for calculating depth from the edge blur amount, which corresponds to the radius of the CoC. A commercially available DSLR (Digital Single Lens Reflex) camera was used to capture a set of target sheets with different edge contrasts. To characterize how edge blur varies with the combination of FD (Focusing Distance) and OD (Object Distance), the camera was set to a series of FDs, and target sheet images were captured at varying ODs under each FD. The edge blur and edge displacement were then estimated from edge slope profiles using a brute-force method. The experimental results show that the observed variation of edge blur deviates from the theoretical amounts derived under the thin-lens assumption, but it can still be used to calculate depth from a single image under conditions similar to those of the experiments, in which the relationship between FD and OD is manifest.
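
The relation the study tests can be sketched with the standard thin-lens circle-of-confusion formula. The Python sketch below is only an illustration of that model: the focal length, f-number, and distances are assumed example values rather than settings from the paper, and the paper's brute-force estimation of edge blur from edge slope profiles is not reproduced.

```python
# Thin-lens depth-from-defocus sketch. All numbers are illustrative assumptions.

def coc_diameter(od_mm, fd_mm, f_mm, n_stop):
    """CoC diameter (mm) for an object at od_mm when the lens is focused at fd_mm."""
    aperture = f_mm / n_stop                                  # aperture diameter A = f / N
    return aperture * f_mm * abs(od_mm - fd_mm) / (od_mm * (fd_mm - f_mm))

def depth_candidates(coc_mm, fd_mm, f_mm, n_stop):
    """Invert the CoC relation: one candidate nearer than FD, one farther."""
    aperture = f_mm / n_stop
    k = coc_mm * (fd_mm - f_mm)
    near = aperture * f_mm * fd_mm / (aperture * f_mm + k)
    far = aperture * f_mm * fd_mm / (aperture * f_mm - k) if aperture * f_mm > k else float("inf")
    return near, far

if __name__ == "__main__":
    f, n, fd = 50.0, 2.8, 2000.0                              # 50 mm lens, f/2.8, focused at 2 m
    c = coc_diameter(od_mm=2500.0, fd_mm=fd, f_mm=f, n_stop=n)
    print(f"CoC at OD = 2.5 m: {c:.4f} mm")
    print("recovered depth candidates (mm):", depth_candidates(c, fd, f, n))
```

Note the two candidate depths: a single blur amount cannot by itself tell whether the object lies in front of or behind the focusing distance, which is one reason the observed tendency between FD and OD matters for depth recovery.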

Development of Green-Sheet Measurement Algorithm by Image Processing Technique (영상처리기법을 이용한 그린시트 측정알고리즘 개발)

  • Pyo, C.R.;Yang, S.M.;Kang, S.H.;Yoon, S.M.
    • Proceedings of the Korean Society for Technology of Plasticity Conference
    • /
    • 2007.05a
    • /
    • pp.51-54
    • /
    • 2007
  • The purpose of this paper is the development of a measurement algorithm for green sheets based on digital image processing. Low Temperature Cofired Ceramic (LTCC) technology produces multilayer circuits from single tapes onto which conductive, dielectric, and/or resistive pastes are applied. These single green sheets have to be laminated together and fired all in one step. The main function of the green-sheet film measurement algorithm is to measure the position and size of the punched holes in each single layer. A line scan camera coupled with a motorized X-Y stage is used to develop the algorithm, and an overlapping method is used to measure the entire film area over several scanning steps. In the course of developing the image processing and analysis algorithm, substantial background technology and know-how were accumulated.
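
As a rough illustration of the hole-measurement step, the sketch below thresholds one scanned image and reports hole centers and diameters with OpenCV. The file handling, noise threshold, and pixel-to-millimeter scale are assumptions, and neither the paper's actual algorithm nor its overlap handling between scanning steps is described in the abstract.

```python
# Hedged sketch of punched-hole measurement in one line-scan image (assumed setup).
import cv2

def measure_holes(image_path, mm_per_px=0.01):
    """Return (x_mm, y_mm, diameter_mm) for each punched hole found in one scan."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    # Holes appear darker than the sheet; invert so they become foreground blobs.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    holes = []
    for c in contours:
        if cv2.contourArea(c) < 20:                           # reject specks of noise
            continue
        (x, y), r = cv2.minEnclosingCircle(c)                 # center and radius in pixels
        holes.append((x * mm_per_px, y * mm_per_px, 2.0 * r * mm_per_px))
    return holes
```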


Efficient Circular Object Pose Determination

  • Kim, Sungbok;Kim, Byungho
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2000.10a
    • /
    • pp.276-276
    • /
    • 2000
  • This paper presents efficient algorithms for determining the pose of a circular object with and without a priori knowledge of the object radius. The developed algorithms, which are valid for a circular object, elaborate on Ma's work [2], which determines the pose of a conic object from two perspective views. First, the geometric constraint between a circular object and its projection on the image plane of a camera is described, and the number of perspective views required for pose determination with and without a priori knowledge of the object radius is discussed. Second, with a priori knowledge of the object radius, the pose of a circular object is determined from a single perspective view; the pose information, expressed by two surface normal vectors and one position vector, is given in closed form and without ambiguity. Third, without a priori knowledge of the object radius, the pose is determined from two perspective views: the surface normal vectors are obtained from the first view, while the position vector requires both views.
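
The geometric constraint mentioned first in the abstract can be stated compactly. In the sketch below, Q is the matrix of a circle of radius r in its supporting plane, H is the plane-to-image homography, and C is the image conic; this is the standard formulation of the circle-conic projection, not notation taken from the paper.

```latex
% Points x = (u, v, 1)^T of the supporting plane lie on a circle of radius r
% centered at the plane origin iff x^T Q x = 0, with
\[
  Q = \operatorname{diag}(1,\; 1,\; -r^{2}).
\]
% A pinhole camera maps the supporting plane to the image by a homography H,
% so the circle projects to the image conic
\[
  C \;\propto\; H^{-\mathsf{T}}\, Q\, H^{-1},
\]
% and, with the camera calibrated, this constraint is what links the measured
% conic C to the circle's surface normal and position vector.
```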


A Study on the Panoramic Image Generation in the Sea Environment (해상 환경에서의 파노라믹 영상 생성 기법에 관한 연구)

  • 김효성;김길중
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.3 no.3
    • /
    • pp.41-46
    • /
    • 2002
  • Electro-optical sensors are generally used to detect and identify objects at sea efficiently. However, due to the limited view angle, the region covered by a single sensor image is restricted, so it is necessary to generate a panoramic image from sea images acquired by a pan-tilt camera. Previous mosaicing methods are not able to generate panoramic images of the sea environment, because the intensity is similar across the whole scene and varies over time. In this paper, we propose a new algorithm for generating a high-resolution panoramic image of the sea environment. The proposed algorithm uses a single-viewpoint model; by applying mosaicing results obtained in a feature-rich environment to the sea environment, we overcome the limitation of the previous methods. Virtual and real experiments show that the proposed algorithm is effective for generating panoramic images of the sea.
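
A minimal sketch of the single-viewpoint model for a pan-tilt camera is given below: each frame is warped onto a cylinder whose radius is the focal length, after which a pure pan becomes a horizontal shift of f·Δθ pixels. The focal length and pan angles are assumed inputs, and the paper's registration strategy for low-texture, time-varying sea imagery is not shown.

```python
# Cylindrical warping and pasting for a single-viewpoint pan-tilt panorama (sketch).
import numpy as np
import cv2

def to_cylinder(image, f_px):
    """Warp a pinhole image onto a cylindrical surface of radius f_px (pixels)."""
    h, w = image.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    theta = (xs - cx) / f_px                                  # cylinder angle per column
    map_x = (f_px * np.tan(theta) + cx).astype(np.float32)
    map_y = ((ys - cy) / np.cos(theta) + cy).astype(np.float32)
    return cv2.remap(image, map_x, map_y, cv2.INTER_LINEAR)

def paste(panorama, warped, pan_deg, f_px, x0=0):
    """Place a warped frame at the column implied by its pan angle (single viewpoint)."""
    col = int(x0 + f_px * np.radians(pan_deg))                # pan maps to a column shift
    end = min(col + warped.shape[1], panorama.shape[1])
    panorama[:, col:end] = warped[:, :end - col]
    return panorama
```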


An effective indoor video surveillance system based on wide baseline cameras (Wide baseline 카메라 기반의 효과적인 실내공간 감시시스템)

  • Kim, Woong-Chang;Kim, Seung-Kyun;Choi, Kang-A;Jung, June-Young;Ko, Sung-Jea
    • Journal of IKEEE
    • /
    • v.14 no.4
    • /
    • pp.317-323
    • /
    • 2010
  • Video surveillance systems are adopted in many places because they monitor a specific area efficiently and continuously over a long period of time. However, surveillance systems composed of a single static camera often produce unsatisfactory results because of their limited field of view. In this paper, we present a video surveillance system based on wide-baseline stereo cameras to overcome this limitation. We adopt the codebook algorithm and mathematical morphology to robustly segment the foreground pixels of moving objects in the scene, and we calculate the trajectory of a moving object via 3D reconstruction. The experimental results show that the proposed system detects a moving object and successfully generates a top-view trajectory that tracks the object's location in world coordinates.
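
A sketch of the 3D localization step is given below: a foreground point matched between the two calibrated wide-baseline views is triangulated, and its horizontal coordinates form the top-view trajectory. The projection matrices and pixel coordinates are assumed placeholders, not the paper's calibration.

```python
# Triangulating a tracked point from two calibrated views (sketch, assumed inputs).
import numpy as np
import cv2

def world_point(P1, P2, uv1, uv2):
    """Triangulate one correspondence into world coordinates (X, Y, Z)."""
    pts1 = np.asarray(uv1, dtype=np.float64).reshape(2, 1)
    pts2 = np.asarray(uv2, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)           # 4x1 homogeneous point
    return (X_h[:3] / X_h[3]).ravel()

# Top-view trajectory: keep (X, Y) over time, assuming Z is the vertical world axis.
# trajectory.append(world_point(P1, P2, blob_px_cam1, blob_px_cam2)[:2])
```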

A Study on Detecting Moving Objects using Multiple Fisheye Cameras (다중 어안 카메라를 이용한 움직이는 물체 검출 연구)

  • Bae, Kwang-Hyuk;Suhr, Jae-Kyu;Park, Kang-Ryoung;Kim, Jai-Hie
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.4
    • /
    • pp.32-40
    • /
    • 2008
  • Since a vision-based surveillance system uses a conventional camera with a narrow field of view, it is difficult to apply in environments where the ceiling is low and the monitoring area is wide. Increasing the number of cameras to overcome this problem raises the cost and makes camera set-up more difficult. To address these problems, we propose a new surveillance system based on multiple fisheye cameras with a 180-degree field of view. The proposed method handles occlusions using the homography relation between the multiple fisheye cameras. In the experiment, four fisheye cameras were set up over a 17 × 14 m area at a height of 2.5 m, and five people wandered and crossed one another within this area. The detection rate of the proposed system was 83.0%, while that of a single camera was 46.1%.
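
The homography-based occlusion handling can be illustrated as follows: a ground-plane point observed in one (undistorted) fisheye view is transferred into another view with a pre-computed plane homography, so a person hidden in one camera can be checked against the others. The matrix H_ab and the pixel coordinates below are purely illustrative assumptions.

```python
# Ground-plane point transfer between two views via an assumed homography (sketch).
import numpy as np

def transfer_ground_point(H, uv):
    """Map an image point on the ground plane from camera A into camera B."""
    q = H @ np.array([uv[0], uv[1], 1.0])                     # homogeneous transfer
    return q[:2] / q[2]

H_ab = np.array([[1.02, 0.01,  30.0],
                 [0.00, 0.98, -12.0],
                 [1e-5, 2e-5,   1.0]])                        # assumed plane homography
print(transfer_ground_point(H_ab, (320.0, 410.0)))            # foot point seen by camera A
```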

The 3D Geometric Information Acquisition Algorithm using Virtual Plane Method (가상 평면 기법을 이용한 3차원 기하 정보 획득 알고리즘)

  • Park, Sang-Bum;Lee, Chan-Ho;Oh, Jong-Kyu;Lee, Sang-Hun;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.11
    • /
    • pp.1080-1087
    • /
    • 2009
  • This paper presents an algorithm for acquiring 3D geometric information using a virtual plane method. Measuring 3D information on a plane is easy because it does not involve the z-axis value. Since a plane can be defined by any three points in 3D space, the algorithm can construct a number of virtual planes from feature points on the target object. The geometric relations between the origin of each virtual plane and the origin of the target object coordinate system are expressed as known homogeneous matrices. From this, the algorithm derives a simple matrix formula involving only the unknown geometric relation between the origin of the target object and the origin of the camera coordinate system, and is therefore faster and simpler than other methods. The proposed method uses a regular pin-hole camera model and a perspective projection matrix defined by the geometric relation between the coordinate systems. In the final part of this paper, we demonstrate the technique in a variety of applications, including measurements of industrial parts and known patch images.
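
A minimal sketch of the pin-hole projection through chained homogeneous frames, which the abstract relies on, is given below. The intrinsic matrix and transforms are assumed example values; the paper's closed-form solution for the unknown object-to-camera relation is not reproduced.

```python
# Pin-hole projection with homogeneous coordinate frames (sketch, assumed values).
import numpy as np

K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])                         # assumed intrinsics

def homogeneous(R, t):
    """Stack rotation R and translation t into a 4x4 homogeneous matrix."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def project(K, T_cam_obj, X_obj):
    """Project an object-frame 3D point into the image via x ~ K [R|t] X."""
    X_cam = T_cam_obj @ np.append(X_obj, 1.0)
    x = K @ X_cam[:3]
    return x[:2] / x[2]

# A virtual-plane frame chained to the object frame by a known transform projects
# through the same pipeline:  x ~ K [R|t] (T_obj_plane @ X_plane)
T_cam_obj = homogeneous(np.eye(3), np.array([0.1, 0.0, 1.5])) # assumed object pose
print(project(K, T_cam_obj, np.array([0.05, 0.02, 0.0])))
```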

A Void Fraction Measurement Technique by Single Camera and Its Application (단일 카메라를 이용한 이상유동 기포율 측정방법의 개발과 응용)

  • Choi, Dong-Whan;Yoo, Jung-Yul;Song, Jin-Ho;Sung, Jae-Yong
    • Transactions of the Korean Society of Mechanical Engineers B
    • /
    • v.31 no.11
    • /
    • pp.904-911
    • /
    • 2007
  • A measurement technique for void fraction is proposed using a time-resolved two-phase PIV system, and the bubble dynamics of gas-liquid two-phase flows is investigated. For three-dimensional evaluation of the bubble information, the front and side views are recorded simultaneously by a single high-speed CCD camera: the side view is reflected by a 45° mirror so that it is juxtaposed with the front view in the same image. A stereo-matching technique is then applied to calculate the void fraction, bubble size, and shape, and the 2-frame PTV method is adopted to obtain the rising bubble velocities. The technique is applied to freely rising bubbly flows in stagnant liquid. The results show that increasing the bubble flow rate at first increases the bubble size and rising velocity; beyond a certain level, the rising velocity becomes constant and the horizontal velocity grows instead, owing to obstruction by other bubbles.
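
As a rough illustration of the bookkeeping after stereo matching, the sketch below approximates each matched bubble as an ellipsoid, taking width and height from the front view and depth from the mirrored side view, and divides the total bubble volume by the measurement volume. The ellipsoid approximation and all numbers are assumptions, not the paper's exact procedure.

```python
# Void fraction from matched bubble dimensions (sketch, assumed measurement volume).
import numpy as np

def void_fraction(matched_bubbles_mm, volume_mm3):
    """matched_bubbles_mm: iterable of (width, height, depth) per bubble, in mm."""
    total = sum((np.pi / 6.0) * w * h * d                     # ellipsoid volume (pi/6)abc
                for w, h, d in matched_bubbles_mm)
    return total / volume_mm3

# e.g. three bubbles inside an assumed 30 x 30 x 100 mm measurement volume
print(void_fraction([(3.0, 2.5, 3.0), (4.0, 3.0, 4.0), (2.0, 2.0, 2.0)], 30 * 30 * 100))
```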

Real-time Full-view 3D Human Reconstruction using Multiple RGB-D Cameras

  • Yoon, Bumsik;Choi, Kunwoo;Ra, Moonsu;Kim, Whoi-Yul
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.4 no.4
    • /
    • pp.224-230
    • /
    • 2015
  • This manuscript presents a real-time solution for 3D human body reconstruction with multiple RGB-D cameras. The proposed system uses four consumer RGB/Depth (RGB-D) cameras, each located at approximately 90° from the next camera around a freely moving human body. A single mesh is constructed from the captured point clouds by iteratively removing the estimated overlapping regions from the boundary. A cell-based mesh construction algorithm is developed, recovering the 3D shape from various conditions, considering the direction of the camera and the mesh boundary. The proposed algorithm also allows problematic holes and/or occluded regions to be recovered from another view. Finally, calibrated RGB data is merged with the constructed mesh so it can be viewed from an arbitrary direction. The proposed algorithm is implemented with general-purpose computation on graphics processing unit (GPGPU) for real-time processing owing to its suitability for parallel processing.
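
The first step of such a pipeline can be sketched as follows: each depth map is back-projected to a point cloud and moved into a common world frame with that camera's extrinsics, after which the four per-camera clouds can be concatenated. The intrinsics, extrinsics, and depth convention are assumptions; the paper's cell-based mesh construction and overlap removal are not shown.

```python
# Back-projecting one RGB-D depth map into a shared world frame (sketch).
import numpy as np

def depth_to_world(depth_m, K, T_world_cam):
    """depth_m: HxW depth in meters; returns an Nx3 array of world-frame points."""
    h, w = depth_m.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m.ravel()
    valid = z > 0                                             # drop missing depth samples
    x = (us.ravel() - K[0, 2]) * z / K[0, 0]
    y = (vs.ravel() - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=0)[:, valid]
    return (T_world_cam @ pts_cam)[:3].T                      # concatenate per camera to fuse
```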

Lane Detection-based Camera Pose Estimation (차선검출 기반 카메라 포즈 추정)

  • Jung, Ho Gi;Suhr, Jae Kyu
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.23 no.5
    • /
    • pp.463-470
    • /
    • 2015
  • When a camera installed on a vehicle is used, estimating the camera pose, including the tilt, roll, and pan angle with respect to the world coordinate system, is important for relating camera coordinates to world coordinates. Previous approaches using huge calibration patterns have the disadvantage that the patterns are costly to make and install, and previous approaches exploiting multiple vanishing points detected in a single image are not suitable for automotive applications, because scenes in which a front camera can capture multiple vanishing points are hard to find in the daily driving environment. This paper proposes a camera pose estimation method that collects multiple images of lane markings while the horizontal angle with respect to the markings changes. One vanishing point, the intersection of the left and right lane markings, is detected in each image, the vanishing line is estimated from the detected vanishing points, and the camera pose is finally estimated from the vanishing line. The method is based on the facts that planar motion does not change the vanishing line of the plane and that the normal vector of the plane can be estimated from the vanishing line. Experiments with both large and small tilt and roll angles show that the proposed method produces accurate estimates, which is verified by checking that the lane markings appear upright in the bird's-eye view image once the pan angle is compensated.
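
The vanishing-point geometry the method builds on can be sketched as follows: each image contributes one vanishing point as the intersection of the two lane-marking lines, the vanishing line is fitted through the collected points, and the ground-plane normal in camera coordinates follows from n ~ K^T l. The intrinsic matrix and line parameters are assumed inputs; the paper's lane detector and the extraction of tilt and roll from the normal are not shown.

```python
# Vanishing point, vanishing line, and plane normal from lane markings (sketch).
import numpy as np

def vanishing_point(line_left, line_right):
    """Lines as homogeneous (a, b, c) with ax + by + c = 0; the VP is their cross product."""
    vp = np.cross(line_left, line_right)
    return vp / vp[2]

def vanishing_line(vps):
    """Fit the ground-plane vanishing line through vanishing points from many images."""
    A = np.asarray(vps)                                       # rows are homogeneous VPs
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]                                             # l minimizing |A l|: all VPs lie on l

def plane_normal(K, l):
    """Ground-plane normal in camera coordinates from its vanishing line (n ~ K^T l)."""
    n = K.T @ l
    return n / np.linalg.norm(n)                              # tilt and roll follow from this normal
```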