• Title/Summary/Keyword: defocus technique

Search Results: 14

Depth From Defocus using Wavelet Transform (웨이블릿 변환을 이용한 Depth From Defocus)

  • Choi, Chang-Min; Choi, Tae-Sun
    • Journal of the Institute of Electronics Engineers of Korea SC / v.42 no.5 s.305 / pp.19-26 / 2005
  • In this paper, a new method for obtaining the three-dimensional shape of an object by measuring relative blur between images using wavelet analysis is described. Most previous methods use inverse filtering to determine the measure of defocus. These methods suffer from fundamental problems such as inaccuracies in finding the frequency domain representation, windowing effects, and border effects. Besides these deficiencies, a filter such as the Laplacian of Gaussian, which produces an aggregate estimate of defocus for an unknown texture, cannot lead to accurate depth estimates because of the non-stationary nature of images. We propose a new depth from defocus (DFD) method using wavelet analysis that performs both local analysis and windowing with variable-sized regions for non-stationary images with complex textural properties. We show that the normalized ratio of wavelet power between images, computed via Parseval's theorem, is closely related to the blur parameter and to depth. Experimental results demonstrate that our DFD method is faster and gives more precise shape estimates than previous DFD techniques for both synthetic and real scenes.
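
As a rough illustration of the wavelet-power idea in the abstract above (a hedged sketch, not the paper's algorithm), the following compares one-level Haar detail energy between two differently defocused views of the same texture; by Parseval's theorem this energy tracks the high-frequency content that defocus removes, so the energy ratio falls as relative blur grows:

```python
import numpy as np

def haar_detail_energy(img):
    """One-level 2-D Haar transform; return the total energy of the
    three detail subbands (HL, LH, HH). By Parseval's theorem this
    energy tracks the high-frequency content that defocus suppresses."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    hl = (a - b + c - d) / 2.0
    lh = (a + b - c - d) / 2.0
    hh = (a - b - c + d) / 2.0
    return np.sum(hl**2 + lh**2 + hh**2)

def gaussian_blur(img, sigma):
    """Separable Gaussian blur implemented with numpy only."""
    r = int(3 * sigma) + 1
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2)); k /= k.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, k, 'same'), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, 'same'), 1, out)

rng = np.random.default_rng(0)
texture = rng.standard_normal((64, 64))
near = gaussian_blur(texture, 1.0)   # mildly defocused view
far  = gaussian_blur(texture, 2.5)   # strongly defocused view
ratio = haar_detail_energy(far) / haar_detail_energy(near)
# more defocus -> less detail energy, so ratio < 1
```

A full DFD method would evaluate such ratios over local, variable-sized windows and calibrate them against the blur parameter to recover depth.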

Bokeh Effect Algorithm using Defocus Map in Single Image (단일 영상에서 디포커스 맵을 활용한 보케 효과 알고리즘)

  • Lee, Yong-Hwan; Kim, Heung Jun
    • Journal of the Semiconductor & Display Technology / v.21 no.3 / pp.87-91 / 2022
  • The bokeh effect is a stylistic technique that blurs the background of a photograph. This paper implements a bokeh effect on a single image by post-processing. Generating a depth map is the key step of the bokeh effect; a depth map is an image that encodes the distance of scene surfaces from the viewpoint. First, this work presents algorithms to determine the depth map from a single input image. We obtain a sparse defocus map from the gradient ratio between the input image and a blurred version of it. A dense defocus map is then obtained by propagating the thresholded values from edges using the matting Laplacian. Finally, we blur the image according to a foreground/background segmentation, achieving the bokeh effect. The experimental results demonstrate an efficient image processing method that applies a bokeh effect using a single image.
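
The gradient-ratio step can be sketched in one dimension (a minimal illustration under the standard assumption, used in gradient-ratio defocus-map methods, that re-blurring a Gaussian edge of unknown scale sigma with a known sigma0 reduces its peak gradient by R = sqrt((sigma^2 + sigma0^2)/sigma^2); the matting-Laplacian propagation step is omitted):

```python
import numpy as np

def gaussian_blur1d(sig, sigma):
    """1-D Gaussian blur via direct convolution."""
    r = int(4 * sigma) + 1
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2)); k /= k.sum()
    return np.convolve(sig, k, mode='same')

def estimate_edge_blur(signal, sigma0=1.0):
    """Gradient-ratio blur estimate at the strongest edge.
    Re-blurring with sigma0 scales the peak gradient by
    R = sqrt((sigma^2 + sigma0^2) / sigma^2),
    so sigma = sigma0 / sqrt(R^2 - 1)."""
    reblurred = gaussian_blur1d(signal, sigma0)
    g1 = np.abs(np.gradient(signal))
    g2 = np.abs(np.gradient(reblurred))
    i = np.argmax(g1)                 # edge location
    R = g1[i] / g2[i]
    return sigma0 / np.sqrt(R**2 - 1.0)

# synthetic step edge, defocused with a known blur scale
edge = np.zeros(201); edge[100:] = 1.0
sigma_true = 2.0
blurred = gaussian_blur1d(edge, sigma_true)
sigma_est = estimate_edge_blur(blurred, sigma0=1.0)
# sigma_est recovers sigma_true up to discretization error
```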

Photometric Defocus Observations of Transiting Extrasolar Planets

  • Hinse, Tobias C.; Han, Wonyong; Yoon, Joh-Na; Lee, Chung-Uk; Kim, Yong-Gi; Kim, Chun-Hwey
    • Journal of Astronomy and Space Sciences / v.32 no.1 / pp.21-32 / 2015
  • We have carried out photometric follow-up observations of bright transiting extrasolar planets using the CbNUOJ 0.6 m telescope. We have tested the possibility of obtaining high photometric precision by applying the telescope defocus technique, which allows exposure times of several hundred seconds for a single measurement. We demonstrate that this technique can achieve a root-mean-square scatter of sub-milli-magnitude order over several hours for a V~10 host star, typical of transiting planets detected by ground-based survey facilities. We compared our results with transit observations from a telescope operated in in-focus mode. The high photometric precision is obtained by collecting a larger number of photons, resulting in a higher signal compared to other random and systematic noise sources. Accurate telescope tracking is likely to further reduce systematic noise by exposing the same pixels on the CCD. Furthermore, a longer exposure time reduces the effect of scintillation noise, which otherwise has a significant effect for small-aperture telescopes operated in in-focus mode. Finally, we present the results of modelling four light curves, in which a root-mean-square scatter of 0.70 to 2.3 milli-magnitudes was achieved.
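
The photon-noise argument can be made concrete with a back-of-the-envelope calculation (hypothetical photon counts, not the paper's numbers): the photon-limited scatter in magnitudes is about 1.0857/sqrt(N), so collecting 100x more photons per measurement, which defocusing permits by spreading light over many pixels before saturation, lowers the noise floor tenfold:

```python
import math

def photon_noise_mmag(n_photons):
    """Photon-limited photometric scatter in milli-magnitudes:
    sigma_m ~= 1.0857 / SNR, with SNR = sqrt(N) for Poisson noise."""
    snr = math.sqrt(n_photons)
    return 1.0857 / snr * 1000.0

# In-focus: short exposures to avoid saturating a few bright pixels.
# Defocused: the PSF covers hundreds of pixels, so a single
# several-hundred-second exposure can collect far more photons.
short = photon_noise_mmag(5e6)    # e.g. a short in-focus exposure
long_ = photon_noise_mmag(5e8)    # e.g. a long defocused exposure
# 100x the photons -> photon-noise floor drops by a factor of 10
```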

A New Method of Noncontact Measurement for 3D Microtopography in Semiconductor Wafer Implementing a New Optical Probe based on the Precision Defocus Measurement (비초점 정밀 계측 방식에 의한 새로운 광학 프로브를 이용한 반도체 웨이퍼의 삼차원 미소형상 측정 기술)

  • 박희재; 안우정
    • Journal of the Korean Society for Precision Engineering / v.17 no.1 / pp.129-137 / 2000
  • In this paper, a new method of noncontact measurement has been developed for the 3-dimensional topography of a semiconductor wafer, implementing a new optical probe based on precision defocus measurement. The developed system consists of the new optical probe, precision stages, and the measurement/control system. The basic principle is to use the slit beam reflected from the specimen surface to measure the deviation of that surface. The defocus distance is measured from the reflected slit beam, whose defocused image is captured by the proposed optical probe at very high resolution. A distance-measuring formula has been derived for the probe from the laws of geometric optics. A precision calibration technique has been applied, giving about 10-nanometer resolution and a four-sigma uncertainty of 72 nanometers. To quantify the micro pattern on the specimen surface, efficient analysis algorithms have been developed to analyse the 3D topography pattern and several surface parameters. The developed system has been successfully applied to measure the wafer surface, demonstrating the line-scanning feature and excellent 3-dimensional measurement capability.
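
The defocus-to-distance idea, relating surface height deviation to blur through the laws of geometric optics, can be sketched with a plain thin-lens model (illustrative numbers only; the paper's actual slit-beam probe formula is more elaborate):

```python
def image_distance(f, u):
    """Thin-lens equation 1/f = 1/u + 1/v, solved for v."""
    return 1.0 / (1.0 / f - 1.0 / u)

def blur_diameter(f, u_focused, du, aperture_d):
    """Geometric blur-spot diameter on the detector when the surface
    moves by du from the focused object distance u_focused."""
    v0 = image_distance(f, u_focused)          # detector plane
    v1 = image_distance(f, u_focused + du)     # image of the shifted surface
    # similar triangles through the aperture
    return aperture_d * abs(v1 - v0) / v1

# hypothetical numbers: f = 20 mm lens, surface 25 mm away, 5 mm aperture
b0 = blur_diameter(20.0, 25.0, 0.0, 5.0)       # in focus -> zero blur
b1 = blur_diameter(20.0, 25.0, 0.010, 5.0)     # 10 um height step
b2 = blur_diameter(20.0, 25.0, 0.020, 5.0)     # 20 um -> roughly twice the blur
```

In this small-displacement regime the blur grows nearly linearly with surface deviation, which is what makes calibrated defocus a usable height probe.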

Follow-up Observations of Transiting Planets using Heavy Defocus Technique

  • Hinse, Tobias C.; Han, Wonyong; Yoon, Joh-Na; Lee, Jae Woo; Lee, Chung-Uk; Park, Jang-Ho; Kim, Chun-Hwey
    • The Bulletin of The Korean Astronomical Society / v.38 no.1 / pp.56.1-56.1 / 2013
  • We have carried out follow-up observations of transiting extrasolar planets using small- to medium-sized reflectors located in Korea. Using the 0.60 m telescope stationed at CbNUO (Chungbuk National University Observatory), we have achieved a photometric precision of 1.48 milli-magnitudes (root-mean-square scatter of the data) for a transit light curve of HAT-P-09b (transit duration 3.43 hrs, transit depth ~1.3%), with V=12.3 mag for the host star. We expect a photometric precision of 1.0-1.2 milli-magnitudes for brighter targets (V ~ 10-11 mag). The transit technique and its application will be outlined. The results of test observations will be presented and the defocus technique will be discussed.

A Defocus Technique based Depth from Lens Translation using Sequential SVD Factorization

  • Kim, Jong-Il; Ahn, Hyun-Sik; Jeong, Gu-Min; Kim, Do-Hyun
    • Institute of Control, Robotics and Systems Conference Proceedings / 2005.06a / pp.383-388 / 2005
  • Depth recovery in robot vision is an essential problem: inferring the three-dimensional geometry of a scene from a sequence of two-dimensional images. Many approaches to depth estimation have been proposed, such as stereopsis, motion parallax, and blurring phenomena. Among these cues, depth from lens translation is based on shape from motion using feature points: it relies on the correspondence of feature points detected across images and estimates depth from the motion of those points. Approaches using motion vectors suffer from occlusion and missing-part problems, and the image blur is ignored in feature point detection. This paper presents a novel defocus-technique-based approach to depth from lens translation using sequential SVD factorization. Solving these problems requires modeling the mutual relationship between the light and the optics up to the image plane. We therefore first discuss the optical properties of the camera system, because the image blur varies with camera parameter settings. The camera system is described by a model that integrates a thin-lens camera model, explaining the light and optical properties, with a perspective projection camera model, explaining depth from lens translation. Depth from lens translation then uses feature points detected at the edges of the image blur; these feature points carry depth information derived from the blur width. The shape and motion are estimated from the motion of the feature points, using sequential SVD factorization to obtain the orthogonal matrices of the singular value decomposition. Experiments with sequences of real and synthetic images compare the presented method with depth from lens translation, and the results demonstrate its validity and applicability to depth estimation.
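
The factorization step can be illustrated with the classic rank-3 SVD factorization of a measurement matrix of feature tracks (a generic Tomasi-Kanade-style sketch under an orthographic camera assumption, not the paper's sequential formulation):

```python
import numpy as np

rng = np.random.default_rng(1)
P = 20                                  # feature points
S = rng.standard_normal((3, P))         # true 3-D shape
frames = []
for k in range(8):                      # 8 camera poses
    # random rotation via QR; keep the first two rows (orthographic camera)
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    t = rng.standard_normal((2, 1))
    frames.append(Q[:2] @ S + t)        # projected tracks with translation
W = np.vstack(frames)                   # 2F x P measurement matrix

# factorization: remove per-row centroids, then truncate the SVD to rank 3
W0 = W - W.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(W0, full_matrices=False)
M = U[:, :3] * np.sqrt(s[:3])           # motion (up to an affine ambiguity)
X = np.sqrt(s[:3])[:, None] * Vt[:3]    # shape  (up to an affine ambiguity)

rank3_residual = s[3] / s[0]            # ~0 for noise-free orthographic tracks
recon_error = np.abs(W0 - M @ X).max()  # rank-3 model reproduces the tracks
```

A sequential variant updates this factorization incrementally as each new frame of tracks arrives, instead of decomposing the full matrix at once.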

Performance Criterion of Bispectral Speckle Imaging Technique (북스펙트럼 스펙클 영상법의 성능기준)

  • 조두진
    • Korean Journal of Optics and Photonics / v.4 no.1 / pp.28-35 / 1993
  • In the case of an imaging system affected by aberrations which are not precisely known, the effect of the aberrations can be minimized, and near-diffraction-limited images restored, by introducing artificial random phase fluctuations in the exit pupil of the imaging system and using bispectral speckle imaging. To determine the optimum correlation length for the Gaussian random phase model, a computer simulation is performed on 50 image frames of a point object in the presence of 1 wave of defocus, spherical aberration, coma, and astigmatism, respectively. As performance criteria, the FWHM of the point spread function, the normalized peak intensity, the MTF, and visual inspection of the restored object are employed. The optimum value of the rms phase difference $\sigma$ of the aberration over the exit pupil, within an interval of the Fried parameter $r_0$, is 0.27-0.53 wave for spherical aberration and 0.24-0.36 wave for defocus and astigmatism, respectively. The bispectral speckle imaging technique is found not to give good results in the case of coma.
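
Two ingredients of such a simulation, a correlated Gaussian random phase screen and the normalized-peak-intensity (Strehl) criterion for a defocused pupil, can be sketched as follows (illustrative parameters; the bispectrum reconstruction itself is beyond this sketch, and in the full simulation the screen would be added to the pupil phase):

```python
import numpy as np

N = 256
y, x = np.mgrid[-N//2:N//2, -N//2:N//2]
rho = np.sqrt(x**2 + y**2) / (N // 8)        # pupil radius = N/8 pixels
pupil = (rho <= 1.0).astype(float)

def psf_peak(phase_waves):
    """Peak intensity of the PSF for a given pupil phase (in waves)."""
    field = pupil * np.exp(2j * np.pi * phase_waves)
    return (np.abs(np.fft.fft2(field))**2).max()

def random_phase_screen(n, corr_len, rms_waves, rng):
    """Correlated Gaussian random phase: white noise low-pass filtered
    in the Fourier domain, rescaled to the requested rms (in waves)."""
    white = rng.standard_normal((n, n))
    fy = np.fft.fftfreq(n)[:, None]
    fx = np.fft.fftfreq(n)[None, :]
    filt = np.exp(-(fx**2 + fy**2) * (np.pi * corr_len)**2)
    screen = np.fft.ifft2(np.fft.fft2(white) * filt).real
    return rms_waves * screen / screen.std()

rng = np.random.default_rng(2)
defocus = (2 * rho**2 - 1) * pupil            # Zernike defocus shape
screen = random_phase_screen(N, corr_len=8, rms_waves=0.3, rng=rng)

# normalized peak intensity (one of the paper's performance criteria):
# substantial defocus drives it well below the unaberrated value of 1
strehl_defocus = psf_peak(0.5 * defocus) / psf_peak(np.zeros_like(pupil))
```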

Depth Map Generation Using Infocused and Defocused Images (초점 영상 및 비초점 영상으로부터 깊이맵을 생성하는 방법)

  • Mahmoudpour, Saeed; Kim, Manbae
    • Journal of Broadcast Engineering / v.19 no.3 / pp.362-371 / 2014
  • Blur variation caused by camera defocusing provides a useful cue for depth estimation. The Depth from Defocus (DFD) technique calculates the blur amount present in an image, exploiting the fact that blur amount is directly related to scene depth. Conventional DFD methods use two defocused images, which can yield a low-quality estimated depth map as well as a poorly reconstructed infocused image. To solve this, a new DFD methodology based on infocused and defocused images is proposed in this paper. In the proposed method, the outcome of Subbarao's DFD is combined with a novel edge blur estimation method, so that improved blur estimation is achieved. In addition, a saliency map mitigates the ill-posed nature of blur estimation in regions with low intensity variation. To validate the feasibility of the proposed method, twenty image sets of infocused and defocused images at 2K FHD resolution were acquired from a camera with focus control. The 3D stereoscopic images generated from an estimated depth map and the input infocused image delivered satisfactory 3D perception in terms of the spatial depth of scene objects.

A Study on Create Depth Map using Focus/Defocus in single frame (단일 프레임 영상에서 초점을 이용한 깊이정보 생성에 관한 연구)

  • Han, Hyeon-Ho; Lee, Gang-Seong; Lee, Sang-Hun
    • Journal of Digital Convergence / v.10 no.4 / pp.191-197 / 2012
  • In this paper, we present a method for creating a 3D image from a 2D image by extracting initial depth values from focus information. The initial depth values are computed by comparing the original image with a Gaussian-filtered version of it. This initial depth information is allocated to the object segments obtained with the normalized-cut technique, and the depth of each object is then corrected to the average of the depth values within it, so that a single object has a uniform depth. The generated depth map is used to convert the image to 3D using DIBR (Depth Image Based Rendering), and the resulting 3D image is compared with images generated by other techniques.

Compensate and analyze of Optical Characteristics of AR display using Zernike Polynomials

  • Narzulloev Oybek Mirzaevich; Jumamurod Aralov Farhod Ugle; Leehwan Hwang; Seunghyun Lee
    • International Journal of Internet, Broadcasting and Communication / v.16 no.3 / pp.77-84 / 2024
  • Aberration is still a problem for augmented reality displays. Existing methods to solve it are either slow and inefficient, consume too much battery, or are too complex for straightforward implementation. Image quality problems remain, and users may suffer from eye strain and headaches because the images delivered to each eye lack accuracy, causing the brain to receive mismatched cues between the vergence and accommodation of the eyes. In this paper, we implemented a computer simulation of optical aberrations using Zernike polynomials, namely defocus, trefoil, coma, and spherical aberration. The research showed how these optical aberrations impact the Point Spread Function (PSF) and Modulation Transfer Function (MTF). We employed the phase conjugation technique to mitigate the aberrations. The findings revealed that the most significant impact on the PSF and MTF comes from spherical aberration and coma.
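
The phase-conjugation idea can be sketched directly: compute the Strehl ratio of a pupil carrying Zernike defocus, coma, and spherical terms, then pre-distort with the negated phase so the aberration cancels (illustrative coefficients, not the paper's values):

```python
import numpy as np

N = 128
y, x = np.mgrid[-N//2:N//2, -N//2:N//2]
rho = np.sqrt(x**2 + y**2) / (N // 8)    # pupil radius = N/8 pixels
theta = np.arctan2(y, x)
pupil = (rho <= 1.0).astype(float)

# Zernike phase terms used in the paper (in waves, unit coefficients)
defocus   = 2 * rho**2 - 1
coma      = (3 * rho**3 - 2 * rho) * np.cos(theta)
spherical = 6 * rho**4 - 6 * rho**2 + 1

def strehl(phase_waves):
    """Peak of the aberrated PSF relative to the unaberrated one."""
    def peak(ph):
        return np.abs(np.fft.fft2(pupil * np.exp(2j * np.pi * ph))).max()**2
    return peak(phase_waves) / peak(np.zeros_like(pupil))

aberration = 0.3 * defocus + 0.2 * coma + 0.2 * spherical
s_aberrated = strehl(aberration)
# phase conjugation: pre-distort the displayed wavefront with the
# negated phase so the optics' aberration cancels on the way through
s_corrected = strehl(aberration + (-aberration))
```

With the conjugate phase applied, the net pupil phase is zero and the Strehl ratio returns to 1, which is the mechanism the paper exploits.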