• Title/Summary/Keyword: Depth of focus


Spatio-Angular Consistent Edit Propagation for 4D Light Field Image (4 차원 Light Field 영상에서의 일관된 각도-공간적 편집 전파)

  • Williem, Williem;Park, In Kyu
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2015.11a
    • /
    • pp.180-181
    • /
    • 2015
  • In this paper, we present a consistent and efficient edit propagation method that is applied for light field data. Unlike conventional sparse edit propagation, the coherency between light field sub-aperture images is fully considered by utilizing light field consistency in the optimization framework. Instead of directly solving the optimization function on all light field sub-aperture images, the proposed optimization framework performs sparse edit propagation in the extended focus image domain. The extended focus image is the representative image that contains implicit depth information and the well-focused region of all sub-aperture images. The edit results in the extended focus image are then propagated back to each light field sub-aperture image. Experimental results on test images captured by a Lytro off-the-shelf light field camera confirm that the proposed method provides robust and consistent results of edited light field sub-aperture images.

  • PDF
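The propagation step described above can be illustrated with a toy sketch: sparse user edits are spread to all pixels by affinity-weighted averaging over appearance features. This is a minimal stand-in assuming a simple Gaussian affinity; the paper's actual optimization framework and the extended-focus-image construction are not reproduced here, and all names are illustrative.

```python
import numpy as np

def propagate_edits(features, edit_values, edit_mask, sigma=0.1):
    """Spread sparse edits to every pixel by Gaussian appearance affinity
    (a toy stand-in for optimization-based sparse edit propagation)."""
    edited_f = features[edit_mask]      # features of the user-edited pixels
    edited_v = edit_values[edit_mask]   # edit values at those pixels
    # squared feature distance from every pixel to every edited pixel
    d2 = ((features[:, None, :] - edited_f[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))  # Gaussian affinity weights
    return (w @ edited_v) / w.sum(axis=1)

# Toy example: two appearance clusters, one edited sample in each
feats = np.array([[0.0], [0.05], [1.0], [0.95]])
vals  = np.array([1.0, 0.0, 0.0, 0.0])   # edit value 1 at sample 0, 0 at sample 2
mask  = np.array([True, False, True, False])
out = propagate_edits(feats, vals, mask)
# unedited samples inherit the edit of their appearance cluster
```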

Terahertz Nondestructive Time-of-flight Imaging with a Large Depth Range

  • Kim, Hwan Sik;Kim, Jangsun;Ahn, Yeong Hwan
    • Current Optics and Photonics
    • /
    • v.6 no.6
    • /
    • pp.619-626
    • /
    • 2022
  • In this study, we develop a three-dimensional (3D) terahertz time-of-flight (THz-TOF) imaging technique with a large depth range, based on asynchronous optical sampling (ASOPS) methods. THz-TOF imaging with the ASOPS technique enables rapid scanning with a time-delay span of 10 ns. This means that a depth range of 1.5 m is possible in principle, whereas in practice it is limited by the focus depth determined by the optical geometry, such as the focal length of the scan lens. We characterize the spatial resolution of objects at different vertical positions with a focal length of 5 cm. The lateral resolution varies from 0.8 to 1.8 mm within the vertical range of 50 mm. We obtain THz-TOF images for samples with multiple reflection layers; the horizontal and vertical locations of the objects are successfully determined from the 2D cross-sectional images, or from reconstructed 3D images. For instance, we can identify metallic objects embedded in insulating enclosures having a vertical depth range greater than 30 mm. To demonstrate practical use, we employ the proposed technique to locate a metallic object within a thick chocolate bar, which is not accessible via conventional transmission geometry.
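The 1.5 m figure quoted above follows directly from the round-trip time-of-flight relation d = cΔt/2. A minimal sketch (the function name is illustrative; the refractive-index parameter is an assumption for media other than air):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_depth(delay_s, n=1.0):
    """Depth of a reflecting layer from the round-trip echo delay.
    The factor 2 accounts for the pulse travelling there and back;
    n is the refractive index of the medium (1.0 for air)."""
    return C * delay_s / (2.0 * n)

# The 10 ns delay span of the ASOPS system corresponds to ~1.5 m in air:
max_range = tof_depth(10e-9)
```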

3D Depth Information Extraction Algorithm Based on Motion Estimation in Monocular Video Sequence (단안 영상 시퀸스에서 움직임 추정 기반의 3차원 깊이 정보 추출 알고리즘)

  • Park, Jun-Ho;Jeon, Dae-Seong;Yun, Yeong-U
    • The KIPS Transactions:PartB
    • /
    • v.8B no.5
    • /
    • pp.549-556
    • /
    • 2001
  • The general problem of recovering 3D structure from 2D imagery requires depth information for each picture element, e.g. from focus. Manual creation of such 3D models is time-consuming and expensive. The goal of this paper is to simplify the depth estimation algorithm that extracts the depth information of every region from a monocular image sequence with camera translation, so that 3D video can be implemented in real time. The paper is based on the property that the motion of every point within an image taken under camera translation depends on its depth. Full-search motion estimation based on a block matching algorithm is exploited as a first step, and then the motion vectors are compensated for the effects of camera rotation and zooming. We introduce an algorithm that estimates object motion by analyzing a monocular motion picture, and that also calculates the average frame depth and the depth of each region relative to that average. Simulation results show that whether a region belongs to a near or a distant object accords with the relative depth that the human visual system perceives.

  • PDF
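The first step the abstract describes, full-search block matching, can be sketched as below. This is a toy implementation with sum-of-absolute-differences (SAD) matching; under pure camera translation, a larger motion magnitude then indicates a nearer region. Block size, search range, and names are illustrative.

```python
import numpy as np

def full_search_motion(ref, cur, block=4, search=3):
    """Full-search block matching: for each block in `ref`, find the
    displacement (dy, dx) within +/-search minimizing SAD against `cur`."""
    h, w = ref.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            patch = ref[by:by+block, bx:bx+block]
            best, best_v = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        sad = np.abs(patch - cur[y:y+block, x:x+block]).sum()
                        if sad < best:
                            best, best_v = sad, (dy, dx)
            vectors[(by, bx)] = best_v
    return vectors

# Toy frames: a bright square shifted 2 px to the right between frames
ref = np.zeros((8, 8)); ref[2:6, 1:5] = 1.0
cur = np.zeros((8, 8)); cur[2:6, 3:7] = 1.0
mv = full_search_motion(ref, cur)   # blocks covering the square report (0, 2)
```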

Quantitative analysis of increase in depth of focus using Wigner distribution function (Wigner 분포 함수를 초점 심도 증가의 정량적 해석)

  • 장남영;강호정;은재정;최평석
    • Korean Journal of Optics and Photonics
    • /
    • v.11 no.6
    • /
    • pp.385-389
    • /
    • 2000
  • A phase-retardation function derived from the Wigner distribution function (WDF) is used to increase the focal depth of a radially symmetric optical system. The WDF of a one-dimensional signal is represented as a two-dimensional function of phase space ($\chi,\zeta$), and the normalized irradiance is described in the form of the Strehl ratio (SR). The increase in focal depth is accomplished by applying a shearing tilt $a$, which represents the characteristic of free-space propagation, with simple manipulation in WDF space. In this paper we propose a method for evaluating the focal depth quantitatively by representing the phase-retardation function in terms of the focal depth. To verify the validity of the proposed method, we compared the numerically analyzed result with that of J. Sochki's study.

  • PDF
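For reference, the standard definitions the abstract relies on can be written out in the abstract's phase-space coordinates $(\chi, \zeta)$; the sign convention of the shear is one common choice, not necessarily the paper's:

```latex
% Wigner distribution function of a one-dimensional signal f
W_f(\chi, \zeta) = \int f\!\left(\chi + \tfrac{\chi'}{2}\right)
                   f^{*}\!\left(\chi - \tfrac{\chi'}{2}\right)
                   e^{-i 2\pi \zeta \chi'}\, d\chi'

% Free-space propagation acts on the WDF as a shear of phase space,
% parameterized by the tilt a:
W_f(\chi, \zeta) \;\longrightarrow\; W_f(\chi - a\,\zeta,\; \zeta)
```

Manipulating the tilt $a$ in this shear is what lets the focal depth be read off the phase-retardation function quantitatively.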

A Survey of Human Action Recognition Approaches that use an RGB-D Sensor

  • Farooq, Adnan;Won, Chee Sun
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.4 no.4
    • /
    • pp.281-290
    • /
    • 2015
  • Human action recognition from a video scene has remained a challenging problem in the area of computer vision and pattern recognition. The development of the low-cost RGB depth camera (RGB-D) allows new opportunities to solve the problem of human action recognition. In this paper, we present a comprehensive review of recent approaches to human action recognition based on depth maps, skeleton joints, and other hybrid approaches. In particular, we focus on the advantages and limitations of the existing approaches and on future directions.

Auto-focus of Optical Scanning Holographic Microscopy Using Partial Region Analysis (광 스캐닝 홀로그램 현미경에서 부분 영역 해석을 통한 자동 초점)

  • Kim, You-Seok;Kim, Tae-Geun
    • Korean Journal of Optics and Photonics
    • /
    • v.22 no.1
    • /
    • pp.10-15
    • /
    • 2011
  • In this paper, we propose an auto-focusing algorithm which extracts a depth parameter by analyzing a selected part of a hologram, and we use experimental results to show that the algorithm is practical. First, we record a complex hologram using Optical Scanning Holography. Next, we select a part of the hologram and extract depth information through Gaussian low-pass filtering, synthesis of a real-only hologram, power fringe-adjusted filtering, and inversion to a new frequency axis. Finally, we reconstruct the hologram automatically using the extracted depth location.
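The general pattern behind hologram auto-focusing, scanning candidate depths and keeping the one that maximizes a sharpness metric, can be sketched as below. This is not the paper's power fringe-adjusted filtering pipeline; the reconstruction here is a toy stand-in that merely blurs an edge image more the further the trial depth is from the true one.

```python
import numpy as np

def sharpness(img):
    """Edge-energy focus metric: mean squared finite-difference gradient."""
    gy, gx = np.gradient(img.astype(float))
    return (gx**2 + gy**2).mean()

def autofocus(reconstruct, hologram, depths):
    """Generic depth-scan autofocus: reconstruct the hologram at each
    candidate depth and keep the depth giving the sharpest image."""
    scores = [sharpness(reconstruct(hologram, z)) for z in depths]
    return depths[int(np.argmax(scores))]

def box_blur(img):
    """Non-wrapping 1-D box blur along the rows (edge-replicated)."""
    left  = np.concatenate([img[:, :1],  img[:, :-1]], axis=1)
    right = np.concatenate([img[:, 1:], img[:, -1:]], axis=1)
    return (left + img + right) / 3.0

# Toy stand-in for numerical propagation: an edge image blurred more
# the further the trial depth z is from the true depth 2.0
edge = np.zeros((16, 16)); edge[:, 8:] = 1.0
def fake_reconstruct(_holo, z):
    img = edge.copy()
    for _ in range(int(abs(z - 2.0) * 4)):
        img = box_blur(img)
    return img

best = autofocus(fake_reconstruct, None, [1.0, 1.5, 2.0, 2.5, 3.0])
```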

Projection-Type Integral Imaging Using a Pico-projector

  • Yang, Yucheol;Min, Sung-Wook
    • Journal of the Optical Society of Korea
    • /
    • v.18 no.6
    • /
    • pp.714-719
    • /
    • 2014
  • A pico-projector is a compact and mobile projector that has an infinite focus. We apply the pico-projector to a projection-type integral imaging system, which can expand the image depth to form multiple central depth planes. In a projection-type integral imaging system, the image flipping problem arises because the expanded elemental images pass through a lens array. To solve this problem, we propose the ray tracing of a pico-projector at a central depth plane and compensate the elemental image using a pixel-mapping process. Experiments to verify the proposed method are performed, and the results are presented.
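One simple pixel mapping that counters elemental-image flipping is to invert the pixels inside every lens cell. The toy sketch below shows only that per-cell flip; the paper's full compensation additionally uses ray tracing of the pico-projector at the central depth plane, which is not reproduced here.

```python
import numpy as np

def compensate_flipping(elemental, cell):
    """Flip the pixels inside every lens cell (elemental image) to
    pre-compensate the inversion introduced by the lens array."""
    out = elemental.copy()
    h, w = elemental.shape[:2]
    for y in range(0, h, cell):
        for x in range(0, w, cell):
            # read from the original so the assignment never aliases
            out[y:y+cell, x:x+cell] = elemental[y:y+cell, x:x+cell][::-1, ::-1]
    return out

# 4x4 image split into 2x2 lens cells; each cell is rotated by 180 degrees
img = np.arange(16).reshape(4, 4)
mapped = compensate_flipping(img, 2)
```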

Implementation of Enhanced Vision for an Autonomous Map-based Robot Navigation

  • Roland, Cubahiro;Choi, Donggyu;Kim, Minyoung;Jang, Jongwook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.41-43
    • /
    • 2021
  • Robot Operating System (ROS) has been a prominent and successful framework in robotics industry and academia. However, the framework has long been focused on, and limited to, robot navigation and manipulation of objects in the environment. This focus leaves out other important fields such as speech recognition and vision abilities. Our goal is to take advantage of ROS's capacity to integrate additional libraries of programming functions aimed at real-time computer vision with a depth-image camera. In this paper we focus on the implementation of upgraded vision with the help of a depth camera, which provides high-quality data for a much enhanced and more accurate understanding of the environment. The varied data from the cameras are then incorporated into the ROS communication structure for any potential use. For this particular case, the system uses OpenCV libraries to manipulate the data from the camera and provide face-detection capabilities to the robot while navigating an indoor environment. The whole system has been implemented and tested on the latest Turtlebot3 and Raspberry Pi 4 hardware.

  • PDF