• Title/Summary/Keyword: panoramic vision

Search Results: 33

Images Grouping Technology based on Camera Sensors for Efficient Stitching of Multiple Images (다수의 영상간 효율적인 스티칭을 위한 카메라 센서 정보 기반 영상 그룹핑 기술)

  • Im, Jiheon;Lee, Euisang;Kim, Hoejung;Kim, Kyuheon
    • Journal of Broadcast Engineering / v.22 no.6 / pp.713-723 / 2017
  • Because a panoramic image overcomes the limited viewing angle of a single camera and offers a wide field of view, it has been actively studied in computer vision and stereo-camera applications. To generate a panorama, stitching images captured by multiple conventional cameras is widely used instead of a single wide-angle camera, since it reduces image distortion. The image stitching technique extracts feature points from multiple images, builds descriptors for them, compares descriptor similarity, and links the images into one image. Each feature point carries several hundred dimensions of information, so data processing time grows as more images are stitched. In particular, when a panorama is generated from images of an object taken by many unspecified cameras, extracting the overlapping feature points of similar images takes even longer. In this paper, we propose a preprocessing step for efficient stitching of images obtained from many unspecified cameras of a single object or environment: images are pre-grouped using camera sensor information, which reduces the number of images stitched at one time and thus the data processing time. The groups are then stitched hierarchically to create one large panorama. Experimental results confirm that the proposed grouping preprocessing greatly reduces the stitching time for a large number of images.
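The grouping-then-hierarchical-stitching idea can be illustrated with a minimal sketch. The metadata fields (`x`, `y`, `yaw`), the thresholds, and the greedy first-fit policy below are assumptions for illustration, not the authors' algorithm:

```python
import math

def group_by_sensor(images, dist_thresh=10.0, yaw_thresh=45.0):
    """Greedy grouping sketch: an image joins the first group whose seed
    image was captured within dist_thresh meters and yaw_thresh degrees.
    Only images inside one group are stitched together in a single pass."""
    groups = []
    for img in images:
        placed = False
        for g in groups:
            seed = g[0]
            dx = img["x"] - seed["x"]
            dy = img["y"] - seed["y"]
            # Smallest angular difference, wrapped to [-180, 180)
            d_yaw = abs((img["yaw"] - seed["yaw"] + 180) % 360 - 180)
            if math.hypot(dx, dy) <= dist_thresh and d_yaw <= yaw_thresh:
                g.append(img)
                placed = True
                break
        if not placed:
            groups.append([img])  # start a new group seeded by this image
    return groups

# Hypothetical sensor metadata for three shots: a and b are close in
# position and heading, c was taken far away and facing the other way.
shots = [
    {"id": "a", "x": 0.0, "y": 0.0, "yaw": 0.0},
    {"id": "b", "x": 2.0, "y": 1.0, "yaw": 20.0},
    {"id": "c", "x": 50.0, "y": 0.0, "yaw": 180.0},
]
groups = group_by_sensor(shots)
print([len(g) for g in groups])  # prints [2, 1]
```

Each group would then be stitched into a partial panorama, and the partial panoramas stitched hierarchically, so feature matching never has to compare every image against every other image.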

Catadioptric Omnidirectional Optical System Using a Spherical Mirror with a Central Hole and a Plane Mirror for Visible Light (중심 구멍이 있는 구면거울과 평면거울을 이용한 가시광용 반사굴절식 전방위 광학계)

  • Seo, Hyeon Jin;Jo, Jae Heung
    • Korean Journal of Optics and Photonics / v.26 no.2 / pp.88-97 / 2015
  • An omnidirectional optical system is a special optical system that images in real time a panoramic scene with an azimuthal angle of 360° and an altitude range corresponding to the fields of view above and below the horizon. In this paper, for easy fabrication and compact size, we designed and fabricated a catadioptric omnidirectional optical system for the visible spectrum, consisting of a mirror part (a spherical mirror with a central hole, i.e., an obscuration, and a plane mirror) and an imaging lens part (three single spherical lenses and a spherical doublet). We evaluated its imaging performance by measuring the cut-off spatial frequency using automobile license plates, and the vertical field of view using an ISO 12233 chart. The resulting system achieves a vertical field of view from +53° to -17° and an azimuthal angle of 360°. It cleanly imaged the letters on a car's front license plate at an object distance of 3 meters, which corresponds to a cut-off spatial frequency of 135 lp/mm.

360 RGBD Image Synthesis from a Sparse Set of Images with Narrow Field-of-View (소수의 협소화각 RGBD 영상으로부터 360 RGBD 영상 합성)

  • Kim, Soojie;Park, In Kyu
    • Journal of Broadcast Engineering / v.27 no.4 / pp.487-498 / 2022
  • A depth map is an image that encodes 3D distance information on a 2D plane and is used in various 3D vision tasks. Most existing depth estimation studies use narrow-FoV images, in which a significant portion of the scene is lost. In this paper, we propose a technique for generating a 360° omnidirectional RGBD image from a sparse set of narrow-FoV images. The proposed generative adversarial network (GAN)-based model estimates the relative FoV of the inputs within the full panorama from a small number of non-overlapping images and produces a 360° RGB image and depth map simultaneously. It further improves performance with a network design that reflects the spherical characteristics of the 360° image.
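The "spherical characteristics" of a 360° image usually refer to the equirectangular layout, where each pixel corresponds to a direction on the sphere. A minimal sketch of that mapping (the resolution and function name are illustrative only; the paper's network design is not shown):

```python
import math

def pixel_to_spherical(u, v, width, height):
    """Map pixel (u, v) in an equirectangular 360-degree image to a
    (longitude, latitude) direction in radians. Longitude spans the
    full 360-degree azimuth; latitude spans pole to pole."""
    lon = (u + 0.5) / width * 2.0 * math.pi - math.pi   # [-pi, pi)
    lat = math.pi / 2.0 - (v + 0.5) / height * math.pi  # (+pi/2 .. -pi/2)
    return lon, lat

# The image center looks straight ahead at the horizon (lon = lat = 0).
lon, lat = pixel_to_spherical(1023.5, 511.5, 2048, 1024)
print(lon, lat)  # prints 0.0 0.0
```

This mapping is also why equal pixel areas near the top and bottom rows cover much smaller solid angles than pixels near the equator, which is the kind of distortion a sphere-aware network has to account for.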