• Title/Summary/Keyword: Video Stitching


Fixed Homography-Based Real-Time SW/HW Image Stitching Engine for Motor Vehicles

  • Suk, Jung-Hee;Lyuh, Chun-Gi;Yoon, Sanghoon;Roh, Tae Moon
    • ETRI Journal / v.37 no.6 / pp.1143-1153 / 2015
  • In this paper, we propose an efficient architecture for a real-time image stitching engine for vision SoCs in motor vehicles. To enlarge the obstacle-detection distance and area for safety, we adopt panoramic images from multiple telephoto cameras. We propose a stitching method based on a fixed homography that is derived from the initial frame of a video sequence and is used to warp all input images without regeneration. Because the fixed homography is generated only once, at the initial state, we can calculate it in SW to reduce HW costs. The proposed warping HW engine is based on a linear transform of the pixel positions of the warped images and reduces the computational complexity by 90% or more compared with a conventional method. A dual-core SW/HW image stitching engine stitches input frames in parallel, improving performance by 70% or more compared with single-core operation. In addition, the dual-core structure detects failures in the state machines using lock-step logic to satisfy the ISO 26262 standard. The dual-core SW/HW image stitching engine is fabricated in an SoC with a gate count of 254,968 using GlobalFoundries' 65 nm CMOS process. The single-core engine can produce panoramic images from three YCbCr 4:2:0 formatted VGA images at 44 frames per second at a clock frequency of 200 MHz without an LCD display.
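The core idea above — estimate one homography at start-up and reuse it for every later frame — can be sketched as follows. This is an illustrative numpy-only sketch (the paper's actual SW/HW partitioning is not reproduced); `estimate_homography` is a plain DLT solve, done once, and `warp_point` is the per-pixel reuse step:

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: estimate a 3x3 homography from >= 4
    point correspondences (src[i] -> dst[i]); in the paper's scheme
    this runs once, in software, on the initial frame."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, pt):
    """Warp a single pixel position with the fixed homography;
    every subsequent frame reuses H without re-estimation."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

# Correspondences from a hypothetical initial frame (pure translation).
src = [(0, 0), (100, 0), (100, 100), (0, 100)]
dst = [(10, 5), (110, 5), (110, 105), (10, 105)]
H = estimate_homography(src, dst)
print(warp_point(H, (50, 50)))  # approximately (60.0, 55.0)
```

The HW engine's linear transform of pixel positions amounts to evaluating `warp_point` incrementally along scanlines, which is where the claimed complexity reduction comes from.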

A Study on Web-based Video Panoramic Virtual Reality for the Hoseo Cyber Shell Museum (비디오 파노라마 가상현실을 기반으로 하는 호서 사이버 패류 박물관의 연구)

  • Hong, Sung-Soo;khan, Irfan;Kim, Chang-ki
    • Proceedings of the Korea Information Processing Society Conference / 2012.11a / pp.1468-1471 / 2012
  • Recreating the experience of a particular place is a long-standing goal; panoramic virtual reality is a technology for creating virtual environments in which the user can control the viewing angle and choose the path of view through a dynamic scene. In this paper we examine efficient algorithms for the registration and stitching of images captured from a video stream. Two approaches are studied. In the first, dynamic programming is used to locate ideal key points, these points are matched to merge adjacent images, and image blending is then used for smooth color transitions. In the second, FAST and SURF detection find distinctive features in the images, a nearest-neighbor algorithm matches corresponding features, and a homography is estimated from the matched key points using RANSAC. The paper also covers automatically selecting (recognizing and comparing) the images to be stitched.
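The nearest-neighbor matching step in the second approach can be sketched as a brute-force descriptor match with a ratio test. This is a generic illustration, not the paper's code; the descriptors here are tiny synthetic vectors standing in for FAST/SURF output, and the 0.75 ratio is a common convention rather than a value from the paper:

```python
import numpy as np

def match_nearest(desc_a, desc_b, ratio=0.75):
    """Brute-force nearest-neighbour matching with a ratio test: a
    match is kept only when the best candidate is clearly closer than
    the second best, suppressing ambiguous correspondences."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        if dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

The surviving matches would then feed the RANSAC homography estimation described in the abstract.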

3D Panorama Generation Using Depth-Map Stitching

  • Cho, Seung-Il;Kim, Jong-Chan;Ban, Kyeong-Jin;Park, Kyoung-Wook;Kim, Chee-Yong;Kim, Eung-Kon
    • Journal of information and communication convergence engineering / v.9 no.6 / pp.780-784 / 2011
  • As the popularization and development of 3D displays make it easy for ordinary users to experience immersive 3D virtual reality, the demand for virtual-reality content is increasing. In this paper, we propose a 3D panorama system using a vanishing-point-location-based depth-map generation method. A 3D panorama built by depth-map stitching gives users the feeling of standing in the real place and looking around at the nearby surroundings. The 3D panorama also offers a free viewpoint for both nearby and remote objects and provides stereoscopic 3D video.
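The abstract does not specify how the vanishing-point location is turned into depth, so the following is only a toy reading of the idea: pixels nearer the vanishing point are assumed farther from the camera, and depth is normalized to [0, 1]. Function name and mapping are hypothetical:

```python
import numpy as np

def depth_from_vanishing_point(h, w, vp):
    """Toy depth map: distance to the vanishing point (vp = (x, y))
    is inverted so pixels at the vp get the largest depth value."""
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - vp[0], ys - vp[1])
    return 1.0 - dist / dist.max()

depth = depth_from_vanishing_point(9, 9, vp=(4, 4))
```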

Image Stitching focused on Priority Object using Deep Learning based Object Detection (딥러닝 기반 사물 검출을 활용한 우선순위 사물 중심의 영상 스티칭)

  • Rhee, Seongbae;Kang, Jeonho;Kim, Kyuheon
    • Journal of Broadcast Engineering / v.25 no.6 / pp.882-897 / 2020
  • Recently, the use of immersive media content such as panoramic and 360° video is increasing. Because the viewing angle of an ordinary camera is too limited to generate such content, image stitching is mainly used to combine images taken with multiple cameras into one image with a wide field of view. However, if the parallax between the cameras is large, parallax distortion may appear in the stitched image and disturb the user's immersion, so a stitching method that overcomes parallax distortion is required. Existing seam-optimization-based stitching methods address parallax distortion by reflecting the location of objects in an energy function or through object segment information, but the initial seam location, background information, performance of the object detector, and placement of objects can limit their applicability. Therefore, in this paper we propose an image stitching method that overcomes these limitations by adding a weight, set differently according to the type of object found by deep-learning-based object detection, to the energy value.
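The object-weighted energy idea can be sketched as follows. The class names and weight values here are hypothetical (the abstract does not list them); the point is only that detected boxes raise the energy so a minimum-energy seam is routed around priority objects:

```python
import numpy as np

# Hypothetical per-class priorities; higher weight pushes the seam
# further away from that object class.
CLASS_WEIGHT = {"person": 1000.0, "car": 300.0, "background": 0.0}

def seam_energy(overlap_a, overlap_b, detections):
    """Base energy is the colour difference between the two overlapping
    images; each detected bounding box (class, (x0, y0, x1, y1)) adds
    its class weight inside the box."""
    energy = np.abs(overlap_a.astype(float) - overlap_b.astype(float))
    if energy.ndim == 3:          # sum colour channels if present
        energy = energy.sum(axis=2)
    for cls, (x0, y0, x1, y1) in detections:
        energy[y0:y1, x0:x1] += CLASS_WEIGHT[cls]
    return energy
```

Any seam optimizer (graph cut, dynamic programming) run on this map will then avoid crossing high-priority detections unless the photometric cost elsewhere is enormous.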

Real-Time Panoramic Video Streaming Technique with Multiple Virtual Cameras (다중 가상 카메라의 실시간 파노라마 비디오 스트리밍 기법)

  • Ok, Sooyol;Lee, Suk-Hwan
    • Journal of Korea Multimedia Society / v.24 no.4 / pp.538-549 / 2021
  • In this paper, we introduce a technique for real-time 360-degree panoramic video streaming with multiple virtual cameras. The proposed technique consists of generating 360-degree panoramic video data through ORB feature-point detection, texture transformation, panoramic video data compression, and RTSP-based video streaming transmission. In particular, the generation of the 360-degree panoramic video data and the texture transformation are accelerated with CUDA for compute-intensive processing such as camera calibration, stitching, blending, and encoding. Our experiments evaluated the frame rate (fps) of the transmitted 360-degree panoramic video and verified that the technique sustains at least 30 fps at 4K output resolution, indicating that it can both generate and transmit 360-degree panoramic video data in real time.

Panoramic Video Generation Method Based on Foreground Extraction (전경 추출에 기반한 파노라마 비디오 생성 기법)

  • Kim, Sang-Hwan;Kim, Chang-Su
    • The Transactions of The Korean Institute of Electrical Engineers / v.60 no.2 / pp.441-445 / 2011
  • In this paper, we propose an algorithm for generating panoramic videos using fixed multiple cameras. We estimate a background image from each camera. Then we calculate perspective relationships between images using extracted feature points. To eliminate stitching errors due to different image depths, we process background images and foreground images separately in the overlap regions between adjacent cameras by projecting regions of foreground images selectively. The proposed algorithm can be used to enhance the efficiency and convenience of wide-area surveillance systems.
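The abstract does not say how the background image is estimated, so the sketch below uses a per-pixel temporal median, a common choice for fixed cameras, as a stand-in; the function names are illustrative:

```python
import numpy as np

def estimate_background(frames):
    """Per-pixel temporal median over a window of frames from a fixed
    camera: transient foreground objects drop out of the median,
    leaving the static scene."""
    return np.median(np.stack(frames), axis=0)

def foreground_mask(frame, background, threshold=25):
    """Pixels that differ strongly from the background model; these
    are the regions projected selectively in the overlap areas."""
    return np.abs(frame.astype(float) - background) > threshold
```

With background and foreground separated this way, the overlap regions can composite the background with one blend and paste foreground regions selectively, as the abstract describes.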

2D Adjacency Matrix Generation using DCT for UWV contents

  • Li, Xiaorui;Lee, Euisang;Kang, Dongjin;Kim, Kyuheon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2016.11a / pp.39-42 / 2016
  • As display devices such as TVs and signage grow larger, media types are shifting toward wider views such as UHD, panoramic, and jigsaw-like media. Panoramic and jigsaw-like media in particular are realized by stitching video clips captured by different cameras or devices. In order to stitch those video clips, a 2D adjacency matrix describing the spatial relationships among them must be found. The Discrete Cosine Transform (DCT), widely used as a compression transform, converts each frame of a video source from the spatial domain (2D) into the frequency domain. Based on these compressed features, the 2D adjacency matrix of the images can be found, so that a spatial map of the images can be built efficiently using the DCT. This paper proposes a new method of generating the 2D adjacency matrix using the DCT for producing panoramic and jigsaw-like media from various individual video clips.
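One plausible reading of the method — the abstract does not give the exact matching rule — is to compare low-frequency DCT coefficients of adjoining edge strips: if the right edge of clip A matches the left edge of clip B in the frequency domain, B is likely A's right neighbor in the adjacency matrix. A numpy-only sketch under that assumption:

```python
import numpy as np

def dct_matrix(n):
    """Cosine basis for an unnormalised DCT-II."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.cos(np.pi * (2 * i + 1) * k / (2 * n))

def dct2(block):
    """Separable 2-D DCT of a (possibly non-square) block."""
    h, w = block.shape
    return dct_matrix(h) @ block @ dct_matrix(w).T

def edge_similarity(frame_a, frame_b, strip=8, keep=4):
    """Distance between the low-frequency DCT coefficients of A's
    right edge strip and B's left edge strip; a small value suggests
    B sits to the right of A."""
    fa = dct2(frame_a[:, -strip:].astype(float))[:keep, :keep]
    fb = dct2(frame_b[:, :strip].astype(float))[:keep, :keep]
    return float(np.linalg.norm(fa - fb))
```

Running `edge_similarity` over all ordered clip pairs (and the analogous top/bottom strips) fills in the 2D adjacency matrix.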


Binary Image Based Fast DoG Filter Using Zero-Dimensional Convolution and State Machine LUTs

  • Lee, Seung-Jun;Lee, Kye-Shin;Kim, Byung-Gyu
    • Journal of Multimedia Information System / v.5 no.2 / pp.131-138 / 2018
  • This work describes a binary image based fast Difference of Gaussian (DoG) filter using zero-dimensional (0-d) convolution and state machine look-up tables (LUTs) for image and video stitching hardware platforms. The proposed approach of using binary images for DoG filtering significantly reduces the data size compared to conventional gray-scale-based DoG filters, yet binary images still preserve the key features of the image such as contours, edges, and corners. Furthermore, the binary image based DoG filtering can be realized with zero-dimensional convolution and state machine LUTs, which eliminate most of the adder and multiplier blocks generally used in conventional DoG filter hardware engines. This enables fast computation along with the data-size reduction, leading to compact and low-power image and video stitching hardware blocks. The proposed binary-image DoG filter has been implemented on an FPGA (Altera DE2-115), and the results have been verified.
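The binary DoG principle can be illustrated in software. This sketch is not the paper's LUT hardware: it binarizes the image and takes the difference of two blur scales, using a box blur as a stand-in for the Gaussian stages. On a binary input each blur output is just a count of set neighbors, which is exactly the property that makes LUT-based evaluation possible in hardware:

```python
import numpy as np

def binarize(img, thresh=128):
    """1-bit image: contours, edges, and corners survive thresholding."""
    return (img >= thresh).astype(float)

def box_blur(img, k):
    """k x k box blur (stand-in for the Gaussian); on a binary image
    each output pixel is a neighbour count divided by k*k."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def binary_dog(img, k1=3, k2=7):
    """Difference of two blur scales applied to the binarized image."""
    b = binarize(img)
    return box_blur(b, k1) - box_blur(b, k2)

# The response concentrates around the step edge of a binary image.
img = np.zeros((16, 16)); img[:, 8:] = 255
dog = binary_dog(img)
```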

Video Stitching Algorithm Using Improved Graphcut Algorithm (개선된 그래프 컷 알고리즘을 이용한 비디오 정합 알고리즘)

  • Yoon, Yeo Kyung;Rhee, Kwang Jin;Lee, Hoon Min;Lee, Yun Gu
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.06a / pp.112-115 / 2018
  • In this paper, we propose a new video stitching method that applies a graph cut (GC) algorithm considering both spatial and temporal consistency. After frame alignment is completed for all frames acquired from the input videos, a seam-finding step is performed for frame composition. In this step, an improved graph cut algorithm finds the optimal seam along which the aligned frames can be composited naturally. First, the optimal seam found in the first input frame is set as the reference seam. Then, when finding seams for subsequent frames, a new cost function is applied that uses the distance from the reference seam as a weight. When a moving object exists in the overlap region of the input frames, the optimal seam found by the proposed algorithm avoids damaging the object's shape while keeping the seams of consecutive frames similar. As a result, high-quality video stitching with both spatial and temporal naturalness is obtained.
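The distance-to-reference-seam weighting can be illustrated with a simplified seam finder. This sketch substitutes a dynamic-programming vertical seam for the paper's graph cut (the weighting idea is the same); `lam` is a hypothetical weight, not a value from the paper:

```python
import numpy as np

def find_seam(cost):
    """Dynamic-programming vertical seam (a simple stand-in for the
    graph cut): one column index per row, with one-pixel lateral
    moves allowed between rows."""
    h, w = cost.shape
    acc = cost.astype(float).copy()
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            acc[y, x] += acc[y - 1, lo:hi].min()
    seam = [int(np.argmin(acc[-1]))]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam.append(lo + int(np.argmin(acc[y, lo:hi])))
    return seam[::-1]

def temporal_cost(color_diff, ref_seam, lam=0.5):
    """Add a penalty proportional to the horizontal distance from the
    previous frame's seam, keeping consecutive seams similar."""
    h, w = color_diff.shape
    dist = np.abs(np.arange(w)[None, :] - np.asarray(ref_seam)[:, None])
    return color_diff + lam * dist
```

With zero photometric cost the seam simply tracks the reference; with real overlap differences, `lam` trades spatial quality against temporal stability.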


Fast Stitching Algorithm and Cubic Panoramic Image Reducing Distortions (빠른 스티칭 알고리즘과 왜곡현상을 해소하는 큐브 파노라마 영상)

  • Kim Eung-Kon;Seo Seung-Wan
    • Proceedings of the Korea Contents Association Conference / 2005.11a / pp.580-584 / 2005
  • One of the problems of panoramic image stitching methods is that their computational cost is so high that the required image processing usually cannot be done in real time. Real-time performance is important in applications such as video surveillance because current scenes must be visible, yet calculating the transform coefficients between images takes more than several seconds. Panoramic VR technologies such as Apple QuickTime VR also have the problem of distorting the top and bottom of the image. This paper presents a fast stitching method and a method for reducing the top and bottom distortions in a cubic panoramic image.
