Title/Summary/Keyword: 2D Video

Search Results: 910

Video Mosaics in 3D Space

  • Chon, Jaechoon;Fuse, Takashi;Shimizu, Eihan
    • Proceedings of the KSRS Conference / 2003.11a / pp.390-392 / 2003
  • Video mosaicing techniques have been widely used in virtual reality environments. In the GIS field in particular, video mosaics are becoming more and more common for representing urban environments. Such applications mainly use spherical or panoramic mosaics, which are based on images taken from a camera rotating around its nodal point; the viewpoint, however, is limited to locations within a small area. On the other hand, 2D mosaics, which are based on images taken from a translating camera, can acquire data over a wide area. 2D mosaics still have a problem: they cannot be applied to images taken from a camera rotating through large angles. To address this problem, we propose a novel method for creating video mosaics in 3D space. The proposed algorithm consists of four steps: feature-based optical flow detection, camera orientation, 2D image projection, and image registration in 3D space. All of the processes are fully automatic, and they were successfully implemented and tested with real images.

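As a rough illustration of the first step of the pipeline above (feature-based optical flow detection), the sketch below tracks features between two consecutive frames with OpenCV; the file names and parameter values are assumptions, and the camera-orientation and 3D registration steps of the paper are not reproduced.

```python
# Minimal sketch: feature-based optical flow between two frames (OpenCV).
# File names and parameter values are illustrative, not from the paper.
import cv2
import numpy as np

prev = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frames
curr = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# Detect good features to track in the first frame.
pts_prev = cv2.goodFeaturesToTrack(prev, maxCorners=500,
                                   qualityLevel=0.01, minDistance=7)

# Track them into the second frame with pyramidal Lucas-Kanade.
pts_curr, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, pts_prev, None,
                                                 winSize=(21, 21), maxLevel=3)

# Keep only successfully tracked points; these flow vectors would feed the
# camera-orientation and 3D registration stages described in the abstract.
good_prev = pts_prev[status.flatten() == 1].reshape(-1, 2)
good_curr = pts_curr[status.flatten() == 1].reshape(-1, 2)
flow = good_curr - good_prev
print(f"{len(flow)} flow vectors, mean magnitude "
      f"{np.linalg.norm(flow, axis=1).mean():.2f} px")
```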

A Trend Study on 2D to 3D Video Conversion Technology using Analysis of Patent Data (특허 분석을 통한 2D to 3D 영상 데이터 변환 기술 동향 연구)

  • Kang, Michael M.;Lee, Wookey;Lee, Rich. C.
    • Journal of Information Technology and Architecture / v.11 no.4 / pp.495-504 / 2014
  • This paper presents a strategy for intellectual property acquisition and a direction for core technology development based on an analysis of 2D-to-3D video conversion patent data. The analysis of patent trends in 2D-to-3D technology shows that it is a very promising technology field. Using a strategic patent map built from this patent trend research, companies can keep ahead of the competition in the 2D-to-3D image data conversion market.

3D Conversion of 2D Video Encoded by H.264

  • Hong, Ho-Ki;Ko, Min-Soo;Seo, Young-Ho;Kim, Dong-Wook;Yoo, Ji-Sang
    • Journal of Electrical Engineering and Technology / v.7 no.6 / pp.990-1000 / 2012
  • In this paper, we propose an algorithm that creates three-dimensional (3D) stereoscopic video from two-dimensional (2D) video encoded by H.264, instead of using two cameras as in conventional acquisition. Very accurate motion vectors are available in H.264 bit streams because of the variety of block sizes they support. The 2D-to-3D conversion algorithm proposed in this paper creates the left and right images from this extracted motion information. The image type of a given frame is first determined from the extracted motion information, and each image type is handled by a different conversion procedure. Cut detection is also performed to prevent two entirely different scenes from being mixed into the left and right images. We show the improved performance of the proposed algorithm through experimental results.
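
The abstract does not give implementation details, but the core idea (motion magnitude as a depth cue, followed by horizontal pixel shifting to synthesize a stereo pair) can be sketched as below. The H.264 motion-vector extraction, image-type classification and cut detection are omitted, and the `stereo_from_motion` helper and its parameters are illustrative assumptions only.

```python
# Simplified sketch of motion-based 2D-to-3D conversion: treat motion
# magnitude as a depth cue and shift pixels horizontally to form a stereo
# pair.  This is NOT the paper's algorithm, only the general principle.
import numpy as np

def stereo_from_motion(frame, motion_mag, max_disparity=8):
    """frame: HxWx3 uint8 image, motion_mag: HxW motion magnitude map."""
    h, w = motion_mag.shape
    # Normalise motion magnitude to a pseudo-depth in [0, 1]
    # (larger motion -> assumed closer to the camera).
    rng = motion_mag.max() - motion_mag.min()
    depth = (motion_mag - motion_mag.min()) / (rng + 1e-6)
    disparity = (depth * max_disparity).astype(np.int32)

    left = np.zeros_like(frame)
    right = np.zeros_like(frame)
    cols = np.arange(w)
    for y in range(h):
        # Shift each pixel left/right by its disparity; occlusion holes are
        # left black here (real systems fill them in).
        xl = np.clip(cols + disparity[y], 0, w - 1)
        xr = np.clip(cols - disparity[y], 0, w - 1)
        left[y, xl] = frame[y, cols]
        right[y, xr] = frame[y, cols]
    return left, right

# Toy usage with random data standing in for a decoded H.264 frame and its
# per-pixel motion magnitude.
frame = np.random.randint(0, 255, (120, 160, 3), dtype=np.uint8)
motion = np.random.rand(120, 160).astype(np.float32)
left, right = stereo_from_motion(frame, motion)
```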

Implementation of AR Remote Rendering Techniques for Real-time Volumetric 3D Video

  • Lee, Daehyeon;Lee, Munyong;Lee, Sang-ha;Lee, Jaehyun;Kwon, Soonchul
    • International Journal of Internet, Broadcasting and Communication / v.12 no.2 / pp.90-97 / 2020
  • Recently, with the growth of the mixed reality industrial infrastructure, related convergence research has been proposed. Real-time mixed reality services such as remote video conferencing require research on real-time acquisition, processing and transfer methods. This paper aims to implement an AR remote rendering method for volumetric 3D video data. We propose and implement two modules: a parsing module that brings the volumetric 3D video into a game engine, and a server rendering module. Experimental results showed that volumetric 3D video sequence data of about 15 MB was compressed by 6-7%, and the remote module streamed at 27 fps at a resolution of 1200 × 1200. The results of this paper are expected to be applied to AR cloud services.
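
A minimal sketch of a frame-streaming loop in the spirit of the server rendering module described above is shown below; the socket protocol, JPEG compression, resolution and 27 fps target are illustrative assumptions rather than the paper's actual implementation.

```python
# Minimal sketch: push rendered frames to a client at a target frame rate.
# Protocol, port, resolution and frame rate are assumptions for illustration.
import socket
import struct
import time
import numpy as np
import cv2

TARGET_FPS = 27
FRAME_SIZE = (1200, 1200)

def serve(host="0.0.0.0", port=9999):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    conn, _ = srv.accept()
    period = 1.0 / TARGET_FPS
    try:
        while True:
            start = time.time()
            # Placeholder for a frame rendered from the volumetric sequence.
            frame = np.random.randint(0, 255, (*FRAME_SIZE, 3), dtype=np.uint8)
            ok, buf = cv2.imencode(".jpg", frame)
            if not ok:
                continue
            data = buf.tobytes()
            # Length-prefixed message so the client can split the stream.
            conn.sendall(struct.pack(">I", len(data)) + data)
            # Sleep out the remainder of the frame period to hold ~27 fps.
            time.sleep(max(0.0, period - (time.time() - start)))
    finally:
        conn.close()
        srv.close()
```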

2D to 3D Conversion Using The Machine Learning-Based Segmentation And Optical Flow (학습기반의 객체분할과 Optical Flow를 활용한 2D 동영상의 3D 변환)

  • Lee, Sang-Hak
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.11 no.3 / pp.129-135 / 2011
  • In this paper, we propose an algorithm for the 3D conversion of 2D video that uses optical flow and machine-learning-based segmentation. To obtain a segmentation suitable for 3D conversion, we design a new energy function in which color/texture features are incorporated through a machine learning method, and optical flow is introduced in order to focus on regions with motion. The depth map is then calculated from the optical flow of the segmented regions, and left/right images for the 3D conversion are produced. Experiments on various videos show that the proposed method yields reliable segmentation results and depth maps for the 3D conversion of 2D video.
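
A rough sketch of the idea follows, with plain k-means standing in for the paper's learning-based segmentation and energy function; the depth-from-flow rule (larger motion assumed closer) and the parameter values are simplifying assumptions.

```python
# Sketch: segment a frame on colour + motion features and assign each
# segment a depth from its mean optical flow.  K-means is only a stand-in
# for the paper's learning-based segmentation.
import cv2
import numpy as np

def depth_from_segments(prev_gray, curr_gray, curr_bgr, k=6):
    # Dense optical flow between consecutive grayscale frames (Farneback).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)

    h, w = mag.shape
    feats = np.concatenate(
        [curr_bgr.reshape(-1, 3).astype(np.float32) / 255.0,
         mag.reshape(-1, 1).astype(np.float32)], axis=1)

    # K-means segmentation on colour + motion features.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _ = cv2.kmeans(feats, k, None, criteria, 3,
                              cv2.KMEANS_PP_CENTERS)
    labels = labels.reshape(h, w)

    # Depth per segment: normalised mean flow magnitude (larger motion is
    # assumed closer, a simple motion-parallax cue).
    depth = np.zeros((h, w), np.float32)
    for seg in range(k):
        mask = labels == seg
        depth[mask] = mag[mask].mean()
    depth /= depth.max() + 1e-6
    return depth
```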

Enhancement of 3D Point Cloud Contents Using 2D Image Super Resolution Network

  • Seonghwan Park;Junsik Kim;Yonghae Hwang;Doug Young Suh;Kyuheon Kim
    • Journal of Web Engineering / v.21 no.2 / pp.425-442 / 2021
  • Media technology has been developed to give users a sense of immersion, and recent media using 3D spatial data, such as augmented reality and virtual reality, have attracted attention. A point cloud is a data format consisting of a number of points, and thus it can express 3D media using coordinates and color information for each point. Since a point cloud is much larger than a 2D image, a technology to compress it is required; such a technology has been standardized by the international standards organization MPEG as video-based point cloud compression (V-PCC). V-PCC decomposes 3D point cloud data into 2D patches along orthogonal directions, places those patches into a 2D image sequence, and then compresses the sequence using existing 2D video codecs. However, data loss may occur while converting a 3D point cloud into a 2D image sequence and encoding this sequence with a legacy video codec, and this loss can degrade the quality of the reconstructed point cloud. This paper proposes a method for enhancing a reconstructed point cloud by applying a super resolution network to the 2D patch image sequence of the 3D point cloud.
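
The paper's super-resolution network is not specified in the abstract, so the sketch below uses a tiny SRCNN-style model only to illustrate how a 2D super-resolution network could be applied to a V-PCC patch image; the architecture, layer sizes and scale factor are assumptions.

```python
# Illustrative stand-in for "apply a 2D SR network to a patch image":
# bicubic upsampling followed by a small residual refinement network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySR(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 5, padding=2))

    def forward(self, x):
        # Upsample first (bicubic), then refine with a residual correction.
        x = F.interpolate(x, scale_factor=self.scale, mode="bicubic",
                          align_corners=False)
        return x + self.body(x)

# Toy usage on a low-resolution patch image tensor (N, C, H, W).
net = TinySR(scale=2)
lr_patch = torch.rand(1, 3, 256, 256)
with torch.no_grad():
    sr_patch = net(lr_patch)   # -> (1, 3, 512, 512)
```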

Coding Technology for Stereoscopic 3D Broadcasting (스테레오 3D 방송을 위한 비디오 부호화 기술)

  • Choe, Byeong-Ho;Kim, Yong-Hwan;Kim, Je-U;Park, Ji-Ho
    • Broadcasting and Media Magazine / v.15 no.1 / pp.24-36 / 2010
  • Digital broadcasting providers now plan to extend their services to 3D broadcasting without replacing conventional systems and equipment. Maintaining backward compatibility with the conventional 2D broadcasting system is a very important issue in digital broadcasting. To satisfy this requirement, a highly optimized MPEG-2 video encoder is essential for coding the left view, and new video coding techniques with higher performance than MPEG-4 AVC/H.264 are needed for the right view, since the terrestrial broadcasting system has a very limited, fixed bandwidth. In this paper, conventional and new video coding algorithms are analyzed in order to present a viable solution for best-quality stereoscopic 3D broadcasting that keeps backward compatibility within the available bandwidth.
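
The backward-compatible layout described above (left view coded with MPEG-2 for legacy 2D receivers, right view with a more efficient codec) could be emulated with off-the-shelf tools roughly as follows; the file names and bitrates are hypothetical, and the paper's optimized encoders are not reproduced here.

```python
# Illustration only: encode the two views of a stereo pair with different
# codecs using ffmpeg.  Inputs, outputs and bitrates are made-up examples.
import subprocess

# Left view: MPEG-2 video, so legacy 2D receivers can still decode it.
subprocess.run(["ffmpeg", "-y", "-i", "left_view.mp4",
                "-c:v", "mpeg2video", "-b:v", "12M", "left_mpeg2.m2v"],
               check=True)

# Right view: H.264 at a lower bitrate, fitting the remaining channel
# bandwidth of the fixed terrestrial multiplex.
subprocess.run(["ffmpeg", "-y", "-i", "right_view.mp4",
                "-c:v", "libx264", "-b:v", "4M", "right_h264.mp4"],
               check=True)
```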

Subjective Video Quality Comparison of 3D Display Monitors (3D 디스플레이 모니터의 주관적 화질 상관도 비교)

  • Youn, Sungwook;Ok, Jiheon;Yim, Donghyun;Han, Taehwan;Lee, Chulhee
    • Journal of Broadcast Engineering / v.18 no.3 / pp.416-424 / 2013
  • Recently, efforts to develop international standards related to 3DTV quality assessment have been underway in the International Telecommunication Union and the Video Quality Experts Group. Unlike conventional 2D displays, 3D display monitors come in several types: passive glasses, active glasses, and auto-stereoscopic. In this paper, we performed subjective video quality tests using various 3D display monitors in order to examine whether these monitors produce consistent perceptual video quality scores for processed video sequences. The experimental results show that the subjective scores obtained on the different 3D monitors are highly correlated, so similar subjective scores can be expected even when different types of 3D displays are used.
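
The kind of cross-display correlation analysis reported above can be sketched as below; the mean opinion score (MOS) arrays are made-up placeholders, not the paper's data.

```python
# Sketch: Pearson correlation between per-sequence MOS collected on
# different 3D display types.  All numbers below are hypothetical.
import numpy as np

def pearson(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.corrcoef(a, b)[0, 1])

mos_passive    = [4.1, 3.6, 2.8, 4.4, 3.2]   # hypothetical per-sequence MOS
mos_active     = [4.0, 3.4, 2.9, 4.5, 3.1]
mos_autostereo = [3.9, 3.5, 2.7, 4.3, 3.0]

print("passive vs active     :", pearson(mos_passive, mos_active))
print("passive vs autostereo :", pearson(mos_passive, mos_autostereo))
```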

Visual Semantic Based 3D Video Retrieval System Using HDFS

  • Ranjith Kumar, C.;Suguna, S.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.8 / pp.3806-3825 / 2016
  • This paper presents a new framework for visual-semantics-based 3D video search and retrieval. Existing 3D retrieval applications focus on shape analysis such as object matching, classification and retrieval rather than on video retrieval as a whole. In this context, we investigate 3D content-based video retrieval (3D-CBVR) for the first time, combining a bag-of-visual-words (BoVW) model with MapReduce in a 3D framework. Shape, color and texture are combined for feature extraction: geometric and topological features describe shape, and a 3D co-occurrence matrix describes color and texture. After the local descriptors are extracted, the Threshold-Based Predictive Clustering Tree (TB-PCT) algorithm is used to generate the visual codebook. Matching is performed using a soft weighting scheme with the L2 distance function, and the retrieved results are ranked according to their index values. To handle the very large amount of data and enable efficient retrieval, HDFS is incorporated into the system. Using a 3D video dataset, we evaluate the performance of the proposed system and show that it gives accurate results while also reducing time complexity.
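
A sketch of the bag-of-visual-words matching with soft weighting and L2 distances follows; plain k-means stands in for the TB-PCT codebook algorithm, the descriptors are random placeholders, and the HDFS/MapReduce layer is omitted.

```python
# Sketch: build a visual codebook from local descriptors and soft-assign
# descriptors to visual words via L2 distances.  K-means is only a
# stand-in for the paper's TB-PCT codebook construction.
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(descriptors, k=64, seed=0):
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(descriptors)

def soft_bovw_histogram(descriptors, codebook, sigma=1.0):
    centers = codebook.cluster_centers_
    # L2 distance from every descriptor to every visual word.
    d = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
    w = np.exp(-(d ** 2) / (2 * sigma ** 2))        # soft assignment weights
    w /= w.sum(axis=1, keepdims=True) + 1e-12
    hist = w.sum(axis=0)
    return hist / (np.linalg.norm(hist) + 1e-12)

# Toy usage with random descriptors standing in for two videos' features.
rng = np.random.default_rng(0)
codebook = build_codebook(rng.normal(size=(1000, 32)))
h1 = soft_bovw_histogram(rng.normal(size=(300, 32)), codebook)
h2 = soft_bovw_histogram(rng.normal(size=(280, 32)), codebook)
print("L2 distance between histograms:", np.linalg.norm(h1 - h2))
```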

Study of Capturing Real-Time 360 VR 3D Game Video for 360 VR E-Sports Broadcast (360 VR E-Sports 중계를 위한 실시간 360 VR 3D Stereo 게임 영상 획득에 관한 연구)

  • Kim, Hyun Wook;Lee, Jun Suk;Yang, Sung Hyun
    • Journal of Broadcast Engineering / v.23 no.6 / pp.876-885 / 2018
  • Although the e-sports broadcasting market based on VR (virtual reality) is growing these days, technology development for securing market competitiveness is still inadequate in Korea. Global companies such as SLIVER and Facebook have already developed, and are trying to commercialize, 360 VR broadcasting technology that can broadcast e-sports as 4K 30 FPS VR video. However, conventional 2D video is a poor fit for 360 VR in that it provides a less immersive experience, can cause dizziness, and has low resolution within the scene. In this paper, we propose and implement a virtual camera technology that captures the in-game space as 4K 3D 360 video at 60 FPS for e-sports VR broadcasting, and we verify the feasibility of obtaining stereo 360 video up to 4K/60 FPS through experiments with the virtual camera set up in sample games from a game engine and in commercial games.
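
The geometry behind a stereo 360 virtual camera can be sketched as below using an omnidirectional-stereo-style eye offset; this is a generic illustration under stated assumptions, not the paper's game-engine implementation, and capture, rendering and encoding at 4K/60 FPS are out of scope.

```python
# Sketch: map equirectangular pixels to viewing rays, then offset the ray
# origins sideways for the left/right eye (omnidirectional-stereo style).
import numpy as np

def equirect_rays(width, height):
    """Unit view directions for every pixel of a width x height panorama."""
    u = (np.arange(width) + 0.5) / width           # 0..1 across the panorama
    v = (np.arange(height) + 0.5) / height
    lon = u * 2.0 * np.pi - np.pi                  # -pi..pi  (longitude)
    lat = np.pi / 2.0 - v * np.pi                  # +pi/2..-pi/2 (latitude)
    lon, lat = np.meshgrid(lon, lat)
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)            # (H, W, 3)

def stereo_ray_origins(directions, ipd=0.064):
    """Offset ray origins by half the inter-pupillary distance,
    perpendicular to each viewing direction."""
    up = np.array([0.0, 1.0, 0.0])
    side = np.cross(up, directions)                # horizontal offset axis
    side /= np.linalg.norm(side, axis=-1, keepdims=True) + 1e-9
    return -0.5 * ipd * side, 0.5 * ipd * side     # left, right origins

# Toy usage on a small panorama grid (a 4K panorama would be 3840 x 1920).
rays = equirect_rays(1024, 512)
left_o, right_o = stereo_ray_origins(rays)
```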