• Title/Summary/Keyword: 2D Video

Search results: 910 (processing time: 0.03 seconds)

Design and Implementation of a Realistic Multi-View Scalable Video Coding Scheme (실감형 다시점 스케일러블 비디오 코딩 방법의 설계 및 구현)

  • Park, Min-Woo; Park, Gwang-Hoon
    • Journal of Broadcast Engineering, v.14 no.6, pp.703-720, 2009
  • This paper proposes a realistic multi-view scalable video coding scheme designed to serve users' interest in 3D content services and to suit future computing environments. Future video coding schemes should support realistic services that give users a sense of 3D presence through stereoscopic or multi-view video, and should also provide so-called one-source multi-use services that comprehensively support diverse transmission environments and terminals. Unlike most video coding methods, which support only two-dimensional display, the coding scheme proposed in this paper can support such realistic services. The scheme is designed and implemented by integrating the Multi-view Video Coding (MVC) and Scalable Video Coding (SVC) schemes, and simulations show the feasibility of 3D services. The simulation results show that the proposed structure remarkably improves random-access performance with almost the same coding efficiency.

3D-Based Monitoring System and Cloud Computing for Panoramic Video Service (3차원 기반의 모니터링 시스템과 클라우드 컴퓨팅을 이용한 파노라믹 비디오 서비스)

  • Cho, Yongwoo; Seok, Joo Myoung; Suh, Doug Young
    • The Journal of Korean Institute of Communications and Information Sciences, v.39B no.9, pp.590-597, 2014
  • This paper proposes a multi-camera monitoring system that relies on 3D views for panoramic video, and a method for distributing the panoramic video generation algorithm using cloud computing. The proposed monitoring system monitors the projected 3D model view, instead of individual 2D views, to detect image distortions. This minimizes compensation errors caused by parallax, thereby improving the quality of the resulting panoramic video. The panoramic video generation algorithm can be divided into a registration part and a compositing part, so we propose a method for offloading these parts to cloud computing for panoramic video service.
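
The split the abstract describes, a registration stage that estimates how camera views align and a compositing stage that warps and blends them, is the part that can be distributed to cloud workers. The sketch below illustrates that split with OpenCV; the ORB features, the RANSAC homography, the naive overwrite blend, and the function names are illustrative assumptions rather than the authors' implementation.

```python
# Hypothetical sketch: splitting panorama generation into a "registration"
# stage and a "compositing" stage so that either stage could be offloaded
# to a cloud worker.  Assumes 3-channel BGR frames of identical size.
import cv2
import numpy as np

def register(reference, target):
    """Registration stage: estimate a homography mapping target onto reference."""
    orb = cv2.ORB_create()
    k1, d1 = orb.detectAndCompute(reference, None)
    k2, d2 = orb.detectAndCompute(target, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d2, d1)
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H  # small payload: only H needs to travel between the two stages

def composite(reference, target, H, canvas_size):
    """Compositing stage: warp the target and blend it onto a shared canvas."""
    canvas = cv2.warpPerspective(reference, np.eye(3), canvas_size)
    warped = cv2.warpPerspective(target, H, canvas_size)
    mask = warped.sum(axis=2) > 0
    canvas[mask] = warped[mask]   # naive overwrite blend, for illustration only
    return canvas
```

Because registration outputs only a small homography per camera while compositing touches every pixel, the two stages place very different loads on a cloud worker, which is what makes the split worth offloading separately.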

Traffic-Oriented Stream Scheduling for 5G-based D2D Streaming Services

  • Lee, Chong-Deuk
    • Journal of the Korea Society of Computer and Information, v.27 no.10, pp.95-103, 2022
  • As 5G mobile communication services gradually expand into P2P (peer-to-peer) and D2D (device-to-device) applications, traffic-oriented stream control for services such as YouTube streaming is emerging as an important technology. In D2D communication, the type of data stream most frequently transmitted by users is the video stream, which is characterized by a large-capacity transport stream. In a D2D communication environment, this type of stream not only causes traffic congestion but also degrades the quality of service between D2D User Equipments (DUEs). In this paper, we propose a Traffic-Oriented Stream Scheduling (TOSS) scheme to minimize the interruption of dynamic media streams such as video streams and to optimize streaming service quality. The proposed scheme schedules the media stream by analyzing the characteristics of the media stream and the traffic type in the 3.5 GHz and 28 GHz bands under a 5G gNB environment. We examine the performance of the proposed scheme through simulation, and the results show that it outperforms the comparison methods.

A Fast Kernel Regression Framework for Video Super-Resolution

  • Yu, Wen-Sen; Wang, Ming-Hui; Chang, Hua-Wen; Chen, Shu-Qing
    • KSII Transactions on Internet and Information Systems (TIIS), v.8 no.1, pp.232-248, 2014
  • A series of kernel regression (KR) algorithms, such as the classic kernel regression (CKR) and the 2-D and 3-D steering kernel regression (SKR), have been proposed for image and video super-resolution. In existing KR frameworks, a single algorithm is usually adopted and applied to a whole image or video, regardless of region characteristics. However, the performance and computational efficiency of these algorithms differ across regions with different characteristics. To take full advantage of the KR algorithms and avoid their disadvantages, this paper proposes a kernel regression framework for video super-resolution. In this framework, each video frame is first analyzed and divided into three types of regions: flat, non-flat-stationary, and non-flat-moving. A different KR algorithm is then selected according to the region type: the CKR and 2-D SKR algorithms are applied to flat and non-flat-stationary regions, respectively. For non-flat-moving regions, this paper proposes a similarity-assisted steering kernel regression (SASKR) algorithm, which gives better performance and higher computational efficiency than the 3-D SKR algorithm. Experimental results demonstrate that the computational efficiency of the proposed framework is greatly improved without apparent degradation in performance.
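
The core of the framework is a per-region dispatch: classify each block of a frame and hand it to the cheapest KR variant that suits it. The sketch below shows only that dispatch logic; the gradient and motion thresholds are arbitrary placeholders, and the ckr/skr_2d/saskr callables stand in for the actual kernel-regression upscalers described in the paper.

```python
# Minimal sketch of the region-adaptive dispatch idea, assuming grayscale
# frames whose dimensions are multiples of the block size.
import numpy as np

def classify_block(block, prev_block, grad_thr=8.0, motion_thr=4.0):
    """Label a block as flat, non-flat-stationary, or non-flat-moving."""
    gy, gx = np.gradient(block.astype(np.float64))
    if np.hypot(gx, gy).mean() < grad_thr:
        return "flat"
    if np.abs(block.astype(np.float64) - prev_block).mean() < motion_thr:
        return "non_flat_stationary"
    return "non_flat_moving"

def upscale_frame(frame, prev_frame, block=16,
                  ckr=lambda b: b, skr_2d=lambda b: b, saskr=lambda b: b):
    """Apply CKR to flat blocks, 2-D SKR to non-flat-stationary blocks and
    SASKR to non-flat-moving blocks (identity placeholders stand in here)."""
    out_blocks = []
    for y in range(0, frame.shape[0], block):
        for x in range(0, frame.shape[1], block):
            cur = frame[y:y+block, x:x+block]
            prev = prev_frame[y:y+block, x:x+block]
            upscaler = {"flat": ckr,
                        "non_flat_stationary": skr_2d,
                        "non_flat_moving": saskr}[classify_block(cur, prev)]
            out_blocks.append(((y, x), upscaler(cur)))
    return out_blocks
```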

3D video coding for e-AG using spatio-temporal scalability (e-AG를 위한 시공간적 계위를 이용한 3차원 비디오 압축)

  • 오세찬; 이영호; 우운택
    • Proceedings of the IEEK Conference, 2003.11a, pp.199-202, 2003
  • In this paper, we propose a new 3D coding method for heterogeneous systems with 3D displays over the enhanced Access Grid (e-AG), using spatio-temporal scalability. The proposed encoder produces four bit-streams: one base layer and enhancement layers 1, 2, and 3. The base layer represents the video sequence for the left eye at lower spatial resolution. Enhancement layer 1 provides the additional bit-stream needed to reproduce the base-layer frames at full resolution. Similarly, enhancement layer 2 represents the video sequence for the right eye at lower spatial resolution, and enhancement layer 3 provides the additional bit-stream needed to reproduce its reference pictures at full resolution. In this system, temporal resolution reduction is obtained by dropping B-frames at the receiver according to the network condition. The receiver can select the spatial and temporal resolution of the video sequence to match its display conditions by properly combining the bit-streams.

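A receiver in the described system picks a subset of the four bit-streams to match its display and network condition. The sketch below illustrates one plausible selection rule; the layer names and per-layer bitrates are hypothetical, as the abstract does not give them.

```python
# Illustrative sketch only: choosing which of the four bit-streams to decode
# given the display type and available bandwidth.  Bitrates are made up.
def select_layers(stereo_display, bandwidth_kbps, base_kbps=400, enh_kbps=300):
    layers = ["base"]                     # left view at low spatial resolution
    budget = bandwidth_kbps - base_kbps
    if budget >= enh_kbps:                # left view at full resolution
        layers.append("enh1")
        budget -= enh_kbps
    if stereo_display and budget >= enh_kbps:
        layers.append("enh2")             # right view at low resolution
        budget -= enh_kbps
    if stereo_display and "enh1" in layers and budget >= enh_kbps:
        layers.append("enh3")             # right view at full resolution
        budget -= enh_kbps
    # Temporal scalability: if even the base layer does not fit, drop
    # B-frames at the receiver to reduce frame rate and bitrate.
    drop_b_frames = bandwidth_kbps < base_kbps
    return layers, drop_b_frames
```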

Digital Hologram Coding Technique using Block Matching of Localized Region and MCTF (로컬영역의 정합기법 및 MCTF를 이용한 디지털 홀로그램 부호화 기술)

  • Seo, Young-Ho; Choi, Hyun-Jun; Kim, Dong-Wook
    • Proceedings of the IEEK Conference, 2006.06a, pp.415-416, 2006
  • In this paper, we propose a new coding technique for digital hologram video using a 3D scanning method and video compression techniques. The proposed coding consists of capturing a digital hologram and separating it into RGB color-space components, localization by segmenting the fringe pattern, frequency transformation using an $M \times N$ (segment-size) 2D DCT (2-Dimensional Discrete Cosine Transform) to extract redundancy, 3D scanning of the segments to form a video sequence, motion-compensated temporal filtering (MCTF), and a modified video coding scheme based on H.264/AVC.

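The front half of the described pipeline, segmenting the fringe pattern and transforming each segment with a 2D DCT before scanning the segments into a pseudo video sequence, can be sketched compactly. The segment size and the raster scan order below are placeholder choices, not the authors' exact parameters, and the MCTF and H.264/AVC stages are omitted.

```python
# Rough sketch: split one colour component of a hologram fringe pattern into
# segments, 2-D DCT each segment, and order the segments into a sequence of
# "frames" that a conventional video codec could then compress.
import numpy as np
from scipy.fft import dctn

def segment_and_transform(fringe, seg=64):
    """Return 2-D DCT'd segments of a 2-D fringe array in raster-scan order."""
    frames = []
    for y in range(0, fringe.shape[0], seg):
        for x in range(0, fringe.shape[1], seg):
            block = fringe[y:y+seg, x:x+seg].astype(np.float64)
            frames.append(dctn(block, norm="ortho"))
    return frames  # treated downstream as successive frames of a video
```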

Robust Digital Watermarking for High-definition Video using Steerable Pyramid Transform, Two Dimensional Fast Fourier Transform and Ensemble Position-based Error Correcting

  • Jin, Xun; Kim, JongWeon
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.7, pp.3438-3454, 2018
  • In this paper, we propose a robust blind watermarking scheme for high-definition video. In the embedding process, the luminance component of each frame is transformed by the 2-dimensional fast Fourier transform (2D FFT). A secret key is used to generate a matrix of random numbers for the security of the watermark information. The matrix is transformed by the inverse steerable pyramid transform (SPT). We embed the watermark into the low- and mid-frequency 2D FFT coefficients using the transformed matrix. In the extraction process, the 2D FFT coefficients of each frame and the transformed matrix are each transformed by the SPT to produce two oriented sub-bands. We extract the watermark from each frame by cross-correlating the two oriented sub-bands. If a video is degraded by attacks, the watermarks of the frames contain errors, so we use an ensemble position-based error-correcting algorithm to estimate and correct them. The experimental results show that the proposed watermarking algorithm is imperceptible and robust against various attacks. After embedding 64 bits of watermark into each frame, the average peak signal-to-noise ratio between the original and embedded frames is 45.7 dB.
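
A heavily simplified sketch of the embed/detect idea follows: a key-seeded random pattern is added to low- and mid-frequency 2D FFT coefficients of the luminance channel, and detection correlates the received spectrum with the regenerated pattern. The steerable pyramid transform and the ensemble position-based error correction from the paper are deliberately omitted, and the band limits and embedding strength are assumptions.

```python
# Simplified spread-spectrum-style embedding in 2-D FFT coefficients.
import numpy as np

def _band_mask(shape, lo=0.05, hi=0.35):
    """Select a low/mid-frequency annulus in the centred spectrum."""
    h, w = shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    return (r > lo) & (r < hi)

def embed(luma, key, strength=2.0):
    """Add a key-derived pattern to selected FFT coefficients of a luma frame."""
    rng = np.random.default_rng(key)
    pattern = rng.standard_normal(luma.shape)
    spec = np.fft.fftshift(np.fft.fft2(luma.astype(np.float64)))
    mask = _band_mask(luma.shape)
    spec[mask] += strength * pattern[mask]
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec)))

def detect(luma, key):
    """Correlate the frame's spectrum with the regenerated key pattern."""
    rng = np.random.default_rng(key)
    pattern = rng.standard_normal(luma.shape)
    spec = np.fft.fftshift(np.fft.fft2(luma.astype(np.float64)))
    mask = _band_mask(luma.shape)
    return np.corrcoef(np.real(spec[mask]), pattern[mask])[0, 1]
```

Thresholding the correlation returned by detect gives a presence decision; carrying actual payload bits, as the paper does, requires one pattern (or sign) per bit.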

A New Objective Video Quality Metric for Stereoscopic Video

  • Zheng, Yan; Seo, Jungdong; Sohn, Kwanghoon
    • Annual Conference of KIPS, 2012.04a, pp.355-358, 2012
  • Although quality metrics for 2D video quality assessment have been proposed, quality models for stereoscopic video have not been widely studied. In this paper, a new objective video quality metric for stereoscopic video is proposed. The proposed algorithm considers three factors to evaluate stereoscopic video quality: blocking artifacts, blurring artifacts, and the difference between the left and right views of the stereoscopic video. The results show that the proposed algorithm has a higher correlation with DMOS than the other metrics.
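
As a purely illustrative example of combining the three factors the abstract names, the toy metric below mixes a blockiness measure, a blurriness measure, and the mean left-right difference with fixed weights; none of the feature definitions or weights come from the paper.

```python
# Toy combination of blocking, blurring and left/right difference features.
import numpy as np

def blockiness(y, block=8):
    """Mean luminance discontinuity across 8x8 block boundaries."""
    v = np.abs(np.diff(y.astype(np.float64), axis=1))[:, block-1::block].mean()
    h = np.abs(np.diff(y.astype(np.float64), axis=0))[block-1::block, :].mean()
    return (v + h) / 2

def blurriness(y):
    """Inverse of average gradient magnitude (higher means blurrier)."""
    gy, gx = np.gradient(y.astype(np.float64))
    return 1.0 / (1e-6 + np.hypot(gx, gy).mean())

def stereo_quality(left, right, w=(0.4, 0.4, 0.2)):
    """Lower score means better quality in this toy formulation."""
    lr_diff = np.abs(left.astype(np.float64) - right.astype(np.float64)).mean()
    return w[0] * blockiness(left) + w[1] * blurriness(left) + w[2] * lr_diff
```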

Medium to Long Range Wireless Video Transmission Scheme in 2.4GHz Band with Beamforming (빔 형성을 적용한 2.4GHz 대역 중장거리 영상 전송 무선 기술)

  • Paik, Junghoon; Kim, Namho; Jee, Minki
    • Journal of Broadcast Engineering, v.23 no.5, pp.693-700, 2018
  • In this paper, we propose a wireless video transmission scheme that provides medium- and long-range communication in the 2.4 GHz band with beamforming. With this scheme, a transmission rate of 32 Mbps and a received signal power of -77 dBm are achieved over a distance of 3.6 km using four 5 dBi antennas and 16 dBm transmit power at each antenna connection. The scheme also provides a transmission distance of 20 km at 10~12 Mbps with the four omni-directional 5 dBi antennas.
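
A rough free-space link-budget check makes the reported figures plausible. The receive antenna gain and the roughly 6 dB array gain assumed for four-antenna beamforming below are assumptions, not values from the paper.

```python
# Free-space link-budget sanity check of the reported 3.6 km figure.
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

tx_power_dbm = 16.0        # per antenna connection (from the abstract)
tx_antenna_dbi = 5.0       # per antenna (from the abstract)
rx_antenna_dbi = 5.0       # assumed symmetric antenna at the receiver
beamforming_gain_db = 6.0  # assumed array gain for 4 coherently combined antennas

rx_dbm = (tx_power_dbm + tx_antenna_dbi + rx_antenna_dbi
          + beamforming_gain_db - fspl_db(3.6, 2400))
print(f"estimated Rx power at 3.6 km: {rx_dbm:.1f} dBm")
# prints roughly -79 dBm, close to the reported -77 dBm under these assumptions
```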

Multi-View Video Coding Using Illumination Change-Adaptive Motion Estimation and 2D Direct Mode (조명변화에 적응적인 움직임 검색 기법과 2차원 다이렉트 모드를 사용한 다시점 비디오 부호화)

  • Lee, Yung Ki; Hur, Jae Ho; Lee, Yung Lyul
    • Journal of Broadcast Engineering, v.10 no.3, pp.321-327, 2005
  • An MVC (Multi-view Video Coding) method is proposed that uses both illumination change-adaptive ME (Motion Estimation)/MC (Motion Compensation) and a 2D (two-dimensional) direct mode. Firstly, a new SAD (Sum of Absolute Differences) measure for ME/MC is proposed to compensate for luma pixel value changes in spatio-temporal motion vector prediction. Illumination change-adaptive (ICA) ME/MC uses the new SAD to improve both MV (Motion Vector) accuracy and bit saving. Secondly, the proposed 2D direct mode, which can be used in inter-view prediction, is an extended version of the temporal direct mode in MPEG-4 AVC. The proposed MVC method achieves approximately a 0.8 dB PSNR (Peak Signal-to-Noise Ratio) gain compared with MPEG-4 AVC simulcast coding.
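
The usual way to make SAD robust to illumination changes is to remove the mean luminance offset between the current and reference blocks before summing absolute differences (mean-removed SAD). The sketch below shows that general idea; the paper's exact measure may differ.

```python
# Mean-removed SAD: compare blocks after compensating a global luma offset.
import numpy as np

def mr_sad(cur_block, ref_block):
    cur = cur_block.astype(np.float64)
    ref = ref_block.astype(np.float64)
    dc_offset = cur.mean() - ref.mean()           # global illumination change
    return np.abs(cur - (ref + dc_offset)).sum()  # SAD after compensation
```

In a block-matching loop this measure simply replaces the plain SAD when ranking candidate motion vectors, so blocks that differ only by a brightness shift are no longer penalized.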