• Title/Summary/Keyword: Video Compression


2D Interpolation of 3D Points using Video-based Point Cloud Compression (비디오 기반 포인트 클라우드 압축을 사용한 3차원 포인트의 2차원 보간 방안)

  • Hwang, Yonghae;Kim, Junsik;Kim, Kyuheon
    • Journal of Broadcast Engineering
    • /
    • v.26 no.6
    • /
    • pp.692-703
    • /
    • 2021
  • Recently, with the development of computer graphics technology, research on expressing real objects as more realistic virtual graphics has been actively conducted. A point cloud represents a 3D object with numerous points, each carrying 3D spatial coordinates and color information, and providing various point cloud services requires huge data storage and high-performance computing devices. Video-based Point Cloud Compression (V-PCC), currently being standardized by the international standards organization MPEG, is a projection-based method that projects a point cloud onto 2D planes and then compresses them using 2D video codecs. V-PCC represents a point cloud object with 2D images such as the occupancy map, geometry image, and attribute image, together with auxiliary information describing the relationship between the 2D planes and 3D space. When increasing the density of a point cloud or enlarging an object, 3D computation is generally used, but it is complicated, time-consuming, and makes it difficult to determine the correct locations of new points. This paper proposes a method that generates additional points at more accurate locations with less computation by applying 2D interpolation to the images onto which the point cloud is projected in V-PCC.
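The core idea of the abstract can be sketched as follows: since V-PCC stores depth values in a 2D geometry image, new points can be generated by 2D image interpolation instead of 3D computation. This is only a minimal illustration (the function name and the plain bilinear kernel are assumptions, not the paper's exact scheme):

```python
# Illustrative sketch: upsample a small 2D geometry (depth) image by
# bilinear interpolation; in V-PCC each new pixel could then be
# re-projected to 3D, yielding a denser point cloud without 3D math.

def bilinear_upsample(depth, factor=2):
    """Upsample a 2D depth image (list of rows) by bilinear interpolation."""
    h, w = len(depth), len(depth[0])
    out_h, out_w = h * factor, w * factor
    out = [[0.0] * out_w for _ in range(out_h)]
    for y in range(out_h):
        for x in range(out_w):
            # Map the output pixel back into source coordinates.
            sy = min(y / factor, h - 1)
            sx = min(x / factor, w - 1)
            y0, x0 = int(sy), int(sx)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = sy - y0, sx - x0
            top = depth[y0][x0] * (1 - fx) + depth[y0][x1] * fx
            bot = depth[y1][x0] * (1 - fx) + depth[y1][x1] * fx
            out[y][x] = top * (1 - fy) + bot * fy
    return out

dense = bilinear_upsample([[10.0, 20.0], [30.0, 40.0]])
```

In a real V-PCC pipeline the occupancy map would additionally mask which interpolated pixels correspond to valid points.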

A Symmetric Motion Estimation Method by using the Properties of the Distribution of Motion Vectors (움직임 벡터 분포 특성과 블록 움직임의 특성을 이용한 대칭형 움직임 추정 기법)

  • Yoon, Hyo-Sun;Kim, Mi-Young
    • The Journal of the Korea Contents Association
    • /
    • v.17 no.3
    • /
    • pp.329-336
    • /
    • 2017
  • In video compression, motion estimation (ME) largely determines image quality and generated bit rate, but it accounts for much of the encoder's complexity. Multi-view video uses many cameras at different positions, so multi-view video coding requires huge computational complexity in proportion to the number of cameras. To reduce computational complexity while maintaining image quality, an effective motion estimation method is proposed in this paper. The proposed method exploits the characteristics of the motion vector distribution and of block motion, and follows a hierarchical search strategy consisting of a multi-grid rhombus pattern, a diagonal pattern, a rectangle pattern, and a refinement pattern. Experimental results show that, compared with the TZ search method and PBS (Pel Block Search) on JMVC (Joint Multiview Video Coding), the proposed method reduces complexity by up to 40~75% and 98% respectively while maintaining similar video quality and generated bit rates.
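The general mechanism behind such pattern-based search can be sketched in a few lines. This is a generic rhombus (diamond) search with a coarse-to-fine step, not the paper's exact multi-pattern method; function names and the 255-per-pixel border penalty are assumptions:

```python
# Minimal block-matching sketch: repeatedly test a rhombus of candidate
# offsets around the current best match, and halve the pattern size for
# refinement once no candidate improves the SAD cost.

def sad(cur, ref, bx, by, dx, dy, n):
    """Sum of absolute differences for an n x n block at (bx, by), offset (dx, dy)."""
    total = 0
    for y in range(n):
        for x in range(n):
            ry, rx = by + y + dy, bx + x + dx
            if 0 <= ry < len(ref) and 0 <= rx < len(ref[0]):
                total += abs(cur[by + y][bx + x] - ref[ry][rx])
            else:
                total += 255  # heavy penalty for out-of-frame candidates
    return total

def rhombus_search(cur, ref, bx, by, n=2, step=2):
    """Return the motion vector (dx, dy) found by a coarse-to-fine rhombus search."""
    best_dx = best_dy = 0
    best = sad(cur, ref, bx, by, 0, 0, n)
    while step >= 1:
        moved = True
        while moved:
            moved = False
            for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
                cand = sad(cur, ref, bx, by, best_dx + dx, best_dy + dy, n)
                if cand < best:
                    best, best_dx, best_dy = cand, best_dx + dx, best_dy + dy
                    moved = True
        step //= 2  # coarse-to-fine refinement
    return best_dx, best_dy
```

Testing only a handful of candidates per round, instead of every offset in a window, is what yields the large complexity reductions such methods report.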

Adaptive Multi-view Video Service Framework for Mobile Environments (이동 환경을 위한 적응형 다시점 비디오 서비스 프레임워크)

  • Kwon, Jun-Sup;Kim, Man-Bae;Choi, Chang-Yeol
    • Journal of Broadcast Engineering
    • /
    • v.13 no.5
    • /
    • pp.586-595
    • /
    • 2008
  • In this paper, we propose an adaptive multi-view video service framework suitable for mobile environments. The proposed framework generates intermediate views in near-realtime and overcomes the limitations of mobile services by adapting the multi-view video to the processing capability of a mobile device as well as to the user characteristics of a client. By implementing most of the adaptation processes at the server side, the load on a client can be reduced. H.264/AVC is adopted as the compression scheme. The framework can provide an interactive, efficient video service to a mobile client. For this, we present a multi-view video DIA (Digital Item Adaptation) that adapts the multi-view video according to the MPEG-21 DIA multimedia framework. Experimental results show that the proposed system can support a frame rate of 13 fps for 320×240 video and reduces the time for generating an intermediate view by 20% compared with a conventional 3D projection method.

A Method of Frame Synchronization for Stereoscopic 3D Video (스테레오스코픽 3D 동영상을 위한 동기화 방법)

  • Park, Youngsoo;Kim, Dohoon;Hur, Namho
    • Journal of Broadcast Engineering
    • /
    • v.18 no.6
    • /
    • pp.850-858
    • /
    • 2013
  • In this paper, we propose a frame synchronization method for stereoscopic 3D video that solves the viewing problem caused by synchronization errors between the left and right videos, using temporal frame difference images that depend on the movement of objects. First, we compute the temporal frame difference images of the left and right videos, in which the vertical parallax between the two videos has been corrected by rectification, and calculate the horizontal projection profile of each difference image. Then, we find a pair of synchronized frames of the two videos by measuring the mean of absolute differences (MAD) between the two horizontal projection profiles. Experimental results show that the proposed method is applicable to stereoscopic 3D video and is robust against Gaussian noise and H.264/AVC video compression.
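The pipeline described above (temporal difference, then row-wise projection profile, then MAD over candidate offsets) can be sketched directly. Frame representation and the offset search range here are illustrative assumptions:

```python
# Sketch of profile-based frame synchronization: the offset that makes
# the left and right temporal-difference profiles most similar (lowest
# MAD) is taken as the synchronization error between the two videos.

def temporal_diff_profile(frame_a, frame_b):
    """Horizontal projection profile: row-wise sums of |frame_a - frame_b|."""
    return [sum(abs(a - b) for a, b in zip(row_a, row_b))
            for row_a, row_b in zip(frame_a, frame_b)]

def mad(p, q):
    """Mean of absolute differences between two profiles."""
    return sum(abs(a - b) for a, b in zip(p, q)) / len(p)

def find_offset(left, right, max_offset=2):
    """Return the right-video frame offset minimizing the average profile MAD."""
    best_off, best_mad = 0, float("inf")
    for off in range(max_offset + 1):
        total, count = 0.0, 0
        for t in range(len(left) - 1 - off):
            pl = temporal_diff_profile(left[t], left[t + 1])
            pr = temporal_diff_profile(right[t + off], right[t + off + 1])
            total += mad(pl, pr)
            count += 1
        if count and total / count < best_mad:
            best_mad, best_off = total / count, off
    return best_off
```

Working on 1D profiles of difference images, rather than on the frames themselves, is what makes the comparison cheap and tolerant of the left/right parallax that remains after rectification.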

An Efficient Video Watermarking Using Re-Estimation and Minimum Modification Technique of Motion Vectors (재예측과 움직임벡터의 변경 최소화 기법을 이용한 효율적인 비디오 워터마킹)

  • Kang Kyung-won;Moon Kwang-seok;Kim Jong-nam
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.6C
    • /
    • pp.497-504
    • /
    • 2005
  • We propose an efficient video watermarking scheme using re-estimation and minimal modification of motion vectors. Conventional motion-vector-based methods embed the watermark by modifying motion vectors, but changing motion vectors degrades video quality. Our scheme therefore minimizes the modification of the original motion vectors and, through re-estimation, replaces an original motion vector with the optimal adjacent motion vector to avoid quality degradation. Besides, our scheme guarantees the amount of embedded watermark data by using an adaptive threshold. In addition, it is compatible with current video compression standards without changing the bitstream. Experimental results show that the proposed scheme achieves better video quality than previous algorithms by about 0.6~1.3 dB.
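The general family of techniques the abstract belongs to can be illustrated with a toy parity scheme: embed each bit in a motion-vector component, change the vector by at most one, and only use vectors above a threshold. This is a generic sketch, not the authors' re-estimation method, and all names are assumptions:

```python
# Toy motion-vector watermarking: hide each bit in the parity (LSB) of
# x-components whose magnitude is >= threshold, moving the vector away
# from zero by 1 when the parity must flip (so it stays embeddable).

def embed(mvs, bits, threshold=2):
    """Embed bits into the x-components of (x, y) motion vectors."""
    out, i = [], 0
    for x, y in mvs:
        if i < len(bits) and abs(x) >= threshold:
            if (x & 1) != bits[i]:
                x += 1 if x > 0 else -1  # minimal change of one unit
            i += 1
        out.append((x, y))
    return out

def extract(mvs, n, threshold=2):
    """Read n bits back from the parity of qualifying x-components."""
    bits = []
    for x, _ in mvs:
        if len(bits) < n and abs(x) >= threshold:
            bits.append(x & 1)
    return bits
```

Because only the vector values change, the result stays a legal bitstream for the codec; the quality cost is the slightly suboptimal vectors, which the paper's re-estimation step is designed to minimize.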

Fast Video Detection Using Temporal Similarity Extraction of Successive Spatial Features (연속하는 공간적 특징의 시간적 유사성 검출을 이용한 고속 동영상 검색)

  • Cho, A-Young;Yang, Won-Keun;Cho, Ju-Hee;Lim, Ye-Eun;Jeong, Dong-Seok
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.35 no.11C
    • /
    • pp.929-939
    • /
    • 2010
  • The growth of multimedia technology demands video detection techniques for large-database management and illegal copy detection. To meet this demand, this paper proposes a fast video detection method applicable to a large database. The algorithm uses spatial features based on the gray-value distribution of frames and temporal features based on a temporal similarity map. We form the video signature from the extracted spatial and temporal features and carry out a stepwise matching method. Performance was evaluated by accuracy, extraction and matching time, and signature size, using original videos and modified versions such as brightness change, lossy compression, and text/logo overlay. We show empirical parameter selection and results for a simple matching method using only the spatial feature, and compare the results with existing algorithms. According to the experimental results, the proposed method performs well in accuracy, processing time, and signature size, and is therefore suitable for video detection over a large database.
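A signature-plus-stepwise-matching pipeline of this kind can be sketched as below. The 2x2 block grid, the subsampling step, and the coarse threshold are illustrative assumptions, not the paper's parameters:

```python
# Sketch: (1) spatial feature = mean gray value of a 2x2 block grid per
# frame, (2) a coarse pass compares only subsampled frames to prune the
# database, (3) a fine pass ranks the few surviving candidates.

def spatial_feature(frame):
    """Four block means over a 2x2 grid of an even-sized gray frame."""
    h, w = len(frame), len(frame[0])
    feat = []
    for by in (0, h // 2):
        for bx in (0, w // 2):
            block = [frame[y][x] for y in range(by, by + h // 2)
                                 for x in range(bx, bx + w // 2)]
            feat.append(sum(block) / len(block))
    return feat

def signature(video):
    return [spatial_feature(f) for f in video]

def distance(sig_a, sig_b):
    return sum(abs(a - b) for fa, fb in zip(sig_a, sig_b)
                          for a, b in zip(fa, fb))

def stepwise_match(query_sig, db_sigs, coarse_step=2, coarse_thresh=50.0):
    """Coarse pass on every coarse_step-th frame, fine pass on survivors."""
    candidates = [i for i, sig in enumerate(db_sigs)
                  if distance(query_sig[::coarse_step],
                              sig[::coarse_step]) <= coarse_thresh]
    return min(candidates, key=lambda i: distance(query_sig, db_sigs[i]))
```

The coarse pass is what makes the method fast at database scale: the expensive full-signature distance is only computed for the handful of sequences that pass the cheap filter.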

A Study on the Evaluation of MPEG-4 Video Decoding Complexity for HDTV (HDTV를 위한 MPEG-4 비디오 디코딩 복잡도의 평가에 관한 연구)

  • Ahn, Seong-Yeol;Park, Won-Woo
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • v.9 no.2
    • /
    • pp.595-598
    • /
    • 2005
  • MPEG-4 Visual is an international standard for object-based video compression, designed to support a wide range of applications from multimedia communication to HDTV. To bound the minimum decoding complexity required at the decoder, the MPEG-4 Visual standard defines the so-called video buffering mechanism, which includes three video buffer models. Among them, the VCV (Video Complexity Verifier) governs the processing speed for decoding macroblocks; there are two variants, VCV and B-VCV, which distinguish boundary from non-boundary MBs. This paper presents decoding-complexity evaluation results obtained by measuring the MB decoding time for rectangular and arbitrarily shaped video objects, and for various object types at HDTV resolution, using the optimized MPEG-4 Reference Software. The experimental results show that decoding complexity varies with the coding type, and that more effective use of decoding resources may be possible.


A study on performance evaluation of DVCs with different coding method and feasibility of spatial scalable DVC (분산 동영상 코딩의 코딩 방식에 따른 성능 평가와 공간 계층화 코더로서의 가능성에 대한 연구)

  • Kim, Dae-Yeon;Park, Gwang-Hoon;Kim, Kyu-Heon;Suh, Doug-Young
    • Journal of Broadcast Engineering
    • /
    • v.12 no.6
    • /
    • pp.585-595
    • /
    • 2007
  • Distributed video coding (DVC) is a new video coding paradigm based on the Slepian-Wolf and Wyner-Ziv information theorems. Because the decoder exploits side information, distributed video coding transfers the computational burden from encoder to decoder, so that encoding with light computational power can be realized. Its rate-distortion (RD) performance is superior to that of standard video coding without motion compensation, but still has a gap with that of coding with motion compensation. This paper introduces the basic theory and structure of distributed video coding, and then presents the RD performance of DVCs with different coding styles and of a DVC used as a spatially scalable video coder.

Multiresolution Wavelet-Based Disparity Estimation for Stereo Image Compression

  • Tengcharoen, Chompoonuch;Varakulsiripunth, Ruttikorn
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2004.08a
    • /
    • pp.1098-1101
    • /
    • 2004
  • An ordinary stereo image of an object consists of left-view and right-view data, so the left and right image pair has to be transmitted simultaneously to display 3-dimensional video at the remote site. However, since this is twice the data of a monoscopic image of the same object, it must be compressed for fast transmission and resource saving, which calls for an effective stereo image coding algorithm. It was found previously that compressing the left and right frames independently achieves a lower compression ratio than exploiting the redundancy between the two frames. Therefore, in this paper we study a stereo image compression technique based on the multiresolution wavelet transform, using a varied disparity-block size for estimation and compensation. The disparity-block sizes in the stereo-pair subbands are scaled with a coarse-to-fine strategy over the wavelet coefficients. Finally, the reference left image and the residual right image remaining after disparity estimation and compensation are coded using SPIHT coding. The method demonstrates good performance in both PSNR and visual quality for stereo images.
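The benefit of estimating disparity in a wavelet band can be shown with a one-level Haar decomposition: the low-pass band is half the width, so the disparity search space shrinks accordingly. This is a simplified stand-in for the paper's multiresolution scheme; the per-image (rather than per-block) disparity is an assumption made for brevity:

```python
# Sketch: one-level Haar transform along rows, then an SAD disparity
# search on the low-pass (coarse) band, where both the data and the
# candidate range are halved relative to full resolution.

def haar_rows(img):
    """One-level Haar transform along rows: (low-pass, high-pass) halves."""
    low = [[(r[2 * i] + r[2 * i + 1]) / 2 for i in range(len(r) // 2)] for r in img]
    high = [[(r[2 * i] - r[2 * i + 1]) / 2 for i in range(len(r) // 2)] for r in img]
    return low, high

def disparity(left_low, right_low, max_d=3):
    """Horizontal disparity of the right band w.r.t. the left band (min SAD)."""
    best_d, best = 0, float("inf")
    for d in range(max_d + 1):
        cost = sum(abs(l - r)
                   for lrow, rrow in zip(left_low, right_low)
                   for l, r in zip(lrow[d:], rrow))
        if cost < best:
            best, best_d = cost, d
    return best_d
```

In a coarse-to-fine scheme, the disparity found here would seed a small refinement search at the next finer level instead of a full-range search.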


ROI Image Compression Method Using Eye Tracker for a Soldier (병사의 시선감지를 이용한 ROI 영상압축 방법)

  • Chang, HyeMin;Baek, JooHyun;Yang, DongWon;Choi, JoonSung
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.23 no.3
    • /
    • pp.257-266
    • /
    • 2020
  • Sharing tactical information such as video, images, and text messages among soldiers is very important for situational awareness. In the wireless environment of the battlefield, the available bandwidth varies dynamically and is insufficient to transmit high-quality images, so it is necessary to minimize distortion in areas of interest such as targets. A natural operating method for soldiers is also required, considering the difficulty of handling equipment while moving. In this paper, we propose a natural ROI (region of interest) setting and image compression method for effective image sharing among soldiers. We verify the proposed method through the design and implementation of a prototype system for eye-gaze detection and ROI-based image compression.
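The general principle of ROI image coding can be sketched in a few lines: pixels inside the gaze-derived region keep full precision, while the background is quantized coarsely to save bits. This is the generic technique, not the paper's codec; the rectangular ROI and step size are assumptions:

```python
# Sketch of ROI-weighted quantization: full quality inside the region
# of interest, coarse uniform quantization (larger step) elsewhere.

def roi_quantize(img, roi, bg_step=32):
    """roi = (x0, y0, x1, y1), half-open; background quantized with bg_step."""
    x0, y0, x1, y1 = roi
    out = []
    for y, row in enumerate(img):
        new_row = []
        for x, v in enumerate(row):
            if x0 <= x < x1 and y0 <= y < y1:
                new_row.append(v)                         # ROI: keep exact value
            else:
                new_row.append((v // bg_step) * bg_step)  # background: coarse
        out.append(new_row)
    return out
```

In the paper's setting, the ROI rectangle would come from the eye tracker rather than being fixed, and the quantization would happen inside a standard codec's rate control rather than on raw pixels.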