• Title/Summary/Keyword: video compression.

Search Results: 778

2D Interpolation of 3D Point Cloud using Video-based Point Cloud Compression (비디오 기반 포인트 클라우드 압축을 사용한 3차원 포인트 클라우드의 2차원 보간 방안)

  • Hwang, Yonghae; Kim, Junsik; Kim, Kyuheon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2021.06a / pp.147-150 / 2021
  • Recently, as computer graphics technology has advanced, it has become difficult to distinguish virtually created objects from real ones, and research on representing real objects with computer graphics for services such as AR/VR/XR is being actively conducted. A point cloud is one technology for representing real objects; it describes the surface of an object with a very large number of 3D points and therefore has a much larger data size than a 2D image. To apply point clouds to various services, highly efficient compression tailored to the characteristics of 3D data is required, and the international standards organization MPEG has been studying Video-based Point Cloud Compression (V-PCC), which projects a dynamic point cloud with continuous motion onto 2D planes and compresses it with video codecs. This projection-based approach compresses 2D information such as the Occupancy Map, Geometry Image, and Attribute Image together with auxiliary information, and in the decoding process the auxiliary information and the 2D images are used to reconstruct the 3D point cloud. Because the point cloud is generated from 2D images, degradation of the image information during compression affects the quality of the point cloud; likewise, improving the 2D image information with additional techniques is expected to improve the quality of the point cloud. Accordingly, this paper applies 2D interpolation to the images generated by V-PCC to create additional points that were not included in the original image information, thereby increasing the density of the reconstructed point cloud, and analyzes the resulting effect.

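A minimal sketch may help make the projection-based reconstruction described in the abstract above concrete. The Python/NumPy code below rebuilds 3D points from an occupancy map and a geometry (depth) image under simplifying assumptions: a single patch, a fixed projection onto the XY plane, and a hypothetical patch_origin standing in for the auxiliary patch information. It is not the MPEG V-PCC reference implementation.

    import numpy as np

    def reconstruct_points(occupancy, geometry, attribute, patch_origin=(0.0, 0.0, 0.0)):
        """Illustrative reconstruction of 3D points from one projected patch.

        occupancy : (H, W) binary map, 1 where a projected point exists
        geometry  : (H, W) depth values along the projection axis
        attribute : (H, W, 3) per-pixel color
        patch_origin : assumed 3D offset of the patch (real V-PCC carries this
                       in the auxiliary patch information)
        """
        ys, xs = np.nonzero(occupancy)
        depths = geometry[ys, xs].astype(np.float64)
        colors = attribute[ys, xs]
        # For a patch projected onto the XY plane, (x, y) come from the image
        # grid and z comes from the geometry (depth) image.
        ox, oy, oz = patch_origin
        points = np.stack([xs + ox, ys + oy, depths + oz], axis=1)
        return points, colors

    if __name__ == "__main__":
        occ = np.zeros((4, 4), dtype=np.uint8)
        occ[1:3, 1:3] = 1                       # four occupied pixels
        geo = np.full((4, 4), 7.0)              # constant depth for simplicity
        attr = np.zeros((4, 4, 3), dtype=np.uint8)
        pts, cols = reconstruct_points(occ, geo, attr)
        print(pts)                              # four reconstructed 3D points

Because every reconstructed point comes from a pixel of these images, codec distortion in the occupancy or geometry planes translates directly into geometric error in the point cloud, which is the dependency the paper exploits.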

An Improvement of Still Image Quality Based on Error Resilient Entropy Coding for Random Error over Wireless Communications (무선 통신상 임의 에러에 대한 에러내성 엔트로피 부호화에 기반한 정지영상의 화질 개선)

  • Kim Jeong-Sig; Lee Keun-Young
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.3 s.309 / pp.9-16 / 2006
  • Many image and video compression algorithms work by splitting the image into blocks and producing variable-length code bits for each block. If variable-length code data are transmitted consecutively over an error-prone channel without any error protection, the receiving decoder cannot decode the stream properly. Standard image and video compression algorithms therefore insert some redundant information into the stream to provide protection against channel errors. One such redundancy is the resynchronization marker, which enables the decoder to restart decoding from a known state after a transmission error, but its use must be restricted so as not to consume too much bandwidth. The Error Resilient Entropy Code (EREC) is a well-known method that can regain synchronization without any redundant information, and it works with general prefix codes, which many image compression methods use. This paper proposes the EREREC method to improve FEREC (Fast Error-Resilient Entropy Coding). It first calculates the initial searching position according to the bit lengths of consecutive blocks. Second, the initial offset is decided using the statistical distribution of long and short blocks, and the offset is adjusted to ensure that all offset sequence values can be used. The proposed EREREC algorithm speeds up the construction of FEREC slots and improves the compressed image quality in the event of transmission errors. Simulation results show that the quality of the transmitted image is enhanced by about 0.3~3.5 dB compared with the existing FEREC when random channel errors occur.
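Since the abstract sketches how EREC redistributes variable-length block data into equal-sized slots so a decoder can resynchronize, a simplified Python illustration follows. The fixed offset sequence, the divisibility assumption, and the function name are illustrative; the paper's EREREC/FEREC refinements (bit-length-based initial search position, statistically chosen initial offset) are not modeled.

    def erec_pack(block_lengths, offsets=None):
        """Simplified EREC slot construction.

        block_lengths : bit lengths of the variable-length coded blocks
        offsets       : offset sequence visited stage by stage (here simply
                        0, 1, ..., N-1 for illustration)
        Returns, for each slot, a list of (block_index, bits_placed) pairs.
        """
        n = len(block_lengths)
        total = sum(block_lengths)
        slot_size = total // n                  # assume the total fills N equal slots
        assert slot_size * n == total, "pad the data so the total is divisible by N"

        remaining = list(block_lengths)         # bits of each block still unplaced
        free = [slot_size] * n                  # free bits left in each slot
        placement = [[] for _ in range(n)]
        if offsets is None:
            offsets = range(n)                  # stage 0 = own slot, then offsets 1..N-1

        for off in offsets:
            for i in range(n):
                if remaining[i] == 0:
                    continue
                j = (i + off) % n               # slot searched by block i at this stage
                take = min(remaining[i], free[j])
                if take > 0:
                    placement[j].append((i, take))
                    remaining[i] -= take
                    free[j] -= take
        assert all(r == 0 for r in remaining)   # every bit found a slot
        return placement

    if __name__ == "__main__":
        # Four blocks of 6, 2, 5, 3 bits are packed into four 4-bit slots.
        print(erec_pack([6, 2, 5, 3]))

Because each slot starts at a known, fixed position, a decoder hit by a channel error can still locate the beginning of every block, which is what the FEREC and EREREC variants speed up and refine.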

2D Interpolation of 3D Points using Video-based Point Cloud Compression (비디오 기반 포인트 클라우드 압축을 사용한 3차원 포인트의 2차원 보간 방안)

  • Hwang, Yonghae; Kim, Junsik; Kim, Kyuheon
    • Journal of Broadcast Engineering / v.26 no.6 / pp.692-703 / 2021
  • Recently, with the development of computer graphics technology, research on expressing real objects as more realistic virtual graphics has been actively conducted. A point cloud is a technology that represents a 3D object with numerous points, each carrying 3D spatial coordinates and color information, and it requires huge data storage and high-performance computing devices to provide various services. Video-based Point Cloud Compression (V-PCC), currently being studied by the international standards organization MPEG, is a projection-based method that projects a point cloud onto 2D planes and then compresses the result using 2D video codecs. V-PCC compresses point cloud objects using 2D images such as the Occupancy map, Geometry image, and Attribute image, together with auxiliary information that captures the relationship between the 2D planes and 3D space. When increasing the density of a point cloud or expanding an object, 3D calculations are generally used, but they are complicated, time-consuming, and make it difficult to determine the correct location of a new point. This paper proposes a method to generate additional points at more accurate locations with less computation by applying 2D interpolation to the images onto which the point cloud is projected in V-PCC.
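To make the densification idea concrete, here is a toy Python sketch: the projected samples are spread onto a 2x-finer grid and a new pixel is inserted between two occupied neighbors with the average of their depths, so each inserted pixel becomes an extra 3D point after re-projection. The averaging filter, the skipped diagonal neighbors, and the function name are assumptions for illustration, not the interpolation method evaluated in the paper.

    import numpy as np

    def densify_patch(occupancy, geometry):
        """Toy 2x densification of a projected patch by 2D interpolation."""
        h, w = occupancy.shape
        occ = np.zeros((2 * h - 1, 2 * w - 1), dtype=np.uint8)
        geo = np.zeros((2 * h - 1, 2 * w - 1), dtype=np.float64)
        occ[::2, ::2] = occupancy               # keep the original samples
        geo[::2, ::2] = geometry

        # Insert a sample between horizontally adjacent occupied pixels.
        both_h = occupancy[:, :-1] & occupancy[:, 1:]
        occ[::2, 1::2] = both_h
        geo[::2, 1::2] = np.where(both_h, (geometry[:, :-1] + geometry[:, 1:]) / 2, 0)

        # Insert a sample between vertically adjacent occupied pixels.
        both_v = occupancy[:-1, :] & occupancy[1:, :]
        occ[1::2, ::2] = both_v
        geo[1::2, ::2] = np.where(both_v, (geometry[:-1, :] + geometry[1:, :]) / 2, 0)
        return occ, geo

    if __name__ == "__main__":
        occ0 = np.ones((2, 2), dtype=np.uint8)
        geo0 = np.array([[1.0, 2.0], [3.0, 4.0]])
        occ, geo = densify_patch(occ0, geo0)
        print(int(occ.sum()), "occupied pixels instead of", int(occ0.sum()))

Working on the 2D images keeps the cost at a few array operations per patch, which is the computational advantage over inserting points by 3D neighborhood calculations.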

A Symmetric Motion Estimation Method by using the Properties of the Distribution of Motion Vectors (움직임 벡터 분포 특성과 블록 움직임의 특성을 이용한 대칭형 움직임 추정 기법)

  • Yoon, Hyo-Sun; Kim, Mi-Young
    • The Journal of the Korea Contents Association / v.17 no.3 / pp.329-336 / 2017
  • In video compression, motion estimation (ME) largely determines image quality and the generated bit rate, but it accounts for much of the encoder's computational complexity. Multi-view video uses many cameras at different positions, so multi-view video coding requires computational complexity that grows in proportion to the number of cameras. To reduce computational complexity while maintaining image quality, an effective motion estimation method is proposed in this paper. The proposed method exploits the characteristics of the motion vector distribution and of block motion, and it follows a hierarchical search strategy consisting of a multi-grid rhombus pattern, a diagonal pattern, a rectangle pattern, and a refinement pattern. Experimental results show that the complexity reduction of the proposed method over the TZ search method and PBS (Pel Block Search) on JMVC (Joint Multiview Video Coding) can be up to 40~75% and 98% respectively, while maintaining similar image quality and generated bit rates.
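As a concrete reference for pattern-based motion estimation of the kind discussed above, the Python sketch below runs a plain coarse-to-fine diamond (rhombus) search over SAD costs. It only illustrates the multi-grid idea of searching a coarse pattern and refining around the best candidate; the paper's diagonal, rectangle, and refinement patterns and its use of motion vector distribution statistics are not reproduced.

    import numpy as np

    def sad(cur, ref, bx, by, mvx, mvy, bsize):
        """Sum of absolute differences between a block and its displaced reference."""
        h, w = ref.shape
        x, y = bx + mvx, by + mvy
        if x < 0 or y < 0 or x + bsize > w or y + bsize > h:
            return np.inf                        # candidate falls outside the frame
        c = cur[by:by + bsize, bx:bx + bsize].astype(np.int64)
        r = ref[y:y + bsize, x:x + bsize].astype(np.int64)
        return np.abs(c - r).sum()

    def diamond_search(cur, ref, bx, by, bsize=8, step=4):
        """Coarse-to-fine diamond pattern search around the zero vector."""
        best = (0, 0)
        best_cost = sad(cur, ref, bx, by, 0, 0, bsize)
        while step >= 1:
            improved = True
            while improved:                      # walk the diamond until no move helps
                improved = False
                for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
                    cand = (best[0] + dx, best[1] + dy)
                    cost = sad(cur, ref, bx, by, cand[0], cand[1], bsize)
                    if cost < best_cost:
                        best, best_cost, improved = cand, cost, True
            step //= 2                           # refine with a finer grid
        return best, best_cost

    if __name__ == "__main__":
        yy, xx = np.mgrid[0:64, 0:64]
        ref = (128 + 60 * np.sin(xx / 5.0) + 60 * np.cos(yy / 7.0)).astype(np.uint8)
        cur = np.roll(ref, shift=(2, 3), axis=(0, 1))    # true motion is (-3, -2)
        print(diamond_search(cur, ref, 16, 16))          # should converge to (-3, -2)

Examining only a handful of candidates per stage, instead of every position in a full-search window, is where the complexity savings reported against TZ search and PBS come from.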

Adaptive Multi-view Video Service Framework for Mobile Environments (이동 환경을 위한 적응형 다시점 비디오 서비스 프레임워크)

  • Kwon, Jun-Sup; Kim, Man-Bae; Choi, Chang-Yeol
    • Journal of Broadcast Engineering / v.13 no.5 / pp.586-595 / 2008
  • In this paper, we propose an adaptive multi-view video service framework suitable for mobile environments. The proposed framework generates intermediate views in near real time and overcomes the limitations of mobile services by adapting the multi-view video to the processing capability of the mobile device as well as to the user characteristics of the client. By implementing most of the adaptation processes on the server side, the load on the client can be reduced. H.264/AVC is adopted as the compression scheme. The framework can provide an interactive and efficient video service to a mobile client. For this, we present a multi-view video DIA (Digital Item Adaptation) that adapts the multi-view video according to the MPEG-21 DIA multimedia framework. Experimental results show that the proposed system can support a frame rate of 13 fps for 320×240 video and reduce the time needed to generate an intermediate view by 20% compared with a conventional 3D projection method.

A Method of Frame Synchronization for Stereoscopic 3D Video (스테레오스코픽 3D 동영상을 위한 동기화 방법)

  • Park, Youngsoo; Kim, Dohoon; Hur, Namho
    • Journal of Broadcast Engineering / v.18 no.6 / pp.850-858 / 2013
  • In this paper, we propose a frame synchronization method for stereoscopic 3D video that resolves the viewing problem caused by synchronization errors between the left and right videos, using temporal frame difference images that reflect the movement of objects. First, we compute the temporal frame difference images of the left and right videos, after correcting the vertical parallax between the two videos with rectification, and calculate the horizontal projection profile of each difference image. Then, we find a pair of synchronized frames of the two videos by measuring the mean absolute difference (MAD) of the two horizontal projection profiles. Experimental results show that the proposed method can be used for stereoscopic 3D video and is robust against Gaussian noise and video compression by H.264/AVC.
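The synchronization procedure above reduces each temporal frame difference image to a horizontal projection profile and then searches for the frame offset with the smallest mean absolute difference (MAD). The Python sketch below follows that outline; rectification is omitted and the offset range and profile definition are simplifying assumptions.

    import numpy as np

    def horizontal_profiles(frames):
        """Row-wise sums of |frame[t+1] - frame[t]| for a (T, H, W) gray sequence."""
        diff = np.abs(np.diff(frames.astype(np.int64), axis=0))
        return diff.sum(axis=2)                  # shape (T-1, H)

    def find_offset(left, right, max_offset=5):
        """Return the frame offset whose overlapping profiles give the smallest MAD."""
        pl, pr = horizontal_profiles(left), horizontal_profiles(right)
        best, best_mad = 0, np.inf
        for off in range(-max_offset, max_offset + 1):
            if off >= 0:                         # left leads right by `off` frames
                a, b = pl[off:], pr[:len(pr) - off]
            else:                                # right leads left
                a, b = pl[:len(pl) + off], pr[-off:]
            n = min(len(a), len(b))
            if n == 0:
                continue
            mad = np.mean(np.abs(a[:n].astype(np.float64) - b[:n]))
            if mad < best_mad:
                best, best_mad = off, mad
        return best

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        base = rng.integers(0, 256, (24, 32, 32))
        left, right = base[3:], base[:-3]        # same content, left skips 3 frames
        print(find_offset(left, right))          # expected offset: -3

Because the profiles depend only on object motion between frames rather than on absolute pixel values, the measure remains usable under noise or compression, which is the robustness the paper reports.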

An Efficient Video Watermarking Using Re-Estimation and Minimum Modification Technique of Motion Vectors (재예측과 움직임벡터의 변경 최소화 기법을 이용한 효율적인 비디오 워터마킹)

  • Kang Kyung-won; Moon Kwang-seok; Kim Jong-nam
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.6C / pp.497-504 / 2005
  • We propose an efficient video watermarking scheme that uses re-estimation of motion vectors and minimizes their modification. Conventional motion-vector-based methods embed the watermark by modifying motion vectors, but changing motion vectors degrades video quality. Our scheme therefore minimizes the modification of the original motion vectors and, to avoid quality degradation, replaces an original motion vector with the optimal adjacent motion vector found by re-estimation. In addition, the scheme guarantees the amount of embedded watermark data by using an adaptive threshold, and it is compatible with current video compression standards without changing the bitstream. Experimental results show that the proposed scheme obtains better video quality than previous algorithms by about 0.6~1.3 dB.
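To illustrate the minimum-modification idea in the abstract, the Python sketch below embeds one watermark bit in a block's motion vector through its parity: if the original vector already carries the bit it is left unchanged, otherwise the nearest candidate vectors with the correct parity are re-evaluated by SAD and the cheapest one replaces it. The parity mapping, the candidate set, and the absence of the paper's adaptive threshold are illustrative assumptions, not the published scheme.

    import numpy as np

    def embed_bit_in_mv(cur, ref, bx, by, mv, bit, bsize=8):
        """Embed one bit as the parity of (mvx + mvy), changing the vector minimally."""
        def cost(v):
            x, y = bx + v[0], by + v[1]
            h, w = ref.shape
            if x < 0 or y < 0 or x + bsize > w or y + bsize > h:
                return np.inf
            c = cur[by:by + bsize, bx:bx + bsize].astype(np.int64)
            r = ref[y:y + bsize, x:x + bsize].astype(np.int64)
            return np.abs(c - r).sum()

        if (mv[0] + mv[1]) % 2 == bit:
            return mv                            # parity already matches: no change
        # Re-estimate among the four nearest vectors with the correct parity
        # and keep the one with the lowest prediction error.
        candidates = [(mv[0] + 1, mv[1]), (mv[0] - 1, mv[1]),
                      (mv[0], mv[1] + 1), (mv[0], mv[1] - 1)]
        return min(candidates, key=cost)

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        ref = rng.integers(0, 256, (32, 32), dtype=np.uint8)
        cur = np.roll(ref, shift=(0, 1), axis=(0, 1))    # true motion vector (-1, 0)
        print(embed_bit_in_mv(cur, ref, 8, 8, (-1, 0), bit=0))  # even-parity neighbour

Choosing the replacement vector by re-estimation rather than by a fixed perturbation is what keeps the prediction error, and hence the quality loss, small.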

Fast Video Detection Using Temporal Similarity Extraction of Successive Spatial Features (연속하는 공간적 특징의 시간적 유사성 검출을 이용한 고속 동영상 검색)

  • Cho, A-Young; Yang, Won-Keun; Cho, Ju-Hee; Lim, Ye-Eun; Jeong, Dong-Seok
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.11C / pp.929-939 / 2010
  • The growth of multimedia technology drives the development of video detection for large-database management and illegal copy detection. To meet this demand, this paper proposes a fast video detection method that can be applied to large databases. The algorithm uses spatial features based on the gray-value distribution of frames and temporal features based on a temporal similarity map. We form the video signature from the extracted spatial and temporal features and carry out a stepwise matching method. Performance was evaluated in terms of accuracy, extraction and matching time, and signature size, using original videos and their modified versions, such as brightness change, lossy compression, and text/logo overlay. We show the empirical parameter selection and the experimental results for the simple matching method that uses only the spatial feature, and compare the results with existing algorithms. According to the experimental results, the proposed method performs well in accuracy, processing time, and signature size, so the proposed fast detection algorithm is suitable for video detection on large databases.
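The stepwise matching described above can be sketched in Python as follows: a per-frame spatial signature from the gray-value distribution (block means), a temporal similarity map of frame-to-frame signature distances, and a cheap clip-level check that filters candidates before the more expensive diagonal scoring. The grid size, threshold, and scoring rule are illustrative assumptions rather than the paper's tuned parameters.

    import numpy as np

    def spatial_signature(frames, grid=4):
        """Per-frame spatial feature: mean gray value of a grid x grid partition."""
        t, h, w = frames.shape
        cropped = frames[:, :h - h % grid, :w - w % grid]
        blocks = cropped.reshape(t, grid, cropped.shape[1] // grid,
                                 grid, cropped.shape[2] // grid)
        return blocks.mean(axis=(2, 4)).reshape(t, -1)

    def temporal_similarity(query_sig, db_sig):
        """Map of mean absolute distances between every query frame and database frame."""
        return np.abs(query_sig[:, None, :] - db_sig[None, :, :]).mean(axis=2)

    def stepwise_match(query, database_videos, coarse_threshold=20.0):
        """Two-step matching: cheap clip-level check, then best-diagonal scoring."""
        q = spatial_signature(query)
        results = []
        for name, video in database_videos.items():
            d = spatial_signature(video)
            # Step 1: clip-level signatures must already be close.
            if np.abs(q.mean(axis=0) - d.mean(axis=0)).mean() > coarse_threshold:
                continue
            sim = temporal_similarity(q, d)
            tq, td = sim.shape
            if td < tq:
                continue                          # candidate shorter than the query
            # Step 2: slide the query and score the best diagonal of the map.
            scores = [sim[np.arange(tq), np.arange(tq) + k].mean()
                      for k in range(td - tq + 1)]
            results.append((min(scores), name))
        return sorted(results)

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        original = rng.integers(0, 256, (30, 32, 32)).astype(np.float64)
        brightened_copy = np.clip(original + 5, 0, 255)
        unrelated = rng.integers(0, 256, (30, 32, 32)).astype(np.float64)
        query = original[5:20]
        print(stepwise_match(query, {"copy": brightened_copy, "other": unrelated}))
        # the brightened copy should rank first (smallest distance)

Discarding most candidates after the cheap first step is what keeps this kind of matching fast enough for large databases.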

A Study on the Evaluation of MPEG-4 Video Decoding Complexity for HDTV (HDTV를 위한 MPEG-4 비디오 디코딩 복잡도의 평가에 관한 연구)

  • Ahn, Seong-Yeol; Park, Won-Woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / v.9 no.2 / pp.595-598 / 2005
  • MPEG-4 Visual is an international standard for object-based video compression, designed to support a wide range of applications from multimedia communication to HDTV. To control the minimum decoding complexity required at the decoder, the MPEG-4 Visual standard defines a so-called video buffering mechanism, which includes three video buffer models. Among them, the VCV (Video Complexity Verifier) governs the processing speed required for decoding macroblocks; there are two models, VCV and B-VCV, which distinguish boundary and non-boundary MBs. This paper presents an evaluation of decoding complexity obtained by measuring the MB decoding time for rectangular and arbitrarily shaped video objects, and for various object types supporting HDTV resolution, using the optimized MPEG-4 Reference Software. The experimental results show that decoding complexity varies depending on the coding type and that more effective use of decoding resources may be possible.


A study on performance evaluation of DVCs with different coding method and feasibility of spatial scalable DVC (분산 동영상 코딩의 코딩 방식에 따른 성능 평가와 공간 계층화 코더로서의 가능성에 대한 연구)

  • Kim, Dae-Yeon; Park, Gwang-Hoon; Kim, Kyu-Heon; Suh, Doug-Young
    • Journal of Broadcast Engineering / v.12 no.6 / pp.585-595 / 2007
  • Distributed video coding is a new video coding paradigm based on the Slepian-Wolf and Wyner-Ziv information theory. Because its decoder exploits side information, distributed video coding transfers the computational burden from the encoder to the decoder, so that encoding with light computational power can be realized. Its rate-distortion performance is superior to that of standard video coding without motion compensation, but a gap still remains compared with coding that uses motion compensation. This paper introduces the basic theory and structure of distributed video coding, and then shows the RD performance of DVCs that use different coding methods, as well as of a DVC used as a spatially scalable video coder.