• Title/Summary/Keyword: compression coding

Search Results: 828

A Study on the Multiresolutional Coding Based on Spline Wavelet Transform (스플라인 웨이브렛 변환을 이용한 영상의 다해상도 부호화에 관한 연구)

  • 김인겸;정준용;유충일;이광기;박규태
    • The Journal of Korean Institute of Communications and Information Sciences / v.19 no.12 / pp.2313-2327 / 1994
  • As the communication environment evolves, there is an increasing need for multiresolution image coding. To meet this need, this paper introduces an entropy-constrained vector quantizer (ECVQ) for coding image pyramids produced by the spline wavelet transform. The proposed scheme takes psychovisual features into account in both the space and frequency domains, and involves two steps. First, the spline wavelet transform is used to obtain a set of biorthogonal subclasses of images: the original image is decomposed at different scales with a pyramidal algorithm, along the vertical and horizontal directions, while the total number of pixels required to represent the image remains constant. Second, following Shannon's rate-distortion theory, the wavelet coefficients are vector quantized with a multiresolution ECVQ codebook. Simulation results showed that on the LENA image the proposed method improves quality by about 2.0 dB over an ECVQ using other wavelets at 0.5 bpp, and by about 0.5 dB at 1.0 bpp, while reducing blocking artifacts and edge degradation.

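The pyramidal decomposition the abstract describes can be sketched in a few lines. The snippet below uses the simple Haar average/difference pair as a stand-in for the paper's biorthogonal spline filters; it splits an image into one approximation and three detail subbands whose pixels together equal the original count.

```python
import numpy as np

def haar_decompose(img):
    """One level of a 2-D wavelet pyramid. Haar averages/differences are
    used here as a stand-in for the paper's biorthogonal spline filters.
    The four quarter-size subbands together hold exactly as many samples
    as the original image, so the representation stays the same size."""
    a, b = img[0::2, :], img[1::2, :]          # filter along rows
    lo, hi = (a + b) / 2.0, (a - b) / 2.0
    a, b = lo[:, 0::2], lo[:, 1::2]            # then along columns
    ll, lh = (a + b) / 2.0, (a - b) / 2.0
    a, b = hi[:, 0::2], hi[:, 1::2]
    hl, hh = (a + b) / 2.0, (a - b) / 2.0
    return ll, lh, hl, hh

img = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar_decompose(img)           # recurse on ll for more levels
```

Recursing on the `ll` band yields the multiresolution pyramid whose coefficients the paper then feeds to the ECVQ.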

Optimization Method on the Number of the Processing Elements in the Multi-Stage Motion Estimation Algorithm for High Efficiency Video Coding (HEVC 다단계 움직임 추정 기법에서 단위 연산기 개수의 최적화 방법)

  • Lee, Seongsoo
    • Journal of IKEEE / v.21 no.1 / pp.100-103 / 2017
  • Motion estimation accounts for the largest share of computation in video compression. Multiple processing elements are often run in parallel to meet the required processing speed: more processing elements increase speed but also increase hardware area, so it is important to optimize their number. HEVC (High Efficiency Video Coding) encoders usually adopt multi-stage motion estimation algorithms for low computation and high performance. Since the number and positions of search points differ from stage to stage, the utilization of the processing elements is not always 100% and varies considerably with their number. This paper proposes a method for optimizing the number of processing elements: it finds the optimal count for a given multi-stage motion estimation algorithm by calculating the utilization and execution cycles of the processing elements.
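The utilization/cycle trade-off can be illustrated with a small sketch. The stage point counts, cycle budget, and selection rule below are illustrative assumptions, not the paper's numbers or objective:

```python
import math

def optimize_pe_count(points_per_stage, budget_cycles, pe_candidates):
    """Return the smallest PE count (i.e. least hardware area) whose total
    execution cycles fit the budget, plus its cycle count and average
    utilization. Each stage takes ceil(points / PEs) cycles, because the
    last cycle of a stage may leave some PEs idle."""
    for pe in sorted(pe_candidates):
        cycles = sum(math.ceil(n / pe) for n in points_per_stage)
        if cycles <= budget_cycles:
            util = sum(points_per_stage) / (pe * cycles)
            return pe, cycles, util
    return None

# Illustrative search-point counts per stage (not the paper's numbers).
stages = [25, 16, 8, 4, 1]
pe, cycles, util = optimize_pe_count(stages, budget_cycles=16,
                                     pe_candidates=[1, 2, 4, 8, 16])
```

With these numbers, 4 PEs meet the 16-cycle budget at 90% average utilization, while fewer PEs miss the budget and more PEs only lower utilization.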

Digital Speech Coding Technologies for Wire and Wireless Communication (유무선망에서 사용되는 디지털 음성 부호화 기술 동향)

  • Yoon, Byungsik;Choi, Songin;Kang, Sangwon
    • Journal of Broadcast Engineering / v.10 no.3 / pp.261-269 / 2005
  • Throughout the history of digital communication, digital speech coders have been used as speech compression tools. The speech coder has developed rapidly in mobile communication systems to cope with severe channel errors and limited radio frequency resources. With the advent of high-performance communication systems, high-quality speech coders are needed; such coders can be used not only in communication services but also in digital multimedia services. In this paper, we describe the digital speech coding technologies used in wired and wireless communication, present a summary of recent speech coding standards for narrowband and wideband applications, and introduce technical trends in next-generation speech coders.

A Ranking Method for Improving Performance of Entropy Coding in Gray-Level Images (그레이레벨 이미지에서의 엔트로피 코딩 성능 향상을 위한 순위 기법)

  • You, Kang-Soo;Sim, Chun-Bo
    • Journal of the Korea Institute of Information and Communication Engineering / v.12 no.4 / pp.707-715 / 2008
  • This paper proposes an algorithm for efficient compression of gray-level images with an entropy encoder. The key idea is to replace the original gray-level data with rank values. Before encoding the stream of gray levels in an image, the proposed method counts co-occurrence frequencies of neighboring pixel values. It then replaces each gray value with its rank based on the investigated co-occurrence frequencies, and the rank numbers are passed to an entropy encoder. By transforming the original gray levels into a rank-based image using statistical co-occurrence frequencies, the method improves the performance of existing entropy coding. Simulation results on 8-bit gray-level images show that the proposed method can reduce the bit rate by up to 37.85% compared to conventional entropy coders.
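A minimal 1-D version of this ranking idea can be sketched as follows; the left-neighbor context and two-pass structure are simplifying assumptions, not the paper's exact scheme:

```python
from collections import Counter

def rank_transform(row):
    """1-D sketch of the ranking idea: replace each gray value by its
    frequency rank given its left neighbour. Frequent (context, value)
    pairs get small ranks, skewing the symbol distribution in favour of a
    downstream entropy coder. The rank table must also be available to
    the decoder (sent as side information or rebuilt adaptively)."""
    cooc = Counter(zip(row, row[1:]))          # pass 1: co-occurrence counts
    table = {}
    for (ctx, val), _ in cooc.most_common():   # most frequent pair first
        table.setdefault(ctx, [])
        if val not in table[ctx]:
            table[ctx].append(val)
    out = [row[0]]                             # first pixel has no context
    for left, cur in zip(row, row[1:]):
        out.append(table[left].index(cur))     # pass 2: emit ranks
    return out, table

ranks, table = rank_transform([5, 5, 5, 7, 5, 5])
```

In the example, the frequent transition 5→5 becomes rank 0 everywhere, so the entropy coder sees a stream dominated by a single small symbol.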

GIS Vector Map Compression using Spatial Energy Compaction based on Bin Classification (빈 분류기반 공간에너지집중기법을 이용한 GIS 벡터맵 압축)

  • Jang, Bong-Joo;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of the Institute of Electronics Engineers of Korea CI / v.49 no.3 / pp.15-26 / 2012
  • Recently, with the growing applicability of vector-data-based digital maps for geographic information and advances in geographic measurement techniques, GIS (geographic information system) services with high-resolution, large-volume data have become widespread. This paper proposes an efficient vector map compression technique that applies SEC (spatial energy compaction) to classified bins, targeting vector maps with 1 cm detail over a wide area. We encode polygons and polylines, the main objects used to express geographic information in a vector map. First, we classify three types of bins and allocate the number of bits for each bin using adjacency among objects. Then, for each classified bin, energy compaction or pre-defined VLC (variable length coding) is performed according to the bin's characteristics. For the same target map, a vector simplification algorithm achieved a compression ratio of about 13% at 1 m resolution, while our method achieved more than 80% encoding efficiency on the original vector map at 1 cm resolution. Experimental results also show a higher compression ratio and faster computation than the existing SEC-based compression algorithm, as well as better accuracy and computing power than a vector approximation algorithm at the same data volume.
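The bin-classification step can be illustrated with a toy sketch. The thresholds, bin count, and bit-allocation rule below are illustrative assumptions; the paper's actual SEC step and VLC tables are not reproduced:

```python
import math

def classify_and_allocate(points):
    """Delta-encode a polyline and sort the coordinate deltas into three
    bins by magnitude, then give each bin just enough bits for its largest
    value. Thresholds and the bin count are illustrative only."""
    deltas = [(x2 - x1, y2 - y1)
              for (x1, y1), (x2, y2) in zip(points, points[1:])]
    bins = {"near": [], "mid": [], "far": []}
    for dx, dy in deltas:
        m = max(abs(dx), abs(dy))
        bins["near" if m < 16 else "mid" if m < 256 else "far"].append((dx, dy))
    bits = {}
    for key, vals in bins.items():
        if vals:
            peak = max(max(abs(dx), abs(dy)) for dx, dy in vals)
            bits[key] = 1 + math.ceil(math.log2(peak + 1))  # sign + magnitude
        else:
            bits[key] = 0
    return bins, bits

bins, bits = classify_and_allocate([(0, 0), (3, 4), (100, 104), (101, 105)])
```

Small deltas between adjacent vertices land in the "near" bin and cost only a few bits each, which is where most of the compression comes from.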

An Efficient Test Data Compression/Decompression for Low Power Testing (저전력 테스트를 고려한 효율적인 테스트 데이터 압축 방법)

  • Chun Sunghoon;Im Jung-Bin;Kim Gun-Bae;An Jin-Ho;Kang Sungho
    • Journal of the Institute of Electronics Engineers of Korea SD / v.42 no.2 s.332 / pp.73-82 / 2005
  • Test data volume and power consumption for scan vectors are two major problems in system-on-a-chip testing. This paper therefore proposes a new test data compression/decompression method for low-power testing, based on an analysis of the factors that influence the relevant test parameters: compression ratio, power reduction, and hardware overhead. To improve the compression ratio and the power reduction ratio, the proposed method combines Modified Statistical Coding (MSC), an Input Reduction (IR) scheme, and preprocessing algorithms that reorder scan flip-flops and the test pattern sequence. Unlike previous approaches that use the CSR architecture, the proposed method compresses the original test data rather than $T_{diff}$ and decompresses the compressed data without the CSR architecture. As a result, it achieves a better compression ratio with lower hardware overhead and lower power consumption than previous works. Experimental comparisons on the ISCAS '89 benchmark circuits validate the proposed method.

Effective Compression of the Surveillance Video with Region of Interest (관심영역 구분을 통한 감시영상시스템의 효율적 압축)

  • Ko, Mi-Ae;Kim, Young-Mo;Koh, Kwang-Sik
    • The KIPS Transactions:PartB / v.10B no.1 / pp.95-102 / 2003
  • In a surveillance video system there are many classes of images, and some spatial regions are more important than others. Conventional compression methods compress full frames without classifying regions by importance. To improve coding accuracy and deliver effective compression for surveillance video, the regions must be separated according to their importance. This paper proposes a new, effective compression method for surveillance video. The proposed scheme defines three importance levels of region-of-interest blocks in a frame: background, motion object blocks, and feature object blocks. A captured frame is separated into these three levels, and depending on priority each block can be processed and compressed with a different resolution, compression ratio, and quality factor. In a surveillance video system, this algorithm therefore not only reduces image processing time and storage, but also preserves the important image data at high quality, meeting the system's goal.
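The three-level block classification could be sketched as below. The frame-difference criterion, block size, threshold, and quality values are illustrative assumptions; the paper's actual classifier is not reproduced:

```python
import numpy as np

def classify_blocks(prev, cur, block=8, motion_thr=10.0):
    """Label each block of a frame as background (0) or motion object (1)
    by mean absolute frame difference. A full system would further promote
    blocks containing detected features (e.g. faces) to level 2; the
    threshold here is illustrative."""
    diff = np.abs(cur.astype(float) - prev.astype(float))
    h, w = cur.shape
    labels = np.zeros((h // block, w // block), dtype=int)
    for i in range(h // block):
        for j in range(w // block):
            patch = diff[i * block:(i + 1) * block, j * block:(j + 1) * block]
            if patch.mean() > motion_thr:
                labels[i, j] = 1
    return labels

# Each level would then get its own compression settings, e.g.:
quality = {0: 20, 1: 60, 2: 90}    # background coarse, feature blocks fine

prev = np.zeros((16, 16))
cur = np.zeros((16, 16))
cur[:8, :8] = 100                  # motion in the top-left block
labels = classify_blocks(prev, cur)
```

The encoder then spends most of its bit budget on the level-1 and level-2 blocks while the static background is coded coarsely.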

A Feature Map Compression Method for Multi-resolution Feature Map with PCA-based Transformation (PCA 기반 변환을 통한 다해상도 피처 맵 압축 방법)

  • Park, Seungjin;Lee, Minhun;Choi, Hansol;Kim, Minsub;Oh, Seoung-Jun;Kim, Younhee;Do, Jihoon;Jeong, Se Yoon;Sim, Donggyu
    • Journal of Broadcast Engineering / v.27 no.1 / pp.56-68 / 2022
  • In this paper, we propose a compression method for the multi-resolution feature maps used in VCM (Video Coding for Machines). The proposed method removes redundancy between the channels and resolution levels of the multi-resolution feature map through a PCA-based transformation. The basis vectors and mean vector used for the transformation, and the transform coefficients it produces, are compressed with a VVC-based coder and DeepCABAC according to their characteristics. To evaluate the proposed method, object detection performance was measured on the OpenImageV6 and COCO 2017 validation sets, and the BD-rate against the MPEG-VCM anchor and the feature map compression anchor proposed in this paper was compared using bpp and mAP. Experimental results show that the proposed method achieves a 25.71% BD-rate improvement over the feature map compression anchor on OpenImageV6, and for large objects in the COCO 2017 validation set the BD-rate improves by up to 43.72% compared to the MPEG-VCM anchor.
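The channel-direction PCA can be sketched as follows; the toy feature map and the choice of SVD are illustrative, and the paper's handling of multiple resolution levels is omitted:

```python
import numpy as np

def pca_compress_channels(fmap, k):
    """Project the C channel vectors of a (C, H, W) feature map onto the
    top-k principal components to strip inter-channel redundancy. The mean
    vector, basis vectors, and coefficients are the three streams that
    would then go to the VVC-based coder / DeepCABAC."""
    c, h, w = fmap.shape
    x = fmap.reshape(c, h * w)
    mean = x.mean(axis=0)
    xc = x - mean
    _, _, vt = np.linalg.svd(xc, full_matrices=False)
    basis = vt[:k]                 # (k, H*W) principal directions
    coeff = xc @ basis.T           # (C, k) transform coefficients
    return mean, basis, coeff

def pca_reconstruct(mean, basis, coeff, shape):
    return (coeff @ basis + mean).reshape(shape)

# A rank-1 toy "feature map": k=1 reconstructs it exactly.
fmap = np.outer([1.0, 2.0, 3.0], [1.0, 0.0, 2.0, 1.0]).reshape(3, 2, 2)
mean, basis, coeff = pca_compress_channels(fmap, k=1)
rec = pca_reconstruct(mean, basis, coeff, fmap.shape)
```

Because real feature maps have highly correlated channels, a small k captures most of the energy, and only the compact coefficient matrix varies per image.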

New Illumination compensation algorithm improving a multi-view video coding performance by advancing its temporal and inter-view correlation (다시점 비디오의 시공간적 중복도를 높여 부호화 성능을 향상시키는 새로운 조명 불일치 보상 기법)

  • Lee, Dong-Seok;Yoo, Ji-Sang
    • Journal of Broadcast Engineering / v.15 no.6 / pp.768-782 / 2010
  • Because of differences in shooting position between multi-view cameras and imperfect camera calibration, illumination mismatches can occur in multi-view video. These variations can degrade the performance of multi-view video coding (MVC). A histogram matching algorithm can be applied in a prefiltering step to compensate for these inconsistencies: once all camera frames of a multi-view sequence are adjusted to a predefined reference through histogram matching, the coding efficiency of MVC improves. However, the histogram distribution can differ not only between neighboring views but also between successive frames within a view, owing to camera movement and moving objects, especially people. A histogram matching algorithm that references all frames in the chosen view is therefore not appropriate for compensating the illumination differences of such sequences. We propose two new algorithms: an image classification algorithm that applies two criteria to improve inter-view correlation, and a histogram matching algorithm that references and matches a group of pictures (GOP) as a unit to improve the correlation between successive frames. Experimental results show that the proposed algorithms improve the compression ratio compared with conventional algorithms.
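The basic histogram matching step that both the conventional and the proposed schemes build on can be sketched as follows (a standard CDF-matching formulation, not the paper's GOP-unit variant):

```python
import numpy as np

def match_histogram(src, ref):
    """Map the gray levels of `src` so that its cumulative histogram
    matches that of `ref` -- the classic prefiltering step for
    compensating inter-view illumination mismatch before encoding."""
    s_vals, s_counts = np.unique(src.ravel(), return_counts=True)
    r_vals, r_counts = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / src.size
    r_cdf = np.cumsum(r_counts) / ref.size
    # For each source level, pick the reference level at the same CDF height.
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    lut = dict(zip(s_vals, mapped))
    return np.vectorize(lut.get)(src)

src = np.array([[10, 20], [30, 40]])
ref = src + 50                      # same scene, uniformly brighter view
out = match_histogram(src, ref)     # levels shift to match the bright view
```

The paper's contribution is essentially in choosing *which* frames feed the reference histogram (a GOP rather than a whole view), not in the matching step itself.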

Tile-level and Frame-level Parallel Encoding for HEVC (타일 및 프레임 수준의 HEVC 병렬 부호화)

  • Kim, Younhee;Seok, Jinwuk;Jung, Soon-heung;Kim, Huiyong;Choi, Jin Soo
    • Journal of Broadcast Engineering / v.20 no.3 / pp.388-397 / 2015
  • High Efficiency Video Coding (HEVC)/H.265 is a video coding standard known for its high compression ratio compared to the previous standard, Advanced Video Coding (AVC)/H.264. To achieve this efficiency, HEVC sacrifices time complexity, so fast encoding is one of the key requirements for applying HEVC to market applications. Exploiting thread-level parallelism is a widely chosen mechanism for fast encoding, since multi-threading is commonly supported on multi-core architectures. In this paper, we implement both tile-level and frame-level parallelism for HEVC encoding on a multi-core platform, and present two approaches to combining them. The first approach creates a fixed number of tiles per frame, while the second adapts the number of tiles per frame to the number of frames encoded in parallel and the number of available worker threads. Experimental results show that both approaches improve parallel scalability compared to using tile-level parallelism alone, and that the second approach achieves a good trade-off between parallel scalability and coding efficiency for both Full HD (1920 x 1080) and 4K UHD (3840 x 2160) sequences.
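The adaptive tile-count idea can be sketched with a thread pool. The even division of threads over frames and the column-wise tiling are illustrative assumptions; `encode_tile` stands in for a real HEVC tile encoder:

```python
from concurrent.futures import ThreadPoolExecutor

def encode_frame_in_tiles(frame, encode_tile, n_frames_in_parallel, n_threads):
    """Split one frame into vertical tile columns and encode them
    concurrently. The tile count adapts to the worker budget, as in the
    second approach: threads are divided evenly over the frames being
    encoded in parallel."""
    tiles_per_frame = max(1, n_threads // n_frames_in_parallel)
    width = len(frame[0])
    step = -(-width // tiles_per_frame)        # ceil(width / tiles)
    tiles = [[row[i:i + step] for row in frame] for i in range(0, width, step)]
    with ThreadPoolExecutor(max_workers=tiles_per_frame) as pool:
        return list(pool.map(encode_tile, tiles))  # results in tile order

frame = [[1, 2, 3, 4], [5, 6, 7, 8]]
coded = encode_frame_in_tiles(frame, encode_tile=lambda t: sum(map(sum, t)),
                              n_frames_in_parallel=2, n_threads=4)
```

With four threads and two frames in flight, each frame gets two tiles, keeping every worker busy without over-partitioning the frame (more tiles would hurt coding efficiency at tile boundaries).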