• Title/Summary/Keyword: image encoding

Search results: 429

Feature based Text Watermarking in Digital Binary Image (이진 문서 영상에서의 특징 기반 텍스트 워터마킹)

  • 공영민;추현곤;최종욱;김희율
    • Proceedings of the IEEK Conference
    • /
    • 2002.06d
    • /
    • pp.359-362
    • /
    • 2002
  • In this paper, we propose a new feature-based text watermarking method for binary text images. The structure of specific characters in the preprocessed text image is modified to embed the watermark. The watermark message is embedded and detected by the following methods: hole line disconnection using the connectivity of characters containing a hole, center line shifting using the hole area, and differential encoding using the difference of flippable score points. Experimental results show that the proposed method is robust to rotation and scaling distortion.
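
The combination of hole-line, center-line, and flippable-score operations above is specific to the paper and is not reproduced here. The sketch below only illustrates the general idea of flippability-guided embedding in a binary character block, using a made-up flippability score and a parity-based embedding rule as assumptions.

```python
import numpy as np

def flippable_scores(block):
    """Toy flippability score: boundary pixels with a mixed black/white
    neighborhood score higher (a stand-in for the paper's score map)."""
    h, w = block.shape
    scores = np.zeros_like(block, dtype=float)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = block[y-1:y+2, x-1:x+2]
            if neigh.min() != neigh.max():                 # boundary pixel
                scores[y, x] = 1.0 - abs(neigh.mean() - 0.5)
    return scores

def embed_bit(block, bit):
    """Illustrative embedding rule only: flip the most 'flippable' pixel so
    that the parity of black pixels in the block equals the watermark bit."""
    block = block.copy()
    while int(block.sum()) % 2 != bit:
        scores = flippable_scores(block)
        y, x = np.unravel_index(np.argmax(scores), scores.shape)
        block[y, x] ^= 1
    return block

def extract_bit(block):
    """Detection is simply the parity of black pixels under this toy rule."""
    return int(block.sum()) % 2

# toy usage on a random 16x16 binary 'character' block
rng = np.random.default_rng(0)
char = (rng.random((16, 16)) > 0.6).astype(np.uint8)
marked = embed_bit(char, 1)
assert extract_bit(marked) == 1
```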


Study on Fast HEVC Encoding with Hierarchical Motion Vector Clustering (움직임 벡터의 계층적 군집화를 통한 HEVC 고속 부호화 연구)

  • Lim, Jeongyun;Ahn, Yong-Jo;Sim, Donggyu
    • Journal of Broadcast Engineering
    • /
    • v.21 no.4
    • /
    • pp.578-591
    • /
    • 2016
  • In this paper, a fast encoding algorithm for the High Efficiency Video Coding (HEVC) encoder is studied. For coding efficiency, the current HEVC reference software divides the input image into Coding Tree Units (CTUs), which are then recursively partitioned into Coding Units (CUs) in a quad-tree form up to the maximum depth for Rate-Distortion Optimization (RDO) during encoding. This recursive partitioning is one of the main reasons for the high complexity of the encoding process. To reduce this complexity, this paper proposes a method that determines the maximum depth of the CU using hierarchical clustering in a pre-processing step. The hierarchical clustering result represents an average merging of the motion vectors (MVs) of neighboring blocks. Experimental results show that the proposed method achieves an average of 16% time saving with minimal BD-rate loss at 1080p video resolution. When combined with a previous fast algorithm, the proposed method achieves an average of 45.13% time saving with 1.84% BD-rate loss.
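
The paper's exact depth-decision rule is not reproduced here. The sketch below shows one plausible way to turn agglomerative clustering of a CTU's motion vectors into a maximum CU depth, using SciPy's hierarchical clustering; the distance threshold and the cluster-count-to-depth mapping are invented for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def max_cu_depth_from_mvs(mvs, dist_thresh=4.0):
    """Illustrative pre-processing rule (not the paper's exact mapping):
    agglomeratively cluster the motion vectors of the blocks covering a CTU;
    a CTU whose MVs collapse into few clusters likely has homogeneous motion,
    so its quad-tree RDO search can stop at a shallow depth."""
    mvs = np.asarray(mvs, dtype=float)
    if len(mvs) < 2:
        return 0
    Z = linkage(mvs, method='average')                    # hierarchical clustering
    labels = fcluster(Z, t=dist_thresh, criterion='distance')
    n_clusters = labels.max()
    # hypothetical mapping: more distinct motion clusters -> allow deeper split
    if n_clusters == 1:
        return 1        # restrict RDO to 64x64 / 32x32
    elif n_clusters <= 3:
        return 2        # allow splits down to 16x16
    return 3            # full depth, down to 8x8

# toy usage: MVs of the 16 co-located blocks of a pre-analysis pass over one CTU
ctu_mvs = [(2, 1)] * 12 + [(9, -4)] * 4
print(max_cu_depth_from_mvs(ctu_mvs))
```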

A Fast Encoding Algorithm for Image Vector Quantization Based on Prior Test of Multiple Features (복수 특징의 사전 검사에 의한 영상 벡터양자화의 고속 부호화 기법)

  • Ryu Chul-hyung;Ra Sung-woong
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.12C
    • /
    • pp.1231-1238
    • /
    • 2005
  • This paper presents a new fast encoding algorithm for image vector quantization that incorporates the partial distances of multiple features into a multidimensional look-up table (LUT). Although previously proposed methods also use multiple features, they handle them step by step in terms of search order and calculation; the proposed algorithm, in contrast, utilizes these features simultaneously through the LUT. This paper describes how to build the LUT while considering the boundary effect for a feasible memory cost, and how to terminate the current search using the partial distances stored in the LUT. Simulation results confirm the effectiveness of the proposed algorithm. When the codebook size is 256, the computational complexity of the proposed algorithm can be reduced to as little as 70% of the operations required by recently proposed alternatives such as the ordered Hadamard transform partial distance search (OHTPDS) and the modified L2-norm pyramid (M-L2NP). With feasible preprocessing time and memory cost, the proposed algorithm reduces the computational complexity to below 2.2% of that required by the exhaustive full search (EFS) algorithm while preserving the same encoding quality as the EFS algorithm.
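
The multidimensional LUT construction is the paper's contribution and is not reproduced here. The sketch below shows only the underlying partial distance search (PDS) with early termination that such LUT-based methods accelerate; codebook size and dimensions are arbitrary.

```python
import numpy as np

def pds_encode(vector, codebook):
    """Plain partial-distance search (PDS), shown as a stand-in for the
    paper's LUT-driven multi-feature test: the squared distance to each
    codeword is accumulated dimension by dimension, and the codeword is
    rejected as soon as the running sum exceeds the best distance so far."""
    best_idx, best_dist = 0, np.sum((vector - codebook[0]) ** 2)
    for i in range(1, len(codebook)):
        d, rejected = 0.0, False
        for x, c in zip(vector, codebook[i]):
            d += (x - c) ** 2
            if d >= best_dist:          # early termination of this codeword
                rejected = True
                break
        if not rejected:
            best_idx, best_dist = i, d
    return best_idx

# toy usage on 4x4 image blocks flattened to 16-dimensional vectors
rng = np.random.default_rng(1)
codebook = rng.integers(0, 256, size=(256, 16)).astype(float)
block = rng.integers(0, 256, size=16).astype(float)
print(pds_encode(block, codebook))
```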

The Study about the Differential compression based on the ROI(Region Of Interest) (ROI(Region Of Interest)기반의 차등적 이미지 압축에 관한 연구)

  • Yun, Chi-Hwan;Ko, Sun-Woo;Lee, Geun-Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.3
    • /
    • pp.679-686
    • /
    • 2014
  • Recently, users can obtain countless images and videos over the network, so image and video compression technology is being researched more and more. In many cases, however, only part of an image is of interest: for instance, in an ATM surveillance environment the face region is more important than the background, so an image compression technology based on the region of interest (ROI) is necessary. In this research, considering that the human visual system is not sensitive to illumination variations in very dark and very bright regions of an image, we calculate the standard deviation of each block and use this value to define the ROI. In the encoding process, relatively high quality is assigned to the ROI and relatively low quality to the non-ROI, so the encoding follows the subjective image quality. Finally, the proposed scheme is applied to the JPEG standard. The experimental results demonstrate that the proposed scheme achieves better image quality at high compression ratios.
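
As a rough illustration of the block-classification step, the sketch below marks blocks as ROI with a plain standard-deviation threshold and maps the decision to a per-block quality value. The threshold, block size, and quality levels are assumptions, and the paper's additional handling of very dark and very bright regions is omitted.

```python
import numpy as np

def classify_roi(image, block=8, std_thresh=12.0):
    """Mark blocks whose luminance standard deviation exceeds std_thresh as
    regions of interest; nearly uniform blocks fall into the non-ROI class."""
    h, w = image.shape
    roi = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            blk = image[by*block:(by+1)*block, bx*block:(bx+1)*block]
            roi[by, bx] = blk.std() > std_thresh
    return roi

def per_block_quality(roi, q_roi=90, q_bg=40):
    """Map the ROI decision to a per-block JPEG-like quality setting."""
    return np.where(roi, q_roi, q_bg)

# toy usage on a random 64x64 luminance image
rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
print(per_block_quality(classify_roi(img)))
```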

A differential image quantizer based on wavelet for low bit rate video coding (저비트율 동영상 부호화에 적합한 웨이블릿 기반의 차영상 양자화기)

  • 주수경;유지상
    • Journal of Broadcast Engineering
    • /
    • v.8 no.4
    • /
    • pp.473-480
    • /
    • 2003
  • In this paper, we propose a new quadtree coding algorithm to improve the performance of the existing one. The new algorithm can process frames of any standard size and reduces encoding and decoding time by decreasing the computational load. It also improves image quality compared with previous quantizers based on quadtree and zerotree structures. So that the new algorithm can be applied to a real video codec, we analyze the statistical characteristics of the coefficients of the differential image and add a function that handles an arbitrary image size using a new technique, whereas the old algorithm processes only block units. We can also improve the image quality by scaling the coefficient values of the differential image. Comparing the performance of the new algorithm with quadtree coding and SPIHT shows that the PSNR is improved, although the computational load in encoding and decoding is not reduced.
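
The sketch below is a minimal variance-threshold quadtree quantizer applied directly to a frame-difference image; the paper's wavelet transform, coefficient statistics, and coefficient scaling are not reproduced, and the threshold and block sizes are assumptions.

```python
import numpy as np

def quadtree_quantize(diff, thresh=8.0, min_size=2):
    """Minimal quadtree quantizer for a square, power-of-two difference image:
    a block whose values are nearly uniform is replaced by its mean, otherwise
    it is split into four quadrants and processed recursively."""
    out = diff.astype(float).copy()

    def recurse(y, x, size):
        blk = out[y:y+size, x:x+size]
        if size <= min_size or blk.std() <= thresh:
            blk[:] = np.round(blk.mean())       # quantize the whole block
            return
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                recurse(y + dy, x + dx, half)

    recurse(0, 0, out.shape[0])
    return out

# toy usage: difference between two consecutive frames
rng = np.random.default_rng(3)
prev = rng.integers(0, 256, size=(32, 32)).astype(float)
curr = prev + rng.normal(0, 4, size=(32, 32))
print(quadtree_quantize(curr - prev)[:4, :4])
```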

An Analysis on Range Block Coherences for Fractal Compression (프랙탈 압축을 위한 레인지 블록간의 유사성 분석)

  • 김영봉
    • Journal of Korea Multimedia Society
    • /
    • v.2 no.4
    • /
    • pp.409-418
    • /
    • 1999
  • Fractal image compression is based on the self-similarity whereby some areas in an image exhibit shapes very similar to other areas. This compression technique offers a high compression ratio and fast decompression, but its encoding time is very long. To cut down the encoding time, most research has restricted the search for domain blocks for each range block. Such research has mainly focused on the coherence between a domain block and a range block, while the coherence among range blocks has not been well utilized. Therefore, we analyze the coherence among range blocks in order to develop an efficient fractal image compression algorithm. We analyze the range blocks according to both the measures used to define range-block coherence and the threshold of each measure. If these results are incorporated as a preliminary step in other fractal compression algorithms, they can greatly reduce the encoding time.
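
The paper evaluates several coherence measures and thresholds; the sketch below uses a single assumed measure (mean-removed MSE between adjacent range blocks) just to show how coherent range-block pairs could be detected so that one block can reuse its neighbor's domain match instead of running its own search.

```python
import numpy as np

def block_similarity(a, b):
    """Coherence measure (an assumption, not the paper's exact metric):
    mean-squared error between two range blocks after removing their means,
    so blocks differing only by a brightness offset still count as similar."""
    return float(np.mean(((a - a.mean()) - (b - b.mean())) ** 2))

def coherent_range_pairs(image, block=8, thresh=25.0):
    """List pairs of horizontally adjacent range blocks whose coherence is
    below thresh; such a block could reuse its neighbor's domain mapping."""
    h, w = image.shape
    pairs = []
    for by in range(h // block):
        for bx in range((w // block) - 1):
            a = image[by*block:(by+1)*block, bx*block:(bx+1)*block]
            b = image[by*block:(by+1)*block, (bx+1)*block:(bx+2)*block]
            if block_similarity(a, b) < thresh:
                pairs.append(((by, bx), (by, bx + 1)))
    return pairs

# toy usage: a smooth horizontal ramp, where every adjacent pair is coherent
img = np.tile(np.arange(32, dtype=float), (32, 1))
print(len(coherent_range_pairs(img)))
```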


A study on the Image Signal Compress using SOM with Isometry (Isometry가 적용된 SOM을 이용한 영상 신호 압축에 관한 연구)

  • Chang, Hae-Ju;Kim, Sang-Hee;Park, Won-Woo
    • Proceedings of the KIEE Conference
    • /
    • 2004.11c
    • /
    • pp.358-360
    • /
    • 2004
  • Digital images contain a significant amount of redundancy and require a large amount of data for storage and transmission; therefore, image compression is necessary to handle digital images efficiently. The goal of image compression is to reduce the number of bits required for their representation. Image compression can reduce the size of the image data using contractive mappings of the original image; among such methods, the mapping is an affine transformation that finds the block (called a range block) most similar to the original image block. In this paper, we apply a self-organizing map (SOM) neural network in the encoding stage. To improve compression performance, we aim to reduce similar and unnecessary entries in the codebook compared with the original blocks. In standard image coding, the affine transform is performed with the eight isometries used to approximate domain blocks to range blocks.
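
The SOM training itself is omitted here. The sketch below only shows the eight block isometries mentioned in the abstract and how the best one can be chosen for a given domain-range pair, which is the standard construction in fractal-style coding.

```python
import numpy as np

def isometries(block):
    """The eight standard block isometries used in fractal coding:
    four rotations and their mirror images (the dihedral group of the square)."""
    b = np.asarray(block)
    rots = [np.rot90(b, k) for k in range(4)]
    return rots + [np.fliplr(r) for r in rots]

def best_isometry(domain, range_block):
    """Pick the isometry of the (already contracted) domain block that best
    matches the range block in the mean-squared-error sense."""
    candidates = isometries(domain)
    errors = [np.mean((c - range_block) ** 2) for c in candidates]
    return int(np.argmin(errors)), min(errors)

# toy usage: the range block is a 180-degree-rotated copy of the domain block
dom = np.arange(16, dtype=float).reshape(4, 4)
rng_blk = np.rot90(dom, 2)
print(best_isometry(dom, rng_blk))       # -> (2, 0.0)
```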


Wavelet Transform Coding for Image Communication (영상 통신을 위한 웨이블릿 변환 부호화)

  • Kim, Yong-Yeon
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.6 no.1
    • /
    • pp.61-67
    • /
    • 2011
  • In this paper, a new method for effective video coding is studied. A picture-set filter is proposed to preserve both compression ratio and video quality. This filter controls the compression ratio of each frame depending on its correlation with the reference frame, by selectively eliminating less important high-resolution areas. Consequently, video quality can be preserved and the bit rate can be controlled adaptively. In the simulation, the performance of the proposed coding method is compared with the full-search block matching algorithm and with a differential image coding algorithm. Against the former, video quality, compression ratio, and encoding time are all improved; against the latter, video quality is degraded, but compression ratio and encoding time are improved. Consequently, the proposed method shows reasonably good performance compared with existing ones.
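
A rough sketch of the filtering idea using the PyWavelets package: when a frame correlates strongly with its reference, its finest wavelet detail subbands are dropped before coding. The Haar wavelet, the correlation threshold, and the choice of subbands to drop are all assumptions, not the paper's design.

```python
import numpy as np
import pywt

def adaptive_detail_filter(frame, reference, levels=2, corr_thresh=0.9):
    """If the frame is highly correlated with its reference, zero its finest
    wavelet detail subbands, trading a little quality on redundant frames
    for a lower bit rate; otherwise leave the coefficients untouched."""
    corr = np.corrcoef(frame.ravel(), reference.ravel())[0, 1]
    coeffs = pywt.wavedec2(frame, 'haar', level=levels)
    if corr > corr_thresh:
        # the last entry of the coefficient list holds the finest details
        coeffs[-1] = tuple(np.zeros_like(d) for d in coeffs[-1])
    return pywt.waverec2(coeffs, 'haar'), corr

# toy usage: the current frame is the reference plus small noise
rng = np.random.default_rng(5)
ref = rng.integers(0, 256, size=(64, 64)).astype(float)
cur = ref + rng.normal(0, 2, size=ref.shape)
filtered, corr = adaptive_detail_filter(cur, ref)
print(round(corr, 3), filtered.shape)
```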

Quantization Parameter Determination Method for Face Depth Image Encoding (깊이 얼굴 영상 부호화에서의 양자화 인자 결정 방법)

  • Park, Dong-Jin;Kwon, Soon-Kak
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.25 no.1
    • /
    • pp.13-23
    • /
    • 2020
  • In this paper, we propose a quantization parameter determination method for face depth image encoding that minimizes the impact on face recognition accuracy. When a face depth image is compressed through quantization in H.264/AVC, differential quantization parameters are assigned according to the accuracy of the ellipsoid-modeling prediction and the importance of each unit block for extracting facial features. Simulation results show that the face recognition success rate is improved by up to 6% at the same compression rate through the proposed method.
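
The paper's ellipsoid-modeling step and its exact QP rule are not reproduced. The sketch below only shows the general shape of such a scheme, mapping a hypothetical block-importance map to differential QP values around a base QP; the mapping and all constants are assumptions.

```python
import numpy as np

def assign_delta_qp(importance, base_qp=30, qp_range=6):
    """Hypothetical mapping: blocks judged more important for facial-feature
    extraction get a lower QP (finer quantization), less important blocks a
    higher QP, keeping the average offset around the base QP small."""
    imp = np.asarray(importance, dtype=float)
    imp = (imp - imp.min()) / (np.ptp(imp) + 1e-9)          # normalize to [0, 1]
    delta = np.round(qp_range * (0.5 - imp)).astype(int)    # important -> negative delta
    return base_qp + delta

# toy usage: an importance map for a 4x4 grid of blocks,
# with higher values near the modeled face's feature region
importance = np.array([[0, 1, 1, 0],
                       [1, 3, 3, 1],
                       [1, 3, 3, 1],
                       [0, 1, 1, 0]])
print(assign_delta_qp(importance))
```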

Enhancement of H.264/AVC Encoding Speed and Reduction of CPU Load through Parallel Programming Based on CUDA (CUDA 기반의 병렬 프로그래밍을 통한 H.264/AVC 부호화 속도 향상 및 CPU 부하 경감)

  • Jang, Eun-Been;Ha, Yun-Su
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.34 no.6
    • /
    • pp.858-863
    • /
    • 2010
  • In order to enhance the encoding speed of video encoding with H.264/AVC, it is very important to reduce the time spent on motion estimation, which takes a large portion of the total processing time. Using a graphics processing unit (GPU) as a coprocessor to assist the central processing unit (CPU) in processing massive data is one way to reduce the processing time. In this paper, we present an efficient block-level parallel algorithm for motion estimation (ME) on the Compute Unified Device Architecture (CUDA) platform developed for general-purpose computation on GPUs. Experiments are carried out to verify the effectiveness of the proposed algorithm.
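
The sketch below is a CPU NumPy reference of block-level full-search motion estimation. Each macroblock's search is independent of the others, which is the property a block-level CUDA algorithm exploits by assigning one thread block per macroblock; the CUDA kernel itself is not reproduced, and block and search-range sizes are assumptions.

```python
import numpy as np

def full_search_me(cur, ref, block=16, search=8):
    """SAD-based full-search motion estimation: for each block-sized macroblock
    of the current frame, scan a +/-search window in the reference frame and
    return the displacement with the smallest sum of absolute differences."""
    h, w = cur.shape
    mvs = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y0, x0 = by * block, bx * block
            blk = cur[y0:y0+block, x0:x0+block]
            best, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = y0 + dy, x0 + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        sad = np.abs(blk - ref[y:y+block, x:x+block]).sum()
                        if sad < best:
                            best, best_mv = sad, (dy, dx)
            mvs[by, bx] = best_mv
    return mvs

# toy usage: the reference frame is the current frame shifted by (2, -3)
rng = np.random.default_rng(6)
cur = rng.integers(0, 256, size=(64, 64)).astype(float)
ref = np.roll(cur, shift=(2, -3), axis=(0, 1))
print(full_search_me(cur, ref)[1, 1])    # interior block recovers (2, -3)
```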