• Title/Summary/Keyword: compression coding


Efficient Multispectral Image Compression Using Variable Block Size Vector Quantization (가변 블럭 벡터 양자화를 이용한 효율적인 다분광 화상 데이터 압축)

  • Ban, Seong-Won;Kim, Byeong-Ju;Seok, Jeong-Yeop;Gwon, Seong-Geun;Gwon, Gi-Gu;Kim, Yeong-Chun;Lee, Geon-Il
    • Journal of the Institute of Electronics Engineers of Korea SP / v.38 no.6 / pp.703-711 / 2001
  • In this paper, we propose an efficient multispectral image compression method using variable block size vector quantization (VQ). In the wavelet domain, we perform variable block size VQ to remove intraband redundancy in a reference band image, chosen as the band with the lowest spatial variance and the best correlation with the other bands. For the remaining bands, we perform classified interband prediction in the wavelet domain to remove interband redundancy. The error wavelet coefficients between the original and predicted images are then vector quantized with residual variable block sizes to reduce the prediction error. Experiments on remotely sensed satellite images show that the coding efficiency of the proposed method is better than that of conventional methods.
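The core VQ step underlying this approach (a toy illustration only; the paper's variable block sizes, wavelet transform, and interband prediction are omitted) can be sketched as:

```python
import numpy as np

def vq_encode(blocks, codebook):
    """Map each flattened block to the index of its nearest codeword."""
    # Squared Euclidean distance from every block to every codeword.
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def vq_decode(indices, codebook):
    """Reconstruct blocks by codebook lookup."""
    return codebook[indices]

# Toy data: four 2x2 "wavelet coefficient" blocks flattened to 4-vectors,
# and a hypothetical 2-codeword codebook.
blocks = np.array([[0, 0, 0, 0],
                   [9, 9, 9, 9],
                   [1, 0, 1, 0],
                   [8, 9, 8, 9]], dtype=float)
codebook = np.array([[0.5, 0.0, 0.5, 0.0],
                     [8.5, 9.0, 8.5, 9.0]])
idx = vq_encode(blocks, codebook)
recon = vq_decode(idx, codebook)
```

Only the index stream need be transmitted; the decoder reconstructs each block from the shared codebook.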


ECG Signal Compression using Feature Points based on Curvature (곡률을 이용한 특징점 기반 심전도 신호 압축)

  • Kim, Tae-Hun;Kim, Sung-Wan;Ryu, Chun-Ha;Yun, Byoung-Ju;Kim, Jeong-Hong;Choi, Byung-Jae;Park, Kil-Houm
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.5 / pp.624-630 / 2010
  • As electrocardiogram (ECG) signals are generally sampled at a frequency of over 200 Hz, a method to compress diagnostic information without losing data is required to store and transmit them efficiently. In this paper, an ECG signal compression method using feature points based on curvature is proposed. The feature points of the P, Q, R, S, and T waves, which are critical components of the ECG signal, have large curvature values compared to other vertexes. These vertexes are therefore extracted with the proposed method, which uses local extrema of curvature. Furthermore, in order to minimize reconstruction errors of the ECG signal, extra vertexes are added according to an iterative vertex selection method. Experimental results on ECG signals from the MIT-BIH Arrhythmia database show that the vertexes selected by the proposed method preserve all feature points of the ECG signal, and that the method is more efficient than AZTEC (Amplitude Zone Time Epoch Coding).
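The curvature-based feature point selection can be sketched as follows (a minimal illustration on a toy triangular pulse, not the authors' implementation; the threshold value is an assumption):

```python
import numpy as np

def curvature(y, dx=1.0):
    """Discrete curvature of a sampled signal: |y''| / (1 + y'^2)^(3/2)."""
    dy = np.gradient(y, dx)
    d2y = np.gradient(dy, dx)
    return np.abs(d2y) / (1.0 + dy ** 2) ** 1.5

def feature_points(y, thresh=0.1):
    """Indices where the curvature has a local maximum above a threshold."""
    k = curvature(y)
    return [i for i in range(1, len(y) - 1)
            if k[i] > thresh and k[i] >= k[i - 1] and k[i] >= k[i + 1]]

# A triangular "R-wave"-like bump: curvature peaks at the sharp apex and
# at the two corners where the slope changes.
y = np.array([0, 0, 0, 1, 2, 3, 2, 1, 0, 0, 0], dtype=float)
pts = feature_points(y)
```

Only the selected vertexes (index, amplitude pairs) are stored; the signal between them is reconstructed by interpolation.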

An Adaptive De-blocking Algorithm in Low Bit-rate Video Coding (저 비트율 비디오를 위한 적응적 블록킹 현상 제거 기법)

  • 김종호;김해욱;정제창
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.4C / pp.505-513 / 2004
  • Most video codecs, including the international standards, use a block-based hybrid structure for efficient compression. However, for low bit-rate applications such as video transmission over wireless channels, blocking artifacts seriously degrade image quality. In this paper, we propose an adaptive de-blocking algorithm that uses the characteristics of the block boundaries. Blocking artifacts contain high-frequency components near the block boundaries, so lowpass filtering can remove them. However, simple lowpass filtering results in blurring by removing important information such as edges. To overcome this problem, we determine filtering modes depending on the characteristics of the pixels adjacent to each block boundary, and then apply a proper filter to each area. Simulation results show that the proposed method improves de-blocking performance compared to that of MPEG-4.
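The mode decision the abstract describes, deciding per boundary whether a step is an artifact or real image content, can be sketched in one dimension (a simplified illustration; the block size, filter taps, and edge threshold here are assumptions, not the paper's values):

```python
import numpy as np

def deblock_1d(row, block=4, edge_thresh=8):
    """Smooth pixel pairs across block boundaries unless a real edge is detected.

    A large step across the boundary is treated as image content (an edge)
    and left untouched; a small step is assumed to be a blocking artifact
    and is low-pass filtered.
    """
    out = row.astype(float).copy()
    for b in range(block, len(row), block):
        p, q = out[b - 1], out[b]        # pixels on either side of the boundary
        if abs(p - q) < edge_thresh:     # artifact mode: filter across the boundary
            out[b - 1] = (3 * p + q) / 4
            out[b] = (p + 3 * q) / 4
    return out

# The 10|14 step (artifact) is smoothed; the 14|90 step (edge) is preserved.
row = np.array([10, 10, 10, 10, 14, 14, 14, 14, 90, 90, 90, 90])
smoothed = deblock_1d(row)
```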

Strain elastography of tongue carcinoma using intraoral ultrasonography: A preliminary study to characterize normal tissues and lesions

  • Ogura, Ichiro;Sasaki, Yoshihiko;Sue, Mikiko;Oda, Takaaki
    • Imaging Science in Dentistry / v.48 no.1 / pp.45-49 / 2018
  • Purpose: The aim of this study was to evaluate the quantitative strain elastography of tongue carcinoma using intraoral ultrasonography. Materials and Methods: Two patients with squamous cell carcinoma (SCC) who underwent quantitative strain elastography for the diagnosis of tongue lesions using intraoral ultrasonography were included in this prospective study. Strain elastography was performed using a linear 14 MHz transducer (Aplio 300; Canon Medical Systems, Otawara, Japan). Manual light compression and decompression of the tongue by the transducer were performed to achieve optimal and consistent color coding. The variation in tissue strain over time caused by the compression exerted using the probe was displayed as a strain graph. The integrated strain elastography software allowed the operator to place circular regions of interest (ROIs) of various diameters within the elastography window, and automatically displayed the quantitative strain (%) for each ROI. Quantitative indices of the strain (%) were measured for normal tissues and lesions in the tongue. Results: The average strain of normal tissue and tongue SCC in a 50-year-old man was 1.468% and 0.000%, respectively. The average strain of normal tissue and tongue SCC in a 59-year-old man was 1.007% and 0.000%, respectively. Conclusion: We investigated the quantitative strain elastography of tongue carcinoma using intraoral ultrasonography. Strain elastography using intraoral ultrasonography is a promising technique for characterizing and differentiating normal tissues and SCC in the tongue.

Adaptive Intra Fast Algorithm of H.264 for Video Surveillance (보안 영상 시스템에 적합한 H.264의 적응적 인트라 고속 알고리즘)

  • Jang, Ki-Young;Kim, Eung-Tae
    • The Journal of Korean Institute of Communications and Information Sciences / v.33 no.12C / pp.1055-1061 / 2008
  • H.264 is the prominent video coding standard for various applications such as real-time streaming and digital multimedia broadcasting, since it provides enhanced compression performance, error resilience tools, and network adaptation. Although its compression efficiency has improved, H.264 requires more computation and memory access than traditional methods. In this paper, we propose an adaptive intra fast algorithm for real-time video surveillance systems that reduces the encoding complexity of H.264/AVC. To this end, the temporal relationship between macroblocks in the previous and current frames is used to decide the encoding mode of a macroblock quickly. As a result, although video quality deteriorates slightly (by less than 0.04 dB) and the bit rate increases somewhat, the proposed method reduces encoding time significantly; in particular, the encoding time of images with few changes in the background, such as surveillance video, is shortened more than with traditional methods.
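The fast path the abstract describes, skipping the full mode search when the co-located macroblock has barely changed, can be sketched as follows (a simplified illustration; the SAD threshold and the toy cost function are assumptions, not the paper's values):

```python
import numpy as np

def fast_intra_mode(cur_mb, prev_mb, prev_mode, candidate_modes, cost,
                    sad_thresh=64):
    """Reuse the co-located macroblock's mode when the frame barely changed.

    If the SAD between the current and previous co-located macroblock is
    below a threshold (typical for static surveillance backgrounds), skip
    the full mode search and keep the previous mode; otherwise evaluate
    every candidate mode with the supplied cost function.
    """
    sad = np.abs(cur_mb.astype(int) - prev_mb.astype(int)).sum()
    if sad < sad_thresh:
        return prev_mode                 # fast path: static region
    return min(candidate_modes, key=lambda m: cost(cur_mb, m))

# Hypothetical cost: pretend mode 1 predicts this macroblock best.
toy_cost = lambda mb, m: abs(m - 1)
static = np.zeros((16, 16), dtype=np.uint8)
moved = static + 10
mode_static = fast_intra_mode(static, static, prev_mode=5,
                              candidate_modes=[0, 1, 2], cost=toy_cost)
mode_moved = fast_intra_mode(moved, static, prev_mode=5,
                             candidate_modes=[0, 1, 2], cost=toy_cost)
```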

Adaptive Macroblock Quantization Method for H.264 Codec (H.264 코덱을 위한 적응적 매크로블록 양자화 방법)

  • Park, Sang-Hyun
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.5 / pp.1193-1200 / 2010
  • This paper presents a new adaptive macroblock quantization algorithm that generates output bits corresponding to the target bit budget. The H.264 standard uses various coding modes and optimization methods to improve compression performance, which makes it difficult to control the amount of generated traffic accurately. In the proposed scheme, linear regression analysis is used to analyze the relationship between the bit rate of each macroblock and the quantization parameter, and to predict the MAD values. Using the predicted values, the quantization parameter of each macroblock is determined by the Lagrange multiplier method and then modified according to the difference between the bit budget and the generated bits. Experimental results show that the new algorithm generates output bits that accurately match the target bit rates.
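Two of the building blocks, the linear MAD predictor and the budget-driven QP adjustment, can be sketched as follows (a toy sketch only; the Lagrangian mode-cost optimization itself is omitted, and the adjustment step size is an assumption):

```python
import numpy as np

def fit_mad_predictor(prev_mads, cur_mads):
    """Least-squares fit of MAD[i] ~ a * MAD_prev[i] + b, the usual linear
    MAD prediction form in H.264 rate control."""
    a, b = np.polyfit(prev_mads, cur_mads, 1)
    return a, b

def adjust_qp(qp, produced_bits, budget_bits, step=2, qp_min=0, qp_max=51):
    """Nudge QP toward the target: overspending raises QP (coarser
    quantization, fewer bits), underspending lowers it."""
    if produced_bits > budget_bits:
        qp += step
    elif produced_bits < budget_bits:
        qp -= step
    return max(qp_min, min(qp_max, qp))

# Exactly linear toy data: the fit recovers a = 2, b = 0.
a, b = fit_mad_predictor(np.array([1.0, 2.0, 3.0]), np.array([2.0, 4.0, 6.0]))
```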

Stereo image compression based on error concealment for 3D television (3차원 텔레비전을 위한 에러 은닉 기반 스테레오 영상 압축)

  • Bak, Sungchul;Sim, Donggyu;Namkung, Jae-Chan;Oh, Seoung-jun
    • Journal of Broadcast Engineering / v.10 no.3 / pp.286-296 / 2005
  • This paper presents a stereo-based image compression and transmission system for 3D realistic television. In the proposed system, a disparity map is extracted from an input stereo image pair, and the extracted disparity map and one of the two input images are transmitted or stored at a local or remote site. However, correspondences cannot be determined in occlusion areas, so it is not easy to recover 3D information in such regions. In this paper, a reconstruction compensation algorithm based on error block concealment and in-loop filtering is proposed to minimize the reconstruction error in generating the stereo image pair. The effectiveness of the proposed algorithm is shown in terms of the objective accuracy of the reconstructed image on several real stereo image pairs.
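The basic idea, warping one view by the disparity map and concealing the occlusion holes that have no correspondence, can be sketched on a one-row toy image (an illustration only; the paper's block-based concealment and in-loop filtering are replaced here by a naive left-neighbor fill):

```python
import numpy as np

def reconstruct_view(left, disparity, hole=-1):
    """Warp the left image by a per-pixel horizontal disparity; pixels with
    no correspondence (occlusions) are marked as holes."""
    h, w = left.shape
    out = np.full((h, w), hole, dtype=float)
    for y in range(h):
        for x in range(w):
            xr = x - disparity[y, x]
            if 0 <= xr < w:
                out[y, xr] = left[y, x]
    return out

def conceal_holes(img, hole=-1):
    """Naive concealment: fill each hole with the nearest valid pixel to its left."""
    out = img.copy()
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            if out[y, x] == hole and x > 0:
                out[y, x] = out[y, x - 1]
    return out

left = np.array([[1.0, 2.0, 3.0, 4.0]])
disp = np.ones((1, 4), dtype=int)       # uniform disparity of 1 pixel
warped = reconstruct_view(left, disp)   # rightmost pixel becomes a hole
filled = conceal_holes(warped)
```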

The Efficient Error Resilient Entropy Coding for Robust Transmission of Compressed Images (압축 영상의 강건한 전송을 위한 효과적인 에러 내성 엔트로피 부호화)

  • Cho, Seong-Hwan;Kim, Eung-Sung;Kim, Jeong-Sig
    • Journal of the Korea Academia-Industrial cooperation Society / v.7 no.2 / pp.206-212 / 2006
  • Many image and video compression algorithms work by splitting the input image into blocks and producing variable-length coded bits for each block. If variable-length coded data are transmitted consecutively, the resulting coder is highly sensitive to channel errors. Therefore, most image and video techniques for protecting the stream against channel errors involve adding a controlled amount of redundancy back into the stream. Such redundancy might take the form of resynchronization markers, which enable the decoder to restart the decoding process from a known state in the event of transmission errors. The Error Resilient Entropy Code (EREC) is a well-known method that can regain synchronization without any redundant information, by converting variable-length codes into fixed-length slots. This paper proposes an enhancement to EREC, which greatly improves the quality of transmitted compressed images in the event of errors, without any redundant bits. Simulation results show that both the objective and subjective quality of the transmitted image are enhanced compared with the existing EREC at the same bit error rate (BER).
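The EREC packing idea can be sketched as follows (a simplified version of the classic algorithm: each variable-length block first fills its own fixed-length slot, and later passes place leftover bits into the free space of other slots at a known offset, so the decoder can resynchronize at every slot boundary):

```python
def erec_pack(blocks, slot_len):
    """Simplified EREC: pack N variable-length bit strings into N
    fixed-length slots (total bits must fit in N * slot_len)."""
    n = len(blocks)
    slots = [""] * n
    leftover = list(blocks)
    for offset in range(n):            # pass 0: own slot; later passes: offsets
        for i in range(n):
            j = (i + offset) % n
            space = slot_len - len(slots[j])
            if space > 0 and leftover[i]:
                take = leftover[i][:space]
                slots[j] += take
                leftover[i] = leftover[i][len(take):]
    return slots

blocks = ["101010", "11", "0", "0011"]   # variable-length coded blocks
slots = erec_pack(blocks, 4)             # 13 bits into 4 slots of 4 bits
```

Block 0 overflows its slot by two bits; those bits land in the free space of slot 1, which the decoder can locate deterministically from the same offset schedule.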


A Fast Fractal Image Compression Using The Normalized Variance (정규화된 분산을 이용한 프랙탈 압축방법)

  • Kim, Jong-Koo;Hamn, Do-Yong;Wee, Young-Cheul;Kimn, Ha-Jine
    • The KIPS Transactions: Part A / v.8A no.4 / pp.499-502 / 2001
  • Fractal image coding provides many desirable properties, including a high compression ratio, but suffers from the long search time over the domain pool. We find that the normalized variance of a block is independent of contrast and brightness. Using this observation, we introduce a self-similar block searching method employing d-dimensional nearest neighbor searching. This method takes O(log N) time to search for the self-similar domain block of each range block, where N is the number of domain blocks. The PSNR (peak signal-to-noise ratio) of this method is similar to that of the full search method, which requires O(N) time for each range block. Moreover, the image quality of this method is independent of the number of edges in the image.
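The key observation, a per-block feature that is invariant to the contrast and brightness parameters of the fractal affine map, can be sketched as follows (a toy illustration with a brute-force search; the abstract's O(log N) bound comes from running the same search with a k-d tree over these features):

```python
import numpy as np

def affine_invariant_feature(block):
    """Normalize a block so that s * block + o (any contrast s > 0 and
    brightness o) maps to the same feature vector: subtract the mean,
    then scale to unit norm."""
    v = block.astype(float).ravel()
    v = v - v.mean()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def best_domain(range_block, domain_blocks):
    """Brute-force nearest neighbor in feature space (replace with a k-d
    tree over the same features for logarithmic search time)."""
    f = affine_invariant_feature(range_block)
    dists = [np.linalg.norm(f - affine_invariant_feature(d))
             for d in domain_blocks]
    return int(np.argmin(dists))

d0 = np.array([[0, 1], [2, 3]])
d1 = np.array([[5, 5], [5, 0]])
r = 2 * d0 + 10          # d0 under a contrast/brightness change
idx = best_domain(r, [d0, d1])
```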


Detection of Frame Deletion Using Convolutional Neural Network (CNN 기반 동영상의 프레임 삭제 검출 기법)

  • Hong, Jin Hyung;Yang, Yoonmo;Oh, Byung Tae
    • Journal of Broadcast Engineering / v.23 no.6 / pp.886-895 / 2018
  • In this paper, we introduce a technique to detect video forgery using the regularities that arise in the video compression process. The proposed method uses the hierarchical regularity that is lost through double compression and frame deletion. To extract such irregularities, the depth information of the CU and TU, the basic units of HEVC, is used. To improve performance, we build depth maps of the CU and TU using local information and then create the input data by grouping them in GoP units. Whether the video is double-compressed and forged is decided using a general three-dimensional convolutional neural network. Experimental results show that the proposed method detects forged video more effectively than existing machine learning algorithms.
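The input-preparation step, stacking per-frame CU/TU depth maps into fixed-size GoP tensors for the 3-D CNN, can be sketched as follows (a toy data-layout illustration only; the network itself and the map sizes are not from the paper):

```python
import numpy as np

def gop_input(depth_maps, gop_size):
    """Group per-frame depth maps into tensors of shape (gop_size, H, W),
    one per GoP, which is the natural input layout for a 3-D CNN.
    Trailing frames that do not fill a whole GoP are dropped."""
    n = len(depth_maps) // gop_size
    maps = np.stack(depth_maps[:n * gop_size])
    return maps.reshape(n, gop_size, *maps.shape[1:])

# Ten toy 4x4 depth maps, GoP size 4 -> two GoP tensors (two frames dropped).
depth_maps = [np.full((4, 4), i % 4) for i in range(10)]
batches = gop_input(depth_maps, 4)
```

Each GoP tensor then becomes one training or inference sample for the classifier.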