• Title/Summary/Keyword: Fast Image Compression


Medical Image CODEC Hardware Design based on MISD architecture (MISD 구조에 의한 의료 영상 CODEC의 하드웨어 설계)

  • Park, Sung-Wook;Yoo, Sun-Kook;Kim, Sun-Ho;Kim, Nam-Hyeon;Youn, Dae-Hee
    • Proceedings of the KOSOMBE Conference / v.1994 no.12 / pp.92-95 / 1994
  • As computer systems that ease medical practice come into wide use, special hardware that processes medical data fast becomes increasingly important. To meet the urgent demand for high-speed image processing, especially image compression and decompression, we designed and implemented a medical image CODEC (COder/DECoder) based on the MISD (Multiple Instruction Single Data stream) architecture to exploit parallelism. Since there is no standard scheme for medical image compression/decompression, the CODEC is designed to be programmable and general. In this paper, we use the JPEG (Joint Photographic Experts Group) algorithm to process images fast and evaluate the CODEC.
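
The CODEC itself is hardware, but the abstract names baseline JPEG as the algorithm it runs. As a point of reference, here is a minimal software sketch of JPEG's core step, the 8x8 block DCT followed by quantization with the standard luminance table; the function names are illustrative and nothing here reflects the paper's actual design:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (the transform JPEG applies per 8x8 block)."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

# Standard JPEG luminance quantization table (Annex K of the JPEG spec).
Q_LUMA = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]])

def encode_block(block):
    """Level-shift, 2-D DCT, then quantize one 8x8 block; entropy coding omitted."""
    C = dct_matrix()
    coeffs = C @ (block.astype(np.float64) - 128.0) @ C.T
    return np.round(coeffs / Q_LUMA).astype(np.int32)

block = np.random.randint(0, 256, (8, 8))
print(encode_block(block))
```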


Estimation of an initial image for fast fractal decoding (고속 프랙탈 영상 복원을 위한 초기 영상 추정)

  • 문용호;박태희;백광렬;김재호
    • The Journal of Korean Institute of Communications and Information Sciences / v.22 no.2 / pp.325-333 / 1997
  • In the fractal decoding procedure, the reconstructed image is obtained by iteratively applying the contractive transform to an arbitrary initial image. This is unsuitable for fast decoding, however, because the convergence speed depends on the choice of initial image, so an initial image that enables fast decoding should be selected. In this paper, we propose an initial image estimation that can be applied to various decoding methods. An initial image similar to the original is estimated using only the compressed data, so the proposed method does not affect the compression ratio. In simulations with the Barbara image, the PSNR of the proposed initial image is 6 dB higher than that of the once-iterated output image of conventional decoding, and the number of additions and multiplications is reduced by about 96%. Moreover, applying the proposed initial image to other decoding algorithms should likewise yield faster convergence.
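
The decoding loop the abstract refers to applies a contractive transform repeatedly, so the iteration count depends directly on how close the starting image already is to the fixed point. A toy sketch of that dependence, assuming a stand-in contractive map rather than an actual fractal code (all names and constants are illustrative):

```python
import numpy as np

def contractive_map(img, scale=0.6, offset=80.0):
    """Toy stand-in for a fractal transform: spatially contract each 2x2 area
    to a pixel, re-expand, then apply scaling + offset. Because scale < 1,
    iteration converges to a unique fixed point from any starting image."""
    small = 0.25 * (img[0::2, 0::2] + img[0::2, 1::2] +
                    img[1::2, 0::2] + img[1::2, 1::2])
    return scale * np.kron(small, np.ones((2, 2))) + offset

def decode(initial, n_max=100, tol=0.5):
    """Iterate the map until successive images differ by less than tol."""
    img, iters = initial.astype(np.float64), 0
    for iters in range(1, n_max + 1):
        nxt = contractive_map(img)
        if np.abs(nxt - img).max() < tol:
            return nxt, iters
        img = nxt
    return img, iters

flat = np.zeros((64, 64))                         # arbitrary start (all black)
_, n_flat = decode(flat)
near = decode(flat)[0] + np.random.randn(64, 64)  # start near the fixed point
_, n_near = decode(near)
print(f"arbitrary start: {n_flat} iterations, good estimate: {n_near} iterations")
```

A starting image near the attractor converges in a handful of iterations where the arbitrary start needs several times more, which is exactly the saving the proposed estimator targets.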


Evaluation of the Image Backtrack-Based Fast Direct Mode Decision Algorithm

  • Choi, Yungho;Park, Neungsoo
    • Journal of Information Processing Systems / v.8 no.4 / pp.685-692 / 2012
  • B-frame bi-directional prediction and DIRECT mode coding in the H.264 video compression standard require a complex mode decision process, resulting in long computation times. To make H.264 practical, this paper proposes an image backtrack-based fast DIRECT mode decision (IBFD) algorithm and evaluates the performance of two promising fast algorithms, the AFDM and IBFD. Evaluation results show that IBFD determines DIRECT mode macroblocks with 13% higher accuracy than the AFDM. Furthermore, IBFD reduces the motion estimation time of B frames by up to 23% with negligible quality degradation.
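
The abstract does not describe how IBFD backtracks through images, so the following is only a generic illustration of the early-termination idea behind fast DIRECT mode decision: evaluate the cheap DIRECT prediction first and skip the remaining mode search when its cost is already low. All names and the threshold are hypothetical, not the paper's method:

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences, the usual block-matching cost."""
    return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def choose_mode(mb, direct_pred, other_preds, threshold=800):
    """Hypothetical early decision: if the DIRECT-mode prediction is already
    cheap, commit to it and skip the expensive search over the other modes."""
    direct_cost = sad(mb, direct_pred)
    if direct_cost < threshold:
        return "DIRECT", direct_cost
    costs = {name: sad(mb, p) for name, p in other_preds.items()}
    costs["DIRECT"] = direct_cost
    best = min(costs, key=costs.get)
    return best, costs[best]

mb = np.random.randint(0, 256, (16, 16))
mode, cost = choose_mode(mb, mb + 1, {"16x16": np.roll(mb, 3), "8x8": np.flipud(mb)})
print(mode, cost)
```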

Study on the Similarity Functions for Image Compression (영상 압축을 위한 유사성 함수 연구)

  • Joo, Woo-Seok;Kang, Jong-Oh
    • The Transactions of the Korea Information Processing Society / v.4 no.8 / pp.2133-2142 / 1997
  • Compared with earlier compression methods, fractal image compression drastically increases the compression rate by using block-based encoding. Although decompression can be done in real time even in software, the most serious obstacle to using the fractal method is the time required for encoding. In this paper, we propose and verify i) an algorithm that reduces the encoding time by cutting the number of similarity searches on the basis of dimensional information, and ii) an algorithm that enhances the quality of the restored image on the basis of brightness and contrast information. Combining the two yields a method that enables fast compression with little quality degradation.
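
A sketch of the two ideas as stated in the abstract, assuming block variance as the "dimensional information" used for pruning and a least-squares fit for the brightness/contrast model; the paper's actual classifier and fit may differ:

```python
import numpy as np

def block_class(block, n_classes=8):
    """Classify a block by quantized variance, so a range block is compared
    only against domain blocks in the same class (the search-pruning idea)."""
    return min(int(block.var() // 200), n_classes - 1)

def fit_contrast_brightness(domain, rng):
    """Least-squares fit rng ~ s*domain + o: s is contrast, o is brightness."""
    d, r = domain.ravel().astype(np.float64), rng.ravel().astype(np.float64)
    s = np.cov(d, r, bias=True)[0, 1] / (d.var() + 1e-12)
    o = r.mean() - s * d.mean()
    return s, o

def best_match(rng, domains):
    """Search only same-class domain blocks and keep the lowest-error fit."""
    cls = block_class(rng)
    best, best_err = None, np.inf
    for i, dom in enumerate(domains):
        if block_class(dom) != cls:       # cheap rejection of unlike blocks
            continue
        s, o = fit_contrast_brightness(dom, rng)
        err = ((s * dom + o - rng) ** 2).mean()
        if err < best_err:
            best, best_err = (i, s, o), err
    return best, best_err

domains = [np.random.randint(0, 256, (8, 8)) for _ in range(100)]
rng_block = np.random.randint(0, 256, (8, 8))
print(best_match(rng_block, domains))
```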


Multiresolution Wavelet-Based Disparity Estimation for Stereo Image Compression

  • Tengcharoen, Chompoonuch;Varakulsiripunth, Ruttikorn
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 2004.08a / pp.1098-1101 / 2004
  • An ordinary stereo image of an object consists of left-view and right-view data, and the left and right image pairs must be transmitted simultaneously to display 3-dimensional video at the remote site. Because this doubles the data relative to a monoscopic image of the same object, the stereo image must be compressed for fast transmission and resource saving, which calls for an effective coding algorithm. Previous work found that compressing the left and right frames independently achieves a lower compression ratio than exploiting the spatial redundancy between the two frames. Therefore, in this paper, we study a stereo image compression technique based on the multiresolution wavelet transform that varies the disparity-block size used for estimation and compensation. The disparity-block size in the stereo-pair subbands is scaled following a coarse-to-fine strategy over the wavelet coefficients. Finally, the reference left image and the residual right image remaining after disparity estimation and compensation are coded using SPIHT coding. The method demonstrates good performance for stereo images in both PSNR and visual quality.
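
The coarse-to-fine idea can be illustrated without a full wavelet codec: search widely at low resolution, then refine the doubled estimate in a small window at full resolution. The sketch below uses plain 2x2 averaging as a stand-in for the wavelet LL band; all names and window sizes are illustrative:

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 averaging (stand-in for the wavelet LL band)."""
    return 0.25 * (img[0::2, 0::2] + img[0::2, 1::2] +
                   img[1::2, 0::2] + img[1::2, 1::2])

def search_disparity(left, right, row, col, size, center, radius):
    """1-D horizontal block search around `center`, as in stereo matching."""
    ref = left[row:row + size, col:col + size]
    best_d, best_cost = center, np.inf
    for d in range(center - radius, center + radius + 1):
        c = col - d
        if c < 0 or c + size > right.shape[1]:
            continue
        cost = np.abs(ref - right[row:row + size, c:c + size]).sum()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

def coarse_to_fine(left, right, row, col, size=16):
    # Coarse level: wide but cheap search at half resolution.
    d_coarse = search_disparity(downsample(left), downsample(right),
                                row // 2, col // 2, size // 2,
                                center=0, radius=16)
    # Fine level: refine only around the doubled coarse estimate.
    return search_disparity(left, right, row, col, size,
                            center=2 * d_coarse, radius=2)

left = np.random.rand(64, 128)
right = np.roll(left, -6, axis=1)   # synthetic 6-pixel disparity
print(coarse_to_fine(left, right, 16, 64))
```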


Optimum Image Compression Rate Maintaining Diagnostic Image Quality of Digital Intraoral Radiographs

  • Song Ju-Seop;Koh Kwang-Joon
    • Imaging Science in Dentistry / v.30 no.4 / pp.265-274 / 2000
  • Purpose: The aims of the present study were to determine the optimum compression rate in terms of file size reduction and diagnostic quality of the images after compression, and to evaluate the transmission speed of the original and each compressed image. Materials and Methods: The material consisted of 24 extracted human premolars and molars. The occlusal and proximal surfaces of the teeth exhibited a clinical disease spectrum that ranged from sound to varying degrees of fissure discoloration and cavitation. Images from the Digora system were exported in TIFF, and images from conventional intraoral film were scanned and digitized into TIFF with a Nikon SF-200 scanner (Nikon, Japan). Six compression factors were chosen and applied on the basis of a pilot study, giving a total of 336 images to be assessed. Three radiologists assessed the occlusal and proximal surfaces of the teeth on a 5-rank scale, and each surface was finally diagnosed as either sound or carious by one expert oral pathologist. Sensitivity, specificity, and the kappa value for diagnostic agreement were calculated; the areas (Az) under the ROC curves were computed; and paired t-tests and one-way ANOVA were performed. Thereafter, the transmission time of the image files at each compression level was compared with that of the original image files. Results: No significant difference was found between the original images and the corresponding compressed images up to a 7% (1:14) compression ratio for both occlusal and proximal caries (p<0.05). JPEG3 (1:14) image files were transmitted more than 10 times faster than the original image files while maintaining the diagnostic information in the image. Conclusion: A 1:14 compressed image file may be used in place of the original image, reducing storage needs and transmission time.
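
The reported figures, a 1:14 ratio and a more-than-tenfold transmission speedup, follow directly from the file sizes; a sketch of that arithmetic with placeholder numbers (not the study's data):

```python
def compression_stats(original_bytes, compressed_bytes, bandwidth_bps):
    """Compression ratio and transmission-time estimates for one image file."""
    ratio = original_bytes / compressed_bytes
    t_orig = original_bytes * 8 / bandwidth_bps
    t_comp = compressed_bytes * 8 / bandwidth_bps
    return ratio, t_orig, t_comp

# Placeholder numbers: a ~700 KB TIFF compressed 1:14, sent over a
# 128 kbit/s link (plausible for the era, not taken from the study).
ratio, t_orig, t_comp = compression_stats(716_800, 51_200, 128_000)
print(f"1:{ratio:.0f} compression, {t_orig:.1f}s -> {t_comp:.1f}s "
      f"({t_orig / t_comp:.0f}x faster)")
```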


Fast Ultrasound Image Compression Based on Characteristics of Ultrasound Images (초음파 영상특성에 기반한 고속 초음파 영상압축)

  • Kim, S.H.
    • Proceedings of the KOSOMBE Conference / v.1998 no.11 / pp.70-71 / 1998
  • In this paper, we propose a fast ultrasound image compression method based on the characteristics of ultrasound images. In the proposed method, the wavelet transform is performed selectively on non-zero coefficients. Zero-tree symbols are coded with a conditional pdf (probability density function) that depends on the orientation of each band. The wavelet coefficients are normalized by the threshold of each wavelet band and encoded with a uniform quantizer. Experimental results show that the proposed method is superior in PSNR to LuraTech's method by about 1.0 dB and to JPEG by about 5.0 dB for a $640\times480$ 24-bit color ultrasound image.
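
A sketch of the per-band normalization and uniform quantization step described in the abstract, using a one-level Haar split as a minimal stand-in for the paper's wavelet transform; the zero-tree and conditional-pdf coding stages are omitted, and the threshold rule is an assumption:

```python
import numpy as np

def haar_bands(img):
    """One-level 2-D Haar split into LL, LH, HL, HH bands (a minimal
    stand-in for the wavelet transform used in the paper)."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    ll, lh = (a + b + c + d) / 4, (a + b - c - d) / 4
    hl, hh = (a - b + c - d) / 4, (a - b - c + d) / 4
    return {"LL": ll, "LH": lh, "HL": hl, "HH": hh}

def quantize_bands(bands, step_frac=0.1):
    """Normalize each band by its own threshold, then quantize uniformly;
    small coefficients round to zero and are cheap to entropy-code."""
    out = {}
    for name, band in bands.items():
        thr = max(np.abs(band).max() * step_frac, 1e-12)
        out[name] = np.round(band / thr).astype(np.int32)  # uniform quantizer
    return out

img = np.random.rand(64, 64) * 255
q = quantize_bands(haar_bands(img))
print({k: (v.min(), v.max()) for k, v in q.items()})
```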


Hardware Implementation of High Speed CODEC for PACS (PACS를 위한 고속 CODEC의 하드웨어 구현)

  • 유선국;박성욱
    • Journal of Biomedical Engineering Research / v.15 no.4 / pp.475-480 / 1994
  • For the effective management of medical images, it has become common to use computing systems in medical practice, namely PACS. However, the amount of image data is so large that storage space runs short. Data compression is normally used to save storage, but the processing speed of general-purpose machines is not fast enough to meet surgical requirements, so special hardware that processes medical images faster is more important than ever. To meet the demand for high-speed image processing, especially image compression and decompression, we designed and implemented a medical image CODEC (COder/DECoder) based on the MISD (Multiple Instruction Single Data stream) architecture to exploit parallelism. Since there is no standard scheme for medical image compression/decompression, the CODEC is designed to be programmable and general. In this paper, we use the JPEG (Joint Photographic Experts Group) algorithm to process images and evaluate the CODEC.
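
In this context MISD amounts to a pipeline: each stage applies its own instruction to the single stream of image blocks flowing through. A minimal software illustration of that structure, with placeholder stage functions standing in for the CODEC's hardware units (nothing here reflects the paper's actual stages):

```python
import queue
import threading

def stage(fn, q_in, q_out):
    """One pipeline stage: apply its own 'instruction' to each datum in the
    single data stream, MISD-style, then pass the result downstream."""
    while True:
        item = q_in.get()
        if item is None:          # poison pill: shut down and propagate
            q_out.put(None)
            return
        q_out.put(fn(item))

# Placeholder stage functions standing in for transform, quantization, and
# entropy coding; the real CODEC implements such stages in hardware.
stages = [lambda b: b * 2, lambda b: b // 16, lambda b: f"coded<{b}>"]

qs = [queue.Queue() for _ in range(len(stages) + 1)]
for fn, q_in, q_out in zip(stages, qs, qs[1:]):
    threading.Thread(target=stage, args=(fn, q_in, q_out), daemon=True).start()

for block in range(5):            # the single stream of image blocks
    qs[0].put(block * 100)
qs[0].put(None)

while (result := qs[-1].get()) is not None:
    print(result)
```

Once the pipeline is full, all stages work on different blocks at the same time, which is where the speedup over a single sequential processor comes from.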


Fast Image Compression and Pixel-wise Switching Technique for Hardware Efficient Implementation of Dynamic Capacitance Compensation (하드웨어 효율적인 동적 커패시턴스 보상 구현을 위한 고속 영상 압축 및 화소별 스위칭 기법)

  • Choi, Joon-Hwan;Song, Won-Suk;Choi, Hyuk
    • Journal of KIISE:Software and Applications / v.36 no.8 / pp.616-622 / 2009
  • Thanks to the Dynamic Capacitance Compensation (DCC) technique, the response time of LCD displays has greatly improved. However, DCC requires a high-speed memory for real-time writing/reading of the previous frame's image, which increases hardware overhead and cost. In this paper, we propose Modified Exponential Golomb (MEG) coding, a low-complexity high-speed image compression method that markedly reduces the memory requirement of DCC. We also propose a pixel-wise DCC switching technique that prevents compression errors from degrading the quality of the final image on the LCD. In our experiments, the degradation in visual quality was not noticeable when the DCC memory size for 1080i HD data was cut by 1/3.
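
The abstract does not define the "modified" part of MEG coding, so the sketch below shows the standard order-0 exponential-Golomb code it is based on, which maps small values (such as inter-frame pixel deltas) to short codewords:

```python
def exp_golomb_encode(n):
    """Order-0 exponential-Golomb code for a non-negative integer:
    (leading zeros) + binary(n + 1). Small values get short codes, which
    suits the small pixel differences a frame-buffer compressor sees."""
    bits = bin(n + 1)[2:]                 # binary of n+1, without '0b'
    return "0" * (len(bits) - 1) + bits

def exp_golomb_decode(stream):
    """Decode one codeword from the front of a bit string: (value, rest)."""
    zeros = 0
    while stream[zeros] == "0":
        zeros += 1
    value = int(stream[zeros:2 * zeros + 1], 2) - 1
    return value, stream[2 * zeros + 1:]

for n in range(6):
    print(n, exp_golomb_encode(n))

# Round-trip a small sequence.
bits = "".join(exp_golomb_encode(n) for n in [0, 3, 7, 1])
decoded = []
while bits:
    v, bits = exp_golomb_decode(bits)
    decoded.append(v)
print(decoded)   # [0, 3, 7, 1]
```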

Fast Disparity Vector Estimation using Motion vector in Stereo Image Coding (스테레오 영상에서 움직임 벡터를 이용한 고속 변이 벡터 추정)

  • Doh, Nam-Keum;Kim, Tae-Yong
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.5 / pp.56-65 / 2009
  • A stereoscopic image consists of a left image and a right image and therefore carries about twice the data of a single image, so an efficient compression technique is needed; the DPCM-based predictive coding used in most video coding standards is the usual choice. Realizing predictive coding requires motion and disparity estimation, both typically performed with the block matching algorithm used in most video coding standards. The full search algorithm is the baseline block matching method: it finds the optimal block by comparing the base block against every block in the search area. It gives the best matching quality, but its computational load is very large. In this paper, we propose a fast disparity estimation algorithm that uses the motion and disparity vector information of the prior frame in stereo image coding. Fast disparity vector estimation becomes possible by using the global disparity vector to reduce the search area and by limiting the search points with the motion and disparity vectors of the prior frame, thereby decreasing the computational load. Experimental results show that the proposed algorithm performs better on simple image sequences than on complex ones. We conclude that fast disparity vector estimation with reduced computational complexity is possible for simple image sequences.
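
The contrast the abstract draws can be made concrete: full search scans every candidate in a wide window, while the proposed approach centers a much smaller window on a vector carried over from the prior frame. A sketch under that assumption (names, window sizes, and the synthetic shift are illustrative):

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences, the usual block-matching cost."""
    return np.abs(a.astype(np.int64) - b.astype(np.int64)).sum()

def block_search(cur, ref, row, col, size, center, radius):
    """Block matching around `center` within `radius`; full search is the
    special case center=(0, 0) with a large radius."""
    blk = cur[row:row + size, col:col + size]
    best, best_cost = center, np.inf
    for dy in range(center[0] - radius, center[0] + radius + 1):
        for dx in range(center[1] - radius, center[1] + radius + 1):
            r, c = row + dy, col + dx
            if r < 0 or c < 0 or r + size > ref.shape[0] or c + size > ref.shape[1]:
                continue
            cost = sad(blk, ref[r:r + size, c:c + size])
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best, best_cost

cur = np.random.randint(0, 256, (64, 64))
ref = np.roll(cur, (2, -3), axis=(0, 1))        # synthetic (2, -3) shift
prior_vector = (2, -3)                          # e.g., vector from the prior frame

full = block_search(cur, ref, 24, 24, 16, center=(0, 0), radius=8)        # 289 points
fast = block_search(cur, ref, 24, 24, 16, center=prior_vector, radius=1)  # 9 points
print(full, fast)   # same vector, ~32x fewer search points
```

When the prior-frame vector is a good predictor, as in simple sequences, the small window finds the same vector at a fraction of the cost; in complex sequences the predictor degrades, matching the abstract's observation.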