• Title/Summary/Keyword: lossless compress


A Comparative Study of Compression Methods and the Development of CODEC Program of Biological Signal for Emergency Telemedicine Service (응급 원격 진료 서비스를 위한 생체신호 압축 방법 비교 연구 및 압축/복원 프로그램 개발)

  • Yoon Tae-Sung;Lim Young-Ho;Kim Jung-Sang;Yoo Sun-Kook
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.52 no.5
    • /
    • pp.311-321
    • /
    • 2003
  • In an emergency telemedicine system such as the High-quality Multimedia based Real-time Emergency Telemedicine (HMRET) service, it is very important to examine the status of the patient continuously using multimedia data including the biological signals (ECG, BP, respiration, SpO2) of the patient. In order to transmit these data in real time over communication channels with limited transmission capacity, it is also necessary to compress the biological data in addition to the other multimedia data. For this purpose, we investigate and compare ECG compression techniques in the time domain and in the wavelet transform domain, and present an effective lossless compression method for biological signals using the JPEG Huffman table for an emergency telemedicine system. For the HMRET service, we developed a lossless compression and reconstruction program for the biological signals in MSVC++ 6.0 using the DPCM method and the JPEG Huffman table, and tested it in an internet environment.
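
As a rough illustration of the DPCM stage mentioned in this abstract, the sketch below differences an integer biosignal and counts the residual symbols; the JPEG Huffman table that the paper actually applies to those residuals is not reproduced here, so the frequency count only stands in for the entropy-coding step.

```python
# Minimal DPCM sketch for an integer-sampled biosignal (e.g., ECG).
# Assumption: samples are integers; the entropy-coding stage (the paper
# uses a JPEG Huffman table) is represented here only by a symbol
# frequency count, not by the actual JPEG tables.

from collections import Counter
from typing import List

def dpcm_encode(samples: List[int]) -> List[int]:
    """Replace each sample with its difference from the previous one."""
    diffs, prev = [], 0
    for s in samples:
        diffs.append(s - prev)
        prev = s
    return diffs

def dpcm_decode(diffs: List[int]) -> List[int]:
    """Invert the differencing, recovering the original samples losslessly."""
    samples, prev = [], 0
    for d in diffs:
        prev += d
        samples.append(prev)
    return samples

if __name__ == "__main__":
    ecg = [512, 514, 515, 513, 510, 498, 530, 620, 540, 512]
    residuals = dpcm_encode(ecg)
    assert dpcm_decode(residuals) == ecg          # lossless round trip
    # Small residuals cluster near zero, which is what makes a static
    # Huffman table (such as the JPEG DC difference table) effective.
    print(Counter(residuals))
```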

A Pattern Matching Extended Compression Algorithm for DNA Sequences

  • Murugan, A.;Punitha, K.
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.8
    • /
    • pp.196-202
    • /
    • 2021
  • DNA sequencing provides fundamental data in genomics, bioinformatics, biology and many other research areas. With the rapid evolution of DNA sequencing technology, a massive amount of genomic data, mainly DNA sequences, is produced every day, demanding more storage and bandwidth. Unfortunately, managing, analyzing and, in particular, storing these large amounts of data has become a major scientific challenge for bioinformatics. These large volumes of data also require fast transmission, effective storage, superior functionality and quick access to any record. Data storage costs make up a considerable proportion of the total cost of producing and analyzing DNA sequences. There is therefore a need to tightly control the disk storage occupied by DNA sequences, but standard compression techniques are unsuccessful at compressing them, and several specialized techniques have been introduced for this purpose. To overcome these challenges, lossless compression techniques have become necessary. This paper describes a new DNA compression mechanism, a pattern matching extended compression algorithm, that reads the input sequence as segments, finds matching patterns and stores them in a permanent or temporary table based on the number of bases. The remaining unmatched sequence is converted into binary form, the bits are grouped into seven-bit units, and these groups are converted into ASCII characters. Finally, the proposed algorithm dynamically calculates the compression ratio. The results show that the pattern matching extended compression algorithm outperforms cutting-edge compressors and proves its efficiency in terms of compression ratio regardless of the file size of the data.
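
The "binary then seven-bit ASCII" step for unmatched segments can be pictured as below; the 2-bit base mapping and the zero-padding of the last group are assumptions, since the abstract does not specify them, and the pattern-matching table itself is not shown.

```python
# Sketch of the "binary then 7-bit ASCII" step described for unmatched
# segments. Assumptions: bases are mapped to 2-bit codes (the abstract does
# not give the exact mapping), and the final group is zero-padded; a real
# codec would also need to record the original length to undo the padding.

BASE_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}

def pack_unmatched(segment: str) -> str:
    """Convert a DNA segment to a bit string and regroup it as 7-bit ASCII."""
    bits = "".join(BASE_BITS[b] for b in segment.upper())
    chars = []
    for i in range(0, len(bits), 7):
        group = bits[i:i + 7].ljust(7, "0")   # pad the last group
        chars.append(chr(int(group, 2)))
    return "".join(chars)

if __name__ == "__main__":
    seg = "ACGTACGTACGT"          # 12 bases -> 24 bits -> 4 ASCII characters
    packed = pack_unmatched(seg)
    print(len(seg), "bases packed into", len(packed), "characters")
```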

Consideration of the Usefulness of Continued Image Sending in DICOM in Angiography (혈관조영술에서 동영상 전송의 유용성 고찰)

  • Park, Young-Sung;Lee, Jong-Woong;Jung, Hee-Dong;Kim, Jae-Yeul;Hwang, Sun-Gwang
    • Korean Journal of Digital Imaging in Medicine
    • /
    • v.9 no.2
    • /
    • pp.39-43
    • /
    • 2007
  • In angiography, the global DICOM standard calls for lossless compression, but this causes overload and takes too much storage space on the DICOM server. Because of this, we transmit images that have been classified (selected) in a subjective way, which causes data loss and could lead doctors to a wrong reading. To reduce these mistakes, we tried transmitting the continued image (raw data) instead. We acquired angiography images from the equipment (Allura FD20, Philips), compressed them with two different methods (lossless and lossy fair), and transmitted them to the PACS system. We compared the quality of QC phantom images compressed by each method, compared the spatial resolution of each image after CD copy, and compared the data volume of each image (lossless and lossy fair). The measured spatial resolution was the same for all images (401p/mm), both before and after CD copy. The volume of the continued image (raw data) was 127.8 MB (360.5 sheets on average) compressed losslessly and 29.5 MB (360.5 sheets) compressed in lossy fair mode; for the classified images, it was 47.35 MB (133.7 sheets) losslessly and 4.5 MB (133.7 sheets) in lossy fair mode. In angiography the diagnosis is based on the continued image (raw data), but we transmit classified images, because transmitting the continued image causes problems in the PACS system, especially in transmission and storage. The classified images we transmit are compressed losslessly, but the classification is subjective and differs between radiologists, so it could lead to a wrong reading when a patient transfers to another hospital. We therefore suggest transmitting the continued image (raw data) compressed in lossy fair mode: this reduces the data volume to roughly 60% of that of the losslessly compressed classified images, and the image quality is unchanged after CD copy.
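
The volume figures quoted above imply the following ratios; this is only the arithmetic behind the "roughly 60%" statement, using the numbers from the abstract.

```python
# Arithmetic behind the volume comparison reported above (values taken from
# the abstract): continued (raw) series vs. classified images, lossless vs.
# "lossy fair" compression.

volumes_mb = {
    ("continued", "lossless"): 127.8,
    ("continued", "lossy_fair"): 29.5,
    ("classified", "lossless"): 47.35,
    ("classified", "lossy_fair"): 4.5,
}

proposed = volumes_mb[("continued", "lossy_fair")]   # suggested workflow
current = volumes_mb[("classified", "lossless")]     # current workflow
print(f"proposed / current = {proposed / current:.2f}")            # ~0.62 of the volume
print(f"saving vs. raw lossless: {1 - proposed / volumes_mb[('continued', 'lossless')]:.0%}")  # ~77%
```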

Lossless Compression for Hyperspectral Images based on Adaptive Band Selection and Adaptive Predictor Selection

  • Zhu, Fuquan;Wang, Huajun;Yang, Liping;Li, Changguo;Wang, Sen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.8
    • /
    • pp.3295-3311
    • /
    • 2020
  • With the wide application of hyperspectral images, it becomes more and more important to compress them. The conventional recursive least squares (CRLS) algorithm has great potential for lossless compression of hyperspectral images. The prediction accuracy of CRLS is closely related to the correlations between the reference bands and the current band, and to the similarity between pixels in the prediction context. Based on this characteristic, we present an improved CRLS with adaptive band selection and adaptive predictor selection (CRLS-ABS-APS). Firstly, a spectral-vector-correlation-coefficient-based k-means clustering algorithm is employed to generate a clustering map. Afterwards, an adaptive band selection strategy based on the inter-spectral correlation coefficient is adopted to select the reference bands for each band. Then, an adaptive predictor selection strategy based on the clustering map is adopted to select the optimal CRLS predictor for each pixel. In addition, a double snake scan mode is used to further improve the similarity of the prediction context, and a recursive average estimation method is used to accelerate the local average calculation. Finally, the prediction residuals are entropy encoded by an arithmetic encoder. Experiments on the Airborne Visible Infrared Imaging Spectrometer (AVIRIS) 2006 data set show that CRLS-ABS-APS achieves average bit rates of 3.28 bpp, 5.55 bpp and 2.39 bpp on the three subsets, respectively. The results indicate that CRLS-ABS-APS effectively improves the compression effect with lower computational complexity and outperforms the current state-of-the-art methods.
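
A minimal sketch of the adaptive reference-band selection idea follows, assuming the cube is a (bands, height, width) array and using a plain Pearson correlation as the inter-spectral measure; the clustering map, the RLS predictor and the double snake scan from the paper are not reproduced.

```python
# Sketch of adaptive reference-band selection: pick, for each band, the
# previously coded bands whose inter-spectral correlation with it is highest.
# Assumptions: `bands` is a (B, H, W) array and the number of reference
# bands k is a free parameter.

import numpy as np

def select_reference_bands(bands: np.ndarray, current: int, k: int = 2) -> list:
    """Return indices of the k previous bands most correlated with `current`."""
    cur = bands[current].ravel().astype(np.float64)
    scores = []
    for b in range(current):
        ref = bands[b].ravel().astype(np.float64)
        corr = np.corrcoef(cur, ref)[0, 1]
        scores.append((corr, b))
    scores.sort(reverse=True)
    return [b for _, b in scores[:k]]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cube = rng.integers(0, 4096, size=(10, 32, 32))   # toy hyperspectral cube
    print(select_reference_bands(cube, current=7, k=2))
```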

Huffman Coding using Nibble Run Length Code (니블 런 랭스 코드를 이용한 허프만 코딩)

  • 백승수
    • Journal of the Korea Society of Computer and Information
    • /
    • v.4 no.1
    • /
    • pp.1-6
    • /
    • 1999
  • In this paper, we propose a new lossless compression method that uses Huffman coding with preprocessing to compress still images. The proposed method handles two cases according to the activity of the image: if the activity is high, the original Huffman coding method is used directly; if the activity is low, nibble run-length coding and a bit-dividing method are used. The experimental results show that the compression rate of the proposed method is better than that of the general Huffman coding method.
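
A small sketch of nibble run-length coding in the spirit of the abstract is given below; capping runs at 15 so the run length fits in a nibble is an assumption, and the Huffman stage applied afterwards is omitted.

```python
# Sketch of nibble run-length coding: split each byte into two 4-bit nibbles
# and run-length encode repeated nibbles. Assumption: run lengths are capped
# at 15 so they also fit in a nibble; the following Huffman stage is omitted.

from typing import List, Tuple

def to_nibbles(data: bytes) -> List[int]:
    """Split each byte into its high and low 4-bit nibbles."""
    nibbles = []
    for byte in data:
        nibbles.append(byte >> 4)
        nibbles.append(byte & 0x0F)
    return nibbles

def nibble_rle(nibbles: List[int]) -> List[Tuple[int, int]]:
    """Encode as (run_length, nibble) pairs with runs capped at 15."""
    out, i = [], 0
    while i < len(nibbles):
        run = 1
        while i + run < len(nibbles) and nibbles[i + run] == nibbles[i] and run < 15:
            run += 1
        out.append((run, nibbles[i]))
        i += run
    return out

if __name__ == "__main__":
    flat_region = bytes([0x11] * 8 + [0x12, 0x13])    # low-activity sample
    print(nibble_rle(to_nibbles(flat_region)))
```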

Adaptive Predictor for Entropy Coding (엔트로피 코딩을 위한 적응적 예측기)

  • Kim, Young-Ro;Park, Hyun-Sang
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.11 no.1
    • /
    • pp.209-213
    • /
    • 2010
  • In this paper, an efficient predictor for entropy coding is proposed. It adaptively selects one of two prediction errors, obtained by MED (median edge detector) or GAP (gradient adaptive prediction). The reduced error is then encoded by an existing entropy coding method. Experimental results show that the proposed algorithm achieves higher compression than existing predictive methods.
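
The two predictors can be sketched as follows. MED is the standard JPEG-LS median edge detector; the GAP formula and the per-pixel selection rule shown here are simplified stand-ins rather than the paper's definitions, and a real codec would need a causal rule or side information so the decoder can repeat the choice.

```python
# Sketch of the two per-pixel predictors the abstract combines.
# MED is the JPEG-LS median edge detector; gap_predict is a simplified
# gradient-adjusted stand-in (assumption), not the full CALIC definition.

def med_predict(w: int, n: int, nw: int) -> int:
    """Median edge detector using the west, north and north-west neighbours."""
    if nw >= max(w, n):
        return min(w, n)
    if nw <= min(w, n):
        return max(w, n)
    return w + n - nw

def gap_predict(w: int, n: int, nw: int, ne: int) -> int:
    """Simplified gradient-adjusted prediction (illustrative only)."""
    dh = abs(n - nw) + abs(ne - n)    # horizontal intensity changes
    dv = abs(w - nw)                  # vertical intensity change
    if dh - dv > 8:                   # strong vertical edge: predict from above
        return n
    if dv - dh > 8:                   # strong horizontal edge: predict from the left
        return w
    return (w + n) // 2 + (ne - nw) // 4

def adaptive_error(x: int, w: int, n: int, nw: int, ne: int) -> int:
    """Keep whichever predictor gives the smaller residual for this pixel.
    Note: this choice would have to be signalled or made causally so that
    the decoder can reproduce it."""
    e_med = x - med_predict(w, n, nw)
    e_gap = x - gap_predict(w, n, nw, ne)
    return e_med if abs(e_med) <= abs(e_gap) else e_gap

if __name__ == "__main__":
    print(adaptive_error(x=120, w=118, n=121, nw=119, ne=122))
```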

TRIANGLE MESH COMPRESSION USING GEOMETRIC CONSTRAINTS

  • Sim, Jae-Young;Kim, Chang-Su;Lee, Sang-Uk
    • Proceedings of the IEEK Conference
    • /
    • 2000.07a
    • /
    • pp.462-465
    • /
    • 2000
  • It is important to compress three-dimensional (3D) data efficiently, since 3D data are in general too large to store or transmit. In this paper, we propose a lossless compression algorithm for 3D mesh connectivity based on the vertex degree. Most techniques for 3D mesh compression treat the connectivity and the geometry separately, but our approach exploits the geometric information to compress the connectivity information. We use the geometric angle constraint of the vertex fan-out pattern to predict the vertex degree, so the proposed algorithm yields higher compression efficiency than conventional algorithms.
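
One way to picture the angle constraint is sketched below: around a near-planar interior vertex the incident wedge angles sum to roughly 2π, so the angles seen so far bound the remaining degree. This is only the underlying geometric argument, not the predictor actually used in the paper.

```python
# Sketch of the geometric idea only: for a roughly flat interior vertex the
# incident wedge angles sum to about 2*pi, so the mean wedge angle observed
# so far suggests how many faces (hence what degree) the vertex has.
# The paper's actual degree predictor is not reproduced here.

import math

def predict_degree(observed_wedge_angles: list) -> int:
    """Estimate a vertex's degree from the wedge angles decoded so far."""
    mean_angle = sum(observed_wedge_angles) / len(observed_wedge_angles)
    return max(3, round(2 * math.pi / mean_angle))

if __name__ == "__main__":
    # Three wedges of roughly 60 degrees suggest a degree-6 vertex.
    print(predict_degree([1.02, 1.05, 1.08]))
```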

The image format research which is suitable in animation work (애니메이션 작업에 사용되는 이미지 포맷 연구)

  • Kwon, Dong-Hyun
    • Cartoon and Animation Studies
    • /
    • s.14
    • /
    • pp.37-51
    • /
    • 2008
  • The computer has become an indispensable tool for animation work. However, if you do not understand the characteristics of the computer and its software, the result may not repay your efforts, and an incorrect understanding of image formats is sometimes the cause. Image formats are usually chosen out of habit, yet the formats differ markedly and their efficient uses differ from one another. For more efficient work, therefore, you need to understand the characteristics of the image formats most commonly used in animation. First, this paper reviews the theory of lossy and lossless compression, the two types of data compression used throughout computing; the difference between the bitmap and vector methods, which express images in fundamentally different ways; and 24-bit true color and the 8-bit alpha channel. Based on these characteristics, it analyzes the functional differences among the image formats used in various types of animation work such as 2D, 3D, compositing and editing, along with their strengths and weaknesses, and shows that the common assumption that JPEG files save storage space in production work is mistaken. In conclusion, I suggest the TIF format as the most efficient format for editing, compositing, 3D and 2D work in terms of capacity, function and image quality, and also recommend the PSD format, which is compatible and highly capable, since Adobe educational programs are widely used in school education. I hope this paper contributes to the right choice of image format in school education and practical work.

A Study on a Lossless Compression Scheme for Cloud Point Data of the Target Construction (목표 구조물에 대한 점군데이터의 무손실 압축 기법에 관한 연구)

  • Bang, Min-Suk;Yun, Kee-Bang;Kim, Ki-Doo
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.48 no.5
    • /
    • pp.33-41
    • /
    • 2011
  • In this paper, we propose a lossless compression scheme for the point cloud data of a target structure that exploits duplicated coordinate values and removes useless information from the point cloud. We use the Hough transform to find the horizontal angle between the structure and the terrestrial LIDAR, and this angle is used to rotate the point cloud. Once the point cloud is parallel to the x-axis, the duplication of y-axis values increases, so the data can be compressed further. In addition, we apply two methods to reduce the number of useless points: one is decimation of the point cloud, and the other is to extract the range of y-coordinates of the target structure and keep only the points lying in that range. The experimental results show the performance of the proposed scheme. Because the compression uses only the position information, without additional information, the scheme can also increase the processing speed of the compression algorithm.
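
A sketch of the preprocessing pipeline described above is given below, assuming the horizontal angle has already been estimated (the Hough-transform step is not shown) and that the target structure's y-range is known; the final lossless entropy coding is omitted.

```python
# Sketch of the preprocessing described above: rotate the point cloud by the
# horizontal angle (assumed here to be already estimated, e.g. by the Hough
# transform step), keep only points whose y coordinate falls in the target
# structure's range, and optionally decimate. The coding stage is not shown.

import numpy as np

def preprocess(points: np.ndarray, angle_rad: float,
               y_range: tuple, step: int = 1) -> np.ndarray:
    """points: (N, 3) array of x, y, z coordinates."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])        # rotate about the z axis
    aligned = points @ rot.T                  # wall now parallel to the x axis
    y_lo, y_hi = y_range
    keep = (aligned[:, 1] >= y_lo) & (aligned[:, 1] <= y_hi)
    return aligned[keep][::step]              # optional decimation

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cloud = rng.uniform(0, 10, size=(1000, 3))
    print(preprocess(cloud, angle_rad=np.deg2rad(15), y_range=(2.0, 4.0)).shape)
```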

H.264 Encoding Technique of Multi-view Image expressed by Layered Depth Image (계층적 깊이 영상으로 표현된 다시점 영상에 대한 H.264 부호화 기술)

  • Kim, Min-Tae;Jee, Inn-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.10 no.1
    • /
    • pp.81-90
    • /
    • 2010
  • This paper presents H.264 coding schemes for multi-view video using the concept of the layered depth image (LDI) representation and an efficient compression technique for LDI. After converting the multi-view data to the proposed representation, we encode the color, depth, and auxiliary data representing the hierarchical structure separately. Two kinds of preprocessing approaches are proposed for the multiple color and depth components, and a near-lossless coding method is employed to compress the auxiliary data. Finally, we successfully reconstruct the original viewpoints from the decoded data, showing that the approach is useful for handling multiple color and depth data simultaneously.
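
The layered depth image representation the paper builds on can be sketched as a per-pixel list of (color, depth) samples, as below; the field names and ordering are illustrative assumptions, not the paper's actual data layout.

```python
# Sketch of the layered depth image (LDI) idea: each pixel location stores a
# list of samples, one per visible layer, so the color, depth and
# layer-structure components can later be split into separate planes for
# encoding. Field names here are illustrative, not the paper's data layout.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LayeredDepthPixel:
    # each sample is ((R, G, B), depth); the list index is the layer
    samples: List[Tuple[Tuple[int, int, int], float]] = field(default_factory=list)

    def add(self, color: Tuple[int, int, int], depth: float) -> None:
        """Insert a sample and keep the layers ordered front to back."""
        self.samples.append((color, depth))
        self.samples.sort(key=lambda s: s[1])

if __name__ == "__main__":
    p = LayeredDepthPixel()
    p.add((200, 30, 30), depth=3.2)   # foreground surface
    p.add((20, 20, 180), depth=7.5)   # surface hidden behind it
    print(len(p.samples), "layers at this pixel")
```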