• Title/Summary/Keyword: Code Compression


Study on Section Properties of Deckplates with Flat-Hat Stiffeners (Flat-Hat 스티프너를 가진 데크플레이트의 단면 성능에 관한 연구)

  • Ju, Gi-Su;Park, Sung-Moo
    • Journal of Korean Association for Spatial Structures
    • /
    • v.4 no.1 s.11
    • /
    • pp.77-86
    • /
    • 2004
  • It is the buckling of the compression portions of the deckplate that governs its behaviour under wet-concrete construction loading. The size and position of intermediate stiffeners in the compression flanges of thin-plate steel decks exert a strong influence on the dominant buckling mode of the flange. Test sections of high-strength steel were brake-pressed with a variety of flat-hat intermediate stiffeners in the compression flange, forming a progression from small to large stiffeners. An ABAQUS analysis is undertaken to determine the effectiveness of the intermediate stiffeners in controlling the buckling modes. A series of specimens is tested as simply supported beams. Various buckling wave forms develop prior to ultimate failure through a plastic collapse mechanism. The experimentally determined buckling stresses are found to be comparable with those obtained from the ABAQUS analyses and from the design codes of several countries.


Behavior and design of perforated steel storage rack columns under axial compression

  • El Kadi, Bassel;Kiymaz, G.
    • Steel and Composite Structures
    • /
    • v.18 no.5
    • /
    • pp.1259-1277
    • /
    • 2015
  • The present study is focused on the behavior and design of perforated steel storage rack columns under axial compression. These columns may exhibit different types of behavior and levels of strength owing to their peculiar features, including their complex cross-section forms and the perforations along the member. In the present codes of practice, the design of these columns is carried out using analytical formulas supported by experimental tests described in the relevant code documents. Recently proposed analytical approaches are used to estimate the load-carrying capacity of axially compressed steel storage rack columns. Experimental and numerical studies were carried out to verify the proposed approaches. The experimental study includes compression tests on members of different lengths but of the same cross-section. A comparison between the analytical and experimental results is presented to assess the accuracy of the recently proposed analytical approaches. The proposed approach modifies the Direct Strength Method to include the effects of perforations (the so-called reduced-thickness approach). The CUFSM and CUTWP software programs are used to calculate the elastic buckling parameters of the studied members. Results from the experimental and analytical studies compare very well, which indicates the validity of the recently proposed approaches for predicting the ultimate strength of steel storage rack columns.
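
For context on the Direct Strength Method mentioned in the abstract above, the following is a minimal sketch of a DSM column check, in which elastic buckling loads from a finite-strip analysis (e.g., CUFSM/CUTWP of a model with reduced thickness over the perforated regions) feed the standard global and local strength curves. The numeric inputs and the perforation handling are illustrative assumptions, not values or code from the paper.

```python
# Hedged sketch: AISI-style Direct Strength Method (DSM) column check. The DSM
# equations below are the standard global/local column curves; the inputs are
# assumed values, not results from the paper.
import math

def dsm_column_strength(P_y, P_cre, P_crl):
    """Nominal axial strength by DSM (global buckling plus local interaction)."""
    # Global (flexural/torsional) buckling curve
    lam_c = math.sqrt(P_y / P_cre)
    if lam_c <= 1.5:
        P_ne = (0.658 ** (lam_c ** 2)) * P_y
    else:
        P_ne = (0.877 / lam_c ** 2) * P_y

    # Local buckling curve, interacting with the global strength
    lam_l = math.sqrt(P_ne / P_crl)
    if lam_l <= 0.776:
        P_nl = P_ne
    else:
        ratio = (P_crl / P_ne) ** 0.4
        P_nl = (1.0 - 0.15 * ratio) * ratio * P_ne
    return min(P_ne, P_nl)

if __name__ == "__main__":
    P_y = 180.0    # squash load A_g * f_y, kN (assumed)
    P_cre = 250.0  # elastic global buckling load, kN (assumed, e.g., from CUTWP)
    P_crl = 140.0  # elastic local buckling load, kN (assumed, e.g., from CUFSM)
    print(f"DSM nominal strength: {dsm_column_strength(P_y, P_cre, P_crl):.1f} kN")
```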

A Study on Inter Prediction Mode Determination using the Variance in the Motion Vectors (움직임 벡터의 변화량을 이용한 인터 예측 모드 결정에 관한 연구)

  • Kim, June;Kim, Youngseop
    • Journal of the Semiconductor & Display Technology
    • /
    • v.13 no.1
    • /
    • pp.109-112
    • /
    • 2014
  • H.264/AVC is an international video coding standard established jointly by ITU-T VCEG and ISO/IEC MPEG, and it shows improved coding efficiency over previous video standards. Among the compression techniques of H.264/AVC, motion estimation with variable block sizes from 4×4 to 16×16 contributes much of the high compression efficiency. Generally, a macroblock in a P slice with a small motion vector or low complexity is encoded in P16×16 mode. In some circumstances, however, a macroblock is assigned P16×16 mode despite having a large motion vector. If the motion-vector variance exceeds a threshold and the finally selected mode is P16×16, the proposed method switches the macroblock to P8×8 mode, and this paper shows that the required storage is thereby reduced. The experimental results show that the proposed algorithm increases the compression efficiency of H.264/AVC by 0.4% while reducing encoding time and without increasing complexity.
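
The mode-switching rule described in the abstract above can be illustrated with a short sketch: if the encoder's final choice for a P-slice macroblock is P16×16 but the motion vectors of its sub-blocks vary strongly, re-encode it as P8×8. The variance measure, the threshold value, and the function names below are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch of a motion-vector-variance mode refinement for a P-slice macroblock.
from statistics import pvariance

THRESHOLD = 4.0  # assumed threshold on motion-vector variance (quarter-pel units)

def refine_mode(best_mode, sub_block_mvs):
    """best_mode: 'P16x16', 'P8x8', ...; sub_block_mvs: list of (mvx, mvy) per 8x8 block."""
    if best_mode != "P16x16":
        return best_mode
    var_x = pvariance([mv[0] for mv in sub_block_mvs])
    var_y = pvariance([mv[1] for mv in sub_block_mvs])
    # A large spread among the sub-block motion vectors suggests the macroblock
    # does not move rigidly, so a finer partition is preferred.
    if var_x + var_y > THRESHOLD:
        return "P8x8"
    return best_mode

# Example: four 8x8 motion vectors that disagree -> the macroblock is re-encoded as P8x8.
print(refine_mode("P16x16", [(0, 0), (6, -4), (-5, 3), (7, 7)]))
```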

Implementation of 16kbps ADPCM by DSK50 (DSK50을 이용한 16kbps ADPCM 구현)

  • Cho, Yun-Seok;Han, Kyong-Ho
    • Proceedings of the KIEE Conference
    • /
    • 1996.07b
    • /
    • pp.1295-1297
    • /
    • 1996
  • The CCITT G.721 and G.723 standard ADPCM algorithms are implemented using TI's fixed-point DSP Starter Kit (DSK). ADPCM can be implemented at various rates, such as 16, 24, 32, and 40 kbps. ADPCM is a sample-based compression technique, and its complexity is not as high as that of other speech compression techniques such as CELP, VSELP, and GSM. ADPCM is widely applicable to low-cost speech compression applications such as tapeless answering machines, simultaneous voice and fax modems, and digital phones. The TMS320C50 is a low-cost fixed-point DSP chip, and the C50 DSK system has an AIC (analog interface chip) that operates as a single-chip A/D and D/A converter with 14-bit resolution, a C50 DSP with 10K of on-chip memory, and an RS232C interface module. The ADPCM C code is compiled with the TI C50 C compiler and runs from the DSK on-chip memory. The speech input is converted into 14-bit linear PCM data, encoded into ADPCM data, and sent to a PC through RS232C. The ADPCM data on the PC is then received back by the DSK through RS232C, decoded into 14-bit linear PCM data, and converted back into a speech signal. The DSK system has audio in/out jacks, so the speech signal can be played in and out directly.

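As background on the sample-based nature of ADPCM described above, here is a minimal toy ADPCM loop that quantizes the difference between each input sample and an adaptive prediction. It is a simplified illustration, not the CCITT G.721/G.723 coder or the DSK C code of the paper; the 4-bit code layout and the step-adaptation rule are assumptions.

```python
# Hedged sketch of a toy 4-bit ADPCM encoder/decoder pair (adaptive step size).
STEP_ADJUST = [0.9, 0.9, 0.9, 0.9, 1.2, 1.6, 2.0, 2.4]  # assumed multipliers per code magnitude

def adpcm_encode(samples):
    codes, predicted, step = [], 0.0, 16.0
    for x in samples:
        diff = x - predicted
        sign = 8 if diff < 0 else 0
        magnitude = min(7, int(abs(diff) / step))          # 3-bit magnitude
        codes.append(sign | magnitude)
        predicted += (-1 if sign else 1) * (magnitude + 0.5) * step
        step = max(1.0, min(2048.0, step * STEP_ADJUST[magnitude]))
    return codes

def adpcm_decode(codes):
    out, predicted, step = [], 0.0, 16.0
    for c in codes:
        sign, magnitude = c & 8, c & 7
        predicted += (-1 if sign else 1) * (magnitude + 0.5) * step
        step = max(1.0, min(2048.0, step * STEP_ADJUST[magnitude]))
        out.append(predicted)
    return out

# Round trip on a toy ramp: 4 bits/sample instead of 14-bit linear PCM.
pcm = [int(2000 * (i % 20) / 20) for i in range(60)]
print(adpcm_decode(adpcm_encode(pcm))[:5])
```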

Low Power Scan Test Methodology Using Hybrid Adaptive Compression Algorithm (하이브리드 적응적 부호화 알고리즘을 이용한 저전력 스캔 테스트 방식)

  • Kim Yun-Hong;Jung Jun-Mo
    • The Journal of the Korea Contents Association
    • /
    • v.5 no.4
    • /
    • pp.188-196
    • /
    • 2005
  • This paper presents a new test data compression and low-power scan test method that can reduce test time and power consumption. The proposed method can reduce scan-in power and test data volume using a modified scan cell reordering algorithm and a hybrid adaptive encoding method. The hybrid test data compression method adaptively uses Golomb codes and run-length codes according to the length of runs in the test data, which reduces the test data volume more efficiently than previous methods. A scan cell reordering technique is applied to minimize the column Hamming distance in the scan vectors, which reduces scan-in power consumption and test data. Experimental results for the ISCAS'89 benchmark circuits show that reduced test data and low-power scan testing are achieved in all cases. The proposed method showed about a 17%-26% better compression ratio, 8%-22% lower average power consumption, and 13%-60% lower peak power consumption than the previous method.

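The Golomb-coded runs at the core of the hybrid scheme above can be sketched as follows. The group parameter m and the treatment of the trailing run are illustrative assumptions, and the paper's hybrid Golomb/run-length switching rule is not reproduced.

```python
# Hedged sketch: Golomb-code the runs of 0s in a 0/1 test-data stream.
import math

def golomb_encode_run(run_length, m):
    """Golomb code of a non-negative run length with parameter m
    (unary quotient + truncated-binary remainder), as a bit string."""
    q, r = divmod(run_length, m)
    bits = "1" * q + "0"                      # unary part
    b = math.ceil(math.log2(m))
    cutoff = (1 << b) - m                     # truncated-binary remainder
    if r < cutoff:
        bits += format(r, f"0{b - 1}b") if b > 1 else ""
    else:
        bits += format(r + cutoff, f"0{b}b")
    return bits

def encode_scan_data(bits, m=4):
    """Encode a 0/1 test vector as Golomb codes of the 0-runs before each 1."""
    out, run = [], 0
    for bit in bits:
        if bit == "0":
            run += 1
        else:
            out.append(golomb_encode_run(run, m))
            run = 0
    out.append(golomb_encode_run(run, m))     # trailing run
    return "".join(out)

print(encode_scan_data("0000100000000100010"))
```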

ECG Data Compression Using Adaptive Fractal Interpolation (적응 프랙탈 보간을 이용한 심전도 데이터 압축)

  • 전영일;윤영로
    • Journal of Biomedical Engineering Research
    • /
    • v.17 no.1
    • /
    • pp.121-128
    • /
    • 1996
  • This paper presents an ECG data compression method referred to as the adaptive fractal interpolation algorithm. In the previous piecewise fractal interpolation (PFI) algorithm, the size of the range blocks is fixed, so the reconstruction error of the PFI algorithm is non-uniformly distributed over the original ECG signal. To improve on this, the adaptive fractal interpolation (AFI) algorithm uses variable range sizes. If the predetermined tolerance is not satisfied, the range is subdivided into two blocks of equal size. Large ranges are used for encoding the smooth parts of the waveform to yield high compression efficiency, and smaller ranges are used for encoding rapidly varying parts of the signal to preserve signal quality. The suggested algorithm was evaluated using the MIT/BIH arrhythmia database. The AFI algorithm was found to yield a lower reconstruction error for a given compression ratio than the PFI algorithm. In applications where a PRD of about 7.13% was acceptable, the AFI algorithm yielded compression ratios as high as 10.51 without any entropy coding of the parameters of the fractal code.

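A minimal 1-D sketch of the adaptive range-splitting idea described above: fit a contractive affine map from a decimated domain block onto each range block, and subdivide the range whenever the fit error exceeds a tolerance. The domain pool, error measure, and tolerance below are assumptions for illustration, not the paper's settings.

```python
# Hedged sketch of adaptive piecewise fractal interpolation of a 1-D signal.
import numpy as np

def fit_affine(domain, rng):
    """Least-squares scale/offset mapping the decimated domain onto the range."""
    d = domain.reshape(-1, 2).mean(axis=1)          # decimate 2:1
    A = np.vstack([d, np.ones_like(d)]).T
    (s, o), *_ = np.linalg.lstsq(A, rng, rcond=None)
    err = np.sqrt(np.mean((A @ [s, o] - rng) ** 2))
    return s, o, err

def encode(signal, start, length, tol, min_len, codes):
    rng = signal[start:start + length]
    # Domain block: twice the range size, here simply taken just before the range
    # (a real coder would search a pool of candidate domains).
    d_start = max(0, start - 2 * length)
    domain = signal[d_start:d_start + 2 * length]
    s, o, err = fit_affine(domain, rng)
    if err <= tol or length <= min_len:
        codes.append((start, length, d_start, s, o))  # one fractal "code"
    else:                                             # adaptive split
        half = length // 2
        encode(signal, start, half, tol, min_len, codes)
        encode(signal, start + half, length - half, tol, min_len, codes)
    return codes

ecg = np.sin(np.linspace(0, 6 * np.pi, 512)) + 0.05 * np.random.randn(512)
print(len(encode(ecg, 64, 64, tol=0.05, min_len=8, codes=[])), "fractal codes")
```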

DNA Sequences Compression using Repeat technique and Selective Encryption using modified Huffman's Technique

  • Syed Mahamud Hossein; Debashis De; Pradeep Kumar Das Mohapatra
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.8
    • /
    • pp.85-104
    • /
    • 2024
  • The DNA (Deoxyribonucleic Acid) database size increases tremendously, growing from millions to billions of bases in a year. Therefore, storing and searching the DNA database requires an efficient lossless compression algorithm and an encryption algorithm for secure communication. Short pattern repetitions are a key characteristic of biological sequences. The algorithm is based on finding exact repeats, substituting each repeated substring by a corresponding ASCII code, and generating a library file, thereby shrinking the data stream. In this technique the data is secured using the ASCII values and the generated library file, which acts as a signature. The security of information is one of the most challenging questions from the communication perspective. A selective encryption method is used for security; it is applied to the compressed data, to the library file, or to both. In selective encryption only a fraction of the message is encrypted while the remaining part is left unchanged, which is the essential property of a selective encryption system. Huffman's algorithm is applied to the output of the first-phase repeat technique, and the level and node positions of the Huffman tree are permuted for encryption. The main requirements are minimum storage and low computation cost. The time and space complexity of the repeat algorithm are O(N²) and O(N); the time and space complexity of the Huffman algorithm are O(n log n) and O(n log n). Artificial data of equivalent length is also tested with this algorithm. This modified Huffman technique reduces the compression rate and ratio. The experimental results show that encrypting only 58% to 100% of the actual file produces more than 99% modification of the actual file, and the compression rate is 1.97 bits/base.
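
The repeat-substitution stage described above can be sketched as a greedy dictionary substitution: repeatedly replace the most frequent k-mer with an unused single-byte code and record the substitutions in a library file. The k-mer length, code range, and greedy selection below are assumptions; the paper's exact repeat detection and selective encryption steps are not reproduced.

```python
# Hedged sketch of repeat-substitution compression of a DNA sequence with a library file.
from collections import Counter

def compress_dna(seq, k=8, max_codes=64):
    library = {}                      # code character -> repeated substring
    next_code = 128                   # bytes outside 'ACGT' serve as codes (assumed)
    for _ in range(max_codes):
        kmers = Counter(seq[i:i + k] for i in range(len(seq) - k + 1)
                        if set(seq[i:i + k]) <= set("ACGT"))
        if not kmers:
            break
        best, count = kmers.most_common(1)[0]
        if count < 2:                 # no repeat worth substituting
            break
        code = chr(next_code); next_code += 1
        library[code] = best
        seq = seq.replace(best, code)
    return seq, library

def decompress_dna(seq, library):
    for code, sub in reversed(list(library.items())):
        seq = seq.replace(code, sub)
    return seq

seq = "ACGTACGTTTGCACGTACGTGGACGTACGT" * 4
packed, lib = compress_dna(seq)
assert decompress_dna(packed, lib) == seq
print(len(seq), "->", len(packed), "symbols, with", len(lib), "library entries")
```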

Finite element analysis of shear-critical reinforced concrete walls

  • Kazaz, Ilker
    • Computers and Concrete
    • /
    • v.8 no.2
    • /
    • pp.143-162
    • /
    • 2011
  • Advanced material models for concrete are not widely available in general-purpose finite element codes. The parameters needed to define them complicate their implementation because they are case-sensitive. In addition, their validity under severe shear conditions has not been verified. In this article, simple engineering plasticity material models available in a commercial finite element code are used to demonstrate that complicated shear behavior can be calculated with reasonable accuracy. For this purpose, the dynamic response of a squat shear wall that had been tested on a shaking table as part of an experimental program conducted in Japan is analyzed. Both the finite element and material aspects of the modeling are examined. A corrective artifice for general engineering plasticity models to account for shear effects in concrete is developed. The results of the modifications in modeling the concrete in compression are evaluated and compared with the experimental response quantities.

Lossy Source Compression of Non-Uniform Binary Source via Reinforced Belief Propagation over GQ-LDGM Codes

  • Zheng, Jianping;Bai, Baoming;Li, Ying
    • ETRI Journal
    • /
    • v.32 no.6
    • /
    • pp.972-975
    • /
    • 2010
  • In this letter, we consider the lossy coding of a non-uniform binary source based on GF(q)-quantized low-density generator matrix (LDGM) codes with check degree $d_c$=2. By quantizing the GF(q) LDGM codeword, a non-uniform binary codeword can be obtained, which is suitable for direct quantization of the non-uniform binary source. Encoding is performed by reinforced belief propagation, a variant of belief propagation. Simulation results show that the performance of our method is quite close to the theoretical rate-distortion bound. For example, when the GF(16) LDGM code with a rate of 0.4 and a block length of 1,500 is used to compress the non-uniform binary source with the probability of a 1 being 0.23, the distortion is 0.091, which is very close to the optimal theoretical value of 0.074.
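
The quoted theoretical value can be checked directly from the rate-distortion function of a Bernoulli(p) source under Hamming distortion, R(D) = h(p) - h(D) for 0 <= D <= min(p, 1-p), where h is the binary entropy. Solving h(D) = h(p) - R for p = 0.23 and code rate R = 0.4 reproduces the optimal distortion of about 0.074 cited in the abstract; the small sketch below does this numerically.

```python
# Distortion bound for a Bernoulli(p) source under Hamming distortion at a given rate.
from math import log2

def h(x):                      # binary entropy in bits
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

def distortion_bound(p, rate):
    target = h(p) - rate       # required h(D)
    lo, hi = 0.0, min(p, 1 - p)
    for _ in range(60):        # bisection: h is increasing on [0, 0.5]
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if h(mid) < target else (lo, mid)
    return (lo + hi) / 2

print(round(distortion_bound(0.23, 0.4), 3))   # ~0.074
```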

Huffman Coding using Nibble Run Length Code (니블 런 랭스 코드를 이용한 허프만 코딩)

  • 백승수
    • Journal of the Korea Society of Computer and Information
    • /
    • v.4 no.1
    • /
    • pp.1-6
    • /
    • 1999
  • In this paper we propose a new lossless compression method that uses Huffman coding with a preprocessing step to compress still images. The proposed method divides into two cases according to the activity of the image. If the activity is high, the original Huffman coding method is used directly. If the activity is low, nibble run-length coding and a bit-dividing method are applied first. The experimental results show that the compression rate of the proposed method is better than that of the general Huffman coding method.

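The baseline Huffman stage used in both branches of the method above can be sketched as follows. The nibble run-length and bit-dividing preprocessing are not reproduced because their exact formats are not given in the abstract, and the pixel data in the example is illustrative.

```python
# Hedged sketch: build a Huffman code for the symbol frequencies of an image
# (or of its run-length preprocessed stream) and encode the symbols with it.
import heapq
from collections import Counter
from itertools import count

def huffman_code(symbols):
    """Return {symbol: bitstring} for the given iterable of symbols."""
    freq = Counter(symbols)
    tiebreak = count()                     # keeps heap entries comparable
    heap = [(f, next(tiebreak), sym) for sym, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tiebreak), (left, right)))
    codes = {}
    def walk(node, prefix=""):
        if isinstance(node, tuple):        # internal node
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"    # single-symbol edge case
        return codes
    return walk(heap[0][2])

pixels = [10, 10, 10, 12, 12, 200, 10, 12, 10, 255]   # illustrative pixel values
table = huffman_code(pixels)
encoded = "".join(table[p] for p in pixels)
print(table, len(encoded), "bits")
```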