• Title/Summary/Keyword: computer arithmetic


Design of Efficient NTT-based Polynomial Multiplier (NTT 기반의 효율적인 다항식 곱셈기 설계)

  • Lee, SeungHo;Lee, DongChan;Kim, Yongmin
    • Journal of IKEEE
    • /
    • v.25 no.1
    • /
    • pp.88-94
    • /
    • 2021
  • Public-key cryptographic algorithms currently in use, such as RSA and ECC, rely for encryption on mathematical problems that would take conventional computers an impractically long time to solve. Those algorithms, however, can easily be broken by Shor's algorithm running on a quantum computer. Lattice-based cryptography has been proposed as a new form of public-key encryption for the post-quantum era. These cryptographic algorithms operate in a polynomial ring, and polynomial multiplication requires the most processing time, so a hardware module is needed to compute polynomial multiplication faster. The Number Theoretic Transform (NTT) is the FFT performed over a finite field. Logic verification was performed using an HDL, and the proposed design was compared and analyzed at the transistor level using Hspice to determine how much the delay time and power consumption improved. The proposed design improved the average delay by 30% and reduced power consumption by more than 8%.
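The NTT replaces the FFT's complex roots of unity with roots of unity in the integers mod a prime. A minimal sketch with toy parameters (q = 17, n = 4; a real lattice scheme uses values such as q = 3329, n = 256, and typically the negacyclic variant mod x^n + 1 rather than the cyclic product shown here):

```python
# Toy parameters: P - 1 must be divisible by N so an N-th root of unity exists.
P = 17
N = 4
W = pow(3, (P - 1) // N, P)   # 3 is a primitive root mod 17, so W has order N

def ntt(a, root):
    """Naive O(N^2) number-theoretic transform over Z_P."""
    return [sum(a[j] * pow(root, i * j, P) for j in range(N)) % P
            for i in range(N)]

def intt(a):
    """Inverse NTT: transform with root^-1, then scale by N^-1."""
    inv_n = pow(N, P - 2, P)
    b = ntt(a, pow(W, P - 2, P))
    return [(x * inv_n) % P for x in b]

def polymul(a, b):
    """Cyclic (mod x^N - 1) polynomial product via pointwise NTT multiplication."""
    fa, fb = ntt(a, W), ntt(b, W)
    return intt([x * y % P for x, y in zip(fa, fb)])
```

For example, `polymul([1, 1, 0, 0], [1, 1, 0, 0])` squares the polynomial 1 + x, giving the coefficients of 1 + 2x + x². The hardware design in the paper accelerates exactly this transform-multiply-inverse-transform pipeline.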

Design of a Bit-Level Super-Systolic Array (비트 수준 슈퍼 시스톨릭 어레이의 설계)

  • Lee Jae-Jin;Song Gi-Yong
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.42 no.12
    • /
    • pp.45-52
    • /
    • 2005
  • A systolic array, formed by interconnecting a set of identical data-processing cells in a uniform manner, is a combination of an algorithm and the circuit that implements it, and is conceptually close to an arithmetic pipeline. High-performance computation on a large array of cells has been an important feature of systolic arrays. To achieve an even higher degree of concurrency, it is desirable to make the cells of a systolic array themselves systolic arrays as well. A structure whose cells each consist of another systolic array is called a super-systolic array. This paper proposes a scalable bit-level super-systolic array that can be adopted in VLSI designs featuring the regular interconnection and functional primitives typical of a systolic architecture. The architecture focuses on highly regular computational structures that avoid the large number of global interconnections required in general VLSI implementations. A bit-level super-systolic FIR filter is selected as an example of a bit-level super-systolic array. The derived filter was modeled and simulated at RT level using VHDL, then synthesized using Synopsys Design Compiler with the Hynix $0.35{\mu}m$ cell library. Compared with a conventional word-level systolic array, the newly proposed bit-level super-systolic array is more efficient in terms of area and throughput.
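The array principle can be illustrated with a behavioral model of a word-level systolic FIR filter (the paper's design is bit-level and written in VHDL; this Python sketch only shows how per-cell registers and local connections realise the convolution):

```python
def systolic_fir(h, x):
    """Behavioral sketch of a word-level systolic FIR pipeline.
    Each clock tick, the input sample is broadcast to all cells while
    the partial sums ripple through one register per cell -- only
    local, regular interconnections are needed."""
    n = len(h)
    y_regs = [0] * n                     # one partial-sum register per cell
    out = []
    for sample in x + [0] * (n - 1):     # trailing zeros flush the pipeline
        new_y = [0] * n
        for k in range(n):
            prev = y_regs[k - 1] if k > 0 else 0
            # cell k holds tap h[n-1-k]; the last cell emits the convolution
            new_y[k] = prev + h[n - 1 - k] * sample
        y_regs = new_y
        out.append(y_regs[-1])
    return out                           # full linear convolution of h and x
```

A bit-level super-systolic version would replace each cell's multiply-accumulate with another systolic array operating on individual bits, which is what gives the reported area/throughput advantage.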

A Design of Pipelined-parallel CABAC Decoder Adaptive to HEVC Syntax Elements (HEVC 구문요소에 적응적인 파이프라인-병렬 CABAC 복호화기 설계)

  • Bae, Bong-Hee;Kong, Jin-Hyeung
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.5
    • /
    • pp.155-164
    • /
    • 2015
  • This paper describes the design and implementation of a CABAC decoder that handles HEVC syntax elements in an adaptively pipelined, parallel manner. Although CABAC offers a high compression rate, its decoding performance is limited by context-based sequential computation, strong data dependency between context models, and bin-by-bin decoding. To enhance HEVC CABAC decoding, flag-type syntax elements are adaptively pipelined by precomputing consecutive flag-type elements, and multi-bin syntax elements are decoded by processing up to three bins in parallel. Further, to accelerate the binary arithmetic decoder by reducing the critical-path delay, the context-model update and renormalization are precomputed in parallel for both the LPS and MPS cases, and the context-model renewal is then selected by the preceding decoding result. Simulation shows that the new HEVC CABAC architecture achieves a maximum performance of 1.01 bins/cycle, twice as fast as the conventional approach. In an ASIC design with a 65nm library, the CABAC architecture handles 224 Mbins/sec, which is sufficient to decode QFHD HEVC video data in real time.
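The speculative MPS/LPS idea can be sketched on a simplified bin decoder. This is not the HEVC-conformant procedure (real CABAC replaces the multiplication below with a small rLPS lookup table and tracks an adaptive context state), but it shows how both successor states can be precomputed so the comparison merely selects one, shortening the critical path:

```python
QUARTER = 256  # renormalisation threshold for a 9-bit range register

def decode_bin(rng, offset, p_lps, bits):
    """Decode one bin by interval subdivision (simplified model).
    Both the MPS and LPS successor states are computed up front;
    the range comparison only picks between them, mirroring the
    paper's speculative precomputation."""
    r_lps = max(1, int(rng * p_lps))
    mps_state = (rng - r_lps, offset)               # stay in the MPS subinterval
    lps_state = (r_lps, offset - (rng - r_lps))     # jump to the LPS subinterval
    is_lps = offset >= rng - r_lps
    rng, offset = lps_state if is_lps else mps_state
    while rng < QUARTER:                            # renormalise
        rng <<= 1
        offset = (offset << 1) | next(bits, 0)      # pull fresh bitstream bits
    return is_lps, rng, offset
```

In the paper's architecture the context-model update for both branches is likewise prepared in parallel, and the decoded bin selects which update is committed.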

Performance Evaluation of Hybrid-SE-MMA Adaptive Equalizer using Adaptive Modulus and Adaptive Step Size (적응 모듈러스와 적응 스텝 크기를 이용한 Hybrid-SE-MMA 적응 등화기의 성능 평가)

  • Lim, Seung-Gag
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.20 no.2
    • /
    • pp.97-102
    • /
    • 2020
  • This paper deals with the Hybrid-SE-MMA (Signed-Error MMA) equalizer, which improves equalization performance by applying an adaptive modulus and an adaptive step size to the SE-MMA adaptive equalizer for minimizing intersymbol interference. In the MMA algorithm, the equalizer tap coefficients are updated using the error signal; in the SE-MMA algorithm, only the sign of the error signal is used in order to simplify the arithmetic of the coefficient update. This simplification yields faster convergence and a reduced computational load per iteration, but no improvement in equalization performance. In this paper, the equalization performance is improved by making the modulus of SE-MMA adaptive, proportional to the power of the equalizer output signal. To compare the improved performance with that of the conventional SE-MMA, the recovered signal constellation at the equalizer output, the residual ISI, the maximum distortion (MD), the MSE, and the SER (which indicates robustness to external noise) were used. Computer simulation shows that Hybrid-SE-MMA improves the residual ISI, MD, MSE, and SER performance compared with SE-MMA.
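The signed-error update can be sketched in a few lines. This is a real-valued simplification (the actual algorithm works on complex I/Q signals with separate moduli per dimension, and the paper's Hybrid scheme additionally makes the modulus and step size adaptive, which is omitted here):

```python
import numpy as np

def se_mma_update(w, x, mu, r2):
    """One SE-MMA tap-coefficient update (real-valued sketch).
    w: tap vector, x: input samples in the delay line,
    mu: step size, r2: modulus constant."""
    y = np.dot(w, x)                   # equalizer output for this symbol
    e = y * (r2 - y ** 2)              # MMA-style modulus error
    return w + mu * np.sign(e) * x     # signed-error: only sign(e) is used,
                                       # replacing a multiply with a sign flip
```

Using only `sign(e)` is what removes the error-magnitude multiplication from each tap update, giving the reduced arithmetic load the abstract describes.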

A Parallel Processing Technique for Large Spatial Data (대용량 공간 데이터를 위한 병렬 처리 기법)

  • Park, Seunghyun;Oh, Byoung-Woo
    • Spatial Information Research
    • /
    • v.23 no.2
    • /
    • pp.1-9
    • /
    • 2015
  • A graphics processing unit (GPU) contains many arithmetic logic units (ALUs). Because the many ALUs can be exploited for parallel processing, a GPU provides efficient data processing. Spatial data require many geographic coordinates to represent the shapes of features on a map. The coordinates are usually stored as geodetic longitude and latitude. To display a map in a 2-dimensional Cartesian coordinate system, the geodetic longitude and latitude must be converted to the Universal Transverse Mercator (UTM) coordinate system. Both the conversion to the target coordinate system and the rendering process that maps the converted coordinates to the screen use complex floating-point computations. In this paper, we propose a parallel processing technique that performs the conversion and the rendering on the GPU to improve performance. Large spatial data are stored on disk in files. To process the large amount of spatial data efficiently, we propose a technique that merges the spatial data files into one large file and accesses it as a memory-mapped file. We implemented the proposed technique and performed an experiment with the 747,302,971 points of the TIGER/Line spatial data. The experiment shows that coordinate conversion with the GPU is 30.16 times faster than the CPU-only method, and rendering is 80.40 times faster than the CPU.
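The per-point, branch-free arithmetic that makes this workload GPU-friendly can be shown with a vectorised projection. As a stand-in for the full UTM conversion (which uses ellipsoidal series expansions per 6-degree zone), this sketch uses the simpler spherical-Mercator formulas; the file name and (lon, lat) float64 layout in the comment are assumptions of the sketch:

```python
import numpy as np

R = 6378137.0  # WGS84 equatorial radius in metres

def project(lon_deg, lat_deg):
    """Vectorised lon/lat -> planar metres (spherical Mercator).
    Every point is independent, so the same code maps directly onto
    the many ALUs of a GPU."""
    lam = np.radians(lon_deg)
    phi = np.radians(lat_deg)
    x = R * lam
    y = R * np.log(np.tan(np.pi / 4 + phi / 2))
    return x, y

# The merged point file can be memory-mapped instead of read, mirroring
# the paper's file-merging + memory-mapped-file access:
# pts = np.memmap("merged_points.bin", dtype=np.float64).reshape(-1, 2)
# x, y = project(pts[:, 0], pts[:, 1])
```

Memory mapping lets the OS page coordinate data in on demand, so the hundreds of millions of points never need to fit in RAM at once.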

The characteristics of Pacioli's Bookkeeping (파치올리 부기론의 특성에 관한 고찰)

  • Yoon Seok-Gon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.10 no.3 s.35
    • /
    • pp.297-306
    • /
    • 2005
  • The 'Compendium of arithmetic, geometry, and proportions and proportionality', published in Venice in 1494, has been recognized as the world's first bookkeeping text. This study reviews the major characteristics of Pacioli's bookkeeping rules, as follows: all the particulars necessary for double-entry bookkeeping were provided; a list of property was drawn up at the start of business; three major books were used; details of daily transactions were considered important; each description in the journal, the record of daily transactions, and the list of property was struck through after entry into the ledger; amount columns were provided and Arabic numerals were used; an annual settlement custom was being initiated; a profit and loss account was prepared at year-end; a trial balance was invariably described; the books were verified prior to closing the accounts; no control account was established; financial statements were not prepared, nor was business analysis performed; finished-goods inventory was not adjusted; marks were assigned to the books; an inter-office account was prepared; branch accounts and branch ledgers were prepared; there were entries of trust; current arrangements were described; the principle of 'cost or market price, whichever is lower' was promoted; a petty cash system was explained; and checks and bills of exchange were used in the bank account. In reviewing these characteristics, the signs of the need to prepare a profit and loss statement and a balance sheet, as well as a trial balance, are evident, and the rules may be considered excellent for the initial stage of double-entry bookkeeping.


Design and Implementation of an Embedded Spatial MMDBMS for Spatial Mobile Devices (공간 모바일 장치를 위한 내장형 공간 MMDBMS의 설계 및 구현)

  • Park, Ji-Woong;Kim, Joung-Joon;Yun, Jae-Kwan;Han, Ki-Joon
    • Journal of Korea Spatial Information System Society
    • /
    • v.7 no.1 s.13
    • /
    • pp.25-37
    • /
    • 2005
  • Recently, with the development of wireless communications and mobile computing, interest in mobile computing has been rising. Mobile computing can be regarded as an environment in which a user carries mobile devices, such as a PDA or a notebook, and shares resources with a server computer via wireless communications. A mobile database is a database used on such mobile devices. Mobile databases can be used in fields such as insurance, banking, and medical treatment. In particular, LBS (Location Based Service), which utilizes the location information of users, has become an essential field of mobile computing. To support LBS in the mobile environment, there must be an Embedded Spatial MMDBMS (Main-Memory Database Management System) that can efficiently manage large spatial data on spatial mobile devices. Therefore, in this paper, we designed and implemented an Embedded Spatial MMDBMS, extended from HSQLDB, an existing MMDBMS for PCs, to manage spatial data efficiently on spatial mobile devices. The Embedded Spatial MMDBMS adopts the spatial data model proposed by ISO (International Organization for Standardization), provides an arithmetic coding method suitable for spatial data, and supports an efficient spatial index using MBR compression and a hashing method suited to spatial mobile devices. In addition, the system offers spatial data display capability on the low-performance processors of spatial mobile devices and supports data caching and synchronization for improved performance of spatial data import/export between the Embedded Spatial MMDBMS and the GIS server.
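The building blocks of such a spatial index can be sketched briefly. The cell size and the grid-hash scheme below are illustrative choices, a toy stand-in for the paper's MBR-compression + hashing index, just to show how a geometry's bounding rectangle maps to hash buckets:

```python
def mbr(points):
    """Minimum bounding rectangle (x_min, y_min, x_max, y_max) of a
    geometry's vertex list -- the quantity the spatial index is built over."""
    xs, ys = zip(*points)
    return (min(xs), min(ys), max(xs), max(ys))

CELL = 1000.0  # grid cell size; an arbitrary choice for this sketch

def grid_cells(box):
    """Map an MBR to the set of grid cells it overlaps; each cell id
    acts as a hash bucket for candidate geometries."""
    x0, y0, x1, y1 = box
    return {(i, j)
            for i in range(int(x0 // CELL), int(x1 // CELL) + 1)
            for j in range(int(y0 // CELL), int(y1 // CELL) + 1)}
```

A query window is hashed the same way, and only geometries sharing a bucket need an exact intersection test, which keeps the work small on a low-performance mobile processor.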


Extraction of the ship movement information by a radar target extractor (Radar Target Extractor에 의한 선박운동정보의 추출에 관한 연구)

  • Lee, Dae-Jae;Kim, Kwang-Sik;Byun, Duck-Soo
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.38 no.3
    • /
    • pp.249-255
    • /
    • 2002
  • This paper describes the extraction of a ship's real-time movement information using a combined full-function ARPA radar and ECS system that displays radar images and an electronic chart together on a single PC screen. The radar target extractor (RTX) board, developed by Marine Electronics Corporation of Korea, receives radar video, trigger, antenna bearing pulse, and heading pulse signals from a radar unit and processes these signals to extract target information. The target data extracted from each pulse repetition interval in the DSPs of the RTX, which is installed in a 16-bit ISA slot of an IBM PC-compatible computer, are formatted into a series of radar target messages. These messages are then transmitted to the host PC and displayed on a single screen. The position data of a target in the range and azimuth directions are stored and used to determine the center of the distributed target by arithmetic averaging after the end of the target is detected. In this system, the electronic chart and radar screens can be displayed separately or simultaneously, and in radar mode all radar target information can be recorded and replayed. Although it is a PC-based radar system, it can provide all the essential information required for the safe and efficient navigation of a ship.
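The centre-finding step mentioned above is plain arithmetic averaging of the stored hits; a minimal sketch (the (range, azimuth) tuple layout is an assumption of this sketch, and plain averaging of azimuth assumes the hits do not straddle the 0/360-degree boundary):

```python
def target_center(hits):
    """Arithmetic mean of the (range, azimuth) hits collected for one
    distributed target, estimating its centre once the end of the
    target has been detected."""
    n = len(hits)
    rng = sum(h[0] for h in hits) / n
    azi = sum(h[1] for h in hits) / n  # caution: no wrap-around handling
    return rng, azi
```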

Design and Analysis of Digit-Serial $AB^{2}$ Systolic Arrays in $GF(2^{m})$ ($GF(2^{m})$ 상에서 새로운 디지트 시리얼 $AB^{2}$ 시스톨릭 어레이 설계 및 분석)

  • Kim Nam-Yeun;Yoo Kee-Young
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.32 no.4
    • /
    • pp.160-167
    • /
    • 2005
  • Among finite field arithmetic operations, division/inversion is known as a basic operation for public-key cryptosystems over $GF(2^{m})$, and it is computed by performing repetitive $AB^{2}$ multiplications. This paper presents a digit-serial-in-serial-out systolic architecture for performing the $AB^{2}$ operation in $GF(2^{m})$. To obtain an L×L digit-serial-in-serial-out architecture, a new $AB^{2}$ algorithm is proposed, together with partitioning, index transformation, and cell merging for the architecture derived from the algorithm. Based on the area-time product, when the digit size L of the digit-serial architecture is selected to be less than about m, the proposed digit-serial architecture is more efficient than the bit-parallel architecture, and when L is selected to be less than about $(1/5)log_{2}(m+1)$, it is more efficient than the bit-serial architecture. In addition, the area-time product complexity of the pipelined digit-serial $AB^{2}$ systolic architecture is approximately $10.9\%$ lower than that of the non-pipelined one, assuming m=160 and L=8. The proposed architecture can be utilized as a basic architecture for a crypto-processor, and it is well suited to VLSI implementation because of its simplicity, regularity, and pipelinability.
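Why $AB^{2}$ is the primitive worth accelerating: inversion (and hence division) in $GF(2^{m})$ reduces to a chain of $AB^{2}$ steps. A software sketch over a toy field (m = 4 with the irreducible polynomial x⁴ + x + 1; a real cryptosystem uses m of 160 or more):

```python
M = 4
POLY = 0b10011  # x^4 + x + 1, an irreducible polynomial chosen for the sketch

def gf_mul(a, b):
    """Shift-and-add multiplication in GF(2^M) with interleaved reduction."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= POLY
    return r

def ab2(a, b):
    """The AB^2 primitive: A times B squared."""
    return gf_mul(a, gf_mul(b, b))

def inverse(a):
    """a^{-1} = a^{2^M - 2} via repeated AB^2: M-1 iterations of
    r <- a * r^2 give a^{2^{M-1} - 1}, and one final squaring
    yields the inverse."""
    r = 1
    for _ in range(M - 1):
        r = ab2(a, r)
    return gf_mul(r, r)
```

Each loop iteration is exactly one $AB^{2}$ operation, which is why a fast systolic $AB^{2}$ array directly speeds up division in a crypto-processor.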

A Comparison of Analysis Methods for Work Environment Measurement Databases Including Left-censored Data (불검출 자료를 포함한 작업환경측정 자료의 분석 방법 비교)

  • Park, Ju-Hyun;Choi, Sangjun;Koh, Dong-Hee;Park, Donguk;Sung, Yeji
    • Journal of Korean Society of Occupational and Environmental Hygiene
    • /
    • v.32 no.1
    • /
    • pp.21-30
    • /
    • 2022
  • Objectives: The purpose of this study is to suggest an optimal method by comparing the analysis methods for work environment measurement datasets including left-censored data, where one or more measurements are below the limit of detection (LOD). Methods: A computer program was used to generate left-censored datasets for various combinations of censoring rate (1% to 90%) and sample size (30 to 300). For the analysis of the censored data, the simple substitution method (LOD/2), the β-substitution method, the maximum likelihood estimation (MLE) method, the Bayesian method, and regression on order statistics (ROS) were all compared. Each method was used to estimate four parameters of the log-normal distribution for the censored dataset: (1) geometric mean (GM), (2) geometric standard deviation (GSD), (3) 95th percentile (X95), and (4) arithmetic mean (AM). The performance of each method was evaluated using relative bias and relative root mean squared error (rMSE). Results: For the largest sample size (n=300), when the censoring rate was less than 40%, the relative bias and rMSE were small for all five methods. When the censoring rate was large (70%, 90%), the simple substitution method was inappropriate because its relative bias was the largest, regardless of the sample size. When the sample size was small and the censoring rate was large, the Bayesian method, the β-substitution method, and the MLE method showed the smallest relative bias. Conclusions: The accuracy and precision of all methods tended to increase as the sample size grew and the censoring rate fell. The simple substitution method was inappropriate when the censoring rate was high, whereas the β-substitution method, the MLE method, and the Bayesian method can be widely applied.
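The simplest of the compared methods, LOD/2 substitution, and the two derived log-normal parameters can be sketched directly (the `None`-for-censored encoding is an assumption of this sketch; the β-substitution, MLE, Bayesian, and ROS methods all require more machinery than fits here):

```python
import math

def gm_gsd_substitution(values, lod, factor=0.5):
    """Simple-substitution estimates of GM and GSD: censored readings
    (None) are replaced by factor*LOD (LOD/2 here) before fitting the
    log-normal.  This is the method the study finds inappropriate at
    high censoring rates."""
    logs = [math.log(v if v is not None else factor * lod) for v in values]
    n = len(logs)
    mu = sum(logs) / n
    sd = math.sqrt(sum((t - mu) ** 2 for t in logs) / (n - 1))
    return math.exp(mu), math.exp(sd)

def x95(gm, gsd):
    """95th percentile of the fitted log-normal: GM * GSD**z, z = 1.645."""
    return gm * gsd ** 1.645

def am(gm, gsd):
    """Arithmetic mean of the fitted log-normal: exp(mu + sigma^2 / 2)."""
    return math.exp(math.log(gm) + math.log(gsd) ** 2 / 2)
```

These are the four parameters (GM, GSD, X95, AM) against which the study measures each method's relative bias and rMSE.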