• Title/Summary/Keyword: Parallel Implementation


Design and Analysis of Digit-Serial $AB^{2}$ Systolic Arrays in $GF(2^{m})$ ($GF(2^{m})$ 상에서 새로운 디지트 시리얼 $AB^{2}$ 시스톨릭 어레이 설계 및 분석)

  • Kim Nam-Yeun;Yoo Kee-Young
    • Journal of KIISE: Computer Systems and Theory / v.32 no.4 / pp.160-167 / 2005
  • Among finite field arithmetic operations, division/inversion is a basic operation for public-key cryptosystems over $GF(2^{m})$ and is computed by repeated $AB^{2}$ multiplication. This paper presents a digit-serial-in-serial-out systolic architecture for performing the $AB^{2}$ operation in $GF(2^{m})$. To obtain an L×L digit-serial-in-serial-out architecture, a new $AB^{2}$ algorithm is proposed, together with partitioning, index transformation, and cell merging for the architecture derived from the algorithm. Based on the area-time product, the proposed digit-serial architecture is more efficient than the bit-parallel architecture when the digit size L is selected to be less than about m, and more efficient than the bit-serial architecture when L is selected to be less than about $(1/5)log_{2}(m+1)$. In addition, the area-time product of the pipelined digit-serial $AB^{2}$ systolic architecture is approximately $10.9\%$ lower than that of the nonpipelined one, assuming m=160 and L=8. The proposed architecture can serve as a basic building block of a crypto-processor and is well suited to VLSI implementation because of its simplicity, regularity, and pipelinability.
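For readers without a hardware background, the following is a minimal software sketch of the $AB^{2}$ primitive itself, not of the digit-serial systolic architecture in the paper; the field size and irreducible polynomial are illustrative choices.

```python
# Minimal software model of the AB^2 operation in GF(2^m) over a polynomial basis.
# The field size m and the irreducible polynomial are illustrative, not the
# parameters or the digit-serial systolic structure used in the paper.

def gf2m_mul(a: int, b: int, m: int, poly: int) -> int:
    """Multiply two GF(2^m) elements (bit-packed polynomials) modulo poly."""
    r = 0
    while b:
        if b & 1:
            r ^= a              # add (XOR) the current shift of a
        b >>= 1
        a <<= 1
        if (a >> m) & 1:        # degree reached m: reduce by the field polynomial
            a ^= poly
    return r

def ab2(a: int, b: int, m: int, poly: int) -> int:
    """A * B^2 in GF(2^m), the primitive repeated to compute division/inversion."""
    return gf2m_mul(a, gf2m_mul(b, b, m, poly), m, poly)

# Example over GF(2^4) with f(x) = x^4 + x + 1:
assert ab2(0b0110, 0b0011, m=4, poly=0b10011) == 0b1101
```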

Implementation of High-radix Modular Exponentiator for RSA using CRT (CRT를 이용한 하이래딕스 RSA 모듈로 멱승 처리기의 구현)

  • 이석용;김성두;정용진
    • Journal of the Korea Institute of Information Security & Cryptology / v.10 no.4 / pp.81-93 / 2000
  • As a methodological approach to improving the performance of modular exponentiation, the primary arithmetic in the RSA crypto algorithm, we present a new RSA hardware architecture based on high-radix modular multiplication and the CRT (Chinese Remainder Theorem). By implementing the modular multiplier with radix-16 arithmetic, we reduced the number of PEs (Processing Elements) to a quarter of that required by the binary arithmetic scheme, which likewise reduces the number of clock cycles and the delay of the pipelining flip-flops to a quarter. Because the receiver knows p and q, the factors of N, the CRT can be applied to the decryption process. To use the CRT, we built two s/2-bit multipliers operating in parallel during decryption, which achieves performance 4 times faster than without the CRT. In the encryption phase, the two s/2-bit multipliers can be connected to form an s-bit linear multiplier for s-bit arithmetic. We limited the encryption exponent size to 17 bits to maintain high speed. We implemented a linear-array modular multiplier by projecting the DG (dependence graph) of the Montgomery algorithm horizontally. The proposed hardware performs encryption at 15 Mbps and decryption at 1.22 Mbps when estimated with reference to the Samsung 0.5 um CMOS Standard Cell Library, which is the fastest among publications at present.
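As a software illustration of why the CRT halves the operand size in decryption, here is a minimal sketch; the toy primes and textbook key are illustrative, and the paper's radix-16 Montgomery hardware is not modeled here.

```python
# Minimal sketch of CRT-based RSA decryption: two half-size exponentiations
# (mod p and mod q) replace one full-size exponentiation mod N, which is what the
# two parallel s/2-bit multipliers exploit. Key values below are textbook toys.

def rsa_decrypt_crt(c: int, d: int, p: int, q: int) -> int:
    dp, dq = d % (p - 1), d % (q - 1)   # reduced private exponents
    q_inv = pow(q, -1, p)               # precomputed q^(-1) mod p
    mp = pow(c, dp, p)                  # half-size exponentiation mod p
    mq = pow(c, dq, q)                  # half-size exponentiation mod q
    h = (q_inv * (mp - mq)) % p         # Garner's recombination step
    return mq + h * q

# Toy key: p=61, q=53, N=3233, e=17, d=2753; message m=65
c = pow(65, 17, 3233)
assert rsa_decrypt_crt(c, 2753, 61, 53) == 65
```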

On a High-Speed Implementation of LILI-128 Stream Cipher Using FPGA/VHDL (FPGA/VHDL을 이용한 LILI-128 암호의 고속화 구현에 관한 연구)

  • 이훈재;문상재
    • Journal of the Korea Institute of Information Security & Cryptology / v.11 no.3 / pp.23-32 / 2001
  • Since the LILI-128 cipher is a clock-controlled keystream generator, the keystream data rate degrades in a clock-synchronized hardware logic design. Basically, the clock-controlled $LFSR_d$ in the LILI-128 cipher requires a system clock that is 1~4 times faster; therefore, if a single common clock is used, the system throughput is lowered. Accordingly, this paper proposes a 4-bit parallel $LFSR_d$ in which each register bit includes four selectable data paths for the feedback and shifting within the $LFSR_d$. Furthermore, the timing of the proposed design is simulated using MAX+plus II from ALTERA, the logic circuit is implemented on an FPGA device (EPF10K20RC240-3), and the throughput is shown to be stable up to a rate of 50 Mbps with a 50 MHz system clock (higher than the 45 Mbps rate, with the maximum delay path in the proposed design below 20 ns). Finally, we translate and simulate our FPGA/VHDL design for a Lucent ASIC device (LV160C, 0.13 $\mu m$ CMOS, 1.5 V technology), achieving a throughput of about 500 Mbps with the 0.13 $\mu m$ process for a maximum path delay below 1.8 ns.
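A behavioural sketch of the parallelization idea, advancing the clock-controlled register by a variable number of steps per system clock, is given below; the register length and feedback taps are illustrative, not the actual LILI-128 $LFSR_d$ parameters.

```python
# Behavioural model of the 4-bit parallel idea: instead of shifting LFSR_d once per
# clock, it is advanced by c steps (c in 1..4, chosen by the clock control) within a
# single system clock. Register width and feedback taps are illustrative only.

def lfsr_step(state: int, width: int, taps: int) -> int:
    """One Fibonacci-style shift: the feedback bit is the parity of the tapped bits."""
    fb = bin(state & taps).count("1") & 1
    return ((state << 1) | fb) & ((1 << width) - 1)

def lfsr_jump(state: int, steps: int, width: int, taps: int) -> int:
    """Advance the register by `steps` shifts at once; in hardware this is what the
    four alternative data paths per register bit compute combinationally."""
    for _ in range(steps):
        state = lfsr_step(state, width, taps)
    return state

state = 0b1011001           # illustrative 7-bit register contents
c = 3                       # clock-control value in 1..4
state = lfsr_jump(state, c, width=7, taps=0b1000001)
```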

Implementation of RSA modular exponentiator using Division Chain (나눗셈 체인을 이용한 RSA 모듈로 멱승기의 구현)

  • 김성두;정용진
    • Journal of the Korea Institute of Information Security & Cryptology / v.12 no.2 / pp.21-34 / 2002
  • In this paper we propose a new hardware architecture for modular exponentiation using the division chain method proposed in (2). Modular exponentiation with a division chain is performed by recoding an exponent E as a mixed form of multiplications and additions with divisors d=2 or $d=2^{i}+1$ and the respective remainders r. This computes the modular exponentiation in about $1.4log_{2}E$ multiplications on average, far fewer than the $2log_{2}E$ iterations of the conventional binary method. We designed a linear systolic array multiplier with pipelining, using a horizontal projection of its data dependence graph. Thus, for a k-bit key, two k-bit data frames can be input simultaneously, and two modular multipliers, each consisting of k/2+3 PEs (Processing Elements), can operate in parallel to achieve 100% throughput. We also propose a new encoding scheme for representing the divisors and remainders of the division chain so as to keep the data path regular. When synthesized to an ASIC using the Samsung 0.5 um CMOS standard cell library, the critical path delay is 4.24 ns, and the resulting performance is estimated to be about 140 Kbps for a 1024-bit data frame at a 200 MHz clock. In the decryption process, the speed can be raised to 560 Kbps by using the CRT (Chinese Remainder Theorem). Furthermore, to satisfy real-time requirements we can choose a small public exponent E, such as 3, 17, or $2^{16}+1$, in the encryption and verification processes, in which case the performance can reach 7.3 Mbps.
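The identity behind division-chain recoding, E = d·q + r so that $M^{E} = (M^{q})^{d}\cdot M^{r}$, can be sketched in a few lines; the greedy divisor choice below is a stand-in, not the recoding rule of the paper, and the small powers are delegated to Python's built-in pow.

```python
# Minimal sketch of exponentiation via a division chain: recode E = d*q + r with
# d = 2 or d = 2^i + 1, and use x^E = (x^q)^d * x^r (mod n), recursing on q.
# The greedy choice of d below is an illustrative stand-in for the paper's recoding.

def pow_division_chain(x: int, e: int, n: int, divisors=(2, 3, 5, 9, 17)) -> int:
    if e == 0:
        return 1 % n
    if e == 1:
        return x % n
    d = min((d for d in divisors if d <= e), key=lambda d: e % d)  # smallest remainder
    q, r = divmod(e, d)
    y = pow_division_chain(x, q, n)
    return (pow(y, d, n) * pow(x, r, n)) % n                       # (x^q)^d * x^r

assert pow_division_chain(7, 65537, 3233) == pow(7, 65537, 3233)
```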

A Study on the Proposal of an Integration Model for Library Collaboration Instruction (도서관협력수업의 통합모형 제안에 관한 연구)

  • Byeong-Kee Lee
    • Journal of the Korean Society for Library and Information Science / v.57 no.4 / pp.25-47 / 2023
  • Library collaboration instruction (LCI) is a process in which a classroom teacher and a librarian collaborate to co-plan, co-implement, and co-assess instruction. LCI has been studied and modeled along various dimensions, such as the level of collaboration, information activities, and time scheduling. However, there is no integrated model that comprehensively covers teacher-librarian collaboration. The purpose of this study is to propose a schematic integration model for LCI by comparing and analyzing various models along five dimensions (level of collaboration, information activities, collaborative approach, time scheduling, and technological integration). The main features reflected in the integration model for LCI are as follows. First, in terms of the level of collaboration, the LCI integration model reflects elements such as library-based teacher-led instruction and a cross-curricular integrated curriculum. Second, in terms of information activities, the LCI integration model reflects inquiry activities in social studies and science subjects in addition to the information use process. Third, in terms of collaborative approach, the LCI integration model distinguishes approaches such as lead-observation instruction and parallel station instruction. Fourth, in terms of time scheduling, the LCI integration model takes into account the Korean national curriculum and scheduling methods. Fifth, in terms of technology integration, the LCI integration model reflects the PICRAT model, modified from the perspective of library collaboration instruction.

Policy Suggestions Regarding to Soil Quality Levels in Korea from a Comparison Study of the United States, the United Kingdom, Germany, the Netherlands, and Denmark's Soil Quality Policies (토양질 기준에 관한 주요 외국 정책의 비교분석을 통한 우리나라의 토양질 기준 개념설정과 적용)

  • Park Yong-Ha;Yang Jae-E;Ok Yong-Sik
    • Journal of Soil and Groundwater Environment / v.10 no.4 / pp.1-12 / 2005
  • Soil quality policies of the United States, the United Kingdom, the Netherlands, Germany, and Denmark were analyzed to suggest how Korea could improve its soil quality concept and its implementation. All of these countries meet four criteria: i) soil quality levels for contaminants are grounded in the concept of contaminant risk to recipients (humans and the ecosystem); ii) no single soil quality value can be a magic number that determines whether a site is contaminated; to determine the risk of a site, a risk assessment of that site should follow; iii) contaminant concentrations at a site do not always correspond directly to the risk posed to humans and the ecosystem at that site; and iv) soil quality levels are adopted based on land uses and plans. Considering the room for improving our policies and the analysis of other countries' legislation on soil quality levels, Korean policy could be developed in the following directions: i) our concept of soil quality levels needs to be developed on a scientific and rational basis; ii) soil quality levels and risk assessment should be implemented in parallel as tools for determining site contamination; and iii) soil quality levels depending on land uses and plans should be developed in line with a rational and scientific concept of risk. The efficacy of Korean policy on soil quality levels would be increased by applying the concepts of SCL (Soil Contamination Level) and SRL (Soil Regulatory Level) developed in this research, implementing soil quality levels and risk assessment of contaminated sites in conjunction, and classifying land uses into three categories based on the sensitivity of recipients (humans and the ecosystem) to contaminants in soil.

An Exploratory Study on the Experts' Perception of Science Curriculum Localization Policy: Focus on the Revision of the Arrangement and Implementation Guideline and the Achievement Standard of Curriculum (과학과 교육과정 지역화 정책에 대한 전문가 인식 탐색 -교육과정 편성·운영 지침 및 성취기준 개정을 중심으로-)

  • Chun, Joo-young;Lee, Gyeong-geon;Hong, Hun-gi
    • Journal of The Korean Association For Science Education / v.41 no.6 / pp.483-499 / 2021
  • The curriculum localization policy is closely related to the decentralization and autonomy policy, which is a direction of the 2022 revised curriculum. In particular, considering the continuously expanding and changing environment and contents of science education, localization of the science curriculum has the advantage of advancing expertise through diversity. In this paper, experts' perceptions of the science curriculum localization policy are used to draw implications for the curriculum revision, focusing on the 'MPOE (Metropolitan and Provincial Offices of Education) curriculum arrangement and implementation guidelines' (hereinafter 'guidelines') and the revision of the achievement standards of the science curriculum. In conclusion, the study participants considered the potential for expanding curriculum localization to be high owing to the unique characteristics of science practices, and they located the appropriate level of localization at the 'district office of education or village' level, between the MPOE level and the school level. When localization reaches the school level in the future, they considered it necessary to discuss linkage with teacher policies such as teacher competency, noting that the level of the teachers could become the level of localization. In addition, there was a common perception that for the science 'guidelines' to be localized, the 17 MPOEs must in parallel be given the authority to organize some achievement standards autonomously, and that 'restructuring or slimming of achievement standards' should precede such localization of achievement standards. Participants also predicted that the curriculum localization policy would enhance the diversification and autonomy of the science curriculum, and, because the establishment of achievement standards is directly related to evaluation, they recognized the need to refine policies such as adding new evaluation provisions to future science 'guidelines'. Finally, considering the characteristics of the science subject, it was mentioned that regional intensive science education policies need to be specified in the 'guidelines' themselves, beyond the localization of teaching materials.

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer system inspection and process optimization to customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, existing computing environments make it difficult to realize flexible storage expansion for a massive amount of unstructured log data and to execute the many functions needed to categorize and analyze the stored data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to handle with the existing computing infrastructure's analysis tools and management system. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, and it can flexibly expand resources such as storage space and memory when storage must be extended or the volume of log data increases rapidly. Moreover, to overcome the processing limits of the existing analysis tool when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because the HDFS (Hadoop Distributed File System) stores data by replicating the blocks of the aggregated log data, the proposed system offers automatic restore functions so that it can continue to operate after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL database MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are ill suited to processing unstructured log data, and their strict schemas make it hard to add nodes when the stored data must be distributed across nodes as the amount of data increases rapidly. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand the database by distributing data across nodes when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, MongoDB, a representative document-oriented store with a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema structure makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data grows rapidly, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to the type of log and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis performed by the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, measuring log insert and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a log data insert performance evaluation of MongoDB for various chunk sizes.
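As a rough software sketch of the routing role of the log collector module described above, consider the following; the connection string, database and collection names, and the "type" field are assumptions for illustration, and the real-time MySQL path is only stubbed.

```python
# Rough sketch of the log collector module's routing decision: logs needed for
# real-time analysis go to the MySQL module, while all other unstructured logs go to
# the schema-free MongoDB module for later Hadoop-based batch analysis. Connection
# string, database/collection names, and the "type" field are illustrative assumptions.

import json
import pymongo

mongo = pymongo.MongoClient("mongodb://localhost:27017")
raw_logs = mongo["banklogs"]["raw_logs"]            # document-oriented, free schema

def collect(raw_line: str) -> None:
    """Classify one incoming log record and route it to the appropriate store."""
    entry = json.loads(raw_line)
    if entry.get("type") == "realtime":
        insert_into_mysql(entry)                    # stub for the MySQL module path
    else:
        raw_logs.insert_one(entry)                  # MongoDB path; auto-sharding
                                                    # absorbs rapid data growth

def insert_into_mysql(entry: dict) -> None:
    pass  # real-time relational path omitted in this sketch
```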