• Title/Summary/Keyword: Free-space parallel processing

Optical CBC Block Encryption Method using Free Space Parallel Processing of XOR Operations (XOR 연산의 자유 공간 병렬 처리를 이용한 광학적 CBC 블록 암호화 기법)

  • Gil, Sang Keun
    • Korean Journal of Optics and Photonics / v.24 no.5 / pp.262-270 / 2013
  • In this paper, we propose a modified optical CBC (Cipher Block Chaining) encryption method using optical XOR logic operations. The proposed method is optically implemented with dual encoding and a free-space interconnected optical logic gate technique in order to process the XOR operations in parallel. We also suggest a CBC encryption/decryption optical module that can be fabricated with a simple optical architecture. The proposed method makes it possible to encrypt and decrypt vast two-dimensional data very quickly thanks to fast optical parallel processing, and it provides greater security strength than the conventional electronic CBC algorithm because the two-dimensional array yields a longer security key. Computer simulations show that the proposed method is very effective in CBC encryption processing and can be applied even to ECB (Electronic Code Book) mode and CFB (Cipher Feedback) mode.
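
As a software analogue (not the paper's optical implementation), the chaining structure can be sketched as follows, under the simplifying assumption that the per-block cipher is a single XOR with a 2-D key page; in CBC mode each ciphertext page feeds back into the XOR for the next page:

```python
import numpy as np

def cbc_xor_encrypt(blocks, key, iv):
    """CBC chaining with a one-XOR 'block cipher' on 2-D binary pages."""
    out, prev = [], iv
    for p in blocks:
        c = p ^ prev ^ key      # chain with previous ciphertext, then encrypt
        out.append(c)
        prev = c                # ciphertext feeds the next block (CBC)
    return out

def cbc_xor_decrypt(cblocks, key, iv):
    out, prev = [], iv
    for c in cblocks:
        out.append(c ^ key ^ prev)  # undo both XORs in reverse
        prev = c
    return out

rng = np.random.default_rng(0)
pages = [rng.integers(0, 2, (8, 8)) for _ in range(3)]   # 2-D plaintext pages
key, iv = rng.integers(0, 2, (8, 8)), rng.integers(0, 2, (8, 8))
enc = cbc_xor_encrypt(pages, key, iv)
dec = cbc_xor_decrypt(enc, key, iv)
assert all((p == q).all() for p, q in zip(pages, dec))
```

Every XOR here acts element-wise on a whole 2-D page at once, which is the software counterpart of the free-space parallelism the paper exploits optically.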

Optical Implementation of Triple DES Algorithm Based on Dual XOR Logic Operations

  • Jeon, Seok Hee;Gil, Sang Keun
    • Journal of the Optical Society of Korea / v.17 no.5 / pp.362-370 / 2013
  • In this paper, we propose a novel optical implementation of a 3DES algorithm based on dual XOR logic operations for a cryptographic system. In the schematic architecture, the optical 3DES system consists of dual XOR logic operations, where the XOR logic operation is implemented by using a free-space interconnected optical logic gate method. The main point of the proposed 3DES method is to build a more secure cryptosystem, which is achieved by encrypting an individual private key separately; this encrypted private key is then used to decrypt the plain text from the cipher text. Schematically, the proposed optical configuration of this cryptosystem can be used for the decryption process as well. The major advantage of this optical method is that vast 2-D data can be processed in parallel very quickly regardless of data size. The proposed scheme can be applied to watermark authentication, and can also be applied to OTP encryption if every private key is different and is used for encryption only once. When a security key has data of 512×256 pixels in size, our proposed method ciphers 2,048 DES blocks or 1,024 3DES blocks at once. Moreover, because the key length is equal to 512×256 bits, 2^(512×256) attempts are required to find the correct key. Numerical simulations show that encryption and decryption are carried out successfully with the proposed 3DES algorithm.
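
The quoted block counts can be reproduced with simple arithmetic, assuming 64-bit DES blocks and a 3DES unit consuming two 64-bit sub-blocks per operation (an assumption; the abstract does not state the partitioning):

```python
# Back-of-envelope check of the figures quoted above (assumed partitioning).
key_bits = 512 * 256              # 131,072 bits in the 2-D key page
print(key_bits // 64)             # 2048 -> "2,048 DES blocks"
print(key_bits // 128)            # 1024 -> "1,024 3DES blocks"
# Exhaustive key search: 2**key_bits candidate keys, i.e. 2^(512x256).
```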

Optical Pipelined Multi-bus Interconnection Network Intrinsic Topologies

  • d'Auriol, Brian Joseph
    • ETRI Journal / v.39 no.5 / pp.632-642 / 2017
  • Digital all-optical parallel computing is an important research direction and spans conventional devices and convergent nano-optics deployments. Optical bus-based interconnects offer interesting properties, such as the relative speed-up or slow-down of information between optical signals. This aspect is harnessed in the newly proposed All-Optical Linear Array with a Reconfigurable Pipelined Bus System (OLARPBS) model. However, the physical realization of such communication interconnects needs to be considered. This paper considers spatial layouts of processing elements along with the optical bus light paths necessary to realize the corresponding interconnection requirements. A metric based on the degree of required physical constraint is developed to characterize the variety of possible solutions. Simple algorithms that determine spatial layouts are given. It is shown that certain communication interconnection structures have associated intrinsic topologies.
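
The relative speed-up or slow-down between optical signals is the mechanism behind coincident-pulse addressing in LARPBS-style bus models: the reference pulse incurs an extra fixed delay per bus segment, so the offset the sender chooses between select and reference pulses determines the one processor where they coincide. A toy sketch of that idea follows (a standard LARPBS technique; whether OLARPBS uses exactly this scheme is an assumption here):

```python
# Toy model of coincident-pulse addressing on a pipelined optical bus.
# Reference pulse: 2 time units per segment; select pulse: 1 unit per segment.
# They coincide only at the processor `offset` hops downstream of the sender.
def coincidence_points(offset: int, n_processors: int) -> list[int]:
    hits = []
    for k in range(n_processors):   # k = hops downstream of the sender
        if 2 * k == offset + k:     # reference arrival == select arrival
            hits.append(k)
    return hits

print(coincidence_points(offset=3, n_processors=8))  # [3]: 3 hops downstream
```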

Integrity, Orbit Determination and Time Synchronisation Algorithms for Galileo

  • Merino, M.M. Romay;Medel, C. Hernandez;Piedelobo, J.R. Martin
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / v.2 / pp.9-14 / 2006
  • Galileo is the European Global Navigation Satellite System, under civilian control, and consists of a constellation of medium Earth orbit satellites and its associated ground infrastructure. Galileo will provide its users with highly accurate global positioning services and the associated integrity information. The elements in charge of computing Galileo navigation and integrity information are the OSPF (Orbit Synchronization Processing Facility) and the IPF (Integrity Processing Facility), within the Galileo Ground Mission Segment (GMS). Navigation algorithms play a key role in the provision of the Galileo mission, since they are responsible for computing the essential information users need to calculate their position: the satellite ephemeris and clock offsets. This information is generated in the Galileo Ground Mission Segment and broadcast by the satellites within the navigation signal, together with the expected a-priori accuracy (SISA: Signal-In-Space Accuracy), the parameter that, in fault-free conditions, overbounds the predicted ephemeris and clock model errors for the Worst User Location. In parallel, the integrity algorithms of the GMS are responsible for real-time monitoring of satellite status, with timely alarm messages in case of failures. The accuracy of the integrity monitoring system is characterized by the SISMA (Signal-In-Space Monitoring Accuracy), which is also broadcast to the users within the integrity message.
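
The division of labor described above (SISA predicted on the ground, SISMA characterizing the monitor) lends itself to a simple threshold test. The sketch below is illustrative only, with a made-up alarm multiplier; it is not the OSPF/IPF algorithm:

```python
# Illustrative integrity check: flag a satellite when the estimated
# signal-in-space error (SISE) exceeds a threshold scaled by SISMA.
# k_fa is a fictitious false-alarm multiplier chosen for this example.
def integrity_flag(sise_estimate_m: float, sisma_m: float, k_fa: float = 5.2) -> bool:
    threshold_m = k_fa * sisma_m          # better monitoring => tighter threshold
    return sise_estimate_m > threshold_m  # True => broadcast a timely alarm

print(integrity_flag(1.2, 0.7))  # False: error within the monitored bound
print(integrity_flag(6.0, 0.7))  # True: raise the alarm
```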

A Walsh-Based Distributed Associative Memory with Genetic Algorithm Maximization of Storage Capacity for Face Recognition

  • Kim, Kyung-A;Oh, Se-Young
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.09a / pp.640-643 / 2003
  • A Walsh function based associative memory is capable of storing m patterns in a single pattern storage space by Walsh encoding each pattern. Furthermore, an input pattern can be matched against the stored patterns extremely fast using algorithmic parallel processing. As such, this special type of memory is ideal for real-time processing of large-scale information. However, this remarkable efficiency generates a large amount of crosstalk between stored patterns, which incurs mis-recognition. This crosstalk is a function of the set of different sequencies (numbers of zero crossings) of the Walsh functions associated with the stored patterns. This sequency set is thus optimized in this paper to minimize mis-recognition as well as to maximize memory saving. The Walsh memory is applied here to the problem of face recognition, where PCA is applied for dimensionality reduction. The maximum Walsh spectral component and a genetic algorithm (GA) are applied to determine the optimal Walsh function set to be associated with the data to be stored. The experimental results indicate that the proposed methods provide a novel and robust technology to achieve error-free, real-time, and memory-saving recognition of large-scale patterns.
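
The storage scheme can be sketched in a few lines: modulate each pattern by a Walsh function of a distinct sequency, superpose everything in one array, and recall by demodulating with the matching Walsh function. The sketch below uses Hadamard rows as stand-ins for sequency-ordered Walsh functions; it illustrates the principle, not the paper's GA-optimized scheme:

```python
import numpy as np

def walsh_matrix(n):
    """Hadamard matrix; its rows serve here as (unordered) Walsh functions."""
    h = np.array([[1]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

L, dim = 8, 4                        # Walsh length, pattern dimensionality
W = walsh_matrix(L)
patterns = np.random.randn(3, dim)   # three patterns to store
rows = [1, 2, 4]                     # distinct rows: the "sequency set"

# Store: superpose all Walsh-modulated patterns in ONE L x dim array.
memory = sum(np.outer(W[r], patterns[i]) for i, r in enumerate(rows))

# Recall pattern 1: demodulation cancels the others by Walsh orthogonality
# (crosstalk appears once patterns are quantized, hence the paper's GA).
recalled = W[rows[1]] @ memory / L
print(np.allclose(recalled, patterns[1]))   # True
```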

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, realizing flexible storage expansion for a massive amount of unstructured log data, and executing the considerable number of functions needed to categorize and analyze the stored unstructured log data, are difficult in existing computing environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to handle with the existing computing infrastructure's analysis tools and management systems.

The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, and can flexibly expand resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that let the system continue operating after it recovers from a malfunction.

Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have strict schemas that are inappropriate for processing unstructured log data; moreover, such schemas cannot expand across nodes when the stored data must be distributed to various nodes as the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema structure makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data is rapidly increasing, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis performed by the MongoDB module, the Hadoop-based analysis module, and the MySQL module, per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation of log data insertion and query performance is carried out against a log data processing system that uses only MySQL, and it demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a log data insert performance evaluation of MongoDB for various chunk sizes.
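
The collector's routing step maps naturally onto a small dispatcher: records needing real-time graphs go to MySQL, everything else goes to MongoDB as schema-free documents. Only the module split comes from the paper; all identifiers, field names, credentials, and the classification rule below are hypothetical:

```python
from pymongo import MongoClient
import mysql.connector

# Hypothetical connections; hostnames, database and collection names are
# placeholders, not values from the paper.
mongo_events = MongoClient("mongodb://localhost:27017")["banklogs"]["events"]
rt_db = mysql.connector.connect(user="loguser", password="...", database="rtlogs")

REALTIME_TYPES = {"error", "security"}     # assumed classification rule

def route(record: dict) -> None:
    """Dispatch one collected log record, e.g.
    {"type": "transfer", "ts": 1380000000, "payload": {...}}."""
    if record["type"] in REALTIME_TYPES:
        cur = rt_db.cursor()
        cur.execute(
            "INSERT INTO rt_log (type, ts, payload) VALUES (%s, %s, %s)",
            (record["type"], record["ts"], str(record["payload"])),
        )
        rt_db.commit()                     # visible immediately to the grapher
    else:
        mongo_events.insert_one(record)    # schema-free bulk path (MongoDB)
```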