• Title/Summary/Keyword: Computing Resource


A Design of Secure Communication Architecture Applying Quantum Cryptography

  • Shim, Kyu-Seok;Kim, Yong-Hwan;Lee, Wonhyuk
    • Journal of Information Science Theory and Practice
    • /
    • v.10 no.spc
    • /
    • pp.123-134
    • /
    • 2022
  • Existing network cryptography systems are threatened by recent developments in quantum computing. For example, the Shor algorithm, which can be run on a quantum computer, is capable of breaking public-key-based network cryptography systems in a short time. Therefore, research on new cryptography systems is being actively conducted. The most powerful candidates are quantum key distribution (QKD) and post-quantum cryptography (PQC) systems; in this study, a network based on both QKD and PQC is proposed, along with a quantum key management system (QKMS) and a Q-Controller to operate the network efficiently. The proposed quantum cryptography communication network uses QKD as its backbone and replaces QKD with PQC at the user end to overcome the shortcomings of QKD. This paper presents the functional requirements of the QKMS and Q-Controller, which can be utilized to perform efficient network resource management.
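The backbone/edge split described above can be pictured as a per-link key-source rule. This is an illustrative sketch, not the paper's QKMS or Q-Controller interface; the `Link` type and the random stand-in keys are assumptions:

```python
# Illustrative sketch: choose a key mechanism per link segment.
from dataclasses import dataclass
import secrets

@dataclass
class Link:
    src: str
    dst: str
    segment: str  # "backbone" (QKD-capable) or "edge" (user end)

def derive_key(link: Link) -> tuple[str, bytes]:
    """Return (mechanism, key) for a link: QKD on the backbone, PQC at the edge."""
    if link.segment == "backbone":
        # Stand-in for a key delivered by the QKD layer via the QKMS.
        return ("QKD", secrets.token_bytes(32))
    # Stand-in for a key agreed with a post-quantum KEM (e.g., ML-KEM).
    return ("PQC", secrets.token_bytes(32))

mech, key = derive_key(Link("site-A", "site-B", "backbone"))
print(mech, len(key))  # QKD 32
```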

Behavioral Analysis Zero-Trust Architecture Relying on Adaptive Multifactor and Threat Determination

  • Chit-Jie Chew;Po-Yao Wang;Jung-San Lee
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.9
    • /
    • pp.2529-2549
    • /
    • 2023
  • To effectively lower the risk of cyber threats, the zero-trust architecture (ZTA) has been gradually deployed in fields such as smart cities, the Internet of Things, and cloud computing. The main concept of ZTA is to maintain a distrustful attitude towards all devices, identities, and communication requests, granting only the minimum access and validity. Unfortunately, adopting the most secure and complex multifactor authentication has placed a troublesome and unfriendly burden on enterprises and employees. Thus, the authors aim to incorporate machine learning technology to build an employee-behavior-analysis ZTA. The new framework is characterized by the ability to adjust the difficulty of identity verification according to the user's behavioral patterns and the risk degree of the resource. In particular, three key factors, including a one-time password, face features, and an authorization code, have been applied to design the adaptive multifactor continuous authentication system. Simulations have demonstrated that the new work can eliminate the need to maintain heavyweight authentication and ensure an employee-friendly experience.
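The adaptive step-up idea, harder verification for riskier behavior and resources, can be sketched as a small policy function. The weights and thresholds below are assumptions, not the paper's trained model:

```python
# Hypothetical sketch of adaptive multifactor selection: the factor set grows
# with a combined risk score from user behavior and resource sensitivity.
def required_factors(behavior_anomaly: float, resource_risk: float) -> list[str]:
    """behavior_anomaly and resource_risk are in [0, 1]; higher means riskier."""
    risk = 0.5 * behavior_anomaly + 0.5 * resource_risk
    factors = ["one-time password"]          # always required
    if risk >= 0.4:
        factors.append("face feature")       # step up for moderate risk
    if risk >= 0.7:
        factors.append("authorization code") # full challenge for high risk
    return factors

print(required_factors(0.1, 0.2))  # ['one-time password']
print(required_factors(0.9, 0.8))  # all three factors
```

A trusted user reading a low-risk resource passes with a single factor, which is the employee-friendly behavior the simulations measure.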

Design of Distributed Cloud System for Managing large-scale Genomic Data

  • Seine Jang;Seok-Jae Moon
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.16 no.2
    • /
    • pp.119-126
    • /
    • 2024
  • The volume of genomic data is constantly increasing in various modern industries and research fields. This growth presents new challenges and opportunities in terms of the quantity and diversity of genetic data. In this paper, we propose a distributed cloud system for integrating and managing large-scale gene databases. By introducing a distributed data storage and processing system based on the Hadoop Distributed File System (HDFS), various formats and sizes of genomic data can be efficiently integrated. Furthermore, by leveraging Spark on YARN, efficient management of distributed cloud computing tasks and optimal resource allocation are achieved. This establishes a foundation for the rapid processing and analysis of large-scale genomic data. Additionally, by utilizing BigQuery ML, machine learning models are developed to support genetic search and prediction, enabling researchers to more effectively utilize data. It is expected that this will contribute to driving innovative advancements in genetic research and applications.
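The HDFS/Spark pipeline the abstract describes is, at its core, a partition-map-reduce pattern. The plain-Python sketch below imitates that pattern on toy variant records; the records, the two-way partitioning, and the per-chromosome count are all illustrative, and real work would use PySpark on YARN over HDFS blocks:

```python
# Minimal map-reduce sketch: count variants per chromosome across partitions,
# the kind of aggregation Spark on YARN would distribute over HDFS blocks.
from collections import Counter

records = [  # hypothetical variant records: (chromosome, position)
    ("chr1", 12345), ("chr2", 67890), ("chr1", 13579), ("chrX", 24680),
]

def partition(data, n):
    return [data[i::n] for i in range(n)]  # stand-in for HDFS block placement

def map_phase(part):
    return Counter(chrom for chrom, _ in part)  # per-partition counts

def reduce_phase(counters):
    total = Counter()
    for c in counters:
        total += c
    return total

counts = reduce_phase(map_phase(p) for p in partition(records, 2))
print(dict(counts))  # {'chr1': 2, 'chr2': 1, 'chrX': 1}
```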

A Design of Integrated Scientific Workflow Execution Environment for A Computational Scientific Application (계산 과학 응용을 위한 과학 워크플로우 통합 수행 환경 설계)

  • Kim, Seo-Young;Yoon, Kyoung-A;Kim, Yoon-Hee
    • Journal of Internet Computing and Services
    • /
    • v.13 no.1
    • /
    • pp.37-44
    • /
    • 2012
  • Numerous scientists engaged in compute-intensive research require more computing facilities than before, even as computing resources and techniques become increasingly advanced. For this reason, many e-Science environments have been actively funded and established around the world, but scientists still look for an intuitive experimental environment that guarantees improved facilities without additional configuration or installation. In this paper, we present an integrated scientific workflow execution environment for scientific applications that supports workflow design on high-performance computing infrastructure and is accessible from a web browser. The portal supports automated consecutive execution of computation jobs in the order defined by the workflow design tool, with an execution service that batches each job over distributed grid resources according to its characteristics. The portal's workflow editor presents a high-level, easy-to-use front end with a monitoring service that shows the status of workflow execution in real time, so users can check intermediate data during experiments. Scientists can therefore take advantage of the environment to improve research productivity based on high-throughput computing (HTC).
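The portal's automated consecutive execution amounts to running jobs in dependency order. A minimal sketch using Python's standard library, where the job names and the `run` stub are hypothetical:

```python
# Run workflow jobs in topological order of their dependencies.
from graphlib import TopologicalSorter

workflow = {                 # job -> jobs it depends on (hypothetical DAG)
    "preprocess": set(),
    "simulate":   {"preprocess"},
    "analyze":    {"simulate"},
    "visualize":  {"simulate"},
}

def run(job: str) -> str:
    return f"ran {job}"      # stand-in for submitting a batch job to the grid

order = list(TopologicalSorter(workflow).static_order())
results = [run(job) for job in order]
print(order)  # e.g. ['preprocess', 'simulate', 'analyze', 'visualize']
```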

An elastic distributed parallel Hadoop system for bigdata platform and distributed inference engines (동적 분산병렬 하둡시스템 및 분산추론기에 응용한 서버가상화 빅데이터 플랫폼)

  • Song, Dong Ho;Shin, Ji Ae;In, Yean Jin;Lee, Wan Gon;Lee, Kang Se
    • Journal of the Korean Data and Information Science Society
    • /
    • v.26 no.5
    • /
    • pp.1129-1139
    • /
    • 2015
  • The inference process generates additional triples from knowledge represented in RDF triples of semantic web technology. Tens of millions of triples as initial big data, together with the additionally inferred triples, become a knowledge base for applications such as QA (question-and-answer) systems. The inference engine requires more computing resources to process the triples generated during inference. Additional computing resources supplied by an underlying resource pool in cloud computing can shorten the execution time. This paper addresses an algorithm to allocate the number of computing nodes "elastically" at runtime on Hadoop, depending on the size of the knowledge data fed in. The model proposed in this paper is composed of a layered architecture: the top layer for applications, the middle layer for the distributed parallel inference engine that processes the triples, and the lower layer for elastic Hadoop and server virtualization. System algorithms and test data are analyzed and discussed in this paper. The model has the benefit that rich legacy Hadoop applications can run faster on this system without any modification.
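The elastic allocation idea, choosing the number of Hadoop nodes at runtime from the size of the triple set, can be illustrated with a simple policy function. The per-node capacity and pool limits below are assumptions, not the paper's algorithm:

```python
# Illustrative elastic-allocation policy: scale the node count with the
# number of triples, within the limits of the underlying resource pool.
import math

def nodes_for(triples: int, per_node: int = 10_000_000,
              min_nodes: int = 2, max_nodes: int = 32) -> int:
    wanted = math.ceil(triples / per_node)
    return max(min_nodes, min(max_nodes, wanted))

print(nodes_for(5_000_000))      # 2  (floor of the pool)
print(nodes_for(85_000_000))     # 9
print(nodes_for(1_000_000_000))  # 32 (capped by the pool)
```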

Design and Implementation of Library Information System Using Collective Intelligence and Cloud Computing (집단지성과 클라우드 컴퓨팅을 활용한 도서관 정보시스템 설계 및 구현)

  • Min, Byoung-Won
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.11
    • /
    • pp.49-61
    • /
    • 2011
  • Recently, the library has come to be considered an integrated knowledge convergence center that can respond to users' various requests for information services. It is therefore necessary to establish a novel information system based on the information and communications technologies of the era. In other words, it is currently required to develop mobile information services available on portable devices such as smartphones or tablet PCs, and to establish an information system reflecting cloud computing, SaaS, annotation, Library 2.0, etc. In this paper we design and implement a library information system using collective intelligence and cloud computing. This information system can be adapted to the variety of mobile service paradigms and the rapidly increasing amount of electronic materials. Advantages of this concept model include resource sharing, multi-tenant support, configuration, and metadata support, and it can offer on-demand software services to users. To test the performance of our system, we perform an effectiveness analysis and a TTA authentication test. The average response time over varying amounts of data is 0.692 seconds, which is very good from the standpoint of timing effectiveness. The system also achieved maturity level 3 or 4 in TTA authentication tests covering SaaS maturity, performance, and application programs.

Design of MAHA Supercomputing System for Human Genome Analysis (대용량 유전체 분석을 위한 고성능 컴퓨팅 시스템 MAHA)

  • Kim, Young Woo;Kim, Hong-Yeon;Bae, Seungjo;Kim, Hag-Young;Woo, Young-Choon;Park, Soo-Jun;Choi, Wan
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.2
    • /
    • pp.81-90
    • /
    • 2013
  • During the past decade, many changes have been attempted, and new technologies continue to be developed, in the computing area. The brick walls in computing, especially the power wall, have shifted the computing paradigm from hardware, including processors and system architecture, to programming environments and application usage. The high-performance computing (HPC) area, especially, has experienced dramatic changes, and it is now considered a key to national competitiveness. In the late 2000s, many leading countries rushed to develop exascale supercomputing systems, and as a result, systems of tens of PetaFLOPS are now prevalent. Korea's ICT is well developed and the country is considered one of the leading nations in the world, but not in the supercomputing area. In this paper, we describe the architectural design of the MAHA supercomputing system, which aims at a 300 TeraFLOPS system for bio-informatics applications such as human genome analysis and protein-protein docking. The MAHA supercomputing system consists of four major parts: computing hardware, the file system, system software, and bio-applications. It is designed to utilize heterogeneous computing accelerators (co-processors such as GPGPUs and MICs) to obtain better performance/$, performance/area, and performance/power. To provide high-speed data movement and large capacity, the MAHA file system is designed with an asymmetric cluster architecture and consists of a metadata server, data servers, and a client file system on top of SSD and MAID storage servers. The MAHA system software is designed for user-friendliness and ease of use, based on integrated system management components such as bio-workflow management, integrated cluster management, and heterogeneous resource management. The MAHA supercomputing system was first installed in December 2011, with a theoretical performance of 50 TeraFLOPS and a measured performance of 30.3 TeraFLOPS on 32 computing nodes. The MAHA system will be upgraded to 100 TeraFLOPS in January 2013.

A Study on Determination of the Number of Work Processes Reflecting Characteristics of Program on Computational Grid (계산 그리드 상에서 프로그램의 특성을 반영한 작업 프로세스 수의 결정에 관한 연구)

  • Cho, Soo-Hyun;Kim, Young-Hak
    • Journal of the Korea Society of Computer and Information
    • /
    • v.11 no.1 s.39
    • /
    • pp.71-85
    • /
    • 2006
  • A computational grid environment is composed of LANs/WANs, each with different efficiency and heterogeneous network conditions, on which various programs run. In this environment, the role of the resource selection broker is very important because the work of each node is assigned by considering the heterogeneous network environment and the computing power of each node according to the characteristics of a program. In this paper, a new resource selection broker is presented that decides the number of work processes to be allocated to each node by considering network state information and the performance of each node according to the characteristics of a program in a computational grid environment. The proposed resource selection broker works in three steps. First, the performance ratio of each node is computed using latency-bandwidth-CPU mixture information reflecting the characteristics of a program, and the number of work processes to be performed at each node is decided by this ratio. Second, an RSL file is automatically generated based on the number of work processes decided in the previous step. Finally, each node creates work processes using that RSL file and performs the work allocated to it. Experimental results show that the proposed method, which reflects the characteristics of a program, improves on the existing (uniform) and latency-bandwidth methods by 278%~316%, 524%~595%, and 924%~954% in terms of work amount, number of work processes, and number of nodes, respectively.
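The broker's first step can be sketched as follows. The mixture weights, the node figures, and the leftover-handling rule are hypothetical; the paper's actual ratio computation may differ:

```python
# Sketch: weight each node by a latency-bandwidth-CPU mixture and split the
# total number of work processes in that ratio.
def node_score(latency_ms: float, bandwidth_mbps: float, cpu_mips: float,
               w_lat: float = 0.3, w_bw: float = 0.3, w_cpu: float = 0.4) -> float:
    # Lower latency is better, so it enters as a reciprocal.
    return w_lat * (1.0 / latency_ms) + w_bw * bandwidth_mbps + w_cpu * cpu_mips

def allocate(nodes: dict, total_procs: int) -> dict:
    scores = {n: node_score(*v) for n, v in nodes.items()}
    total = sum(scores.values())
    alloc = {n: int(total_procs * s / total) for n, s in scores.items()}
    best = max(scores, key=scores.get)
    alloc[best] += total_procs - sum(alloc.values())  # leftovers to the fastest node
    return alloc

grid = {  # node -> (latency_ms, bandwidth_mbps, cpu_mips); values are made up
    "fast": (1.0, 100.0, 200.0),
    "slow": (10.0, 10.0, 50.0),
}
plan = allocate(grid, 12)
print(plan, sum(plan.values()))  # allocation sums to 12
```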


Implementing Finite State Machine Based Operating System for Wireless Sensor Nodes (무선 센서 노드를 위한 FSM 기반 운영체제의 구현)

  • Ha, Seung-Hyun;Kim, Tae-Hyung
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.16 no.2
    • /
    • pp.85-97
    • /
    • 2011
  • Wireless sensor networks have emerged as one of the key enabling technologies for ubiquitous computing, since wireless intelligent sensor nodes connected by short-range communication media serve as a smart intermediary between physical objects and people in the ubiquitous computing environment. We recognize the wireless sensor network as a massively distributed and deeply embedded system. Such systems require concurrent and asynchronous event handling as a distributed system and resource-consciousness as an embedded system. Since the operating environment and architecture of wireless sensor networks, with their seemingly conflicting requirements, pose unique design challenges and constraints to developers, we propose a new operating system for sensor nodes based on a finite state machine (FSM). In this paper, we clarify the design goals reflecting the characteristics of sensor networks, and then present the heart of the design and implementation of a compact and efficient state-driven operating system, SenOS. We describe how SenOS can operate in an extremely resource-constrained sensor node while providing the required reactivity and dynamic reconfigurability with low update cost. We also compare our experimental results after executing some benchmark programs on SenOS with those on TinyOS.
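The state-driven kernel idea can be illustrated with a table-driven FSM dispatcher; the states, events, and actions below are hypothetical, not SenOS's actual tables:

```python
# Minimal finite-state-machine sketch: a transition table maps
# (state, event) to (next_state, action), and a tiny loop dispatches events.
TABLE = {
    ("idle",    "timer"): ("sensing", "start_adc"),
    ("sensing", "done"):  ("sending", "tx_packet"),
    ("sending", "acked"): ("idle",    "sleep"),
}

def step(state: str, event: str):
    # Unknown events leave the state unchanged, mirroring a reactive kernel
    # that ignores events with no transition defined.
    return TABLE.get((state, event), (state, "ignore"))

state = "idle"
log = []
for event in ["timer", "done", "acked"]:
    state, action = step(state, event)
    log.append(action)
print(state, log)  # idle ['start_adc', 'tx_packet', 'sleep']
```

Because behavior lives in the table rather than in code, dynamic reconfiguration reduces to replacing table entries, which is one reason a table-driven design suits low update cost.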

IAM Architecture and Access Token Transmission Protocol in Inter-Cloud Environment (Inter-Cloud 환경에서의 IAM 구조 및 액세스 토큰 전송 프로토콜)

  • Kim, Jinouk;Park, Jungsoo;Yoon, Kwonjin;Jung, Souhwan
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.26 no.3
    • /
    • pp.573-586
    • /
    • 2016
  • With the adoption of cloud computing, the number of companies that take advantage of it has increased, and various existing service providers have moved their services onto the cloud to provide users with various cloud-based services. The management of user authentication and authorization in cloud-based service technology has become an important issue. This paper introduces a new technique for providing authentication and authorization across clouds via inter-cloud IAM (Identity and Access Management). It is an essential and easy method for data sharing and communication between users of different clouds. The proposed system uses the credentials of a user who has already joined an organization and would like to use other cloud services. When users of one cloud provider try to obtain access to the data of another cloud provider, part of the credentials from the IAM server is forwarded to that cloud provider. Before the transaction, an Access Agreement must be set up to grant access to the resources of the other organization; a user can then access those resources based on the system's access-control configuration. Using the above method, we can provide an effective and secure authentication system on the cloud.
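The agreement-gated forwarding step can be sketched as follows; the organization names, resource identifier, and token fields are illustrative, not the paper's protocol messages:

```python
# Sketch: the IAM server forwards part of a user's credential only if an
# Access Agreement between the two organizations covers the requested resource.
AGREEMENTS = {("org-A", "org-B"): {"dataset/genomes"}}

def issue_access_token(user_org, target_org, resource, credential):
    allowed = AGREEMENTS.get((user_org, target_org), set())
    if resource not in allowed:
        return None  # no agreement: nothing is forwarded
    # Forward only the subset of the credential the target cloud needs.
    return {"sub": credential["sub"], "scope": resource, "iss": user_org}

token = issue_access_token("org-A", "org-B", "dataset/genomes",
                           {"sub": "alice", "roles": ["researcher"]})
print(token)  # {'sub': 'alice', 'scope': 'dataset/genomes', 'iss': 'org-A'}
```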