• Title/Summary/Keyword: IaaS (Infrastructure as a Service)

The CloudHIS System for Personal Healthcare Information Integration Scheme of Cloud Computing (클라우드 컴퓨팅 환경에서 개인의료정보를 통합한 CloudHIS 시스템)

  • Cho, Young-Bok;Woo, Sung-Hee;Lee, Sang-Ho
    • Journal of the Korea Society of Computer and Information / v.19 no.5 / pp.27-35 / 2014
  • Today's healthcare industry, built on state-of-the-art IT, can be characterized by the paradigm of human-oriented, ubiquitously accessible u-healthcare. In addition, the healthcare industry is advancing through active research on applications of developments in information and communication technology (ICT). Medical information systems have been developed by combining medical IT systems, with the aim of fusing advanced IT with u-healthcare. This paper therefore proposes transforming distributed medical information systems into a secure, integrated PHR (personal health record) system based on IaaS cloud computing, in step with the evolution of u-healthcare. Our experimental results show that the proposed system improves data access time by about 24% and reduces the waiting time for processing services by about 4.3% compared with a web-based PHR.
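
A minimal sketch of the kind of IaaS provisioning such a PHR integration relies on, assuming an OpenStack cloud (the paper does not name a specific toolkit); the cloud profile, image, flavor, and network names below are hypothetical:

```python
# Minimal sketch: provisioning a compute instance to host a PHR service on
# an IaaS cloud. OpenStack and all names below (cloud profile, image,
# flavor, network) are illustrative assumptions, not the paper's setup.
import openstack

conn = openstack.connect(cloud="phr-cloud")  # hypothetical clouds.yaml entry

server = conn.compute.create_server(
    name="phr-integration-node",
    image_id=conn.compute.find_image("ubuntu-22.04").id,
    flavor_id=conn.compute.find_flavor("m1.medium").id,
    networks=[{"uuid": conn.network.find_network("phr-net").id}],
)
# Block until the instance is ACTIVE before registering it with the PHR tier.
server = conn.compute.wait_for_server(server)
print(server.status)
```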

Development of BIM Collaboration Framework Based on ISO 19650 (국제표준을 반영한 BIM 협업 프레임워크 개발)

  • Choi, Sung-Woo;Hyun, Keun-Ju;Kim, Hyeon-seung
    • Journal of KIBIM / v.13 no.4 / pp.54-63 / 2023
  • In recent years, the mandatory use of BIM has been actively promoted as part of the digital transformation of the construction industry. However, the CDE (Common Data Environment), an essential element for operating BIM, has not yet been established in a form suited to the domestic situation. To address this, this study analyzed the results of previous studies, including the ISO 19650 standard and domestic CDE requirements, and developed BIM-based collaboration functions suitable for the domestic construction industry through a functional analysis of domestic and foreign commercial CDE solutions. We then developed a BIM collaboration framework that provides these collaboration functions as a service, using cloud technologies such as IaaS, PaaS, and SaaS to provision infrastructure resources flexibly. Because the framework meets most of the CDE requirements of ISO 19650, it can strengthen competitiveness when bidding for overseas BIM projects. Moreover, since the collaboration functions can be applied selectively when building a BIM-based collaboration platform, the framework minimizes both the time to build the platform and its operating costs, and its usability is higher than that of existing commercial BIM CDE solutions, so high utilization is expected.
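
Since the framework centers on an ISO 19650-compliant CDE, the following minimal sketch models the standard's information-container lifecycle (work in progress, shared, published, archive); the transition rules and class names are illustrative assumptions, not the paper's implementation:

```python
# Minimal sketch of the ISO 19650 CDE information-container lifecycle a BIM
# collaboration framework must enforce. The four states come from ISO
# 19650-1; the transition rules below are illustrative assumptions.
from enum import Enum, auto

class CDEState(Enum):
    WORK_IN_PROGRESS = auto()  # team-internal development
    SHARED = auto()            # shared for coordination after a review gate
    PUBLISHED = auto()         # authorized for use (e.g., construction)
    ARCHIVE = auto()           # immutable audit trail

# Allowed forward transitions; every state may also be recorded into ARCHIVE.
ALLOWED = {
    CDEState.WORK_IN_PROGRESS: {CDEState.SHARED, CDEState.ARCHIVE},
    CDEState.SHARED: {CDEState.PUBLISHED, CDEState.WORK_IN_PROGRESS,
                      CDEState.ARCHIVE},
    CDEState.PUBLISHED: {CDEState.ARCHIVE},
    CDEState.ARCHIVE: set(),
}

def transition(current: CDEState, target: CDEState) -> CDEState:
    """Move a container between CDE states, rejecting invalid jumps."""
    if target not in ALLOWED[current]:
        raise ValueError(f"illegal CDE transition: {current.name} -> {target.name}")
    return target

state = CDEState.WORK_IN_PROGRESS
state = transition(state, CDEState.SHARED)     # after check/review/approve
state = transition(state, CDEState.PUBLISHED)  # after authorization
```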

Deployment Strategies of Cloud Computing System for Defense Infrastructure Enhanced with High Availability (고가용성 보장형 국방 클라우드 시스템 도입 전략)

  • Kang, Ki-Wan;Park, Jun-Gyu;Lee, Sang-Hoon;Park, Ki-Woong
    • The Journal of Korean Institute of Next Generation Computing / v.15 no.3 / pp.7-15 / 2019
  • Cloud computing markets are growing rapidly as cost savings and business innovation are pursued through ICT worldwide. In line with this paradigm, Korea is striving to introduce cloud computing in various areas, including the public and defense sectors, through a range of research efforts. In the defense sector, DIDC was established in 2015 by integrating the computing centers of the army, navy, and air force, and it provides cloud services in the form of IaaS to some systems in the center. For DIDC and future defense cloud systems, ensuring availability is a critical issue because failures such as network delays and system resource faults are directly linked to outcomes on the battlefield. However, guaranteeing the highest level of availability for every system in the defense cloud can be inefficient and can erode the efficiency gains of deploying a cloud system in the first place. In this paper, we classify and define the availability levels of defense cloud systems step by step, and propose strategies for introducing erasure coding, fault-tolerant systems, and disaster recovery technologies according to each availability level.
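
To make the availability/efficiency trade-off behind erasure coding concrete, here is a minimal sketch comparing the storage overheads of n-way replication and a (k, m) erasure code; the parameter values are illustrative, not taken from the paper:

```python
# Minimal sketch comparing the storage overhead of n-way replication with
# (k, m) erasure coding, the trade-off behind tiered availability levels.
# The parameter choices below are illustrative assumptions.

def replication_overhead(copies: int) -> float:
    """Raw bytes stored per byte of user data under full replication."""
    return float(copies)

def erasure_overhead(k: int, m: int) -> float:
    """Raw bytes per user byte for a code with k data and m parity chunks.

    The object is split into k chunks and m parity chunks are added; any
    k of the k+m chunks suffice to reconstruct, so up to m losses survive.
    """
    return (k + m) / k

# 3-way replication tolerates 2 losses at 3.0x storage;
# a (10, 4) code tolerates 4 losses at only 1.4x storage.
print(replication_overhead(3))   # 3.0
print(erasure_overhead(10, 4))   # 1.4
```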

A Workflow Execution System for Analyzing Large-scale Astronomy Data on Virtualized Computing Environments

  • Yu, Jung-Lok;Jin, Du-Seok;Yeo, Il-Yeon;Yoon, Hee-Jun
    • International Journal of Contents / v.16 no.4 / pp.16-25 / 2020
  • The size of observation data in astronomy has been increasing exponentially with the advent of wide-field optical telescopes, which calls for changes in the way large-scale astronomy data are analyzed. The complexity of analysis tools and the limited extensibility of computing environments, however, make dealing with such huge observation data difficult and inefficient. To address this problem, this paper proposes a workflow execution system for analyzing large-scale astronomy data efficiently. The proposed system is composed of two parts: 1) a workflow execution manager with RESTful endpoints that can automate and control data analysis tasks based on workflow templates, and 2) an elastic resource manager as an underlying mechanism that can dynamically add/remove virtualized computing resources (i.e., virtual machines) according to the analysis requests. We implement the system on a testbed using the OpenStack IaaS (Infrastructure as a Service) toolkit and the HTCondor workload manager, and perform a broad range of experiments with different resource allocation patterns, system loads, etc., to show its effectiveness. The results show that the resource allocation mechanism works properly according to the number of queued and running tasks, improving resource utilization, and that the workflow execution manager can handle more than 1,000 concurrent requests within a second with reasonable average response times. We finally describe a case study of a data reduction system as an example application of our workflow execution system.
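
A minimal sketch of the control loop such an elastic resource manager might run, polling the HTCondor queue and scaling the VM pool; the thresholds and the launch/terminate helpers are hypothetical, and only the queue query uses the real HTCondor Python bindings:

```python
# Minimal sketch of an elastic resource manager's control loop: poll the
# batch queue and grow/shrink the VM pool. The scaling policy and the
# launch/terminate helpers are hypothetical placeholders.
import time
import htcondor  # HTCondor Python bindings

def idle_job_count(schedd: htcondor.Schedd) -> int:
    # JobStatus == 1 means Idle in the HTCondor job ClassAd.
    return len(schedd.query(constraint="JobStatus == 1",
                            projection=["ClusterId"]))

def launch_worker_vm() -> None:      # hypothetical: boot a VM via the IaaS API
    ...

def terminate_worker_vm() -> None:   # hypothetical: release an idle VM
    ...

def autoscale_loop(poll_seconds: int = 30, jobs_per_vm: int = 4) -> None:
    schedd = htcondor.Schedd()
    pool_size = 0
    while True:
        idle = idle_job_count(schedd)
        target = -(-idle // jobs_per_vm)  # ceil: VMs needed for the backlog
        while pool_size < target:
            launch_worker_vm()
            pool_size += 1
        while pool_size > target:
            terminate_worker_vm()
            pool_size -= 1
        time.sleep(poll_seconds)
```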

The Big Data Analysis and Medical Quality Management for Wellness (웰니스를 위한 빅데이터 분석과 의료 질 관리)

  • Cho, Young-Bok;Woo, Sung-Hee;Lee, Sang-Ho
    • Journal of the Korea Society of Computer and Information / v.19 no.12 / pp.101-109 / 2014
  • With advances in medical technology and rising income levels, interest in actively promoting and maintaining health, captured by the idea of a "long and healthy life = wellness," has grown considerably. Demand for personalized healthcare services is also growing, as is the use of extensive medical big data for disease prevention. Focusing on wellness, a major market interest, this paper aims to support big-data-driven management of medical quality through patient-centered medical services. Rather than relying on drug-dependent treatment, the goal is to improve disease prevention and treatment based on big data analysis, for example dictionary-based analysis of patients' daily information such as tweets. To analyze big data efficiently, we measured processing time while increasing the number of nodes. The results show that, moving from one node to three nodes, total access time improved by about 26%, data storage by about 63%, and data aggregation by about 18%.
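
For clarity, the reported gains read as relative improvements between one-node and three-node runs; a minimal sketch of that computation follows, with hypothetical timings chosen only to reproduce the ~26% access-time figure (the paper's raw measurements are not given here):

```python
# Minimal sketch: relative improvement = (t_1node - t_3node) / t_1node.
# The timings below are hypothetical placeholders, not measured data.
def relative_improvement(t_one_node: float, t_three_nodes: float) -> float:
    return (t_one_node - t_three_nodes) / t_one_node

t1, t3 = 100.0, 74.0  # hypothetical total access times (seconds)
print(f"{relative_improvement(t1, t3):.0%}")  # -> 26%
```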

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business; therefore, a separate system is needed to gather, store, categorize, and analyze these log data. However, realizing flexible storage expansion for a massive amount of unstructured log data, and executing the considerable number of functions needed to categorize and analyze them, is difficult in existing computing environments. Thus, in this study, we use cloud computing technology to build a cloud-based log data processing system for unstructured log data that are difficult to handle with existing analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, such as storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that keep the system operating after it recovers from a malfunction. Finally, by establishing a distributed database using NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data, and their strict schemas make it hard to expand across nodes when rapidly growing data must be distributed. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the proposed system uses MongoDB, a representative document-oriented data model with a free schema structure. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data grows rapidly, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis performed by the MongoDB module, the Hadoop-based analysis module, and the MySQL module, per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's analysis conditions, while the aggregated log data in the MongoDB module are parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, measuring log insert and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a MongoDB log-insert performance evaluation over various chunk sizes.
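
A minimal sketch of the schema-free document storage the MongoDB module relies on, using the real pymongo API; the connection string, database/collection names, and record fields are illustrative assumptions:

```python
# Minimal sketch of storing unstructured bank log records in MongoDB with a
# flexible (schema-free) document model, as the abstract describes. The
# connection string, database/collection names, and fields are hypothetical.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
logs = client["bank_logs"]["events"]  # hypothetical database/collection

# Heterogeneous documents coexist in one collection: no fixed schema needed.
logs.insert_many([
    {"ts": datetime.now(timezone.utc), "type": "login", "branch": "A01",
     "client_id": 1234},
    {"ts": datetime.now(timezone.utc), "type": "transfer", "amount": 50000,
     "currency": "KRW", "channel": "mobile"},
])

# Aggregate per log type, as a graph-generator module might for its charts.
for row in logs.aggregate([{"$group": {"_id": "$type", "n": {"$sum": 1}}}]):
    print(row)
```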