• Title/Summary/Keyword: Cloud computing systems


A Study on the Role and Security Enhancement of the Expert Data Processing Agency: Focusing on a Comparison of Data Brokers in Vermont (데이터처리전문기관의 역할 및 보안 강화방안 연구: 버몬트주 데이터브로커 비교를 중심으로)

  • Soo Han Kim;Hun Yeong Kwon
    • Journal of Information Technology Services / v.22 no.3 / pp.29-47 / 2023
  • With the recent advancement of information and communication technologies such as artificial intelligence, big data, cloud computing, and 5G, data is being produced and digitized in unprecedented amounts. Data has thus emerged as a critical resource for the future economy, and countries overseas have been revising their laws on data protection and utilization. In Korea, the 'Data 3 Act' was revised in 2020 to introduce institutional measures that classify personal information, pseudonymized information, and anonymous information for research, statistics, and the preservation of public records. Combining pseudonymized personal information is expected to increase the added value of data, and to this end the "Expert Data Combination Agency" and "Expert Data Agency" (hereinafter, the Expert Data Processing Agency) systems were introduced. To compare these domestic systems with similar systems abroad, we examine the U.S. state of Vermont, which recently enacted the first "Data Broker Act" in the United States as a measure to protect personal information held by data brokers. In this study, we compare and analyze the roles and functions of the Expert Data Processing Agency and the data broker, and identify differences in designation standards, security measures, and other requirements, in order to suggest ways to both activate the data economy and enhance information protection.

Design and Implementation of Indoor Air Hazardous Substance Detection Mobile System based on IoT Platform (IoT platform 기반 실내 대기 위험 물질 감지 모바일 시스템 설계 및 구현)

  • Yang, Oh-Seok;Kim, Yeong-Uk;Lee, Hong-Lo
    • Journal of Korea Society of Industrial Information Systems / v.24 no.6 / pp.43-53 / 2019
  • In recent years there have been many cases of damage from indoor air hazardous substances, with major harm resulting from the lack of a quick response. Accordingly, this system is designed to send a push message to the user's mobile device when the concentration of a hazardous substance exceeds a threshold. The system collects data with IoT devices such as Arduino and Raspberry Pi, then builds a database with MongoDB and MySQL in a cloud computing system. The database is read through an application server built on NodeJS and sent to the application for visualization. When the IoT system signals a dangerous situation, a push message is sent using the Google FCM library. The mobile application is developed using an Android WebView, and the page loaded in the WebView is developed using HTML5 (HTML, JavaScript, CSS). Applying this system enables real-time monitoring of indoor air hazardous substances. In addition, real-time information on the indoor/outdoor detection location and concentration can be sent to the user's mobile device in a risk situation, which can be expected to help the user respond quickly.
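A minimal sketch of the alert path this abstract describes: a reading is checked against a concentration threshold and, when exceeded, a push message is composed. The threshold values, substance names, and `send_push` helper are illustrative stand-ins; the actual system stores readings in MongoDB/MySQL and pushes via Google FCM.

```python
# Sketch of the threshold-alert flow (assumed names and limits, not the
# paper's actual configuration).

THRESHOLDS_PPM = {"CO": 9.0, "CO2": 1000.0, "VOC": 0.5}  # assumed limits

def classify_reading(substance, value_ppm):
    """Return an alert dict when the reading exceeds its threshold, else None."""
    limit = THRESHOLDS_PPM.get(substance)
    if limit is not None and value_ppm > limit:
        return {"substance": substance, "value": value_ppm, "limit": limit}
    return None

def send_push(alert, token="device-token"):
    """Placeholder for the FCM call; here it only formats the message."""
    return (f"[ALERT] {alert['substance']} at {alert['value']} ppm "
            f"(limit {alert['limit']} ppm) -> push to {token}")

readings = [("CO2", 650.0), ("CO", 12.3), ("VOC", 0.2)]
alerts = [a for s, v in readings if (a := classify_reading(s, v))]
for a in alerts:
    print(send_push(a))
```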

Establishing a Sustainable Future Smart Education System (지속가능한 미래형 스마트교육 시스템 구축 방안)

  • Park, Ji-Hyeon;Choi, Jae-Myeong;Park, Byoung-Lyoul;Kang, Heau-Jo
    • Journal of Advanced Navigation Technology / v.16 no.3 / pp.495-503 / 2012
  • As modern society changes rapidly, the field of education has also developed quickly. Since the Edunet system was developed in 1996, many different systems have been built continuously, such as the Center for Teaching and Learning, cyber home learning systems, diagnosis and prescription systems, video systems, teaching and counseling, and study management systems. However, these systems have not been well received by educational consumers, owing to a lack of interconnection. One reason is that program administrators did not carefully consider the continuity of each program, but established a brand-new system whenever one was needed rather than predicting or considering future needs. A suitable smart education system should be one large integrated system based on the analysis and processing of many different kinds of data. Rather than a single sign-on system connecting independent systems, it should supply educational consumers with varied and useful information by adopting big data. The cloud computing system should be established so that it manages not simply compiled files and application programs, but diverse contents and data.

Proposal for the 『Army TIGER Cyber Defense System』 Installation capable of responding to future enemy cyber attack (미래 사이버위협에 대응 가능한 『Army TIGER 사이버방호체계』 구축을 위한 제언)

  • Byeong-jun Park;Cheol-jung Kim
    • Journal of Internet Computing and Services / v.25 no.1 / pp.157-166 / 2024
  • The Army TIGER system, which is being fielded to implement a future combat system, is expected to bring innovative changes to the Army's combat methods and combat execution capabilities, such as mobility, networking, and intelligence. To this end, the Army will introduce various systems employing drones, robots, unmanned vehicles, AI (Artificial Intelligence), and other technologies, and utilize them in combat. The use of various unmanned vehicles and AI will bring equipment with new technologies into the Army and increase the variety of transmitted information, i.e., data. Currently, however, the military is accelerating research and combat experiments on warfighting options that use the Army TIGER force system for specific functions, while research is not being pursued on countermeasures against cyber threats targeting the information systems tied to the growing number of unmanned systems, the data they produce and transmit, and the cloud centers and AI command-and-control centers driven by the new force systems. Accordingly, this paper analyzes the structure and characteristics of the Army TIGER force integration system and offers suggestions on the necessity of such a build-out, the available cyber defense solutions, and an Army TIGER integrated cyber protection system that can respond to future cyber threats.

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computer environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. 
Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that let the system continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have strict, complex schemas that are inappropriate for processing unstructured log data, and they cannot easily expand across nodes when rapidly growing data must be distributed to various nodes. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is adopted because it makes it easy to process unstructured log data through a flexible schema structure, facilitates flexible node expansion when the amount of data grows rapidly, and provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module.
When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes it to the MongoDB module and the MySQL module. The log graph generator module generates the log-analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation is carried out against a log data processing system that uses only MySQL for inserting log data and estimating query performance; this evaluation demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through MongoDB insert-performance evaluations for various chunk sizes.
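The collector's routing step described above can be sketched as follows. The in-memory `Store` class, the log-type names, and the routing rule are assumptions for illustration; in the actual system the targets are the MySQL module (real-time path) and the MongoDB module (aggregated, unstructured path).

```python
# Sketch of the log collector's classify-and-distribute step
# (in-memory stand-ins for the MySQL and MongoDB modules).

class Store:
    def __init__(self, name):
        self.name, self.rows = name, []
    def insert(self, record):
        self.rows.append(record)

mysql_module = Store("mysql")      # real-time analysis path
mongodb_module = Store("mongodb")  # aggregated, unstructured path

REALTIME_TYPES = {"auth_failure", "txn_error"}  # assumed type names

def collect(record):
    """Classify a log record by type and route it to the matching store."""
    target = mysql_module if record["type"] in REALTIME_TYPES else mongodb_module
    target.insert(record)
    return target.name

logs = [
    {"type": "txn_error", "msg": "transfer failed"},
    {"type": "page_view", "msg": "/accounts"},
    {"type": "auth_failure", "msg": "bad PIN"},
]
routed = [collect(r) for r in logs]
print(routed)
```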

Research-platform Design for the Korean Smart Greenhouse Based on Cloud Computing (클라우드 기반 한국형 스마트 온실 연구 플랫폼 설계 방안)

  • Baek, Jeong-Hyun;Heo, Jeong-Wook;Kim, Hyun-Hwan;Hong, Youngsin;Lee, Jae-Su
    • Journal of Bio-Environment Control / v.27 no.1 / pp.27-33 / 2018
  • This study reviewed domestic and international smart-farm service models based on the convergence of agriculture and information & communication technology, and derived the factors needed to improve the Korean smart greenhouse. Domestic studies on modelling the crop growth environment in smart farms have been limited, and building research infrastructure takes a great deal of time, so a cloud-based research platform is needed as an alternative. This platform can provide an infrastructure for comprehensive data storage and analysis, as it manages cloud-based integrated data for the growth model, growth environment model, actuator control model, and farm management, as well as knowledge-based expert systems and a farm dashboard. The cloud-based research platform can therefore be applied to quantify the relationships among factors such as the crop growth environment, productivity, and actuator control. In addition, it will enable researchers to analyze growth environment models of crops quantitatively by utilizing big data, machine learning, and artificial intelligence.

A Scalable Data Integrity Mechanism Based on Provable Data Possession and JARs

  • Zafar, Faheem;Khan, Abid;Ahmed, Mansoor;Khan, Majid Iqbal;Jabeen, Farhana;Hamid, Zara;Ahmed, Naveed;Bashir, Faisal
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.6 / pp.2851-2873 / 2016
  • Cloud storage as a service provides high scalability and availability according to user needs, without a large investment in infrastructure. However, data security risks, such as the confidentiality, privacy, and integrity of the outsourced data, are associated with the cloud computing model. Over the years, techniques such as remote data checking (RDC), data integrity protection (DIP), provable data possession (PDP), proof of storage (POS), and proof of retrievability (POR) have been devised to frequently and securely check the integrity of outsourced data. In this paper, we improve the efficiency of the PDP scheme, in terms of computation, storage, and communication cost, for large data archives. By utilizing the capabilities of JAR and ZIP technology, the cost of searching the metadata during proof generation is reduced from O(n) to O(1). Moreover, due to direct access to the metadata, disk I/O cost is reduced, resulting in 50 to 60 times faster proof generation for large datasets. Furthermore, the proposed scheme achieves a 50% reduction in the storage size of the data and the respective metadata, providing storage and communication efficiency.
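The O(n)-to-O(1) metadata lookup rests on the ZIP/JAR central directory, which maps entry names to offsets so a single member can be read directly rather than by scanning the whole archive. A small sketch (the file names and tag contents are invented for illustration):

```python
# Direct metadata access via the ZIP central directory (illustrative
# entry names; not the paper's actual archive layout).
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as archive:
    for i in range(1000):                      # n per-block metadata entries
        archive.writestr(f"meta/block_{i}.tag", f"tag-for-block-{i}")

with zipfile.ZipFile(buf) as archive:
    # The central directory resolves the member name to its offset, so
    # reaching one block's metadata needs no O(n) scan of the archive body.
    tag = archive.read("meta/block_742.tag").decode()

print(tag)
```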

An Efficient Log Data Management Architecture for Big Data Processing in Cloud Computing Environments (클라우드 환경에서의 효율적인 빅 데이터 처리를 위한 로그 데이터 수집 아키텍처)

  • Kim, Julie;Bahn, Hyokyung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.13 no.2 / pp.1-7 / 2013
  • Big data management is becoming increasingly important in both the industrial and academic parts of the information science community. One important category of big data generated by software systems is log data. Log data are generally used to improve services at various service providers and can also be used as information for qualification. This paper presents a big data management architecture specialized for log data. Specifically, it aggregates log messages sent from multiple clients and provides intelligent functionality such as log data analysis. The proposed architecture supports asynchronous processing in client-server architectures to prevent a potential bottleneck in accessing data; accordingly, it does not affect client performance despite using a remote data store. We implement the proposed architecture and show that it works well for processing big log data. All components are implemented with open-source software, and the developed prototypes are now publicly available.
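The asynchronous client path can be sketched with a queue and a background worker: the client's `log()` call enqueues and returns immediately, so remote-store latency never blocks it. The in-memory `remote_store` list is a stand-in for the actual remote data store, and the sentinel-based shutdown is an assumed detail.

```python
# Sketch of non-blocking client-side logging (queue + background worker;
# the remote store is simulated with a local list).
import queue
import threading

log_queue = queue.Queue()
remote_store = []            # stand-in for the remote data store

def worker():
    while True:
        msg = log_queue.get()
        if msg is None:      # sentinel: shut the worker down
            break
        remote_store.append(msg)   # the real system would send over the network

t = threading.Thread(target=worker, daemon=True)
t.start()

def log(message):
    """Non-blocking log call: enqueue and return immediately."""
    log_queue.put(message)

for i in range(100):
    log(f"event {i}")

log_queue.put(None)          # request shutdown after all messages
t.join()                     # wait for the worker to drain the queue
print(len(remote_store))
```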

Trend analysis of Smart TV and Mobile Operating System (모바일 운영체제와 스마트 TV 동향 분석)

  • Bae, Yu-Mi;Jung, Sung-Jae;Jang, Rae-Young;Park, Jeong-Su;Kyung, Ji-Hun;Sung, Kyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2012.10a / pp.740-743 / 2012
  • The original role of an operating system is to act as an intermediary between the computer and the user, managing hardware and processes and making the computer system convenient to use. Beyond servers and personal computers, such operating systems have also been mounted on mobile devices like smartphones and tablets, giving rise to the mobile operating system. The mobile operating system, once a simple embedded OS built into TVs or cars, is now expanding into those areas, driven by the emergence of diverse devices and cloud services combined with growing user demands. In this paper, we survey mobile operating systems, N-Screen, and Smart TV, and through an analysis of the major smart TVs, we examine future trends in mobile operating systems.

Constant-Size Ciphertext-Policy Attribute-Based Data Access and Outsourceable Decryption Scheme (고정 크기 암호 정책 속성 기반의 데이터 접근과 복호 연산 아웃소싱 기법)

  • Hahn, Changhee;Hur, Junbeom
    • Journal of KIISE / v.43 no.8 / pp.933-945 / 2016
  • Sharing data among multiple users on public storage, e.g., the cloud, is considered efficient because the cloud provides on-demand computing service anytime, anywhere. Secure data sharing is achieved through fine-grained access control. Existing symmetric and public-key encryption schemes are not suitable for secure data sharing because they support only a 1-to-1 relationship between a ciphertext and a secret key. Attribute-based encryption supports fine-grained access control; however, its ciphertexts grow linearly as the number of attributes increases. Additionally, the decryption process has a high computational cost, making it inapplicable in resource-constrained environments. In this study, we propose an efficient attribute-based secure data sharing scheme with outsourceable decryption. The proposed scheme guarantees constant-size ciphertexts irrespective of the number of attributes. For static attributes, the computational cost to the user is reduced by delegating approximately 95.3% of the decryption operations to the more powerful storage systems, whereas 72.3% of the decryption operations are outsourced for dynamic attributes.
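The outsourced-decryption idea, delegating the expensive algebra to the storage system while keeping the key blinded, can be illustrated with a toy ElGamal-style example. This is only an analogue, not the paper's attribute-based construction, and all group parameters here are small demonstration values.

```python
# Toy illustration (NOT the paper's ABE scheme) of outsourceable
# decryption: the user blinds the secret key x with z, hands the server a
# transformation key, and the server performs the heavy exponentiation.
import secrets

p = 1019                 # safe prime: p = 2q + 1 (demo-sized, insecure)
q = 509                  # prime order of the subgroup
g = 4                    # generator of the order-q subgroup mod p

x = secrets.randbelow(q - 1) + 1      # user's secret key
h = pow(g, x, p)                      # public key

# Encrypt a message m (ElGamal-style): c1 = g^k, c2 = m * h^k
m = 42
k = secrets.randbelow(q - 1) + 1
c1, c2 = pow(g, k, p), (m * pow(h, k, p)) % p

# Outsourcing: blind x with z; the server only ever sees tk, not x
z = secrets.randbelow(q - 1) + 1
tk = (x * pow(z, -1, q)) % q          # transformation key

w = pow(c1, tk, p)                    # server's heavy partial decryption

# User's cheap final step: unblind with z and recover m
s = pow(w, z, p)                      # equals h^k, since tk*z = x (mod q)
recovered = (c2 * pow(s, -1, p)) % p
print(recovered)                      # -> 42
```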