Title/Summary/Keyword: cloud computing systems

Interactive Visual Analytic Approach for Anomaly Detection in BGP Network Data (BGP 네트워크 데이터 내의 이상징후 감지를 위한 인터랙티브 시각화 분석 기법)

  • Choi, So-mi;Kim, Son-yong;Lee, Jae-yeon;Kauh, Jang-hyuk;Kwon, Koo-hyung;Choo, Jae-gul
    • Journal of Internet Computing and Services, v.23 no.5, pp.135-143, 2022
  • As social distancing and telecommuting have spread in response to COVID-19, dependence on the Internet has grown with the activation of video- and voice-based content services, cloud computing, and real-time streaming sessions running over routing protocols. BGP is the most widely used routing protocol, and although many studies continue to improve its security, visual analysis that supports real-time investigation and exposes the misdetections of detection algorithms is still lacking. In this paper, we analyze real-world BGP data, labeled as normal and abnormal, using an anomaly detection algorithm that combines statistical and post-processing techniques with rule-based techniques. In addition, we present an interactive spatio-temporal analysis approach that conveys the algorithm's results intuitively through map- and Sankey-chart-based visualization techniques.
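
The abstract describes the detector only at a high level, so the following is a minimal sketch of the general pattern it names: a statistical outlier test post-processed by a rule-based filter. The per-minute update-volume feature, the z-score test, and the withdrawal-ratio rule are illustrative assumptions, not the paper's algorithm.

```python
from statistics import mean, stdev

def detect_anomalies(baseline, bins, z_thresh=3.0, withdraw_rule=0.5):
    """Flag time bins whose BGP update volume deviates from the baseline by
    more than z_thresh standard deviations, then keep only flags that also
    satisfy a rule on the withdrawal ratio (statistical + rule-based)."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [abs(vol - mu) / sigma > z_thresh and ratio > withdraw_rule
            for vol, ratio in bins]

baseline = [120, 115, 130, 125, 118, 122]        # update counts in normal minutes
bins = [(124, 0.10), (900, 0.70), (126, 0.12)]   # (volume, withdrawal ratio)
print(detect_anomalies(baseline, bins))          # [False, True, False]
```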

Batch Resizing Policies and Techniques for Fine-Grain Grid Tasks: The Nuts and Bolts

  • Muthuvelu, Nithiapidary;Chai, Ian;Chikkannan, Eswaran;Buyya, Rajkumar
    • Journal of Information Processing Systems, v.7 no.2, pp.299-320, 2011
  • The overhead of processing fine-grain tasks on a grid induces the need for batch processing or task group deployment in order to minimise overall application turnaround time. When deciding the granularity of a batch, the processing requirements of each task should be considered as well as the utilisation constraints of the interconnecting network and the designated resources. However, the dynamic nature of a grid requires the batch size to be adaptable to the latest grid status. In this paper, we describe the policies and the specific techniques involved in the batch resizing process. We explain the nuts and bolts of these techniques in order to maximise the resulting benefits of batch processing. We conduct experiments to determine the nature of the policies and techniques in response to a real grid environment. The techniques are further investigated to highlight the important parameters for obtaining the appropriate task granularity for a grid resource.
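
As a concrete illustration of batch resizing, here is a minimal sketch of one plausible policy consistent with the considerations the abstract lists (per-task processing requirements, network utilisation, and resource constraints); the budget parameters and their values are assumptions, not the paper's techniques.

```python
def resize_batch(tasks, cpu_budget_s, bandwidth_Bps, transfer_budget_s):
    """tasks: list of (cpu_seconds, payload_bytes) for fine-grain tasks.
    Grow the batch until either the resource's CPU-time budget or the
    network's transfer-time budget would be exceeded."""
    batch, cpu_used, bytes_used = [], 0.0, 0.0
    for cpu_s, size_b in tasks:
        if (cpu_used + cpu_s > cpu_budget_s or
                (bytes_used + size_b) / bandwidth_Bps > transfer_budget_s):
            break
        batch.append((cpu_s, size_b))
        cpu_used += cpu_s
        bytes_used += size_b
    return batch

# Example: a slower network shrinks the batch; here only 3 of 10 tasks fit
# the 6-second transfer budget at 1 MB/s with 2 MB payloads per task.
tasks = [(1.0, 2_000_000)] * 10
print(len(resize_batch(tasks, cpu_budget_s=8, bandwidth_Bps=1_000_000,
                       transfer_budget_s=6)))    # 3
```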

A Study on the Environment Characteristics and Continuous Usage Intention for Improvement of Fintech (핀테크 활성화를 위한 사용환경특성과 지속사용의도)

  • Jung, Dae-Hyun;Chang, Hwal-Sik;Park, Kwang-O
    • The Journal of Information Systems, v.26 no.2, pp.123-142, 2017
  • Purpose: The development of the Fintech industry rests on advances in IT technologies such as big data, IoT, and cloud computing; the financial industry can thus be seen as evolving into Fintech. Consumer awareness, however, remains very low. This study therefore derives suggestions for invigorating Fintech through an empirical study, from the consumer's point of view, of the factors that influence the intention to continue using Fintech. Design/methodology/approach: The research model was designed by integrating factors derived from the Expectation Confirmation Theory, and the impact of these factors on continuous usage intention for Fintech is analyzed empirically. The 302 survey responses were used to verify the research hypotheses through a covariance-based structural equation model. Findings: The empirical analysis confirmed that the ultimate purpose of Fintech services is to eliminate the social cost of issuing money by reducing or replacing the use of cash. Since many Fintech users point to security as the priority task, a direction for the related institutions is proposed. Second, this study offers an opportunity to broaden the perception of consumers who currently regard Fintech as merely an NFC-based simple payment service.

UniPy: A Unified Programming Language for MGC-based IoT Systems

  • Kim, Gayoung;Choi, Kwanghoon;Chang, Byeong-Mo
    • Journal of the Korea Society of Computer and Information, v.24 no.3, pp.77-86, 2019
  • The advent of the Internet of Things (IoT) has made it common for a computing environment to involve programming not a single computer but several heterogeneous, distributed computers together. Developing a separate program for each computer increases the programmer's burden, and testing all the programs becomes more complex. To address this challenge, this paper proposes an RPC-based unified programming language, UniPy, for developing MGC (eMbedded, Gateway, and Cloud) applications in IoT systems configured with popular computers such as an Arduino, a Raspberry Pi, and a Web-based DB server. UniPy offers programmers a view of classes as locations and a very simple form of remote procedure call. Our UniPy compiler automatically splits a UniPy program into smaller per-location programs and generates the necessary RPC mechanism. An advantage of UniPy is that programmers can write local code exactly as they would for a single computer, with no extra knowledge required, because the programming model is unified; this differs from existing work such as Fabryq and Ravel. Moreover, the structure of UniPy programs allows programmers to test a program by executing it directly before it is split, a feature that has not been emphasized before.
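
The abstract does not show UniPy syntax, so the following plain-Python analogy only illustrates the "classes as locations" idea: each class stands for one MGC tier, the whole program can be tested on a single machine before splitting, and cross-class calls mark where the compiler would insert RPC stubs. All names here are hypothetical.

```python
class Embedded:                      # would run on the Arduino
    def read_sensor(self) -> int:
        return 42                    # stand-in for a real sensor read

class Gateway:                       # would run on the Raspberry Pi
    def __init__(self, device: Embedded):
        self.device = device
    def collect(self) -> int:
        # In the split program, this cross-location call would become an RPC.
        return self.device.read_sensor()

class Cloud:                         # would run on the Web-based DB server
    def store(self, value: int) -> None:
        print(f"stored {value}")

# Before splitting, the unified program runs and is testable on one machine.
cloud, gateway = Cloud(), Gateway(Embedded())
cloud.store(gateway.collect())       # prints: stored 42
```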

A Study on the Role and Security Enhancement of the Expert Data Processing Agency: Focusing on a Comparison of Data Brokers in Vermont (데이터처리전문기관의 역할 및 보안 강화방안 연구: 버몬트주 데이터브로커 비교를 중심으로)

  • Soo Han Kim;Hun Yeong Kwon
    • Journal of Information Technology Services, v.22 no.3, pp.29-47, 2023
  • With the recent advancement of information and communication technologies such as artificial intelligence, big data, cloud computing, and 5G, data is being produced and digitized in unprecedented amounts. Data has consequently emerged as a critical resource for the future economy, and countries overseas have been revising their laws on data protection and utilization. In Korea, the 'Data 3 Act' was revised in 2020, introducing institutional measures that classify personal information, pseudonymized information, and anonymous information for research, statistics, and the preservation of public records. Combining pseudonymized personal information is expected to increase the added value of data, and to this end the "Expert Data Combination Agency" and the "Expert Data Agency" (hereinafter, the Expert Data Processing Agency) systems were introduced. As an overseas counterpart to these domestic systems, the state of Vermont recently enacted the first "Data Broker Act" in the United States to protect personal information held by data brokers. This study compares and analyzes the roles and functions of the Expert Data Processing Agency and the data broker, and identifies differences in designation standards, security measures, and related requirements, in order to suggest ways to stimulate the data economy and enhance information protection.

Design and Implementation of Indoor Air Hazardous Substance Detection Mobile System based on IoT Platform (IoT platform 기반 실내 대기 위험 물질 감지 모바일 시스템 설계 및 구현)

  • Yang, Oh-Seok;Kim, Yeong-Uk;Lee, Hong-Lo
    • Journal of Korea Society of Industrial Information Systems, v.24 no.6, pp.43-53, 2019
  • In recent years there have been many cases of harm from hazardous indoor air substances, with major damage resulting from the lack of a quick response. This system is therefore designed to send a push message to the user's mobile device when the concentration of a hazardous substance exceeds a threshold. The system collects data from IoT devices such as an Arduino and a Raspberry Pi and stores it in MongoDB and MySQL databases in a cloud computing system. The data is served through a Node.js application server and sent to the application for visualization. When the IoT devices signal a dangerous situation, a push message is sent using the Google FCM library. The mobile application is developed using an Android WebView, and the pages loaded in the WebView are built with HTML5 (HTML, JavaScript, CSS). This system enables real-time monitoring of hazardous indoor air substances. In addition, real-time information on the indoor/outdoor detection location and concentration can be sent to the user's mobile device in a dangerous situation, which can be expected to help users respond quickly.
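
A minimal sketch of the alert path is shown below. The paper implements its server in Node.js; this Python version, the MongoDB collection and field names, and the threshold value are illustrative assumptions, and the push goes through FCM's legacy HTTP endpoint rather than whatever client library the authors used.

```python
import requests
from pymongo import MongoClient

FCM_URL = "https://fcm.googleapis.com/fcm/send"  # FCM legacy HTTP endpoint
SERVER_KEY = "YOUR_FCM_SERVER_KEY"               # placeholder credential
CO_THRESHOLD_PPM = 9.0                           # hypothetical alert threshold

def check_and_alert(device_token: str) -> None:
    """Read the newest sensor reading; push a warning if it exceeds the threshold."""
    db = MongoClient("mongodb://localhost:27017")["air"]
    latest = db.readings.find_one(sort=[("ts", -1)])   # newest document by timestamp
    if latest and latest["co_ppm"] > CO_THRESHOLD_PPM:
        requests.post(
            FCM_URL,
            headers={"Authorization": f"key={SERVER_KEY}",
                     "Content-Type": "application/json"},
            json={"to": device_token,
                  "notification": {
                      "title": "Air quality warning",
                      "body": f"CO {latest['co_ppm']} ppm at {latest['loc']}"}},
            timeout=5,
        )
```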

Establishing a Sustainable Future Smart Education System (지속가능한 미래형 스마트교육 시스템 구축 방안)

  • Park, Ji-Hyeon;Choi, Jae-Myeong;Park, Byoung-Lyoul;Kang, Heau-Jo
    • Journal of Advanced Navigation Technology, v.16 no.3, pp.495-503, 2012
  • As modern society changes rapidly, the field of education has also developed quickly. Since the Edunet system was introduced in 1996, many systems have been developed continuously, including the Center for Teaching and Learning, cyber home learning systems, diagnosis-and-prescription systems, video systems, teaching and counseling, and study management systems. However, these systems have not drawn a strong response from educational consumers because they lack interconnection. One reason is that program administrators did not consider the continuity of each program; instead of predicting future needs, they built a brand-new system whenever one was needed. A suitable smart education system should be one large integrated system built on the analysis and processing of many kinds of data. Rather than a single sign-on layer connecting independent systems, it should supply educational consumers with varied and useful information by adopting big data concepts. The cloud computing system should be established so that it manages not simple compiled files and application programs but diverse contents and data.

Proposal for the 『Army TIGER Cyber Defense System』 Installation capable of responding to future enemy cyber attack (미래 사이버위협에 대응 가능한 『Army TIGER 사이버방호체계』 구축을 위한 제언)

  • Byeong-jun Park;Cheol-jung Kim
    • Journal of Internet Computing and Services, v.25 no.1, pp.157-166, 2024
  • The Army TIGER system, which is being fielded to implement a future combat system, is expected to bring innovative changes to the Army's combat methods and warfighting capability in areas such as mobility, networking, and intelligence. To this end, the Army will introduce various systems employing drones, robots, unmanned vehicles, and AI (artificial intelligence) and utilize them in combat. The use of unmanned vehicles and AI will bring equipment based on new technologies into the Army and increase the variety and volume of transmitted information, i.e., data. At present, the military is accelerating research and combat experimentation on warfighting options that use the Army TIGER force system for specific functions. On the other hand, research is not yet being pursued on countermeasures against cyber threats targeting the information systems associated with the growing number of unmanned systems, the data they produce and transmit, and the cloud centers and AI command-and-control centers driven by the new force systems. Accordingly, this paper analyzes the structure and characteristics of the Army TIGER force integration system and offers suggestions on the necessity of building an integrated Army TIGER cyber protection system, together with the cyber defense solutions available for responding to future cyber threats.

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services, v.14 no.6, pp.71-84, 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing a client's business, a separate log data processing system needs to be established. However, existing computing environments make it difficult to realize the flexible storage expansion needed to process massive amounts of unstructured log data and to execute the many functions needed to categorize and analyze the stored data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the existing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, including storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions so that it continues to operate after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data; moreover, their strict schemas prevent node expansion when rapidly growing data must be distributed across various nodes. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, or document-oriented. Of these, the proposed system uses MongoDB, a representative document-oriented database with a free schema structure. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data grows rapidly, and it provides an auto-sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over each bank's entire client business process are sent to the cloud server, the log collector module collects the data, classifies them according to the type of log data, and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the log analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's analysis conditions. The aggregated log data in the MongoDB module are processed in parallel-distributed fashion by the Hadoop-based analysis module. A comparative evaluation of log-insert and query performance against a system that uses only MySQL demonstrates the proposed system's superiority, and an optimal chunk size is confirmed through a MongoDB insert-performance evaluation over various chunk sizes.
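
As a small illustration of the two MongoDB properties the abstract leans on (free-schema inserts and auto-sharding), here is a hedged pymongo sketch; the database, collection, field names, and shard key are assumptions for demonstration, not the paper's configuration.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # a mongos router in production
logs = client["bank"]["logs"]

# Unstructured logs: documents in one collection may carry different fields,
# which a strict relational schema would not allow.
logs.insert_one({"ts": datetime.now(timezone.utc), "type": "login",
                 "branch": "Seoul-01", "user": "u1001"})
logs.insert_one({"ts": datetime.now(timezone.utc), "type": "transfer",
                 "amount": 50000, "currency": "KRW", "status": "ok"})

# Against a sharded cluster, these admin commands distribute the collection
# across shards; MongoDB's auto-sharding then splits and balances chunks as
# the log volume grows.
client.admin.command("enableSharding", "bank")
client.admin.command("shardCollection", "bank.logs", key={"ts": 1})
```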

Research-platform Design for the Korean Smart Greenhouse Based on Cloud Computing (클라우드 기반 한국형 스마트 온실 연구 플랫폼 설계 방안)

  • Baek, Jeong-Hyun;Heo, Jeong-Wook;Kim, Hyun-Hwan;Hong, Youngsin;Lee, Jae-Su
    • Journal of Bio-Environment Control, v.27 no.1, pp.27-33, 2018
  • This study reviewed domestic and international smart farm service models based on the convergence of agriculture and information and communication technology, and derived the factors needed to improve the Korean smart greenhouse. Studies on modeling the crop growth environment in domestic smart farms have been limited, and building the research infrastructure takes considerable time; a cloud-based research platform is needed as an alternative. Such a platform can provide an infrastructure for comprehensive data storage and analysis, as it manages cloud-based integrated data for the growth model, the growth environment model, the actuator control model, and farm management, as well as a knowledge-based expert system and a farm dashboard. The cloud-based research platform can therefore be applied to quantify the relationships among factors such as the crop growth environment, productivity, and actuator control. In addition, it will enable researchers to analyze crop growth environment models quantitatively by utilizing big data, machine learning, and artificial intelligence.
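
As an example of the kind of quantitative analysis such a platform would host, here is a minimal sketch that fits a least-squares model relating two growth-environment variables to productivity; the variables and numbers are made up for illustration, and the paper prescribes no particular model.

```python
import numpy as np

# Columns: daytime temperature (C), CO2 (ppm); target: yield (kg per plot).
X = np.array([[24.0, 400], [26.0, 600], [22.0, 450], [27.0, 800], [25.0, 700]])
y = np.array([3.1, 3.9, 2.8, 4.6, 4.2])

# Ordinary least squares with an intercept term appended as a constant column.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("temp, co2, intercept:", coef.round(3))

# Predict yield for a candidate setpoint (25 C, 650 ppm) to compare options.
setpoint = np.array([25.0, 650, 1.0])
print("predicted yield:", round(float(setpoint @ coef), 2))
```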