• Title/Summary/Keyword: data center servers


Digital Forensic Methodology of IaaS Cloud Computing Service (IaaS 유형의 클라우드 컴퓨팅 서비스에 대한 디지털 포렌식 연구)

  • Jeong, Il-Hoon;Oh, Jung-Hoon;Park, Jung-Heum;Lee, Sang-Jin
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.21 no.6
    • /
    • pp.55-65
    • /
    • 2011
  • Recently, the use of cloud computing services has been increasing dramatically owing to the diffusion of wired and wireless communication networks and high-performance Internet technology. Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. From the viewpoint of a digital forensic investigation, it is difficult to obtain data from cloud computing service environments. This paper therefore suggests an analysis method for AWS (Amazon Web Services) and Rackspace, which account for a large share of IaaS-type cloud computing services, aimed at data acquisition for obtaining evidence.

An Improved Estimation Model of Server Power Consumption for Saving Energy in a Server Cluster Environment (서버 클러스터 환경에서 에너지 절약을 위한 향상된 서버 전력 소비 추정 모델)

  • Kim, Dong-Jun;Kwak, Hu-Keun;Kwon, Hui-Ung;Kim, Young-Jong;Chung, Kyu-Sik
    • The KIPS Transactions:PartA
    • /
    • v.19A no.3
    • /
    • pp.139-146
    • /
    • 2012
  • In a server cluster environment, one way to save energy is to control server power according to traffic conditions, that is, to determine the ON/OFF state of each server according to the energy usage of the data center and of the individual servers. To do this, we need a way to estimate each server's energy consumption. In this paper, we use a software-based power consumption estimation model because it is more efficient than a hardware approach using a power meter in terms of energy and cost. The traditional software-based estimation model has a drawback: because it uses only the idle-state field of the CPU, it cannot capture the computing status of servers well and therefore does not estimate power consumption effectively. In this paper, we present a CPU-field-based power consumption estimation model that uses the various status fields of the CPU to capture the CPU status of servers and the overall status of the system, and that estimates power more accurately than the two traditional models (the CPU/disk/memory-utilization-based model and the CPU-idle-utilization-based model). We performed experiments using two PCs and compared the power consumption estimated by the software model with that measured by a hardware power meter. The experimental results show that the traditional model has an average error rate of about 8-15%, while our proposed model has an average error rate of about 2%. A minimal sketch of this kind of model is given below.
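The following is a minimal Python sketch of a CPU-field-based linear power model of the kind described above: it reads the per-state CPU time fields from Linux's /proc/stat and combines them with per-field weights. The field weights, the idle power value, and the sampling interval are illustrative assumptions, not the coefficients fitted in the paper.

```python
# Sketch of a CPU-field-based power estimate; all coefficients are assumed.
import time

FIELDS = ["user", "nice", "system", "idle", "iowait", "irq", "softirq", "steal"]

def read_cpu_times():
    """Read the aggregate CPU time counters from /proc/stat (Linux)."""
    with open("/proc/stat") as f:
        parts = f.readline().split()[1:1 + len(FIELDS)]
    return dict(zip(FIELDS, map(int, parts)))

def cpu_field_shares(interval=1.0):
    """Fraction of CPU time spent in each state over the interval."""
    t0 = read_cpu_times()
    time.sleep(interval)
    t1 = read_cpu_times()
    deltas = {k: t1[k] - t0[k] for k in FIELDS}
    total = sum(deltas.values()) or 1
    return {k: v / total for k, v in deltas.items()}

# Linear model: idle power plus weighted contributions of the busy states.
P_IDLE = 60.0                      # watts when fully idle (assumed)
WEIGHTS = {"user": 55.0, "nice": 45.0, "system": 50.0, "iowait": 20.0,
           "irq": 30.0, "softirq": 30.0, "steal": 0.0}   # watts per full share (assumed)

def estimate_power():
    shares = cpu_field_shares()
    return P_IDLE + sum(WEIGHTS.get(k, 0.0) * shares[k] for k in FIELDS if k != "idle")

if __name__ == "__main__":
    print(f"estimated draw: {estimate_power():.1f} W")
```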

An Extended DOM for GML Data (GML 데이타를 지원하는 확장된 DOM)

  • Ban, Chae-Hoon;Jo, Jeong-Hee;Moon, Sang-Ho;Hong, Bong-Hee
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.8 no.5
    • /
    • pp.510-519
    • /
    • 2002
  • The OpenGIS Consortium has proposed a new web-mapping technology to support interoperability in the web GIS environment by developing the MapServer and GML specifications. In this environment, the MapServer transforms legacy spatial data into GML data, and clients display the data on standard web browsers. The web-mapping testbed proposes methods for discovering, accessing, integrating, and displaying GIS information, but it does not cover the processing of spatial operations, which are essential services in a GIS environment. This paper proposes a method for executing spatial operations on GML data, such as overlays of different map layers stored in legacy data servers. To support spatial operations on GML data in the web GIS environment, this paper designs and implements GDOM based on the W3C DOM specification and OGC's Simple Features Specification. The paper presents the specification and implementation of GDOM and the processing of spatial operations in the web-mapping testbed environment; a small DOM-based illustration follows below.
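As an illustration of DOM-based access to GML (not the GDOM implementation itself), the following Python sketch parses GML 2 polygons with the standard DOM API and applies a coarse spatial operator, a bounding-box overlap test; the sample geometries and the choice of operator are assumptions.

```python
# Parse GML polygons via the DOM and test a simple spatial predicate.
from xml.dom import minidom

GML_A = """<gml:Polygon xmlns:gml="http://www.opengis.net/gml">
  <gml:outerBoundaryIs><gml:LinearRing>
    <gml:coordinates>0,0 4,0 4,3 0,3 0,0</gml:coordinates>
  </gml:LinearRing></gml:outerBoundaryIs></gml:Polygon>"""
GML_B = GML_A.replace("0,0 4,0 4,3 0,3 0,0", "3,2 6,2 6,5 3,5 3,2")

def coords(gml_text):
    """Extract (x, y) pairs from a GML fragment via the DOM."""
    doc = minidom.parseString(gml_text)
    node = doc.getElementsByTagName("gml:coordinates")[0]
    return [tuple(map(float, pair.split(","))) for pair in node.firstChild.data.split()]

def bbox(points):
    xs, ys = zip(*points)
    return min(xs), min(ys), max(xs), max(ys)

def bbox_overlaps(a, b):
    """Coarse spatial operator: do the two bounding boxes intersect?"""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

print(bbox_overlaps(bbox(coords(GML_A)), bbox(coords(GML_B))))   # True
```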

A Provable One-way Authentication Key Agreement Scheme with User Anonymity for Multi-server Environment

  • Zhu, Hongfeng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.2
    • /
    • pp.811-829
    • /
    • 2015
  • One-way authenticated key agreement protocols, which aim to establish secure communication over public, insecure networks, can achieve one-way authentication of the communicating entities, giving a specific user strong anonymity while keeping the transmitted data confidential. Such protocols can be designed with a Public Key Infrastructure, but this consumes a large amount of computation. Because one-way authenticated key agreement protocols are mainly concerned with authentication and key agreement, we adopt a multi-server architecture to realize these goals. In a multi-server architecture, the user registers at the registration center (RC) once and can then access all the permitted services provided by the eligible servers. The combination of these ideas leads to a highly practical scheme for the universal client/server architecture. Based on these motivations, this paper first proposes a new one-way authenticated key agreement scheme based on the multi-server architecture. Compared with the related recent literature, our proposed scheme not only offers high efficiency and unique functionality, but is also robust against various attacks and achieves perfect forward secrecy. Finally, we give the security proof and the efficiency analysis of the proposed scheme. A generic key agreement sketch, not the paper's scheme, appears below.
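The paper's concrete scheme is not reproduced here; as background, the following Python sketch shows only the underlying key agreement primitive that such protocols build on, an X25519 Diffie-Hellman exchange with HKDF key derivation, using the third-party cryptography package.

```python
# Generic key agreement primitive (not the proposed protocol).
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# User side generates an ephemeral key pair.
user_priv = X25519PrivateKey.generate()
user_pub = user_priv.public_key()

# Server side generates its own key pair.
server_priv = X25519PrivateKey.generate()
server_pub = server_priv.public_key()

# Each side computes the same shared secret from its private key and the peer's public key.
user_shared = user_priv.exchange(server_pub)
server_shared = server_priv.exchange(user_pub)

def derive_session_key(shared_secret):
    """Derive a 256-bit session key from the raw shared secret."""
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"session key").derive(shared_secret)

assert derive_session_key(user_shared) == derive_session_key(server_shared)
```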

SWoT Service Discovery for CoAP-Based Sensor Networks (CoAP 기반 센서네트워크를 위한 SWoT 서비스 탐색)

  • Yu, Myung-han;Kim, Sangkyung
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.4 no.9
    • /
    • pp.331-336
    • /
    • 2015
  • On IoT-based sensor networks, users or sensor nodes must perform a Service Discovery (SD) procedure before accessing a desired service. Current approaches use centralized Resource Directory (RD) servers or P2P techniques, but these can cause a single point of failure or a flood of SD messages. In this paper, we propose an improved SWoT SD approach for CoAP-based sensor networks, which integrates the Social Web of Things (SWoT) concept into the current CoAP-based SD approach and makes up for the weak points of existing systems. The new approach supports keyword- and location-based searches, as found in SNS, which enhances usability. Finally, we implemented a real system for evaluation; a basic CoAP discovery example is sketched below.
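For context, the sketch below performs standard CoAP resource discovery against /.well-known/core (RFC 6690) using the aiocoap library; the SWoT keyword and location search proposed in the paper would layer on top of such a query. The target host coap.me is a public test server used here only as an example.

```python
# Standard CoAP resource discovery; the SWoT extensions are not shown.
import asyncio
from aiocoap import Context, Message, GET

async def discover(host):
    ctx = await Context.create_client_context()
    request = Message(code=GET, uri=f"coap://{host}/.well-known/core")
    response = await ctx.request(request).response
    # The CoRE link-format payload lists the resources the node exposes.
    return response.payload.decode()

if __name__ == "__main__":
    print(asyncio.run(discover("coap.me")))
```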

A Study on the Police Knowledge Management System based on the IntraNet (인트라넷기반의 경찰지식관리시스템에 관한 연구)

  • Choi, Eung-Ryul;Lim, Jae-Kang
    • Korean Security Journal
    • /
    • no.3
    • /
    • pp.273-305
    • /
    • 2000
  • Knowledge has replaced the traditional factors of production (land, labor, and capital) and has become one of the most important new resources. The Internet Knowledge Society is one in which knowledge is the major source of development and competition. Now more than 350 million computers are connected to Internet servers, and there are more than 250 million Internet users. The purpose of this paper is to propose some key factors for implementing a Police Knowledge Management System (PKMS) based on an Intranet. With Information Technology (IT), the police administrative system will become much more efficient, and introducing IT into the system is critical for restructuring police administration. This paper concludes as follows: ■ Knowledge is divided into tacit and explicit knowledge, and the knowledge process is divided into the acquisition, accumulation, distribution, and creation of knowledge. ■ The Intranet is composed of a Web server, an FTP server, and an e-mail server, and a security system is constructed for safety. ■ All police officers are bound to serve as new knowledge workers. ■ The police organization needs to operate a data management system, to establish a Police Knowledge Management Center (PKMC), and to appoint Police Chief Knowledge Officers (PCKO) to manage the PKMC. ■ An information and knowledge infrastructure (various databases are the most important factor) should be established within the organization to promote self-directed management, interactive communication, and the learning ability of the members.


A Discovery System of Malicious Javascript URLs hidden in Web Source Code Files

  • Park, Hweerang;Cho, Sang-Il;Park, Jungkyu;Cho, Youngho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.24 no.5
    • /
    • pp.27-33
    • /
    • 2019
  • One of the most serious security threats is a botnet-based attack. A botnet in general consists of numerous bots, which are computing devices with networking functions, such as personal computers, smartphones, or tiny IoT sensor devices, that have been compromised by malicious code or attackers. Such botnets can launch various serious cyber-attacks, such as DDoS attacks, propagating malware, and spreading spam e-mails over the network. To establish a botnet, attackers usually inject malicious URLs into web source code stealthily by using data-hiding methods such as Javascript obfuscation techniques to avoid discovery by traditional security systems such as firewalls, IPS (Intrusion Prevention System), or IDS (Intrusion Detection System). Meanwhile, it is non-trivial in practice for software developers to manually find such malicious URLs hidden in the numerous web source files stored on web servers. In this paper, we propose a security defense system that discovers such suspicious, malicious URLs hidden in web source code, and we present experimental results showing its discovery performance. In particular, based on our experimental results, the proposed system discovered 100% of the URLs hidden by Javascript encoding obfuscation within the sample web source files. A simplified scanner along these lines is sketched below.
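A much-simplified scanner in the spirit of the proposed system is sketched below in Python: it undoes a few common JavaScript string-hiding tricks (percent-encoding, \xNN escapes, String.fromCharCode) and flags URLs that become visible only after decoding. The decoding layers and the URL pattern are illustrative assumptions, not the paper's detection rules.

```python
# Flag URLs that only appear after undoing simple JavaScript obfuscation layers.
import re
from pathlib import Path
from urllib.parse import unquote

URL_RE = re.compile(r"https?://[^\s'\"<>)]+", re.IGNORECASE)

def decode_layers(text):
    """Undo a few common obfuscation layers (not exhaustive)."""
    decoded = unquote(text)                                          # %68%74%74%70...
    decoded = re.sub(r"\\x([0-9a-fA-F]{2})",
                     lambda m: chr(int(m.group(1), 16)), decoded)    # \x68\x74...
    decoded = re.sub(r"String\.fromCharCode\(([\d,\s]+)\)",
                     lambda m: "".join(chr(int(c)) for c in m.group(1).split(",")),
                     decoded)                                        # fromCharCode(104, ...)
    return decoded

def suspicious_urls(path):
    """Return URLs that were not visible before decoding the file."""
    raw = Path(path).read_text(errors="ignore")
    visible = set(URL_RE.findall(raw))
    hidden = set(URL_RE.findall(decode_layers(raw))) - visible
    return hidden

if __name__ == "__main__":
    for source_file in Path(".").rglob("*.js"):
        for url in suspicious_urls(source_file):
            print(f"{source_file}: {url}")
```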

Strategy for Task Offloading of Multi-user and Multi-server Based on Cost Optimization in Mobile Edge Computing Environment

  • He, Yanfei;Tang, Zhenhua
    • Journal of Information Processing Systems
    • /
    • v.17 no.3
    • /
    • pp.615-629
    • /
    • 2021
  • With the development of mobile edge computing, how to utilize the computing power of edge servers to offload data and computation effectively and efficiently is of great research value. This paper studies the computation offloading problem with multiple users and multiple servers in mobile edge computing. First, in order to minimize system energy consumption, the problem is modeled by jointly optimizing the offloading strategy and the wireless and computing resource allocation in a multi-user and multi-server scenario. In addition, this paper explores a computation offloading scheme that optimizes the overall cost. As the centralized optimization method is an NP-hard problem, a game-theoretic method is used to achieve effective computation offloading in a distributed manner. The distributed computation offloading decision problem among the mobile devices is modeled as a multi-user computation offloading game; a Nash equilibrium exists in this game and can be reached in a finite number of iterations. We then propose a distributed computation offloading algorithm, which first calculates offloading weights and then iterates distributedly, time slot by time slot, to update the computation offloading decisions. Finally, the algorithm is verified by simulation experiments. The simulation results show that the proposed algorithm reaches equilibrium within a finite number of iterations and outperforms several other advanced computation offloading algorithms in terms of the number of users who benefit from offloading and the overall overhead. A toy version of the iterative decision loop is sketched below.
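The following Python sketch illustrates a best-response iteration of the kind the abstract describes: each user repeatedly picks local execution or the cheapest server given the others' current choices, until no one changes. The cost values and the congestion term are toy assumptions and do not reproduce the paper's energy model.

```python
# Toy best-response dynamics for multi-user, multi-server offloading decisions.
import random

N_USERS, N_SERVERS = 6, 2
LOCAL_COST = [random.uniform(4, 8) for _ in range(N_USERS)]    # cost of local execution
BASE_OFFLOAD = [random.uniform(1, 3) for _ in range(N_USERS)]  # offload cost with no contention
CONGESTION = 1.0                                               # extra cost per co-located user

def offload_cost(user, server, decisions):
    load = sum(1 for u, s in enumerate(decisions) if s == server and u != user)
    return BASE_OFFLOAD[user] + CONGESTION * load

def best_response(user, decisions):
    """Choose local execution (-1) or the cheapest server, given the others' choices."""
    options = [(-1, LOCAL_COST[user])] + \
              [(s, offload_cost(user, s, decisions)) for s in range(N_SERVERS)]
    return min(options, key=lambda x: x[1])[0]

decisions = [-1] * N_USERS            # everyone starts computing locally
for it in range(100):                 # iterate per "time slot" until no one moves
    changed = False
    for u in range(N_USERS):
        best = best_response(u, decisions)
        if best != decisions[u]:
            decisions[u], changed = best, True
    if not changed:
        print(f"equilibrium after {it + 1} iterations:", decisions)
        break
```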

Blockchain and Physically Unclonable Functions Based Mutual Authentication Protocol in Remote Surgery within Tactile Internet Environment

  • Hidar, Tarik;Abou el kalam, Anas;Benhadou, Siham;Kherchttou, Yassine
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.9
    • /
    • pp.15-22
    • /
    • 2022
  • Tactile Internet technology is considered the evolution of the Internet of Things. It will enable real-time applications in all fields, such as remote surgery, and it requires extremely low latency (not exceeding 1 ms), high availability, reliability, and a strong security system. Since its appearance in 2014, tremendous efforts have been made to ensure authentication between sensors, actuators, and servers in order to secure applications such as remote surgery. This human-to-machine relationship is very critical because a human life depends on it: the communication between the surgeon performing the remote surgery and the robot arms, as Tactile Internet actors, should be fully protected end to end during the surgery. Thus, a secure mutual user authentication framework has to be implemented to ensure security without affecting latency. Existing authentication methods require servers to store and exchange data between the Tactile Internet entities, which not only makes those systems vulnerable to a single point of failure (SPOF) but also negatively impacts latency. To address these issues, we propose a lightweight authentication protocol for remote surgery in a Tactile Internet environment, composed of a decentralized blockchain and physically unclonable functions. Finally, the performance evaluation illustrates that our proposed solution ensures security, latency, and reliability.

Efficient Load Balancing Technique through Server Load Threshold Alert in SDN (SDN 환경에서의 서버 부하 임계치 경고를 통한 효율적인 부하분산 기법)

  • Lee, Jun-Young;Kwon, Tea-Wook
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.16 no.5
    • /
    • pp.817-824
    • /
    • 2021
  • The SDN(Software Defined Networking) technology, which appeared to overcome the limitations of the existing network system, resolves the rigidity of the existing system through the separation of HW and SW in network equipment. These characteristics of SDN provide wide scalability beyond hardware-oriented network equipment, and provide flexible load balancing policies in data centers of various sizes. In the meantime, many studies have been conducted to apply the advantages of SDN to data centers and have shown their effectiveness. The method mainly used in previous studies was to periodically check the server load and perform load balancing based on this. In this method, the more the number of servers and the shorter the server load check cycle, the more traffic increases. In this paper, we propose a new load balancing technique that can eliminate unnecessary traffic and manage server resources more efficiently by reporting to the controller when a specific level of load occurs in the server to solve this limitation.