• Title/Abstract/Keyword: Internet data center

Search results: 690 items

IBC-Based Entity Authentication Protocols for Federated Cloud Systems

  • Cao, Chenlei; Zhang, Ru; Zhang, Mengyi; Yang, Yixian
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 7, No. 5 / pp.1291-1312 / 2013
  • Cloud computing changes the service models of information systems and accelerates the pace of technological innovation in consumer electronics. However, it also brings new security issues. As one of the important foundations of various cloud security solutions, entity authentication is attracting increasing interest from researchers. This article proposes a layered security architecture to provide a trust transmission mechanism among cloud systems maintained by different organizations. Based on this architecture, four protocols are proposed to implement mutual authentication, data sharing, and secure data transmission in federated cloud systems. The protocols not only ensure the confidentiality of the transferred data but also resist man-in-the-middle and masquerading attacks. Additionally, the security properties of the four protocols have been formally verified using the S-pi calculus. Finally, the performance of the protocols is investigated in a lab environment, and the feasibility of the security architecture has been verified on a hybrid cloud system.
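
As an illustration of the mutual-authentication building block this abstract describes, the sketch below implements a generic two-sided challenge-response over a pre-shared secret using HMAC. This is a hedged stand-in, not the paper's identity-based (IBC) protocol; all names and the use of a shared symmetric secret are assumptions for illustration only.

```python
import hashlib
import hmac
import os

def respond(secret: bytes, challenge: bytes) -> str:
    # Prove knowledge of the secret by keying an HMAC over the fresh challenge.
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def mutual_auth(secret_a: bytes, secret_b: bytes) -> bool:
    """Each side challenges the other with a fresh nonce; both must answer
    correctly for the handshake to succeed (succeeds iff the secrets match)."""
    nonce_a, nonce_b = os.urandom(16), os.urandom(16)
    proof_b = respond(secret_b, nonce_a)   # B answers A's challenge
    proof_a = respond(secret_a, nonce_b)   # A answers B's challenge
    ok_b = hmac.compare_digest(proof_b, respond(secret_a, nonce_a))
    ok_a = hmac.compare_digest(proof_a, respond(secret_b, nonce_b))
    return ok_a and ok_b

shared = b'shared-secret'
```

Fresh nonces defeat replay, and `hmac.compare_digest` avoids timing leaks; the paper's IBC scheme additionally removes the need for a pre-shared secret by deriving keys from identities.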

A Study on Selection of Cross-Docking Center based on Existing Logistics Network (기존 물류 네트워크 기반에서 크로스-도킹 거점선정에 관한 연구)

  • 이인철; 이명호; 김내헌
    • 산업공학 / Vol. 19, No. 1 / pp.26-33 / 2006
  • Many firms consider applying a cross-docking system to reduce inventory and lead time. However, most studies concentrate mainly on the design of a cross-docking system. This study presents a method for selecting a cross-docking center within an existing logistics network. Describing the operating environment for applying a cross-docking system, the selection criteria for the cross-docking center, and the main constraints on transportation planning in a multi-level logistics network, we define the cross-docking center selection problem as it arises in the logistics field. We also define a simulation model that can analyze cross-docking volume under various conditions and develop a selection methodology for the cross-docking center. The simulation model specifies the algorithm and influence factors of the cross-docking system, the decision criteria, policy parameters, and input data. In addition, this study analyzes the effect of increasing the number of simultaneous receiving and shipping docks, and the efficiency of overnight transportation and cross-docking, by evaluating scenarios simulated with practical data from the logistics field.

Finding a plan to improve recognition rate using classification analysis

  • Kim, SeungJae; Kim, SungHwan
    • International journal of advanced smart convergence / Vol. 9, No. 4 / pp.184-191 / 2020
  • With the emergence of the 4th Industrial Revolution, its core enabling technologies, such as AI (artificial intelligence), big data, and the Internet of Things (IoT), have become central topics of public interest. In particular, there is a growing trend of presenting future visions by discovering new models through big data analysis of data collected in a specific field, and then inferring and predicting new values with those models. To obtain reliable and sophisticated statistical results from big data analysis, it is necessary to examine the meaning of each variable, the correlations between variables, and multicollinearity. If the data is classified inconsistently with the hypothesis being tested from the beginning, even a well-executed analysis yields unreliable results. In other words, prior to big data analysis, it is necessary to ensure that the data is well classified according to the purpose of the analysis. Therefore, in this study, data is classified using two classification techniques from machine learning: decision trees and random forests. By evaluating the quality of the resulting classification, we seek a way to improve the classification and analysis rate of the data.
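
The decision-tree step the abstract describes can be reduced to its smallest form, a one-level tree (decision stump) fit on a single variable. The data and exhaustive threshold search below are illustrative, not taken from the study.

```python
def fit_stump(xs, ys):
    """Try every observed value as a split threshold on one feature and
    keep the threshold that maximizes classification accuracy."""
    best = (None, 0.0)
    for t in sorted(set(xs)):
        preds = [1 if x >= t else 0 for x in xs]
        acc = sum(p == y for p, y in zip(preds, ys)) / len(ys)
        if acc > best[1]:
            best = (t, acc)
    return best

xs = [1.0, 1.5, 2.0, 3.5, 4.0, 4.5]   # a single explanatory variable
ys = [0,   0,   0,   1,   1,   1  ]   # class labels
threshold, accuracy = fit_stump(xs, ys)
```

A random forest extends this idea by fitting many deeper trees on bootstrap samples and taking a majority vote; the accuracy score above is exactly the "degree of classification" such an evaluation would measure.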

Analysis of the Mibyeong Concept and User on the internet. - Focusing on Naver Jisik-iN Q&A, Cafe posts - (인터넷 상에서 사용되는 미병의 개념 및 사용자 분석 - 네이버 지식-iN과 카페를 중심으로 -)

  • 김선민; 이시우; 문수정
    • 대한예방한의학회지 / Vol. 21, No. 1 / pp.95-106 / 2017
  • Objectives : Although interest in preventive medicine has increased recently, "Mibyeong", the preventive concept of Korean medicine, is still unfamiliar to the general public. Therefore, this study aims to investigate how the concept of Mibyeong is used on the Internet and who uses it. Methods : Naver (www.naver.com), which ranks highest in market share, number of visitors, search time share, and community category share, was selected as the search target, and Jisik-iN Q&A entries and cafe posts about Mibyeong from approximately the last six years were searched. Results : 105 Jisik-iN Q&A entries and 283 cafe posts were found. Overall, usage of the term Mibyeong in both Jisik-iN Q&A and cafe posts peaked in 2013. Among Internet user categories, the term was used most often in Jisik-iN Q&A by Korean medicine related medical personnel (29 cases, 28%) and in cafes by other health-related workers (87 cases, 31%). Among Mibyeong-related cafe categories, information exchange (220 cases, 77%) was the most frequent, and a further 39 cases (14%) concerned the operation of medical institutions. The concept of Mibyeong was more often used in a symptom-based sense than in the sense of a diagnostic test or disease (cafe posts 52%, Jisik-iN Q&A 70%); in particular, the topics of Mibyeong-related Jisik-iN Q&A were, in order, pain (31 cases, 16%), cancer (17 cases, 9%), and fatigue (11 cases, 6%). Conclusions : This study has significance as basic research on the general Internet user population and can serve as fundamental data for raising awareness of Mibyeong and publicizing its necessity.

Comparative Study on AC and DC Feed System for Internet Data Center (인터넷데이터센터의 교류, 직류급전시스템 비교 분석)

  • 김두환; 김효성
    • 전력전자학회논문지 / Vol. 17, No. 1 / pp.27-33 / 2012
  • Internet Data Centers (IDC), as essential facilities for the modern IT industry, typically have scores of MW of concentrated electric load. Uninterruptible Power Supplies (UPS) are necessary in the power feed system of an IDC to meet its stable power requirements. Thus, conventional AC power feed systems for IDCs have three cascaded power conversion stages, (AC-DC), (DC-AC), and (AC-DC), which results in very low conversion efficiency. By contrast, DC power feed systems need just a single power conversion stage (AC-DC) between the AC mains and the DC server loads, which gives comparatively high conversion efficiency and reliability. This paper compares the efficiencies of a 220V AC power feed system and a 300V DC power feed system under equal load conditions, established in the Mok-Dong IDC of Korea Telecom (KT). Experimental results show that the total operating efficiency of the 300V DC power feed system is around 15% higher than that of the 220V AC power feed system.
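
The cascaded-versus-single-stage argument in this abstract follows directly from multiplying per-stage conversion efficiencies. The sketch below makes that arithmetic explicit; the individual stage efficiencies are illustrative assumptions, not measurements from the paper.

```python
def chain_efficiency(stage_efficiencies):
    """Overall efficiency of a power feed chain is the product of its stages."""
    total = 1.0
    for eta in stage_efficiencies:
        total *= eta
    return total

# Conventional AC feed: AC-DC (UPS rectifier), DC-AC (inverter), AC-DC (server PSU).
ac_feed = chain_efficiency([0.92, 0.93, 0.90])   # illustrative stage values

# DC feed: a single AC-DC stage supplying the 300 V DC server bus.
dc_feed = chain_efficiency([0.94])               # illustrative stage value

gap_percent = (dc_feed - ac_feed) / ac_feed * 100
```

Even with fairly efficient individual stages, three cascaded conversions compound their losses, which is why a single-stage DC feed comes out ahead in the paper's measurements.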

Auto Regulated Data Provisioning Scheme with Adaptive Buffer Resilience Control on Federated Clouds

  • Kim, Byungsang
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 10, No. 11 / pp.5271-5289 / 2016
  • On large-scale data analysis platforms deployed on cloud infrastructures over the Internet, the instability of data transfer times and the dynamics of processing rates require a more sophisticated data distribution scheme, one that maximizes parallel efficiency by balancing the load among the participating computing elements and eliminating their idle time. In particular, under real-time constraints with a limited data buffer (in-memory storage), a more controllable mechanism is needed to prevent both overflow and underflow of the finite buffer. In this paper, we propose an auto-regulated data provisioning model based on a receiver-driven data pull model. On this model, we provide a synchronized data replenishment mechanism that implicitly avoids data buffer overflow and explicitly regulates data buffer underflow by adequately adjusting the buffer resilience. To estimate the optimal buffer resilience, we exploit an adaptive buffer resilience control scheme that minimizes both the data buffer space and the idle time of the processing elements, based on directly measured sample path analysis. The simulation results show that the proposed scheme agrees closely with the numerical results. It is also efficient enough to apply in dynamic environments where no stochastic characterization of the data transfer time or the data processing rate can be postulated, or where both fluctuate.
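
The receiver-driven replenishment idea, pulling just enough data to restore a resilience margin without ever overflowing the finite buffer, can be sketched as below. The watermark rule, class names, and numbers are illustrative assumptions, not the paper's control scheme.

```python
class PullBuffer:
    """Finite in-memory buffer whose consumer issues pull requests sized
    to restore a resilience margin (a reserve against transfer jitter)."""

    def __init__(self, capacity: int, resilience: int):
        self.capacity = capacity
        self.resilience = resilience
        self.items = 0

    def request_size(self) -> int:
        # Pull just enough to reach the resilience level; capping at
        # capacity implicitly rules out overflow.
        target = min(self.capacity, self.resilience)
        return max(0, target - self.items)

    def replenish(self, n: int) -> None:
        self.items = min(self.capacity, self.items + n)

    def consume(self, n: int) -> int:
        taken = min(self.items, n)
        self.items -= taken
        return taken

buf = PullBuffer(capacity=10, resilience=6)
buf.replenish(buf.request_size())   # initial fill up to the resilience level
```

The adaptive part of the paper's scheme would then tune `resilience` online from measured sample paths, raising it when transfer times fluctuate and lowering it to save buffer space.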

Secure and Efficient Privacy-Preserving Identity-Based Batch Public Auditing with Proxy Processing

  • Zhao, Jining; Xu, Chunxiang; Chen, Kefei
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 2 / pp.1043-1063 / 2019
  • By delegating a proxy to process data before outsourcing, data owners with restricted access can enjoy flexible and powerful cloud storage services, but they still face the risk of data integrity breaches. Identity-based data auditing is a critical technology that can address this security concern efficiently and eliminate the complicated management of owners' public key certificates. Recently, Yu et al. proposed an Identity-Based Public Auditing scheme for Dynamic Outsourced Data with Proxy Processing (https://doi.org/10.3837/tiis.2017.10.019). It aims to offer identity-based, privacy-preserving and batch auditing for multiple owners' data on different clouds, while allowing proxy processing. In this article, we first demonstrate that this scheme is insecure in the sense that a malicious cloud can pass integrity auditing without the original data. Additionally, clouds and owners are able to recover the proxy's private key and thus impersonate the proxy to forge tags for any data. Secondly, we propose an improved scheme, provably secure in the random oracle model, that achieves secure identity-based privacy-preserving batch public auditing with proxy processing. Thirdly, theoretical analysis and performance simulation show that our scheme is more efficient than the existing identity-based auditing scheme with proxy processing in the single-owner, single-cloud setting, which will benefit secure big data storage when extended to real applications.

Image Deduplication Based on Hashing and Clustering in Cloud Storage

  • Chen, Lu; Xiang, Feng; Sun, Zhixin
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 4 / pp.1448-1463 / 2021
  • With the continuous development of cloud storage, plenty of redundant data exists in cloud storage, especially multimedia data such as images and videos. Data deduplication is a data reduction technology that significantly reduces storage requirements and increases bandwidth efficiency. To ensure data security, users typically encrypt data before uploading it; however, there is a contradiction between data encryption and deduplication. Existing deduplication methods for regular files cannot be applied to images, because images must be matched by visual content. In this paper, we propose a secure image deduplication scheme based on hashing and clustering, built around a novel perceptual hash algorithm based on Local Binary Patterns. In this scheme, the hash value of an image serves as its fingerprint for deduplication, and the image is transmitted in encrypted form. Images are clustered to reduce the time complexity of deduplication. The proposed scheme ensures image security and improves deduplication accuracy. Comparison with other image deduplication schemes demonstrates that our scheme performs somewhat better.
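
The fingerprint-and-threshold workflow can be sketched with a simple average hash standing in for the paper's LBP-based perceptual hash; the tiny images and the Hamming-distance threshold below are illustrative assumptions.

```python
def average_hash(pixels):
    """pixels: 2-D list of grayscale values. Each pixel contributes one
    fingerprint bit: 1 if at or above the image mean, else 0."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if v >= mean else '0' for v in flat)

def hamming(h1, h2):
    # Number of differing fingerprint bits.
    return sum(a != b for a, b in zip(h1, h2))

def is_duplicate(h1, h2, threshold=2):
    # Near-identical fingerprints are treated as duplicate images.
    return hamming(h1, h2) <= threshold

img_a = [[10, 200], [12, 198]]
img_b = [[11, 201], [12, 199]]   # visually near-identical to img_a
img_c = [[200, 10], [198, 12]]   # same pixels, different arrangement
h_a, h_b, h_c = (average_hash(i) for i in (img_a, img_b, img_c))
```

Because the fingerprint depends only on visual content, deduplication can run over these short hashes while the image bytes themselves travel and rest in encrypted form, which is exactly the separation the scheme relies on.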

Scalable Blockchain Storage Model Based on DHT and IPFS

  • Chen, Lu; Zhang, Xin; Sun, Zhixin
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 7 / pp.2286-2304 / 2022
  • Blockchain is a distributed ledger that combines technologies such as cryptography, consensus mechanisms, peer-to-peer transmission, and timestamping. The rapid development of blockchain has attracted attention from all walks of life, but storage scalability issues have hindered its application. In this paper, a scalable blockchain storage model based on a Distributed Hash Table (DHT) and the InterPlanetary File System (IPFS) is proposed. The paper introduces the current state of research on scalable blockchain storage models, as well as the basic principles of DHT and IPFS. The model's construction and workflow are explained in detail, along with its DHT network construction mechanism, block heat identification mechanism, new node initialization mechanism, and block data read/write mechanism. Experimental results show that this model reduces the storage burden on nodes while allowing the blockchain network to accommodate more local blocks at the same block height.
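
The core DHT idea, mapping each block's hash to the node responsible for storing it, can be sketched with a Chord-style successor rule. The key space, node IDs, and placement rule below are illustrative assumptions, not the paper's exact design.

```python
import hashlib

def key_for(block_id: str, space: int = 2 ** 16) -> int:
    # Hash the block identifier onto a fixed-size circular key space.
    digest = hashlib.sha256(block_id.encode()).digest()
    return int.from_bytes(digest[:2], 'big') % space

def successor(key: int, node_ids):
    """Chord-style rule: a key is stored on the first node whose ID is at
    or after the key on the ring, wrapping around to the smallest node."""
    candidates = [n for n in sorted(node_ids) if n >= key]
    return candidates[0] if candidates else min(node_ids)

nodes = [1000, 20000, 45000, 60000]
placement = {b: successor(key_for(b), nodes)
             for b in ('block-1', 'block-2', 'block-3')}
```

Under such a rule each node stores only the arc of blocks it is responsible for rather than the full chain, which is how the model reduces per-node storage while the network as a whole retains every block.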

Special Quantum Steganalysis Algorithm for Quantum Secure Communications Based on Quantum Discriminator

  • Xinzhu Liu; Zhiguo Qu; Xiubo Chen; Xiaojun Wang
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 17, No. 6 / pp.1674-1688 / 2023
  • The remarkable advancement of quantum steganography offers enhanced security for quantum communications. However, there is a significant concern regarding the potential misuse of this technology. Moreover, the current research on identifying malicious quantum steganography is insufficient. To address this gap in steganalysis research, this paper proposes a specialized quantum steganalysis algorithm. This algorithm utilizes quantum machine learning techniques to detect steganography in general quantum secure communication schemes that are based on pure states. The algorithm presented in this paper consists of two main steps: data preprocessing and automatic discrimination. The data preprocessing step involves extracting and amplifying abnormal signals, followed by the automatic detection of suspicious quantum carriers through training on steganographic and non-steganographic data. The numerical results demonstrate that a larger disparity between the probability distributions of steganographic and non-steganographic data leads to a higher steganographic detection indicator, making the presence of steganography easier to detect. By selecting an appropriate threshold value, the steganography detection rate can exceed 90%.