• Title/Summary/Keyword: Memory traffic

Search Results: 191

The Least-Dirty-First CLOCK Replacement Policy for Phase-Change Memory based Swap Devices (PCM 기반 스왑 장치를 위한 클럭 기반 최소 쓰기 우선 교체 정책)

  • Yoo, Seunghoon; Lee, Eunji; Bahn, Hyokyung
    • Journal of KIISE / v.42 no.9 / pp.1071-1077 / 2015
  • In this paper, we adopt PCM (phase-change memory) as a virtual memory swap device and present a new page replacement policy that considers the characteristics of PCM. Specifically, we aim to reduce the write traffic to PCM by considering the dirtiness of pages when making a replacement decision. The proposed policy tracks the dirtiness of a page at the granularity of a sub-page and replaces the least dirty page among the pages not recently used. Experimental results show that the proposed policy reduces the amount of data written to PCM by 22.9% on average and up to 73.7% compared to CLOCK. It also extends the lifespan of PCM by 49.0% and reduces the energy consumption of PCM by 3.0% on average.
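
The replacement rule described in this abstract (among pages whose reference bit is clear, evict the one with the fewest dirty sub-pages) can be illustrated with a short sketch. The class names, the sub-page count, and the scan policy below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a Least-Dirty-First CLOCK policy (illustrative only).
# Assumptions: 8 sub-pages per page, reference bits cleared as the clock hand
# sweeps, victim = least-dirty page among pages with reference bit 0.

class Page:
    def __init__(self, pid, subpages=8):
        self.pid = pid
        self.referenced = True
        self.dirty_subpages = [False] * subpages

    def dirtiness(self):
        return sum(self.dirty_subpages)

class LeastDirtyFirstClock:
    def __init__(self, capacity):
        self.capacity = capacity
        self.frames = []          # resident pages, scanned circularly
        self.hand = 0

    def _pick_victim(self):
        candidates = []
        for _ in range(len(self.frames)):
            page = self.frames[self.hand]
            if page.referenced:
                page.referenced = False          # second chance
            else:
                candidates.append(self.hand)     # not recently used
            self.hand = (self.hand + 1) % len(self.frames)
        if not candidates:                       # everything was recently used
            candidates = range(len(self.frames))
        # Least-dirty-first: evict the candidate with the fewest dirty sub-pages.
        return min(candidates, key=lambda i: self.frames[i].dirtiness())

    def access(self, pid, write=False, subpage=0):
        for page in self.frames:
            if page.pid == pid:                  # hit
                page.referenced = True
                if write:
                    page.dirty_subpages[subpage] = True
                return
        new_page = Page(pid)                     # miss: load, evicting if full
        if write:
            new_page.dirty_subpages[subpage] = True
        if len(self.frames) < self.capacity:
            self.frames.append(new_page)
        else:
            self.frames[self._pick_victim()] = new_page
```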

Performance Comparison of Synchronization Methods for CC-NUMA Systems (CC-NUMA 시스템에서의 동기화 기법에 대한 성능 비교)

  • Moon, Eui-Sun; Jhang, Seong-Tae; Jhon, Chu-Shik
    • Journal of KIISE: Computer Systems and Theory / v.27 no.4 / pp.394-400 / 2000
  • The main goal of synchronization is to guarantee exclusive access to shared data and critical sections, which makes parallel programs work correctly and reliably. Because exclusive access restricts parallelism, efficient synchronization is essential to achieving high performance in shared-memory parallel programs. Many techniques that exploit features of systems and applications have been devised for efficient synchronization. This paper presents simulation results showing that existing synchronization methods are inefficient on CC-NUMA (Cache Coherent Non-Uniform Memory Access) systems, and then compares them with Freeze&Melt synchronization, which can remove these inefficiencies. The simulation results show that Test-and-Test&Set synchronization suffers from inefficiency caused by broadcast operations, and that the pre-defined order in which Queue-On-Lock-Bit (QOLB) synchronization executes critical sections also causes inefficiency. Freeze&Melt synchronization, which removes both inefficiencies, achieves a performance gain by decreasing both the waiting time to enter a critical section and the execution time of the critical section, and by reducing traffic between clusters.
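
For readers unfamiliar with the baseline in this comparison, the sketch below shows the Test-and-Test&Set idiom: spin on a plain read of the lock word (served from the local cache) and attempt the atomic test&set only when the lock appears free; the release then invalidates every spinner's cached copy, which is the broadcast cost studied above. The atomic primitive is emulated here with a Python lock purely for illustration.

```python
import threading

class TestAndTestAndSetLock:
    """Illustrative Test-and-Test&Set spin lock.

    The hardware atomic test&set is emulated with an internal threading.Lock;
    on a real machine it would be a single atomic read-modify-write instruction.
    """

    def __init__(self):
        self.locked = False
        self._atomic = threading.Lock()    # stands in for the atomic primitive

    def _test_and_set(self):
        with self._atomic:
            old = self.locked
            self.locked = True
            return old                     # True means the lock was already held

    def acquire(self):
        while True:
            # "Test" phase: spin on an ordinary read; on real hardware this hits
            # the local cache and generates no interconnect traffic.
            while self.locked:
                pass
            # "Test&Set" phase: only now try the (emulated) atomic operation.
            if not self._test_and_set():
                return

    def release(self):
        # On a real CC-NUMA machine this write invalidates every spinning
        # reader's cached copy, producing the broadcast traffic measured above.
        self.locked = False
```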


Web Monitoring based Encryption Web Traffic Attack Detection System (웹 모니터링 기반 암호화 웹트래픽 공격 탐지 시스템)

  • Lee, Seokwoo; Park, Soonmo; Jung, Hoekyung
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.3 / pp.449-455 / 2021
  • This paper proposes an encrypted web traffic attack detection system based on an existing web application monitoring system. Existing web traffic security systems have difficulty detecting attacks on encrypted web traffic because they inspect encrypted packets in the network segment between the client and the server. By leveraging the web application monitoring system, however, various intelligent cyber-attacks can be detected from information that has already been decrypted in the memory of the web application server. In addition, since users can be identified through the application session ID, attacks such as IP tampering, mass web-transaction calls, and DDoS attacks can also be detected statistically. Thus, various intelligent cyber-attacks hidden in encrypted traffic can be countered by collecting and inspecting information in the unencrypted section of the encrypted web traffic path.
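
One of the statistical checks mentioned above (flagging sessions that issue an abnormal volume of web transactions) can be sketched as a sliding-window counter keyed by the application session ID. The window size and threshold below are arbitrary assumptions, not values from the paper.

```python
import time
from collections import defaultdict, deque

# Illustrative sliding-window rate check keyed by application session ID.
# WINDOW and LIMIT are placeholder values, not figures from the paper.
WINDOW = 60.0     # seconds
LIMIT = 500       # max transactions per session within the window

_events = defaultdict(deque)   # session_id -> timestamps of recent requests

def record_and_check(session_id, now=None):
    """Record one web transaction and return True if the session looks abusive."""
    now = time.time() if now is None else now
    q = _events[session_id]
    q.append(now)
    while q and now - q[0] > WINDOW:   # drop events outside the window
        q.popleft()
    return len(q) > LIMIT
```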

Network Anomaly Traffic Detection Using WGAN-CNN-BiLSTM in Big Data Cloud-Edge Collaborative Computing Environment

  • Yue Wang
    • Journal of Information Processing Systems / v.20 no.3 / pp.375-390 / 2024
  • Edge computing architecture has effectively alleviated the computing pressure on cloud platforms, reduced network bandwidth consumption, and improved the quality of service for user experience; however, it has also introduced new security issues. Existing anomaly detection methods in big data scenarios with cloud-edge computing collaboration face several challenges, such as sample imbalance, difficulty in dealing with complex network traffic attacks, and difficulty in effectively training large-scale data or overly complex deep-learning network models. A lightweight deep-learning model is proposed to address these challenges. First, normalization on the user side is used to preprocess the traffic data. On the edge side, a trained Wasserstein generative adversarial network (WGAN) is used to supplement the data samples, which effectively alleviates the imbalance among minority sample classes while occupying only a small amount of edge-computing resources. Finally, a trained lightweight deep-learning network model is deployed on the edge side, and the preprocessed and expanded local data are used to fine-tune the trained model. This ensures that the data of each edge node are more consistent with the local characteristics, effectively improving the system's detection ability. In the designed lightweight deep-learning network model, two sets of convolutional and pooling layers of a convolutional neural network (CNN) are used to extract spatial features. A bidirectional long short-term memory network (BiLSTM) is used to capture temporal features, and the weights of traffic features are adjusted through an attention mechanism, improving the model's ability to identify abnormal traffic. The proposed model was experimentally evaluated on the NSL-KDD, UNSW-NB15, and CIC-IDS2018 datasets, achieving accuracies of 0.974, 0.925, and 0.953, respectively, which is superior to the compared models. The proposed lightweight deep-learning network model has good application prospects for anomaly traffic detection in cloud-edge collaborative computing architectures.
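
A compact PyTorch sketch of the detection model's overall shape (two conv-pool blocks, a bidirectional LSTM, and a soft attention layer over time steps) is given below; the layer sizes and the feature/sequence dimensions are illustrative guesses, not the paper's hyperparameters.

```python
import torch
import torch.nn as nn

class CNNBiLSTMAttention(nn.Module):
    """Illustrative CNN + BiLSTM + attention classifier for flow features.

    Input shape: (batch, seq_len, n_features); all sizes are assumptions.
    """

    def __init__(self, n_features=41, n_classes=2, hidden=64):
        super().__init__()
        # Two conv-pool blocks extract spatial (per-time-step) features.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_features, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # BiLSTM collects temporal dependencies across the pooled sequence.
        self.bilstm = nn.LSTM(128, hidden, batch_first=True, bidirectional=True)
        # Soft attention re-weights time steps before classification.
        self.attn = nn.Linear(2 * hidden, 1)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                      # x: (B, T, F)
        h = self.cnn(x.transpose(1, 2))        # (B, 128, T//4)
        h, _ = self.bilstm(h.transpose(1, 2))  # (B, T//4, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1) # attention weights over time
        ctx = (w * h).sum(dim=1)               # weighted sum over time steps
        return self.fc(ctx)

# Example: a batch of 8 flows, 32 time steps, 41 features per step.
logits = CNNBiLSTMAttention()(torch.randn(8, 32, 41))
```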

Towards Carbon-Neutralization: Deep Learning-Based Server Management Method for Efficient Energy Operation in Data Centers (탄소중립을 향하여: 데이터 센터에서의 효율적인 에너지 운영을 위한 딥러닝 기반 서버 관리 방안)

  • Sang-Gyun Ma; Jaehyun Park; Yeong-Seok Seo
    • KIPS Transactions on Software and Data Engineering / v.12 no.4 / pp.149-158 / 2023
  • As data utilization becomes more important, the importance of data centers is also increasing. However, a data center is problematic in environmental and economic terms because it is a massive power-consuming facility that runs 24 hours a day. Recently, studies that use deep-learning techniques to reduce the power consumed by data centers or servers, or to predict their traffic, have been conducted from various perspectives. However, the traffic processed by a server fluctuates irregularly, which makes the server difficult to manage, and many studies on dynamic server management techniques are still required. Therefore, in this paper, we propose a dynamic server management technique based on Long Short-Term Memory (LSTM), which is robust for time-series prediction. The proposed model allows servers to be managed more reliably and efficiently in field environments than before and reduces the power used by servers more effectively. To verify the proposed model, we collect transmission and reception traffic data from six of Wikipedia's data centers, analyze the relationships among the traffic series with statistics-based analysis, and then conduct experiments. Experimental results show that the proposed model is helpful for running servers reliably and efficiently.
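
As an illustration of the kind of LSTM forecaster described above, the sketch below turns a univariate traffic series into sliding windows and fits a small LSTM regressor. The window length, layer sizes, and training loop are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

def make_windows(series, window=24):
    """Turn a 1-D traffic series into (window -> next value) training pairs."""
    xs, ys = [], []
    for i in range(len(series) - window):
        xs.append(series[i:i + window])
        ys.append(series[i + window])
    x = torch.tensor(xs, dtype=torch.float32).unsqueeze(-1)  # (N, window, 1)
    y = torch.tensor(ys, dtype=torch.float32).unsqueeze(-1)  # (N, 1)
    return x, y

class TrafficLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # predict the next traffic value

# Toy usage on synthetic data; a real run would use the data-center traces.
series = torch.sin(torch.linspace(0, 20, 500)).tolist()
x, y = make_windows(series)
model = TrafficLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(50):                        # brief training loop for illustration
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```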

Cross-Layer Architecture for QoS Provisioning in Wireless Multimedia Sensor Networks

  • Farooq, Muhammad Omer; St-Hilaire, Marc; Kunz, Thomas
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.1 / pp.178-202 / 2012
  • In this paper, we first survey cross-layer architectures for Wireless Sensor Networks (WSNs) and Wireless Multimedia Sensor Networks (WMSNs). We then propose a novel cross-layer architecture for QoS provisioning in clustered, multi-hop WMSNs. The proposed architecture supports multiple network-based applications on a single sensor node: an area in memory is reserved where each application can store its network protocol settings. Furthermore, the architecture supports heterogeneous flows by classifying WMSN traffic into six traffic classes, and it incorporates a service differentiation module for QoS provisioning that defines the forwarding behavior for each traffic class. The forwarding behavior is primarily determined by the priority of the traffic class; in addition, the service differentiation module allocates bandwidth to each traffic class with the goals of maximizing network utilization and avoiding starvation of low-priority flows. The proposal also incorporates a congestion detection and control algorithm. Upon detecting congestion, the congested node estimates the data rate that should be used by the node itself and by its one-hop upstream nodes; while estimating this rate, it considers the characteristics of the different traffic classes along with their total bandwidth usage. The architecture uses a shared database to enable cross-layer interactions, and an application's network protocol settings and its interaction with the shared database are handled through a cross-layer optimization middleware.
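
The bandwidth-sharing goal described above (priority-weighted allocation that still avoids starving low-priority classes) can be sketched as follows. The six class names, the weights, and the minimum-share floor are illustrative assumptions, not the paper's actual algorithm.

```python
# Illustrative priority-weighted bandwidth split over six WMSN traffic classes.
# Class names, weights, and the 5% starvation floor are assumptions.

CLASSES = {              # class -> priority weight (higher = more important)
    "real_time_video":   6,
    "real_time_audio":   5,
    "alarm_events":      4,
    "snapshot_images":   3,
    "periodic_sensing":  2,
    "best_effort":       1,
}

def allocate_bandwidth(total_kbps, floor_fraction=0.05):
    """Split total bandwidth by priority while guaranteeing each class a floor."""
    floor = total_kbps * floor_fraction
    remaining = total_kbps - floor * len(CLASSES)   # left over after the floors
    weight_sum = sum(CLASSES.values())
    return {
        name: floor + remaining * weight / weight_sum
        for name, weight in CLASSES.items()
    }

# Example: splitting a 1 Mbps link among the six classes.
print(allocate_bandwidth(1000))
```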

Enhancing GPU Performance by Efficient Hardware-Based and Hybrid L1 Data Cache Bypassing

  • Huangfu, Yijie; Zhang, Wei
    • Journal of Computing Science and Engineering / v.11 no.2 / pp.69-77 / 2017
  • Recent GPUs have adopted cache memory to benefit general-purpose GPU (GPGPU) programs. However, unlike CPU programs, GPGPU programs typically exhibit considerably less temporal/spatial locality. Moreover, the L1 data cache is shared by many threads whose combined working set is typically much larger than the L1 cache, making it critical to bypass the L1 data cache intelligently to enhance GPU cache performance. In this paper, we examine GPU cache access behavior and propose a simple hardware-based GPU cache bypassing method that can be applied to GPU applications without recompiling them. Moreover, we introduce a hybrid method that integrates static profiling information with hardware-based bypassing to further enhance performance. Our experimental results reveal that hardware-based cache bypassing boosts performance for most benchmarks, and that the hybrid method achieves performance comparable to state-of-the-art compiler-based bypassing with considerably less profiling cost.
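
To make the idea of hardware-based bypassing concrete, the sketch below models one plausible heuristic: per-load-instruction reuse counters that divert low-reuse (streaming) accesses around the L1. This is a generic illustration of the concept for a cache simulator, not the mechanism proposed in the paper.

```python
from collections import defaultdict

# Toy model of a reuse-based L1 bypass decision, keyed by load-instruction PC.
# Thresholds and the counter scheme are illustrative, not the paper's design.

class BypassPredictor:
    def __init__(self, min_samples=32, reuse_threshold=0.1):
        self.hits = defaultdict(int)
        self.accesses = defaultdict(int)
        self.min_samples = min_samples
        self.reuse_threshold = reuse_threshold

    def observe(self, pc, was_hit):
        """Record one L1 access outcome for the load instruction at `pc`."""
        self.accesses[pc] += 1
        if was_hit:
            self.hits[pc] += 1

    def should_bypass(self, pc):
        n = self.accesses[pc]
        if n < self.min_samples:               # not enough history yet: cache it
            return False
        reuse = self.hits[pc] / n
        return reuse < self.reuse_threshold    # streaming loads skip the L1
```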

Call admission control for ATM networks using a sparse distributed memory (ATM 망에서 축약 분산 기억 장치를 사용한 호 수락 제어)

  • 권희용; 송승준; 최재우; 황희영
    • Journal of the Korean Institute of Telematics and Electronics S / v.35S no.3 / pp.1-8 / 1998
  • In this paper, we propose a neural call admission control (CAC) method using a Sparse Distributed Memory (SDM). CAC is a key technology of ATM network traffic control and should adapt to the rapid and varied changes of the ATM network environment. The conventional approach to ATM CAC requires network analysis for every case, so an optimal implementation is considered very difficult. Neural approaches have therefore been employed recently, but they do not meet the adaptability requirement because they require additional learning-data tables and a learning phase during CAC operation. We propose a neural-network CAC method based on SDM that is more practical to apply to CAC than the conventional approach, and we compare it with a previous neural-network CAC method. It provides CAC with good adaptability to changing conditions. Experimental results show that it adapts rapidly and remains stable without an additional learning table or learning phase.
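
For context, a Kanerva-style sparse distributed memory writes each pattern into every hard location whose address lies within a Hamming radius of the write address, and reads by summing and thresholding those locations' counters. A minimal NumPy sketch follows; the dimensions and radius are arbitrary choices, not the configuration used in the paper.

```python
import numpy as np

# Minimal Kanerva-style sparse distributed memory (illustrative sizes/radius).
class SparseDistributedMemory:
    def __init__(self, n_locations=1000, dim=256, radius=111, seed=0):
        rng = np.random.default_rng(seed)
        self.addresses = rng.integers(0, 2, size=(n_locations, dim))  # hard locations
        self.counters = np.zeros((n_locations, dim), dtype=np.int32)
        self.radius = radius

    def _active(self, address):
        # Locations whose hard address is within Hamming distance `radius`.
        dist = np.count_nonzero(self.addresses != address, axis=1)
        return dist <= self.radius

    def write(self, address, data):
        # Add +1 for a data bit of 1 and -1 for a 0 at every active location.
        self.counters[self._active(address)] += np.where(data == 1, 1, -1)

    def read(self, address):
        # Sum counters over active locations and threshold at zero.
        sums = self.counters[self._active(address)].sum(axis=0)
        return (sums > 0).astype(int)

# Toy usage: store a random pattern and recall it autoassociatively.
rng = np.random.default_rng(1)
sdm = SparseDistributedMemory()
pattern = rng.integers(0, 2, 256)
sdm.write(pattern, pattern)
recalled = sdm.read(pattern)
```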


Design of Chatting Architecture that Handle Large-Scale Traffic (대용량 트래픽 처리를 위한 채팅 구조 설계)

  • Hong, Seong-Mun; Lee, Yoon-jae; Ko, Se-Young; Jung, Seung-Woo
    • Proceedings of the Korea Information Processing Society Conference / 2019.10a / pp.13-16 / 2019
  • Web service traffic fluctuates widely. To cope with traffic that changes in real time, a service must provision its servers for the expected peak load. However, because peak traffic differs greatly from average traffic, such a server configuration leads to a large waste of resources. To handle traffic that changes in real time, we design the system around a distributed architecture together with an in-memory cache and a messaging queue. We also design it to store and retrieve messages efficiently using the in-memory cache and NoSQL.
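
The flow described above can be illustrated with a small sketch: chat messages pass through a queue (standing in for the messaging layer), the most recent messages per room are kept in an in-memory cache, and the full history goes to a store that stands in for NoSQL. All components below are illustrative stand-ins, not the authors' stack.

```python
import queue
from collections import defaultdict, deque

# Toy chat pipeline: a queue decouples producers from the consumer, an
# in-memory cache keeps the latest messages per room, and `history` stands
# in for the NoSQL store. Everything here is an illustrative stand-in.

inbox = queue.Queue()                              # messaging queue
recent = defaultdict(lambda: deque(maxlen=100))    # in-memory cache per room
history = defaultdict(list)                        # stand-in for the NoSQL store

def send(room, user, text):
    inbox.put({"room": room, "user": user, "text": text})

def drain():
    """Consume queued messages, updating the cache and the persistent store."""
    while not inbox.empty():
        msg = inbox.get()
        recent[msg["room"]].append(msg)            # fast path for recent reads
        history[msg["room"]].append(msg)           # durable history for search

send("lobby", "alice", "hello")
send("lobby", "bob", "hi")
drain()
print(list(recent["lobby"]))
```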

IP Lookup Table Design Using LC-Trie with Memory Constraint (메모리 제약을 가진 LC-Trie를 이용한 IP 참조 테이블 디자인)

  • Lee, Chae-Y.; Park, Jae-G.
    • Journal of Korean Institute of Industrial Engineers / v.27 no.4 / pp.406-412 / 2001
  • IP address lookup determines the next-hop destination of an incoming packet in the router. Address lookup is a major bottleneck in high-performance routers due to increased routing table sizes, increased traffic, higher-speed links, and the migration to 128-bit IPv6 addresses. IP lookup time depends on the data structure of the lookup table and on the search scheme. In this paper, we propose a new approach to building a lookup table that satisfies a memory constraint. The design of the lookup table is formulated as an optimization problem whose objective is to minimize the average depth from the root node during lookup. We assume that the frequencies with which prefixes are accessed are known and that the data structure is a level-compressed trie with branching factor $k$ at the root and binary branching at all other nodes. Thus, the problem is to determine the branching factor $k$ at the root node such that the average depth is minimized. A heuristic procedure is proposed to solve the problem. Experimental results show that the lookup table based on the proposed heuristic has better average and worst-case lookup depths.
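
A toy version of the optimization described above is sketched below: for each candidate root branching factor k (a power of two), a prefix is assumed to cost one step at the root plus one binary step per bit remaining beyond the root stride, and the k with the smallest frequency-weighted average depth that still fits the node budget is chosen. The depth and memory models are deliberate simplifications, not the paper's formulation.

```python
# Toy heuristic for choosing the root branching factor k of an LC-trie.
# Simplified cost model: a prefix of length L reached through a root node that
# consumes b = log2(k) bits costs 1 + max(L - b, 0) steps; memory is roughly
# the 2**b root entries plus one node per remaining prefix bit. Both models
# are illustrative simplifications.

def choose_root_branching(prefixes, memory_limit):
    """prefixes: list of (prefix_length, access_frequency) pairs."""
    total_freq = sum(f for _, f in prefixes)
    best_k, best_depth = None, float("inf")
    for b in range(1, 17):                      # candidate root strides in bits
        k = 2 ** b
        nodes = k + sum(max(length - b, 0) for length, _ in prefixes)
        if nodes > memory_limit:                # memory constraint violated
            continue
        avg_depth = sum(
            f * (1 + max(length - b, 0)) for length, f in prefixes
        ) / total_freq
        if avg_depth < best_depth:
            best_k, best_depth = k, avg_depth
    return best_k, best_depth

# Example with a few (length, frequency) pairs and an arbitrary node budget.
print(choose_root_branching([(8, 50), (16, 30), (24, 20)], memory_limit=70000))
```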
