• Title/Summary/Keyword: Memory traffic

A Working-set Sensitive Page Replacement Policy for PCM-based Swap Systems

  • Park, Yunjoo; Bahn, Hyokyung
    • JSTS: Journal of Semiconductor Technology and Science, v.17 no.1, pp.7-14, 2017
  • Due to recent advances in Phase-Change Memory (PCM) technologies, a new memory hierarchy for computer systems that incorporates PCM is expected to appear. In this paper, we present a new page replacement policy that adopts PCM as a high-speed swap device. As PCM has limited write endurance, our goal is to minimize the amount of data written to PCM. To do so, we defer the eviction of dirty pages in proportion to their dirtiness. However, preserving dirty pages in memory too aggressively may worsen the page fault rate, especially when memory capacity is not large enough to accommodate the full working set. Thus, our policy monitors the current working-set size of the system and controls the deferral level of dirty pages so as not to degrade system performance. Simulation experiments show that the proposed policy reduces the write traffic to PCM by 160% without performance degradation.
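
A minimal sketch of the deferral idea described above (not the authors' implementation): the WorkingSetAwarePCMCache class, its max_defer parameter, and the linear working-set pressure scaling are assumptions made for illustration.

```python
from collections import OrderedDict

class WorkingSetAwarePCMCache:
    """Dirtiness-aware eviction: dirty pages are deferred in proportion to their
    write count, and the deferral budget shrinks as the working set nears capacity."""

    def __init__(self, capacity, max_defer=4):
        self.capacity = capacity      # number of page frames
        self.max_defer = max_defer    # upper bound on eviction deferrals per page
        self.pages = OrderedDict()    # page -> {'dirty_writes': int, 'defers': int}

    def _defer_limit(self, working_set_size):
        # Shrink the deferral budget as the working set approaches memory capacity,
        # so that keeping dirty pages longer does not inflate the page fault rate.
        pressure = min(working_set_size / self.capacity, 1.0)
        return int(self.max_defer * (1.0 - pressure))

    def access(self, page, is_write, working_set_size):
        if page in self.pages:                       # hit: refresh recency
            self.pages.move_to_end(page)
            if is_write:
                self.pages[page]['dirty_writes'] += 1
            return
        while len(self.pages) >= self.capacity:      # miss: make room
            victim, meta = next(iter(self.pages.items()))
            allowed = min(meta['dirty_writes'], self._defer_limit(working_set_size))
            if meta['dirty_writes'] > 0 and meta['defers'] < allowed:
                meta['defers'] += 1                  # defer eviction of a dirty page
                self.pages.move_to_end(victim)
            else:
                self.pages.popitem(last=False)       # evict (write back to PCM if dirty)
        self.pages[page] = {'dirty_writes': 1 if is_write else 0, 'defers': 0}
```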

Intelligent Traffic Light using Fuzzy Neural Network

  • Park, Myeong-Bok; You-Sik, Hong
    • International Journal of Fuzzy Logic and Intelligent Systems, v.3 no.1, pp.66-71, 2003
  • In the past, when there were few vehicles on the road, the T.O.D. (Time of Day) traffic signal worked very well. The T.O.D. signal operates on preset cycle times, stored in the memory of the electric signal unit, that are based on the average number of passenger cars. Today, with increasing traffic and congested roads, the conventional traffic light introduces start-up delay and end-lag time, so that thirty to forty-five percent of traffic-handling efficiency is lost and fuel costs increase. To solve this problem, this paper proposes a new optimal green-time algorithm that reduces average vehicle waiting time while improving average vehicle speed using fuzzy rules and neural networks. Through computer simulation, this method has been shown to be much more efficient than fixed-interval signals. The fuzzy neural network consistently improves average waiting time, vehicle speed, and fuel consumption.
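
A minimal sketch of how fuzzy rules can turn queue length and arrival rate into a green-time adjustment, in the spirit of the optimal green-time algorithm above; the membership functions, rule base, and optimal_green_time signature are illustrative assumptions, and the neural-network component is omitted.

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def optimal_green_time(queue_len, arrivals_per_min, base_green=30.0):
    # Fuzzify inputs (assumed universes: queue 0-40 vehicles, arrivals 0-60 per minute).
    q_low = tri(queue_len, -1, 0, 15)
    q_med = tri(queue_len, 5, 20, 35)
    q_high = tri(queue_len, 25, 40, 41)
    a_low = tri(arrivals_per_min, -1, 0, 30)
    a_high = tri(arrivals_per_min, 20, 60, 61)
    # Rule base: each rule pairs a firing strength with a green-time adjustment in seconds.
    rules = [
        (min(q_low, a_low),  -10.0),   # light traffic: shorten the green phase
        (q_med,                0.0),   # moderate queue: keep the base green time
        (min(q_high, a_low),  +10.0),  # long queue, few arrivals: extend modestly
        (min(q_high, a_high), +20.0),  # long queue, heavy arrivals: extend strongly
    ]
    total = sum(strength for strength, _ in rules)
    if total == 0:
        return base_green
    # Weighted-average (Sugeno-style) defuzzification of the rule outputs.
    return base_green + sum(s * adj for s, adj in rules) / total

# Congested approach: the green phase is extended beyond the 30 s base.
print(optimal_green_time(queue_len=32, arrivals_per_min=45))
```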

Deep reinforcement learning for base station switching scheme with federated LSTM-based traffic predictions

  • Hyebin Park; Seung Hyun Yoon
    • ETRI Journal, v.46 no.3, pp.379-391, 2024
  • To meet increasing traffic requirements in mobile networks, small base stations (SBSs) are densely deployed, overlaying the existing network architecture and increasing system capacity. However, densely deployed SBSs increase energy consumption and interference. Although these problems already exist because of densely deployed SBSs, even more SBSs are needed to meet increasing traffic demands. Hence, base station (BS) switching operations have been used to minimize energy consumption while guaranteeing quality of service (QoS) for users. In this study, to optimize energy efficiency, we propose the use of deep reinforcement learning (DRL) to create a BS switching operation strategy with a traffic prediction model. First, a federated long short-term memory (LSTM) model is introduced to predict user traffic demands from user trajectory information. Next, the DRL-based BS switching operation scheme determines the switching operations for the SBSs using the predicted traffic demand. Experimental results confirm that the proposed scheme outperforms existing approaches in terms of energy efficiency, signal-to-interference-plus-noise ratio, handover metrics, and prediction performance.
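
A minimal sketch of the prediction-then-switching pipeline described above (not the authors' model): the TrafficLSTM module, its layer sizes, and the threshold rule in switching_actions, which stands in for the trained DRL policy, are assumptions.

```python
import torch
import torch.nn as nn

class TrafficLSTM(nn.Module):
    """Predicts next-interval traffic demand per SBS from recent demand history."""

    def __init__(self, n_sbs, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_sbs, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_sbs)   # next-step demand for each SBS

    def forward(self, history):                # history: (batch, time, n_sbs)
        out, _ = self.lstm(history)
        return self.head(out[:, -1])           # predicted demand at t+1

def switching_actions(predicted_demand, off_threshold=0.1):
    # Stand-in for the DRL policy: switch off SBSs whose predicted load is
    # negligible; a trained agent would also trade off QoS and handover cost.
    return (predicted_demand > off_threshold).int()   # 1 = keep on, 0 = switch off

model = TrafficLSTM(n_sbs=8)
history = torch.rand(1, 12, 8)                 # last 12 intervals of per-SBS load
print(switching_actions(model(history)))
```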

Policy for Selective Flushing of Smartphone Buffer Cache using Persistent Memory (영속 메모리를 이용한 스마트폰 버퍼 캐시의 선별적 플러시 정책)

  • Lim, Soojung; Bahn, Hyokyung
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.22 no.1, pp.71-76, 2022
  • The buffer cache bridges the performance gap between memory and storage, but its effectiveness is limited by the periodic flushes performed to prevent data loss in smartphones. This paper shows that a selective flushing technique with a small persistent memory can significantly reduce the flushing overhead of the smartphone buffer cache. This follows from our I/O analysis of smartphone applications, which shows that a small set of hot data accounts for most file writes, while a large proportion of file data is written only once. The proposed selective flushing policy flushes frequently updated data to persistent memory and flushes only single-write data to storage. This reduces storage write traffic and also improves the space efficiency of persistent memory. Simulations with popular smartphone application I/O traces show that the proposed policy reduces write traffic to storage by 24.8% on average and up to 37.8%.
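
A minimal sketch of the selective flushing idea, assuming a simple write-count heuristic for hotness; the SelectiveFlusher class, its hot_threshold parameter, and the pm_write/storage_write callbacks are illustrative, not the paper's kernel-level implementation.

```python
class SelectiveFlusher:
    """Routes dirty blocks to persistent memory or storage by update frequency."""

    def __init__(self, hot_threshold=2):
        self.write_counts = {}                 # block id -> number of writes seen
        self.hot_threshold = hot_threshold

    def record_write(self, block):
        self.write_counts[block] = self.write_counts.get(block, 0) + 1

    def flush(self, dirty_blocks, pm_write, storage_write):
        for block in dirty_blocks:
            if self.write_counts.get(block, 0) >= self.hot_threshold:
                pm_write(block)                # hot data: absorb repeated writes in PM
            else:
                storage_write(block)           # single-write data: send straight to storage

flusher = SelectiveFlusher()
for b in ["journal", "journal", "photo"]:      # "journal" is updated repeatedly
    flusher.record_write(b)
flusher.flush(["journal", "photo"],
              pm_write=lambda b: print("PM   <-", b),
              storage_write=lambda b: print("disk <-", b))
```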

A Study for Improving Performance of ATM Multicast Switch (ATM 멀티캐스트 스위치의 성능 향상을 위한 연구)

  • 이일영; 조양현; 오영환
    • The Journal of Korean Institute of Communications and Information Sciences, v.24 no.12A, pp.1922-1931, 1999
  • Multicast traffic requires point-to-multipoint cell transmission, which is emerging as a main function of ATM switches. However, when a conventional point-to-point switch performs multicasting, excess load occurs because unicast cells as well as multicast cells pass through the copy network. Due to this excess load, multicast cells collide with other cells inside the switch, causing a deadlock in which cells are lost and switch performance deteriorates sharply. An input-queued switch also suffers from head-of-line (HOL) blocking, which further degrades performance. The proposed multicast switch uses a shared-memory architecture to reduce HOL blocking and deadlock. To decrease switch complexity and cell processing time and to improve throughput, cells are routed along separate paths according to traffic pattern, and the control part uses a scheduling algorithm that processes a maximum of 2N cells at once. In addition, when cells are congested at an output port, the cell loss probability increases; an Output Memory (OM) is therefore used to reduce it. Cells are stored in memories assigned by traffic pattern (UM, MM), and cells in the Output Memory are cleared after a fixed holding time to improve memory utilization. The performance of the proposed switch is evaluated and compared with the conventional scheme under bursty traffic through both Markov-chain analysis and simulation.
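
A simplified sketch of a shared-memory switch that stores unicast and multicast cells in separate memories (UM/MM) and fans multicast cells out to per-port output memories (OM); the class and method names are assumptions, and details such as the 2N-cell scheduler and the fixed holding time are omitted.

```python
from collections import deque

class SharedMemoryMulticastSwitch:
    def __init__(self, n_ports):
        self.unicast_mem = deque()                             # UM: (cell, output port)
        self.multicast_mem = deque()                           # MM: (cell, output port list)
        self.output_mem = [deque() for _ in range(n_ports)]    # OM: per-port queues

    def enqueue(self, cell, outputs):
        # Store the cell in the memory assigned to its traffic type.
        if len(outputs) == 1:
            self.unicast_mem.append((cell, outputs[0]))
        else:
            self.multicast_mem.append((cell, list(outputs)))

    def schedule(self):
        # Move cells from the shared memories to the output memories; a multicast
        # cell fans out here, so it is stored only once before scheduling.
        if self.unicast_mem:
            cell, port = self.unicast_mem.popleft()
            self.output_mem[port].append(cell)
        if self.multicast_mem:
            cell, ports = self.multicast_mem.popleft()
            for port in ports:
                self.output_mem[port].append(cell)

    def transmit(self):
        # Each output port sends at most one cell per time slot.
        return [q.popleft() if q else None for q in self.output_mem]

sw = SharedMemoryMulticastSwitch(n_ports=4)
sw.enqueue("u1", outputs=[2])
sw.enqueue("m1", outputs=[0, 1, 3])
sw.schedule()
print(sw.transmit())    # ['m1', 'm1', 'u1', 'm1']
```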

Methodology for Developing a Predictive Model for Highway Traffic Information Using LSTM (LSTM을 활용한 고속도로 교통정보 예측 모델 개발 방법론)

  • Yoseph Lee; Hyoung-suk Jin; Yejin Kim; Sung-ho Park; Ilsoo Yun
    • The Journal of The Korea Institute of Intelligent Transport Systems, v.22 no.5, pp.1-18, 2023
  • With recent developments in big data and deep learning, a variety of traffic information is collected widely and used for traffic operations. In particular, long short-term memory (LSTM) networks are used in the field of traffic information prediction because of the time series characteristics of the data. Since the trend, seasonality, and cycle differ with the nature of the time series data fed to an LSTM, a trial-and-error process guided by the characteristics of the data is essential for finding the hyperparameters of time series prediction models. If a methodology for finding suitable hyperparameters is established, the time spent constructing high-accuracy models can be reduced. Therefore, in this study, a traffic information prediction model is developed based on highway vehicle detection system (VDS) data and LSTM, and the impact of each hyperparameter is assessed through changes in the LSTM evaluation indicators. In addition, a methodology for finding hyperparameters suitable for predicting highway traffic information in the transportation field is presented.
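
A minimal sketch of the trial-and-error hyperparameter search described above, assuming PyTorch, a synthetic stand-in for VDS speed data, and a small grid over hidden size and learning rate; none of these specific choices come from the paper.

```python
import itertools
import torch
import torch.nn as nn

def make_windows(series, window=12):
    # Slice a 1-D series into (input window, next value) pairs for one-step prediction.
    xs = torch.stack([series[i:i + window] for i in range(len(series) - window)])
    ys = series[window:]
    return xs.unsqueeze(-1), ys.unsqueeze(-1)      # (N, window, 1), (N, 1)

class SpeedLSTM(nn.Module):
    def __init__(self, hidden):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])

def evaluate(hidden, lr, epochs, xs, ys):
    model, loss_fn = SpeedLSTM(hidden), nn.L1Loss()          # MAE as the indicator
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(xs), ys)
        loss.backward()
        opt.step()
    return loss.item()

series = torch.sin(torch.linspace(0, 20, 200))               # stand-in for VDS speeds
xs, ys = make_windows(series)
grid = itertools.product([16, 32], [1e-2, 1e-3])              # hidden size x learning rate
best = min((evaluate(h, lr, epochs=50, xs=xs, ys=ys), h, lr) for h, lr in grid)
print("best MAE %.4f with hidden=%d, lr=%g" % best)
```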

Performance Improvement of the Multicast Switch using Output Scheduling Scheme (출력 스케줄링 기법을 이용한 멀티캐스트 스위치의 성능 개선)

  • 최영복; 최종길; 김해근
    • Journal of Korea Multimedia Society, v.6 no.2, pp.301-308, 2003
  • In this paper, we propose a multicast ATM switch that reduces traffic load by storing unicast cells and multicast cells separately according to cell type. The switch is based on a shared-memory architecture to reduce HOL blocking and deadlock. The proposed switch uses a control scheme that schedules stored cells to the output ports so as to reduce cell loss and deliver cells efficiently. We analyzed the performance of the proposed switch through computer simulation, and the results show the effectiveness of the switch.

Dynamic Limited Directory Scheme for Distributed Shared Memory Systems (분산공유 메모리 시스템을 위한 동적 제한 디렉터리 기법)

  • Lee, Dong-Gwang; Gwon, Hyeok-Seong; Choe, Seong-Min; An, Byeong-Cheol
    • The Transactions of the Korea Information Processing Society, v.6 no.4, pp.1098-1105, 1999
  • The caches in distributed shared memory systems enhance performance by reducing memory access latency and communication overhead, but they must solve the cache coherence problem. This paper proposes a new directory protocol that solves the cache coherence problem and improves system performance in distributed shared memory systems. To maintain the coherence of shared data, sharers within a limited distance are tracked with a bit vector, as in the full-map directory scheme, which keeps communication overhead low. Sharers beyond the distance limit are recorded as pointers in a directory pool. Since the bit vector and the directory pool eliminate unnecessary cache invalidations, the proposed scheme reduces communication traffic and improves system performance. The dynamic limited directory scheme reduces communication traffic by up to 66 percent compared with the limited directory scheme, and the number of directory accesses by up to 27 percent compared with the dynamic pointer allocation scheme.
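
An illustrative sketch of a directory entry that tracks sharers within the distance limit in a bit vector and spills more distant sharers into a shared pointer pool; the linear distance metric and the DirectoryEntry layout are assumptions, not the paper's hardware design.

```python
class DirectoryEntry:
    """Directory state for one memory block on its home node."""

    def __init__(self, block_id, home_node, n_nodes, distance_limit, pointer_pool):
        self.block_id = block_id
        self.home = home_node
        self.near_vector = [0] * n_nodes     # one presence bit per node within the limit
        self.distance_limit = distance_limit
        self.pool = pointer_pool             # shared list of (block_id, far node) pointers

    def distance(self, node):
        return abs(node - self.home)         # assumption: simple linear node topology

    def add_sharer(self, node):
        if self.distance(node) <= self.distance_limit:
            self.near_vector[node] = 1                   # full-map style for close sharers
        else:
            self.pool.append((self.block_id, node))      # pointer pool for distant sharers

    def invalidate_all(self):
        # Only actual sharers receive invalidations, which is what cuts coherence traffic.
        sharers = [n for n, bit in enumerate(self.near_vector) if bit]
        sharers += [n for blk, n in self.pool if blk == self.block_id]
        self.near_vector = [0] * len(self.near_vector)
        self.pool[:] = [(blk, n) for blk, n in self.pool if blk != self.block_id]
        return sharers

pool = []                                    # directory pool shared by all entries
entry = DirectoryEntry("block7", home_node=4, n_nodes=16, distance_limit=3, pointer_pool=pool)
for node in (2, 5, 12):
    entry.add_sharer(node)
print(entry.invalidate_all())                # [2, 5, 12]; node 12 came from the pointer pool
```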

Mathematical Analysis of the Parallel Packet Switch with a Sliding Window Scheme

  • Liu, Chia-Lung; Wu, Chin-Chi; Lin, Woei
    • Journal of Communications and Networks, v.9 no.3, pp.330-341, 2007
  • This work analyzes the performance of the parallel packet switch (PPS) with a sliding window (SW) method. A PPS comprises numerous packet switches that operate independently and in parallel, and the conventional PPS dispatch algorithm adopts a round-robin (RR) method. The PPS class is characterized by the deployment of parallel low-speed switches, all of whose memory buffers run more slowly than the external line rate. In this work, a novel SW packet switching method for the PPS, called SW-PPS, is proposed. The SW-PPS uses memory space more effectively than the existing RR-based PPS. Under identical Bernoulli and bursty data traffic, the SW-PPS provides significantly better performance than the RR-based PPS. Moreover, this investigation presents a novel mathematical analytical model to evaluate the performance of the PPS under the RR and SW methods. Under various operating conditions, the proposed model and analysis capture performance characteristics including throughput, cell delay, and cell drop rate.
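
A simplified sketch contrasting round-robin dispatch with a sliding-window dispatch that skips full center-stage buffers; the dispatch rule is inferred from the description above and is not the paper's exact SW-PPS algorithm.

```python
class ParallelPacketSwitch:
    def __init__(self, n_switches, buffer_size, use_sliding_window):
        self.buffers = [0] * n_switches      # occupancy of each low-speed switch
        self.size = buffer_size
        self.ptr = 0
        self.sliding = use_sliding_window
        self.dropped = 0

    def dispatch(self, n_cells):
        for _ in range(n_cells):
            if self.sliding:
                # Sliding window: probe from the window pointer, take the first
                # switch with free space, then slide the pointer past it.
                for step in range(len(self.buffers)):
                    k = (self.ptr + step) % len(self.buffers)
                    if self.buffers[k] < self.size:
                        self.buffers[k] += 1
                        self.ptr = (k + 1) % len(self.buffers)
                        break
                else:
                    self.dropped += 1        # every buffer is full
            else:
                # Round robin: always use the next switch, even if it is full.
                k = self.ptr
                self.ptr = (self.ptr + 1) % len(self.buffers)
                if self.buffers[k] < self.size:
                    self.buffers[k] += 1
                else:
                    self.dropped += 1

pps_rr = ParallelPacketSwitch(4, buffer_size=2, use_sliding_window=False)
pps_sw = ParallelPacketSwitch(4, buffer_size=2, use_sliding_window=True)
pps_rr.buffers[1] = 2                        # one center-stage switch already congested
pps_sw.buffers[1] = 2
pps_rr.dispatch(6)
pps_sw.dispatch(6)
print(pps_rr.dropped, pps_sw.dropped)        # RR drops at the full switch; SW does not
```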

IMT: A Memory-Efficient and Fast Updatable IP Lookup Architecture Using an Indexed Multibit Trie

  • Kim, Junghwan; Ko, Myeong-Cheol; Shin, Moon Sun; Kim, Jinsoo
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.4, pp.1922-1940, 2019
  • IP address lookup determines the next hop for a given destination IP address. It plays an important role in modern routers because of its computation time and ever-increasing Internet traffic. TCAM-based IP lookup approaches can exploit parallel searching but are limited in size by latency, power consumption, updatability, and cost. On the other hand, multibit trie-based approaches use SRAM, which has relatively low power consumption and cost. They reduce the number of memory accesses required for each lookup, but each lookup still needs several accesses. Moreover, memory efficiency and updatability are tied to the number of memory accesses. In this paper, we propose a novel architecture using an Indexed Multibit Trie (IMT), which combines TCAM and SRAM. In the proposed architecture, each lookup takes at most two memory accesses. We show how the IMT is constructed so as to be memory-efficient and quickly updatable. Experimental results with real-world forwarding tables show that our scheme achieves good memory efficiency as well as fast updatability.
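
A simplified two-stage lookup in the spirit of the IMT: a first-level index (TCAM in the paper, a dictionary here) selects one multibit-trie node stored in an array (SRAM in the paper), so each lookup costs one index access plus one node access. The strides, the controlled prefix expansion, and the restriction on prefix lengths are assumptions for illustration.

```python
INDEX_BITS = 8        # bits resolved by the index stage (TCAM in the paper)
NODE_STRIDE = 8       # bits resolved inside a single multibit-trie node (SRAM)

def ip_to_int(ip):
    a, b, c, d = (int(x) for x in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def build(prefixes):
    """prefixes: (cidr, nexthop) pairs with lengths in [INDEX_BITS, INDEX_BITS + NODE_STRIDE]."""
    index = {}
    # Insert shorter prefixes first so longer ones overwrite them (longest-prefix match).
    for cidr, nexthop in sorted(prefixes, key=lambda p: int(p[0].split("/")[1])):
        net, plen = cidr.split("/")
        addr, plen = ip_to_int(net), int(plen)
        node = index.setdefault(addr >> (32 - INDEX_BITS), [None] * (1 << NODE_STRIDE))
        # Controlled prefix expansion: fill every node slot the prefix covers.
        low = (addr >> (32 - INDEX_BITS - NODE_STRIDE)) & ((1 << NODE_STRIDE) - 1)
        for slot in range(low, low + (1 << (INDEX_BITS + NODE_STRIDE - plen))):
            node[slot] = nexthop
    return index

def lookup(index, ip):
    addr = ip_to_int(ip)
    node = index.get(addr >> (32 - INDEX_BITS))            # memory access 1: index stage
    if node is None:
        return None
    slot = (addr >> (32 - INDEX_BITS - NODE_STRIDE)) & ((1 << NODE_STRIDE) - 1)
    return node[slot]                                      # memory access 2: trie node

table = build([("10.0.0.0/12", "A"), ("10.1.0.0/16", "B")])
print(lookup(table, "10.1.2.3"), lookup(table, "10.7.0.1"))   # B A
```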