• Title/Summary/Keyword: Reinforcement Learning-based Protocol


A Learning-based Power Control Scheme for Edge-based eHealth IoT Systems

  • Su, Haoru; Yuan, Xiaoming; Tang, Yujie; Tian, Rui; Sun, Enchang; Yan, Hairong
    • KSII Transactions on Internet and Information Systems (TIIS), v.15 no.12, pp.4385-4399, 2021
  • Internet of Things (IoT) eHealth systems composed of Wireless Body Area Networks (WBANs) have emerged recently. Sensor nodes are placed around or in the human body to collect physiological data, and WBANs support many different applications, for instance health monitoring. Because of the limited battery size, and in addition to speed, reliability, and accuracy, the design of WBAN protocols should consider energy efficiency and time delay. To address these problems, this paper adopts an end-edge-cloud orchestrated network architecture and proposes a transmission power control scheme based on a reinforcement learning algorithm. The priority of the sensed data is classified according to the specific application. The system utility function is modeled according to the channel factors, the energy utility, and the conditions for successful transmission, and the optimization problem is mapped to a Q-learning model. Following this online power control protocol, the power levels of both the sensor-to-coordinator and coordinator-to-edge-server links can be adjusted according to the current channel condition. The network performance is evaluated by simulation, and the results show that the proposed power control protocol achieves higher system energy efficiency, delivery ratio, and throughput.
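The abstract above outlines a tabular Q-learning formulation for link power control. As a rough illustration only (not the paper's actual model), the sketch below assumes a discretized channel state, a small set of candidate power levels, and a reward that combines a delivery bonus with an energy cost; all names, constants, and the toy success model are hypothetical.

```python
import random

# Minimal tabular Q-learning sketch for link power control (illustrative only).
POWER_LEVELS = [0.1, 0.5, 1.0, 2.0]       # candidate transmit powers (mW), hypothetical
CHANNEL_STATES = ["bad", "fair", "good"]  # discretized channel condition
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1     # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in CHANNEL_STATES for a in range(len(POWER_LEVELS))}

def observe_channel():
    """Stand-in for channel estimation at the sensor or coordinator."""
    return random.choice(CHANNEL_STATES)

def transmit(state, power):
    """Toy model: better channel and higher power raise delivery probability."""
    base = {"bad": 0.2, "fair": 0.5, "good": 0.8}[state]
    success = random.random() < min(1.0, base + 0.1 * power)
    # Utility rewards successful delivery and penalizes energy spent.
    return (1.0 if success else -0.5) - 0.2 * power

def choose_action(state):
    if random.random() < EPSILON:
        return random.randrange(len(POWER_LEVELS))
    return max(range(len(POWER_LEVELS)), key=lambda a: Q[(state, a)])

state = observe_channel()
for _ in range(10000):
    action = choose_action(state)
    reward = transmit(state, POWER_LEVELS[action])
    next_state = observe_channel()
    best_next = max(Q[(next_state, a)] for a in range(len(POWER_LEVELS)))
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = next_state
```

The same loop could be run independently for the sensor-to-coordinator and coordinator-to-edge links, each with its own Q-table.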

A Research on Low-power Buffer Management Algorithm based on Deep Q-Learning approach for IoT Networks (IoT 네트워크에서의 심층 강화학습 기반 저전력 버퍼 관리 기법에 관한 연구)

  • Song, Taewon
    • Journal of Internet of Things and Convergence, v.8 no.4, pp.1-7, 2022
  • As the number of IoT devices increases, power management of the cluster head, which acts as a gateway between the cluster and the sink node in an IoT network, becomes crucial. Particularly when the cluster head is a mobile wireless terminal, the power consumption of the IoT network must be minimized over its lifetime. In addition, the delay of information transmission is one of the primary metrics for rapid information collection in the IoT network. In this paper, we propose a low-power buffer management algorithm that takes the information transmission delay into account. By forwarding or skipping received packets using deep Q-learning, a deep reinforcement learning method, the proposed scheme is able to reduce power consumption while keeping the transmission delay low. The proposed approach is shown to reduce power consumption and improve delay relative to an existing buffer management technique used as a baseline under the slotted ALOHA protocol.
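For context on the forward-or-skip decision described above, here is a simplified sketch. The paper employs a deep Q-network, but this illustration substitutes a small Q-table over discretized buffer occupancy to keep the example short; the arrival process, reward weights, and buffer size are assumptions, not values from the paper.

```python
import random
from collections import deque

# Simplified forward/skip buffer policy sketch (Q-table stand-in for a DQN).
ACTIONS = ("forward", "skip")
BUFFER_CAP = 10
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
TX_COST, DELAY_PENALTY, DROP_PENALTY = 0.3, 0.05, 1.0   # hypothetical reward weights

Q = {(occ, a): 0.0 for occ in range(BUFFER_CAP + 1) for a in ACTIONS}
buffer = deque()

def step(action):
    """Apply the chosen action and return (reward, new buffer occupancy)."""
    reward = 0.0
    if action == "forward" and buffer:
        buffer.popleft()
        reward -= TX_COST                      # energy spent on transmission
    reward -= DELAY_PENALTY * len(buffer)      # holding packets costs delay
    if random.random() < 0.6:                  # toy packet arrival process
        if len(buffer) < BUFFER_CAP:
            buffer.append("pkt")
        else:
            reward -= DROP_PENALTY             # overflow drop
    return reward, len(buffer)

occ = 0
for _ in range(20000):
    a = random.choice(ACTIONS) if random.random() < EPSILON \
        else max(ACTIONS, key=lambda x: Q[(occ, x)])
    reward, next_occ = step(a)
    best_next = max(Q[(next_occ, x)] for x in ACTIONS)
    Q[(occ, a)] += ALPHA * (reward + GAMMA * best_next - Q[(occ, a)])
    occ = next_occ
```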

Equal Energy Consumption Routing Protocol Algorithm Based on Q-Learning for Extending the Lifespan of Ad-Hoc Sensor Network (애드혹 센서 네트워크 수명 연장을 위한 Q-러닝 기반 에너지 균등 소비 라우팅 프로토콜 기법)

  • Kim, Ki Sang; Kim, Sung Wook
    • KIPS Transactions on Computer and Communication Systems, v.10 no.10, pp.269-276, 2021
  • Recently, smart sensors have been used in various environments, and the implementation of ad-hoc sensor networks (ASNs) is a hot research topic. Unfortunately, traditional sensor network routing algorithms focus on specific control issues and cannot be applied directly to ASN operation. In this paper, we propose a new routing protocol based on Q-learning. The main challenge of the proposed approach is to extend the lifespan of ASNs through efficient energy allocation while maintaining balanced system performance. The proposed method enhances the Q-learning effect by considering various environmental factors. When a transmission fails, a node penalty is accumulated to increase the probability of subsequent successful communication. In particular, each node stores the Q values of its adjacent nodes in its own Q table; every time a data transfer is executed, the Q values are updated and accumulated so that nodes learn to select the optimal routing path. Simulation results confirm that the proposed method chooses energy-efficient routing paths and achieves excellent network performance compared with existing ASN routing protocols.
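The abstract describes nodes that keep Q values for their neighbors, accumulate penalties on failed transmissions, and update Q values on every data transfer. A minimal sketch of that idea follows; the three-node topology, link success probability, energy terms, and reward shape are purely illustrative and not taken from the paper.

```python
import random

# Per-node Q-tables for next-hop selection toward a sink (illustrative sketch).
NEIGHBORS = {            # hypothetical adjacency; "SINK" is the destination
    "A": ["B", "C"], "B": ["C", "SINK"], "C": ["SINK"],
}
ENERGY = {"A": 1.0, "B": 1.0, "C": 1.0}
ALPHA, GAMMA, EPSILON = 0.2, 0.9, 0.1

Q = {n: {nb: 0.0 for nb in nbs} for n, nbs in NEIGHBORS.items()}
PENALTY = {n: {nb: 0.0 for nb in nbs} for n, nbs in NEIGHBORS.items()}

def hop_reward(nxt, success):
    if not success:
        return -1.0                       # failed transmission
    if nxt == "SINK":
        return 1.0                        # delivery reward
    return -0.1 + 0.2 * ENERGY[nxt]       # favor energy-rich relays for balanced use

def route_packet(src="A"):
    node = src
    while node != "SINK":
        nbs = NEIGHBORS[node]
        nxt = random.choice(nbs) if random.random() < EPSILON \
            else max(nbs, key=lambda nb: Q[node][nb] - PENALTY[node][nb])
        success = random.random() < 0.9   # toy link success probability
        reward = hop_reward(nxt, success)
        if not success:
            PENALTY[node][nxt] += 0.1     # accumulate penalty on failure, then retry
            continue
        best_next = 0.0 if nxt == "SINK" else max(Q[nxt].values())
        Q[node][nxt] += ALPHA * (reward + GAMMA * best_next - Q[node][nxt])
        if nxt != "SINK":
            ENERGY[nxt] = max(0.0, ENERGY[nxt] - 0.01)   # deplete relay energy
        node = nxt

for _ in range(2000):
    route_packet()
```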

A Bio-inspired Hybrid Cross-Layer Routing Protocol for Energy Preservation in WSN-Assisted IoT

  • Tandon, Aditya; Kumar, Pramod; Rishiwal, Vinay; Yadav, Mano; Yadav, Preeti
    • KSII Transactions on Internet and Information Systems (TIIS), v.15 no.4, pp.1317-1341, 2021
  • Nowadays, the Internet of Things (IoT) is adopted to enable effective and smooth communication among different networks. In some specific applications, Wireless Sensor Networks (WSNs) are used in the IoT to gather particular data without human interaction. WSNs are self-organizing in nature, so they mostly rely on multi-hop data forwarding; thus, to achieve better communication, a cross-layer routing strategy is preferred. In the cross-layer routing strategy, routing is processed through three layers: the transport, data link, and physical layers. Even though effective communication is achieved via a cross-layer routing strategy, energy remains a constraint in WSN-assisted IoT, and cluster-based communication is one of the most widely used strategies for preserving energy in WSN routing. This paper proposes a Bio-inspired Hybrid Cross-Layer Routing (BiHCLR) protocol to achieve effective and energy-preserving routing in WSN-assisted IoT. Initially, the deployed sensor nodes are arranged in the form of a grid as per the grid-based routing strategy. Then, to enable energy preservation in BiHCLR, a fuzzy logic approach is executed to select the Cluster Head (CH) for every cell of the grid, and a hybrid bio-inspired algorithm that combines moth search and Salp Swarm optimization techniques is used to select the routing path. The performance of the proposed BiHCLR is evaluated through a Quality of Service (QoS) analysis in terms of packet loss, bit error rate, transmission delay, network lifetime, buffer occupancy, and throughput. These results are validated by comparison with conventional routing strategies such as Fuzzy-rule-based Energy Efficient Clustering and Immune-Inspired Routing (FEEC-IIR), Neuro-Fuzzy Emperor Penguin Optimization (NF-EPO), Fuzzy Reinforcement Learning-based Data Gathering (FRLDG), and Hierarchical Energy Efficient Data gathering (HEED). Ultimately, the proposed BiHCLR outperforms all of these conventional techniques.
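As a rough illustration of the grid-based cluster-head selection stage described above, the sketch below partitions a field into grid cells and picks one CH per occupied cell. A simple weighted score of residual energy and distance to the cell centre stands in for the paper's fuzzy inference; the field size, grid resolution, node fields, and weights are assumptions, and the hybrid moth search / Salp Swarm path selection is not shown.

```python
import math
import random

# Grid-based cluster-head (CH) selection sketch; a weighted score replaces fuzzy inference.
GRID_SIZE = 4          # 4x4 grid over a 100 m x 100 m field (hypothetical)
FIELD = 100.0
CELL = FIELD / GRID_SIZE

def make_nodes(n=60):
    """Random sensor deployment with residual-energy values (hypothetical fields)."""
    return [{"id": i,
             "x": random.uniform(0, FIELD),
             "y": random.uniform(0, FIELD),
             "energy": random.uniform(0.5, 1.0)} for i in range(n)]

def cell_of(node):
    return (int(node["x"] // CELL), int(node["y"] // CELL))

def ch_score(node, cx, cy):
    """Higher residual energy and smaller distance to the cell centre score higher."""
    centre = ((cx + 0.5) * CELL, (cy + 0.5) * CELL)
    dist = math.hypot(node["x"] - centre[0], node["y"] - centre[1])
    return 0.7 * node["energy"] - 0.3 * (dist / CELL)

def select_cluster_heads(nodes):
    cells = {}
    for node in nodes:
        cells.setdefault(cell_of(node), []).append(node)
    # One CH per occupied grid cell, chosen by the score above.
    return {cell: max(members, key=lambda n: ch_score(n, *cell))
            for cell, members in cells.items()}

heads = select_cluster_heads(make_nodes())
for cell, ch in sorted(heads.items()):
    print(f"cell {cell}: CH node {ch['id']} (energy {ch['energy']:.2f})")
```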