Optimal Transmission Scheduling Over Multihop Networks: Structural Results and Reinforcement Learning

Lixin Yang, Yong Xu*, Weijun Lv, Jun Yi Li, Ling Shi

*Corresponding author for this work

Research output: Contribution to journal › Journal Article › peer-review

20 Citations (Scopus)

Abstract

This article studies optimal transmission scheduling for remote state estimation over multihop networks. A smart sensor observes a dynamic system and sends its local state estimate to a remote estimator (RE). To save energy, a multihop network is deployed to relay data packets from the smart sensor to the RE, and the sensor decides, by adjusting its transmission power, over how many hops to communicate with the RE. To jointly minimize the estimation error and the energy consumption, the transmission scheduling problem is formulated as a modified Markov decision process (MDP) that incorporates historical actions into the state. A sufficient condition is constructed to guarantee that the MDP admits an optimal deterministic and stationary policy. The structure of the optimal policy is further characterized to reduce the computational complexity. A deep reinforcement learning algorithm, the dueling double Q-network, is introduced to obtain a near-optimal policy. Finally, a simulation example is provided to illustrate the developed results.
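The two reinforcement-learning ingredients named in the abstract can be sketched concisely. Below is a minimal NumPy illustration, not the paper's implementation: the dueling aggregation Q(s, a) = V(s) + A(s, a) − mean_a A(s, a), and the double-DQN target, in which the online network selects the next action and the target network evaluates it. All function names and the toy numbers are illustrative assumptions.

```python
import numpy as np

def dueling_q(value, advantages):
    # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    # Subtracting the mean advantage makes V and A identifiable.
    return value + advantages - advantages.mean(axis=-1, keepdims=True)

def double_dqn_target(reward, q_online_next, q_target_next, gamma=0.95, done=False):
    # Double DQN: the online network picks the greedy next action,
    # the target network supplies its value, reducing overestimation bias.
    a_star = int(np.argmax(q_online_next))
    return reward + (0.0 if done else gamma * q_target_next[a_star])

# Toy example: one state, three actions (e.g., three candidate hop counts).
V = np.array([1.0])
A = np.array([0.5, -0.5, 0.0])
Q = dueling_q(V, A)  # -> [1.5, 0.5, 1.0]
```

In the scheduling setting of the abstract, each action would correspond to a choice of hop count (equivalently, a transmission-power level), and the reward would trade off estimation error against energy consumption.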

Original language: English
Pages (from-to): 1826-1833
Number of pages: 8
Journal: IEEE Transactions on Automatic Control
Volume: 69
Issue number: 3
DOIs
Publication status: Published - 1 Mar 2024

Bibliographical note

Publisher Copyright:
© 1963-2012 IEEE.

Keywords

  • Estimation
  • Kalman filtering
  • sensor networks
  • transmission scheduling
