Remote state estimation with usage-dependent Markovian packet losses

Jiazheng Wang, Xiaoqiang Ren*, Subhrakanti Dey, Ling Shi

*Corresponding author for this work

Research output: Contribution to journal › Journal Article › peer-review

11 Citations (Scopus)

Abstract

In this paper, we consider the problem of packet scheduling for remote state estimation with usage-dependent Markovian packet losses. A sensor measures the state of a discrete-time linear process, computes an estimate via a local Kalman filter, and sends packets to a remote estimator over a network. The link state evolves as a two-state Markov chain whose transition probabilities depend on the network usage. The aim is to design a scheduling policy that balances estimation quality against energy consumption. We formulate the problem as a Markov decision process (MDP) and establish structural properties of the optimal policy. Furthermore, based on these structural properties, we derive a necessary and sufficient condition for the mean-square stability of the remote estimator. Simulation examples are provided to illustrate the results.

Original language: English
Article number: 109342
Journal: Automatica
Volume: 123
DOIs
Publication status: Published - Jan 2021

Bibliographical note

Publisher Copyright:
© 2020 Elsevier Ltd

Keywords

  • Kalman filters
  • Markov decision processes
  • Networked control systems
  • State estimation
