Abstract
Underwater acoustic sensor networks (UWSNs), serving as a reliable and efficient infrastructure for the Internet of underwater things (IoUT), have attracted much research interest in recent years due to their wide range of potential marine applications. The limited energy supply of underwater sensor nodes is a significant challenge that can be mitigated by the cyclic difference set (CDS)-based asynchronous wake-up coordination scheme. However, the CDS-based asynchronous wake-up scheme also introduces long neighbor-discovery delays, which degrade both packet delay and network lifetime. In this paper, we formulate the problem of policy selection for idle listening as a Markov decision process and exploit the framework of deep reinforcement learning to obtain the optimal policies of underwater sensor nodes. Furthermore, long short-term memory (LSTM) networks are used to estimate network traffic features, which improves the performance of the proposed adaptive asynchronous wake-up scheme. To verify the performance of the proposed scheme, simulations in different network scenarios are conducted, comparing it with random-policy, fixed-metric-policy, and original CDS-based asynchronous wake-up schemes.
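The abstract's core idea, choosing an idle-listening (wake-up) policy by treating it as a Markov decision process, can be illustrated with a deliberately small sketch. The two-state traffic model, the two actions, and the reward values below are illustrative assumptions only; the paper itself uses deep reinforcement learning with an LSTM traffic estimator, not the tabular Q-learning shown here.

```python
import random

# Toy sketch (assumed setup, not the paper's actual formulation):
# tabular Q-learning over a two-state MDP where a sensor node picks
# a wake-up action based on the current traffic level.

random.seed(0)

N_STATES = 2    # 0: low network traffic, 1: high network traffic
N_ACTIONS = 2   # 0: stay asleep (save energy), 1: wake up and listen
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def reward(state, action):
    """Listening pays off under high traffic; sleeping under low traffic."""
    if action == 1:
        return 1.0 if state == 1 else -1.0  # catch packets vs. waste energy
    return 1.0 if state == 0 else -1.0      # save energy vs. miss packets

def train(steps=5000):
    q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    state = 0
    for _ in range(steps):
        if random.random() < EPSILON:
            action = random.randrange(N_ACTIONS)  # explore
        else:
            action = max(range(N_ACTIONS), key=lambda a: q[state][a])  # exploit
        r = reward(state, action)
        next_state = random.randrange(N_STATES)   # traffic evolves independently
        q[state][action] += ALPHA * (r + GAMMA * max(q[next_state]) - q[state][action])
        state = next_state
    return q

q = train()
policy = [max(range(N_ACTIONS), key=lambda a: q[s][a]) for s in range(N_STATES)]
print(policy)  # expected: [0, 1] -> sleep under low traffic, listen under high
```

In the paper's setting the state would instead be a learned traffic-feature estimate (from the LSTM) and the Q-table would be replaced by a deep network; this sketch only conveys the policy-selection structure.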
| Original language | English |
|---|---|
| Article number | 9337228 |
| Pages (from-to) | 1851-1865 |
| Number of pages | 15 |
| Journal | IEEE Transactions on Vehicular Technology |
| Volume | 70 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - Feb 2021 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 1967-2012 IEEE.
Keywords
- Internet of underwater things (IoUT)
- asynchronous wake-up scheme
- cyclic difference set (CDS)
- deep reinforcement learning
Fingerprint
Dive into the research topics of 'An Adaptive Asynchronous Wake-Up Scheme for Underwater Acoustic Sensor Networks Using Deep Reinforcement Learning'. Together they form a unique fingerprint.