Abstract
We introduce a model of graph-constrained dynamic choice with reinforcement modeled by positively α-homogeneous rewards. We show that its empirical process, which can be written as a stochastic approximation recursion with Markov noise, has the same probability law as a certain vertex reinforced random walk. We use this equivalence to show that for α > 0, the asymptotic outcome concentrates around the optimum in a certain limiting sense when 'annealed' by letting α ↑ ∞ slowly.
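The dynamics described in the abstract can be illustrated with a minimal simulation sketch. This is not the paper's construction — the graph, the reward values, and the annealing schedule below are all illustrative assumptions — but it shows the qualitative mechanism: a walker on a graph chooses neighbors with probability proportional to a reward times a positively α-homogeneous function of past visit counts, and α is increased slowly so the empirical occupation concentrates on high-reward vertices.

```python
import math
import random


def annealed_vrrw(adjacency, rewards, steps, alpha_schedule, seed=0):
    """Simulate a vertex-reinforced random walk with annealed reinforcement.

    At step t, from vertex v, the walker moves to neighbor j with probability
    proportional to rewards[j] * counts[j] ** alpha_schedule(t), i.e. the
    reinforcement counts[j] ** alpha is positively alpha-homogeneous in the
    visit counts. Letting alpha grow slowly ("annealing") pushes the walk
    toward high-reward vertices. All names and the schedule are illustrative,
    not the paper's notation.
    """
    rng = random.Random(seed)
    n = len(adjacency)
    counts = [1] * n  # visit counts, started at 1 to avoid zero weights
    v = 0
    for t in range(steps):
        alpha = alpha_schedule(t)
        nbrs = adjacency[v]
        weights = [rewards[j] * counts[j] ** alpha for j in nbrs]
        r = rng.random() * sum(weights)
        acc = 0.0
        for j, w in zip(nbrs, weights):
            acc += w
            if r <= acc:
                v = j
                break
        counts[v] += 1
    return counts


# Illustrative run: complete graph on 3 vertices, vertex 2 has the best reward,
# and alpha increases slowly (logarithmically) with time.
adjacency = [[1, 2], [0, 2], [0, 1]]
rewards = [1.0, 2.0, 3.0]
counts = annealed_vrrw(adjacency, rewards, steps=5000,
                       alpha_schedule=lambda t: math.log1p(t) / 5)
```

In this toy run the empirical visit counts concentrate on the highest-reward vertex, mirroring the concentration-around-the-optimum result the abstract states for the slowly annealed regime.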
| Original language | English |
|---|---|
| Pages (from-to) | 1435-1446 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Control of Network Systems |
| Volume | 9 |
| Issue number | 3 |
| Publication status | Published - 1 Sept 2022 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2014 IEEE.
Keywords
- Annealed dynamics
- dynamic choice with reinforcement
- graphical constraints
- optimal choice
- vertex reinforced random walk