Abstract
In a partially observable Markov decision process (POMDP), if the reward can be observed at each step, then the observed reward history contains information about the unknown state. This information, in addition to the information contained in the observation history, can be used to update the state probability distribution. The policy thus obtained is called a reward-information policy (RI-policy); an optimal RI-policy performs no worse than any standard optimal policy that depends only on the observation history. This observation leads to four different problem formulations for POMDPs, depending on whether the reward function is known and whether the reward at each step is observable. This exploratory work may draw attention to these interesting problems.
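The belief update behind the RI-policy idea can be sketched concretely. Below is a minimal Python sketch, not taken from the paper, for a finite POMDP with a deterministic reward function r(s, a): the observed reward enters the Bayes update as an extra likelihood term over predecessor states, alongside the usual observation likelihood. The arrays `T`, `O`, `R` and the function name `belief_update` are illustrative assumptions.

```python
import numpy as np

def belief_update(b, a, o, r_obs, T, O, R, tol=1e-9):
    """One-step belief update conditioning on both the observation
    and the observed reward (the RI idea from the abstract).

    b     : (S,) prior belief over states
    a     : action index
    o     : observation index
    r_obs : observed scalar reward
    T     : (A, S, S) transition probabilities T[a, s, s']
    O     : (A, S, O) observation probabilities O[a, s', o]
    R     : (A, S) deterministic reward function r(s, a)
    """
    # Likelihood of the observed reward under each predecessor state;
    # with a deterministic reward model this is a 0/1 indicator.
    reward_lik = (np.abs(R[a] - r_obs) < tol).astype(float)   # (S,)

    # Predict: propagate the reward-weighted belief through the dynamics.
    predicted = (b * reward_lik) @ T[a]                        # (S',)

    # Correct: weight by the observation likelihood and renormalize.
    posterior = O[a, :, o] * predicted
    z = posterior.sum()
    if z == 0:
        raise ValueError("observed (o, r) pair has zero probability under the model")
    return posterior / z
```

Setting `reward_lik` to all ones recovers the standard observation-only belief update, which makes explicit why an optimal RI-policy can perform no worse than a policy depending only on the observation history.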
| | |
|---|---|
| Original language | English |
| Pages (from-to) | 677-681 |
| Number of pages | 5 |
| Journal | IEEE Transactions on Automatic Control |
| Volume | 52 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - Apr 2007 |
Keywords
- Partially observable Markov decision process (POMDP)
- Reward-information policy