Learning about a Moving Target in Resource Management: Optimal Bayesian Disease Control

Resource managers must often make difficult choices in the face of imperfectly observed and dynamically changing systems (e.g. livestock, fisheries, water and invasive species). A rich set of techniques exists for identifying optimal choices under such uncertainty, though that uncertainty is typically, and unrealistically, assumed to be understood and irreducible. The adaptive management literature overcomes this limitation with tools for optimal learning; however, rich descriptions of system dynamics are ironed out for tractability, e.g. the model component that is targeted for learning is not allowed to vary. We overcome this trade-off through a novel extension of the existing Partially Observable Markov Decision Process (POMDP) framework that allows for learning about a dynamically changing and continuous state. We illustrate this methodology by exploring optimal control of bovine tuberculosis in New Zealand's cattle. Disease testing---the control variable---serves both to identify herds for treatment and to provide information on prevalence, which is imperfectly observed and subject to change due to controllable and uncontrollable factors. We find substantial efficiency losses both from ignoring learning (standard stochastic optimization) and from simplifying system dynamics (standard adaptive management), though the latter effect dominates. We also find that under an adaptive management approach, simplifying dynamics can lead to a belief trap in which information gathering ceases, beliefs become increasingly inaccurate, and losses mount.
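The core mechanism the abstract describes---testing a herd both treats the system and updates beliefs about an imperfectly observed, drifting prevalence---can be sketched as a discretized Bayesian filter. The sketch below is illustrative only: the grid, the test sensitivity/specificity values, and the simple smoothing used as a transition model are all assumptions for exposition, not the paper's model.

```python
import numpy as np
from math import comb

# Discretize the continuous state (disease prevalence) onto a grid,
# so the belief is a probability vector over candidate prevalence levels.
grid = np.linspace(0.0, 0.5, 51)
belief = np.ones_like(grid) / grid.size  # uniform prior belief

# Hypothetical characteristics of an imperfect diagnostic test.
sensitivity, specificity = 0.8, 0.95

def likelihood(positives, n_tests, p):
    """Probability of k positive results in n imperfect tests when true
    prevalence is p: binomial with misclassification folded in."""
    p_pos = sensitivity * p + (1 - specificity) * (1 - p)
    return comb(n_tests, positives) * p_pos**positives * (1 - p_pos)**(n_tests - positives)

def update_belief(belief, positives, n_tests):
    """Bayes (measurement) step: reweight each candidate prevalence
    by how well it explains the observed test results."""
    posterior = belief * np.array([likelihood(positives, n_tests, p) for p in grid])
    return posterior / posterior.sum()

def drift_belief(belief, spread=2):
    """Prediction step: the state is a moving target, so the belief is
    diffused between decisions. Here a simple local smoothing stands in
    for a full prevalence-transition model."""
    kernel = np.ones(2 * spread + 1)
    kernel /= kernel.sum()
    predicted = np.convolve(belief, kernel, mode="same")
    return predicted / predicted.sum()

# One decision cycle: observe 3 positives in 40 tests, then let prevalence drift.
belief = update_belief(belief, positives=3, n_tests=40)
belief = drift_belief(belief)
print(f"posterior mean prevalence: {grid @ belief:.3f}")
```

In a full POMDP, the testing intensity (here fixed at 40 tests) would itself be chosen to trade off treatment value against the information value of the resulting belief update; skipping the prediction step, as a static adaptive-management model effectively does, is what allows beliefs to drift away from the true state.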


Issue Date:
Jul 26 2015
Publication Type:
Conference Paper / Presentation
PURL Identifier:
http://purl.umn.edu/205715
Total Pages:
2
Series Statement:
AAEA
7716




 Record created 2017-04-01, last modified 2017-08-28
