WHEN IS LOGGING ROAD EROSION WORTH MONITORING?

David Tomberlin, Teresa Ish
______________________________________________________________________________

Abstract: Efficient allocation of funds for erosion control on logging roads depends on information about current and potential future erosion on different road segments. Acquiring this information is typically expensive, and may make no immediate contribution to erosion control. Thus, managers face a trade-off between spending funds on information gathering versus on actual erosion control measures. Here, we develop a framework for examining this trade-off when current erosion, future erosion, and the efficacy of erosion control measures are all uncertain. Specifically, casting the manager's problem of allocating funds between erosion control and erosion monitoring as a partially observable Markov decision process (POMDP) allows us to identify the conditions under which costly estimates of erosion levels are worth obtaining as part of an adaptive erosion control program, and, in contrast, under what conditions the better strategy is to skip data acquisition and proceed directly to erosion control treatments. We demonstrate the POMDP approach through an application to a stylized road erosion control problem.

Key words: road erosion, monitoring, partially observable Markov decision process
_____________________________________________________________________________________________

Introduction

Sediment loading from logging roads impairs water quality and habitat conditions in many Pacific coastal rivers and streams. In this paper, we address the question of whether logging road erosion monitoring is worth the time and expense, given that we could decline to monitor in favor of either applying rehabilitative treatments immediately, without collecting erosion data, or deferring the decision to implement road treatment or monitoring schemes (the more common practice).
We examine this question within the framework of a partially observable Markov decision process (POMDP), which is well suited to this purpose for at least two reasons. First, surface erosion is by its nature difficult to assess, even with special equipment, making the partial observability approach very apt. Second, logging road erosion control can be meaningfully represented in terms of a few states and actions, and the costs of these actions can be reasonably well estimated. Our model assumes the land manager wants to minimize long-run discounted total cost and will engage in monitoring only if it is expected to improve long-term performance. The model's purpose is to help the land manager decide when monitoring is worth the expense. Here, we apply the model to a single road segment, but it could also be applied to an entire watershed.

While there seems to be no consensus on a definition of 'adaptive management,' a necessary condition for management to be adaptive is that it account for the arrival of new information. Within the natural resource management literature, most work has focused on 'passive adaptive management,' in which new information is incorporated into decision making as it becomes available. A more difficult approach is that of 'active adaptive management,' in which new information is sought optimally: the manager weighs the short-term cost of information gathering against the potential long-term benefits, and decides whether the costly information is worth having¹.

Markov decision processes (MDPs), when solved with the techniques of stochastic dynamic programming, yield a mapping from the system state into an optimal policy, and may be thought of as a formal representation of adaptive management.
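To make the state-to-policy mapping concrete, the following sketch solves a toy fully observed erosion MDP by value iteration. All states, actions, costs, and transition probabilities here are illustrative assumptions of ours, not values from the application developed later in the paper.

```python
# Stylized MDP: a road segment is in one of two fully observed states,
# "low" or "high" erosion. The manager either does nothing or treats.
# Every number below is a made-up illustration, not an estimate.

STATES = ["low", "high"]
ACTIONS = ["do_nothing", "treat"]

# Per-period cost of (state, action): sediment damage plus treatment cost.
COST = {
    ("low", "do_nothing"): 0.0,
    ("low", "treat"): 4.0,
    ("high", "do_nothing"): 10.0,
    ("high", "treat"): 14.0,
}

# Transition probabilities P[next_state | state, action]: untreated roads
# tend to degrade; treatment usually restores the low-erosion state.
P = {
    ("low", "do_nothing"): {"low": 0.80, "high": 0.20},
    ("low", "treat"): {"low": 0.95, "high": 0.05},
    ("high", "do_nothing"): {"low": 0.00, "high": 1.00},
    ("high", "treat"): {"low": 0.90, "high": 0.10},
}

DISCOUNT = 0.9  # per-period discount factor


def q_value(s, a, V):
    """Expected discounted cost of taking action a in state s."""
    return COST[s, a] + DISCOUNT * sum(P[s, a][s2] * V[s2] for s2 in STATES)


def value_iteration(tol=1e-8):
    """Return (value function, optimal policy) minimizing long-run cost."""
    V = {s: 0.0 for s in STATES}
    while True:
        V_new = {s: min(q_value(s, a, V) for a in ACTIONS) for s in STATES}
        delta = max(abs(V_new[s] - V[s]) for s in STATES)
        V = V_new
        if delta < tol:
            break
    # The policy maps each observed state directly to a best action.
    policy = {s: min(ACTIONS, key=lambda a: q_value(s, a, V)) for s in STATES}
    return V, policy
```

With these particular numbers the solved policy treats only when erosion is observed to be high; the point is that an MDP presumes the state is observed, which is exactly the assumption questioned next.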
However, MDPs assume that state variables are observed perfectly, an assumption that clearly does not hold in many natural resource management problems: animal populations, mineral reserves, and water quality, at least in many situations, cannot be known with certainty, and even developing good estimates is generally expensive and time-consuming. The theory of partially observable Markov decision processes (POMDPs) was developed in response to this shortcoming of MDPs, but no numerical algorithms existed for POMDP solution until Sondik (1971). Despite a steady stream of improvements in both exact and heuristic solution techniques, most applied work in dynamic optimization (including control engineering, economics, and behavioral ecology) has continued to rely on MDPs built around

¹ Though these ideas are taken from the control engineering literature, the most thorough treatment in a natural resource management context seems to be Walters (1986).
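The machinery that distinguishes a POMDP from an MDP is Bayesian belief updating: the manager carries a probability distribution over the hidden erosion state and revises it after each (costly, imperfect) monitoring observation. A minimal sketch of one such update, with made-up observation probabilities of our own, not the paper's:

```python
# Belief update for a stylized POMDP: the true erosion state ("low" or
# "high") is hidden; a monitoring survey returns a noisy reading.
# The observation probabilities are illustrative assumptions.

STATES = ["low", "high"]

# Observation model: P[survey reading | true state] for an imperfect survey.
OBS = {
    "low": {"reads_low": 0.85, "reads_high": 0.15},
    "high": {"reads_low": 0.25, "reads_high": 0.75},
}


def bayes_update(belief, reading):
    """Revise P(state) after one monitoring observation via Bayes' rule."""
    unnorm = {s: belief[s] * OBS[s][reading] for s in STATES}
    total = sum(unnorm.values())
    return {s: p / total for s, p in unnorm.items()}


# Starting from a 50/50 prior, a single "reads_high" survey result shifts
# the belief that the segment is badly eroded from 0.50 to 0.375/0.45:
prior = {"low": 0.5, "high": 0.5}
posterior = bayes_update(prior, "reads_high")  # posterior["high"] = 5/6
```

A POMDP policy then maps this belief, rather than a known state, to an action, which is what allows the model to weigh the value of buying a sharper belief against spending the same funds on treatment.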