As water resources professionals, we get excited about using flow forecasts. Yet we often do not know how ‘good’ they really are and end up using them like we use the ubiquitous weather forecasts on our smartphones. We look for trends or signals, and loosely base our decisions on the information presented to us. With weather forecasts, we might make the decision to pack an umbrella or wear a sweater if rain or cooler temperatures are forecasted, yet we will not make decisions that rely on tomorrow’s temperature being exactly 62 degrees or the chance of rainfall being exactly 20%. Still, many of us have, over time, built some confidence in these forecasts and will go on that hike, regardless of the weather predictions. We understand that they are highly uncertain, but we have enough confidence to find them useful for the decisions we make.
When using flow forecasts to guide water management decisions, we may similarly trust the general trends or signals in the information and might release some water from our reservoirs if high inflows are forecasted. This is a useful application of flow forecast information, but if we had greater confidence in the specific values, then we might use them for more targeted decision-making. For example, we might release only the amount needed to minimize the chances that our dam will overtop, while avoiding a partially empty reservoir due to releasing too much.
So how do we determine how much trust we should (or should not!) place in the flow forecasts we have, especially as increasing climate variability introduces another layer of uncertainty into our forecasts or when we have limited data?
In this series of blogs, we will explore this challenge.
First, we will explore how to define the circumstances, or ‘events’, when having a ‘good’ forecast is important to our decision-making. We will focus our forecast evaluation on these events, which can be infrequent in any given year.
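To give a flavor of what identifying ‘events’ could involve, here is a minimal Python sketch that flags the time steps in an observed flow record where a decision-relevant threshold is exceeded. The function name, the example flows, and the threshold are all hypothetical, and a real analysis would also group consecutive exceedances into distinct events:

```python
# A minimal, hypothetical sketch: flag the time steps where observed
# flow exceeds a decision-relevant threshold. These exceedances are
# the raw material for defining the 'events' we evaluate against.
def find_exceedances(observed_flow, threshold):
    """Return the indices (e.g., days) where flow exceeds the threshold."""
    return [i for i, q in enumerate(observed_flow) if q > threshold]

# Hypothetical daily flows (m^3/s) and a flood-concern threshold
flows = [12, 15, 95, 140, 80, 10, 9, 120, 30]
print(find_exceedances(flows, threshold=100))  # -> [3, 7]
```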
We will then look at the characteristics that constitute a ‘good’ forecast, i.e., a forecast that helps us make a good decision. If the events are infrequent, we cannot simply use common performance statistics to evaluate the forecast; they would likely be unrepresentative and misleading. Rather, we will introduce a method to assess the alignment of forecasts with conditions that actually happened. Such an alignment assessment can focus on properties of the hydrograph that are important to us, such as exceeding a flow threshold. This lets us judge how useful a forecast was to us during a single event.
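To make that idea tangible, the sketch below classifies a single event using a standard contingency-style check on threshold exceedance: did the forecast and the observations agree on crossing the flow threshold? This is not necessarily the exact method this series will develop, and the function and values are purely illustrative:

```python
# A hypothetical alignment check for one event window: compare whether
# the forecast and the observations each exceeded the threshold.
def exceedance_alignment(forecast, observed, threshold):
    forecast_hit = max(forecast) > threshold
    observed_hit = max(observed) > threshold
    if forecast_hit and observed_hit:
        return "hit"              # forecast warned us, and it happened
    if forecast_hit and not observed_hit:
        return "false alarm"      # forecast warned us, nothing happened
    if not forecast_hit and observed_hit:
        return "miss"             # no warning, but the event occurred
    return "correct negative"     # no warning, nothing happened

# One event window: forecast vs. what actually happened (m^3/s)
print(exceedance_alignment([80, 110, 130], [70, 105, 125], threshold=100))
# -> "hit"
```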
Finally, we will introduce a way to combine the forecast performance during individual events into a single metric that can describe the overall confidence in our forecasts.
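As a purely illustrative placeholder for how per-event results might roll up into one number, the snippet below aggregates the contingency outcomes from the previous sketch into a simple fraction. The metric this series will introduce may weight outcomes quite differently, for instance penalizing misses more heavily than false alarms:

```python
from collections import Counter

# A hypothetical aggregation: the fraction of 'useful' outcomes
# (hits and correct negatives) over all evaluated events.
def overall_confidence(event_outcomes):
    counts = Counter(event_outcomes)
    useful = counts["hit"] + counts["correct negative"]
    return useful / len(event_outcomes)

outcomes = ["hit", "hit", "miss", "false alarm", "hit"]
print(f"{overall_confidence(outcomes):.2f}")  # -> 0.60
```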
Get ready! It will be a journey!
Continue reading Part 2 of the Practical Forecast Performance Evaluation blog series, focusing on Events.