When evaluating a product to determine its usefulness, it is important to know how well it performs. Verifying probabilistic information is a tricky problem; a deterministic product is much easier to validate against observations of what actually occurred. For this evaluation, we offer users two methods for assessing how well the probabilistic information performed.
The first option allows users to overlay “observed” TCF polygons on past runs. These polygons are generated from MRMS reflectivity and echo tops over the entire 24-hour period. This verification product is available on the AWC main website via the TCF page. The graphic below gives a sense of what these polygons look like alongside the corresponding MRMS reflectivity.
An example of the MRMS polygon verification product, showing how the polygons are generated around MRMS reflectivity that meets the TCF criteria.
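To illustrate how such polygons might be derived from gridded observations, the sketch below thresholds toy reflectivity and echo-top grids and labels the connected regions that exceed both thresholds. The 40 dBZ and 25,000 ft values, and all function names, are illustrative assumptions for this sketch, not AWC's actual TCF criteria or code.

```python
import numpy as np
from collections import deque

def tcf_mask(reflectivity_dbz, echo_tops_ft, dbz_min=40.0, tops_min_ft=25000.0):
    """Grid cells meeting both illustrative TCF-style thresholds."""
    return (reflectivity_dbz >= dbz_min) & (echo_tops_ft >= tops_min_ft)

def label_regions(mask):
    """4-connected component labeling via BFS.

    Returns a label grid (0 = no region) and the number of regions;
    each labeled region is a candidate "observed" polygon.
    """
    labels = np.zeros(mask.shape, dtype=int)
    n_regions = 0
    rows, cols = mask.shape
    for i in range(rows):
        for j in range(cols):
            if mask[i, j] and labels[i, j] == 0:
                n_regions += 1
                labels[i, j] = n_regions
                queue = deque([(i, j)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = n_regions
                            queue.append((ny, nx))
    return labels, n_regions
```

In an operational setting the labeled regions would then be contoured into polygon outlines; the labeling step shown here is the core of grouping qualifying cells into discrete features.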
The MRMS option is intended to serve as a subjective verification of sorts, comparing how the guidance performed against observations. The polygons are color-coded by valid time within the 24-hour period represented by the guidance probability contours. The graphic below shows the MRMS-generated polygons overlaid on a past HRRRe 24-hour graphic.
An example of the TCF probabilistic HRRRe 24hr graphic overlaid with "observed" MRMS polygons from the same time period.
Another way users can gauge how well the probabilistic product performed is by overlaying the final 4-hr TCF polygons generated by AWC forecasters throughout the 24-hour period. The Final 4-hr TCF option is intended to allow comparison between the automated probabilities and the forecaster-generated polygons, so users can see whether any areas were captured by the forecasters but not by the probabilities, or vice versa. An example can be seen below.
An example of the TCF probabilistic HREF 24hr graphic overlaid with the Final 4-hr TCF polygons color coded by valid time.
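On a common grid, the "captured by one but not the other" comparison amounts to a set difference between two masks. The sketch below is a minimal illustration, assuming a gridded probability field and a rasterized forecaster-polygon mask; the 0.4 threshold and all names are hypothetical, not part of the TCF product.

```python
import numpy as np

def coverage_comparison(prob_grid, forecaster_mask, prob_threshold=0.4):
    """Compare automated probabilities against forecaster polygons.

    prob_grid: 2D array of guidance probabilities (0-1).
    forecaster_mask: 2D boolean array, True inside Final 4-hr TCF polygons.
    Returns three boolean grids: cells the forecaster captured but the
    guidance missed, cells only the guidance flagged, and cells both agreed on.
    """
    guidance_mask = prob_grid >= prob_threshold
    forecaster_only = forecaster_mask & ~guidance_mask
    guidance_only = guidance_mask & ~forecaster_mask
    both = guidance_mask & forecaster_mask
    return forecaster_only, guidance_only, both
```

Summing each returned grid gives a quick areal tally of agreement and disagreement between the two sources.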
In addition to overlaying observed data onto the probabilistic information, users can also examine the reliability of each guidance product (HREF and HRRRe) by model run. The reliability statistics are computed and plotted for each guidance run, using TCF polygons generated from MRMS reflectivity and echo tops as the observations. These reliability diagrams assess the performance of the probabilistic guidance in terms of its reliability, that is, whether the probabilities tend to over-predict or under-predict the occurrence of a phenomenon.
In the above example, both models performed fairly well for this run, staying close to the center diagonal line that would indicate a perfectly reliable forecast. In this case, the HREF slightly over-predicted the probability values while the HRRRe slightly under-predicted the probability of occurrence. While this capability is currently only available by run, AWC plans to aggregate statistics across runs to get a better sense of overall performance during the experiment and beyond. Early feedback also indicates that users would like to see this information broken down by region.
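The curve plotted in such a diagram can be computed by binning the forecast probabilities and comparing each bin's mean forecast to the observed relative frequency there. The sketch below is a minimal version, assuming paired arrays of forecast probabilities and binary MRMS-polygon observations; the bin count and function names are my own, not AWC's.

```python
import numpy as np

def reliability_curve(forecast_probs, observed, n_bins=10):
    """Per-bin mean forecast probability vs. observed frequency.

    A well-calibrated forecast lies on the diagonal: in each bin, the
    mean forecast probability equals the fraction of events observed.
    Points above the diagonal indicate under-prediction; below, over-prediction.
    """
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # Assign each forecast to a bin using the interior edges.
    idx = np.digitize(forecast_probs, edges[1:-1])
    mean_fcst, obs_freq, counts = [], [], []
    for b in range(n_bins):
        in_bin = idx == b
        if in_bin.any():  # skip empty bins rather than dividing by zero
            mean_fcst.append(forecast_probs[in_bin].mean())
            obs_freq.append(observed[in_bin].mean())
            counts.append(int(in_bin.sum()))
    return np.array(mean_fcst), np.array(obs_freq), np.array(counts)
```

Plotting `obs_freq` against `mean_fcst` alongside the 1:1 diagonal reproduces the style of diagram described above; the per-bin counts indicate how much data supports each point.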
While probabilistic information remains a challenge to fully verify, users seem to appreciate the various methods AWC has explored to give them a sense of model guidance performance and reliability. Other potential means of discerning this information have been a topic of discussion among participants and stakeholders throughout the experiment thus far.