
Thursday, August 13, 2020

AWDE Services Gathers Probabilistic TCF Feedback


The Aviation Weather Demonstration and Evaluation (AWDE) Services team provided support to the
Aviation Weather Testbed (AWT) during the two-week, all virtual, 2020 Summer Experiment. AWDE
conducted interviews with twenty-one participants to collect feedback concerning the capabilities
incorporated into the probabilistic TCF product. Participants included CWSU meteorologists, PERTI team
members, one MIC, and one NAM. All participants attended 30-60 minute virtual interviews and
provided valuable feedback concerning the operational suitability and usability of the probabilistic TCF
product.



AWDE team members asked participants about overall usability, how the product's capabilities would add value in an operational environment, and what would improve the product's suitability and usability. All participants said the product is a good first-look graphic that provides a nationwide overview of convective regions for planning, and that it would be used alongside models to compare convective focus areas. Additional overlays such as ARTCC boundaries, jet routes, and airport locations would be useful. While the product provides a 24-hour convective forecast, most participants would benefit from smaller time increments, such as three or six hours, and from the ability to focus on specific regions. The ability to overlay MRMS and the final TCF polygons as verification gave participants more confidence in using the probabilistic TCF product to identify areas of convective weather.

Overall, the consensus among participants is that the product would be used as an initial tool to identify
convective areas to focus on before analyzing more regionalized areas for day one planning.

Tuesday, August 11, 2020

Verification and Reliability of Probabilistic Information

When evaluating a product to determine its usefulness, it is important to know how well it is performing. Verifying probabilistic information is a tricky problem; it is much easier to validate a deterministic product against observations of what occurred. For this evaluation, we're offering two methods users can apply to see how well the probabilistic information performed.
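To make the contrast concrete: a deterministic forecast is simply right or wrong at each point, while a probabilistic forecast must be scored against how often the event actually occurred. One standard measure is the Brier score, the mean squared difference between forecast probability and outcome. The sketch below is purely illustrative and is not the AWC verification code.

```python
# Illustrative sketch (not the AWC implementation): the Brier score is a
# standard way to verify probabilistic forecasts against yes/no observations.

def brier_score(probs, outcomes):
    """probs: forecast probabilities in [0, 1]; outcomes: 1 if the event
    (e.g., convection meeting TCF criteria) occurred at that point, else 0."""
    if len(probs) != len(outcomes):
        raise ValueError("inputs must be the same length")
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# A perfect forecast scores 0.0; lower is better.
score = brier_score([0.9, 0.7, 0.2, 0.1], [1, 1, 0, 0])  # ≈ 0.0375
```

Always forecasting 50% for a coin-flip event scores 0.25, which gives a feel for the scale.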

The first option allows for overlaying “observed” TCF polygons on past runs. These polygons are generated from MRMS reflectivity and echo tops over the entire 24-hour period. This verification product is available on the AWC main website via the TCF page. To get a sense of what these polygons look like with the corresponding MRMS reflectivity, see the graphic below.

An example of the MRMS polygon verification product, showing how the polygons are generated around MRMS reflectivity that meets the TCF criteria.

The MRMS option is intended to serve as a subjective verification, of sorts, comparing how the guidance performed against observations. The polygons are color coded by valid times that occur during the 24-hour period represented by the guidance probability contours. An example of what this looks like can be seen in the graphic below, with the MRMS-generated polygons overlaid on a past HRRRe 24hr graphic.

An example of the TCF probabilistic HRRRe 24hr graphic overlaid with "observed" MRMS polygons from the same time period. 

Another way users can get a sense of how well the probabilistic product performed is by overlaying the final 4-hr TCF polygons generated by AWC forecasters throughout the 24-hour period. The Final 4-hr TCF option allows comparison between the automated probabilities and the forecaster-generated polygons, letting users see whether there were areas captured by the forecasters but not the probabilities, or vice versa. An example can be seen below.

An example of the TCF probabilistic HREF 24hr graphic overlaid with the Final 4-hr TCF polygons color coded by valid time.

In addition to overlaying observed data onto the probabilistic information, users can also look at the reliability of each guidance product (HREF & HRRRe) by model run. The reliability statistics are computed and plotted for each guidance run using TCF polygons generated from MRMS reflectivity and echo tops as observations. These reliability diagrams give an assessment of the performance of probabilistic guidance in terms of its reliability, or whether the probabilities tend to over-predict or under-predict the occurrence of a phenomenon. 


In the above example, both models performed fairly well for this run, staying close to the diagonal line, which would indicate a perfect forecast. In this case, the HREF slightly over-predicted the probability values while the HRRRe slightly under-predicted the probability of occurrence. While this capability is currently only available by run, AWC plans to combine statistics across runs to get a better sense of overall performance during the experiment and beyond. Early feedback also indicates users would like to see this information broken down by region.
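The reliability statistics described above boil down to a simple binning exercise: forecast probabilities are grouped into bins, and the observed frequency of the event in each bin is compared against the bin's mean forecast probability. Points below the diagonal indicate over-prediction; points above indicate under-prediction. The helper below is a minimal sketch under that description, with illustrative names and bin counts, not the AWC code.

```python
# Minimal sketch of reliability-diagram statistics: bin forecast probabilities,
# then compare each bin's mean forecast probability with the observed frequency.

def reliability_curve(probs, outcomes, n_bins=10):
    """Return a list of (mean forecast prob, observed frequency, count) per
    non-empty probability bin."""
    bins = [[] for _ in range(n_bins)]
    for p, o in zip(probs, outcomes):
        i = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into the last bin
        bins[i].append((p, o))
    curve = []
    for members in bins:
        if not members:
            continue
        mean_p = sum(p for p, _ in members) / len(members)
        obs_freq = sum(o for _, o in members) / len(members)
        curve.append((mean_p, obs_freq, len(members)))
    return curve

# Plotting obs_freq against mean_p for each bin, with a 1:1 diagonal for
# reference, yields a reliability diagram like the one shown above.
curve = reliability_curve([0.05, 0.15, 0.95, 0.95], [0, 0, 1, 1])
```

A forecast is well calibrated when events assigned, say, 30% probability actually occur about 30% of the time.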

While probabilistic information continues to be a challenge to fully verify, users seem to appreciate the various methods AWC has explored to give them a sense of model guidance performance and reliability. Other potential means of discerning this information have been a topic of discussion among participants and stakeholders throughout the experiment thus far.


Tuesday, February 12, 2013

IFR Verification


As a follow-up to yesterday's post concerning probabilistic flight rule prediction, here are some quick verification images.  For the C&V prediction, the forecast period is 21Z - 09Z.

Each image contains the METAR observations of flight category overlaid on a gridded data set.  The first gridded data set is an analysis of observed ceiling and visibility conditions, the National Ceiling and Visibility Analysis (NCVA).  Hourly data are shown along with the METARs that went into this analysis.
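For readers unfamiliar with the flight categories in these images, they follow the standard aviation thresholds on ceiling (feet AGL) and visibility (statute miles), with the worse of the two conditions determining the category. The helper below is a hypothetical illustration of those thresholds, not the NCVA or GFS-LAMP code.

```python
# Standard aviation flight-category thresholds (illustrative helper):
#   LIFR: ceiling < 500 ft and/or visibility < 1 sm
#   IFR:  ceiling 500 to < 1000 ft and/or visibility 1 to < 3 sm
#   MVFR: ceiling 1000 to 3000 ft and/or visibility 3 to 5 sm
#   VFR:  ceiling > 3000 ft and visibility > 5 sm

def flight_category(ceiling_ft, visibility_sm):
    """Classify conditions as LIFR, IFR, MVFR, or VFR; the more restrictive
    of ceiling and visibility controls the category."""
    if ceiling_ft < 500 or visibility_sm < 1:
        return "LIFR"
    if ceiling_ft < 1000 or visibility_sm < 3:
        return "IFR"
    if ceiling_ft <= 3000 or visibility_sm <= 5:
        return "MVFR"
    return "VFR"
```

So a station reporting a 2,500 ft ceiling with 10 sm visibility is MVFR, while unlimited ceiling with 2 sm visibility is still IFR.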

NCVA Field and METAR flight category for 21Z 11 Feb - 09Z 12 Feb.
The primary forecast concern in this area was the lifting of the ceilings, bringing terminals out of IFR conditions.  The dataset shown below is the GFS-LAMP IFR conditions.  Regions shaded in green are where the 18Z GFS-LAMP run from 11 Feb is predicting IFR conditions.  It clears the IFR conditions fairly well in PA/NJ/NY/CT, but keeps DE/MD/VA in IFR conditions too long.

11 Feb 18Z GFS-LAMP forecast valid same period as NCVA.
Additionally, the SREF and AFWA probability of flight category highlighted in yesterday's post is shown below.  The SREF is the 09Z run from Feb 11, and the AFWA is the 00Z run from the same day.  Only the MVFR and IFR conditions are shown in each of these products.  The SREF forecast does not seem to discriminate well between MVFR and IFR conditions (all or nothing) but clears the IFR conditions along the northeast coast fairly well.  The AFWA forecast tends to clear the IFR conditions too quickly along the northeast coast.

Another feature of both of these forecasts is the predicted areas of IFR around the Great Lakes region that did not verify.  MVFR conditions were observed in MI and NWRN OH.  In the SREF this area is especially overdone, and we are again seeing the issues the SREF has discriminating between MVFR and IFR conditions.  The area in the AFWA is also overdone, with IFR conditions forecast in PA where only MVFR was observed.

11 Feb 09Z SREF forecast valid same period as NCVA.
11 Feb 00Z AFWA forecast valid same period as NCVA.