Thursday, August 13, 2020

AWDE Services Gathers Probabilistic TCF Feedback


The Aviation Weather Demonstration and Evaluation (AWDE) Services team provided support to the
Aviation Weather Testbed (AWT) during the two-week, all-virtual 2020 Summer Experiment. AWDE
conducted interviews with twenty-one participants to collect feedback on the capabilities
incorporated into the probabilistic TCF product. Participants included CWSU meteorologists, PERTI team
members, one Meteorologist in Charge (MIC), and one National Aviation Meteorologist (NAM). All participants attended 30- to 60-minute virtual interviews and
provided valuable feedback concerning the operational suitability and usability of the probabilistic TCF
product.



AWDE team members asked participants questions about the overall usability, how the product's
capabilities would add value in an operational environment, and what would improve the product's
suitability and usability. All participants described the product as a good first-look graphic
that provides a nationwide overview of convective regions for planning, one they would use
alongside models to compare convective focus areas. Additional overlays, such as ARTCC
boundaries, jet routes, and airport locations, would be useful. While the product provides a 24-hour
convective forecast, most participants said they would benefit from smaller time increments, such as three or six
hours, and from the ability to focus in on specific regions or areas. The ability to overlay MRMS and the final TCF
polygons as verification gave participants more confidence in using the probabilistic TCF product to identify
areas of convective weather.

Overall, the consensus among participants was that the product would be used as an initial tool to identify
convective areas of focus before analyzing more regionalized areas for day-one planning.

Tuesday, August 11, 2020

Verification and Reliability of Probabilistic Information

When evaluating a product to determine its usefulness, it is important to know how well it is performing. Verifying probabilistic information is a tricky problem; it's much easier to validate a deterministic product by looking at observations of what occurred. For this evaluation, we're offering two methods users can apply to see how well the probabilistic information performed.

The first option allows for overlaying “observed” TCF polygons on past runs. These polygons are generated from MRMS reflectivity and echo tops over the entire 24-hour period. This verification product is available on the AWC main website via the TCF page. To get a sense of what these polygons look like with the corresponding MRMS reflectivity, check out the graphic below.

An example of the MRMS polygon verification product, showing how the polygons are generated around MRMS reflectivity that meets the TCF criteria.
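For readers curious about the mechanics, here is a minimal sketch of how "observed" polygons could be derived from gridded reflectivity and echo-top fields. The specific thresholds, the function name, and the use of scikit-image are assumptions for illustration only, not the operational algorithm behind the AWC product.

```python
import numpy as np
from skimage import measure  # used here to trace polygon outlines from a binary mask

# Hypothetical TCF-style criteria (assumed for illustration, not the operational thresholds):
#   composite reflectivity >= 40 dBZ and echo tops >= 25,000 ft (FL250)
REFL_THRESH_DBZ = 40.0
ECHO_TOP_THRESH_FT = 25000.0

def observed_tcf_polygons(reflectivity, echo_tops):
    """Return polygon outlines (arrays of row/col vertices) where both gridded
    fields meet the assumed TCF criteria."""
    mask = (reflectivity >= REFL_THRESH_DBZ) & (echo_tops >= ECHO_TOP_THRESH_FT)
    # find_contours traces the boundaries of the masked regions at the 0.5 level
    return measure.find_contours(mask.astype(float), 0.5)

# Toy example on random grids standing in for MRMS data
rng = np.random.default_rng(0)
refl = rng.uniform(0, 60, size=(200, 200))
tops = rng.uniform(0, 45000, size=(200, 200))
polys = observed_tcf_polygons(refl, tops)
print(f"{len(polys)} candidate polygon outlines found")
```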

The MRMS option is intended to serve as a subjective verification of sorts, comparing how the guidance performed against observations. The polygons are color-coded by the valid times that occur during the 24-hour period represented by the guidance probability contours. An example of what this looks like can be seen in the graphic below, with the MRMS-generated polygons overlaid on a past HRRRE 24-hour graphic.

An example of the TCF probabilistic HRRRE 24-hour graphic overlaid with "observed" MRMS polygons from the same time period.

Another way users can get a sense of how well the probabilistic product performed is by overlaying the final 4-hr TCF polygons generated by AWC forecasters throughout the 24-hour period. The Final 4-hr TCF option is intended to allow comparison between the automated probabilities and the forecaster-generated polygons, letting users see whether there are areas that were captured by the forecasters but not by the probabilities, or vice versa. An example of this can be seen below.

An example of the TCF probabilistic HREF 24-hour graphic overlaid with the Final 4-hr TCF polygons, color-coded by valid time.
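As a rough illustration of that kind of comparison, the sketch below assumes both the forecaster polygons and a chosen probability contour have been rasterized onto a common grid; the function and masks are hypothetical and not part of AWC's evaluation tooling.

```python
import numpy as np

def coverage_comparison(forecaster_mask, probability_mask):
    """Compare two binary grids: points inside forecaster TCF polygons versus
    points where guidance probabilities exceed a chosen level.

    Returns masks of points caught only by the forecasters and only by the
    probabilities, so each source's unique coverage can be inspected."""
    only_forecaster = forecaster_mask & ~probability_mask
    only_probability = probability_mask & ~forecaster_mask
    return only_forecaster, only_probability

# Toy example with random masks standing in for rasterized polygons/probabilities
rng = np.random.default_rng(2)
fcst = rng.random((50, 50)) > 0.7
prob = rng.random((50, 50)) > 0.7
miss_f, miss_p = coverage_comparison(fcst, prob)
print(miss_f.sum(), "points captured only by forecasters;", miss_p.sum(), "only by probabilities")
```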

In addition to overlaying observed data on the probabilistic information, users can also look at the reliability of each guidance product (HREF and HRRRE) by model run. The reliability statistics are computed and plotted for each guidance run, using TCF polygons generated from MRMS reflectivity and echo tops as the observations. These reliability diagrams assess the performance of the probabilistic guidance in terms of its reliability, that is, whether the probabilities tend to over-predict or under-predict the occurrence of a phenomenon.


In the above example, both models performed fairly well for this run, staying close to the diagonal line, which would indicate perfect reliability. In this case, the HREF slightly over-predicted the probability values, while the HRRRE slightly under-predicted the probability of occurrence. While this capability is currently only available by run, AWC plans to aggregate the statistics across runs to get a better sense of overall performance during the experiment and beyond. Early feedback also indicates that users would like to see this information broken down by region.
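For readers who want a concrete sense of what goes into a reliability diagram, here is a minimal sketch that bins forecast probabilities and computes the observed frequency in each bin; the binning scheme, variable names, and synthetic data are assumptions, not AWC's verification code.

```python
import numpy as np

def reliability_curve(forecast_probs, observed, n_bins=10):
    """Bin forecast probabilities and compute the observed relative frequency
    in each bin (the ingredients of a reliability diagram).

    forecast_probs : 1-D array of forecast probabilities (0-1), e.g. flattened grid points
    observed       : 1-D array of 0/1 outcomes, e.g. whether a point fell inside
                     an MRMS-derived "observed" TCF polygon
    """
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # assign each forecast probability to a bin (values of 1.0 land in the last bin)
    bin_idx = np.digitize(forecast_probs, edges[1:-1])
    mean_fcst, obs_freq = [], []
    for b in range(n_bins):
        in_bin = bin_idx == b
        if in_bin.any():
            mean_fcst.append(forecast_probs[in_bin].mean())
            obs_freq.append(observed[in_bin].mean())
    return np.array(mean_fcst), np.array(obs_freq)

# Toy example: well-calibrated synthetic forecasts stay near the diagonal
rng = np.random.default_rng(0)
probs = rng.random(10000)
outcomes = (rng.random(10000) < probs).astype(int)
x, y = reliability_curve(probs, outcomes)
print(np.round(y - x, 3))  # values near zero indicate good reliability
```

Points that fall above the diagonal indicate under-prediction, and points below it indicate over-prediction, which is how the HREF and HRRRE behavior described above shows up on the plots.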

While probabilistic information continues to be a challenge to fully verify, users seem to appreciate the various methods AWC has explored to give them a sense of model guidance performance and reliability. Other potential means of discerning this information have been a topic of discussion among participants and stakeholders throughout the experiment thus far.

  

Monday, August 10, 2020

Onto Week 2!

 We’re starting the second week of the 2020 Aviation Weather Testbed Summer Experiment today (8/10/2020). Just like last week, this week’s activities will be conducted 100% virtually. Nearly all of the AWT and FAA AWDE staff members are working from home, and most of our participants are either working from home or are considered essential employees who must report to their worksites. So far, we’ve noticed a few unexpected benefits as well as a few unexpected challenges with this year’s virtual format.


One of the unexpected benefits has been more flexibility for our participants. They’re able to evaluate an experimental aviation product without physically traveling to Kansas City, MO to spend a week in the testbed. This is possible thanks to the Aviation Weather Center's Testbed page, which can be used to stage and evaluate experimental products under password protection. This also allows our participants to utilize the product during their regular work hours, which may yield a fuller picture of how it could be incorporated into their existing workflows. 


Of course, the virtual realm also introduces a few nuances that we've had to work through. Typically, testbed activities would not be affected much by a tropical storm in the Atlantic Ocean. This year, however, our AWDE team was directly impacted by such a storm, with some members losing power due to the strong winds. Quick improvising allowed the interviews to continue, and we're grateful that our AWDE friends are safe.


After this evaluation concludes at the end of the week, we'll be taking a closer look at the benefits and drawbacks of a virtual format. The hope is that we can continue to utilize virtual capabilities moving forward even when we resume in-person testbed activities in the future.




Thursday, August 6, 2020

The 2020 Summer Experiment Has Gone Virtual!

Like so many other things in 2020, the Summer Experiment has gone virtual! While we sure do miss seeing and interacting with our partners and stakeholders in the testbed, we are excited to be able to keep the tradition going by continuing to evaluate and evolve products for the aviation community!


In order to facilitate an evaluation in the virtual realm, we are focusing on one product with a more targeted participant list. In keeping with one of the themes from recent experiments, we're focusing on convection in the next-day planning regime. For that reason, our targeted participants include forecasters from the various Center Weather Service Units (CWSUs), which are co-located with the FAA's Air Route Traffic Control Centers (ARTCCs). These forecasters often collaborate with Aviation Weather Center (AWC) forecasters on the creation of the Traffic Flow Management Convective Forecast (TCF) and have a breadth of knowledge on how convection impacts planning in the National Airspace System (NAS).

Map of CWSU coverage areas over the contiguous United States.



While we have looked at the idea of impact graphics for longer-term planning in the NAS, this time we are bringing in probabilistic information to address uncertainty and to start to better understand how this information can best be used in the planning environment. For this evaluation, we are utilizing convection-allowing ensemble guidance to diagnose the probability of convection exceeding TCF criteria.

An example of the TCF Probability graphic highlighting the probability of sparse coverage of convection
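As a loose illustration of the general idea, the sketch below computes an exceedance probability as the fraction of ensemble members meeting a threshold at each grid point; the threshold value, field, and array layout are assumptions for illustration, not the actual method used to generate the guidance.

```python
import numpy as np

def exceedance_probability(member_fields, threshold):
    """Fraction of ensemble members exceeding a threshold at each grid point.

    member_fields : array of shape (n_members, ny, nx), e.g. simulated composite
                    reflectivity from a convection-allowing ensemble
    threshold     : scalar criterion (e.g. a value tied to a TCF coverage category)
    """
    return (member_fields >= threshold).mean(axis=0)

# Toy example: 10 members on a small grid, with an assumed 40 dBZ threshold
rng = np.random.default_rng(1)
members = rng.uniform(0, 60, size=(10, 50, 50))
prob_field = exceedance_probability(members, 40.0)
print("max probability on the toy grid:", prob_field.max())
```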


Guidance imagery depicting the probability of exceeding the sparse and medium coverage TCF thresholds will be automatically generated from two different model sources twice per day, in the early morning and early afternoon. Evaluation of the guidance by participants will help us meet the following goals:

  • Assess the usefulness of 24-hour summary information on TCF-focused threats
  • Identify optimal ways to provide, interpret, and communicate probabilistic convective information
  • Assess the skillfulness of convection-allowing ensemble systems for aviation-focused convective forecasts
To help facilitate gathering feedback from our participants, we are once again partnering with the FAA's Aviation Weather Demonstration and Evaluation (AWDE) Services group.  Throughout the week they will help us assess the usability and utility of this product from the user's perspective via one-on-one interviews and a questionnaire. We will also have two large group discussions during which additional stakeholders will be invited to provide their perspective and input. 

Stay tuned for early results and key takeaways from the experiment!