Thursday, May 18, 2023

AWT Takes Over FPAW

Stakeholders from across the aviation community are in Kansas City this week for the Spring FPAW (Friends and Partners of Aviation Weather) meeting. Today, the AWT joined FPAW to leverage knowledge and ideas from users and producers of aviation weather information. The primary focus today was evaluating user interpretation of probabilistic aviation information. Through online polling and discussion, users were able to provide insight into aviation probabilistic products.
AWT presents at FPAW

This morning, we started with an overview of the winter weather dashboard, which is available both as an operational product and as an experimental version on our beta website. FPAW participants appreciated the demonstration of the dashboard and being able to see plume and model trend data. They also expressed interest in more information on the dashboard's impact thresholds.

After lunch, FPAW broke into groups to discuss non-deterministic methods for communicating aviation hazards. Utilizing some of the probabilistic desk graphics the testbed participants have been working on during the week, we got a lot of feedback on how the data are presented and how different weather phenomena impact the general aviation community. Some of the "hot topics" from the participants focused on how detailed the graphics are or should be, the spatial coverage of the polygons, and how pilots versus meteorologists perceive data.
AWT Organizer, Jack Lind, leading discussion on probabilistic graphics

The day wrapped up with a look at the outlook graphics from last year's 2022 AWT Summer Experiment. Good discussion on the time scale of the outlooks, the busyness of a map showing all aviation weather impacts, and how adding text would enhance an outlook helped AWT hone in on what the general aviation community is looking for in an outlook graphic.

Wednesday, May 17, 2023

2023 AWT Spring Experiment in full swing

The AWT is hosting meteorologists from across the country for our spring experiment! This week, participants from the Meteorological Development Lab (MDL), Global Systems Lab (GSL), FAA Aviation Weather Demonstration and Evaluation (AWDE), Alaska Aviation Weather Unit (AAWU)/Alaska Region, Honolulu Forecast Office (HFO)/Pacific Region, and various Center Weather Service Units and Forecast Offices are here to help evaluate products and provide their insight and expertise.
Participants diagramming workflows for Hazard Services
There are three areas of focus this year: evaluating the 3-dimensional cloud forecasts from the RRFS, creating and discussing prototype probabilistic graphics and how to present information to the general aviation community, and developing and evaluating workflows for Hazard Services for aviation.
Using various resources to create probabilistic forecasts

Thursday, September 15, 2022

A Look at Developing Outlooks for General Aviation

One of the major themes, or desks, of this year's experiment is looking at the potential for creating Outlook Graphics for Days 1, 2, and 3 for AWC's General Aviation (GA) customers. Current AWC operational products do not go out beyond Day 1, leaving an information gap (Fig. 1) for those looking to plan flights in the coming days. AWC has continuously received feedback from its pilot partners that they would like an easy-to-interpret, quick-glance graphic that gives them an idea of what hazards to expect in the coming days.

Fig. 1. Current aviation weather forecast gap at the Aviation Weather Center for aviation hazards.

Experiment participants were tasked with developing Outlook Graphics that would be valid 12z-12z the following day and depict prolonged, impactful hazards, which could include turbulence, IFR, icing, precipitation, smoke, and thunderstorms. The design would be similar to the current National Forecast Charts (Fig. 2) developed by the Weather Prediction Center and maintain consistency with existing NWS products (i.e., WPC, NHC, NDFD). Participants used an interactive drawing GUI on the Testbed Website that allowed them to overlay fronts, modify the background map, and draw various polygons for each forecasted hazard.

Fig. 2. An example of the Weather Prediction Center's National Forecast Chart for Day 1.

While user input will ultimately be critical for determining the utility of the Outlook graphics, participants were asked to focus on things like the design, what guidance to use, which hazards were most impactful and should be included, and the overall workload that would be associated with creating such a graphic. It was clear from day one of the experiment that everyone brought different perspectives and ideas to the table, leading to an active and fun desk throughout the week.

Fig. 3. An example of one of the Outlook Graphics created during the experiment.

Throughout the week, all participants agreed that determining the severity and impact of the graphics for the GA community will be the top priority before moving forward with operationalizing the product. Participants also indicated that a more detailed summary of the impacts, with associated location and timing, would be beneficial for Day 1 for both GA fliers and fellow meteorologists. Other ideas included the potential for multiple outlook 'tabs' for the various user types (GA, Low Altitude (LA), and National Air Space (NAS) planning), or even breaking up Day 1 into two separate Outlook graphics (morning and afternoon) to account for hazards that are only impactful during those periods.

In the end, the AWT staff received an abundance of perspectives and ideas for moving the Outlook Graphic further toward operations. The next step will be to get the graphic in front of users to really assess the intuitiveness and utility of the product for its intended users.

Wednesday, September 14, 2022

AWT Summer Experiment Beta Website

Along with the evolution of the Traffic Flow Management Convective Forecast and the creation of new prototype multi-day outlook graphics, the participants are evaluating the Aviation Weather Center's new experimental website. AWC developers are in the testbed this week to help answer questions and learn more about how the website is used in operations.

The AWC Testbed page has been updated to the new look and feel of the beta website and participants are testing new products and ideas using the website this week! Prototype outlook graphics and convection forecasts are drawn on the website and saved as part of the evaluations.

Participants using the new testbed page to create prototype multi-day outlooks

Welcome to the AWT 2022 Summer Experiment!

The 2022 AWT Summer Experiment kicked off on Tuesday with a full house of collaborators and stakeholders from multiple entities of the aviation weather enterprise. Participants include developers from NOAA’s Earth System Research Lab (ESRL), and NCEP’s Environmental Modeling Center (EMC); meteorologists from AWC, Center Weather Service Units (CWSUs) and Weather Forecast Offices (WFOs) from across the country, and Southwest Airlines; as well as researchers from the Weather Prediction Center (WPC) and the National Severe Storms Laboratory (NSSL).

After a few years of primarily virtual testbed activities, the Aviation Weather Testbed and AWC are excited to welcome participants back to Kansas City for this collaborative evaluation.

Participants discussing the Warn-on-Forecast System (WoFS)

There are two major themes this year: evolution of the Traffic Flow Management Convective Forecast and creation of new prototype multi-day outlook graphics. Participants will be utilizing experimental model guidance and tools this week, exploring ways to improve existing products and potentially create new ones.

Reviewing medium range guidance for the production of multi-day outlooks


Thursday, August 13, 2020

AWDE Services Gathers Probabilistic TCF Feedback

The Aviation Weather Demonstration and Evaluation (AWDE) Services team provided support to the
Aviation Weather Testbed (AWT) during the two-week, all virtual, 2020 Summer Experiment. AWDE
conducted interviews with twenty-one participants to collect feedback concerning the capabilities
incorporated into the probabilistic TCF product. Participants included CWSU meteorologists, PERTI team
members, one MIC, and one NAM. All participants attended 30- to 60-minute virtual interviews and
provided valuable feedback concerning the operational suitability and usability of the probabilistic TCF product.

AWDE team members asked participants questions concerning the overall usability, how the product's
capabilities would add value in an operational environment, and what would improve the product's
suitability and usability. All participants gave feedback stating the product is a good first-look graphic
that provides an overview of convective regions nationwide for planning. This graphic would be used
alongside models to compare convective focus areas for planning. Additional overlays such as ARTCC
boundaries, jet routes, and airport locations would be useful. While the product provides a 24-hour
convective forecast, most participants would benefit from smaller time increments, such as three or six
hours, and from the ability to focus in on certain regions. The ability to overlay MRMS and the final TCF
polygons as verification provided more confidence in using the probabilistic TCF product to identify
areas of convective weather.

Overall, the consensus among participants was that the product would be used as an initial tool to identify
convective areas of focus before analyzing more regionalized areas for day-one planning.

Tuesday, August 11, 2020

Verification and Reliability of Probabilistic Information

When evaluating a product to determine its usefulness, it is important to know how well it is performing. Verifying probabilistic information is a tricky problem; it is much easier to validate a deterministic product by looking at observations of what occurred. For this evaluation, we're offering two methods users can utilize to assess how well the probabilistic information performed.

The first option allows for overlaying “observed” TCF polygons on past runs. These polygons are generated from MRMS reflectivity and echo tops over the entire 24-hour period. This verification product is available on the AWC main website via the TCF page. To get a sense of what these polygons look like with the corresponding MRMS reflectivity, check out the graphic below.

An example of the MRMS polygon verification product, showing how the polygons are generated around MRMS reflectivity that meets the TCF criteria.

The MRMS option is intended to serve as a subjective verification, of sorts, comparing how the guidance performed against observations. The polygons are color coded by the valid times that occur during the 24hr period represented by the guidance probability contours. An example of what this looks like can be seen in the graphic below, with the MRMS-generated polygons overlaid on top of a past HRRRe 24hr graphic.

An example of the TCF probabilistic HRRRe 24hr graphic overlaid with "observed" MRMS polygons from the same time period. 
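As a rough illustration of how such "observed" regions can be derived, the sketch below thresholds gridded reflectivity and echo-top fields and labels the contiguous areas that remain. The threshold values, the minimum-size filter, and the simple 4-connected flood fill are all illustrative assumptions, not the operational TCF criteria (in practice, a tool such as `scipy.ndimage.label` would typically do the labeling):

```python
import numpy as np

def observed_regions(reflectivity_dbz, echo_tops_ft,
                     dbz_thresh=40.0, tops_thresh=30000.0, min_cells=25):
    """Label contiguous grid regions where both fields meet TCF-like
    criteria. The thresholds here are illustrative, not operational."""
    mask = (np.asarray(reflectivity_dbz) >= dbz_thresh) & \
           (np.asarray(echo_tops_ft) >= tops_thresh)
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    rows, cols = mask.shape
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and labels[r, c] == 0:
                # Flood-fill one 4-connected region of qualifying cells.
                current += 1
                stack = [(r, c)]
                cells = []
                labels[r, c] = current
                while stack:
                    i, j = stack.pop()
                    cells.append((i, j))
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < rows and 0 <= nj < cols
                                and mask[ni, nj] and labels[ni, nj] == 0):
                            labels[ni, nj] = current
                            stack.append((ni, nj))
                # Drop tiny regions that would contour into noisy polygons.
                if len(cells) < min_cells:
                    for i, j in cells:
                        labels[i, j] = 0
    return labels  # nonzero labels mark candidate "observed" regions
```

Each remaining labeled region could then be contoured into a polygon and tagged with its valid time for the color-coded overlays described above.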

Another way users can get a sense of how well the probabilistic product performed is by overlaying the final 4-hr TCF polygons generated by the AWC forecaster throughout the 24hr period. The Final 4-hr TCF option is intended to allow for comparison between the performance of the automated probabilities and the forecaster generated polygons, allowing users to see if there are areas that were captured by the forecasters, but not the probabilities, or vice versa. An example of this can be seen below.

An example of the TCF probabilistic HREF 24hr graphic overlaid with the Final 4-hr TCF polygons color coded by valid time.

In addition to overlaying observed data onto the probabilistic information, users can also look at the reliability of each guidance product (HREF & HRRRe) by model run. The reliability statistics are computed and plotted for each guidance run using TCF polygons generated from MRMS reflectivity and echo tops as observations. These reliability diagrams give an assessment of the performance of probabilistic guidance in terms of its reliability, or whether the probabilities tend to over-predict or under-predict the occurrence of a phenomenon. 

In the example reliability diagrams reviewed during the experiment, both models performed fairly well, staying close to the center diagonal line, which would indicate a perfect forecast. In one such run, the HREF slightly over-predicted the probability values while the HRRRe slightly under-predicted the probability of occurrence. While this capability is currently only available by run, AWC plans to aggregate statistics across runs to get a better sense of overall performance during the experiment and beyond. There has also been early feedback indicating users would like to see this information broken down by region.
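As a rough sketch of what goes into a reliability diagram, the snippet below bins forecast probabilities and computes the observed frequency in each bin, assuming paired arrays of probabilities and 0/1 outcomes (e.g., whether the MRMS-based polygons covered a grid point). The uniform 10-bin scheme is an illustrative choice, not necessarily AWC's exact method:

```python
import numpy as np

def reliability_curve(forecast_probs, observed, bins=10):
    """Compute points for a reliability diagram.

    forecast_probs: probabilities in [0, 1]; observed: matching 0/1 outcomes.
    Returns bin-center probabilities and the observed relative frequency in
    each bin; a perfectly reliable forecast lies on the diagonal, points
    below it indicate over-prediction, points above it under-prediction.
    """
    probs = np.asarray(forecast_probs, dtype=float)
    obs = np.asarray(observed, dtype=float)
    # Assign each probability to one of `bins` equal-width bins.
    idx = np.minimum((probs * bins).astype(int), bins - 1)
    centers = (np.arange(bins) + 0.5) / bins
    freq = np.full(bins, np.nan)  # NaN where a bin received no forecasts
    for b in range(bins):
        in_bin = idx == b
        if in_bin.any():
            freq[b] = obs[in_bin].mean()
    return centers, freq

# Toy example: the 85% forecasts verified only half the time (over-prediction).
probs = [0.15, 0.15, 0.85, 0.85, 0.85, 0.85]
obs = [0, 0, 1, 1, 0, 0]
centers, freq = reliability_curve(probs, obs)  # freq[8] == 0.5, freq[1] == 0.0
```

Plotting `freq` against `centers` alongside the diagonal gives the familiar reliability diagram; empty bins stay NaN so they simply don't plot.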

While probabilistic information continues to be a challenge to fully verify, users seem to appreciate the various methods AWC has explored to give them a sense of model guidance performance and reliability. Other potential means of discerning this information have been a topic of discussion among participants and stakeholders throughout the experiment thus far.