Test and Evaluation of Rapid Post-Processing and Information Extraction From Large Convection Allowing Ensembles Applied to 0-3hr Tornado Outlooks (2017-08-22)

James Correia, Jr. OU CIMMS/SPC james.correia@noaa.gov

Daphne LaDue, OU CAPS

Chris Karstens, Kent Knopfmeier, Dusty Wheatley, OU CIMMS/NSSL

Thanks to WoF (NSSL & GSD), FACETS, PHI collaborators

Reminder - R2O: Where do we fit?

Addresses NOAA objective:

“…post-processing tools and techniques to provide effective decision support for high-impact weather.”

Addresses high priority topic 4:

“…daily severe weather prediction using rapidly updating ensemble radar data assimilation and forecasts while minimizing data latency via post processing strategies for information extraction.”

Warn on Forecast in HWT: NEWS-e

• 18 member mixed physics ensemble

• Initialized by HRRR-E*

• Cycled radar data assimilation (15min)

• 90 minute forecasts

• 00 & 30 past the hour

• 1900-0300 UTC

NSSL Experimental Warn-on-Forecast System for Ensembles (NEWS-e). *HRRR-E is run by GSD as part of the Warn-on-Forecast initiative.

2017 Probabilistic Hazard Information Prototype Experiment

From deterministic warnings to probabilistic information & warnings, co-created with EMs and broadcasters

Severe Desk

Tornado Desk

Expectations: Looking for insight into current & future paradigms

2016 HWT PHI Experiment Display

Introduce slider bars for the query: post-processing is working FOR the forecaster

Updraft helicity or vorticity

Set lead-time relative to “now”

Practical increments

1. Set the lead time to match your warning task

2. Set the intensity to match the warning task by storm

Approach as a TIME based problem

2017 HWT PHI Experiment Display

Set lead time to match task · Set member count · Set intensity

Outlier viewer when member count is set to the minimum; otherwise probability-flavored

Offer control of base data through the query; data adjusts dynamically every minute

Approach as a TIME based problem
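The slider-driven query above (lead time, member count, intensity) can be sketched as a simple ensemble exceedance calculation. This is a minimal illustration, not the PHI system's implementation; the field name, thresholds, and the use of updraft helicity arrays are assumptions for the example:

```python
import numpy as np

def query_probability(uh, lead_steps, intensity, min_members):
    """Probability that updraft helicity exceeds `intensity` at each
    grid point within the chosen lead-time window.

    uh          : array (members, times, y, x) of updraft helicity
    lead_steps  : slice of forecast steps matching the warning task
    intensity   : slider threshold (illustrative units)
    min_members : mask points supported by fewer members
                  (an "outlier viewer" when set to 1)
    """
    # A member counts as a hit if it exceeds the threshold at any
    # time inside the lead-time window.
    hits = (uh[:, lead_steps] >= intensity).any(axis=1)  # (members, y, x)
    count = hits.sum(axis=0)
    prob = count / uh.shape[0]
    prob[count < min_members] = 0.0
    return prob

# Example: an 18-member ensemble on a small synthetic grid
rng = np.random.default_rng(0)
uh = rng.gamma(2.0, 20.0, size=(18, 6, 50, 50))
p = query_probability(uh, slice(0, 4), intensity=75.0, min_members=3)
```

Because the query operates only on the already-transmitted arrays, re-evaluating it as the sliders move is cheap, which is what lets the display adjust every minute.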

HWT case: 9 May 2016. Warnings & PHI: tornadic supercell

[Figure: forecaster's subjective tornado probability (confidence) alongside CIMSS ProbTor, with WoF (NEWS-e) forecast times marked; Week 3 forecast shown]

Tornado 2 in demise. After the 2130 NEWS-e forecast, 2200 switches to tornado 3.

Tornado 3 developing. Forecaster has lead time on where the next storm will produce a tornado.

Summarize model info rapidly, matched to task

Need to draw attention to “details” in model

How do we get forecasters to use, trust these “details”?

Challenge is to match information to the task

When & why use NWP?

• PHI represents more judgments -> Risk:

– Existence of threat/hazard

– Intensity (modeled after IBW)

– Longevity (mesocyclone or tornado?)

– Confidence (as a subjective probability)
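The four judgment dimensions above could be captured in a simple record. This is an illustrative sketch only; the field names and intensity tiers are hypothetical, not the operational PHI schema:

```python
from dataclasses import dataclass

@dataclass
class HazardJudgment:
    """One forecaster judgment behind a PHI object
    (illustrative fields, not the operational PHI schema)."""
    threat_exists: bool   # existence of threat/hazard
    intensity: str        # modeled after IBW tiers, e.g. "base", "considerable"
    longevity_min: float  # expected lifetime in minutes (mesocyclone vs. tornado)
    confidence: float     # subjective probability in [0, 1]

    def validate(self):
        # Confidence is expressed as a subjective probability,
        # so it must lie on the unit interval.
        if not 0.0 <= self.confidence <= 1.0:
            raise ValueError("confidence must be a probability in [0, 1]")
        return self

j = HazardJudgment(True, "considerable", 30.0, 0.6).validate()
```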

What job does NWP help the forecaster do, and how does NWP do it?

• All of the above?

• Confidence & Confirmation of threats … because:

“Always in a constant state of analysis” …

Perception -> Comprehension -> Projection

“…and, at its highest level, predict future events and system states based on this understanding.”

Synthesis -> Assessment -> Action

Situational Awareness (Endsley 1995)

"…not enough to keep up with the pace of information... It must be interpreted and related to other information and to the task requirements."

Preliminary 2017 HWT observations

NWP doesn’t fit reliably into forecasters’ analysis, but does fit in the projection phase of SA

“Always in a constant state of analysis”

Forecasters need to develop the (un)justified (mis)trust that comes with experience & feedback.

Agile post-processing and interactive displays make the PP relevant to the forecaster

But in an analysis state, not in projection space: what could happen, when, and how intense?

Probability is a tool, not a solution

“Can’t algorithm everything”

Summary

• Post-Processing designed with the Display in mind
  – Data matched to the warning task
  – Rapidly provided more information w/ less data
  – Agile probabilities through data mining

• Social and physical science combo working
  – Interviews & experiment exposed challenges for NWP in operations
• NWP has trust issues on the warning desk
  – Better questions through co-creation: researchers & participants, forecasters and partners
  – Social scientists and tools needed to collect relevant DATA on the design and implementation of tools, techniques, and outcomes.

Deliverables

• Interviewed NWS forecasters

• Learned that CAM trust is low b/c of low familiarity and un-calibrated expectations

• CAM knowledge & use variable – f(available in A2, experience)

• Min(data) & Max(information) (size: 20kb vs 18MB)
• Meet forecasters where they are, in real-time
• Data adjust dynamically with time
• Latency of PP was 2-4 min from ensemble completion

• Confirmation was the primary use
• As situational awareness increased, NEWS-e became usable for the production of Probabilistic Hazard Information (PHI)
• Forecasters spent more time in projection rather than diagnosis and understanding via radar interrogation

From previous year’s talk

Preliminary HWT observations

• Forecasters used guidance to identify ‘hot spots & attention’ – F1,6
  – Confidence in warning decisions (warn & not to warn) because: “right now we have no tornado guidance” – All
  – “Always in a constant state of analysis” – All
  – But “Can’t algorithm everything” – F1,4,5,9
• Develop TRUST (justified & unjustified; Hoffman 2012)
  – Need to understand ensemble capability & skill – All
  – “I’ve never used this before.” – All
• Similar dichotomies seen
  – Wanted to increase confidence on marginal events vs. focus on higher impacts – F1,2,5,9
  – Always used rapid animation or all-tilts radar (like querying) but rarely used model queries b/c 30-min updates couldn’t compete with 2-min updates for attention – F4,7,8,9

Reminder: Post-processing Strategy

• The proposed post-processing paradigm will consist of five steps:

1. Rapid ID of predefined but broad objects for the purposes of filtering and data reduction
2. Transmitting reduced data sets while retaining information (why send zeros!)
3. Reception and regridding of data (adaptable)
4. Generation of predefined probabilities (static probabilities – broad applicability)
5. Generation of user-defined probabilities (on-the-fly post-processing for INSIGHT in scientific forecasting)
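Steps 1-3 above can be illustrated with a simple sparse reduction: threshold the field into broad object regions, send only the non-zero points, and rebuild the grid on reception. This is a sketch under assumed thresholds and encodings, not the actual NEWS-e transmission format:

```python
import numpy as np

def reduce_field(field, threshold):
    """Steps 1-2: identify broad object regions by thresholding, then
    keep only the non-zero points (indices + values) instead of the
    full grid -- "why send zeros!"."""
    mask = field >= threshold
    idx = np.flatnonzero(mask)  # flat indices of object points
    return field.shape, idx.astype(np.int32), field.ravel()[idx].astype(np.float32)

def restore_field(shape, idx, values):
    """Step 3: receive and rebuild the grid (ready for regridding)."""
    out = np.zeros(shape, dtype=np.float32)
    out.ravel()[idx] = values
    return out

# Most of a convective field is zero/weak, so the reduced payload
# is far smaller than the full grid.
rng = np.random.default_rng(1)
field = np.where(rng.random((600, 600)) > 0.99,
                 rng.gamma(2.0, 30.0, (600, 600)), 0.0)
shape, idx, vals = reduce_field(field, 1.0)
full_bytes = field.astype(np.float32).nbytes
reduced_bytes = idx.nbytes + vals.nbytes
```

With roughly 1% of grid points active, the payload shrinks by about two orders of magnitude, which is the same spirit as the 18 MB-to-20 kB reduction cited in the deliverables.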

2016 HWT PHI Experiment Display

Frequency Display Track Display

Frequency ƒ(members, location, time) can quantify variability

Can see motion and variability, but under-dispersion hides frequency

“Hiding” temporal variability within lead time window

A time-based problem: summation of signals in time -> adaptation
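The frequency-as-ƒ(members, location, time) idea above amounts to summing binary signals over members and a lead-time window at each grid point. A minimal sketch, assuming a pre-thresholded binary feature field (the array layout is illustrative):

```python
import numpy as np

def track_frequency(binary, window):
    """Frequency-display sketch: fraction of (member, time) signals at
    each grid point, summed over a lead-time window so the feature's
    track -- and its temporal variability -- is retained."""
    m = binary.shape[0]
    sub = binary[:, window]                   # restrict to the window
    return sub.sum(axis=(0, 1)) / (m * sub.shape[1])

# Example: 18 members, 6 forecast times, small grid
rng = np.random.default_rng(2)
binary = rng.random((18, 6, 40, 40)) > 0.9   # synthetic feature hits
freq = track_frequency(binary, slice(0, 6))
```

Summing over the window is what "hides" temporal variability within the lead-time window: two members hitting the same point at different times contribute identically, which is the adaptation trade-off the slide notes.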