
Introduction to Seasonal Climate Prediction

Liqiang Sun, International Research Institute for Climate and Society (IRI)

Weather forecast – an initial-condition problem

Climate forecast – primarily a boundary-forcing problem

Climate Forecasts must:

• be probabilistic – ensembling

• be reliable and skillful – calibration and verification

• address relevant scales and quantities – downscaling

OUTLINE

• Fundamentals of probabilistic forecasts

• Identifying and correcting model errors: systematic errors, random errors, conditional errors

• Forecast verification

• Summary

Fundamentals of Probabilistic Forecasts

Basis of Seasonal Climate Prediction

Changes in boundary conditions, such as SST and land-surface characteristics, can influence the characteristics of weather (e.g., its strength, persistence, or absence), and thus influence the seasonal climate.


Influence of SST on tropical atmosphere


IRI DYNAMICAL CLIMATE FORECAST SYSTEM (2-tiered: ocean, then atmosphere)

[Schematic] FORECAST SST – tropical Pacific (multi-model, dynamical and statistical), tropical Atlantic and Indian Oceans (statistical), extratropics (damped persistence) – drives the GLOBAL ATMOSPHERIC MODELS: ECPC (Scripps), ECHAM4.5 (MPI), CCM3.6 (NCAR), NCEP (MRF9), NSIPP (NASA), COLA2, and GFDL. Each model is run both with forecast-SST ensembles (3/6-month lead) and with persisted-SST ensembles (persisted global SST anomaly, 3-month lead). Regional models provide downscaling, and postprocessing includes multi-model ensembling.

Probability Calculated Using the Ensemble Mean

Contingency table (forecast category vs. observed category, %):

          Bo    No    Ao
    Bf    50    30    20
    Nf    33    34    33
    Af    15    25    60

1) Count the number of ensemble members in each category. For example, with 100 ensembles in total, 40 members in category "A", 35 in category "N", and 25 in category "B" give raw probabilities of 40%, 35%, and 25%.

2) Calibrate these raw probabilities.
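As a minimal sketch of the counting step above (function and variable names are illustrative, not from the presentation), assuming the tercile boundaries are known from a climatology:

```python
import numpy as np

def tercile_probs_by_counting(ensemble, lower_tercile, upper_tercile):
    """Estimate P(Below), P(Normal), P(Above) by counting the fraction
    of ensemble members that fall into each tercile category."""
    ensemble = np.asarray(ensemble, dtype=float)
    n = ensemble.size
    p_below = np.sum(ensemble < lower_tercile) / n
    p_above = np.sum(ensemble > upper_tercile) / n
    p_normal = 1.0 - p_below - p_above
    return p_below, p_normal, p_above

# Example matching the slide: 100 members with 25 below, 35 normal,
# and 40 above the tercile boundaries -> (0.25, 0.35, 0.40).
```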

Probability obtained from ensemble spread

Example of seasonal rainfall forecast (3-month average & Probabilistic)

Why seasonal averages? Rainfall correlation skill: ECHAM4.5 vs. CRU observations (1951–95)

Should we only be forecasting for February for the SW US & N Mexico?

Why seasonal averages? Partial correlation maps for individual months

No independent skill for individual months.


Why probabilistic?

Observed Rainfall (SON 2004) vs. Model Forecast (SON 2004), made Aug 2004. RUN #1 and RUN #4 are two ensemble members from the same AGCM, with the same SST forcing, just different initial conditions. Units are mm/season.

Why probabilistic?

Observed Rainfall, Sep–Oct–Nov 2004 (CAMS-OPI), vs. Model Forecast (SON 2004), made Aug 2004: eight individual ensemble members (panels 1–8).

Seasonal climate is a combination of boundary-forced SIGNAL and chaotic NOISE from the internal dynamics of the atmosphere.

Why probabilistic?

Observed Rainfall, Sep–Oct–Nov 2004 (CAMS-OPI)

Model Forecast (SON 2004), Made Aug 2004

ENSEMBLE MEAN

The average model response, or SIGNAL, due to the prescribed SSTs was for normal to below-normal rainfall over the southern US/northern Mexico in this season.

We also need to communicate the fact that some of the ensemble-member predictions were actually wet in this region.

Thus, there may be a ‘most likely outcome’, but there is also a ‘range of possibilities’ that must be quantified.


Climate Forecast: Signal + Uncertainty

“SIGNAL”: the most likely outcome.

“NOISE”: internal atmospheric chaos, uncertainties in the boundary conditions, and random errors in the models.
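A small illustrative sketch of this decomposition, estimating the SIGNAL as the ensemble mean and the NOISE as the spread about it (names are illustrative):

```python
import numpy as np

def signal_and_noise(ensemble):
    """Decompose an ensemble (members along axis 0) into the
    boundary-forced SIGNAL (ensemble mean) and the NOISE
    (member-to-member spread about that mean)."""
    ensemble = np.asarray(ensemble, dtype=float)
    signal = ensemble.mean(axis=0)        # most likely outcome
    noise = ensemble.std(axis=0, ddof=1)  # uncertainty around it
    return signal, noise
```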

[Figure: historical distribution with climatological average vs. shifted forecast distribution with forecast mean, divided into Below-Normal, Near-Normal, and Above-Normal categories.]

Resolution: probabilities should differ from climatology as much as possible, when appropriate.

Reliability: forecasts should “mean what they say”.

Probabilistic Forecasts

Reliability Diagrams: showing consistency between the a priori stated probabilities of an event and the a posteriori observed relative frequencies of this event. Good reliability is indicated by a 45° diagonal.

Identifying and Correcting Model Errors

Optimizing Probabilistic Information

• Eliminate the ‘bad’ uncertainty

– Reduce systematic errors, e.g., MOS correction, calibration

• Reliably estimate the ‘good’ uncertainty

– Reduce probability sampling errors, e.g., Gaussian fitting and Generalized Linear Models (GLMs)

– Minimize the random errors, e.g., the multi-model approach (for both response and forcing)

– Minimize the conditional errors, e.g., Conditional Exceedance Probabilities (CEPs)

Systematic Spatial Errors

Systematic error in the location of mean rainfall leads to spatial error in interannual rainfall variability, and thus a resulting lack of skill locally.

Systematic Calibration Errors

Dynamical models may have quantitative errors in the mean climate (panels: ORIGINAL vs. RESCALED), as well as in the magnitude of its interannual variability (panels: ORIGINAL vs. RECALIBRATED). Statistical recalibration of the model’s climate and its response characteristics can improve model reliability.


[Maps: DJFM rainfall anomaly correlation, before and after statistical correction.]

Reducing Systematic Errors: MOS Correction (Tippett et al., 2003, Int. J. Climatol.)

The MOS-corrected forecast converges as the ensemble size N grows, at a rate set by the signal-to-noise ratio S. [Maps: MOS correction for ensemble sizes N = 8, 16, 24, and 39.]
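As a hedged illustration of the MOS idea, the sketch below uses a simple per-gridpoint linear regression from the historical ensemble mean to observations; the actual correction in Tippett et al. (2003) is more elaborate, and all names here are illustrative:

```python
import numpy as np

def mos_linear_correction(model_hist, obs_hist, model_fcst):
    """Simple MOS: regress observations on the historical ensemble-mean
    forecasts, then apply the fitted line to a new ensemble mean.

    model_hist: ensemble-mean hindcasts over the training years
    obs_hist:   verifying observations for the same years
    model_fcst: ensemble mean of the new forecast
    """
    slope, intercept = np.polyfit(model_hist, obs_hist, deg=1)
    return slope * model_fcst + intercept
```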

Fitting with a Gaussian

Two types of error:

• PDF not really Gaussian!

• Sampling error

Options: fit only the mean, or fit both mean and variance.

The sampling error of both estimates shrinks with ensemble size N, but the Gaussian fit uses the members more efficiently: Error(Gaussian fit, N=24) = Error(Counting, N=40).
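A minimal sketch of the Gaussian-fitting alternative to counting, assuming tercile boundaries from a climatology (names are illustrative):

```python
import numpy as np
from scipy.stats import norm

def tercile_probs_gaussian(ensemble, lower_tercile, upper_tercile):
    """Fit a Gaussian to the ensemble (mean and variance), then read the
    tercile probabilities from the fitted CDF instead of counting members."""
    ensemble = np.asarray(ensemble, dtype=float)
    mu = ensemble.mean()
    sigma = ensemble.std(ddof=1)
    p_below = norm.cdf(lower_tercile, loc=mu, scale=sigma)
    p_above = 1.0 - norm.cdf(upper_tercile, loc=mu, scale=sigma)
    return p_below, 1.0 - p_below - p_above, p_above
```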

Minimizing Random Errors: Multi-Model Ensembling

Combining models reduces deficiencies of individual models
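The simplest version of such a combination averages the per-model category probabilities; a minimal sketch, assuming each model has already been converted to tercile probabilities (equal weights by default; operational schemes may instead weight models by skill):

```python
import numpy as np

def combine_models(prob_list, weights=None):
    """Combine per-model tercile probabilities into a multi-model forecast.

    prob_list: list of (p_below, p_normal, p_above) tuples, one per AGCM
    weights:   optional per-model weights; equal weighting if omitted
    """
    probs = np.asarray(prob_list, dtype=float)
    if weights is None:
        weights = np.full(len(probs), 1.0 / len(probs))  # equal weights
    combined = np.average(probs, axis=0, weights=weights)
    return combined / combined.sum()  # guard against rounding drift
```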

Probabilistic skill scores: RPSS for 2-m temperature (JFM 1950–1995)

Reliability! A Major Goal of Probabilistic Forecasts


Benefit of Increasing Number of AGCMs in Multi-Model Combination

JAS Temperature

JAS Precipitation

(Robertson et al. 2004)

Correcting Conditional Biases

METHODOLOGY

Conditional Exceedance Probabilities

The probability that the observation exceeds the amount forecast depends upon the skill of the model.

If the model were perfect, this probability would be constant. If it is imperfect, it will depend on the ensemble member’s value.

Identify whether the exceedance probability is conditional upon the value indicated. Generalized linear models with binomial errors can be used, e.g.:

$$P(X_o > X_k \mid X_k) = \frac{1}{1 + \exp(-\beta_{0,k} - \beta_{1,k} X_k)}$$

Tests can be performed on β1 to identify conditional biases: if β1 = 0, then the system is reliable; β0 can indicate unconditional bias.

(Mason et al. 2007, Mon. Wea. Rev.)
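A minimal sketch of fitting this CEP regression with a binomial GLM, using statsmodels; pairing each member value with the verifying observation of its year is assumed to be done by the caller, and all names are illustrative:

```python
import numpy as np
import statsmodels.api as sm

def fit_cep(member_values, obs_values):
    """Fit the CEP logistic regression: the probability that the
    observation exceeds an ensemble member's value, as a function
    of that value.

    member_values: pooled member values (one entry per member per year)
    obs_values:    verifying observation for each entry's year
    """
    x = np.asarray(member_values, dtype=float)
    y = (np.asarray(obs_values, dtype=float) > x).astype(float)  # exceedance indicator
    model = sm.GLM(y, sm.add_constant(x), family=sm.families.Binomial())
    res = model.fit()
    beta0, beta1 = res.params  # reliable system: beta1 close to 0
    return beta0, beta1
```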

Idealized CEPs

(from Mason et al. 2007, Mon Wea Rev)

[Panels] PERFECT reliability: β1 = 0. Positive skill, SIGNAL too strong: β1 < 0. Positive skill, SIGNAL too weak: β1 > 0. NO skill: β1 equals the climatological slope. Negative skill: β1 < 0 with |β1| > |climatological slope|.

Conditional Exceedance Probabilities (CEPs)

Use CEPs to determine the biased probability of exceedance, then shift the model-predicted PDF towards the goal of a 50% exceedance probability. Note that the scale is a parameter determined in minimizing the model-CEP slope. [Plot: exceedance probability (100%, 50%, 0%) vs. standardized anomaly, with shift and scale adjustments.]

Effect of Conditional Bias Correction

[Panels: cases where the adjustment decreases or increases the signal, and where it increases or decreases the MSE.] CEP recalibration can either strengthen or weaken the SIGNAL, but it consistently reduces the MSE.

Forecast Verification

Verification of probabilistic forecasts

• How do we know if a probabilistic forecast was “correct”?

“A probabilistic forecast can never be wrong!”

As soon as a forecast is expressed probabilistically, all possible outcomes are forecast. However, the forecaster’s level of confidence can be “correct” or “incorrect”, i.e., reliable or unreliable.

Is the forecaster over- / under-confident?

Forecast verification – reliability and resolution

• If forecasts are reliable, the probability that the event will occur is the same as the forecast probability.

• Forecasts have good resolution, if the probability that the event will occur changes as the forecast probability changes.
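Both properties can be read off a reliability diagram (shown next); a minimal sketch of computing its points, assuming binary event outcomes (names are illustrative):

```python
import numpy as np

def reliability_curve(forecast_probs, outcomes, n_bins=10):
    """Bin forecast probabilities and compute the observed relative
    frequency of the event in each bin: the points of a reliability
    diagram. For a reliable system they lie on the 45-degree line."""
    forecast_probs = np.asarray(forecast_probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)  # 1 if event occurred, else 0
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_idx = np.clip(np.digitize(forecast_probs, edges) - 1, 0, n_bins - 1)
    mean_fcst, obs_freq = [], []
    for b in range(n_bins):
        mask = bin_idx == b
        if mask.any():
            mean_fcst.append(forecast_probs[mask].mean())
            obs_freq.append(outcomes[mask].mean())
    return np.array(mean_fcst), np.array(obs_freq)
```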


Reliability diagram

Ranked Probability Skill Score (RPSS)

RPSS measures the cumulative squared error between the categorical forecast probabilities and the observed category relative to some reference forecast (Epstein 1969). The most widely used reference strategy is that of “climatology.” The RPSS is defined as,

$$\mathrm{RPSS} = 1 - \frac{\mathrm{RPS}_{\mathrm{fcst}}}{\mathrm{RPS}_{\mathrm{ref}}}$$

$$\mathrm{RPS}_{\mathrm{fcst}} = \sum_{i=1}^{N}\left(\sum_{j=1}^{i} f_j - \sum_{j=1}^{i} o_j\right)^{2}, \qquad \mathrm{RPS}_{\mathrm{ref}} = \sum_{i=1}^{N}\left(\sum_{j=1}^{i} r_j - \sum_{j=1}^{i} o_j\right)^{2}$$

where N = 3 for tercile forecasts, and f_j, r_j, and o_j are the forecast probability, reference forecast probability, and observed probability for category j, respectively. The probability distribution of the observation is 100% for the category that was observed and 0% for the other two categories. The reference forecast of climatology is assigned 33.3% for each of the tercile categories.
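A minimal sketch of the RPS/RPSS computation for a single tercile forecast, directly following the definition above (names are illustrative):

```python
import numpy as np

def rps(forecast, observed):
    """Ranked Probability Score: cumulative squared error between the
    forecast and observed category probability distributions."""
    return np.sum((np.cumsum(forecast) - np.cumsum(observed)) ** 2)

def rpss(forecast, observed, reference=(1/3, 1/3, 1/3)):
    """RPSS of a tercile forecast relative to a reference strategy
    (climatology, i.e., 33.3% per category, by default)."""
    return 1.0 - rps(forecast, observed) / rps(reference, observed)

# Example: forecast (0.25, 0.35, 0.40) with 'Above' observed,
# so the observation is (0, 0, 1); RPSS is about 0.24.
# print(rpss([0.25, 0.35, 0.40], [0.0, 0.0, 1.0]))
```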

Ranked Probability Skill Score (RPSS)

The RPSS gives credit for forecasting the observed category with high probabilities, and also penalizes forecasting the wrong category with high probabilities.

According to its definition, the RPSS maximum value is 100%, which can only be obtained by forecasting the observed category with a 100% probability consistently.

A score of zero implies no skill in the forecasts, which is the same score one would get by consistently issuing a forecast of climatology. For the three category forecast, a forecast of climatology implies no information beyond the historically expected 33.3%-33.3%-33.3% probabilities.

A negative score suggests that the forecasts are underperforming climatology.

The skill for seasonal precipitation forecasts is generally modest. For example, IRI seasonal forecasts with 0-month lead for the period 1997–2000 scored 1.8% and 4.8%, using the RPSS, for the global and tropical (30°S–30°N) land areas, respectively (Wilks and Godfrey 2002).

Real-Time Forecast Validation

Ranked Probability Skill Score (RPSS): Problem

The expected RPSS with climatology as the reference forecast strategy is less than 0 for any forecast that differs from the climatological probabilities – a lack of equitability.

There are two important implications:

• The expected RPSS can be optimized by issuing climatological forecast probabilities.

• The forecast may contain some potentially usable information even when the RPSS is less than 0, especially if the sharpness of the forecasts is high.

There is no single measure that gives a comprehensive summary of forecast quality.

[Reliability diagrams for GHACOF SOND forecasts, annotated: +15% bias because of hedging; −5% bias, no skill, hedging; serious bias (−20%); good resolution (above-normal +10%, below-normal +6%); 0% resolution because of large biases; weak bias (+5%) with reasonable sharpness; are the sharpest forecasts believable?]

Summary

• Seasonal forecasts are necessarily probabilistic.

• The models used to predict the climate are not perfect, but by identifying and minimizing their errors we can maximize their utility.

• The two attributes of probabilistic forecasts are reliability and resolution; both require verification.

• Skill in seasonal climate prediction varies with season and geographic region – this requires research!