Transcript of SMOS Quality Working Group Meeting #2, Frascati (Rome), September 13th-14th, 2010. SMOS-BEC Team.

Page 1:

SMOS Quality Working Group Meeting #2

Frascati (Rome), September 13th-14th, 2010

SMOS-BEC Team

Page 2:

Outline

• METHODOLOGY
• TESTS

1. RETRIEVAL MODE

Dual from Full vs. Stokes from Full

2. BIAS MITIGATION

No correction vs. External Brightness Temperature Calibration vs. OTT

3. MODELS

Model 2 vs. Model 3(16)

4. SSS SELECTION

All overpasses vs. Ascending vs. Descending

5. TB SELECTION

EAF vs. AF

6. NEW FTR

July vs. August
• CONCLUSIONS (+ or -)

Page 3:

Methodology

All the results presented are at Level 3 (10-day 2-degree product).

Retrievals have been performed using the SMOS-OS Level 2 processor.

Level 2 data have been filtered according to:

Fg_ctrl_reach_maxiter1,2,3: maximum number of iterations reached before convergence.

Fg_ctrl_marq1,2,3: iterative loop ends because the Marquardt increment is greater than lambdaMax (100).

Statistical characterization is done considering only points more than 200 km from the coast:

Fg_sc_land_sea_coast1 = 1 & Fg_sc_land_sea_coast2 = 0
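A minimal sketch of this screening in Python; the function and array names are illustrative placeholders rather than the official L2 product fields, and the flag conventions are those quoted above:

```python
def screen_l2(sss, fg_reach_maxiter, fg_marq, fg_coast1, fg_coast2):
    """Keep only retrievals whose convergence flags are not raised and
    whose coast flags mark the point as > 200 km from the coast.
    All inputs are NumPy arrays of equal length."""
    converged = (fg_reach_maxiter == 0) & (fg_marq == 0)
    open_ocean = (fg_coast1 == 1) & (fg_coast2 == 0)
    return sss[converged & open_ocean]
```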

Page 4:

Methodology

L2 → L3 averaging has been performed according to:

$$ SSS_{L3} = \frac{\sum_i SSS_i \, N_{obs,i}}{\sum_i N_{obs,i}} $$

The L3 accuracy is also introduced to provide some estimate of the quality of the measurement:

$$ ACC_{L3} = \sqrt{ \frac{\sum_i \left( SSS_i - SSS_{L3} \right)^2 N_{obs,i}}{\sum_i N_{obs,i}} } $$
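A minimal numerical sketch of these two definitions in Python, assuming the L2 salinities falling in one L3 cell and their observation counts are held in NumPy arrays:

```python
import numpy as np

def l3_cell(sss, n_obs):
    """Weighted L2 -> L3 average and the associated L3 accuracy:
    each L2 salinity is weighted by its number of observations."""
    sss = np.asarray(sss, dtype=float)
    n_obs = np.asarray(n_obs, dtype=float)
    sss_l3 = np.sum(sss * n_obs) / np.sum(n_obs)
    # Weighted spread of the L2 values around the L3 mean.
    acc_l3 = np.sqrt(np.sum((sss - sss_l3) ** 2 * n_obs) / np.sum(n_obs))
    return sss_l3, acc_l3
```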

Page 5:

Tests

1. RETRIEVAL MODE

10 days of retrievals, from July 10th to 19th

ISEA4H9 → ISEA4H8 to reduce the computational resources needed

Model 2 in the “Stokes from Full-Pol” mode has been used

OTT has been applied in accordance with the official DPGS product

L3 retrieved SSS is compared to the NOAA WOA05 climatology and ARGO averaged data

Page 6:

Tests – Dual vs. Stokes’ I

L3 maps - Dual

Amazon plume

Cold waters

Page 7:

Tests – Dual vs. Stokes’ I

L3 maps - Stokes’ I

Amazon plume

Cold waters

Page 8:

Tests – Dual vs. Stokes’ I

L3 maps

Page 9:

Tests – Dual vs. Stokes’ I

L3 maps - Accuracy

2.5 psu

Page 10:

Tests – Dual vs. Stokes’ I

L3 statistics - Dual

mean      std       rms
0.2260    2.2565    2.2678
0.5559    1.4457    1.5489

mean      std       rms
0.2958    1.4391    1.4692
0.5278    0.5805    0.7846
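Each statistics slide shows two such tables, presumably one per reference dataset named in the methodology (WOA05 climatology and ARGO averaged data); the first two columns read as the mean and standard deviation of the SSS misfit, and the third is their quadratic sum:

$$ rms = \sqrt{mean^2 + std^2}, \qquad \text{e.g. } \sqrt{0.2260^2 + 2.2565^2} \approx 2.2678 $$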

Page 11:

Tests – Dual vs. Stokes’ I

L3 statistics – Stokes’ I

mean      std       rms
0.2445    2.3505    2.3631
0.6157    1.5464    1.6645

mean      std       rms
0.2935    1.6756    1.7011
0.6080    0.5932    0.8495

Change w.r.t. Dual: +4%, +7%, +14%, +8%

Page 12:

Tests

2. BIAS MITIGATION

10 days of retrievals, from July 10th to 19th

ISEA4H9 → ISEA4H8 to reduce the computational resources needed

Model 2 in the “Dual from Full-Pol” mode has been used

No correction, external brightness temperature calibration[*], and OTT have been applied

L3 retrieved SSS is compared to the NOAA WOA05 climatology and ARGO averaged data

Page 13:

Tests – Bias mitigation

External Brightness Temperature Calibration

Constant within the snapshot (ξ, η) but varying in time:

$$ TB_{corr} = TB_{ret} - \left\langle TB_{ret} - TB_{mod} \right\rangle_{snapshot} $$

Ocean Target Transformation

Constant in time but varying within the same snapshot:

$$ TB_{corr}(\xi, \eta) = TB_{ret}(\xi, \eta) - \left\langle TB_{ret} - TB_{mod} \right\rangle(\xi, \eta) $$
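A minimal sketch of the two corrections, assuming tb_ret and tb_mod are NumPy arrays of shape (snapshot, FOV position); the array layout and function names are illustrative assumptions, not the operational processor interface:

```python
def external_tb_calibration(tb_ret, tb_mod):
    """Remove, per snapshot, the FOV-averaged bias <TB_ret - TB_mod>:
    the correction is constant within a snapshot but varies in time."""
    bias = (tb_ret - tb_mod).mean(axis=1, keepdims=True)
    return tb_ret - bias

def ocean_target_transformation(tb_ret, tb_mod):
    """Remove, per (xi, eta) FOV position, the time-averaged bias:
    the correction is constant in time but varies within the snapshot."""
    bias = (tb_ret - tb_mod).mean(axis=0, keepdims=True)
    return tb_ret - bias
```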

Page 14:

Tests – Bias mitigation

L3 maps – No bias mitigation

Page 15:

Tests – Bias mitigation

L3 maps – External Brightness Temperature Calibration

Page 16:

Tests – Bias mitigation

L3 maps – External Brightness Temperature Calibration

MEAN BIAS SUBTRACTED

Less intense land-sea transition effect

Page 17:

Tests – Bias mitigation

L3 maps – Ocean Target Transformation

Page 18:

Tests – Bias mitigation

L3 maps - Accuracy

2.5 psu

Page 19:

Tests – Bias mitigation

L3 statistics – no bias mitigation

mean      std       rms
3.9591    2.6742    4.7776
3.3468    3.7604    5.8288

mean      std       rms
4.1488    1.8571    4.5455
3.5769    0.6343    3.6327

Page 20:

Tests – Bias mitigation

L3 statistics – External Brightness Temperature Calibration

mean      std       rms
2.9600    2.3886    3.8036
2.1981    1.7300    2.7972

mean      std       rms
3.1541    1.7764    3.6199
2.2214    0.5801    2.2959

Change w.r.t. no bias mitigation: -14%, -20%, -52%, -37%

Page 21:

Tests – Bias mitigation

L3 statistics – Ocean Target Transformation

mean      std       rms
0.2260    2.2565    2.2678
0.5559    1.4457    1.5489

mean      std       rms
0.2958    1.4391    1.4692
0.5278    0.5805    0.7846

Change in rms w.r.t. no bias mitigation: -52%, -73%, -68%, -78%

Page 22:

Tests

3. MODELS

10 days of retrievals, from July 10th to 19th

ISEA4H9 → ISEA4H8 to reduce the computational resources needed

OTT has been applied as for the official DPGS product

Model 2 and Model 3(16) are compared

L3 retrieved SSS is compared to the NOAA WOA05 climatology and ARGO averaged data

Page 23:

Tests – Model 2 vs. Model 3(16)

L3 maps – Model 2

Page 24:

Tests – Model 2 vs. Model 3(16)

L3 maps – Model 3(16)

Page 25:

Tests – Model 2 vs. Model 3(16)

L3 maps

Page 26:

Tests – Model 2 vs. Model 3(16)

SST L3 maps

Page 27:

Tests – Model 2 vs. Model 3(16)

WS L3 maps from ASCAT

Page 28:

Tests – Model 2 vs. Model 3(16)

Scatterplot

Page 29:

Tests – Model 2 vs. Model 3(16)

L3 maps - Accuracy

2.5 psu

Page 30:

Tests – Model 2 vs. Model 3(16)

L3 statistics – Model 2

mean      std       rms
0.2445    2.3505    2.3631
0.6157    1.5464    1.6645

mean      std       rms
0.2935    1.6756    1.7011
0.6080    0.5932    0.8495

Page 31:

Tests – Model 2 vs. Model 3(16)

L3 statistics – Model 3(16)

mean      std       rms
0.8302    2.5735    2.7041
0.8490    1.7188    1.9171

mean      std       rms
0.9346    1.9486    2.1611
0.9522    0.6619    1.1597

Change w.r.t. Model 2: +13%, +13%, +21%, +27%

Page 32:

Tests

4. SSS SELECTION

10 days of retrievals, from July 10th to 19th

ISEA4H9 → ISEA4H8 to reduce the computational resources needed

OTT has been applied as for the official DPGS product

L3 averaging has been performed using ALL the overpasses, only the ASCENDING ones, and only the DESCENDING ones (see the sketch below)

L3 retrieved SSS is compared to the NOAA WOA05 climatology and ARGO averaged data
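A minimal sketch of the three selections, assuming each L2 record carries an ascending/descending indicator (the field name here is a hypothetical placeholder):

```python
def select_passes(sss, is_ascending, mode="all"):
    """Subset the L2 salinities before L3 averaging: all overpasses,
    only the ascending ones, or only the descending ones.
    `is_ascending` is a NumPy boolean array aligned with `sss`."""
    if mode == "ascending":
        return sss[is_ascending]
    if mode == "descending":
        return sss[~is_ascending]
    return sss
```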

Page 33:

Tests – All vs. Ascending vs. Descending

L3 maps - All

Page 34:

Tests – All vs. Ascending vs. Descending

L3 maps - Ascending

Fresher when ice/land enters the FOV

Saltier when it exits

Page 35:

Tests – All vs. Ascending vs. Descending

L3 maps - Descending

Generally saltier

Fresher when ice/land enters the FOV

Saltier when it exits

Page 36:

Tests – All vs. Ascending vs. Descending

L3 maps – comparisons with Ext TB cal

Page 37:

Tests – All vs. Ascending vs. Descending

land-sea contamination

a previous study

Page 38:

Tests – All vs. Ascending vs. Descending

L3 statistics - All

mean      std       rms
0.2260    2.2565    2.2678
0.5559    1.4457    1.5489

mean      std       rms
0.2958    1.4391    1.4692
0.5278    0.5805    0.7846

Page 39:

Tests – All vs. Ascending vs. Descending

L3 statistics - Ascending

mean      std       rms
-0.0214   3.3183    3.3183
0.2175    1.5405    1.5557

mean      std       rms
-0.3585   3.0150    3.0362
0.1239    0.8506    0.8595

Change w.r.t. all passes: +32%, =, +52%, +9%

Page 40:

Tests – All vs. Ascending vs. Descending

L3 statistics - Descending

mean      std       rms
0.3851    2.9766    3.0014
0.9942    1.7178    1.9848

mean      std       rms
0.7824    1.8510    2.0096
1.0820    0.9610    1.4472

Change w.r.t. all passes: +24%, +22%, +27%, +46%

Page 41:

Tests

5. TB SELECTION

5 days of retrievals, from July 10th to 14th

ISEA4H9 has been used

Model 2 in the “Dual from Full-Pol” mode is analyzed

OTT has been applied as for the official DPGS product

TB with $\xi^2 + \eta^2 > 0.25$ have been filtered out to almost reproduce the AF-FOV (see the sketch below)

L3 retrieved SSS is compared to the NOAA WOA05 climatology and ARGO averaged data
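A minimal sketch of this TB selection, assuming the director cosines (ξ, η) of each measurement are available and taking ξ² + η² ≤ 0.25 as the AF-FOV approximation quoted above:

```python
def af_fov_filter(tb, xi, eta, r2_max=0.25):
    """Keep only TB measurements whose director cosines fall inside the
    circle xi**2 + eta**2 <= r2_max, approximating the AF-FOV.
    All inputs are NumPy arrays aligned element-wise."""
    inside = xi ** 2 + eta ** 2 <= r2_max
    return tb[inside]
```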

Page 42:

Tests – EAF vs. AF

AF-FOV approx.

Page 43:

Tests – EAF vs. AF

L3 maps - EAF

Page 44:

Tests – EAF vs. AF

L3 maps - AF

Page 45:

Tests – EAF vs. AF

L3 maps – AF minus EAF

Ascending positive

Descending negative

Page 46:

Tests – EAF vs. AF

L3 statistics - EAF

mean      std       rms
-0.1680   2.7191    2.7242
0.4170    1.1453    1.2189

mean      std       rms
0.0068    1.8762    1.8762
0.3608    0.6828    0.7722

Page 47:

Tests – EAF vs. AF

L3 statistics – AF

mean      std       rms
-0.0867   2.8017    2.8030
0.5254    1.2677    1.3722

mean      std       rms
0.0674    1.9639    1.9650
0.4797    0.7427    0.8841

Change w.r.t. EAF: +3%, +11%, +5%, +13%

Page 48:

Tests

6. NEW FTR

10 days of retrievals, from July 10th to 19th and from August 20th to 29th, are compared as produced by the DPGS:

ISEA4H9 has been used

Model 2 in the mode “Dual from Full-Pol” is analyzed

OTT has been applied

L3 retrieved SSS is compared to NOAA WOA05 climatology and ARGO averaged data

Page 49:

Tests – July vs. August

L3 maps - July

Page 50:

Tests – July vs. August

L3 maps - August

Generally fresher

Page 51:

Tests – July vs. August

August minus July

Page 52:

Tests – July vs. August

L3 statistics - July

mean      std       rms
-0.0308   2.6585    2.6587
0.5923    1.0897    1.2403

mean      std       rms
0.1905    1.7765    1.7867
0.5401    0.5574    0.7762

Page 53:

Tests – July vs. August

L3 statistics – August

mean      std       rms
-0.4285   2.8671    2.8990
0.2659    1.1050    1.1366

mean      std       rms
-0.2197   1.8899    1.9027
0.2691    0.5818    0.6410

Change in rms w.r.t. July: +8%, -8%, +6%, -17%

Page 54:

Conclusions (+ or -)

1. Dual from Full vs. Stokes from Full

4-14% increase in the SSS misfit rms when using Stokes' I.

2. No correction vs. External Brightness Temperature Calibration vs. OTT

Ext TB cal. partially diminishes the SSS misfit rms; OTT has a very strong improvement effect.

Ext TB cal. partially corrects for the land-sea transition effect and seems to work better in the North Atl. waters (?)

The combined use of both techniques can be envisaged…

3. Model 2 vs. Model 3(16)

Model 3 is still under definition; conf. 16 (from WISE) has been used, performing relatively close to Model 2. Differences between the models are strongly related to SST.

4. All overpasses vs. Ascending vs. Descending

Waters appear fresher when land/ice enters the FOV and saltier when it exits when using only ascending or only descending passes; the effect is compensated when using both. Descending passes give generally saltier SSS.

Page 55:

Conclusions (+ or -)

Ext TB calibration gives more homogeneous results… again, the combined use can be envisaged…

5. EAF vs. AF

Using only the AF FOV, a positive bias has been found in the ascending passes and a negative one in the descending passes, w.r.t. the case of using the EAF FOV. The change in statistics is small.

6. July vs. August

The August 10-day SSS misfit is on average 0.3-0.4 psu fresher than July's.

Anyway, the statistics are very similar and no clear improvement can be observed.