
Slide 1

Adaptive SAR ATR Problem Set AdaptSAPS Ver. 1.0

Tim Ross, AFRL/SNAR

Angela Wise, AFRL COMPASE Center, JE Sverdrup

Donna Fitzgerald, AFRL SDMS, Veridian

Distribution A. Approved for Public Release, ASC Case No. ASC 03-2048, 8/1/03

Slide 2

Outline

• Acknowledgements

• Objective

• Background

• Key Organizations / Personnel

• Configuration Control

• Testing

• Data

• Tools

• Reporting

Slide 3

Acknowledgements

• Steve Welby, DARPA
– Provided impetus for this problem set in comments during a SPIE '03 panel discussion, particularly that static problems have become less relevant to the problems faced by today's military

• Ed Zelnio, AFRL/SNA
– Originated the idea of wrapping static problem sets with procedures that create dynamic problem sets

• Lannie Hudson, AFRL COMPASE Center, JE Sverdrup
– Developed code for clutter chip generation, display, and analysis

• Mike Bryant, AFRL/SNA
– Principally responsible for the original MSTAR data public release, suggested the exploitation emulation component, and allowed us to use chip database/server code he developed at Wright State University

• Ron Dilsavor, AFRL/SNA
– Provided guidance on methods for SAR image manipulation, inclusion of synthetic effects, and constructive criticism of MOPs

• Mark Minardi, AFRL/SNA
– Contributed the area-under-ROC-curve algorithm and MOP guidance generally

• ATRWG / DUSD
– Developed guidelines for Problem Set definition which were considered here

• Capt. Dave Parker, AFIT
– Improvements in code and documentation based on AdaptSAPS beta testing

Slide 4

Objective

• AdaptSAPS 1.0
– Foster basic research in adaptive algorithms for target detection in SAR imagery
– Encourage consideration of self-assessed confidence
– Support OC-conscious development and testing

• Future AdaptSAPS
– Provide milestones for progress in adaptive system technology as applied to SAR exploitation
– Provide a standard benchmarking problem set for comparing adaptive systems from different developers

Please provide feedback on how we can better meet these objectives.

Slide 5

Background

• Problem / Data Sets
– ATRWG Standard Problem Sets
• Not Public Released
• July 1994 – FLIR
– http://www.atrwg.vdl.afrl.af.mil/committees/database/standard_data_sets.html
• Recent – SAR and Fusion
– https://restricted.atrwg.vdl.afrl.af.mil/problemset/
– MSTAR 1997+ Public Data Set
• SAR
• SDMS
• Public Released
• 150+ papers using MSTAR public data
– Other Data Sets
• 3D Challenge Problem (Aerosense 2003)
• SDMS - https://www.mbvlab.wpafb.af.mil/public/sdms
• David Aha's "Data Repository" for Machine Learning list
– http://www.aic.nrl.navy.mil/~aha/research/machine-learning.html

Slide 6

Background

• Technical Need
– SPIE '03 Panel / Mr. Steve Welby, DARPA: we no longer face static problems.
– Varied nature of the problem
• Well-demonstrated SAR ATR Operating Condition (OC) sensitivities
– Difficulties of obtaining training data for all conditions of interest

Slide 7

Background

• Desired attributes of Adaptive Systems
– Perform some function initially
• AdaptSAPS 1.0: Target detection in SAR imagery
– Performs better with experience
– Knows how well it's doing (accurate self-assessed confidence)
– May or may not have an initial batch training set
• AdaptSAPS 1.0: Initial batch training set provided
– Experience may or may not have supervision
• AdaptSAPS 1.0: Supervision provided

Slide 8

Background

• Examples of things a system may want to adapt to:
– greater resolution in aspect
– target variability (versions and types)
– type and difficulty of clutter and confuser images
– prior probabilities of targets and nontargets
– ...

Slide 9

Key Organizations / Personnel

• Coordination – AFRL/SNA
• Tim Ross, AFRL/SNAR
– [email protected]

• Problem Set Definition – AFRL COMPASE Center
• Angela Wise, JE Sverdrup
– [email protected]

• Problem Set Distribution – AFRL SDMS
• Donna Fitzgerald, Veridian
– [email protected]

• Feedback / recommendations welcome

Slide 10

Configuration Control

• Security
– All elements of this problem set are approved for public release

• Configuration control plan
– The Problem Set will be managed by AFRL, based on inputs provided at the Algorithms for Synthetic Aperture Radar Imagery Conference of the SPIE International Symposium on Defense and Security (formerly AeroSense)
• 1.0 - completely unsequestered
• Future - sequestered data and OC dimensions
– The Problem Set will be distributed by AFRL SDMS

Slide 11

Testing

• Methodology

• Measures of Performance (MOPs)

Slide 12

Methodology - Key Concepts

• Image chips (of targets and non-targets)
• Missions (series of Image Chips of a common character)
• SUT - System Under Test (your adaptive target detector)
• AdaptSAPS main program calls (this loop is sketched below)
– SUT Initialization
– then loops through Missions
• then loops through Image Chips, calling
– SUT Exploit (passing a single test image chip to the SUT)
– SUT Adapt (passing target truth for the test chip to the SUT)
– then computes performance measures

[Diagram: for Mission i, AdaptSAPS selects a test chip from the Image Chips and passes it to the SUT; the SUT answers "Target?"; AdaptSAPS compares the answer against Truth to compute the Performance Measure.]
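A minimal Matlab sketch of this calling sequence, using the released file names (run_missions.m, sarOracle.m, getMOPs.m) but assumed argument lists; readme.txt documents the actual interfaces:

    % Sketch of the AdaptSAPS main loop (run_missions.m); the sarOracle
    % and SUT argument lists here are assumptions for illustration.
    numMissions = 10;                            % the ten basic missions
    sutInit();                                   % SUT Initialization
    for m = 1:numMissions                        % loop through Missions
        probTgt = []; isTgt = [];                % this mission's score set
        while sarOracle('chipsRemain', m)        % loop through Image Chips
            chipFile = sarOracle('nextChip', m); % truth withheld at this point
            probTgt(end+1) = sutExploit(m, chipFile);  % SUT Exploit
            isTgt(end+1) = sarOracle('truth');   % truth released after report
            sutAdapt(isTgt(end));                % SUT Adapt
        end
        getMOPs(probTgt, isTgt);                 % compute performance measures
    end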

Slide 13

Methodology

• Initial Batch Training Set is provided
• System Under Test (SUT) is taken to a "deployment"
– AdaptSAPS initializes the SUT
– AdaptSAPS "flies" a sequence of Missions
• For each Mission
– While images remain
» AdaptSAPS makes a test image (without Target truth) available to the SUT
» SUT analyzes the image
» SUT reports the probability that it contains a target (ProbTgt)
» AdaptSAPS then provides to the SUT the Target "Truth" for the previous image, simulating the results of human exploitation
» SUT adapts
– AdaptSAPS reports Measures of Performance

Slide 14

Methodology

• The SUT may use for exploitation and adaptation whatever information is provided to it when called by the main AdaptSAPS program (run_missions.m), i.e.:
– Mission Number
• This does not include information related to the definition of missions (e.g., prior probabilities)
• The SUT is only being informed, via the Mission Number, that the mission has changed
– Filename for the test image
• Which will be …\test_image.000 throughout
• This does include the information in the header of the test image. Note that all target, object, etc. fields have been removed from the test image header
• The SUT should NOT use prior knowledge about the MSTAR data collection to then use site, time, lat, long, etc. from the header to inform exploitation or adaptation
– True Target/Nontarget
• This is provided at the Target / Nontarget level only; i.e., type, serial number, aspect, …, are NOT provided
• This is provided to the SUT only after the SUT has made its estimate of Target/Nontarget for that chip

(A skeleton SUT conforming to these rules is sketched below.)
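For concreteness, here is a minimal toy SUT. The function names are patterned on the example SUT listed later in this briefing (egSutInit.m, egSutExploit.m, egSutAdapt.m), but the signatures, the global-state idiom, and the peakFeature helper are hypothetical; readme.txt defines the real interface.

    % Sketch of a toy SUT: a running-mean classifier on one crude feature.
    function mySutInit()
    global SUT;                         % toy state shared across the calls
    SUT.mu = [0 1];                     % running feature means: [nontgt tgt]
    SUT.n  = [1 1];                     % per-class counts
    end

    function probTgt = mySutExploit(missionNumber, imageFilename)
    global SUT;
    SUT.lastFeature = peakFeature(imageFilename);
    m = mean(SUT.mu);                   % midpoint of the two class means
    probTgt = 1 / (1 + exp(-(SUT.lastFeature - m)));  % logistic score
    end

    function mySutAdapt(isTarget)
    global SUT;                         % supervised update from the truth
    k = isTarget + 1;                   % 1 = nontarget, 2 = target
    SUT.n(k)  = SUT.n(k) + 1;
    SUT.mu(k) = SUT.mu(k) + (SUT.lastFeature - SUT.mu(k)) / SUT.n(k);
    end

    function x = peakFeature(imageFilename)
    % Hypothetical stand-in for real MSTAR header/magnitude parsing.
    fid = fopen(imageFilename, 'r');
    raw = fread(fid, inf, 'uint8');
    fclose(fid);
    x = double(max(raw)) / 255;         % crude placeholder feature in [0,1]
    end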

Slide 15

Methodology (con’t)

• AdaptSAPS Inputs / SUT Outputs
– SUT Result (ProbTgt)

• AdaptSAPS Outputs / SUT Inputs
– Initial Batch Training Set (offline)
– Test Image
– Truth (Target/Nontarget)

[Diagram: MSTAR Data and the Mission Definition feed AdaptSAPS, which exchanges test images, truth, and ProbTgt with the SUT and produces MOPs.]

Slide 16

MOPs

• Scoring methods
– Truth from headers, Reports from SUT
– All testing at the chip level - no issues concerning location accuracy or truth-to-report "association" problems
– Un-weighted averaging used, since we already control the population mix in each Mission

• Generally interested in
– adaptation efficiency
• learning with fewer sequential data points,
• taking fewer CPU cycles to perform each update,
• limiting growth of required memory,
• ...
– robustness
• adapting to more and more extreme OCs
– post-adaptation accuracy
• Pd/FAR/Pid and
• self-confidence accuracy

Slide 17

MOPs

• The following measure is proposed as something that encourages the desired behavior, but does so imperfectly. We encourage suggestions for better or simpler measures.

Please provide feedback on how we can improve MOPs.

Slide 18

MOPs

• From one perspective, a given set of test data and a given SUT produce two distributions on the reported ProbTgt - one for target test data and one for nontarget test data

• As is usual, we desire that the two distributions be well separated.
– This might be measured as
• a probabilistic distance measure (e.g., Bhattacharyya distance)
• Pfa at a fixed Pd
• Pfa or (1-Pd) when they are equal
• area under the ROC curve (as we do here)

• We also desire that the reported ProbTgt be accurate
– i.e., of all the reports with confidence of ProbTgt, the fraction that are actually targets should be about ProbTgt
– This might be measured as
• difference between actual and reported probabilities (as we do here)
• mutual information between reported probabilities and correctness of decisions (see references in notes)

Slide 19

MOP-Adaptation (MOPA)

• Reported for
– the overall experiment,
– each Mission, and
– each quartile of each mission

• Objective is to encourage the SUT to
– have accurate self-assessed confidence
– differentiate targets from nontargets

Slide 20

MOPA (con’t)

• MOPA = (E + (1 - D)) / 2

• Error (E)
– An equal number of test instances is placed in each of 5 bins
– E is the RMS, across the 5 bins, of the difference between the average reported ProbTgt in the bin and the actual target fraction in the bin

• Discrimination (D)
– Area under the Pd-vs-Pfa ROC curve

(The computation is sketched below.)
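A minimal sketch of this computation, assuming probTgt and isTgt are equal-length vectors of SUT reports and 0/1 truth; getMOPs.m is the released, authoritative implementation:

    function [mopa, E, D] = mopaSketch(probTgt, isTgt)
    % Sketch of MOPA = (E + (1-D))/2; see getMOPs.m for the real code.
    [p, idx] = sort(probTgt(:));          % sort reports ascending
    t = double(isTgt(:)); t = t(idx);     % align truth with sorted reports
    n = numel(p);
    edges = round(linspace(0, n, 6));     % 5 bins with (near-)equal counts
    d2 = zeros(5, 1);
    for k = 1:5
        b = (edges(k) + 1):edges(k+1);    % indices falling in bin k
        d2(k) = (mean(p(b)) - mean(t(b)))^2;  % reported vs. actual fraction
    end
    E = sqrt(mean(d2));                   % RMS across the 5 bins (Error)
    thr = unique(p);                      % sweep thresholds for the ROC
    Pd = zeros(numel(thr) + 1, 1); Pfa = Pd;  % last entry pins (Pfa,Pd)=(0,0)
    for k = 1:numel(thr)
        Pd(k)  = mean(p(t == 1) >= thr(k));   % detection rate
        Pfa(k) = mean(p(t == 0) >= thr(k));   % false-alarm rate
    end
    D = abs(trapz(Pfa, Pd));              % area under the Pd-Pfa ROC curve
    mopa = (E + (1 - D)) / 2;             % smaller is better
    end

If the score set lacks either class, or a bin is empty, the means above come out NaN, matching the undefined cases noted on a later slide.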

Slide 21

MOPA (con’t)

[Figure: example Error (E) plots of True ProbTgt versus Reported ProbTgt against the 1:1 line, and example Discrimination (D) plots of Pd versus Pfa (ROC curves), with a worse case and a better case shown for each.]

Slide 22

MOPA (con’t)

• MOPA
– Smaller is better
– Should always be in [0,1]
– If a score set does not include both target and nontarget entries, then D is undefined and therefore MOPA is undefined
– If a score set does not have at least one entry per bin, then E is undefined and therefore MOPA is undefined
– The Error term includes a sampling bias, so it will vary at small sample sizes. Comparisons should only be made between similar sample sizes.
– Note that the current MOPs depend solely on the SUT-reported score (estTargetProb) and do not use the SUT's Target/Nontarget decision (estTgtNontgt)

Slide 23

Data - Outline

• Data Characterization
– References
– Operating Condition (OC) Dimensions

• Initial Batch Training Data

• Adaptive Test / Train Data - "Missions"
– Menu Options
– Specific Missions

Slide 24

Data

• References
– SDMS: https://www.mbvlab.wpafb.af.mil/public/sdms/datasets/mstar/overview.htm
– Related publications for the MSTAR public data (in the notes section below)

Please provide additional citations.

Please provide further insights on data characterization.

Slide 25

Target OCs - Candidate Dimensions

• Target
– SN
– Version
– Articulation
– Configuration
– Type
– Class
– Dimensions
– Prior Probabilities

• Sensing
– Synthetic Noise
– Depression

• Environment
– Synthetic Shadow
– Collection

See readme.txt for actual database fields

Slide 26

Target Data Characterization

This is a count of the number of target instances across the three public MSTAR target CDs (Targets, Mixed Targets, & T-72 Variants). The columns span the collections C1S1_15, C1S1_17, C2S1_15, C2S1_16, C2S1_17, C2S1_29, C2S1_30, C2S1_31, C2S1_43, C2S1_44, C2S1_45, C2S2_30, C2S2_45, C2S3_30, and C2S3_45; each row lists its nonzero per-collection counts in column order, then its total.

Target Type | Bumper Number | Per-collection counts | Total
2S1 | B01 | 274, 299, 288, 303 | 1164
BMP2 | 9563 | 195, 233 | 428
BMP2 | 9566 | 196, 232 | 428
BMP2 | C21 | 196, 233 | 429
BRDM2 | E71 | 274, 298, 287, 303, 133, 120 | 1415
BTR60 | K10YT7532 | 195, 256 | 451
BTR70 | C71 | 196, 233 | 429
D7 | 92v13015 | 274, 299 | 573
slicy | 1 | 274, 286, 298, 210, 288, 313, 255, 312, 303 | 2539
T62 | A51 | 273, 299 | 572
T72 | 132 | 196, 232 | 428
T72 | 812 | 195, 231 | 426
T72 M | A04 | 274, 299 | 573
T72 M1 | A05 | 274, 299 | 573
T72 M | A07 | 274, 299 | 573
T72 M | A10 | 271, 296 | 567
T72 AV | A32 | 274, 298 | 572
T72 B | A62 | 274, 299 | 573
T72 B | A63 | 274, 299 | 573
T72 BE | A64 | 274, 299, 288, 303, 133, 120 | 1417
T72 | S7 | 191, 228 | 419
ZIL131 | E12 | 274, 299 | 573
ZSU23/4 | D08 | 274, 299, 288, 303, 118, 119 | 1401
Grand Total | | | 17096

Slide 27

Target Data Characterization

This is a summary of the OCs present on the three public MSTAR target CDs (Targets, Mixed Targets, T-72 Variants), over the same collection columns as the preceding table; each row lists its nonzero entries in column order.

Target Type | Bumper Number | OC codes
2S1 | B01 | N, N, N, N
BMP2 | 9563 | N, N
BMP2 | 9566 | N, N
BMP2 | C21 | N, N
BRDM2 | E71 | N, N, N, N, Afs, Afs
BTR60 | K10YT7532 | N, N
BTR70 | C71 | N, N
D7 | 92v13015 | N, N
slicy | 1 | N, N, N, N, N, N, N, N, N
T62 | A51 | Cf, Cf
T72 | 132 | N, N
T72 | 812 | Cf, Cf
T72 M | A04 | Cf, Cf
T72 M1 | A05 | N, N
T72 M | A07 | N, N
T72 M | A10 | N, N
T72 AV | A32 | VCfr, VCfr
T72 B | A62 | VCf, VCf
T72 B | A63 | VCf, VCf
T72 BE | A64 | V, V, V, V, VAth, VAth
T72 | S7 | V, V
ZIL131 | E12 | N, N
ZSU23/4 | D08 | N, N, N, N, Atgd, Atgd

N=Nominal

A=Articulation (t=turret, g=gun, h=hatch, f=firing rack, s=sight port, d=dish)

C=Configuration (f=fuel barrels, r=reactive armor)

V=Version Variant

Slide 28

T72 Version Summary

• Version 3
– A32

• Version 2
– A62, A63, A64, S7

• Version 1
– 132, 812, A04, A05, A06, A07, A10

Slide 29

Target Sets

• T72 Nominal
– T72s of the same version and configuration as 132

• T72 EOC
– all T72s
– equal priors across versions

• Tracked Types
– all tracked types except Bulldozer (D7)
• T62, T72, BMP2, 2S1, ZSU
– equal priors across types

• Combat Types
– Tracked Types plus all wheeled types except Truck (Zil)
• T62, T72, BMP2, 2S1, ZSU, BTRs, and BRDMs
– equal priors across types

Slide 30

Confuser Sets

• None

• Slicy

• Slicy and Truck (Zil 131)

• Slicy, Truck, and Bulldozer (D7)

Slide 31

Clutter Candidate OC Dimensions

• Imaging geometry (depression, squint, ...)
– Treat the same as the Target OCs

• Clutter Features
– See Clutter Characterization

• Confusers
– Candidates include Slicy, D7, Zil Truck

See readme.txt for actual database fields

Slide 32

Clutter Characterization

• 1160 chips from MSTAR Public Release clutter, each 128 x 128 pixels

• Identifiers:
– FS image, Row, Column, Chip name

• Features (each computed per chip; sketched below)
– Clutter type
– Score
• as assigned by a nominal ATR prescreener
– Mean
– Variance
– Standard Deviation
– RMS
– Skewness
– Kurtosis
– Maximum
– Total Integral - sum of pixel magnitudes across the entire chip
– Zero-Valued Points
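A minimal sketch of these per-chip statistics, assuming chip is a 128 x 128 magnitude array already cut from a full-scene image; the released Clutter.xls holds the actual values:

    function s = chipStats(chip)
    % Per-chip clutter features as listed above (sketch, not release code).
    v = double(chip(:));                % flatten to a vector of magnitudes
    s.Mean     = mean(v);
    s.Variance = var(v);
    s.StdDev   = std(v);
    s.RMS      = sqrt(mean(v.^2));
    s.Skewness = skewness(v);           % requires the Statistics Toolbox
    s.Kurtosis = kurtosis(v);           % requires the Statistics Toolbox
    s.Maximum  = max(v);
    s.TotalIntegral    = sum(abs(v));   % sum of pixel magnitudes over chip
    s.ZeroValuedPoints = sum(v == 0);   % count of zero-valued pixels
    end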

Slide 33

Clutter Type

• Features
– Natural or Cultural
– Isolated, Edge / Corner, or Homogenous Surround

• Clutter type (C1 - C6)
– C1 = Cultural Isolated Object FA (small building, vehicle, …); 345 chips
– C2 = Natural Isolated Object FA (tree, rock, …); 310 chips
– C3 = Cultural Edge / Corner FA (things from fences, roads, …); 189 chips
– C4 = Natural Edge / Corner FA (things from tree lines, streams, …); 73 chips
– C5 = Cultural Homogenous Area FA (on a large building, parking lot, …); 122 chips
– C6 = Natural Homogenous Area FA (on a grass field, forest canopy, …); 120 chips

Slide 34

Clutter Type

Clutter Type:
              | Cultural | Natural
Isolated      | C1       | C2
Edge / Corner | C3       | C4
Homogenous    | C5       | C6

Score Averages:
              | Cultural  | Natural   | Average
Isolated      | 0.2489    | -0.07273  | 0.088085
Edge / Corner | -0.16662  | -0.17316  | -0.16989
Homogenous    | 0.08859   | -0.25345  | -0.08243
Average       | 0.0569567 | -0.166447 | -0.05475

Chip Counts:
              | Cultural | Natural | Total
Isolated      | 345      | 310     | 655
Edge / Corner | 189      | 73      | 262
Homogenous    | 122      | 120     | 242
Total         | 656      | 503     | 1159

Slide 35

Clutter Chip Examples

[Image examples, one per clutter type:
C1 - Cultural Isolated Object FA (hb06188_814_483)
C2 - Natural Isolated Object FA (hb06204_729_817)
C3 - Cultural Edge/Corner FA (hb06188_631_445)
C4 - Natural Edge/Corner FA (hb06270_124_1503)
C5 - Cultural Homogeneous FA (hb06264_722_360)
C6 - Natural Homogeneous FA (hb06183_328_269)]

Slide 36

Clutter Chip Examples

[Image examples, arranged from most target-like to least target-like. Scores shown: 2.67, 2.63, 0.46, 0.46, -0.47, -0.47, -2.50, -2.58. Chips shown: hb06242_1257_905, hb06159_1122_251, hb06252_486_832, hb06204_729_817, hb06197_183_1374, hb06161_945_1169, hb06188_631_445, hb06258_537_1266.]

Slide 37

Clutter Sets

A - Type C6 clutter; 120 chips

B - Type C3, C4, and C5 clutter; 384 chips

C - Type C2 clutter; 310 chips

D - Type C1 clutter; 345 chips

Slide 38

Initial Batch Training Data

• Target:
– T72, SN 132,
– 17 deg depression,
– 72 chips, randomly selected
– Defined by a list of image numbers

• Clutter:
– Set A clutter chips,
– 17 deg depression,
– 72 chips, randomly selected
– Defined by a list of image numbers with row/column of chip center

See readme.txt for actual image lists

Slide 39

Mission Menu Options

• Mission Definition
– Target Set
– Clutter Set
– Confuser Set
(the Clutter and Confuser Sets together supply the Nontargets)
– Prior probabilities (Tgt, Confuser, Clutter)
– Total number of images in the mission

Slide 40

Target / Nontarget

• AdaptSAPS Version 1.0 encourages consideration of one particular definition of "target"
– i.e., it has Target / NonTarget coded in create_DB.m; future versions may make this easier to change.

• The following are only used as Targets
– 2s1_gun, bmp2_tank, brdm2_truck, btr60_transport, btr70_transport, t62_tank, t72_tank, zsu23-4_gun

• The following are only used as Nontargets
– d7_bulldozer, clutter, slicey, zil131_truck

Slide 41

Mission Menu Options (con’t)

• Prior Probabilities
– The numbers are the probabilities of targets, confusers, and clutter chips
– e.g., 0.4, 0.1, 0.5
– Prior probabilities are:
• Target - the first number (e.g., 0.4)
• Nontarget - the sum of the second and third numbers (e.g., 0.6)

• Total Number of Images per Mission
– Since quartiles are scored, multiples of 4 are convenient
– Basic missions all have 120 images, but work with larger missions (even thousands of images) is also of interest

(A notional Matlab representation is sketched below.)
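As an illustration, a mission definition can be captured in a Matlab struct like the sketch below; the field names are hypothetical, and the released Mission Definition script (see Tools) is the authoritative format.

    % Hypothetical mission-definition struct (illustrative names only).
    mission = struct( ...
        'number',      2, ...
        'name',        'Baseline', ...
        'targetSet',   'T72 EOC', ...
        'clutterSet',  'B', ...
        'confuserSet', 'Slicy', ...
        'priors',      [0.4 0.1 0.5], ...  % Tgt, Confuser, Clutter
        'numImages',   120);
    pTarget    = mission.priors(1);         % 0.4
    pNontarget = sum(mission.priors(2:3));  % 0.1 + 0.5 = 0.6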

Slide 42

Basic Missions

The table below enumerates the ten pre-defined missions; a sketch of realizing a mission from its priors follows the table.

No. | Mission Name | Target Set | Clutter Set | Confuser Set | Synt | Priors (Tgt, Confuser, Clutter) | Expl | Total no. of mission images
1 | Benign | T72 Nominal | A | None | No | 0.4, 0.0, 0.6 | L1 | 120
2 | Baseline | T72 EOC | B | Slicy | N | 0.4, 0.1, 0.5 | L | 120
3 | Target-Rich | T72 EOC | B | Slicy | No | 0.6, 0.1, 0.3 | L1 | 120
4 | Target-Poor | T72 EOC | B | Slicy | No | 0.3, 0.1, 0.6 | L1 | 120
5 | Hard Clutter | T72 EOC | D | Slicy | No | 0.4, 0.1, 0.5 | L1 | 120
6 | Confusers | T72 EOC | B | Slicy, Truck, and Bulldozer | None | 0.4, 0.3, 0.3 | L1, 1 | 120
7 | Tracked Tgts | Tracked Types | B | Slicy | No | 0.4, 0.1, 0.5 | L1 | 120
8 | Wheeled Tgts | Combat Types | B | Slicy | No | 0.4, 0.1, 0.5 | L1 | 120
9 | Moderate | Tracked Types | C | Slicy and Truck | No | 0.4, 0.2, 0.4 | L1 | 120
10 | Hard | Combat Types | D | Slicy, Truck, and Bulldozer | None | 0.4, 0.3, 0.3 | L1, 1 | 120
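A minimal sketch of realizing one of these missions from its priors (variable names hypothetical; the released Mission Definition script generates the actual image lists):

    % Draw a 120-image mission from its priors (Mission 2 shown).
    priors = [0.4 0.1 0.5];           % Tgt, Confuser, Clutter
    n = 120;
    counts = round(n * priors);       % 48 targets, 12 confusers, 60 clutter
    truth  = [ones(counts(1), 1); ... % Target = 1
              zeros(counts(2) + counts(3), 1)];  % Confuser/Clutter = 0
    order  = randperm(n);             % randomize presentation order
    truth  = truth(order);            % pair with the shuffled chip list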

Slide 43

Missions

• Notes for all Missions
– We include 15-17 deg. depression and exclude >17 deg. throughout
– We don't have Articulation Variants at the included depression angles
– We're assuming that the Collection is not a significant OC
– The offline training data is not considered to be a "mission", but Mission 1 (with similar OCs) is a Mission. Adaptation is desired on Mission 1.
– The pre-defined Missions (1-10) are of interest, but a given user's approach may suggest other, more appropriate, missions; those are of interest also.
• e.g., a particular approach may focus on version variants only, or use many more images per mission, or ...

Slide 44

Methodology - Tools

See readme.txt for tool installation, set-up, and execution

[Diagram: MSTAR Data and the Mission Definition feed AdaptSAPS, which exchanges test images, truth, and ProbTgt with the SUT and produces MOPs.]

Slide 45

AdaptSAPS Consists of …

• This briefing

• The MSTAR Public Release data
– Available from SDMS at https://www.mbvlab.wpafb.af.mil/public/sdms/datasets/mstar/overview.htm
– Includes MSTAR Clutter, MSTAR Targets, MSTAR/IU T-72 Variants, and MSTAR/IU Mixed Targets

• Batch Training Set
– Defined in readme.txt, which lists target and clutter chip identifiers

• Tools
– Documentation in readme.txt
– Installation and setup
• Clutter chip generation from full-scene clutter images
• Database generation for target, confuser, and clutter Operating Conditions
• Mission Definition - Matlab script for generating image lists from the parameters for enumerated missions
• Spreadsheet with Clutter Characterization information (Clutter.xls)
– Execution, including
• Main (run_missions.m)
• Example SUT (egSutInit.m, egSutExploit.m, egSutAdapt.m)
• Server of test images and truth (sarOracle.m)
• Performance Measures (getMOPs.m)

Slide 46

Reporting

• Publications utilizing the AdaptSAPS Challenge Problem are encouraged to:
– Acknowledge the AdaptSAPS SDMS web site.
– Include the missions defined here as examples, in the sequence order 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
– Include other user-defined missions and orders of missions
– Include MOPA, E, and D (as reported by AdaptSAPS) for each mission
– Describe the methods used and the extent to which development was done outside the AdaptSAPS set-up

• Because the data is not sequestered, the AdaptSAPS process attempts to force adaptation by controlling the presentation of information, but the lack of sequestration remains a concern. The legitimacy of adaptive performance claims may be best supported by a description of the approach with sufficient detail to allow duplication of results.

Slide 47

Future Version Considerations

• Exploitation Model Implementation
– Including information about priors

• Synthetic Effects
– Note: synthetic effects apply to Target, Confuser, and Clutter Data

– Noise Level

– Shadow

• Confidence Intervals for MOPA and its components

• Methodology
– May not provide an initial batch training set

– May provide more detailed truth (e.g., target type, aspect, …)

– May score more detailed reports (e.g., target type, aspect, …)

– May not provide any truth (i.e., unsupervised)

– May provide imagery / truth on a predetermined schedule rather than on-demand

– May provide imagery in sets rather than as individual images

Please provide suggestions for Version 2.0