Hyperspectral Imaging System Modeling


John P. Kerekes and Jerrold E. Baum

■ To support hyperspectral sensor system design and parameter trade-off investigations, Lincoln Laboratory has developed an analytical end-to-end model that forecasts remote sensing system performance. The model uses statistical descriptions of scene class reflectances and transforms them to account for the effects of the atmosphere, the sensor, and any processing operations. System-performance metrics can then be calculated on the basis of these transformed statistics. The model divides a remote sensing system into three main components: the scene, the sensor, and the processing algorithms. Scene effects modeled include the solar illumination, atmospheric transmittance, shade effects, adjacency effects, and overcast clouds. Modeled sensor effects include radiometric noise sources, such as shot noise, thermal noise, detector readout noise, quantization noise, and relative calibration error. The processing component includes atmospheric compensation, various linear transformations, and a number of operators used to obtain detection probabilities. Models have been developed for several imaging spectrometers, including the airborne Hyperspectral Digital Imagery Collection Experiment (HYDICE) instrument, which covers the reflective solar spectral region from 0.4 to 2.5 µm. This article presents the theory and operation of the model, and provides example parameter trade studies to show the utility of the model for system design and sensor operation applications.

Hyperspectral imaging (HSI) systems are being applied to a number of areas, including the environment, land use, agricultural monitoring, and defense. Because it uniquely captures spatial and spectral information, hyperspectral imagery is often processed by traditional automated image processing tools as well as analyst-interactive approaches derived from spectroscopy. HSI products often contain both quantitative and qualitative information, arranged in an image to show the spatial relationships present in a scene.

The interaction of spatial and spectral information, the dependence on ancillary or library information in the processing, and the wide range of possible HSI products prevent the use of a single instrument metric, or even several, to characterize system performance in a general manner. Thus design and sensitivity analyses of hyperspectral systems require a more comprehensive approach than traditional imaging systems do.

This goal of capturing the effects of the entire sensing and processing chain motivates our HSI system modeling. We are interested in developing tools to understand the sensitivities and relative importance of various HSI system parameters in achieving a level of performance in a given application. Examples include understanding how sensitive the detection of a subpixel object is to atmospheric haze or instrument noise. Another aspect is understanding which system parameters are most important and in what situations. Having a comprehensive modeling capability is key to exploring these issues.


To support quick assessments of these kinds of sensitivity analyses, we have pursued a statistical parametric modeling approach, based on earlier work [1], as opposed to a physics-based system simulation method [2–4]. The method described in this article can be run quickly and efficiently through a large number of parameter configurations to understand these sensitivities.

End-to-End Remote Sensing System Model

The end-to-end remote sensing system model includes all the elements in the scene (illumination, surface, and atmospheric effects), the sensor (spatial, spectral, and radiometric effects), and the processing algorithms (calibration, feature selection, and application algorithm) that produce a data product. Figure 1 presents an overview of the model.

The underlying premises of the model are that the various surface classes of interest, such as trees or roads, can be represented by first- and second-order spectral statistics, and that the effects of various processes in the end-to-end spectral imaging system can be modeled as transformations and functions of those statistics.

The model is driven by an input set of system parameter descriptions that define the scenario, including the scene classes, atmospheric state, sensor characteristics, and processing algorithms. Table 1 contains a list of model parameters and options available, as well as their symbols used in this article.

These parameters are used in analytical functions to transform the spectral reflectance first- and second-order statistics of each surface class through the spectral imaging process. The spectral mean and spectral covariance matrix of each class are propagated from reflectance to spectral radiance to sensor signals, and finally to features, which are operated on to yield a metric of system performance. The following sections describe the scene, sensor, and processing modules of the model.
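
Every stage of this propagation is a transformation of a mean vector and a covariance matrix. As a minimal illustration of the pattern (a Python sketch with invented numbers; this is not FASSP code, which, as described later, is written in IDL), an affine map y = Ax + b carries the statistics forward as follows:

```python
import numpy as np

def propagate_affine(mean, cov, A, b):
    """Propagate first- and second-order statistics through y = A @ x + b.

    For an affine transformation, the mean maps to A @ mean + b and the
    covariance maps to A @ cov @ A.T -- the pattern applied at each stage
    of the end-to-end model (reflectance -> radiance -> signal -> features).
    """
    return A @ mean + b, A @ cov @ A.T

# Toy 3-channel example: a diagonal "gain" (e.g., illumination and
# transmittance) plus an additive offset (e.g., path radiance).
rho = np.array([0.20, 0.35, 0.30])      # mean reflectance (invented)
Gamma = np.diag([0.01, 0.02, 0.015])    # reflectance covariance (invented)
A = np.diag([55.0, 40.0, 25.0])         # illustrative per-channel gain
b = np.array([8.0, 5.0, 2.0])           # illustrative per-channel offset

L_mean, L_cov = propagate_affine(rho, Gamma, A, b)
print(L_mean)
print(np.diag(L_cov))
```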

Scene Module

The end-to-end remote sensing system model considers a scene to consist of one or more background classes and an object class. The user supplies the proportion of the scene filled by each background class and the fraction of a pixel occupied by the object class. Each class is described by its first- and second-order spectral reflectance statistics (mean vector and covariance matrix). With user-supplied descriptions of the atmosphere and the observation geometry, an atmospheric code transforms weighted combinations of these reflectance vectors and matrices into surface-reflected and path-scattered radiances.

FIGURE 1. Block diagram of the end-to-end remote sensing system model. Surface classes, such as trees or roads, can be represented by first- and second-order spectral statistics. The effects of various processes in the end-to-end spectral imaging system can be modeled as transformations and functions of those statistics.

[Figure 1 diagram: user inputs (scene description, sensor settings, processing settings) drive the scene, sensor, and processing modules, which draw on a reflectance statistics library, sensor parameter files, and processing algorithm descriptions; the modules pass spectral radiance statistics and spectral signal statistics forward to produce performance metrics.]


Table 1. Input System Parameters in Model

Scene

Total number of background classes M (≥ 1)

Area fraction of scene occupied by class m 0 ≤ fm ≤ 1, Σfm = 1

Pixel fraction occupied by subpixel object 0 ≤ fT ≤ 1

Object fraction in shadow 0 ≤ fS ≤ 1

Fraction of sky visible from object in shadow 0 ≤ fsky ≤ 1

Spectral covariance scaling factor for class m gm

Solar zenith angle 0 ≤ θs < 90°

Atmospheric model Tropical, midlatitude summer, midlatitude winter, subarctic summer, subarctic winter, 1976 U.S. standard

Meteorological range (visibility) V

Aerosol model Rural or urban

Cloud at 2.4 km altitude Yes or no

Sensor

Sensor type HYDICE, Hyperion, and others

Number of spectral channels K

Channel wavelength, bandwidth λ, ∆λ

Spectral quantum efficiency η

Spectral optics transmittance τ

Pixel integration time t

Saturation spectral radiance Lmax

Number of radiometric bits Q

Sensor platform altitude z

Sensor view angle 0° ≤ θv < 90° (nadir = 0°)

Sensor noise factor gn

Relative calibration error cR

Data bit-error rate Be

Processing

Number of features F

Spectral regions for use as features Wavelength regions

Feature selection algorithm Contiguous regions, principal components, band averaging

Atmospheric compensation Empirical line method (ELM) or none

Performance algorithm metric Constrained energy minimization (CEM) spectral matched filter, total error, spectral characterization accuracy

Desired false-alarm rate 10^-6 < PFA < 10^-2


These radiances are then combined to produce the mean and covariance statistics of the spectral radiance at the input aperture of a spectral imaging sensor. The scene geometry, reflectance inputs, transformations, and at-sensor radiances are detailed below.

Scene Geometry and Subpixel Object Model

The model assumes a simple area-weighted linear mixing model for a subpixel object within a scene that contains M background classes, as shown in Figure 2. Each background class m occupies a fraction fm of the scene, with the constraint that the fractions sum to one. The background class containing the subpixel object is denoted m*. It is important to note that this model does not actually simulate a specific spatial layout; rather, it accounts for the effects of the multiple background classes through the area-weighting scheme.

A simple linear model is assumed for the object-class pixel. The subpixel fraction fT, with 0 ≤ fT ≤ 1, defines the fractional area of the pixel occupied by the object with direct line of sight to the sensor. Parts of the object occluded by the background are accounted for in the background fraction.

Input Reflectance Statistics

The object and background spectral reflectance statistics are computed externally and provided as inputs to the model. They may be derived from field spectrometers, laboratory measurements, airborne spectrometer imagery converted to reflectance, or physics-based simulations. For each class, the input statistics consist of a spectral mean reflectance vector ρ and a spectral reflectance covariance matrix Γρ.

The model assumes that the reflectance distribution of each class, background or object, is unimodal. Thus the data used to compute the statistics must be carefully screened, through spectral clustering or histogram techniques, to ensure they form a cluster around a single mean point in the multidimensional space. In some cases, a single terrain category needs to be separated into multiple reflectance classes to ensure that each class is unimodal. For example, the single category "grass" may need to be split into classes of "dry grass" and "healthy grass," each with a different mean reflectance. Also, the model considers the reflectance vectors to be hemispherical reflectance factors for completely diffuse surfaces. Effects related to the more complicated structure of the bidirectional reflectance distribution function are not considered.

Atmospheric Radiance and Transmittance

The model uses the Air Force Research Laboratory code MODTRAN [5] to compute the solar illumination and atmospheric effects. A number of calls are made to the code to calculate the various radiance vectors used to transform the reflectance statistics to radiance statistics. For convenience in the current version of the software, the sensor channel spectral response functions are convolved with the spectral radiances immediately after each MODTRAN run is completed. Thus the spectral radiance vectors at the output of the scene model have the same dimensionality as the sensor.

Mean Spectral Radiance†

The total mean spectral radiance for each class is the sum of LS(ρ), the total surface-reflected radiance (diffuse and direct) for a mean reflectance ρ, and LP(ρave), the path-scattered radiance (adjacent and path) calculated with the scene average reflectance ρave as the surface reflectance. Figure 3 shows the paths for these various radiance components.

† All radiance calculations are performed as functions of wavelength, but for clarity in presentation, the subscript λ has been dropped.

FIGURE 2. Notional scene geometry with multiple background classes and a single subpixel object. This model assumes a simple area-weighted linear mixing model for a subpixel object within a scene that contains one or more background surface classes.



This formulation for the path radiance models the "adjacency effect," which is discussed below. The model has been developed for sensors operating in the reflective solar portion of the optical spectrum, with scenes near room temperature. Thermal emission effects are not considered.

Background Classes. A separate call to MODTRAN is made for each background class m, as well as one for the background scene average. The total mean spectral radiance LB for each case is

L_{B_m} = L_S(\rho_m) + L_P(\rho_{ave}),

and

L_{B_{ave}} = L_S(\rho_{ave}) + L_P(\rho_{ave}).

The scene average reflectance ρave is computed by using the class fractions fm:

\rho_{ave} = \sum_{m=1}^{M} f_m \, \rho_m .    (1)

FIGURE 3. Sources of illumination and their paths from the sun to the scene and into the sensor. The total surface-reflected radiance arriving at the sensor consists of direct and diffuse components. The total path-scattered radiance arriving at the sensor consists of adjacent and path components.

Object Class. The mean spectral radiance L̃T for the object class is

\tilde{L}_T = L_S(\tilde{\rho}_T) + L_P(\rho_{ave}).    (2)

The surface reflectance used in the MODTRAN call to generate the first term of Equation 2 is computed as

\tilde{\rho}_T = f_T \, \rho_T + (1 - f_T) \, \rho_{m^*} .

The weighted sum of the object-class mean reflectance and the background-class m* mean reflectance implements the linear model described above.
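
As a concrete illustration, the following sketch evaluates Equation 1 and the mixing rule above with invented two-class, three-channel reflectance values (illustrative numbers only, not library statistics):

```python
import numpy as np

f = np.array([0.7, 0.3])                 # background area fractions (sum to 1)
rho_b = np.array([[0.05, 0.10, 0.30],    # class 1 mean reflectance (invented)
                  [0.20, 0.25, 0.28]])   # class 2 mean reflectance (invented)

rho_ave = f @ rho_b                      # Equation 1: sum_m f_m * rho_m

f_T = 0.5                                # pixel fraction occupied by object
m_star = 0                               # index of class containing the object
rho_T = np.array([0.15, 0.30, 0.35])     # object mean reflectance (invented)
rho_T_mixed = f_T * rho_T + (1 - f_T) * rho_b[m_star]   # linear mixing rule
print(rho_ave, rho_T_mixed)
```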

Path Radiance Calculation. For all mean spectral radiance calculations, the path-scattered contribution LP is calculated by using the scene fractional area-weighted average reflectance ρave, as shown in Equation 1. (Note that when MODTRAN runs with the "multiple scattering" option on, the atmospheric path-scattered radiance term depends on surface reflectance.) This approach accounts for atmospheric adjacency effects caused by the scattering of nearby surface-reflected radiance into the sensor's instantaneous field of view. The assumption here is that the scattering scale of the effect covers the entire scene being collected. Studies have shown this adjacency effect can occur out to several hundred meters [6], typical of the scenes considered by the model.

Spectral Radiance Covariance

The transformation of the spectral reflectance covariance statistics Γρ to spectral radiance covariance statistics ΓL follows the same linear atmospheric model assumed in the mean calculations. We interpolate spectral radiances calculated for surface reflectances equal to zero and one by using the entries of the reflectance covariance matrices.

These transformations use the following diagonal matrices, with the described vectors along the diagonals and zeros elsewhere: ΛLS1 is the total surface-reflected spectral radiance for a surface reflectance of 1, ΛLP1 is the atmospheric path-scattered spectral radiance for a surface reflectance of 1, and ΛLP0 is the atmospheric path-scattered spectral radiance for a surface reflectance of 0.

Background Classes. The background spectral radiance covariance matrices for each background class m and the scene average are computed as

\Gamma_{L_{B_m}} = \Lambda_{LS1} \, \Gamma_{\rho_{B_m}} \Lambda_{LS1} + [\Lambda_{LP1} - \Lambda_{LP0}] \, \Gamma_{\rho_{B_{ave}}} [\Lambda_{LP1} - \Lambda_{LP0}],

and

\Gamma_{L_{B_{ave}}} = \Lambda_{LS1} \, \Gamma_{\rho_{B_{ave}}} \Lambda_{LS1} + [\Lambda_{LP1} - \Lambda_{LP0}] \, \Gamma_{\rho_{B_{ave}}} [\Lambda_{LP1} - \Lambda_{LP0}].

Object Class. The object-class spectral radiance covariance matrix ΓLT is computed by using the total surface-reflected radiance output from MODTRAN:

\Gamma_{L_T} = f_T^2 \, \Lambda_{LS1} \, \Gamma_{\rho_T} \Lambda_{LS1} + (1 - f_T)^2 \, \Lambda_{LS1} \, \Gamma_{\rho_{B_{m^*}}} \Lambda_{LS1} + [\Lambda_{LP1} - \Lambda_{LP0}] \, \Gamma_{\rho_{B_{ave}}} [\Lambda_{LP1} - \Lambda_{LP0}].
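
The sketch below shows this object-class covariance transformation in code form; the radiance vectors standing in for the MODTRAN outputs are invented for illustration:

```python
import numpy as np

def radiance_cov_object(f_T, Gamma_T, Gamma_Bstar, Gamma_Bave,
                        L_S1, L_P1, L_P0):
    """Sketch of the object-class radiance covariance transformation.

    L_S1, L_P1, L_P0 are per-channel radiance vectors for surface
    reflectances of 1 and 0; they form the diagonal matrices
    Lambda_LS1, Lambda_LP1, and Lambda_LP0 of the text.
    """
    Lam_S1 = np.diag(L_S1)
    dLam_P = np.diag(L_P1 - L_P0)
    return (f_T**2 * Lam_S1 @ Gamma_T @ Lam_S1
            + (1 - f_T)**2 * Lam_S1 @ Gamma_Bstar @ Lam_S1
            + dLam_P @ Gamma_Bave @ dLam_P)

# Illustrative two-channel numbers (not MODTRAN outputs).
G = np.array([[0.010, 0.002], [0.002, 0.020]])
cov = radiance_cov_object(0.5, G, G, G,
                          L_S1=np.array([60.0, 40.0]),
                          L_P1=np.array([12.0, 6.0]),
                          L_P0=np.array([10.0, 5.0]))
print(cov)
```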

Sensor Module

The sensor module takes the spectral radiance mean and covariance statistics of the various ground classes and applies sensor effects to produce signal mean and covariance statistics that describe the scene as imaged by an imaging spectrometer. The sensor module includes a limited number of radiometric noise sources, with no spatial or spectral sources of error. Also, as we noted earlier, the channel spectral response of the sensor is applied during the input radiance calculations described in the previous section.

Radiometric noise processes are modeled by adding variance to the diagonal entries of the spectral covariance matrices. Off-diagonal entries are not modified because it is assumed that no channel-to-channel correlation exists in the noise processes.

The radiometric noise sources come from detector noise processes, including photon (shot) noise, thermal noise, and multiplexer/readout noise [7]. The total detector noise σn is then calculated as the root-sum-square of the photon noise, the thermal noise, and the multiplexer/readout noise. Because detector parameters are often specified in terms of electrons, the noise terms are root-sum-squared in that domain before being converted to noise-equivalent spectral radiance.

After the total detector noise σn (in electrons) has been converted back to the noise-equivalent spectral radiance σLn, it is scaled by a user-specified noise factor gn. Next, (gn σLn)² is added to the diagonal entries of the spectral covariance matrices for each sensor spectral channel.
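
A minimal sketch of this bookkeeping follows; the per-channel responsivity used to convert between electrons and radiance is a hypothetical quantity introduced here for illustration, not a FASSP parameter:

```python
import numpy as np

# Root-sum-square the detector noise terms in electrons, convert to
# noise-equivalent spectral radiance, scale by the user noise factor g_n,
# and add the variance to the covariance diagonal.
def add_detector_noise(Gamma_L, n_shot_e, n_thermal_e, n_readout_e,
                       electrons_per_radiance, g_n=1.0):
    sigma_n_e = np.sqrt(n_shot_e**2 + n_thermal_e**2 + n_readout_e**2)
    sigma_Ln = sigma_n_e / electrons_per_radiance   # electrons -> radiance
    Gamma_out = Gamma_L.copy()
    Gamma_out[np.diag_indices_from(Gamma_out)] += (g_n * sigma_Ln) ** 2
    return Gamma_out

Gamma_L = np.diag([4.0, 2.5])                       # radiance covariance (invented)
noisy = add_detector_noise(Gamma_L,
                           n_shot_e=np.array([300.0, 250.0]),
                           n_thermal_e=np.array([100.0, 100.0]),
                           n_readout_e=np.array([50.0, 50.0]),
                           electrons_per_radiance=np.array([900.0, 700.0]))
print(np.diag(noisy))
```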

Another noise source is relative calibration error cR. This error is also assumed to be uncorrelated between spectral channels; it is described by its standard deviation σcR as a percentage of the mean signal level. Expressed as a variance, it is added to the diagonal entries of the covariance matrices of each class, as with the other noise sources.

The last two noise sources are quantization noise in the analog-to-digital conversion and bit errors in the communications or data recording system. These sources depend on the assumed dynamic range of the sensor Lmax. The quantization error variance σnq² is calculated for a system with Q radiometric bits as


\sigma_{nq}^2 = \frac{1}{12} \left[ \frac{L_{max}}{2^Q - 1} \right]^2 .

The model for the effect of bit errors in the data link (or onboard storage) assumes that bit errors are uniformly distributed across the data word and could be of either sign. Thus, for Q bits, the error will take on one of 2Q values, ±2^q for q = 0, …, Q − 1, with equal probability of 1/(2Q). The noise variance σnBe², caused by a bit-error rate of Be, is

\sigma_{nB_e}^2 = \frac{B_e}{Q} \sum_{q=0}^{Q-1} \frac{2^{2q} \, L_{max}^2}{(2^Q - 1)^2} .

These last two noise terms are also added to the diagonal entries of the spectral covariance matrices.

For reporting as a performance metric, the class-dependent sensor signal-to-noise ratio is calculated as the ratio between the mean signal and the square root of the sum of the noise variance terms.
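
The following sketch evaluates the two digital noise terms above and the resulting signal-to-noise ratio for illustrative values of Q, Lmax, Be, and the mean signal (all assumed here, not taken from a modeled sensor):

```python
import numpy as np

Q = 12                     # radiometric bits
L_max = 100.0              # saturation spectral radiance (arbitrary units)
B_e = 1e-9                 # bit-error rate

lsb = L_max / (2**Q - 1)                               # one quantization step
var_quant = lsb**2 / 12.0                              # quantization noise

q = np.arange(Q)
var_biterr = (B_e / Q) * np.sum(2.0**(2 * q)) * lsb**2 # bit-error noise

mean_signal = 30.0
total_noise_var = var_quant + var_biterr               # plus detector terms
snr = mean_signal / np.sqrt(total_noise_var)           # in the full model
print(var_quant, var_biterr, snr)
```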

Processing Module

The signal means and covariances computed by the sensor module are then transformed by options within the processing module to produce outputs for evaluating the modeled hyperspectral system. An atmospheric compensation algorithm may be applied to the signal statistics to retrieve class reflectance statistics, reduced-dimensionality feature vectors may be extracted from the class signal vectors, and scalar performance metrics may be calculated. Each of these processing options is described below.

Atmospheric Compensation

Atmospheric compensation is accomplished by defining surrogate low- and high-reflectance calibration panels and computing the slope and offset of a two-point linear fit between the mean panel signals and the known reflectances. This approach models the empirical line method (ELM) often used for atmospheric compensation. The slopes and offsets are applied to the mean signal and covariance matrices of the object and background classes to compute the retrieved (or estimated) class reflectance mean ρ̂ and covariance Γ̂ statistics.
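
A minimal sketch of this two-point fit, with invented panel reflectances and mean signals, is shown below; the helper name compensate is ours, not a FASSP routine:

```python
import numpy as np

rho_dark, rho_bright = 0.05, 0.60        # known panel reflectances (invented)
s_dark = np.array([6.0, 4.0, 2.5])       # mean signals over dark panel
s_bright = np.array([45.0, 38.0, 24.0])  # mean signals over bright panel

slope = (rho_bright - rho_dark) / (s_bright - s_dark)  # per-channel gain
offset = rho_dark - slope * s_dark                     # per-channel offset

def compensate(mean_sig, cov_sig):
    """Apply the ELM gains and offsets to class signal statistics."""
    rho_hat = slope * mean_sig + offset
    Gamma_hat = np.diag(slope) @ cov_sig @ np.diag(slope)
    return rho_hat, Gamma_hat

rho_hat, Gamma_hat = compensate(np.array([20.0, 18.0, 12.0]), np.eye(3) * 0.4)
print(rho_hat)
```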

Feature Selection

Several options exist for extracting a reduced-dimensionality feature vector F from the signal vector: (1) all channels within contiguous regions (e.g., to avoid water-vapor absorption spectral regions), (2) principal components, and (3) band averaging to simulate multispectral channels. Each option is implemented as a linear transformation by applying an appropriate feature-selection matrix Ψ to the mean vectors and covariance matrices of the object class and each background class. This matrix Ψ can be applied in either the retrieved reflectance domain (if atmospheric compensation was performed) or the signal domain (directly on the statistics output by the sensor model):

F = \Psi^T X    (3)

and

\Gamma_F = \Psi^T \, \Gamma_X \, \Psi ,    (4)

where X refers to the signal type (retrieved reflectance or sensor signal) for the object class and each background class.
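
As an illustration of Equations 3 and 4, the sketch below builds a band-averaging matrix Ψ that collapses six channels into two simulated multispectral bands (the numbers are invented):

```python
import numpy as np

K, F = 6, 2                              # channels, features
Psi = np.zeros((K, F))
Psi[:3, 0] = 1.0 / 3.0                   # feature 1: mean of channels 0-2
Psi[3:, 1] = 1.0 / 3.0                   # feature 2: mean of channels 3-5

X = np.linspace(10.0, 20.0, K)           # a class mean signal vector
Gamma_X = np.eye(K) * 0.5                # a class signal covariance

F_mean = Psi.T @ X                       # Equation 3
Gamma_F = Psi.T @ Gamma_X @ Psi          # Equation 4
print(F_mean, np.diag(Gamma_F))
```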

Performance Metrics

Three algorithms are available to determine a performance metric for a given scenario: (1) spectral characterization accuracy, a measure of how well the spectral reflectance can be retrieved from the sensor measurements; (2) a version of a spectral matched filter, known as constrained energy minimization (CEM) [8], that can be used to predict probability of detection versus probability of false-alarm curves (PD/PFA); and (3) total error, which approximates the sum of false-alarm and missed-detection probabilities to produce a scalar performance metric.

The first performance metric, spectral characterization accuracy, is quantified by both the mean difference SCbias between the retrieved surface reflectance of the object and its initial known reflectance, and by the standard deviation σSC of the difference for each spectral channel l:

SC_{bias}(l) = \hat{\rho}(l) - \rho(l),

and


\sigma_{SC}(l) = \sqrt{ \sigma_{\hat{\rho}}^2(l) - \sigma_{\rho}^2(l) } .

The second performance metric, the matched filter, uses a known object spectral "signature" and an estimate of the background spectral covariance to minimize the energy from the background and to emphasize the desired object. In the model implementation, the known "signature" is the object's original mean spectral reflectance used at the input to the model. The filter operator w is

w = \frac{ \hat{\Gamma}_{FB_{ave}}^{-1} \, (\rho_{FT} - \hat{\rho}_{FB_{ave}}) }{ (\rho_{FT} - \hat{\rho}_{FB_{ave}})^T \, \hat{\Gamma}_{FB_{ave}}^{-1} \, (\rho_{FT} - \hat{\rho}_{FB_{ave}}) } ,

where the subscript F indicates the signal means and covariances have been transformed to the desired feature space by using Equations 3 and 4. The filter is applied to the combined object/background class features and to the features of each background class. The operator w transforms the mean and covariance from each class feature space to a scalar test statistic with mean θ and variance σθ²:

\theta_T = w^T (\rho_{FT} - \hat{\rho}_{FB_{ave}}),

\theta_{B_m} = w^T (\hat{\rho}_{FB_m} - \hat{\rho}_{FB_{ave}}) \quad \text{for } m = 1, \ldots, M,

\sigma_{\theta_T}^2 = w^T \hat{\Gamma}_{FT} \, w,

and

\sigma_{\theta_{B_m}}^2 = w^T \hat{\Gamma}_{FB_m} \, w \quad \text{for } m = 1, \ldots, M.
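
The sketch below assembles the filter w and the test-statistic moments from the expressions above, using invented feature-space statistics in place of model outputs:

```python
import numpy as np

def cem_filter(rho_FT, rho_FBave, Gamma_FBave):
    """CEM filter: Gamma^-1 d normalized by d^T Gamma^-1 d, with
    d = rho_FT - rho_FBave."""
    d = rho_FT - rho_FBave
    Gi_d = np.linalg.solve(Gamma_FBave, d)
    return Gi_d / (d @ Gi_d)

rho_FT = np.array([0.30, 0.45])                 # object feature mean (invented)
rho_FBave = np.array([0.10, 0.20])              # scene-average background mean
Gamma_FBave = np.array([[0.02, 0.005], [0.005, 0.03]])
Gamma_FT = np.diag([0.015, 0.025])              # object feature covariance

w = cem_filter(rho_FT, rho_FBave, Gamma_FBave)
theta_T = w @ (rho_FT - rho_FBave)              # equals 1 by the normalization
sigma2_T = w @ Gamma_FT @ w                     # object test-statistic variance
print(w, theta_T, sigma2_T)
```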

After this transformation, the PD/PFA curve can then be calculated. The probability of detection PDm (computed separately for each background class m) is computed for a user-specified probability of false alarm PFA by assuming a Gaussian probability density function for the test statistic output. This assumption is somewhat justified by the Central Limit Theorem because the operator is a summation of a large number of random variables. The threshold hm is determined from the variance σθBm² and mean θBm for each background class m and the desired probability of false alarm:

h_m = \theta_{B_m} + \sigma_{\theta_{B_m}} \, \Phi^{-1}(P_{FA}).

The function Φ^-1 returns the cutoff value such that the area under the standard normal curve to the right of the cutoff is equal to the argument of the function. Then, using hm, the probability of detecting the object in background class m is

P_{D_m} = \int_{h_m}^{\infty} \frac{1}{\sqrt{2\pi \, \sigma_{\theta_T}^2}} \exp\!\left( -\frac{(x - \theta_T)^2}{2 \sigma_{\theta_T}^2} \right) dx .

For scenarios with multiple backgrounds, the threshold h* yielding the minimum PD is used to recompute the false-alarm probabilities for the other classes. These new probabilities are then summed by using the background class area fractions fm to yield a combined PFA:

P_{FA} = \sum_{m=1}^{M} f_m \, P_{FA_m}(h^*) .

The combined PD is simply the minimum PDm:

P_D = \min_m P_{D_m} .
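
Under the Gaussian assumption, these threshold and probability calculations reduce to standard normal tail integrals, as the following sketch shows for two hypothetical background classes (scipy's norm.isf plays the role of Φ^-1):

```python
import numpy as np
from scipy.stats import norm

P_FA = 1e-5
theta_T, sigma_T = 1.0, 0.35            # object test-statistic stats (invented)
theta_B = np.array([0.0, 0.1])          # per-background means
sigma_B = np.array([0.20, 0.25])        # per-background standard deviations
f_m = np.array([0.7, 0.3])              # background area fractions

h = theta_B + sigma_B * norm.isf(P_FA)  # h_m = theta_Bm + sigma_Bm * Phi^-1(P_FA)
P_Dm = norm.sf(h, loc=theta_T, scale=sigma_T)   # P(statistic > h_m | object)

h_star = h[np.argmin(P_Dm)]                     # threshold giving minimum P_D
P_FA_combined = f_m @ norm.sf(h_star, loc=theta_B, scale=sigma_B)
P_D_combined = P_Dm.min()
print(P_D_combined, P_FA_combined)
```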

The third performance metric included in the model approximates the total error Pe (i.e., the overlap between multivariate distributions) in a two-class, equal a priori probability case:

P_e \approx \frac{1}{2} (1 - P_D) + \frac{1}{2} P_{FA} .

Pe is approximated by using the standard normal error function and the Bhattacharyya distance Bdist in the following function, which was found to provide a good estimate of the true error value [9]:

P_e = \int_{B_{dist}}^{\infty} \frac{1}{\sqrt{2\pi}} \exp\!\left( -\frac{x^2}{2} \right) dx ,

where

B_{dist} = \frac{1}{8} (F_T - F_B)^T \left[ \frac{\Gamma_{FT} + \Gamma_{FB}}{2} \right]^{-1} (F_T - F_B) + \frac{1}{2} \ln \frac{ \left| \frac{1}{2}(\Gamma_{FT} + \Gamma_{FB}) \right| }{ \sqrt{ |\Gamma_{FT}| \, |\Gamma_{FB}| } } .
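
A sketch of the Bhattacharyya distance and this total-error approximation, with invented two-feature statistics, follows:

```python
import numpy as np
from scipy.stats import norm

def bhattacharyya(mu_T, Gamma_T, mu_B, Gamma_B):
    """Bhattacharyya distance between two Gaussian class descriptions."""
    Gamma_avg = 0.5 * (Gamma_T + Gamma_B)
    d = mu_T - mu_B
    term1 = 0.125 * d @ np.linalg.solve(Gamma_avg, d)
    term2 = 0.5 * np.log(np.linalg.det(Gamma_avg) /
                         np.sqrt(np.linalg.det(Gamma_T) * np.linalg.det(Gamma_B)))
    return term1 + term2

mu_T = np.array([0.30, 0.45]); Gamma_T = np.diag([0.015, 0.025])
mu_B = np.array([0.10, 0.20]); Gamma_B = np.diag([0.020, 0.030])

B_dist = bhattacharyya(mu_T, Gamma_T, mu_B, Gamma_B)
P_e = norm.sf(B_dist)      # upper-tail normal integral from B_dist, as above
print(B_dist, P_e)
```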


While Pe does not distinguish between errors caused by false alarms and those caused by missed detections, it does provide a single scalar metric that can be used for relative performance comparisons. It is normally used in the model to assess the relative contribution to system error from the various system parameters, as shown in the section on example results.

Implementation

The model has been implemented as part of a software package named Forecasting and Analysis of Spectroradiometric System Performance (FASSP). The package is written in the Interactive Data Language (IDL) of Research Systems Incorporated to take advantage of IDL's integrated graphical user interface and plotting capabilities, as well as the portability between computing platforms. The model has been set up to run on UNIX, PC, and Macintosh platforms. A typical parameter trade study takes only a few minutes to run.

FASSP contains several routines for reading the information necessary for executing runs. The material reflectance statistics and the sensor models are stored in ASCII files. The parameters that define a model run are stored outside of the FASSP environment in ASCII files called parameter files. Parameter files can be created and edited either inside FASSP or outside and then imported.

When FASSP is run, the individual scene, sensor, and processing modules (see Figure 1) are executed in series, without user interaction, using the model values in the parameter file as their inputs. It is not unusual to test several different sensor parameters or processing methods against a single scenario. In this case, the scene module results are static and only the sensor and processing modules need to be repeated. Because executing the scene module dominates the model run time, this approach allows analyses to be conducted quickly.

Run output is saved in an IDL native binary format. The output of a model run can be restored either within FASSP or it can be restored and manipulated in IDL outside the FASSP environment. Graphical displays are also available for plotting results from various stages of the model, such as material reflectances, at-sensor spectral radiance, signal-to-noise ratio, PD/PFA curves, and other processing metrics. FASSP allows the plots to be saved in JPEG, PostScript, or ASCII formats.

Validation

The model and its implementation have been validated with airborne hyperspectral imagery. A recent publication [10] provides comparisons of measured data and model predictions at points in the end-to-end spectral imaging process. Comparisons include the spectral radiance at the sensor input aperture, the sensor signal-to-noise ratio, and the detection performance (PD/PFA) after applying a spectral matched filter. At all points in the process, the model predictions compare favorably with the empirical data.

Example Results

One advantage of an end-to-end system model is that the user controls all of the scene, sensor, and processing parameters and can specify nearly arbitrary configurations. With these capabilities, our model can be applied to investigate which parameters have the most impact on a system's performance and to study the sensitivity of a given system's performance to changes in the scene and/or sensor. We present one example of a "relative impact" analysis, followed by two examples of "sensitivity" analyses.

Relative Roles of System Parameters

Our model includes an option to study automatically the relative roles of system parameters in a quantitative manner. We considered, as an example, the problem of detecting subpixel roads in a forested background. This application could arise in the use of moderate spatial-resolution hyperspectral imagery to derive a road network layer for a Geographic Information System (GIS). For our example, we used a sensor model of a generic hyperspectral imager. Reflectance statistics for a gravel road and an area with trees were derived from atmospherically compensated radiance data collected by the airborne Hyperspectral Digital Imagery Collection Experiment (HYDICE) instrument [11].

The analysis was conducted by calculating the total probability of error Pe with all parameters at their nominal values, and then recalculating Pe as each of the parameters was changed to an "excursion" value, while leaving all other parameters at their nominal values.


The parameter excursion values were chosen to represent an ideal value or one that would have a minimal effect on system performance. Table 2 lists the system parameters studied, their nominal values, their excursion values, the model-calculated probability of error, and a quantitative estimate of the relative importance to system performance of each parameter. The Pe metric was used as a measure of the separability of the subpixel road class and the forest background. The relative importance of each parameter was computed by taking the difference between the error calculated by using the excursion value of that parameter and the system nominal error, and then normalizing by the sum of all such differences.
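
The normalization is simple enough to verify directly; the sketch below reproduces the Table 2 percentages from its Pe column (rounding accounts for small differences):

```python
import numpy as np

Pe_nominal = 0.4255
Pe_excursion = np.array([0.1245, 0.1740, 0.1894, 0.3002, 0.3557,
                         0.3778, 0.3974, 0.4235, 0.4249, 0.4254, 0.4255])

delta = Pe_nominal - Pe_excursion        # improvement for each excursion
relative_role = delta / delta.sum()      # normalized so the roles sum to one
print(np.round(100 * relative_role).astype(int))
```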

The results indicate that the most important system parameters are the number of spectral channels, the forest background covariance scaling, and the meteorological range (atmospheric visibility). The importance of the number of spectral channels is not only because the dimensionality of the measurements is increased, but also because the additional channels cover a broader spectral range. In this example, the nominal 30 channels covered only 0.4 to 0.7 µm, while the excursion 60 channels covered 0.4 to 1.0 µm. After these three parameters, the sensor view angle has the next biggest influence, then the subpixel fraction, and so on. In this scenario, the impact of off-nadir viewing is predicted to be more significant than off-zenith solar angle. Although the model does not take into account the bidirectional reflectance distribution function of the surface, this result can be explained by the increase in the path radiance scattered into the sensor aperture, which, because of the adjacency effect [6], reduces separability between the road and the forest classes.

It is important to note that the conclusions from this type of analysis are extremely dependent upon the scenario configuration and the system parameters considered.

Table 2. Relative Importance of System Parameters in Detecting Subpixel Roads in a Forest

Parameter    Nominal Value    Excursion Value    Pe for Excursion Value*    Relative Role

Number of spectral channels 30 60 0.1245 28%

Background variability scaling factor 1 0 0.1740 24%

Meteorological range 5 km 50 km 0.1894 22%

Sensor view angle (nadir = 0°) 60° 0° 0.3002 12%

Object subpixel fraction 0.5 1.00 0.3557 7%

Sensor relative calibration error 2% 0% 0.3778 4%

Solar zenith angle 60° 0° 0.3974 3%

Number of radiometric bits 8 16 0.4235 0%

Object variability scaling factor 1 0 0.4249 0%

Sensor noise scaling factor 1 0 0.4254 0%

Bit-error rate 10^-9 0 0.4255 0%

Total 100%

*(Pe = 0.4255 with all nominal values)


The results shown in Table 2 are not intended to apply to the general case, but are shown to illustrate a use of the model and a methodology for quantitatively assessing the relative importance of diverse system parameters in an end-to-end spectral imaging system.

Subpixel Detection Sensitivity to Various System Parameters

Another use of the model is to study how sensitive a given sensor might be to either design changes or scene conditions. This type of analysis can be helpful when setting system requirements or when setting data collection parameters for systems that have controllable parameters.

To illustrate this use of the model, we present two examples showing parameter sensitivities. The scenario for both is the detection of subpixel roads in a forested background, as previously described, but with the use of a road spectrum (from a library or previous data collection) and a spectral matched filter. The analyses use a model of the Hyperion [12] sensor, and an atmospheric compensation algorithm with 1% (1σ) accuracy. Table 3 presents other parameters and values assumed in the analyses.

In the examples given below, the probability of detection (at a constant false-alarm rate) is presented as a function of the pixel fill factor fT, the fraction of a pixel occupied by the road. This pixel fill factor can be loosely translated to an actual object size, given a pixel ground resolution, and assuming the object is not split across adjacent pixels. As an example, Figure 4 illustrates how a 15-m-wide road running through the middle of a 30-m resolution pixel would roughly correspond to a 50% pixel fill factor (ignoring the detailed effects of sensor and atmospheric point spread functions). Because the model assumes a linear mixing of the object and the background class, we can present PD as a function of the pixel fill factor and avoid specifying a particular sensor spatial resolution and object size. Thus the analysis results can be applied to a range of absolute pixel ground resolutions and object sizes. The exact performance, however, will vary because of sensor noise considerations and the variability of the spectral statistics at the various resolutions.

Sensitivity to Atmospheric Meteorological Range. Changes in atmospheric visibility, specified by the meteorological range parameter, will affect the signal level in a spectrally dependent manner, as well as affect the amount of radiance scattered from the background into the object pixel. Even though the scenario includes an atmospheric compensation step, this scattered radiance can affect performance.

FIGURE 4. Overhead view showing 50% pixel fill factor for a 15-m road in a forest. This scene geometry ignores the detailed effects of sensor and atmospheric point spread functions.

Table 3. Detection Scenario for Road in Forest (Nominal)

Parameter Value

Object subpixel fraction Variable

Background Trees

Meteorological range 10 km

Solar zenith angle 45°

Sensor view angle (nadir = 0°) 0°

Sensor relative calibration error 1%

Sensor noise scaling factor 1

Number of radiometric bits 12

Bit-error rate 10^-9

Number of spectral channels 121

Spectral processing algorithm CEM spectral matched filter

False-alarm rate (per pixel) 10^-5



Figure 5 shows that the required pixel fill factor fT for a high detection probability (PD ≥ 0.9) increases with decreasing meteorological range. However, the increase in required fT is relatively moderate (from 30% to 40%) for a significant decrease (80 km to 5 km) in the meteorological range. Thus, in this scenario, we can conclude that detection performance is only moderately affected by atmospheric haze over a reasonable range of visibility values.

Sensitivity to Random Calibration or Compensation Errors. The effects of random error in this analysis are studied by adding zero-mean "noise" with a standard deviation σ equal to a user-specified percentage of the mean radiance. In a real sensor system, this random error could come from a number of sources, such as residual nonuniformity-correction error or random errors in the atmospheric compensation.† Figure 6 presents the sensitivity of detection probability to random errors of 0%, 1%, 2%, and 4% of the mean signal level. This range of additive error is typical for state-of-the-art sensors, nonuniformity-correction routines, and atmospheric compensation algorithms. As in the previous example, for this range of values, the detection performance sensitivity is moderate, with the required (PD ≥ 0.9) fill fraction changing from 30% to 40% as 4% random error is added.

Summary and Conclusions

We have presented an approach to predict detection performance, analyze sensitivities, and determine relative contributions of system parameters for multispectral or hyperspectral sensors used in the detection of subpixel objects. The end-to-end system model builds on a previously developed approach using first- and second-order spectral statistics and transformations of those statistics to predict performance. Enhancements include a linear mixing model for the subpixel objects, additional sensor modeling capability, atmospheric compensation approaches, and a matched-filter detection algorithm. Unlike image simulation models, this model does not produce a simulated image, but rather predicts detection or error probabilities. Thus our model avoids the computational complexity of pixel-by-pixel ray tracing simulation approaches.


FIGURE 5. Sensitivity of road detection to atmospheric meteorological range as a function of pixel fill factor fT. Note that the required fill factor for a high detection probability (PD ≥ 0.9) increases with decreasing meteorological range.

FIGURE 6. Sensitivity of road detection to additional random error (beyond sensor detector and electronics noise, with a standard deviation expressed as a percentage of the mean signal) as a function of pixel fill factor.

† Another way to interpret these error levels is to recognize that with the additive error, the spectral radiance in each spectral channel has a signal-to-noise ratio hard-limited to the inverse of the additive error. For example, with 2% random error added, the signal-to-noise ratio cannot be higher than 50.

[Figures 5 and 6 plots: probability of detection (0 to 1.0) versus pixel fill factor (0 to 50%), with the PD = 0.9 level marked; Figure 5 shows curves for meteorological ranges of 80, 20, and 5 km, and Figure 6 shows curves for relative calibration errors of 0%, 1%, 2%, and 4%.]


The model has been, and continues to be, validated by showing good agreement between predictions and measurements of spectral radiances, sensor signal-to-noise ratios, and detection probabilities derived from airborne hyperspectral sensor data.

We presented an example model analysis that showed which system parameters were most important in a given scenario. Also, we presented examples that predicted detection performance sensitivity to atmospheric and sensor parameters. In the cases examined, the most important and most sensitive model parameters were characteristics of the surface classes and environmental conditions of the scene, rather than sensor parameters or algorithmic choices. Certainly, though, situations exist in which these other parameters or choices can have significant effect.

The model has many advantages over other performance prediction tools, including quick execution, which enables extensive sensitivity studies to be conducted expeditiously. The model does, however, have a number of limitations, some of which can be addressed with further development, while others are inherent in the analytical approach. Inherent limitations include the inability to model specific geometries of scene objects or materials, especially those that involve multiple reflections, and sensor artifacts or processing algorithms that involve nonlinear operations.

Limitations of the model that we plan to address through additional development include implementation of additional sensor types and artifacts (e.g., Fourier-transform instruments, spectral jitter) and processing algorithms (e.g., physics-based atmospheric compensation, anomaly detection, linear unmixing, material identification). Also, we continue to assess the appropriate statistical distributions for various classes of hyperspectral data, as well as develop confidence intervals for detection probability predictions using appropriate models for the variability of the contributing factors.

Even as the model is evolving and improving, it has already been useful in understanding the potential performance and limiting factors in the spectral imaging scenarios its current capabilities support. Thus it represents a step along the path toward the ultimate goal of a comprehensive understanding necessary for the optimal design and use of remote sensing systems.

Acknowledgments

The authors gratefully acknowledge the Office of the Deputy Undersecretary of Defense (Science and Technology) and the Spectral Information Technology Applications Center for supporting this work. The authors would also like to thank their colleagues in the Sensor Technology and System Applications group, Kris Farrar and Seth Orloff, for their contributions to the development of the model.

REFERENCES

1. J.P. Kerekes and D.A. Landgrebe, "An Analytical Model of Earth-Observational Remote Sensing Systems," IEEE Trans. Syst. Man Cybern. 21 (1), 1991, pp. 125–133.

2. B.V. Shetler, D. Mergens, C. Chang, F.C. Mertz, J.R. Schott, S.D. Brown, R. Strunce, F. Maher, S. Kubica, R.K. deJonckheere, and B.C. Tousley, "Comprehensive Hyperspectral System Simulation: I. Integrated Sensor Scene Modeling and the Simulation Architecture," SPIE 4049, 2000, pp. 94–104.

3. R.J. Bartell, C.R. Schwartz, M.T. Eismann, J.N. Cederquist, J.A. Nunez, L.C. Sanders, A.H. Ratcliff, B.W. Lyons, and S.D. Ingle, "Comprehensive Hyperspectral System Simulation: II. Hyperspectral Sensor Simulation and Preliminary VNIR Testing Results," SPIE 4049, 2000, pp. 105–119.

4. J.R. Schott, Remote Sensing: The Image Chain Approach (Oxford University Press, New York, 1997).

5. A. Berk, L.S. Bernstein, and D.C. Robertson, "MODTRAN: A Moderate Resolution Model for LOWTRAN 7," GL-TR-89-0122, Spectral Sciences, Burlington, Mass. (30 Apr. 1989).

6. Y.J. Kaufman, "Atmospheric Effect on Spatial Resolution of Surface Imagery," Appl. Opt. 23 (19), 1984, pp. 3400–3408.

7. J.S. Accetta and D.L. Schumaker, The Infrared and Electro-Optical Systems Handbook 4: Electro-Optical Systems Design, Analysis, and Testing, M.C. Dudzik, ed. (Environmental Research Institute of Michigan, Ann Arbor, Mich., 1993).

8. W.H. Farrand and J.C. Harsanyi, "Mapping the Distribution of Mine Tailings in the Coeur d'Alene River Valley, Idaho, through the Use of a Constrained Energy Minimization Technique," Rem. Sens. Environ. 59 (1), 1997, pp. 64–76.

9. S.J. Whitsitt and D.A. Landgrebe, "Error Estimation and Separability Measures in Feature Selection for Multiclass Pattern Recognition," LARS Publication 082377, Laboratory for Applications of Remote Sensing, Purdue University, W. Lafayette, Ind. (Aug. 1977).

10. J.P. Kerekes and J.E. Baum, "Spectral Imaging System Analytical Model for Subpixel Object Detection," IEEE Trans. Geosci. Rem. Sens. 40 (5), 2002, pp. 1088–1101.

11. L.J. Rickard, R. Basedow, and M. Landers, "HYDICE: An Airborne System for Hyperspectral Imaging," SPIE 1937, 1993, pp. 173–179.

12. J. Pearlman, C. Segal, L. Liao, S. Carman, M. Folkman, B. Browne, L. Ong, and S. Ungar, "Development and Operations of the EO-1 Hyperion Imaging Spectrometer," SPIE 4135, 2000, pp. 243–253.


John P. Kerekes is a staff member in the Sensor Technology and System Applications group. His primary area of research has been the development of remote sensing system performance models for geophysical parameter retrieval and object detection and discrimination applications. After joining Lincoln Laboratory in 1989, he worked on performance analyses for ballistic missile discrimination problems. He then developed analysis models supporting weather-satellite instrument development. In particular, he developed models to study the accuracy of proposed sensor systems in retrieving the atmospheric vertical temperature and water vapor profiles. Recently, he has developed modeling approaches for the prediction of the system performance of hyperspectral imaging systems. He is a senior member of the Institute of Electrical and Electronics Engineers (IEEE), and a member of the American Geophysical Society, the American Meteorological Society, and the American Society for Photogrammetry and Remote Sensing. He is an associate editor of the IEEE Transactions on Geoscience and Remote Sensing, and chairs the Boston Chapter of the IEEE Geoscience and Remote Sensing Society. He received his B.S., M.S., and Ph.D. degrees in electrical engineering from Purdue University.

Jerrold E. Baum is an associate staff member in the Sensor Technology and System Applications group, where he works on modeling and analyzing hyperspectral remote sensing systems. Jerry joined Lincoln Laboratory in 1989. He has analyzed Geostationary Operational Environmental Satellite (GOES) imager data, modeled infrared sensor performance in battlefield simulations, and evaluated active- and passive-infrared detection algorithms. Before coming to the Laboratory, he worked as a senior engineer at Textron Defense Systems, Wilmington, Massachusetts, for three and a half years, where he helped develop the detection algorithm for an airborne infrared sensor. In a previous life, Jerry taught high school physics in Maryland and Massachusetts for almost ten years. He earned an A.B. degree from Brandeis University in 1975 and an M.S. degree from the University of Maryland, College Park, in 1980, both in physics.