
Proceedings of the 1994 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI'94), Las Vegas, NV, Oct. 2-5, 1994

An application of Data Fusion to Landcover Classification of Remote Sensed Imagery: a Neural Network Approach

Alessandra Chiuderi (a)(*), Stefano Fini (b) and Vito Cappellini (a,b)

(a) Dipartimento di Ingegneria Elettronica, Via di S. Marta 3, 50139 Florence, Italy
(b) Fondazione Scienza per l'Ambiente, Viale Galileo 32, 50125 Florence, Italy

Abstract - This paper focuses on the possibilities offered by neural networks applied to multisensor image data processing. The great number of existing and planned instruments for Earth observation (satellites, sensors) highlights the need for specific techniques for processing and, in particular, for merging the large amount of data that will be available in future years. Moreover, emphasis is given to the importance of fusing data acquired by sensors operating in different regions of the electromagnetic spectrum.

Neural Networks (NNs) are employed to perform fusion of TM data with SAR data in order to obtain a landcover classification of an agricultural area in the surroundings of Florence (Italy). Two different architectures of NN are presented and employed, the Counterpropagation network and the Kohonen map; the results obtained in both cases are reported and discussed.

I. INTRODUCTION

Remote sensing is undoubtedly the most powerful tool for Earth observation, for investigations on natural resources and for monitoring the environment both on a global and on a regional scale. Passive and active sensors are used to make measurements of the Earth's surface in the visible, infrared, and microwave wavelengths of the electromagnetic spectrum. When remotely sensed data have been collected it is possible to estimate the physical parameters of the observed natural targets [1]. The estimated physical parameters are used by scientists as input into global and regional scale numerical models in order to study the development of natural phenomena on our planet.

The development of remote sensing activities related to monitoring of the Earth will generate in the next few years a very large amount of data necessary for a better understanding of environmental phenomena. This amount of data, although quite expensive to manage, offers the possibility of processing data of the same area acquired with different sensors operating at different altitudes, with different spatial resolutions, at different times and with different technical specifications [2].

(*) The author is supported by a grant of the Italian Space Agency (A.S.I.).

It is thus very interesting and useful to develop techniques to fuse together data which are sensitive to different physical parameters of the observed area. Microwave, infrared and visible channel fusion can provide information spanning the whole electromagnetic spectrum. Microwave data provided by synthetic aperture radar (SAR) are sensitive to the dielectric properties of the ground and are useful to estimate surface roughness and soil moisture content. Infrared data are sensitive to the surface temperature of the ground, while data acquired in the visible bands provide information on the reflectance of the observed area and are useful to estimate chemical properties of the target. In this way microwave, infrared and visible data fusion synthesises dielectric, thermal and chemical information.

An integrated processing of data can produce a greater knowledge of the phenomena to be studied than an interpretation of each single channel, regardless of other collected data.

In remote sensing applications the approaches to the data fusion problem are mainly oriented to photointerpretation (e.g. [3], [4], [5]). As far as landcover classification is concerned, each source of information is considered as a different channel of acquisition, and the set of numerical values obtained by composing the available information is dealt with as a multidimensional variable. The main problem arising from this approach is the fact that the methods used to manage these data require some kind of hypothesis on the statistical distribution of the data themselves.

When dealing with just one source of data, it is possible, by making some simplifying hypotheses, to assume that the multidimensional variable associated with the data has a Gaussian distribution [6], and thus still obtain some good results [7]; unfortunately there is no theoretical reason that supports the same kind of assumption when dealing with multisource data.

II. NEURAL NETWORKS IN REMOTE SENSED DATA PROCESSING

A. Background

A neural network can be described as a set of highly interconnected elementary processors, called neurons.



Fig. 1. A Multilayer Neural Network

The neurons are organised into layers and each neuron of a given layer is connected to the neurons of the adjacent layers.

Generally an NN is composed of an input layer, one or more hidden layers and an output layer. Each neuron in the network receives as input the weighted sum of the outputs of the neurons of the previous layer, passes the summed value through a given function, and outputs this value to the neurons of the upper layer.

The data to be processed are presented to the input layer, the values are transmitted through the connections to the upper layers, and the final value of the computation is supplied by the output neurons.
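As an informal illustration of this layered computation, the short NumPy sketch below (the layer sizes, activation function and variable names are illustrative choices, not taken from the paper) propagates an input vector through a small fully connected network by repeatedly applying a weighted sum followed by an activation function:

```python
import numpy as np

def sigmoid(z):
    # Activation applied to the weighted sum computed by each neuron
    return 1.0 / (1.0 + np.exp(-z))

def forward_pass(x, weights):
    """Propagate an input vector through a fully connected layered network.

    `weights` is a list of (W, b) pairs, one per layer; each neuron receives
    the weighted sum of the previous layer's outputs and passes it through
    the activation function.
    """
    a = x
    for W, b in weights:
        a = sigmoid(W @ a + b)
    return a  # values supplied by the output neurons

# Toy example: 6 inputs, one hidden layer of 18 neurons, 6 output classes
rng = np.random.default_rng(0)
weights = [(rng.normal(size=(18, 6)), np.zeros(18)),
           (rng.normal(size=(6, 18)), np.zeros(6))]
print(forward_pass(rng.random(6), weights))
```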

Among a great number of different applications, NNs have been shown to be particularly suitable for classification purposes. In this case, the input to the network is the pattern (appropriately coded) to be classified and the output is the class to be assigned to it.

As far as classification is concerned, NNs can be divided into two broad categories: supervised and unsupervised networks.

In the former case, the user provides the network with a set of labelled patterns, where every pattern is associated with the class it belongs to: the network is then iteratively trained to learn the peculiarities of each class by modifying the connection weights in order to reduce the overall classification error. Once the training phase is completed, it is possible to assign a class label to a new set of unknown patterns; this second stage is called generalisation. The most widely employed network of this first type is the Multi Layer Perceptron trained by means of the Error Back Propagation algorithm [8].

As far as unsupervised networks are concerned, the processing can still be subdivided into two distinct phases, the training and the generalisation, but in this case no information on the class membership is provided during training: the network is thus trained to learn the distribution of the input patterns rather than the features which identify the various classes. At the end of the training phase, the patterns of the input space are grouped into several clusters according to their similarity on the basis of a predefined metric. In the generalisation phase, each new pattern is assigned to the nearest cluster (according to the same metric employed during the training). If classification (i.e. assignment of a class label to each pattern) is the final goal of the processing, some additional supervised technique must however be employed. The most widely employed unsupervised network is the Self Organising Map (SOM) followed by the supervised Learning Vector Quantization (LVQ) algorithm [9].

The two approaches, supervised and unsupervised, are complementary: the former gives better performance and solves the classification task thoroughly, but requires the presence of a (representative) training set. The unsupervised approach, on the other hand, even though it does not perform a pattern classification in the strict sense, provides a subdivision of the patterns into separate clusters which can in many cases be easily transformed into a real classification, if necessary. In this case, thus, no labelled data are necessary in order to obtain some preliminary, but still acceptable, results.

In particular, NNs find one of their most interesting applications in the landcover classification task for remote sensing images, as can be seen for example in [10, 11, 12]: in these papers the authors employ the well known Multilayer Perceptron network, trained with the Back Propagation (BP) algorithm.

Unfortunately, one of the major problems when dealing with remotely sensed data is given by the acquisition and the processing of ground truth data, which represents, for all supervised algorithms, the training set.

As a matter of fact several steps are necessary for the acquisition and the use of the training data:

1. Identification, on site, of the crops present on the scene (the campaign should be carried out during the satellite acquisition);

2. Identification, on site, of areas which are representative of the crops' health and state of growth;

3. Identification, on the remote sensed image data, of the areas defined at points 1 and 2;

4. Extraction of the spectral values corresponding to the pixels which fall into the training regions.

In particular point 3 is very critical: each pixel on the image has a ground resolution of several meters (10 m for SPOT images, 30 m for Landsat TM); great care should thus be taken when identifying the training areas on the image, avoiding, when possible, the inclusion of pixels which are not part of the class under consideration (for example narrow roads which separate fields, irrigation canals, etc.). It is thus easy to understand that the BP algorithm, which is definitely one of the most effective algorithms for training NNs, has, in remote sensing applications, a great drawback: the landcover classification task cannot be accomplished, even in part, without the ground truth data.

The networks we propose in this paper, on the contrary, present two distinct phases, an unsupervised and a supervised one, the latter of which can be trained at a later time; the two phases allow us to obtain first some preliminary, but still acceptable, results even without the ground data [13], and then to improve these results by adding the supervised layer, if and when the data for training become available.

B. The Counterpropagation (CP) network

This network was first presented by Hecht-Nielsen in 1987 [14], but the version employed here is rather simpler than the original one [15]. The neurons of the network are organised in three layers: an input layer, which acts just as a fan-out, a hidden layer, called the Kohonen layer, governed by the winner-take-all unsupervised strategy, and an output layer, called the Grossberg layer, which is trained by the Widrow-Hoff or Grossberg rule. Fig. 2 illustrates the architecture of the CP network.

The weights connecting the input units with the units of the Kohonen layer are updated in order to be "as close as possible" to the input vectors, the employed metric being the scalar product. Only the weight vector which achieves the maximum value of the scalar product with a given input vector is updated, according to the following equation:

w(t+1) = w(t) + a(t)[x(t) - w(t)]

where:
w(t+1) is the weight vector at the (t+1)th iteration
w(t) is the weight vector at the tth iteration
x(t) is the input vector
a(t) is the learning rate parameter

Fig. 2. The Counterpropagation neural network

At the end of this first phase, the input vectors are subdivided into groups according to their similarities, or equivalently, according to their position in the feature space. This preliminary result can be considered as an unsupervised landcover classification.

If a training set is available, the training of the third (supervised) layer can take place to refine the results obtained so far.

The output layer contains as many units as the classes to be discriminated on the scene, every unit being connected to all the units of the unsupervised layer.

At every step of the training algorithm, an input vector is presented to the network, the unit of the Kohonen layer which achieves the maximum scalar product is identified, and only the weights connecting this unit to the output layer are updated, according to the following equation:

v(t+1) = v(t) + b(t)[y(t) - v(t)]

where:
v(t+1) is the weight vector at the (t+1)th iteration
v(t) is the weight vector at the tth iteration
y(t) is the desired output vector
b(t) is the learning rate parameter
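The two update rules above can be summarised in a minimal sketch. The following Python code is an assumption-laden illustration (random initialisation, linearly decreasing learning rates, one-hot desired outputs) of how the unsupervised Kohonen phase and the supervised Grossberg phase described above could be implemented; it is not the authors' original implementation:

```python
import numpy as np

def train_counterprop(X, Y, n_kohonen=18, n_out=6, iters=5000,
                      a0=0.5, b0=0.1, seed=0):
    """Minimal Counterpropagation sketch following the two update rules above.

    X: (n_samples, n_features) input vectors (assumed normalised);
    Y: (n_samples, n_out) desired output vectors (e.g. one-hot class labels).
    """
    rng = np.random.default_rng(seed)
    W = rng.random((n_kohonen, X.shape[1]))       # Kohonen layer weights
    V = np.zeros((n_out, n_kohonen))              # Grossberg layer weights

    # Unsupervised phase: winner-take-all by maximum scalar product
    for t in range(iters):
        x = X[rng.integers(len(X))]
        winner = np.argmax(W @ x)
        a = a0 * (1 - t / iters)                  # decreasing learning rate a(t)
        W[winner] += a * (x - W[winner])          # w(t+1) = w(t) + a(t)[x(t) - w(t)]

    # Supervised phase: only the weights leaving the winning unit are updated
    for t in range(iters):
        i = rng.integers(len(X))
        winner = np.argmax(W @ X[i])
        b = b0 * (1 - t / iters)                  # decreasing learning rate b(t)
        V[:, winner] += b * (Y[i] - V[:, winner])  # v(t+1) = v(t) + b(t)[y(t) - v(t)]

    return W, V
```

A new pattern would then be classified by finding its winning Kohonen unit and taking the largest component of the corresponding column of V.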

C. The Self Organising Map (SOM) and Learning Vector Quantization (LVQ) algorithms

The SOM was first presented by Teuvo Kohonen in 1988. In this network the units are arranged in a bidimensional array as shown in Fig. 3. Each unit is connected to the input units and the training is driven by an unsupervised strategy. The main difference between a SOM and a network simply based on the competitive learning strategy is the introduction of the concept of neighbourhood of the unit to be updated.

In this network the input pattern is presented, the winning unit (according to the Euclidean distance metric) is identified, and the updating process of the weights involves all the units which fall into the neighbourhood of the winning one. The radius of the neighbourhood is a decreasing function of the number of iterations: in the early phases of the training nearly half of the units present in the network are updated, while at the end only the weights connecting the inputs to the winning unit are modified. The updating equations are the following:

ki(t+1) = ki(t) + a(t)[x(t) - ki(t)]    if i ∈ Nc(t)
ki(t+1) = ki(t)                         otherwise

where:
ki(t+1) is the weight vector after (t+1) iterations
ki(t) is the weight vector after t iterations
Nc(t) is the neighbourhood of the winning unit c
a(t) is the learning rate (0 < a(t) < 0.9)
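A compact sketch of this training scheme, under the simplifying assumptions of a square grid, a linearly shrinking neighbourhood radius and a linearly decreasing learning rate (none of which are specified in detail in the paper), could look as follows:

```python
import numpy as np

def train_som(X, grid=(6, 6), iters=10000, a0=0.9, seed=0):
    """Sketch of the SOM training described above."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    K = rng.random((rows * cols, X.shape[1]))          # map unit weights ki
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)])

    for t in range(iters):
        x = X[rng.integers(len(X))]
        c = np.argmin(np.linalg.norm(K - x, axis=1))   # winner by Euclidean distance
        radius = max(rows, cols) / 2 * (1 - t / iters)  # shrinking neighbourhood Nc(t)
        a = a0 * (1 - t / iters)                       # learning rate a(t)
        in_nc = np.linalg.norm(coords - coords[c], axis=1) <= radius
        K[in_nc] += a * (x - K[in_nc])                 # update only units in Nc(t)
    return K, coords
```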



Fig. 3. The Self Organizing Map (SOM)

At the end of the learning step the weights connecting the inputs to the map units are trained in order to simulate the input data distribution and the units present a topological ordering, i.e. neighbouring units respond to similar patterns. The patterns in the input space are thus clustered but, if a classification task must be performed, a second supervised training phase has to be carried out.

Kohonen himself suggests, for these purposes, the Learning Vector Quantization algorithm. At the end of the unsupervised algorithm, each information class (i.e. ground truth class) will generally activate several units of the SOM, and thus the crucial point of the supervised labelling phase is to separate as much as possible the units which fall near the class boundaries.

The map units are first labelled, by means of a training set, by majority voting; then each pattern of the training set is presented and the closest unit, according to the Euclidean distance, is established.

The weight vector associated with the winning unit c is updated according to the following law (in the standard LVQ1 form, the winning unit is moved towards the pattern if its label matches the pattern's class and away from it otherwise):

kc(t+1) = kc(t) + β(t)[x(t) - kc(t)]    if the label of unit c matches the class of x(t)
kc(t+1) = kc(t) - β(t)[x(t) - kc(t)]    otherwise

where β(t) is a learning rate parameter (0 < β(t) < 0.01)
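The supervised labelling phase can be sketched as follows; the majority-vote labelling and the LVQ1-style refinement are written here as one illustrative function, with array shapes and schedules chosen for clarity rather than taken from the paper:

```python
import numpy as np

def label_and_refine(K, X_train, y_train, iters=1000, b0=0.01, seed=0):
    """Majority-vote labelling of the map units followed by an LVQ1-style refinement."""
    rng = np.random.default_rng(seed)

    # 1. Label each unit with the majority class of the training patterns it wins
    winners = np.argmin(np.linalg.norm(X_train[:, None, :] - K[None], axis=2), axis=1)
    labels = np.full(len(K), -1)          # -1 marks units that never win a pattern
    for u in range(len(K)):
        hits = y_train[winners == u]      # y_train assumed to hold integer class indices
        if len(hits):
            labels[u] = np.bincount(hits).argmax()

    # 2. LVQ refinement: pull the winner towards correctly classified patterns,
    #    push it away otherwise
    for t in range(iters):
        i = rng.integers(len(X_train))
        c = np.argmin(np.linalg.norm(K - X_train[i], axis=1))
        b = b0 * (1 - t / iters)
        sign = 1.0 if labels[c] == y_train[i] else -1.0
        K[c] += sign * b * (X_train[i] - K[c])
    return K, labels
```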

III. MATERIALS AND METHODS

In the following sections a brief description of both the data and the tools employed for the landcover classification task is given; in the first section we report the technical characteristics of the instruments employed for data acquisition together with the characteristics of the test site, while in the second section a brief description of the neural networks' topology and of the algorithms' behaviour is given.

A. Data Collection

The area of interest is situated in Montespertoli, an agricultural area on the outskirts of Florence (Italy). The acquisition instruments employed are a multifrequency (P, L, C bands), multipolarization SAR and a multispectral scanner (TMS) operating with 10 channels in the optical-infrared region and 2 channels in the thermal band. The two sets of data have different spatial resolutions: 25 meters by 25 meters for the TMS data and 6 meters in range by 12 meters in azimuth for the SAR image. These data were collected during the Multisensor Airborne Campaign organised by the Italian Space Agency (ASI) in cooperation with NASA and JPL in Europe in the summer of 1991. The SAR was flown on a DC-8 aircraft while the TMS was flown on an ER-2 aircraft.

Tables I and II summarise the technical specifications of the sensors, the flight parameters and the different spatial resolutions of the collected image data.

TABLE I
TMS DEDALUS CHARACTERISTICS

IFOV: 1.25 mrad
FOV: 43
Nominal Altitude: 19812 m
Nominal Speed: 206 m/s
Swath Amplitude: 15.6 km
Pixels/scan line: 716
Scan Speed: 12.5 scans/s

The combination of data relative to different regions of the electromagnetic spectrum allows us to exploit complementary information on the area: information about structure, texture and roughness is provided by the SAR data, while chemical and thermal properties can be inferred from the TMS data by taking into account the visible and IR channels respectively.

TABLE II
DC-8 AIRSAR CHARACTERISTICS

Frequency Bands: L, P, C
Polarizations: HH, HV, VV
Nominal Altitude: 17925 m
Nominal Speed: 232 m/s
Swath Amplitude: 10 km
Incidence Angle: 15 - 60°

Two images of the same area collected by the two different sensors are shown in Fig. 4 and Fig. 5. The different resolution and the different geometry used for data acquisition are quite evident. In order to be able to exploit the information carried by these two data sets, a coregistration of the two images was necessary: the SAR data were first re-sampled, by averaging, to a 24 meter by 24 meter pixel size, and successively co-registered on the TMS image by means of a 5th order polynomial, leading to residuals of 0.284 and 0.330 in the x and y directions respectively. The final resampling was performed by means of cubic interpolation.
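Of these pre-processing steps, only the first (averaging SAR pixels to obtain a coarser pixel size) is simple enough to sketch here; the polynomial co-registration and cubic interpolation would normally be carried out with dedicated image-processing software. The block size below is illustrative, not the exact ratio used by the authors:

```python
import numpy as np

def average_downsample(img, block=(4, 2)):
    """Resample a 2-D image by averaging non-overlapping blocks of pixels,
    e.g. to bring finely sampled SAR pixels to a coarser pixel size before
    co-registration. Edge pixels that do not fill a whole block are dropped.
    """
    h, w = img.shape
    h2, w2 = h // block[0], w // block[1]
    img = img[:h2 * block[0], :w2 * block[1]]
    return img.reshape(h2, block[0], w2, block[1]).mean(axis=(1, 3))
```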

TABLE III
CORRESPONDENCE BETWEEN TMS AND LANDSAT TM CHANNELS

TMS Channel   TM Channel   Wavelength (µm)
1             A            0.42 - 0.45
2             1            0.45 - 0.52
3             2            0.52 - 0.60
4             B            0.60 - 0.62
5             3            0.63 - 0.69
6             C            0.69 - 0.75
7             4            0.76 - 0.90
8             D            0.91 - 1.05
9             5            1.55 - 1.75
10            7            2.08 - 2.35
11            6            8.5 - 14.0 (low gain)
12            6            8.5 - 14.0 (high gain)

B. The CP network

In order to evaluate the advantages given by the use of integrated data for landcover classification purposes, two Counterpropagation networks were employed: the first one was trained only on the TMS data, while for the second one both TMS and SAR data were employed.

In both cases the Kohonen layer consisted of 18 neurons, while the output layer contained 6 neurons, corresponding to the following ground truth classes: wheat, woods, alfalfa, vineyards, bare soil and grassland.

The networks were trained for 5000 iterations, employing for the unsupervised layer the whole image (256 x 256 pixels), while a training set composed of 992 pixels was used for the supervised layer. The performances of the two networks were evaluated on a test set composed of 236 pixels.

In Tables IV and V the results of the classifications obtained by using the two different networks are reported. Each entry of the tables represents the number of pixels drawn from the test set assigned to the given class.
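For reference, the overall accuracies quoted below are simply the fraction of test pixels lying on the diagonal of such a confusion matrix; a minimal sketch, with a hypothetical two-class matrix as the usage example, is:

```python
import numpy as np

def overall_accuracy(confusion):
    """Overall accuracy as the fraction of test pixels on the matrix diagonal,
    rows being network outputs and columns the ground truth classes."""
    confusion = np.asarray(confusion, dtype=float)
    return np.trace(confusion) / confusion.sum()

# Hypothetical 2-class example: 90 + 80 correctly classified pixels out of 200
print(overall_accuracy([[90, 10], [20, 80]]))   # 0.85
```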

The great difference between the accuracies of the two experiments highlights the importance of data fusion in remote sensing image classification: in the experiment conducted on the TMS data alone, despite the overall accuracy of 65.68% (75.71% on the training set), the classes corresponding to the alfalfa and grassland cover types were completely misclassified, while the same two classes were very well separated in the experiment with the combined TMS and SAR data, leading to an overall accuracy of 90.68% (90.02% on the training set). On the other hand, as mentioned at the end of Section I, traditional classifiers, such as the Maximum Likelihood, could not be employed for this merged data set, which makes NNs a very useful tool for the landcover classification task.

C. The SOM and LVQ algorithm

The low performance of the Counterpropagation network in the TMS alone case can be ascribed not only to the fact that the optical-infrared data are not separable enough for the ground truth classes present on the scene, but also to the fact that the number of neurons employed in the unsupervised layer was probably too low to allow the network to separate the classes.

Fig. 4. TMS data: Band 5


Fig. 5. SAR data - L band - HH polarization

On the other hand increasing the number of neurons increases the computing time in the training phase.

TABLE IV
CONFUSION MATRIX FOR TMS DATA: CP NETWORK (6-18-6)

NN Output   Class 1   Class 2   Class 3   Class 4   Class 5   Class 6
1           51        0         0         0         0         0
2           1         30        31        0         0         12
3           0         0         0         0         0         0
4           1         0         0         62        5         0
5           4         1         0         21        12        0
6           0         0         0         0         0         0

TABLE V
CONFUSION MATRIX FOR TMS+SAR DATA: CP NETWORK (9-18-6)

NN Output   Class 1   Class 2   Class 3   Class 4   Class 5   Class 6
1           51        0         0         0         0         0
2           0         30        0         0         0         0
3           0         0         35        0         0         0
4           5         1         0         83        13        0
5           0         0         0         0         4         0

As a second set of experiments, two Self Organising Maps were implemented for the same data set: just as before, one was trained on the TMS data alone, while for the second one the TMS + SAR data set was employed. In this case both networks were composed of 36 neurons arranged in a 6 by 6 bidimensional grid, the input layers being composed of 6 and 9 neurons respectively, as in the Counterpropagation case. For the supervised labelling an LVQ algorithm was employed. The unsupervised training of the SOM required 10,000 iterations over the whole image, while for the LVQ algorithm 1000 iterations over the training set were necessary.

The results of the classifications by means of this second architecture are reported in Tables VI and VII, while Fig. 6 and Fig. 7 show the topological maps after the training was completed.

As can be seen, in this case the integration of SAR data did not really increase the classification accuracy, which is already very high on the TMS data set: in both cases all the classes were well identified and the overall accuracy on the test set is 97.88% for the TMS data alone, while slightly better for the TMS + SAR data set, 98.73%.

TABLE VI
CONFUSION MATRIX FOR TMS DATA: SOM AND LVQ NETWORK (6-36)

NN Output   Class 1   Class 2   Class 3   Class 4   Class 5   Class 6
1           55        0         0         0         0         0
2           0         30        0         0         0         0
3           0         0         31        0         0         0
4           1         1         0         83        3         0
5           0         0         0         0         14        0
6           0         0         0         0         0         12

Fig. 6. Kohonen Map for TMS data

TABLE VII
CONFUSION MATRIX FOR TMS+SAR DATA: SOM AND LVQ NETWORK (9-36)

NN Output   Class 1   Class 2   Class 3   Class 4   Class 5   Class 6
1           55        0         0         0         0         0
2           0         31        0         0         0         0
3           0         0         31        0         0         0
4           1         0         0         83        2         0
5           0         0         0         0         15        0
6           0         0         2         0         0         12

It is interesting to notice that, in Figures 6 and 7, the units devoted to the recognition of a given class are grouped together in the final labelling of the map, and that the units concerning classes 3 and 5 do not constitute a “compact” group, which might mean that these classes are not very well spectrally separated from the others. In any case, due to the higher number of units employed (6 times the number of classes to be discriminated) and to the use of neighbourhoods in the early phases of the training (the first 4000 iterations), the network was also able to correctly classify the patterns belonging to the more “difficult” classes.

IV. CONCLUSIONS

In this paper the use of neural networks for the fusion of optical-infrared data with microwave data for landcover classification purposes has been investigated. Two different network architectures were employed: a Counterpropagation network and a Self Organising Map. In both cases a comparison between the classification accuracies obtained by using one or both data sources was performed.



Fig. 7. Kohonen Map for TMS + SAR data

In the first case, the data integration improved the accuracy dramatically: two classes (alfalfa and grassland) were completely misclassified when using just one source of data, while when the combined TMS + SAR data were employed the accuracy was quite good for every class.

This improvement is certainly due to the presence of the microwave data; on the other hand, it must be pointed out that the number of units employed in the unsupervised layer was probably insufficient to allow a separation of the desired classes when using only one set of data.

As a matter of fact, when the Kohonen map was employed, it was shown that the network having a higher number of neurons obtained excellent results also on the TMS data alone, and thus in this case the use of the integrated data set is not so advantageous: not only is the training of a network having 9 input nodes instead of 6 longer, but, more importantly, all the pre-processing connected to the co-registration of the two images must be performed as a preliminary step.

The main problems related to the use of a two-dimensional Kohonen map are essentially connected to the extremely long training times, so we must conclude that, if the processing time or the computing means are limited, the use of multisource data is of fundamental importance in order to obtain good performances, because it provides the network with more information, allowing a better class separation.

If, on the other hand, the image dimensions are relatively small, the processing time is not a constraint or the computing facilities are very powerful, the advantages of the integration of microwave data and optical data are not so impressive, and thus they do not justify the extra pre-processing necessary in order to enable the use of such an integrated data set.

ACKNOWLEDGEMENT

The authors would like to thank the Joint Research Centre of the European Communities, situated in Ispra (Varese), Italy, for giving Alessandra Chiuderi the opportunity of taking advantage of their extremely powerful computing facilities: most of the processing presented in this paper has been carried out during a stay of Alessandra Chiuderi at the Environmental Modelling, Mapping and Application Laboratories, which has strongly contributed to the realization of the present work.

REFERENCES

[1] R.M. Hord: Remote Sensing, Methods and Applications. John Wiley & Sons, New York (1990)
[2] P.S. Chavez, S.C. Sides, J.A. Anderson: Comparison of Three Different Methods to Merge Multiresolution and Multispectral Data: Landsat TM and SPOT Panchromatic. PE&RS, Vol. 57, N. 3 (1991)
[3] W.J. Carper, T.M. Lillesand, R.W. Kiefer: The Use of Intensity-Hue-Saturation Transformations for Merging SPOT Panchromatic and Multispectral Image Data. PE&RS, Vol. 56, N. 4 (1990), pg. 459-467
[4] J.R. Harris, R. Murray, T. Hirose: IHS Transform for the Integration of Radar Imagery with other Remotely Sensed Data. PE&RS, Vol. 56, N. 12 (1990), pg. 1631-1641
[5] V.K. Shettigara: A Generalized Component Substitution Technique for Spatial Enhancement of Multispectral Images Using a Higher Resolution Data Set. PE&RS, Vol. 58, N. 5 (1992), pg. 561-567
[6] I.L. Thomas et al.: Classification of Remotely Sensed Images. Adam Hilger, Bristol, England (1987)
[7] J.A. Benediktsson, P.H. Swain, O.K. Ersoy: Neural Network Approaches Versus Statistical Methods in Classification of Multisource Remote Sensing Data. IEEE Trans. on Geoscience and Remote Sensing, Vol. 28, N. 4 (1990)
[8] D.E. Rumelhart, J.L. McClelland: Parallel Distributed Processing. MIT Press, Cambridge, MA (1986)
[9] T. Kohonen: The Self Organizing Map. Proc. of the IEEE, Special Issue on Neural Networks, I: Theory and Modelling, Vol. 78, N. 9, pg. 1464-1480
[10] H. Bischof, W. Schneider, A. Pinz: Multispectral Classification of Landsat-Images Using Neural Networks. IEEE Trans. on Geoscience and Remote Sensing, Vol. 30, N. 3 (1992)
[11] J. Key, J.A. Maslanik, A.J. Schweiger: Classification of Merged AVHRR and SMMR Arctic Data with Neural Networks. PE&RS, Vol. 55, N. 9 (1989)
[12] G.G. Wilkinson, I. Kanellopoulos, W. Mehl, J. Hill: Land Cover Mapping Using Combined Landsat Thematic Mapper Imagery and ERS-1 Synthetic Aperture Radar Imagery. Proc. PECORA-12, Remote Sensing Conference on Land Information from Space-based Systems, Sioux Falls, USA (1993)
[13] V. Cappellini, F. Butini, A. Chiuderi, S. Fini: Reti Neurali per la Classificazione del Territorio: uno Strumento per la Fusione dei Dati Telerilevati. Proc. of A.I.T. (1992)
[14] R. Hecht-Nielsen: Counterpropagation Networks. Applied Optics, Vol. 26, N. 23, pg. 4979-4984
[15] P.D. Wasserman: Neural Computing, Theory and Practice. Van Nostrand Reinhold, New York (1989)