
FUSION OF OPTICAL AND SAR SATELLITE DATA FOR IMPROVED LAND COVER MAPPING IN AGRICULTURAL AREAS

T. Riedel, C. Thiel, C. Schmullius

Friedrich-Schiller-University Jena, Institute of Geography, Earth Observation, Loebdergraben 32, D-07743 Jena,

Germany, Email: [email protected]

ABSTRACT

The focus of this paper is on the integration of optical and SAR data for improved land cover mapping in agricultural areas. The test site is located east of Nordhausen, Thuringia, Germany. From April to December 2005, Landsat-5 TM, ASAR APP and ERS-2 data were acquired continuously over the test site, building up a comprehensive time series. Regarding the fusion of optical and SAR data, the following three aspects will be addressed in this paper. The value of different methodologies for the synergistic usage of both data types is the subject of a first analysis. Multitemporal SAR images provide an important database for land cover and crop type mapping; this will be demonstrated in the second section of this paper. Finally, a classification scheme for the generation of basic land cover maps using both optical and SAR data will be presented. With respect to operational applications, the proposed procedure should have a high potential for automation.

1. INTRODUCTION

The availability of up-to-date and reliable land cover and crop type information is of great importance for many earth science applications. For operational applications, the development of robust, transferable, semi-automated and automated approaches is of special interest. In regions with frequent cloud cover, such as Central Europe, the number of suitable optical acquisitions is often limited. The all-weather capability is one major advantage of SAR over optical systems. Furthermore, radar sensors provide information complementary to that contained in visible-infrared imagery. In the optical range of the electromagnetic spectrum the information depends on the reflective and emissive characteristics of the Earth's surface, whereas the radar backscatter coefficient is primarily determined by structural and dielectric attributes of the surface target. The benefit of combining optical and SAR data for improved land cover mapping has been demonstrated in several studies [1, 2, 3]. Multisensor image data analyses are often performed without an alteration of the digital numbers of the different data types. The term image fusion itself is defined as 'the combination of two or more different images to form a new image by using a certain algorithm' [4]. In general, data fusion can be performed at the pixel, feature or decision level. With the availability of multifrequency and high-resolution spaceborne SAR data, such as that provided by the TerraSAR-X and ALOS PALSAR missions, interest in tools exploiting the full information content of both data types will increase.

The objective of this paper is a comparison of different image fusion techniques for optical and SAR data in order to improve the classification accuracy in agriculturally used areas. Furthermore, the potential of multitemporal SAR data for land cover and crop type mapping will be demonstrated. In this context, optimal image parameters for the derivation of basic land cover classes will be defined. Based on these findings (amongst other things), a processing chain for the automated generation of basic land cover products will be introduced.

2. STUDY AREA AND EXPERIMENTAL DATA

The study area "Goldene Aue" is located east of Nordhausen at the southern border of the Harz mountain range, North Thuringia, Germany, and is characterized by intensive agricultural land use. The main crop types are winter wheat, rape, corn and winter barley (Fig. 1).

Figure 1. Location of the test site and crop type map from 2005

From April to December 2005, optical and SAR data were acquired continuously over the test site, building up a comprehensive time series. During the main growing season (April – late August), 2 Landsat-5 TM, 9 Envisat


Proc. ‘Envisat Symposium 2007’, Montreux, Switzerland 23–27 April 2007 (ESA SP-636, July 2007)


ASAR APP and 6 ERS-2 scenes were recorded (Fig. 2). On July 10, optical and SAR data were acquired nearly simultaneously, providing an excellent basis for image fusion analysis. Parallel to each satellite overpass, extensive field data were collected, including crop type, growth stage and vegetation height.

Figure 2. EO data base

3. METHODOLOGY

All EO data were pre-processed using widely accepted standard techniques. As parts of the test site are characterized by significant topography, the normalization procedure introduced by Stussi et al. [5] was applied to all SAR data. The pre-processing of the multispectral optical images included atmospheric correction and orthorectification using the freely accessible C-band SRTM DEM. In the context of multisource image data fusion, a critical pre-processing step is an accurate co-registration of all EO scenes used.

After pre-processing, different image fusion approaches were applied. In the literature, the following methods for the integration of optical and SAR data are commonly applied: combination of both data sets without an alteration of the original input channels, and image fusion at the pixel and decision levels. In the framework of this study, the potential of the first and second data integration approaches for land cover and crop type mapping is assessed. The pixel-based fusion techniques performed include IHS transformations, principal component analyses (PCA), a multiplicative approach and wavelet transformations. The benefit of these procedures was estimated by separability analyses and by comparing the classification accuracies achieved with a simple pixel-based maximum likelihood classification (MLK). The Jeffries-Matusita distance (JM), which is widely used in remote sensing to determine the statistical distance between two multivariate, Gaussian distributed signatures, was calculated. The JM distance varies between 0 and 1414, where 0 signifies no separability and 1414 a very high separability.

The main objective of this study was to set up an automatic and transferable classification scheme for the derivation of basic land cover categories. The proposed processing chain – shown in Figure 3 – is composed of three main stages. The first step comprises the segmentation of the optical EO data using the multiresolution segmentation approach [6] implemented in the eCognition software. Next, for each land cover class, potential training sites are selected automatically on the basis of a decision tree. As the application of fixed thresholds sometimes fails, the threshold values for reflectance, backscattering coefficient, ratios and texture information specified in the nodes of the decision tree are adapted to each EO scene separately. To achieve this, an optimal set of characteristic image parameters was defined for each land cover type by systematically analyzing the available time series. Additionally, information reported in the literature and in libraries (e.g. the European RAdar-Optical Research Assemblage library – ERA-ORA) was considered. By combining this expert knowledge about typical target characteristics (e.g. the low reflectance of water bodies in the near infrared) with histogram analyses, it is possible to derive scene-specific threshold values. In the third stage of the proposed classification scheme, the identified training sites are used as input for a supervised classification. In the framework of this study, three classification techniques – nearest neighbour, fuzzy logic and a combined pixel-/object-based approach – were compared. In the latter case, a pixel-based maximum likelihood classification was performed; the final land cover category assigned to each image object corresponds to the most frequent class per image segment.

Postclassification procedures involve simple GIS analyses such as the recoding of island segments within residential areas. The thematic accuracy of the final land cover products was assessed by calculating the confusion matrix and the kappa coefficient for fifty randomly distributed reference points per land cover category. The class membership of each reference target was specified on the basis of official land information GIS layers and field data.
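As a concrete illustration, the JM distance between two Gaussian class signatures can be computed from the Bhattacharyya distance. The NumPy sketch below is not the authors' code; it uses the common Gaussian formulation and a ×1000 scaling that reproduces the 0–1414 range quoted above (1414 ≈ 1000·√2).

```python
import numpy as np

def jeffries_matusita(mu1, cov1, mu2, cov2):
    """JM distance between two multivariate Gaussian class signatures,
    scaled by 1000 so values range from 0 to ~1414 (= 1000 * sqrt(2))."""
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov1, cov2 = np.asarray(cov1, float), np.asarray(cov2, float)
    cov_mean = (cov1 + cov2) / 2.0
    diff = mu1 - mu2
    # Bhattacharyya distance for two Gaussian distributions
    b = (diff @ np.linalg.solve(cov_mean, diff)) / 8.0 \
        + 0.5 * np.log(np.linalg.det(cov_mean)
                       / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return 1000.0 * np.sqrt(2.0 * (1.0 - np.exp(-b)))

# identical signatures -> 0; widely separated signatures -> close to 1414
same = jeffries_matusita([0, 0], np.eye(2), [0, 0], np.eye(2))
far  = jeffries_matusita([0, 0], np.eye(2), [50, 50], np.eye(2))
print(round(same, 1), round(far, 1))  # 0.0 1414.2
```

The saturation at 1414 is what makes JM preferable to raw divergence measures: beyond a certain statistical separation, additional distance no longer changes the expected classification accuracy.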

Figure 3. Proposed processing chain
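The combined pixel-/object-based step of the third stage (pixel-wise MLK labels reduced to the most frequent class per segment) amounts to a per-segment majority vote. A minimal illustrative sketch, assuming integer label and segment rasters:

```python
import numpy as np

def majority_per_segment(pixel_labels, segments):
    """Assign to every pixel of a segment the most frequent pixel-based
    class label within that segment (combined pixel-/object-based step).
    On ties, the smallest class label wins (np.argmax takes the first)."""
    out = np.empty_like(pixel_labels)
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        classes, counts = np.unique(pixel_labels[mask], return_counts=True)
        out[mask] = classes[np.argmax(counts)]  # majority class of the segment
    return out

segments = np.array([[1, 1, 2], [1, 2, 2]])   # two image objects
labels   = np.array([[5, 5, 7], [6, 7, 3]])   # per-pixel MLK result
print(majority_per_segment(labels, segments))  # [[5 5 7] [5 7 7]]
```

This is why the resulting maps look 'smooth' despite the simple per-pixel classifier: isolated misclassified pixels inside a segment are voted away by their neighbours.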


4. RESULTS

4.1 Pixel-based image fusion versus combination of optical and SAR data – a comparison

In the literature, two methods for the integration of optical and SAR data are commonly applied. Image fusion products were generated from Landsat-5 TM (channels 3, 4 and 5) and HV-polarized ASAR APP data acquired nearly simultaneously on July 10, 2005. The results of the multiplicative approach and the wavelet transformations (tested filter functions: Haar, Daubechies, Coiflet, symmetric) are visually very similar to the original optical data. For other approaches, such as the PCA fusion product and the IHS transformation, the SAR information is more pronounced. To assess the benefit of these approaches for land cover and crop type mapping, separability analyses were performed. By combining the optical and SAR data without an alteration of the pixel values, the separability rises significantly. In contrast, for the images fused at pixel level the separability remains unchanged or even declines for all class pairs. These findings were confirmed by the achieved classification accuracies. Using the optical and SAR data as independent input layers for a supervised MLK classification, the overall accuracy increases by 7.2 % (74.4 % → 81.6 %). Especially for the classes urban areas, grassland, winter wheat, winter barley, peas and corn, an improved accuracy could be achieved. The land use / crop type maps derived from the images fused at pixel level show no improvement for any class; for some land use categories the user and producer accuracies even decline.

For the mapping of urban areas, several studies have demonstrated the power of textural features in conjunction with spectral and/or backscatter information using medium resolution EO data [7, 8]. In the framework of this study it was investigated whether it is possible to improve the extraction of textural features using the pixel-based fusion products as input layers.
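One of the pixel-level techniques compared above, PCA substitution fusion, can be sketched as follows. This is the generic textbook scheme (forward PCA on the optical bands, first component replaced by the variance-matched SAR band, inverse transform), not necessarily the exact implementation used in the study.

```python
import numpy as np

def pca_fusion(optical, sar):
    """Schematic PCA substitution fusion: project the optical bands onto
    their principal components, swap the first component for the SAR band
    (stretched to PC1's mean/std), and transform back to band space."""
    h, w, nb = optical.shape
    X = optical.reshape(-1, nb).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # eigendecomposition of the band covariance matrix
    eigval, eigvec = np.linalg.eigh(np.cov(Xc, rowvar=False))
    eigvec = eigvec[:, np.argsort(eigval)[::-1]]   # sort PCs by variance
    pcs = Xc @ eigvec
    s = sar.reshape(-1).astype(float)
    # match the SAR band to PC1 statistics before substitution
    s = (s - s.mean()) / (s.std() + 1e-12) * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = s
    fused = pcs @ eigvec.T + mean                  # inverse transform
    return fused.reshape(h, w, nb)

rng = np.random.default_rng(0)
opt = rng.uniform(0, 255, (8, 8, 3))   # toy 3-band optical patch
sar = rng.uniform(0, 1, (8, 8))        # toy SAR backscatter patch
print(pca_fusion(opt, sar).shape)      # (8, 8, 3)
```

Because PC1 carries most of the optical variance, substituting it injects the SAR signal strongly into every output band, which matches the observation above that the SAR information is more pronounced in the PCA product.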
Previous analyses indicated the potential of the textural measure second angular moment (SAM) computed from the grey level co-occurrence difference vector (GLDV) for the Landsat-5 TM data, as well as the standard deviation and the neighbourhood grey level dependency matrix (NGLD) for SAR data [9]. A visual interpretation of the textural features extracted from the pixel-based fusion products suggests a higher separability using the fused image. However, this hypothesis was not supported by the achieved classification accuracies. The best overall performance was found using the SAR data alone for texture extraction. The urban area maps derived from the fused products as well as from the optical data were less accurate and less stable in time. In conclusion, the investigation indicated that pixel-based image fusion procedures are not a suitable tool for combining medium resolution optical and SAR data. They may, however, be appropriate for fusing images with different spatial resolutions, such as Envisat ASAR WSM and MERIS data.

4.2 Potential of multitemporal SAR data for land cover and crop type mapping

A further objective of this paper was to demonstrate the power of multitemporal SAR data for crop type mapping, as in Central Europe the number of available optical acquisitions is often limited due to frequent cloud cover. For example, over the Nordhausen test site only two cloudless Landsat-5 TM scenes were acquired between April and mid-August, i.e. during the main growing season. The first image was recorded on April 22 and the second on July 10; neither acquisition date is well suited for crop type mapping. In consequence, the classification accuracies achieved with the monotemporal optical data are not sufficient (Tab. 1). By using both available Landsat-5 TM scenes, the accuracy of the final land cover map could be improved significantly. However, the results showed that the classification accuracies obtained with the SAR data alone exceed those of the optical data, indicating a high potential of multitemporal radar data for land cover and crop type mapping. To improve the mapping of urban areas, the usage of textural features extracted from HH-polarized SAR data is recommended. In order to reduce misclassifications between forests and agriculture/grassland as well as between urban areas and agriculture/grassland, the multitemporal minimum in HV polarisation can be used. Regarding the optimal polarization for the derivation of crop type information, the analysis indicated that cross-polarized data are most suitable. The final product derived from the ERS-2 data was less accurate, though it has to be considered that the number of SAR scenes used for classification was not equal (9 HV vs. 6 VV). This will be analyzed in more detail in future investigations.
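The texture features recommended for urban mapping are typically grey-level co-occurrence statistics. Below is a minimal sketch of the angular second moment for a single horizontal offset; it is simplified relative to the GLDV- and NGLD-based measures actually used in the study.

```python
import numpy as np

def angular_second_moment(img, levels=8):
    """Angular second moment (ASM) from a grey level co-occurrence matrix
    for the horizontal neighbour offset (0, 1); the image is quantized to
    `levels` grey levels first. ASM is 1.0 for a perfectly homogeneous
    patch and drops towards 0 for heterogeneous (urban-like) texture."""
    q = (img.astype(float) / (img.max() + 1e-12) * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    # count horizontal neighbour pairs
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()
    return float((p ** 2).sum())

# a homogeneous patch has maximal ASM
print(angular_second_moment(np.full((16, 16), 100.0)))  # 1.0
```

Urban areas mix bright double-bounce returns and dark shadow in small neighbourhoods, so their co-occurrence mass spreads over many cells and the ASM stays low, which is what makes such measures discriminative for residential areas.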
Finally, the results listed in Table 1 show that the classification accuracy could be improved by combining optical and SAR data. The retrieved land cover maps are illustrated in Figure 4. Though a simple pixel-based MLK classification approach was followed, the derived land cover products look very 'smooth', especially the results obtained with the SAR data and with the combined optical and SAR data. The differentiation of urban and unvegetated areas was problematic in all cases; however, by integrating the SAR information into the classification process, misclassifications could be reduced significantly. Not surprisingly, the results showed that it is not possible to distinguish coniferous and deciduous/mixed forests using SAR data only. In the east of the test site, near the Berga-Kelbra reservoir, meadow and pasture are

predominant. These grassland areas were detected by both the optical and SAR data. Regarding the crop types cultivated in the test site, the classification results indicate a low separability of cereals. All classification results show a mix-up between winter wheat and summer wheat, winter rye, triticale and oat. However, the number of fields and/or the mean field acreage of these crops (except winter wheat) is low. Thus, given the crop type distribution in the test site in 2005, it is not possible to draw a conclusion on the potential of multitemporal C-band SAR data (and Landsat-5 TM data) for the differentiation of cereals.

                 Excl. SAR  HV,VV,tex.,HV-m  HV,VV  HV,tex.,HV-m   HV   VV,tex.   VV
SAR                  –           80.2         77.9      73.3      71.9    64.3   65.2
TM 2104             52.8         82.8         81.8      80.4      80.4    75.8   76.8
TM 1007             68.3         82.4         82.7      83.6      82.2    80.1   80.6
TM 2104 & 1007      77.9         83.7         83.7      84.0      83.8    82.9   82.5

Tab. 1. Classification accuracies in % – 20 land cover classes, 50 reference points per class – optical, SAR and combined input data ("tex." – texture features; "HV-m" – multitemporal HV minimum)
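The multitemporal HV minimum used to separate forest and urban areas from agriculture/grassland is a simple per-pixel reduction over the time stack; the NumPy sketch below illustrates the idea with hypothetical dB values.

```python
import numpy as np

def multitemporal_minimum(stack_db):
    """Per-pixel minimum backscatter over a (time, rows, cols) stack in dB.
    Agricultural fields drop to low backscatter at least once per season
    (bare soil after harvest or before sowing), while forest and urban
    pixels stay comparatively high throughout the year."""
    return np.min(stack_db, axis=0)

stack = np.array([
    [[-7.0, -12.0]],   # acquisition 1 (dB): forest pixel, field pixel
    [[-8.0, -20.0]],   # acquisition 2: the field pixel drops after harvest
    [[-7.5, -11.0]],   # acquisition 3
])
print(multitemporal_minimum(stack))  # [[ -8. -20.]]
```

The single composite thus encodes the phenological dynamics of the whole season in one band, which is why it helps to separate temporally stable classes from agricultural ones.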

4.3 Classification scheme for the automated generation of basic land cover maps

The main objective of this study was to set up a classification scheme for the derivation of basic land cover categories with a high potential for automation. As outlined in Section 3, the second processing step comprises the selection of potential training samples on the basis of a decision tree. The absolute threshold values at each node of the decision tree are estimated stepwise, making use of expert knowledge in combination with histogram analyses. The analyses outlined above demonstrated the utility of textural features derived from HH-polarised C-band data and of the multitemporal minimum in HV polarisation for land cover mapping. Furthermore, it was shown that image fusion at pixel level is not a suitable tool to improve the accuracy of the final land cover product. Table 2 lists the characteristic image parameters used to compute the thresholds for each land cover class. As an example, the threshold estimation process is described in detail for the land cover category coniferous forest. In agreement with the general knowledge of reflectance properties, the analyses of the time series available as

Fig. 4. Land use maps – combination of optical & SAR (upper left), optical (upper right) and SAR (bottom). Legend (20 classes): water, coniferous forest, deciduous/mixed forest, urban areas, unvegetated areas, grassland, clover, winter wheat, winter barley, winter rye, triticale, summer barley, summer wheat, oat, rape, corn, peas, hop, potatoes, sugar beets

Class                          Image parameters
Water bodies                   NIR
Coniferous forest              NIR, MIR, NDVI
Dec./mixed forest (leaf-off)   Green, NDVI, tex. – HH-pol.
Dec./mixed forest (leaf-on)    Green, NIR - MIR
Unvegetated areas              Multitemp. min. red, HH, HV
Urban areas                    NIR, MIR, tex. – HH-pol., min. HV-pol.
Grassland                      NDVI, HV-pol.

Tab. 2. Characteristic image parameters used for the automatic selection of training samples

well as of the EO libraries showed that coniferous forest areas are characterized by a low near and middle infrared reflectance. First of all, segments most probably belonging to the water class were excluded by deselecting all image objects with an NIR reflectance below the threshold for water. In the next step, an initial threshold for coniferous forest in the MIR is defined as the minimum histogram value with a segment frequency greater than five, plus twenty. By this process, mainly coniferous forest and – depending on growth stage – agricultural crops are selected. Both classes could be separated by analyzing the corresponding histogram in the NIR (Fig. 5), where the lower peak represents coniferous forest segments and the higher one agricultural fields. Applying the threshold estimated for the NIR channel, the final threshold in the MIR could be estimated by histogram analyses. Finally, as sometimes a very small number of urban objects is also selected, the mean NDVI ± 0.05 is computed and used for the selection of the final training samples of coniferous forest.
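One literal reading of the initial MIR threshold rule above (the lowest histogram bin whose segment frequency exceeds five, plus an offset of twenty) can be sketched as follows; the bin width, offset value and units are assumptions for illustration.

```python
import numpy as np

def initial_mir_threshold(values, bin_width=1.0, min_count=5, offset=20.0):
    """Initial MIR threshold as described above: the lowest histogram bin
    whose segment frequency exceeds `min_count`, plus a fixed offset.
    `values` are segment mean MIR values; bin width and offset share
    their units."""
    values = np.asarray(values, float)
    edges = np.arange(values.min(), values.max() + bin_width, bin_width)
    counts, edges = np.histogram(values, bins=edges)
    first = np.flatnonzero(counts > min_count)[0]  # lowest well-filled bin
    return edges[first] + offset

# 3 dark outlier segments, 50 forest-like segments at 30, 40 fields at 55:
# the outlier bin is ignored (count <= 5), so the threshold is 30 + 20 = 50
vals = np.concatenate([np.full(3, 10.0), np.full(50, 30.0), np.full(40, 55.0)])
print(initial_mir_threshold(vals))  # 50.0
```

Deriving the threshold from each scene's own histogram is what makes the rule transferable: the node adapts to illumination and calibration differences instead of relying on a fixed global value.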

Figure 5. Threshold estimation by histogram analysis – example coniferous forest

The proposed methodology selects a large number of potential training sites. For validation purposes, the algorithm was tested on all available EO data. The quality of the selected training samples was assessed by visual inspection and by calculating the confusion matrix on the basis of the reference data used for the accuracy assessment of the final land cover map. The user accuracy, i.e. the probability that a training sample is in

agreement with the reference data, usually exceeds 90 %. However, especially for grassland, problems in finding correct training samples arise when only monotemporal optical data are available.

For classification, three different approaches were compared, using the training sites and threshold values specified by the methodology described above. First, a simple nearest neighbour classification was performed using all potential training samples as input data. The second classification procedure uses the estimated class thresholds to set up an object-based fuzzy classification rule. Finally, a combined pixel-based/object-based approach was applied. The Landsat-5 TM scenes acquired on April 22 and July 10, 2005 as well as multitemporal ASAR APP data from 2005 were used as input for classification. The obtained map accuracies differ significantly (Tab. 3), with the best performance found by far for the combined pixel-/object-based classification.

                Nearest neighbour   Fuzzy classification   Combined pixel-/object
Class              PA      UA           PA      UA             PA      UA
Water              98.0   100.0         95.8   100.0           96.0   100.0
Coniferous f.      67.4   100.0         91.3   100.0          100.0    92.5
Mixed/dec. f.      90.0    73.8        100.0    89.1           92.0    95.8
Grassland          60.0    93.8         69.4    85.0           96.0    88.9
Unveg. areas       36.0   100.0         48.9    92.0           74.0    97.4
Urban areas        81.3    78.0         75.0    83.7           84.0    72.4
Agriculture        97.9    46.1         97.7    53.8           89.6    89.6
Overall acc.       75.6                 82.5                   90.2
Kappa               0.715                0.796                  0.886

Tab. 3. Classification accuracies – TM acquired in April and July 2005 and multitemporal SAR data from 2005 (PA – producer accuracy; UA – user accuracy)

For validation purposes, the proposed methodology was tested using different sets of input data, including images acquired in 2003 and 2005 (Tab. 4). The corresponding classification accuracies are listed in Table 5. Not surprisingly, the obtained land cover maps are less accurate than those generated from multitemporal optical data. Stable classification accuracies were achieved for the land cover classes water, coniferous forest, deciduous/mixed forest and urban areas. As only monotemporal optical data were used for classification, increased misclassifications occur between unvegetated areas, grassland and agricultural fields. In consequence, the producer and user accuracies of the corresponding land use categories decline.

        Landsat-TM/ETM   ASAR APP IS2 – HH/HV
                         Date 1     Date 2     Date 3
Set 1     21.04.05       22.04.05   10.07.05   24.08.05
Set 2     10.07.05       22.04.05   14.08.05   18.09.05
Set 3     06.08.03       11.06.03   20.08.03   14.09.03
Set 4     22.04.03       16.07.03   20.08.03   14.09.03

Tab. 4. Input data sets used for validation

            Set 1          Set 2          Set 3          Set 4
          PA     UA      PA     UA      PA     UA      PA     UA
WA       96.0   96.0    94.0  100.0    98.0  100.0    96.0   98.0
CF      100.0   94.2   100.0   94.2    95.9   92.2    95.4   95.4
D/MF     94.0   97.9    92.0   79.3    92.0   93.9    96.0   92.3
GL       56.0   93.3    54.0   87.1    92.0   76.7    84.0   63.6
UA       68.0   94.4    62.0   81.6    72.0   83.7    64.0   94.1
UR       78.0   86.7    82.0   74.6    80.0   81.6    83.0   78.0
AG       97.9   54.7    91.7   66.7    81.3   84.8    58.3   63.6
OA       84.2           82.1           87.3           82.3
KA        0.81           0.79           0.85           0.80

Tab. 5. Classification results of the validation (WA – water; CF – coniferous forest; D/MF – deciduous/mixed forest; GL – grassland; UA – unvegetated areas; UR – urban areas; AG – agriculture; OA – overall accuracy; KA – kappa)

5. CONCLUSIONS AND OUTLOOK

In the first section of this paper, two common methods for the integration of optical and SAR data were compared. The investigation showed that a simple combination of the information provided by both data types is more suitable for improving land cover map accuracy than image fusion at pixel level. By combining optical and radar images, a significant rise in product accuracy could be achieved. Promising classification results were obtained using multitemporal SAR data only; the classification accuracy exceeds that obtained with the two Landsat-5 TM scenes acquired in April and July. Finally, a combined object-/pixel-based classification scheme for the generation of basic land cover maps with a high potential for automation has been presented. The training samples used as input for a supervised classification were selected on the basis of an object-based decision tree with scene-specific, flexible thresholds. The absolute threshold values were calculated by combining expert knowledge and histogram analyses. In order to improve the mapping accuracy for urban areas, textural features extracted from HH-polarized SAR data were incorporated into the classification procedure. For validation, the methodology was applied to different sets of input data acquired over the Nordhausen test site in 2003 and 2005.

With the availability of polarimetric and high-resolution spaceborne X- and L-band SAR data, as provided by the TerraSAR-X and ALOS PALSAR missions, a further improvement is expected for both the automatic selection of potential training samples and the final map accuracy. For example, L-band SAR data are known to provide an excellent database for forest cover mapping, i.e.
the misclassifications between forested areas, agricultural crops and grassland are expected to decrease. Regarding urban area mapping, several studies have emphasized the potential of X- and L-band data. Furthermore, due to their very high spatial resolution, texture measures extracted from TerraSAR-X and ALOS PALSAR data will most probably improve the generation of residential area maps.

6. ACKNOWLEDGEMENTS

The ENVISAT ASAR and ERS-2 data were provided courtesy of the European Space Agency (Category-1 Project C1P 3115). The Enviland project – subproject scale integration – is funded by the German Ministry of Economy and Technology (MW) and the German Aerospace Centre (DLR) (FKZ 50EE0405).

7. REFERENCES

1. Alparone, L., Baronti, S., Garzelli, A. & Nencini, F. (2004). Landsat ETM+ and SAR image fusion based on generalized intensity modulation. IEEE Trans. Geosc. RS, 42(12): 2832 – 2839.

2. Amarsaikhan, D. & Douglas, T. (2004). Data fusion and multisource image classification. Int. J. RS 25/17: 3529 – 3539.

3. Le Hégarat-Mascle, S., Quesney, A., Vidal-Madjar, D., Taconet, O., Normand, M. & Loumagne, C. (2000). Land cover discrimination from multitemporal ERS images and multispectral Landsat images: a study case in an agricultural area in France. Int. J. RS, 21: 435 – 456.

4. Pohl, C. & Van Genderen, J. L. (1998). Multisensor image fusion in remote sensing: concepts, methods and applications. Int. J. RS, 19/5: 823 – 854.

5. Stussi, N., Beaudoin, A., Castel, T. & Gigord, P. (1995). Radiometric correction of multiconfiguration spaceborne SAR data over hilly terrain. In: Proc. Int. Symp. on Retrieval of Bio- and Geophysical Parameters from SAR Data for Land Applications, Toulouse, France, 10-13 October 1995, pp. 469-478.

6. Baatz, M. & Schäpe, A. (2000). Multiresolution Segmentation – an optimization approach for high quality multi-scale image segmentation. In: Strobl, J. et al. (Eds.): Angewandte Geographische Informationsverarbeitung XII. Beiträge zum AGIT-Symposium Salzburg, Herbert Wichmann Verlag, pp. 12–23.

7. Dekker, R. J. (2003). Texture analysis and classification of ERS SAR images for map updating of urban areas in The Netherlands. IEEE Trans. Geosc. RS, 41(9): 1950 – 1958.

8. Dell'Acqua, F. & Gamba, P. (2003). Texture-based characterization of urban environments on satellite SAR images. IEEE Trans. Geosc. RS, 41(1): 153 – 159.

9. Riedel, T., Thiel, C. & Schmullius, C. (2006). An object-based classification procedure for the derivation of broad land cover classes using optical and SAR data. In: Proc. 1st Int. Conf. on Object-Based Image Analysis (OBIA), Salzburg, Austria, July 4-5, 2006, on CD.