Transcript of [IEEE 2009 4th International Conference on Recent Advances in Space Technologies (RAST) - Istanbul,...

Damaged Building Detection in Aerial Images using Shadow Information

Beril Sırmaçek and Cem Ünsalan
Computer Vision Research Laboratory

Department of Electrical and Electronics Engineering
Yeditepe University

İstanbul, 34755 TURKEY
e-mail: [email protected]

Abstract—Automatic detection of damaged buildings from aerial and satellite images is an important problem for rescue planners and military personnel. In this study, we present a novel approach for the automatic detection of damaged buildings in color aerial images. Our method is based on color invariants for building rooftop segmentation. Then, we benefit from the grayscale histogram to extract shadow segments. After building verification using shadow information, we define a new damage measure for each building. Experimentally, we show that, using our damage measure, it is possible to discriminate nearby damaged and undamaged buildings. We present our experimental results on aerial images.

I. INTRODUCTION

Natural disasters such as earthquakes or hurricanes may cause great damage to a region. Although these disasters are inevitable, it is still possible to minimize the problems afterwards. After an earthquake or a hurricane, the road network may be damaged. Therefore, the region may not be accessible using ground transportation. It is also highly possible that the communication network will be damaged. These deficiencies may limit the information flow from the disaster region. However, it is of the utmost importance for rescue planners to get reliable information from these regions to effectively guide their resources. To get reliable information from a disaster region, one possible solution is sending an aerial surveillance system. This system may collect aerial images from the disaster region. Although the images may be of use for rescue planners, it is still hard to manually locate damaged buildings in these images. With the same reasoning, automatically locating damaged buildings after a military airstrike is of the utmost importance to military personnel. This information may give insight into the success of the airstrike. Therefore, automatic damaged building detection from aerial or satellite images is an important problem in remote sensing.

Satellite and aerial images have different properties that make it hard to develop generic algorithms for damaged building detection. These images differ in scale (resolution), sensor type, orientation, quality, and ambient lighting conditions. In addition to these difficulties, buildings may have complicated structures and can be occluded by other buildings or trees. One has to consider aspect, structural, and deterministic cues to construct a solution to this challenging problem. In order to handle these problems, we propose a novel building damage detection system in this study. Our aim is to automatically locate damaged buildings from aerial images in a fast manner. This information may be invaluable for both rescue planners and military personnel.

In the related literature, some researchers used shadow information to detect buildings. Huertas and Nevatia [3] used the relationship between buildings and shadows. They first extracted corners from the image. They labeled these corners as either bright or shadow. They used the bright corners to form rectangles. Shadow corners confirmed the building hypothesis for these rectangles. McKeown et al. [4] detected shadows in aerial images by thresholding. They showed that building shadows and their boundaries contain important information about building heights and roof shapes. Zimmermann [11] integrated basic models and multiple cues for building detection. He used color, texture, edge information, shadow, and elevation data to detect buildings. His algorithm extracts buildings using blob detection. Tsai [7] compared several invariant color spaces, including the HSI, HSV, HCV, YIQ, and YCbCr models, for shadow detection and compensation in aerial images. Vu et al. [9] also used shadows to model buildings. They showed that shadow information can be used to estimate damages and changes in buildings. Chen and Hutchinson [1] developed a system to detect damages using bi-temporal grayscale satellite images. First, they compared two images to detect pixel-based changes. Then, they extracted object-based changes by a probabilistic approach.

In this study, we assume that the damaged region is not imaged beforehand. Therefore, it is not possible to compare two images (as in standard change detection algorithms) to detect damaged buildings. We have to detect the damaged buildings using just one image (obtained after the earthquake or the airstrike). Our method depends on detecting building rooftops and shadow information from color aerial images. In order to extract the building rooftops, we benefit from invariant color features. In our previous studies, we were able to locate building rooftops in a reliable manner [6]. Here, we extract shadow segments by automatically thresholding the grayscale image. Once we extract both types of information in a robust manner, we group rooftop and shadow segments using image processing techniques. The ratio of the rooftop region to its shadow region gives insight into the height of the building under consideration. Since we have many buildings in a given region, they also give insight into the actual ratio for undamaged buildings. After obtaining this information, we locate damaged buildings. We tested our damaged building detection system on aerial images and obtained reasonable results.

II. BUILDING DETECTION

We consider color aerial images in RGB format. Our first aim is to detect buildings and their shadows automatically. Then, we will use this information to define a measure to estimate the degree of damage. We provide a sample test image in Fig. 1.

Fig. 1. Istanbul1 test image from our aerial image dataset.

As can be seen in Fig. 1, there are only undamaged buildings in this region. We benefit from color invariants and grayscale information to extract rooftop and shadow segments from this test image. We explore them next.

A. Detecting Rooftop and Shadow Segments

Color invariants help to extract color properties of objects without being affected by imaging conditions. Imaging conditions include the illumination of the environment, the surface properties of the object, highlights or shadows on the object, and the viewing angle. Gevers and Smeulders [2] proposed several color invariants.

We extract the color information in the aerial image using a specific color index based on our previous study [8]. There, we used multispectral information from satellite images in the form of the red and near-infrared bands. Here, we follow the same strategy to define a color invariant, but with the red and green bands of the color aerial image as:

ψr = (4/π) arctan((R − G) / (R + G))    (1)

where R stands for the red band and G stands for the green band of the color image. This color invariant has a value of unity for red-colored objects, independent of their intensity values. Similarly, it has a value of minus unity for green-colored objects in the image. Therefore, the red rooftops (of buildings) can be easily segmented using ψr. Since most buildings have red rooftops in the test region, this invariant is of great use to detect buildings. To segment out the red rooftops automatically, we benefit from Otsu's thresholding method [5].
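As a hedged sketch of this segmentation step (the paper does not fix an implementation; the NumPy array layout, the small epsilon guard, and the from-scratch Otsu search below are our assumptions), Eqn. 1 followed by Otsu thresholding can be written as:

```python
import numpy as np

def psi_r(rgb):
    """Color invariant of Eqn. 1: (4/pi) * arctan((R - G) / (R + G)).
    Close to +1 for red objects and -1 for green objects, largely
    independent of intensity. `rgb` is an (H, W, 3) array."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    return (4.0 / np.pi) * np.arctan((r - g) / (r + g + 1e-9))

def otsu_threshold(values, bins=256):
    """Minimal Otsu search: pick the threshold maximizing the
    between-class variance of the histogram (cf. [5])."""
    hist, edges = np.histogram(values, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w = hist.astype(float) / hist.sum()
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = w[:i].sum(), w[i:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (w[:i] * centers[:i]).sum() / w0
        mu1 = (w[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

def red_rooftop_mask(rgb):
    """Segment red rooftops: threshold psi_r automatically with Otsu."""
    values = psi_r(rgb)
    return values > otsu_threshold(values.ravel())
```

On a synthetic image with a red patch on a gray background, the mask picks out exactly the red pixels, mirroring how ψr separates red rooftops from roads and vegetation regardless of brightness.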

In order to detect shadow segments automatically, we use the grayscale histogram of the image. First, we smooth this image with a median filter. Since shadows generally appear in the darker regions of an image, we choose the first local minimum in the histogram as the threshold value. We extract shadow segments by thresholding the grayscale image with this automatically calculated threshold value. Using this method, we obtain the rooftop and shadow segments as in Fig. 2.

Fig. 2. Building rooftop and shadow segments in Istanbul1 test image.

In Fig. 2, blue segments represent detected shadows and red segments represent detected red rooftops in the Istanbul1 test image. We provide the detection results on a blank image to increase the visibility of the segments.
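A minimal sketch of the shadow-threshold selection described above (assuming NumPy, and smoothing the histogram with a small moving average as a stand-in for the paper's median filtering of the image) might look like:

```python
import numpy as np

def shadow_threshold(gray, bins=256):
    """Pick the first local minimum of the smoothed grayscale histogram
    as the shadow threshold; shadows occupy the darkest histogram mode."""
    hist, edges = np.histogram(gray, bins=bins, range=(0, 255))
    kernel = np.ones(5) / 5.0                       # small smoothing window
    smooth = np.convolve(hist, kernel, mode="same")
    for i in range(1, bins - 1):
        if smooth[i] < smooth[i - 1] and smooth[i] <= smooth[i + 1]:
            return edges[i + 1]                     # first local minimum
    return edges[bins // 2]                         # fallback: mid-range

def shadow_mask(gray):
    """Pixels darker than the automatic threshold are shadow candidates."""
    return gray < shadow_threshold(gray)
```

On a bimodal image (a dark shadow mode and a bright background mode) the threshold lands in the valley between the two modes, so only the dark pixels are labeled as shadow.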

B. Estimating the Illumination Direction

In previous approaches, the illumination direction is provided to the system manually [10]. We assume that the illumination direction can be estimated if a connected rooftop and shadow region couple can be located in the image. We consider the illumination direction as the vector originating from the center of the rooftop region and ending at the center of the shadow region. Based on this definition, for a rooftop and shadow couple, if the center of the rooftop region is at (xb, yb) and the center of the shadow region is at (xs, ys), then the illumination angle θ is:

θ = arctan(|yb − ys| / |xb − xs|)    (2)

The quadrant in which θ lies is also important. We can adjust θ according to its actual quadrant as:


θ =
    θ         if xs > xb, ys < yb
    π − θ     if xs < xb, ys < yb
    π + θ     if xs < xb, ys > yb
    2π − θ    if xs > xb, ys > yb    (3)
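Eqns. 2 and 3 combine into a single routine; the sketch below (our arrangement, not the authors' code) computes the base angle and then applies the quadrant correction:

```python
import math

def illumination_angle(xb, yb, xs, ys):
    """Angle of the vector from the rooftop center (xb, yb) to the
    shadow center (xs, ys), per Eqns. 2 and 3."""
    theta = math.atan2(abs(yb - ys), abs(xb - xs))  # Eqn. 2
    if xs > xb and ys < yb:            # first quadrant
        return theta
    if xs < xb and ys < yb:            # second quadrant
        return math.pi - theta
    if xs < xb and ys > yb:            # third quadrant
        return math.pi + theta
    return 2.0 * math.pi - theta       # fourth quadrant: xs > xb, ys > yb
```

Using `math.atan2` rather than a direct division also avoids a division-by-zero when the rooftop and shadow centers share an x coordinate.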

We pick a sample building in the Istanbul1 test image and provide its illumination direction in Fig. 3. We zoom into a small part of the image to magnify the illumination direction.

Fig. 3. Automatically calculated illumination direction of Istanbul1 test image.

In Fig. 3, the yellow arrow indicates the automatically detected illumination direction. As can be seen, the arrow is directed from the red rooftop segment to the center of the shadow segment.

C. Verifying the Building Appearance

If the rooftop (of a building) to be detected is not red, then our color invariant ψr may not be sufficient to detect it from the aerial image. To detect such rooftops (hence buildings), we have to look for other cues. Since we determined the illumination angle θ in Eqn. 3, this information may be of help to verify red rooftops as well as infer non-red rooftops. To do so, we introduce a hypothesis test such that, if we detect a shadow somewhere in the image, it should originate from a building. Therefore, we use the illumination direction information to estimate the possible building location. The illumination angle and direction are calculated using the red rooftop and shadow couples of other buildings (in the image), as we introduced in the previous section. We assume that the building should be in the direction opposite to the illumination vector. We locate a 30×30 window on the estimated building center. We call this region the estimated building segment. The estimated building location is calculated as:

(xe, ye) = (xs + d cos θ, ys + d sin θ) (4)

where (xs, ys) is the location of the shadow center, (xe, ye) represents the coordinates of the estimated building center, and d is the possible distance at which a building can be located. In this study, we set this distance to 17 pixels, considering the size of the buildings in our test images.
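Eqn. 4 then reduces to a single step from the shadow center; d = 17 pixels is the paper's value for its test imagery and should be treated as a scale-dependent parameter:

```python
import math

def estimated_building_center(xs, ys, theta, d=17.0):
    """Eqn. 4: step a distance d from the shadow center (xs, ys) along
    the angle theta to get the estimated building center (xe, ye).
    d defaults to the paper's 17-pixel value for its test images."""
    return (xs + d * math.cos(theta), ys + d * math.sin(theta))
```

A 30×30 window centered on the returned (xe, ye) then serves as the estimated building segment described above.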

III. MEASURING THE DEGREE OF DAMAGE

Once we obtain the rooftop and shadow segments, we define a measure to determine the degree of damage. For this purpose, we calculate the ratio of rooftop and shadow areas for each building as:

r = N / M    (5)

where N is the area of the rooftop segment and M is the area of the corresponding shadow segment. Since shadow and rooftop areas are both large for undamaged buildings, this ratio takes similar values across them. But if the building has collapsed or if there is structural damage to it, its shadow region will be smaller, which leads the rooftop-to-shadow ratio to higher values. Again, note that we do not have an image of the test region taken beforehand. Therefore, this ratio gives important information about the degree of damage using a single image.
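The measure itself is the simple ratio of Eqn. 5. The flagging rule below, comparing each r against a multiple k of the undamaged mean μ, is our illustrative assumption (the paper compares measures with μ informally, without a fixed cutoff):

```python
def damage_measure(rooftop_area, shadow_area):
    """Eqn. 5: r = N / M, rooftop pixels over shadow pixels."""
    return rooftop_area / shadow_area

def flag_damaged(measures, mu, k=1.8):
    """Hypothetical rule: flag a building when its measure exceeds k * mu.
    k = 1.8 is our assumption, loosely tuned to the values reported in
    the experiments; it is not part of the paper's method."""
    return [r > k * mu for r in measures]
```

With μ = 2.17 from the undamaged set, measures around 1.6 or 2.78 fall below the cutoff while the 7.75, 4.22, and 4.67 values reported for damaged buildings exceed it.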

IV. EXPERIMENTAL RESULTS

In this section, we provide two test images, one containing undamaged buildings (Istanbul1) and the other containing damaged buildings (Istanbul2). Detected buildings in the Istanbul1 test image are given in Fig. 4. Damage measures, r, of these buildings are calculated as follows. On the upper side, only the first three buildings are detected, and their damage measures are calculated as 1.94, 1.79, and 2.26, respectively. For the buildings lying horizontally in the center of the image, damage measures are calculated as 1.79, 1.79, 2.34, 2.38, 2.70, 2.35, and 2.31, respectively. Finally, for the building on the lower left side of the image, the damage measure is calculated as 2.23. It can be seen that the obtained damage measures of these buildings are very similar. The mean of these damage measures is calculated as μ = 2.17. Since the user knows that all of these buildings are undamaged, the degree of damage on other buildings can be estimated by comparing their damage measures with the μ value.

Fig. 4. Detected buildings in Istanbul1 test image. (Undamaged building measures are calculated on these detected buildings.)


After calculating the damage measures for the undamaged building set (selected by the user), we use the Istanbul2 image, which contains damaged buildings, to test our algorithm, as given in Fig. 5. For the building on the upper left side of this test image, the damage measure is 1.6. The second building on the upper side could not be found by our system, so its damage degree could not be measured. For the next building on the upper side of the image, the damage measure is obtained as 1.6 again. By comparing with μ, we can say that these two buildings are undamaged. For the building on the lower left side of the image, the damage measure is calculated as 2.78. This result is also similar to the damage measures of undamaged buildings, which indicates that this building is also undamaged. On the lower side of the image, the damage measures of the last three buildings are calculated as 7.75, 4.22, and 4.67, respectively. These values are very high compared to μ. Therefore, these values can give the user an idea about the damage in these buildings.

Fig. 5. Detected buildings in Istanbul2 test image.

V. CONCLUSIONS

In this study, we present a novel method for automatic damaged building detection from color aerial images. We first extract building rooftops using invariant color features. Then, we extract the shadow information using the grayscale histogram. We locate neighboring building and shadow regions. We use these couples to determine the illumination direction and verify building appearance. Finally, we define a measure to estimate the damage degree using the rooftop and shadow segments of each building. Test results on real aerial images indicate the possible use of our method in practical applications. We are still working on improving our damaged building detection system. We believe that the proposed system will be of great use for both disaster management and military applications.

REFERENCES

[1] Z. Chen and T. Hutchinson, “A probabilistic classification framework for urban structural damage estimation using satellite images,” Urban Remote Sensing Joint Event 2007, pp. 1–7, 2007.

[2] T. Gevers and A. W. M. Smeulders, “Pictoseek: Combining color and shape invariant features for image retrieval,” IEEE Transactions on Image Processing, pp. 102–119, 2000.

[3] A. Huertas and R. Nevatia, “Detecting buildings in aerial images,” Computer Vision, Graphics and Image Processing, vol. 41, pp. 131–152, 1988.

[4] R. B. Irvin and D. M. McKeown, “Methods for exploiting the relationship between buildings and their shadows in aerial imagery,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 19, no. 1, pp. 1564–1575, 1989.

[5] N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, pp. 62–66, 1979.

[6] B. Sırmaçek and C. Ünsalan, “Building detection from aerial images using invariant color features and shadow information,” in Proceedings of the International Symposium on Computer and Information Sciences ISCIS'2008, 2008, pp. –.

[7] V. J. D. Tsai, “A comparative study on shadow compensation of color aerial images in invariant color models,” IEEE Transactions on Geoscience and Remote Sensing, vol. 44, no. 6, pp. 1661–1671, 2006.

[8] C. Ünsalan and K. L. Boyer, “Linearized vegetation indices based on a formal statistical framework,” IEEE Transactions on Geoscience and Remote Sensing, vol. 42, pp. 1575–1585, 2004.

[9] T. Vu, M. Matsouka, and F. Yamazaki, “Shadow analysis in assisting damage detection due to earthquake from QuickBird imagery,” Proceedings of the 10th International Society for Photogrammetry and Remote Sensing Congress, pp. 607–611, 2004.

[10] G. Zhou, W. Chen, J. Kelmelis, and D. Zhang, “A comprehensive study on urban true orthorectification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 9, pp. 2138–2147, 2005.

[11] P. Zimmermann, “A new framework for automatic building detection analyzing multiple cue data,” in International Archives of Photogrammetry and Remote Sensing IAPRS'2000, vol. 33, 2000, pp. 1063–1070.
