
InPRO: Automated Indoor Construction Progress Monitoring Using Unmanned Aerial Vehicles

by

Hesam Hamledari

A thesis submitted in conformity with the requirements for the degree of Master of Applied Science

Department of Civil Engineering University of Toronto

© Copyright by Hesam Hamledari (2016)


InPRO: Automated Indoor Construction Progress Monitoring Using Unmanned Aerial Vehicles

Hesam Hamledari

Master of Applied Science

Department of Civil Engineering

University of Toronto

2016

Abstract

In this research, an envisioned intelligent robotic solution for automated indoor data collection and inspection that employs a series of unmanned aerial vehicles (UAVs), entitled “InPRO”, is presented. InPRO consists of four stages, namely: 1) automated path planning; 2) autonomous UAV-based indoor inspection; 3) automated computer vision-based assessment of progress; and 4) automated updating of 4D building information models (BIMs). The work presented in this thesis addresses the third stage of InPRO. A series of computer vision-based methods that automate the assessment of construction progress using images captured at indoor sites are introduced. The proposed methods employ computer vision and machine learning techniques to detect the components of under-construction indoor partitions. In particular, framing (studs), insulation, electrical outlets, and different states of drywall sheets (installing, plastering, and painting) are automatically detected in digital images. High accuracy rates, real-time performance, and operation without a priori information are indicators of the methods’ promising performance.


Acknowledgments

My experience at the University of Toronto has been full of adventures, challenges, discoveries, and long-lasting achievements in a research field that has now become an inseparable part of me. I would like to thank the individuals who continuously supported me during the past two years.

First and foremost, I would like to extend my deepest gratitude to my supervisor, Professor Brenda McCabe, for making this life-changing experience possible. I am truly honored and privileged to have conducted my research under her supervision. Without any doubt, she has been the most influential individual I have met in my life: she not only made my academic achievements possible but was also a genuine source of inspiration. She truly believed in me even when I could not believe in myself; she shared with me her broad knowledge and enthusiasm for authentic research; and she always supported me with patience. She will always have a special place in my heart.

I am grateful to Professor Rezazadeh Azar for his constructive and honest comments on my work. His continuous support helped me better approach computer vision and machine learning. I would like to thank Dr. Shahi for his continuous and heart-warming support, priceless advice, and constructive comments. I am indebted to Adrienne De Francesco and Steve Miszuk, who tremendously helped me with site visits and the validation stage of my work. My research achievements would not have been possible without their support.

I am grateful for the support of my colleagues and friends in our research group: Yuting Chen, Felix Wei, Emilie Alderman, Amber Li, Patrick Marquis, and Hiba Ali. In particular, I would like to thank Yuting for being a great office mate, a friend, and someone I could always learn from. Her determination, accomplishments, and diligence in research were always a source of inspiration.

A heartfelt thank you goes to my parents, Mehry and Morteza, and my sister, Homa. They

supported me with their encouragement and love. They were always in my thoughts.

Finally, I would like to thank Shakiba, my ingenious companion. Her many sacrifices made it

possible for me to reach where I am today. Her unconditional love made it all possible.


Table of Contents

Acknowledgments.......................................................................................................................... iii

Table of Contents ........................................................................................................................... iv

List of Tables ................................................................................................................................. vi

List of Figures ............................................................................................................................... vii

List of Acronyms .............................................................................................................................x

Chapter 1 ..........................................................................................................................................1

Introduction .................................................................................................................................1

1.1 Objectives ............................................................................................................................2

1.2 Contributions........................................................................................................................3

1.3 Thesis Structure ...................................................................................................................4

Paper I .........................................................................................................................................6

2.1 Abstract ................................................................................................................................6

2.2 Introduction ..........................................................................................................................7

2.3 Background ..........................................................................................................................8

2.4 Challenges ..........................................................................................................................10

2.4.1 Manual Data Collection Methods ..........................................................................10

2.4.2 Limitations of Existing Vision-Based Solutions....................................................11

2.5 Proposed Visual Recognition Solutions.............................................................................12

2.5.1 Step 1: Extraction of the Object’s Approximate Outline .......................................12

2.5.2 Step 2: Shape Analysis and Object Localization ...................................................14

2.6 Implementation and Results ...............................................................................................15

2.7 Conclusion .........................................................................................................................16

2.8 Acknowledgement .............................................................................................................17


Paper II ......................................................................................................................................18

3.1 Abstract ..............................................................................................................................18

3.2 Keywords ...........................................................................................................................19

3.3 Introduction ........................................................................................................................19

3.4 Indoor progress monitoring and related work ....................................................................20

3.4.1 Automated computer vision methods ....................................................................22

3.4.2 Unmanned aerial vehicles and their applications...................................................23

3.5 The research context ..........................................................................................................25

3.6 Automated Visual Recognition of Interior Partitions ........................................................25

3.6.1 The stud module .....................................................................................................26

3.6.2 The Insulation module ...........................................................................................28

3.6.3 The drywall module ...............................................................................................32

3.6.4 Electrical outlet module .........................................................................................34

3.7 Validation and results ........................................................................................................36

3.7.1 Validation metrics and results ................................................................................39

3.7.2 Selection of threshold and other input parameters .................................................43

3.7.3 Validation of vision-based methods used on UAVs ..............................................45

3.8 Conclusions and Future Work ...........................................................................................49

3.8.1 Future work ............................................................................................................50

3.9 Acknowledgement .............................................................................................................52

References .................................................................................................................................53


List of Tables

Table 2.1. The results of testing the proposed algorithms ............................................................ 16

Table 3.1. The specifications of the Bebop quadcopter ............................................................. 37

Table 3.2. The distribution of images in each database ............................................................. 39

Table 3.3. The results of testing the proposed algorithms on the three image databases .......... 40

Table 3.4. The average run times obtained for the proposed methods, for each category and image resolution ............................................................................................................................ 41


List of Figures

Figure 1.1. InPRO: an automated intelligent robotic solution for indoor progress monitoring. .... 3

Figure 1.2. The four stages of InPRO. ............................................................................................ 4

Figure 2.1. Distribution of pixel intensities for a 680×380 pixels image containing cropped instances of steel studs: (a) LAB color space; and (b) HSV color space ...................... 13

Figure 2.2. Extraction of studs outline: (a) the input frame; (b) the output of the first step displaying studs in white ............................................................... 14

Figure 2.3. The second step, shape analysis and object localization: (a) the line segments generated by progressive probabilistic Hough; and (b) the final drawn lines, shaping the studs . 15

Figure 2.4. Examples of detected steel studs in images: (a) the first example, 15 true positives, 1 false negative, and 1 false positive; and (b) the second example, 9 true positives, 1 false positive, and 1 false negative ....................................................................... 15

Figure 2.5. Examples of detected electrical boxes in images: (a) the input frame; (b) the output of the first step, containing the approximate shapes; and (c) the output of the second step, shape analysis and object localization .................................................... 16

Figure 3.1. An overview of the research context .......................................................................... 25

Figure 3.2. The output of different stages of the stud detection module: (a) the input image; (b) thresholded L channel; (c) line segments generated by probabilistic Hough transform; (d) output lines ............................................................................... 27

Figure 3.3. The output of different stages of the stud detection module: (a) the input image; (b) thresholded L channel; (c) line segments generated by probabilistic Hough transform; (d) output lines ............................................................................... 27

Figure 3.4. The algorithm for visual detection of insulation in images ........................................ 29

Figure 3.5. The detection of insulation in images: (a) the input image; (b) the sample patch used in this example; (c) the color-coded sample patch according to the results of k-means clustering; (d) the color-coded input image; (e) the binary mask generated after label matching; (f) extracted insulation blankets ........................................................ 31

Figure 3.6. The flowchart of the algorithm for detection of three states of progress for drywall sheets ............................................................................ 32

Figure 3.7. Extraction of plastered regions: (a) input frames; (b) two examples of single thresholding; (c) the results of the proposed method .................................................... 33

Figure 3.8. The identification of installed drywall sheets: (a) input images; (b) the vertical edges extracted using a Sobel kernel; (c) the detected sheets of drywall ............................... 34

Figure 3.9. The algorithm for detection of electrical outlets in images ........................................ 35

Figure 3.10. The detection of electrical outlets in images: (a) the input image containing four electrical boxes; (b) the result of thresholding; (c) filtered blobs before using bitwise operator; (d) the detected boxes; (e) the input image containing four electrical sockets; (f) the result of thresholding; (g) the filtered results; (h) the detected sockets ...................................... 36

Figure 3.11. The Bebop quadcopter used for this study ............................................................... 38

Figure 3.12. The examples of images in the UAV (first row) and smartphone database (second row); the images are scaled down, and their aspect ratio and relative sizes are not preserved .... 39

Figure 3.13. Precision, recall, and run time plotted against different image resolutions: (a) studs; (b) insulation; (c) drywall; (d) electrical outlets ........................................................... 42

Figure 3.14. Two examples of the detected studs in images: (a) the first example, eleven correctly detected, one FN, and one FP; (b) the second example, all studs correctly detected .... 44

Figure 3.15. Two examples of detected insulation blankets in images ........................................ 44

Figure 3.16. Examples of the electrical outlets detected: (a) the first example, six correctly detected and one FN; (b) the second example, all correctly detected ........................................... 45

Figure 3.17. Two screenshots of flight statistics available through the device’s graphical user interface: (a) the whole flight; (b) a portion of the flight, in more detail .................................... 47


Figure 3.18. The variations of precision and recall against the velocity: (a) stud; (b) insulation; (c) drywall; (d) electrical outlet .................................................................... 48

Figure 3.19. Examples of metallic electrical boxes in challenging scenes of indoor environments ....................................................................................................... 50


List of Acronyms

BIM Building Information Model

CAD Computer-aided design

CDF Cumulative distribution function

CLAHE Contrast Limited Adaptive Histogram Equalization

CPU Central Processing Unit

GPS Global Positioning System

HOG Histogram of Oriented Gradients

HSV Hue, Saturation, and Value

MEP Mechanical, electrical, and plumbing

OpenCV Intel® Open Source Computer Vision Library

RFID Radio-frequency identification

SFM Structure from Motion

SVM Support Vector Machine

UAV Unmanned Aerial Vehicle

UWB Ultra-wideband

VTOL Vertical takeoff and landing


Chapter 1

Introduction

Construction managers need to continuously monitor the state of work at construction sites to

reduce cost and schedule overruns and make informed decisions (Navon and Sacks 2007). Current

monitoring practices are manual, time-consuming, costly, and inefficient (Golparvar-Fard et al. 2009). Active monitoring of progress at construction sites can prevent costly defects, provide a

clearer view of the construction processes, and increase situation awareness (Akinci et al. 2006).

As a result, novel research streams were initiated in the past two decades with a focus on the use

of technology to automate construction monitoring. These technologies include laser scanning

(Tang et al. 2010), digital cameras (Bohn and Teizer 2010; McCabe and Clarida 2004), radio

frequency identification (RFID) (Akinci et al. 2003), ultra-wideband (UWB) (Shahi et al. 2013),

and unmanned aerial vehicles (UAV) (Siebert and Teizer 2014).

To achieve automated progress monitoring, as-is conditions at sites should be captured and

reflected into building information models (BIM), a process known as “as-built modeling”

(Brilakis et al. 2010; Patraucean et al. 2015). The state-of-the-art automated progress monitoring

studies differ in the reality capture technologies they employ, which can generally be categorized

into three groups: image-based solutions, laser scanning, and radio-based technologies (UWB and

RFID).

Image-based solutions, the focus area of this research, can be categorized into 3D reconstruction

and 2D image processing techniques. The former uses a series of images, captured at different

viewpoints, to create 3D point clouds from which different building components are extracted and

identified (Golparvar-Fard et al. 2011); the latter extracts the semantic information from a single

2D image. The information extracted from the 3D point clouds or a single image is then compared

with as-planned progress available in the form of 4D BIMs, in which the schedule is integrated

with the 3D models (Golparvar-Fard et al. 2009).

There are limitations associated with the state-of-the-art image-based techniques. First, they have

not been studied for indoor construction sites. The use of 3D reconstruction-based techniques at

indoor sites requires manual entry of viewpoints and locations (Roh et al. 2011) for each image as

GPS and other locating systems do not typically work indoors. In addition, the required 2D image


processing techniques have not yet been developed to detect indoor project-related components

(Kropp et al. 2012). Second, data collection processes are still manual, requiring someone to

inspect and capture images of the active areas at the site (Teizer 2015). This is because fixed

cameras lose their effectiveness when the work moves indoors (Bohn and Teizer 2010), and cannot

provide images needed for 3D reconstruction-based techniques due to their limited coverage.

Third, there has been little study on the integration of 2D image processing-based techniques with

BIMs (Kim et al. 2013). This hampers the practical use of image-based techniques due to the

tedious task of model updating (Patraucean et al. 2015).

This thesis contributes to an envisioned system in which indoor construction progress is

automatically captured and documented. Called “InPRO” (indoor progress monitoring), the

system uses UAVs to capture images in active areas and return them for processing. The results

are then used to update the building information model (BIM) to show progress and as-built

conditions.

1.1 Objectives

To eliminate the limitations of image-based techniques for indoor sites, this research aims to

develop a 2D image processing-based system that automates not only the data collection

processes at indoor sites, but also the assessment of progress. The primary objectives of this

research are:

• To test the viability of using unmanned aerial vehicles (UAVs) for capturing still and video images at indoor construction sites;

• To develop algorithms that extract the state of indoor construction from 2D images; and

• To develop a reliable means of estimating the percent completion of various construction activities so that the schedule within a building information model (BIM) can be updated and maintained.


1.2 Contributions

An automated UAV-based progress monitoring system, entitled “InPRO” (Fig. 1.1), was proposed; it aims to automate data collection through autonomous robotic inspection and the assessment of progress through computer vision and machine learning.

In this system, autonomous UAVs, equipped with high-

resolution cameras, inspect active locations at an indoor site, either specified by a construction

manager or auto-selected based on the available information in regard to as-is conditions. During

a flight, the UAVs capture high-quality videos and images of under-construction partitions, useful

for documentation and assessment of construction progress. These visual resources are accessible

both in real-time through the flying robots’ self-generated Wi-Fi networks and after the inspection

is completed. The images and videos are automatically processed using computer vision

algorithms developed herein to assess the state of progress for the inspected under-construction

partitions. These algorithms enable computers to analyze an image and extract the details of

interest from it without any human intervention. Finally, the results are automatically incorporated

in as-designed 4D BIMs and construction schedules to better reflect the current conditions. The

proposed system, InPRO, provides project team members with low-cost and accurate information

regarding the actual progress in real time, which significantly facilitates informed decision making and situation awareness.

The proposed system (Figure 1.2) comprises four stages: 1) inspection planning; 2) automated

UAV-based data collection; 3) automated detection of state of progress; and 4) updating and

creating as-built BIMs. The contributions presented in this thesis address the third stage of InPRO (Fig. 1.2c): the automated detection of the state of progress in images captured at indoor sites.

Figure 1.2. The four stages of InPRO.

Figure 1.1. InPRO: an automated intelligent robotic solution for indoor progress monitoring.


1.3 Thesis Structure

This is a publication-based thesis. Chapter 1 provides the background and motivation for this

research work, and the contributions of the papers. Since a detailed review of related works is

available in both papers (Chapters 2 and 3), only a brief explanation of the research context is

provided to depict the overall trends toward automated progress monitoring.

Chapter 2 contains the first publication, a conference paper, in which the challenges and gaps

associated with indoor progress monitoring systems are identified. It discusses the fundamental

methods used for visual detection, with a focus on the inherent characteristics of indoor project-related objects that can be exploited by computer vision techniques. Then, it proposes a two-step

computer vision-based solution for automated visual detection of structural elements and project-

related objects in images of indoor construction site environments. This work employs a generic

approach toward detection of building elements that makes it applicable to other components.

Chapter 3 contains the second publication, a journal paper. This paper tailors the general approach introduced in Chapter 2 to specific components of under-construction partitions, introducing a set of dedicated computer vision modules and thereby providing a more targeted treatment than the first paper. The proposed

algorithms address the visual detection of four categories of objects, two of which rely on the use

of machine learning techniques. Also, the proposed methods are evaluated in an unmanned aerial

vehicle (UAV)-based inspection context. Furthermore, this study provides an underlying analysis



of the performance of computer vision-based techniques in a UAV-based monitoring scheme such

as InPRO. This is vital because almost all of the studies in the field have focused on images captured

by static cameras. Hence, the journal paper analyzes the issue from a dynamic perspective, an

inherent characteristic of a robotic inspection tool. These contributions provide a basis for future

work on the use of computer vision-based techniques on UAV-captured digital images and videos,

enable robust and accurate visual detection of objects at indoor sites, and facilitate automated

progress assessment and quality control.

Both papers address the third stage of InPRO (Fig. 1.2c), providing a basis for future studies on

2D image processing-based BIM updating, model-driven robotic inspection, and image-based

quality control at indoor sites. It is hoped that future studies will be further directed toward indoor progress monitoring, an area that is far from maturity.


Chapter 2

Paper I

Automated Visual Recognition of Indoor Project-Related Objects: Challenges and Solutions¹

Hesam HAMLEDARI¹ and Brenda MCCABE²

¹ Graduate Research Assistant, Department of Civil Engineering, University of Toronto, 35 St. George Street, Toronto; email: [email protected]

² Associate Professor, Department of Civil Engineering, University of Toronto, 35 St. George Street, Toronto; PH (416) 946-3505; FAX (416) 978-6813; email: [email protected]

2.1 Abstract

Previous research has shown the application of vision-based methods for automated recognition and tracking of project-related entities at construction sites to be very promising. Nevertheless, the application of vision-based methods at indoor construction sites has not been explored sufficiently. Automated visual recognition of indoor project-related objects can provide both essential information about the current state of progress and semantic information for model-based approaches. There are a large number of challenges associated with indoor visual recognition, such as illumination variation and changes in viewpoint, that significantly reduce the accuracy of existing methods. In this paper, a novel methodology is used that takes an integrated color- and shape-based approach toward the recognition of different project-related objects, including structural elements and interior walls, under moderate to extreme illumination conditions and from different viewpoints. The method is validated using a comprehensive library of indoor digital images captured under different illumination conditions and viewpoints. The results indicate the applicability of the proposed method for the visual recognition of indoor project-related objects for further use in automated progress monitoring and in providing as-built 3D models with supplementary semantic information.

¹ To appear in the Construction Research Congress 2016 proceedings

2.2 Introduction

Continuous and regular monitoring of construction progress is necessary to avoid delays and

reduce cost overruns. To monitor and document the state of progress, construction managers and

site superintendents manually inspect different locations at the site on a regular basis. The current

conditions are then compared to as-planned progress to determine whether there are any schedule

discrepancies. As a result, monitoring processes are costly, error-prone, and labor-intensive (Yang

et al. 2015). During the last decade, there has been a shift toward automating these processes to

enable construction managers to make informed and corrective decisions at the right time.

Research streams have focused on the application of a variety of methods including radio

frequency identification (RFID), ultra-wideband (UWB), laser scanning, and computer vision

techniques (Patraucean et al. 2015). Previous research has shown the application of vision-based techniques to 2D images to be cost-effective, accurate, and easy to implement, owing to the abundance of digital images captured daily at construction sites via fixed cameras or cell phones.

Most vision-based studies have aimed to automate monitoring processes of outdoor construction

activities (Teizer 2015) because of a number of factors that hamper the robust detection of objects

in indoor images. These factors include the achromatic appearance of most indoor components

such as steel studs and concrete, cluttered indoor scenes, limited views as construction progresses,

and the dramatic lighting conditions and illumination patterns found indoors. Further, there has

been little research on the application of vision-based methods for the detailed detection of

components of interior partitions such as steel studs, insulation, electrical outlets, and drywall.

Current vision-based solutions studied in the construction context do not appear to provide a robust

solution for these components. Steel studs, for example, have slender and easily cluttered

structures, are achromatic to other components of indoor scenes, and appear in large numbers,

different configurations, and in multiple layers within one image. Electrical boxes, on the other


hand, pose additional challenges due to their relatively small size and their changed appearance before and after the sockets are installed.

To address these limitations, this paper aims to identify and discuss the challenges for the

application of vision-based techniques at indoor construction sites to provide a better

understanding of the barriers to their full implementation. These challenges can be categorized

into inefficiencies of current manual data collection procedures and limitations of existing vision-

based solutions. A set of vision-based solutions, tailored to different indoor project-related objects, is then provided. In general, the proposed method aims to exploit the inherent distinctive

features of each component, using an integrated shape and color-based approach. In this paper, the

proposed solutions are demonstrated for steel studs and electrical outlets.

2.3 Background

Recently, the easy and affordable use of high-resolution cameras has provided construction

managers with access to better visual resources such as videos, still images, and time-lapse images (Bohn

and Teizer 2010). This has initiated new streams of research on the use of these resources to

facilitate automated tracking, documenting, and communicating the state of progress at

construction sites, thereby reducing the need for manual labor-intensive inspections (Yang et al.

2015). The large body of research on automated vision-based monitoring of progress and detection

of project-related objects has primarily focused on image-based 3D modeling solutions and 2D

image processing techniques.

Image-based 3D modeling solutions are centered on the application of computer vision techniques

for creating as-built point clouds using 2D images captured daily at a construction site (Golparvar-

Fard et al. 2012). This process was made feasible by automating the camera registration in building

information models (BIMs) using the structure from motion (SFM) technique (Golparvar-Fard et al.

2009). In a series of pioneering works, the expected progress was extracted for any given time

using the as-planned BIMs, and the schedule discrepancies were determined by comparing the as-

planned progress and as-is conditions. These discrepancies were then visualized using augmented

reality technology, with elements color-coded in red and green, indicating behind and ahead of or

on schedule, respectively (Golparvar-Fard et al. 2012). In another study, the use of 4-dimensional

augmented reality-based techniques was studied for indoor construction sites by introducing a 3D

walk-through model (Roh et al. 2011). This study used both color and patterns to visualize the


state of progress for interior objects. However, it required manual entry of spatio-temporal

information for each image, making it labor-intensive and semi-automated. On the other hand, the

model-based approaches are challenged to efficiently track the state of progress for objects that

are not modeled in BIMs, a major limitation when working with BIMs having a low level of

development. As a result, a series of studies have investigated the use of 2D image processing

techniques to provide 3D as-built models with semantic information through reasoning on the

frequency with which materials appear in images (Dimitrov and Golparvar-Fard 2014; Han and

Golparvar-Fard 2014; Han and Golparvar-Fard 2015).

The use of 2D image processing techniques has been studied for both the extraction of semantic information and the detection of project-related objects at construction sites (Teizer 2015; Yang et al.

2015). In one study, a novel object extraction methodology was demonstrated and tested for

concrete columns. The 2D images containing columns were pre-processed to reduce the level of

noise and enhance the lighting. The images were then processed using both Canny edge detector

and watershed transformation to achieve two binary images containing columns in the form of

white foreground. The outputs of both processes were filtered using image masks created in 3D

CAD views calibrated to have the same view point as the fixed camera. The final results of both

Canny edge detector and watershed transformation were fused to form the final binary image

containing the recognized columns in the white foreground. In another work (Zhu and Brilakis

2010), the Hough transform was applied to the edge map of the images, created using the Canny edge detector, to detect the lines shaping the concrete columns in images, eliminating the need for

manual creation of masks. Bounding boxes were then created for each pair of vertical lines and

the material inside the boxes was passed to a classifier already trained using positive and negative

image patches of concrete. This study provided an accurate way of detecting these structural

elements in images. However, it had limitations detecting multiple instances of columns in the

same image and detecting the columns far away from the camera.

Only a few studies have addressed the detection of project-related objects at indoor construction

sites. To predict delays in finishing work (Kropp et al. 2012), an automated state-of-progress detection method was developed for drywall sheets (Kropp et al. 2014). Edge distributions and histograms

of pixel intensities were used to train cascaded support vector machine (SVM) classifiers and

differentiate states of progress for drywall sheets. Although promising, the parameters and

thresholds need further optimization to provide a robust and consistent solution. The methods


should also be tested on more comprehensive databases containing various types of drywall and

illumination patterns.

Considering the current vision-based solutions, there are still many research gaps in the automated

detection of project-related objects at indoor sites. For example, the detection of slender objects

such as steel studs and small objects such as electrical boxes in highly cluttered indoor scenes remains

challenging due to their slender structures, achromatic characteristics, and multiple instances in

the same image. In the following sections, the common challenges associated with the use of 2D

image processing techniques at indoor sites are identified. A two-step solution, which can be tailored to different components of the interior environment, is then presented. The solution is discussed and

illustrated for steel studs.

2.4 Challenges

Considering the state-of-the-art studies on the application of vision-based techniques for use at

indoor construction sites, current limitations regarding the use of computer vision techniques can

be grouped as 1) manual data collection methods and 2) limitations of existing vision-based

solutions.

2.4.1 Manual Data Collection Methods

Although most studies on the application of vision-based techniques for outdoor sites have used

images captured by fixed cameras, this option is not available for use at most indoor construction

sites because fixed cameras lose their line of sight and require constant relocation as interior walls

are erected. The constant need for relocation makes fixed cameras inefficient, logistically

challenging, and cost-prohibitive indoor solutions. It is recognized that fixed digital cameras are

most effective in the early, outdoor construction phases (Bohn and Teizer 2010).

The use of smartphones can be one solution to this problem, enabling the construction managers

to dynamically capture progress at different stages and locations at a site. However, it is labor-

intensive as it requires constant manual inspections of complex indoor sites. Currently, the existing


solutions for indoor sites require manual entry of the viewpoint, time, and location of each captured

frame.

Rotary-wing unmanned aerial vehicles (UAVs), programmed to perform daily inspections of

interior work, have great potential to automate the manual data collection. Even off-the-shelf

UAVs are equipped with high-resolution on-board cameras and precise sensors that enable

accurate documentation of the state of progress in the form of digital images and videos. Due to the

availability of their flight information, the flight path can be registered in BIMs, providing a robust

solution for locating the images in models. However, their use at sites that are constantly

undergoing physical changes has not been sufficiently studied.

2.4.2 Limitations of Existing Vision-Based Solutions

There are many factors associated with indoor environments that reduce the accuracy of current

object recognition methods and 2D image processing techniques. These include, but are not limited

to, the similarity of different objects in terms of color and shape, varying illumination patterns, extreme lighting conditions, and the high complexity of cluttered indoor scenes.

Studs, for example, are very challenging to detect due to their slender shape and thin structures,

which are easily cluttered by even small amounts of visual noise in the images. As a result, the

application of color or texture methods alone will not provide a robust solution. In addition, the

use of keypoint feature descriptors, such as SIFT, is likely to result in low accuracy rates for the same reason (David and DeMenthon 2005). According to our experiments, steel studs do not

possess a distinctive value in the HSV hue color channel, making it challenging to differentiate

them from concrete. This challenge is compounded in complex indoor scenes where multiple

components of interior walls, equipment, and temporary materials are present in images. In

addition, existing vision-based solutions for detection of similarly shaped objects such as columns

did not show promising performance in the case of steel studs, yielding precision and recall rates lower than 10%.

Electrical boxes are also relatively small in highly cluttered and complex indoor scenes, which

makes their recognition challenging. Furthermore, their appearance undergoes numerous changes


after the electrical sockets are installed. Their accurate recognition in images can facilitate both

quality control processes and the detection of state of progress.

Illumination patterns also affect the performance of current solutions, and the parameters involved

in visual recognition algorithms need to be adjusted for each lighting condition. Non-uniform

illumination patterns likewise affect the extraction of the shape and outline of construction elements due

to the higher intensity of pixels along the illuminated areas.

2.5 Proposed Visual Recognition Solutions

The proposed method for recognition of project-related objects at indoor construction sites consists

of two primary steps: 1) differentiating the objects from the background using color channel

intensities; and, 2) processing the extracted shape to remove false positives and localize the objects

of interest. Each of these two steps needs to be tailored to the object of interest. However, the

overall procedure follows the same logic. In the following sections, the two-step procedure is

explained and illustrated for steel studs.

2.5.1 Step 1: Extraction of the Object’s Approximate Outline

To differentiate steel studs from their background, the LAB color space, also known as CIELAB,

is used due to its superior performance compared to HSV and RGB when handling small color

differences (Schwarz et al. 1987). This is extremely important since objects at construction sites

tend to have achromatic characteristics, making the recognition task challenging and often

inaccurate. To better illustrate, Fig. 2.1 shows the distribution of color intensities for a 680×380-pixel image containing only cropped instances of steel studs, in the b and hue color channels (of the LAB and HSV color spaces, respectively).

Figure 2.1. Distribution of pixel intensities for a 680×380 pixels image containing cropped instances of steel studs: (a) LAB color space; and (b) HSV color space


As shown in Fig. 2.1, the hue color intensities do not appear to have a specific value, and are

instead scattered over the range of 0 to 179. On the other hand, the b color intensities are

concentrated around 0 (b values can range from -127 to +127). This is because studs appear to

have a grayish appearance, which is modeled by a = 0 and b = 0 in the LAB color space. As a

result, this color space provides a more robust solution for differentiating these objects from their

background.
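To make this comparison concrete, the following minimal sketch (assuming Python with OpenCV; the file name stud_patch.png is a hypothetical stand-in for a cropped stud image like the one behind Fig. 2.1) computes the spread of the two channels; a widely scattered hue and a b distribution concentrated near zero reproduce the behavior described above:

```python
import cv2
import numpy as np

# Hypothetical cropped image containing only steel stud pixels
patch = cv2.imread("stud_patch.png")

hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
lab = cv2.cvtColor(patch, cv2.COLOR_BGR2LAB)

hue = hsv[:, :, 0].astype(np.float32)        # OpenCV hue range: 0 to 179
b = lab[:, :, 2].astype(np.float32) - 128.0  # shift so 0 is neutral gray

# For achromatic studs, hue scatters widely while b clusters near 0.
print("hue: mean=%6.1f  std=%6.1f" % (hue.mean(), hue.std()))
print("b:   mean=%6.1f  std=%6.1f" % (b.mean(), b.std()))
```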

We aim to extract the studs’ approximate outline using simple thresholding algorithms, taking

advantage of color and illumination differences between the studs and their background. To

achieve this, either the illumination channel (L) or color channels (a and b) can be used. However,

according to our experiments, the former results in higher accuracy rates due to the studs’ more

reflective surfaces compared to components in the background such as concrete, insulation, and

wood. On the other hand, the use of a and b color channels does provide comparable accuracy rates

when studs are placed next to insulation, each having distinctively different color values. However,

if studs are placed against a concrete wall, the use of a and b color channels results in much lower

accuracy rates due to steel and concrete having similar color characteristics.

To extract the approximate outline, the image (Fig. 2.2a) is first converted to the LAB color space, and its illumination channel (L) is extracted. The L channel is then thresholded using Otsu’s threshold selection method. To remove noise in the processed image, either in the form of white blobs in the foreground or openings in the background, morphological transformations such as opening and closing are applied (Fig. 2.2b).
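A minimal sketch of this first step is given below, assuming Python with OpenCV; the 3×3 kernel size and the helper name extract_stud_outline are illustrative assumptions rather than the exact implementation used in this work:

```python
import cv2

def extract_stud_outline(image_bgr):
    """Step 1: approximate the studs' outline by thresholding the L channel."""
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    l_channel = lab[:, :, 0]

    # Otsu's method automatically selects the threshold that separates the
    # reflective studs from the darker background.
    _, binary = cv2.threshold(l_channel, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Opening removes small white blobs in the foreground; closing fills
    # small dark openings inside the stud regions (Fig. 2.2b).
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    return binary
```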

2.5.2 Step 2: Shape Analysis and Object Localization

To extract the instances of studs in the output binary image (Fig. 2.2b) generated at the end of step 1, the progressive probabilistic Hough transform is employed. First, the edge map of the binary image is created using the Canny edge detector, and the probabilistic Hough transform is applied to the image to extract the small line segments shaping the studs. This results in a large number of segments only a few pixels in length (Fig. 2.3a). To integrate all of these line segments, each image is divided into 80 to 100 narrow vertical regions, and the number of near-vertical line segments in each region is counted, taking their lengths into consideration. A total vote is calculated for each region, representing the likelihood of that region being an edge shaping a stud. A threshold is set relative to the image’s height, and the centerline of each line candidate (region) whose vote is greater than the pre-specified threshold is drawn (Fig. 2.3b).

To better localize the studs in images, or to differentiate different layers of framing present in one

image, the minimum and maximum height observed for each line candidate can be stored and used

for further spatial reasoning.
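Under the same caveat, this second step can be sketched as follows; the bin count, the near-vertical test, and the vote threshold are assumed values chosen within the ranges described above:

```python
import cv2
import numpy as np

def localize_studs(binary, n_regions=90, vote_ratio=0.5):
    """Step 2: vote near-vertical Hough segments into narrow vertical bins."""
    h, w = binary.shape
    edges = cv2.Canny(binary, 50, 150)

    # The progressive probabilistic Hough transform yields many short
    # segments, each only a few pixels long (Fig. 2.3a).
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=20,
                               minLineLength=10, maxLineGap=5)

    votes = np.zeros(n_regions)
    if segments is not None:
        for x1, y1, x2, y2 in segments[:, 0]:
            if abs(x2 - x1) <= 0.1 * abs(y2 - y1) + 1:  # near-vertical only
                length = np.hypot(x2 - x1, y2 - y1)     # weight by length
                votes[int(x1 + x2) // 2 * n_regions // w] += length

    # Report the centerline of every region whose accumulated vote exceeds
    # a threshold set relative to the image height (Fig. 2.3b).
    return [int((i + 0.5) * w / n_regions)
            for i, v in enumerate(votes) if v > vote_ratio * h]
```

The returned x coordinates mark candidate stud centerlines; storing the minimum and maximum y observed for each surviving region would support the spatial reasoning noted above.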

Figure 2.2. Extraction of studs outline: (a) the input frame; (b) the output of the first step displaying studs in white


2.6 Implementation and Results

The proposed two-step methodology was implemented for different project-related objects

including components of interior partitions such as studs, electrical boxes, and insulation. Even

though the algorithms need to be tailored to each object, the overall procedure follows the two-

step algorithm illustrated for studs. The approximate outline is extracted using simple thresholding

methods, and the shapes are analyzed to remove the false positives and localize the objects. Fig.

2.4 and Fig. 2.5, respectively, illustrate examples of recognized studs and electrical boxes in

digital images.

Figure 2.3. The second step, shape analysis and object localization: (a) the line segments generated by progressive probabilistic Hough; and (b) the final drawn lines, shaping the studs

Figure 2.4. Examples of detected steel studs in images: (a) the first example, 15 true positives, 1 false negative, and 1 false positive; and (b) the second example, 9 true positives, 1 false positive, and 1 false negative

Figure 2.5. Examples of detected electrical boxes in images: (a) the input frame; (b) the output of the first step, containing the approximate shapes; and (c) the output of the second step, shape analysis and object localization


The experiments show that the use of the L channel results in higher precision and recall rates compared to the a and b color channels. This seems to be partly because of the similar appearances of studs and concrete in terms of color, both having near-zero pixel intensities in the a and b color channels.

Furthermore, the algorithms were implemented and tested on comprehensive image databases of

indoor construction sites under different lighting conditions. The results of testing the algorithms

for studs and electrical outlets, the size of image databases used for testing, and the run times are

listed in Table 2.1.

Table 2.1. The results of testing the proposed algorithms

Object              Number of images    Image size (pixels)    Precision (%)    Recall (%)    Run time (s)
Stud                330                 1920×1080              91.08            87.47         0.44
Electrical outlet   300                 1920×1080              86.32            93.43         0.38

2.7 Conclusion

Computer vision algorithms have been studied for the visual recognition of equipment, materials, and project-related objects at outdoor construction sites. Their application indoors, however, still needs development. This paper identifies the challenges with the application of vision-based methods at indoor sites and proposes a two-step solution for the visual recognition of project-related objects. In general, the limitations regarding the application of computer vision algorithms at indoor sites can be categorized as inefficiencies associated with manual data collection procedures and limitations of current vision-based solutions. The achromatic characteristics of most indoor objects such as steel studs and concrete components, varying illumination patterns, the similarity of objects in terms of appearance, and highly cluttered and complex scenes are only some of the challenges associated with the visual recognition of objects at indoor sites. A two-step image processing method is presented and illustrated for steel studs. In the first step, the object’s approximate outline is extracted using the object’s color and illumination differences with its background; the second step analyzes the object’s shape to localize its instances in the image. The visual recognition of project-related objects in images can provide model-based progress monitoring solutions with supplementary information and facilitate quality control.

2.8 Acknowledgement

The authors would like to thank Steve Miszuk and Adrienne De Francesco from the University of Toronto; Tom Finan, principal of PMX Construction; Fernando Tito, president of SKYGRiD Construction; and Teresa Marsico, project administrator at SKYGRiD Construction, for their great help and support during data collection and site visits.


Chapter 3

Paper II

Automated Vision-based Progress Monitoring of Under-Construction Indoor Partitions Using Unmanned Aerial Vehicles

Hesam Hamledari, Brenda McCabe, Shakiba Davari

Department of Civil Engineering, University of Toronto, Toronto, ON, Canada, M5S 1A4

3.1 Abstract

Computer vision techniques have already shown great promise in automating the monitoring of construction progress; however, their use in complex indoor construction environments has not been sufficiently studied. The images and videos required for their operation are still captured manually, increasing the cost and hence hampering their practical use. Unmanned aerial vehicles (UAVs) equipped with high-resolution cameras have the potential to leverage the use of digital images for indoor progress monitoring. This paper, as a step towards UAV-based indoor progress monitoring, presents a series of vision-based modules that aim to automate the detection of the state of progress in digital images of indoor construction sites. Four integrated shape- and color-based approaches are introduced for the detection of components of interior partitions, namely studs, insulation, drywall, and electrical outlets. The proposed algorithms were validated using three image databases of indoor construction sites: one captured by a quadcopter, one captured by a smartphone, and one collected from publicly available sources on the internet. The average precision and recall rates obtained for studs (91%, 87%), insulation (91%, 94%), drywall (89%, 91%), and electrical outlets (86%, 93%), together with the methods’ real-time performance and their ability to operate without a priori information, are highly indicative of their promising performance.

To appear in the journal Automation in Construction


3.2 Keywords

Progress monitoring, UAV, computer vision, interior construction, machine learning, data

collection

3.3 Introduction

Constant monitoring of progress at construction sites is crucial for reducing cost and schedule overruns and for enhancing quality control, documentation, and communication (Yang et al. 2015). Accordingly, there has been extensive research on automating monitoring processes to ensure that construction managers can make informed decisions and take timely corrective measures. State-of-the-art technologies for automated progress monitoring consist primarily of

radio frequency identification devices (RFID), ultra-wideband (UWB), and three-dimensional

(3D) model-based approaches, such as laser scanning and vision-based augmented reality.

These methods have typically been studied in the context of monitoring outdoor progress or have

not proved to be sufficiently effective indoors. Since RFID and UWB technologies require

installation of tags on objects, these radio-based approaches do not seem to provide a robust and

cost-effective solution for indoor progress monitoring. Laser scanning does not perform well

indoors due to low accuracy of point clouds at edges and highly reflective materials (Kiziltas et al.

2008). Vision-based augmented reality methods, though very promising, have not been sufficiently

studied for indoor environments. Currently, their application indoors is semi-automated as it

requires the manual entry of spatial-temporal information (Roh et al. 2011). Moreover, model-

based methods cannot efficiently monitor the progress of details and objects that are not included

in 3D and building information models (BIMs). In fact, the 3D as-planned models required for the

operation of these methods are seldom developed for smaller projects or are not sufficiently

detailed to support tracking the progress of components such as studs, insulation, electrical outlets,

and different states of drywall. One solution to this problem can be the extraction of semantic

information and state of progress from 2D images (Yang et al. 2015) using computer vision


methods. Unfortunately, there has been little study on such methods for automated visual detection

of indoor construction objects.

The digital images and videos needed for the operation of vision-based methods for indoor

construction are typically captured manually. This is necessary because fixed cameras lose their

effectiveness indoors as walls start to obstruct the camera’s line of sight (Bohn and Teizer 2010).

Continually relocating cameras can be an expensive exercise. Unmanned aerial vehicles (UAV)

equipped with sensors and on-board cameras can be a viable solution as they can provide fast,

easy, and cost-effective access to high-resolution images of complex indoor spaces from multiple

locations. The use of UAVs has the potential to facilitate the data collection phase, reduce the need

for regular, labor-intensive manual inspections, and eliminate costly non-value-added processes.

This paper aims to overcome some of the existing challenges for automated visual recognition of

state of progress of indoor construction sites as the first step toward fully automated UAV-based

indoor progress (InPro) monitoring. Four vision-based modules are proposed to

detect the components of interior partitions, namely studs, insulation, electrical outlets, and

different states of drywall. Because the evaluation of the proposed visual recognition modules

would be incomplete without considering the context in which they operate, they are validated

using UAV-based data. Also provided is a brief study of the UAV’s reliability and potential for

providing the digital images and videos required for the operation of the proposed modules.

3.4 Indoor progress monitoring and related work

Tracking objects using RFID technology (Akinci et al. 2006; Ergen and Akinci 2007; Ergen et al.

2007; Kiziltas et al. 2008) requires the installation of tags on objects, which will later be scanned

using a reader that is located within a certain range. The use of UWB provides a wider range of

coverage and has recently been studied for use indoors (Shahi et al. 2012; Shahi et al. 2013). Even

though these radio-based technologies have shown promise for tracking materials and components

in the dynamic environment of construction sites (Akinci et al. 2003; Ergen et al. 2007), their

practical use for indoor progress monitoring becomes quite labor-intensive as they require constant

installation, scanning, and maintenance. Additional challenges occur when attaching tags to many

indoor building materials, such as studs, drywall, and insulation and then attempting to use them

to indicate progress of partially completed (Kiziltas et al. 2008) or operation-level tasks.


Laser scanning (Akinci et al. 2006; Bosché 2010; Bosche and Haas 2008; Brilakis et al. 2010;

Tang et al. 2010; Turkan et al. 2012; Zhang and Arditi 2013) involves merging several 3D point

clouds generated by the scanners into one as-built model. That model is then aligned to and

compared with as-planned BIMs or CAD models to detect deviations. Although applicable to

indoor sites, laser scanners are expensive and time-consuming, and they require expert operators (Kiziltas

et al. 2008). Furthermore, the mixed pixel phenomenon, a technical limitation occurring at spatial

discontinuities, results in data noise, data loss, and low accuracy of edges in 3D models (Kiziltas

et al. 2008). Laser scanners do not generate accurate point clouds for reflective materials

(Golparvar-fard et al. 2009; Kiziltas et al. 2008) such as metal studs or pipes. Finally, they cannot

provide semantic information for 3D models (Golparvar-fard et al. 2009). Due to numerous edges,

reflective materials, and project-related objects at indoor sites, laser scanners do not appear at this

time to provide a robust, convenient, or cost-effective solution for monitoring indoor progress.

Vision-based augmented reality methods (Golparvar-fard and Peña-mora 2007; Golparvar-fard et

al. 2009; Golparvar-fard et al. 2009; Golparvar-Fard et al. 2012) for progress monitoring were first

introduced as 4D augmented reality (D4AR) (Golparvar-fard and Peña-mora 2007; Golparvar-fard

et al. 2009), where 3D as-planned models were superimposed over unordered daily photographs

to visualize deviations from schedule (Golparvar-fard et al. 2009). A structure-from-motion (SfM)

technique was employed to automatically register camera viewpoints in an existing 3D model,

which significantly automated progress monitoring. Even though the augmented reality-based

methods have shown great promise, their application for indoor construction sites has not been

adequately advanced. Due to their object-based approach, they cannot detect deviations in

processes and objects that are not modeled in BIMs. Also, the labor-intensive nature of creating

and updating BIMs for projects hampers the practical use of model-based methods (Brilakis et al.

2010). Most importantly, they cannot operate if 4D as-planned models do not exist.

Several promising studies have been conducted (Dimitrov and Golparvar-Fard 2014; Han and

Golparvar-Fard 2015) to solve some of the problems, including an appearance-based approach

(Han and Golparvar-Fard 2015) in which materials were extracted from patches of 2D images, and

the operation-level progress was detected using frequency diagrams of materials in the images.

This method, even though accurate and robust, has several drawbacks. It has a high computational

cost, requires a comprehensive library of materials for training classifiers, and has not been

specifically applied indoors, where the variety of materials is significantly greater.


Even though outdoor progress monitoring is well studied in the literature, only a few studies

address indoor progress monitoring using vision-based methods (Kropp et al. 2012; Roh et al.

2011). The augmented reality-based method first presented as D4AR (Golparvar-fard et al. 2009)

was tailored for indoor progress monitoring by introducing an object-based 3D walk-through

model (Roh et al. 2011). This work improved the visualization of progress for indoor construction

sites, provided construction managers with a realistic view of the progress, and was the first step

toward the application of augmented reality-based methods for indoors. Nevertheless, this study

had several limitations. It required the user to manually enter spatial-temporal information

including the time, location, and viewpoint for each photograph, which made it quite labor-intensive

and hampered its practical implementation.

Thus, there is a need for an easy, robust, and cost-effective means of indoor progress monitoring

that can either operate as a standalone method or be integrated with the existing model-based

approaches.

3.4.1 Automated computer vision methods

In recent years, there has been a dramatic increase in the number of digital photos that are captured

daily at a construction site (Yang et al. 2015). The easy, economical, and real-time access to these

images initiated a stream of research on data collection and monitoring of construction sites using

different forms of these media. Videos, still images, and time-lapse images enable project managers to

monitor construction sites with less effort (Abeid et al. 2003), improve the communication between

different groups engaged in a project, and better document the progress (Bohn and Teizer 2010).

Furthermore, development and introduction of computer vision techniques for automating the

extraction of project-related information from digital images opened up many opportunities to

leverage the use of visual resources. As a result, there is a large body of research conducted on the

automated vision-based methods in the context of the construction industry. These studies have

addressed research problems including, but not limited to, automated recognition and tracking of

resources (Teizer 2015) such as workers (Brilakis et al. 2011; Memarzadeh et al. 2013; Park and

Brilakis 2012; Teizer and Vela 2009) and equipment (Brilakis et al. 2011; Memarzadeh et al. 2013;

Rezazadeh Azar and McCabe 2012; Rezazadeh Azar and McCabe 2012; Zou and Kim 2007),

classification of materials (Dimitrov and Golparvar-Fard 2014; Son et al. 2014; Son et al. 2012;

Zhu and Brilakis 2010), productivity analysis (Gong and Caldas 2009; Rezazadeh Azar et al. 2013;


Zou and Kim 2007), recognition of structural elements (Abeid and Arditi 2002; Wu et al. 2010;

Zhu and Brilakis 2010), generation of 3D and 4D as-built models (Brilakis et al. 2011; Brilakis et

al. 2010; Kim et al. 2013), and automated monitoring and visualization of construction progress

(Dimitrov and Golparvar-Fard 2014; Golparvar-fard et al. 2009; Han and Golparvar-Fard 2015;

Kropp et al. 2014; Kropp et al. 2012; Roh et al. 2011; Yang et al. 2015).

Nonetheless, only a few studies (Kropp et al. 2014; Kropp et al. 2012; Roh et al. 2011) investigated the automated visual recognition of indoor objects, in part because existing methods can operate only when a priori information (e.g., as-planned 4D BIMs) exists. Consequently, automated visual recognition

of interior wall elements such as studs, insulation, electrical outlets, and different states of drywall

has not been sufficiently studied. There are numerous challenges associated with the application

of vision-based methods to indoor situations such as frequent changes in viewpoint, occlusion,

highly cluttered scenes, and importantly, extreme lighting conditions and illumination patterns.

These challenges either reduce the accuracy of existing solutions or make them inapplicable for

indoor sites. Unlike open sites, the use of fixed cameras is not technically feasible indoors due to

the obstruction of line of sight as walls are erected (Bohn and Teizer 2010), and the interference

the cameras create for workers as interior finishes progress. Existing vision-based methods,

therefore, require data capture by a person equipped with a camera (Kropp et al. 2014; Kropp et

al. 2012; Roh et al. 2011) and manual camera registration. Due to these drawbacks, cost is one of

the major barriers for practical implementation of cameras at indoor sites (Bohn and Teizer 2010).

In conclusion, the limitations of the state-of-the-art vision-based techniques for indoor

construction sites mainly fall into two categories: 1) lack of image processing methods tailored to

the indoor environment; and, 2) manual data collection processes that make current solutions

costly, semi-automated, and labor-intensive.

3.4.2 Unmanned aerial vehicles and their applications

Unmanned aerial vehicles (UAVs), also referred to as unpiloted aerial vehicles, were formerly

known for their military applications, but their development paved the way for the rapid expansion

of civilian aviation applications such as monitoring agricultural fields (Neale et al. 2011) and

forest fires (Beard et al. 2004; Casbeer et al. 2005), search and rescue (Goodrich et al. 2008), post-

disaster assessment and management (Adams and Friedland 2011), and surveillance (Kontitsis et al. 2004; Quigley et al. 2005). Further improvements in their technical capabilities introduced


the possibility of using high-resolution on-board cameras to capture aerial images for

photogrammetry and 3D mapping purposes (Nex and Remondino 2013).

The quick, safe, and inexpensive access to high-resolution images providing spatial information

about hazardous geographical locations has gradually attracted interest in the use of UAVs in civil

engineering. Consequently, some researchers have contributed to the technical knowledge of

UAVs in performing 3D measurements of road surfaces (Zhang and Elaksher 2012), monitoring

and inspecting buildings (Hallermann and Morgenthal 2012; Roca et al. 2013), bridges (Ellenberg

et al. 2014; Metni and Hamel 2007), and locally linear structures (Rathinam et al. 2005). Others

have focused on identifying the operational requirements of UAVs, such as those used in

transportation (Karan et al. 2014).

Recently, the advantages of UAVs in the construction industry have been investigated to facilitate

safety inspections (Gheisari et al. 2014; Irizarry et al. 2012) and quality control (Wang et al. 2014).

For example, an autonomous UAV was specially designed to survey and obtain 3D maps of

earthwork projects (Siebert and Teizer 2014). In photogrammetry-based 3D models produced

using aerial images, accuracy rates comparable to those of conventional GPS-based methods were

achieved in nearly 60% less time (Siebert and Teizer 2014). UAVs have also been equipped with

RFID tag readers for tracking materials at construction sites (Hubbard et al. 2015). Most studies

have integrated off-the-shelf products, which are cost-effective and robust (Colomina and Molina

2014).

Despite the challenges associated with using UAVs, including wind, obstacles, limited battery life,

area of coverage (Siebert and Teizer 2014), and potential legal liabilities and restrictions, there is

a growing interest in applying UAVs for remote sensing. The fast pace of technological advances

in light UAVs opens up the possibility of fully exploiting UAV technology for automating data

collection and monitoring construction sites. Although UAVs can be customized to meet the needs

of a specific project, cost-effective, off-the-shelf UAVs equipped with high-resolution cameras

and high-precision sensors easily overcome the limitations of fixed cameras. Therefore, this paper

investigates using an off-the-shelf quadcopter, a rotary wing UAV, for collection of images

required for the vision-based indoor progress monitoring methods introduced herein.


3.5 The research context

Our proposed modules for automated visual recognition of indoor project-related objects are part

of a larger study that aims to automate the monitoring of indoor construction site progress using

UAVs. It consists primarily of four stages: inspection planning, automated UAV-based data

collection, automated vision-based detection of state of progress (the main contribution of this

paper), and updating and creating as-built BIMs (Fig. 3.1).

The vision, shown in Fig. 3.1, involves a quadcopter (or any rotary-wing UAV capable of vertical

takeoff and landing (VTOL)) programmed to perform an autonomous flight to inspect specific

locations of an indoor construction site and to capture and store videos. The videos will be

processed by the proposed automated visual recognition modules to detect the components and

progress of indoor partitions, either in real time or after the inspection is completed (Fig. 3.1c).

The detected states of progress will then be used to update BIMs and provide them with semantic

information. Detailed study of the other three stages (Fig. 3.1a, b, d) and their interrelations will be conducted in future works.

Figure 3.1. An overview of the research context

3.6 Automated Visual Recognition of Interior Partitions

The main contribution and focus of this paper is the automated visual recognition of components

of interior partitions: studs, electrical outlets, insulation, and three states of progress for drywall

sheets (installed, plastered, and painted). To provide a rigorous and independent solution, it is

assumed that the proposed method should operate without a priori information regarding the



previously known state of progress or 3D/4D BIMs. In the future, it is expected that an integrated

system would have knowledge of a previous state of progress upon which it could base a decision

of what states it should expect.

These methods automate the detection of project-related objects in images of indoor scenes. The

state of progress can be inferred from reasoning on the presence of objects and states detected by

these modules; however, the study of the integration of the four vision-based modules, necessary

for detection of overall progress, is not within the scope of this paper. Future work needs to address

the way in which the modules should be combined to optimize their overall performance and the inference

of progress.

3.6.1 The stud module

Studs are challenging to detect due to their slender structure, multiple occurrences in images, and

similarity to floor, ceiling, and background in terms of color and appearance. The existing color

and texture-based methods, alone, do not provide reliable results for slender objects (David and

DeMenthon 2005) due to the high level of noise present in the thin image regions representing the

objects. Methods based on keypoint features are also prone to the same limitation since the

distinctive features associated with studs lie mainly on edges that tend to be visually noisy and

cluttered. In our experiments, the application of histogram of oriented gradients (HOG) and color-

based methods also did not show promise in the detection of studs, with accuracy rates below 30%

in the best cases. Although designed for the detection of concrete columns, an existing approach

(Zhu and Brilakis 2010) was tested because of the similarity between columns and studs in terms

of shape and color. Limitations to the method had already been reported regarding the detection of

multiple occurrences of columns in one image. This was confirmed in our tests with studs.

The algorithm proposed herein employs an integrated shape and color-based approach, enabling

rapid and accurate detection of studs. A 5×5 bilateral filter is employed for image smoothing due

to its edge-preserving nature (Fig. 3.2). The input frame is then converted to LAB color space, and

the lightness channel (L) is thresholded using Otsu cluster-based image thresholding (Otsu 1975)

(Fig. 3.3b), thereby avoiding the need for manual adjustment of threshold to different lighting

conditions. The use of L channel ensures the robust performance of the module as it helps

differentiate the studs from their background, exploiting their significantly higher reflectance.


After thresholding the L channel, the resulting binary image is used to extract the lines shaping the

studs. Noise is removed using morphological transformations such as opening and closing

(Dougherty et al. 2003), and the edge map is calculated to reduce the computation time for the next

step. The progressive probabilistic Hough transform (Kiryati et al. 1991; Matas et al. 2000) is then



applied to extract the vertical line segments shaping the studs (Fig. 3.3c). Even though the generalized Hough transform can be employed to extract line segments, it did not show promise in the case of slender, thin, and highly cluttered studs in our experiments. The progressive probabilistic Hough

transform, however, detects subsets of longer lines, reducing computation and resulting in more

but smaller segments that eventually need to be integrated. To replace the large number of line

segments scattered across the image with straight lines that shape the studs, the image is divided

into vertically-shaped regions having widths of a few pixels. In each region, a weighted voting

system assigns each segment a value (vote) relative to its length. The total vote is calculated for

each region using only near-vertical line segments. Additionally, the maximum and minimum

observed pixel heights are calculated for each region (line candidates) to better localize the studs

vertically in the image. Finally, vertical lines are drawn for each region if its votes exceed a pre-

specified value (Fig. 3.3d) relative to the image's height, and bounding boxes are drawn for lines in proximity.

Figure 3.2. The flowchart of the algorithm for detection of studs in images

Figure 3.3. The output of different stages of the stud detection module: (a) the input image; (b) thresholded L channel; (c) line segments generated by the probabilistic Hough transform; (d) output lines
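For illustration, the stud detection steps described above can be sketched using OpenCV and NumPy. This is a minimal sketch, not the exact implementation: the function name is hypothetical, and the parameter values follow the ranges recommended in Section 3.7.2.1.

```python
import cv2
import numpy as np

def detect_stud_lines(frame, n_regions=100, vote_threshold=150):
    """A sketch of the two-step stud detection: Otsu-thresholded L channel,
    then probabilistic Hough segments aggregated by vertical-region voting."""
    # Step 1: edge-preserving smoothing and lightness-based segmentation
    smoothed = cv2.bilateralFilter(frame, 5, 75, 75)
    l_channel = cv2.cvtColor(smoothed, cv2.COLOR_BGR2LAB)[:, :, 0]
    _, binary = cv2.threshold(l_channel, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((3, 3), np.uint8)
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    edges = cv2.Canny(binary, 30, 150)

    # Step 2: line segments and weighted voting over vertical regions
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, 20,
                               minLineLength=20, maxLineGap=5)
    h, w = edges.shape
    votes = np.zeros(n_regions)
    for x1, y1, x2, y2 in (segments.reshape(-1, 4) if segments is not None else []):
        if abs(x2 - x1) < 0.1 * (abs(y2 - y1) + 1):      # keep near-vertical segments
            region = min(int((x1 + x2) / 2 * n_regions / w), n_regions - 1)
            votes[region] += np.hypot(x2 - x1, y2 - y1)  # vote relative to length
    # regions whose total vote exceeds the threshold are reported as stud lines
    return [r * w // n_regions for r in range(n_regions) if votes[r] > vote_threshold]
```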

The algorithm introduced herein could also be tailored to detect horizontal studs using near-

horizontal line segments. At this time, however, it has only been designed and tested on vertical

studs. Using a database of image patches of materials, the area inside the bounding boxes could be

identified as either wood or steel. The module is not currently designed to identify different types

and shapes of studs.

3.6.2 The insulation module

Many types of insulation are available for interior partitions including batt, blown-in, and sprayed

foam. Our proposed module for automated detection of insulation is demonstrated for fiberglass

batt insulation (batt and roll) as it is the most widely available. However, with some minor

modifications, it can be adjusted to other types. The algorithm is mainly based on k-means

clustering, an unsupervised machine learning method used for image segmentation (Fig. 3.4).

Figure 3.4. The algorithm for visual detection of insulation in images


Given a set of n observations $(x_1, x_2, \ldots, x_n)$, the aim of the k-means clustering algorithm is to categorize all $x_i$ into $k$ clusters through an iterative process that reallocates points to the closest cluster in each cycle so as to minimize the objective function (Eq. (1)):

$$J = \sum_{j=1}^{k} \sum_{i=1}^{n} \left\lVert x_i^{(j)} - c_j \right\rVert^2 \qquad (1)$$

where $x_i^{(j)}$ denotes the intensities of a pixel (in the A and B color channels of LAB color space), $c_j$ is the center of the $j$th cluster, and $\lVert x_i^{(j)} - c_j \rVert^2$ is a distance measure between the pixel intensities and those of the cluster center (Forsyth and Ponce 2003). Finally, pixels are classified into $k$ clusters, providing a means of separating the pixels belonging to insulation regions from the background by segmenting the image.

The algorithm requires the input frame (Fig. 3.5a) along with one sample image patch that contains

only the insulation material; here, a 200×500-pixel sample patch randomly extracted from an image was used (Fig. 3.5b). This patch is generated once at the beginning of the process and is

used for analyzing all the input frames throughout the monitoring period. The input frame and

sample patch are smoothed using a 5×5 Gaussian filter (Fig. 3.4) and converted to LAB color space

due to its superior performance in differentiating small variations of color (Schwarz et al. 1987),

which is common with indoor objects. Next, two-dimensional vectors, containing the pixel

intensities in the A and B color channels as features, are formed for the input frame and the sample patch.

These two 2-D feature vectors are then concatenated to form a single feature vector, which is

passed to the k-means clustering algorithm. This single feature vector contains an insulation



feature vector as known, and an input image feature vector as unknown. The known part helps

analyze the output of the k-means clustering algorithm to automatically detect insulation in images.

The resulting output label vector is the same size as the input image vector and contains labels for

each pixel indicating the cluster to which it belongs. Figs. 3.5c and 3.5d show the sample patch and

input image, color-coded according to their clusters. The k-means clustering algorithm labels the

clusters randomly, so the part of the label vector corresponding to the sample patch should provide

the label representative of insulation. To achieve this, the most frequent label over the patch label

vector is selected (color-coded in Fig. 3.5c). A binary image the size of the input frame is created

in which the foreground comprises the pixels of the input frame that have the same label as the

insulation (Fig. 3.5e). The mask is further processed using morphological transformation to

remove foreground and background noise. Finally, using bitwise operators, the mask is

superimposed on the input image to visualize the extracted insulation regions (Fig. 3.5f).
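A minimal sketch of this clustering step, assuming OpenCV's k-means implementation; the function name is hypothetical, while k = 3, the A and 0.5×B features, 10 iterations, and 5 initial labelings are taken from Sections 3.6.2 and 3.7.2.2.

```python
import cv2
import numpy as np

def detect_insulation(frame, sample_patch, k=3):
    """A sketch of insulation detection: k-means clustering on the A and B
    channels of LAB color space, seeded with a known insulation-only patch."""
    def ab_features(image):
        smoothed = cv2.GaussianBlur(image, (5, 5), 0)
        lab = cv2.cvtColor(smoothed, cv2.COLOR_BGR2LAB)
        feats = lab[:, :, 1:3].reshape(-1, 2).astype(np.float32)
        feats[:, 1] *= 0.5          # weight the B channel (Section 3.7.2.2)
        return feats

    patch_feats = ab_features(sample_patch)    # known: insulation only
    frame_feats = ab_features(frame)           # unknown: the full scene
    data = np.vstack([patch_feats, frame_feats])

    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, _ = cv2.kmeans(data, k, None, criteria, 5,
                              cv2.KMEANS_RANDOM_CENTERS)

    # the most frequent label over the sample patch identifies insulation
    insulation_label = np.bincount(labels[:len(patch_feats)].ravel()).argmax()

    # binary mask of frame pixels carrying the insulation label
    frame_labels = labels[len(patch_feats):].reshape(frame.shape[:2])
    mask = np.uint8(frame_labels == insulation_label) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return cv2.bitwise_and(frame, frame, mask=mask)
```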


The choice of the number of color clusters (k) is extremely important as it directly affects the color

segmentation of the image and the allocation of pixels. High values of k result in the allocation of

pixels belonging to the same material to several clusters. Low values place several materials along with

insulation in the same cluster. Based on numerous experiments conducted on complex indoor

images, k=3 was selected. The use of LAB color space (or CIELAB) is also vital to achieving high

accuracy rates as the experiments conducted on indoor objects demonstrated that it can provide a

more robust means of working with small color differences than Hue-Saturation-Value (HSV) or

Red-Green-Blue (RGB) color spaces. This is mainly because LAB-based systems are typically

more accurate than HSV when working on Chroma (Schwarz et al. 1987).

Figure 3.5. The detection of insulation in images: (a) the input image; (b) the sample patch used in this example; (c) the color-coded sample patch according to the results of k-means clustering; (d) the color-coded input image; (e) the binary mask generated after label matching; (f) the extracted insulation blankets


3.6.3 The drywall module

The main states of progress associated with drywall (gypsum board) are installed, plastered, and

painted. The automated detection of these states is challenging because the uniformly colored

surfaces of drywall sheets do not provide distinctive patterns, and plastered regions usually have

the same appearance as the drywall in terms of color.

The detection of the state of progress for a drywall sheet starts with the second state (plastered), exploiting the inherent reflectance differences between materials (Fig. 3.6). However, because of the non-uniform illumination patterns observed indoors, these differences are usually apparent only when materials are compared locally. To amplify them, the input frames are first converted to grayscale, and contrast

limited adaptive histogram equalization (CLAHE) (Zuiderveld 1994) is applied to the input frame.

The next step aims to extract the plastered regions of the wall as a binary image using image

thresholding. However, it is impossible to use one threshold to extract most plastered regions in

the input image (Fig. 3.7b) as each image is captured under a different illumination condition,

thereby challenging the selection of a global threshold. To overcome this problem, a series of

equally spaced thresholds are selected, and the image is correspondingly thresholded multiple

times. The resulting binary images are reassembled by giving a weight to pixels each time they

are present in one of the binary images. The final image is created by filtering out the pixels that

were not present in at least 70% of the binary images (Fig. 3.7c). This process helps overcome the

non-uniform lighting conditions and eliminate the need for manual selection of a global threshold.

Figure 3.6. The flowchart of the algorithm for detection of three states of progress for drywall sheets

The final binary image (Fig. 3.7c) is divided into a series of vertical bins, and a histogram is

developed with the value of each bin representing the number of non-zero (white) pixels present



in it. A histogram with an oscillating pattern representing plastered areas is significantly different

from the more random histogram of an installed or painted wall. The histogram is normalized and

passed to a support vector machine (SVM) classifier, which was trained with 120 positive and 500

negative sample images.
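As an illustration, the multi-threshold voting and histogram feature can be sketched as follows. This is a minimal sketch under stated assumptions: the threshold range centered near 125, the 70% vote ratio, and the 20-40 bin recommendation come from the text (Section 3.7.2.3), while the function name and CLAHE settings are hypothetical. The resulting normalized histogram would be passed to the trained SVM.

```python
import cv2
import numpy as np

def plaster_feature_histogram(frame, n_thresholds=10, vote_ratio=0.7, n_bins=30):
    """A sketch of the plastered-drywall feature: multi-threshold voting
    followed by a binned histogram of white pixels (20-40 bins recommended)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    gray = clahe.apply(gray)

    # threshold the image at several equally spaced levels centered near 125
    votes = np.zeros(gray.shape, np.float32)
    for t in np.linspace(80, 170, n_thresholds):
        _, binary = cv2.threshold(gray, t, 1, cv2.THRESH_BINARY)
        votes += binary
    # keep pixels present in at least 70% of the binary images
    final = np.uint8(votes >= vote_ratio * n_thresholds)

    # histogram of white pixels over vertical bins; plastered joints produce
    # an oscillating pattern, unlike installed or painted walls
    h, w = final.shape
    bin_width = w // n_bins
    hist = np.array([final[:, i * bin_width:(i + 1) * bin_width].sum()
                     for i in range(n_bins)], np.float32)
    return hist / (hist.max() + 1e-6)   # normalized feature vector for the SVM
```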

If the wall is not classified as being in the plastered state (Fig. 3.8a), the distinctive edge features

of an installed drywall sheet are extracted to distinguish an installed drywall from a uniformly

colored painted wall. The image is convolved with a Sobel kernel to extract the vertical edges (Fig.

3.8b). The lines are detected using the progressive probabilistic Hough transform on the edge map of the convolved image (Fig. 3.8c), following the same procedure explained in the stud detection module.

Figure 3.7. Extraction of plastered regions: (a) input frames; (b) two examples of single thresholding; (c) the results of the proposed method

Figure 3.8. The identification of installed drywall sheets: (a) input images; (b) the vertical edges extracted; (c) the detected lines

3.6.4 Electrical outlet module

Small sizes, different appearances, and numerous occurrences of electrical outlets in images make

their automated visual detection challenging. The complex and highly cluttered nature of indoor

scenes compounds this difficulty. The method proposed herein exploits the amount of reflected

light from the surfaces of different materials. For example, pixels belonging to electrical boxes cut

in the drywall tend to have lower intensities in lightness channel of LAB color space compared to

the materials in their vicinity. However, after installing the sockets, the pixels will have higher

intensities relative to their proximity pixels because of the highly reflective surfaces of sockets.



To detect the electrical boxes cut in drywall, the input frame (Fig. 3.10a) is smoothed using a 5×5

Gaussian filter, converted to LAB color space, and its L channel is extracted. To enhance the local

contrast between the electrical boxes and their vicinity, CLAHE is used. Compared to global

histogram equalization, CLAHE better redistributes the lightness using several histograms

corresponding to different image regions, while limiting the enhancement of contrast to prevent

an increase in noise. To eliminate the need for manual thresholding, the normalized cumulative

distribution function (CDF) histogram is calculated and the threshold is calculated so that its

normalized CDF value is between 0.10 and 0.14. This range has been obtained by examining a

large image database of indoor images under various illumination conditions. Because of the use

of normalized CDF, the thresholds are automatically adjusted for each lighting condition and

image resolution, hence do not fail in potentially extreme lighting scenarios. The image is

correspondingly thresholded using inverted binary thresholding, and the noise in the background

and foreground is reduced using morphological transformations (Fig. 3.10b). All the image

contours are extracted, resulting in many detected instances of blobs either in the form of an

opening (in the foreground) or white blobs (in the background). To extract the electrical boxes and

eliminate the false positives, three constraints are imposed so that each eliminates a portion of the

false positives (Fig. 3.10c). The factors considered for removing the false positives are the area of

the blobs, their aspect ratio, and their solidity, which is defined as the ratio of the contour's area to that of its

bounding rectangle. Finally, a bitwise AND operator is used to eliminate the black openings that

have been misclassified as boxes (Fig. 3.10d), using the image containing all the contours (Fig.

3.10b) and filtered image (Fig. 3.10c).

Figure 3.9. The algorithm for detection of electrical outlets in images


The only difference in the detection of electrical sockets (Fig. 3.10e-h), the other sub-task of

electrical outlet installation, is in the thresholding stage in which binary thresholding is employed,

and the threshold is calculated so that the corresponding CDF value falls between 0.75 and 0.85.

The use of both binary and inverted binary thresholding (Fig. 3.9) helps identify the sub-tasks of

electrical outlet installation, providing an understanding of the number of installed sockets and boxes in a partition.

Figure 3.10. The detection of electrical outlets in images: (a) the input image containing four electrical boxes; (b) the result of thresholding; (c) filtered blobs before using the bitwise operator; (d) the detected boxes; (e) the input image containing four electrical sockets; (f) the result of thresholding; (g) the filtered results; (h) the detected sockets
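A minimal sketch of the CDF-driven thresholding and blob filtering just described, assuming OpenCV; the CDF ranges come from the text, while the blob-filter constraint values and the function name are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_outlet_blobs(frame, cdf_range=(0.10, 0.14), invert=True):
    """A sketch of electrical box detection: CLAHE on the L channel,
    a threshold picked from the normalized CDF, then blob filtering.
    For sockets, cdf_range would be (0.75, 0.85) with invert=False."""
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)
    l_channel = cv2.cvtColor(blurred, cv2.COLOR_BGR2LAB)[:, :, 0]
    l_channel = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l_channel)

    # pick the threshold whose normalized CDF value falls inside the range
    hist = cv2.calcHist([l_channel], [0], None, [256], [0, 256]).ravel()
    cdf = np.cumsum(hist) / hist.sum()
    threshold = int(np.searchsorted(cdf, np.mean(cdf_range)))

    mode = cv2.THRESH_BINARY_INV if invert else cv2.THRESH_BINARY
    _, binary = cv2.threshold(l_channel, threshold, 255, mode)
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

    # filter contours on area, aspect ratio, and solidity to drop false positives
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        area = cv2.contourArea(c)
        solidity = area / float(w * h)   # contour area over bounding rectangle
        if 200 < area and 0.4 < w / float(h) < 2.5 and solidity > 0.6:
            boxes.append((x, y, w, h))
    return boxes
```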

3.7 Validation and results

To validate our proposed approaches, three databases were created containing digital images and

videos of indoor construction sites. To consider various lighting conditions and illumination

patterns for different states of progress, an indoor construction site was repeatedly visited over a

three-month span to acquire the input images and videos. During these visits, the images were

captured using both a smartphone and a quadcopter (a four-rotor UAV).

The quadcopter, a Bebop designed by Parrot™ (Fig. 3.11), was chosen for several key features.

First, the high stability of its on-board camera is a crucial factor for ensuring the robust



performance of the vision-based methods. The high-quality video recordings are accessible in real

time, providing valuable support for project managers. Its on-board computers and sensors are almost eight times more powerful than those of its predecessor, the Parrot AR.Drone, which has been

extensively used in research (Krajnik et al. 2011). Finally, its affordable cost and available

software development kit provide essential support for designing autonomous navigation systems. If

the proposed vision-based methods can operate with an off-the-shelf device, they are very likely

to function on specially designed quadcopters (or other rotary wing UAVs) tailored to indoor sites.

The specifications of this device are summarized in Table 3.1.

Table 3.1. The specifications of the Bebop quadcopter

  CPU                 Dual-core ARM Cortex-A9, with quad-core GPU
  Memory              8 GB (internal) and micro USB (extended)
  Operating system    Linux
  Wi-Fi               802.11a/b/g/n/ac
  Wi-Fi antennas      MIMO dual-band (2.4 and 5 GHz)
  Camera              CMOS, 14 megapixel
  Fish-eye lens       180°, 1/2.2"
  Video definition    1920×1080p (30 fps)
  Video encoding      H264
  Photo definition    3800×3188 pixels
  Photo file format   JPEG, RAW, DNG
  Battery             Lithium polymer, 1200 mAh (11 minutes of flight time)
  Geo-location        GNSS (GPS + GLONASS)
  Dimensions          28×32×3.6 cm (without the hull); 33×38×3.6 cm (with the hull)
  Weight              400 g (without the hull); 420 g (with the hull)
  Sensors             3-axis magnetometer; 3-axis gyroscope; 3-axis accelerometer;
                      optical flow sensor (vertical stabilization); ultrasound sensor;
                      pressure sensor

Figure 3.11. The Bebop quadcopter used for this study


Over several visits, the quadcopter was flown ten times, providing a total of one hour of recorded

video. Since the main focus of this paper is the vision-based methods and not the study of the

flight, the quadcopter was controlled manually using a smartphone connected to the device’s self-

generated Wi-Fi network. The detailed study of the quadcopter’s autonomous flight will be

conducted in future research.

The first database was created by extracting frames from the videos captured by the quadcopter;

the second database comprises images captured using a smartphone at the same location. To

provide a more valid comparison, images taken by the smartphone were as close as possible to the same viewpoint and time as the images taken by the quadcopter. The third database was created

by collecting images from publicly available online resources to test the robustness of the vision-

based methods for different indoor sites. Table 3.2 shows the distribution of images in terms of



state of progress for each database. Examples of the UAV- and smartphone-captured images are

included in Fig. 3.12. Due to the restrictions on image copyright, the samples of the third database

could not be provided.

Table 3.2. The distribution of images in each database

  Category              First database (UAV)   Second database (Smartphone)   Third database (online)
  Resolution (pixels)   1920×1080              3264×2448                      640×480
  Studs                 330                    230                            60
  Insulation            130                    110                            60
  Drywall               450                    200                            60
  Electrical outlets    300                    70                             60

Figure 3.12. Examples of images in the UAV (first row) and smartphone (second row) databases; the images are scaled down, and their aspect ratios and relative sizes are not preserved

3.7.1 Validation metrics and results

All the methods were implemented using OpenCV 2.4.10 (Intel® Open Source Computer Vision

Library) and Python 2.7, and were tested on a 32-bit Windows platform with a 2.53 GHz Core i5 CPU

and 4 GB of memory. Performance was evaluated using three metrics: precision, recall, and run

time. The first two are calculated as:



$$\text{Precision} = \frac{TP}{TP + FP} \qquad (2)$$

$$\text{Recall} = \frac{TP}{TP + FN} \qquad (3)$$

where TP (true positives) and FP (false positives) are the numbers of objects correctly and

incorrectly predicted, respectively, as the object of interest. Similarly, TN (true negatives) and FN

(false negatives) are the number of objects correctly and incorrectly recognized as background.

For example, for stud and electrical outlet detection, a TP is a correctly recognized instance of the

object (a stud or an outlet). In insulation detection, a TP is a correctly classified pixel, and in the

case of drywall, a TP is a correctly classified state. After testing, each output image was examined,

and detected objects (for studs and electrical outlets), classified pixels (for insulation), and detected

state of progress (for drywall) were compared to the ground truth, which were manually extracted

from the images. Table 3.3 shows the promising results of testing the modules on the three

databases.

Table 3.3. The results of testing the proposed algorithms on the three image databases

                        First database (UAV)       Second database (Smartphone)   Third database (online)
                        Precision (%)  Recall (%)  Precision (%)  Recall (%)      Precision (%)  Recall (%)
  Studs                 91.08          87.47       90.04          86.12           92.06          83.92
  Insulation            91.10          94.47       89.26          91.35           88.34          80.50
  Drywall               89.61          91.24       77.67          75.53           72.25          70.36
  Electrical outlets    86.32          93.43       86.64          89.06           80.21          87.85

For the multi-class classification of the drywall module, a 3×3 confusion matrix with three classes

or states (installed, plastered, and painted) was used. The precision and recall rates were calculated

for each class and then averaged over the three classes, weighted by the number of input

images in each class. For example, when calculating the metrics for the class entitled “painted”, a

TN is an input image that is either the installed or plastered state and has been classified as “not

painted”. Similarly, for the other two classes, the classification outcome could be “installed” or

“not installed” and “plastered” or “not plastered”.


To provide a more informative measure for this multi-class module, the overall accuracy was

calculated to indicate the percentage of images correctly classified into their true class. The overall

accuracy rates were 89.91%, 76.23%, and 71.53% for the UAV, smartphone, and online databases, respectively.

To assess the performance in terms of providing a real-time solution, the run times were measured

and averaged over the databases (Table 3.4). To provide a real-time means of progress monitoring,

the vision-based methods are not necessarily required to operate on every frame or every second.

For instance, to fully detect the state of progress for a room, the algorithms need to be applied to

only a few dozen frames during the quadcopter’s flight. In fact, during the inspection, the

quadcopter needs to occasionally hover relatively still to ensure high stability and quality of

images, leaving a sufficiently wide time margin for the detection system to operate. Nevertheless,

image processing can also be performed after the UAV returns to its docking station in a matter of

minutes, providing a quasi-real-time solution.

Table 3.4. The average run times obtained for the proposed methods, for each category and image resolution

                        Average run time (s), by image resolution (pixels)
                        640×480    800×600    1920×1080    3264×2448
  Studs                 0.22       0.26       0.44         1.12
  Insulation            0.41       0.58       1.82         4.11
  Drywall               0.23       0.36       0.67         0.98
  Electrical outlets    0.11       0.13       0.38         0.96

Finally, the trade-off between the precision, recall, and run time is illustrated by plotting them

against different image sizes for each detection module using the UAV database (Fig. 3.13). This

comparison is noteworthy as occasionally the input frames can be resized to lower resolution,

significantly reducing the run time while preserving the same precision and recall rates.

Understanding this trade-off also helps evaluate the feasibility of using the modules on commercial

or custom-designed UAVs. As shown in Fig. 3.13, the modules are capable of achieving

nearly the same level of performance (in some cases, even higher) when tested on lower image


sizes, which ensures their robustness when employed on other off-the-shelf or custom-designed

UAVs.

This analysis also helps justify the slightly better performance of the modules on the UAV

database, at first unexpected because UAV images are subject to motion blur. According to Fig.

3.13, higher image resolution does not always result in higher performance. Here, the significantly larger image size of the smartphone database (3264×2448 pixels) compared to the UAV database (1920×1080 pixels) seems to have negatively affected the modules' performance,

compensating for the blur effect (Table 3.3). For example, in the insulation module, the



performance rates slightly drop due to significantly higher cluster sizes. This problem can be

tackled by increasing the number of initial labelings and iterations for high-resolution images. Here,

the testing conditions were kept identical for testing all the databases. Increased image size can

also directly affect the segmentation of images in thresholding stages, increasing the foreground

details requiring processing.

Figure 3.13. Precision, recall, and run time plotted against different image resolutions: (a) studs; (b) insulation; (c) drywall; (d) electrical outlets

3.7.2 Selection of threshold and other input parameters

For easy implementation and higher performance, it is necessary to examine and recommend

optimum threshold parameters for each of the modules. It is important to note that the parameters

do not need to be adjusted for each inspection and are set only once at the start of the monitoring

process. The module parameters do not require any human intervention during their operation

since they are automatically calculated and adjusted for the images. The results reported in Table

3.3 were achieved using these optimum parameters.

3.7.2.1 Stud module

After testing the images in the three databases, the best result was achieved by dividing the input

frame into 90-110 vertically-shaped regions. Due to the use of image segmentation on the L

channel, the accuracy rates appeared insensitive to the selection of parameters in the Canny edge

detector. Here, 30 and 150 were chosen as the hysteresis threshold parameters. As expected, the

pre-specified weighted vote threshold needed to be correlated with the parameters in the

progressive probabilistic Hough transform, as an increase in the number of generated line segments

increases the weighted vote threshold. Accordingly, the suggested parameters for the probabilistic

Hough transform are 20 (minimum required vote), 20 (minimum number of line segment), and 5

(maximum allowed gap between segments); the weighted vote threshold can vary between 100

and 175 for a 1920×1080-pixel image. Fig. 3.14 shows the results of the stud detection module.

Figure 3.14. Two examples of the detected studs in images: (a) the first example, eleven correctly detected, one FN, and one FP; (b) the second example, all studs correctly detected


3.7.2.2 Insulation module

After testing a variety of configurations for the k-means algorithm, the recommended features are

A and 0.5×B, and its recommended parameters are 10 for the number of iterations for each initial

labeling and 5 for the number of initial labelings. The tests conducted using sample image patches

with sizes ranging from 100×200 pixels to 300×500 pixels extracted from images captured under

various lighting conditions suggest that the precision and recall rates are not significantly

dependent on the size of the patch as the differences in performance were negligible. Fig. 3.15

shows examples of detected insulation batts in images.

Figure 3.15. Two examples of detected insulation blankets in images

3.7.2.3 Drywall module

Factors affecting the performance of this module are primarily related to the SVM classifier and

the integration of multiple binary images into one. The SVM classifier has a radial basis function



(RBF) kernel type, and its parameters were optimized using automatic training. The integration of

binary images is crucial to the extraction of the distinctive pattern of a plastered drywall sheet.

Two parameters that affect its performance are the range of thresholds used for the production of

the binary images from grayscale images, and the number of bins used for the histogram. The

number of thresholds did not significantly affect the performance of the module as long as the

range of thresholds was not skewed to the extremes of 0 or 255; i.e., keeping the selection of threshold values approximately centered around 125 performs well. According to our experiments,

the number of bins in the histogram (feature vector) greatly affects the performance because using

very few or very many bins does not form the repetitive pattern of a plastered drywall sheet. Using

between 20 and 40 bins performed best.

3.7.2.4 Electrical outlet module

The performance of this module is highly dependent on the selection of the threshold, a challenging

task due to varying illumination patterns and lighting conditions. The use of a normalized CDF

histogram for calculating the optimal threshold provides a robust solution, thereby automating the

process. After examining the image databases, it was found that the optimal values for thresholds

should be chosen so that their corresponding normalized CDF values range from 0.1 to 0.14 and

0.75 to 0.85 for electrical boxes and sockets, respectively. Examples of detected electrical outlets are illustrated in Fig. 3.16.

Figure 3.16. Examples of the electrical outlets detected: (a) the first example, six correctly detected and one FN; (b) the second example, all correctly detected

3.7.3 Validation of vision-based methods used on UAVs

To better evaluate the videos captured by the quadcopter’s on-board camera, a more

comprehensive experiment was conducted to examine some of the factors affecting module

performance, including the quality and stability of video frames and the level of noise present.



In contrast to fixed cameras, the images captured by the quadcopter’s on-board cameras are taken

at different velocities, and are therefore subject to motion blur. To examine this effect on image

quality and module accuracy, the flight statistics from a 7-minute flight video were accessed

through the graphical user interface of the quadcopter (Fig. 3.17). Four 4-second videos were

manually extracted from the 7-minute video; one for each detection module. Next, the videos were

broken down into frames, providing 120 frames for each module. The images were processed by

their respective modules, and the precision and recall rates were plotted against the velocity at

which the frame was captured (Fig. 3.18).

Figure 3.18. The variations of precision and recall against velocity: (a) studs; (b) insulation; (c) drywall; (d) electrical outlets

As the results suggest, in-flight speeds below 1-1.5 m/s do not appear to affect module

performance; i.e., the UAV does not need to be fully stationary when inspecting locations of interest

to achieve high detection accuracy. This is important because the algorithms can be applied to any

indoor flight, without imposing constraints on the flight and the process of capturing images. To

optimize the use of battery life, the quadcopter can travel at higher velocities when not capturing

images and slow down for image capture.

The 4-second videos were chosen to contain the worst conditions: high-velocity angular turns and

changes between low and high flight velocity. Closer examination revealed that most of the FNs

and FPs were on images with rotational motion blur, a factor that can easily be controlled by

limiting the angular velocity of the quadcopter. The performance of the modules on the UAV

database did not suffer from rotational motion blur to the same degree (Table 3.3) because the

UAV database contains images extracted every few seconds to increase diversity and provide a

database that better represents the duration of flight and variety of locations visited. Hence, normal

use of the images and modules is less subject to extreme cases of motion blur.


Figure 3.17. Two screenshots of flight statistics available through the device's graphical user interface: (a) the whole flight; (b) a portion of the flight, in more detail


In the last two detection modules (drywall and electrical outlets), the performance is also affected

by the UAV's velocity and drastic changes in viewpoint. Especially in the electrical outlet module,

high angular velocities can drastically reduce the performance since motion blur directly affects

the formation of the binary image from which the relatively small electrical outlets are extracted.

Again, this can be mitigated by limiting angular turn velocities.
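If in-flight screening were desired, one simple heuristic, assumed here rather than drawn from the proposed modules, is to discard frames captured above the observed speed range or whose variance of the Laplacian (a common sharpness proxy available in OpenCV) falls below a cutoff; both thresholds below are illustrative assumptions.

    # Sketch of a frame-screening heuristic (not part of the proposed modules):
    # skip frames captured too fast or visibly motion-blurred.
    import cv2

    MAX_SPEED = 1.5        # m/s, consistent with the observed 1-1.5 m/s range
    MIN_SHARPNESS = 100.0  # variance-of-Laplacian cutoff (assumed value)

    def frame_is_usable(frame, speed_mps):
        """Return True if the frame is worth passing to a detection module."""
        if speed_mps > MAX_SPEED:
            return False
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # low variance => blur
        return sharpness >= MIN_SHARPNESS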

Finally, image exposure was studied under the challenging lighting conditions typical of indoor sites. The combination of ISO number, shutter speed, and aperture size, the three factors known as the exposure triangle, was examined for the quadcopter's on-board camera, as together they control the exposure and hence the formation and quality of images. A high ISO number indicates a higher sensitivity to incoming light, which can be valuable for the low-light scenes common indoors; however, the accompanying increase in noise may reduce the accuracy of vision-based methods. In our tests, high ISO speeds and short exposure times introduced noticeable levels of random noise. Overall, however, the noise was not severe enough to jeopardize the accuracy of the modules, indicating robust algorithm performance when using off-the-shelf UAVs and cameras.
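For reference, these trade-offs can be summarized by the standard exposure-value relation from photography (textbook material, not a formula introduced in this work), with f-number $N$, shutter time $t$ in seconds, and ISO speed $S$:

    \mathrm{EV} = \log_2\frac{N^2}{t}, \qquad \mathrm{EV}_S = \mathrm{EV}_{100} + \log_2\frac{S}{100}

Raising $S$ or lengthening $t$ brightens the image at the cost of sensor noise or motion blur, respectively, which is precisely the trade-off observed in these tests.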

3.8 Conclusions and Future Work

Existing solutions for indoor progress monitoring remain limited in both capability and efficiency. This paper, as part of a larger research study on UAV-based indoor progress monitoring (InPRO), proposed a series of automated vision-based algorithms for the detection of indoor project-related objects, namely studs, insulation, electrical outlets, and three states of drywall. The proposed modules were validated using three image databases of indoor construction sites: one captured by a quadcopter, one captured by a smartphone, and one collected from publicly available sources on the internet. High precision and recall rates for studs (91%, 87%), insulation (91%, 94%), drywall (89%, 91%), and electrical outlets (86%, 93%), together with real-time performance and the ability to operate without a priori information, indicate promising performance. More detailed study of these modules on a quadcopter demonstrated their potential for adoption in an automated UAV-based data collection context. The low computational cost of these algorithms enables construction managers to monitor the state of progress of indoor sites in real time (Table 3.4).

Furthermore, a distinguishing characteristic of this work is that it operates without a priori information, owing to its low dependence on learning. As a result, the introduced techniques act robustly and do not require comprehensive image libraries for training.

Finally, the use of an off-the-shelf quadcopter in an indoor construction environment demonstrates

the feasibility of using a UAV’s on-board camera for vision-based methods, resulting in

performance comparable to that of high-resolution smartphones or fixed cameras. The results of experiments on different image sizes also suggest that the methods can be applied to images captured by a variety of off-the-shelf UAVs while preserving the same levels of precision, recall, and real-time performance (Fig. 3.13).

Considering the dynamic nature of indoor sites, it is suggested that the UAV-based monitoring

system be used either after hours or during work breaks to minimize obstructions in images and

detrimental impacts on the work or the workers. Those detrimental impacts may include safety


hazards resulting from UAV interactions with the workers, UAV-caused worker distraction, and

interruptions to work flow. While the application of computer vision techniques to the UAV's camera feed has the potential to fully automate the monitoring process, from a safety perspective it is essential to retain a human control component in the system to oversee path planning (Fig. 3.1a), autonomous flight (Fig. 3.1b), and UAV deployment.

3.8.1 Future Work

Future work is proposed in three main areas: developing visual recognition methods for the detection of other project-related objects, such as metallic electrical boxes (Fig. 3.19) and MEP components; integrating the detected state of progress with BIMs; and investigating reliable, robust, and accurate autonomous flight systems tailored to indoor construction sites.

Autonomous flight and path planning. Some of the open research challenges relate to the design of accurate, safe, and robust autonomous flight systems tailored to indoor construction sites. This is important because indoor sites undergo numerous daily changes in layout and in the location of temporary resources, which dramatically affect the path planning process as pathways become newly blocked or opened. Accurate path planning is vital for the robust performance of UAVs, the optimal use of limited battery life, and automating UAV-based progress monitoring systems to a significant extent.

Figure 3.19. Examples of metallic electrical boxes in challenging indoor scenes


Another stream of future research on the use of UAVs needs to address their potential liabilities, the restrictions imposed on their use, safety concerns, and the impacts of their interaction with humans at construction sites.

Image processing techniques. Although this paper investigated the automated detection of interior partition components, many other project-related objects remain to be studied, such as mechanical, electrical, and plumbing (MEP) components. Furthermore, the proposed vision-based methods have limitations that need to be overcome in future work, including the detection of metallic electrical boxes, the identification of various types of insulation and studs, and the identification of multiple stud and insulation layers.

An important consideration is the level of accuracy required for a fully automated detection system. For example, it may suffice to approximate which states are present and how much progress has been made, rather than the exact number of studs or the area of wall covered by drywall. On the other hand, the detection of details may be essential for quality control. The algorithms designed herein do not identify details such as stud type, insulation thickness, or type of wall board; future research should study the visual detection of these details to facilitate conformance checking and control.

Although the proposed algorithms are tailored for complex indoor scenes, future research should investigate the use of multiple images for more accurate assessment of progress. For example, the performance of the electrical outlet module degrades if the objects are obstructed; however, integrating several images taken from different perspectives that avoid the obstruction could yield more accurate results. The other three modules can detect the overall state of progress of components with acceptable accuracy as long as the partition is not completely obstructed; the images in all three databases contain partitions partially obstructed by equipment or plants (Fig. 3.12).
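A minimal sketch of such multi-view integration is given below, assuming each image's detections have already been projected into a shared wall coordinate frame (for example, via known camera poses); the function and the 10 cm merge radius are illustrative assumptions.

    # Minimal sketch of multi-view integration for obstructed objects, assuming
    # detections are (x, y) positions in metres in a shared wall coordinate
    # frame. The 0.10 m merge radius is an illustrative assumption.
    from math import hypot

    def merge_detections(views, radius=0.10):
        """Union detections across views, merging near-duplicate positions."""
        merged = []
        for detections in views:
            for x, y in detections:
                if not any(hypot(x - mx, y - my) < radius for mx, my in merged):
                    merged.append((x, y))
        return merged

    # Example: an outlet hidden in view 1 but visible in view 2 is still
    # counted, and the outlet seen in both views is counted only once.
    views = [[(0.50, 0.30)], [(0.51, 0.31), (2.00, 0.30)]]
    print(len(merge_detections(views)))  # -> 2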

Finally, future work needs to study the integration of the four introduced vision-based modules to automate the inference of overall progress. Currently, the modules act as a set of tools for the detection of project-related objects; their interrelation and integration have not yet been studied. Investigating the order in which the modules run should optimize their overall run time, performance, and robustness to challenging indoor scenes.
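One possible integration strategy, sketched below under the assumption that each module exposes a confidence score, is to run the detectors in construction order and report the most advanced state found; the interfaces, ordering, and threshold are hypothetical rather than part of the proposed modules.

    # Sketch of one possible module ordering (an assumption, not the thesis's
    # implementation): run detectors in construction order and keep the most
    # advanced state seen, since a partition progresses monotonically. The
    # electrical outlet module would run independently, as outlets coexist
    # with several of these states.
    STATES = ["stud", "insulation", "drywall_installed",
              "drywall_plastered", "drywall_painted"]

    def infer_partition_state(image, detectors, threshold=0.5):
        """detectors: dict mapping state name -> callable(image) -> confidence."""
        current = None
        for name in STATES:
            if detectors[name](image) >= threshold:
                current = name  # later states supersede earlier ones
        return current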

Page 62: InPRO: Automated Indoor Construction Progress Monitoring ... · To achieve automated progress monitoring, as-is conditions at sites should be captured and reflected into building

52

Integration with BIM. To fully exploit, document, and communicate the detected state of progress, it is necessary to integrate the results with BIMs. In addition, 4D BIMs can provide UAV-based indoor progress monitoring systems with as-planned expected progress and as-is conditions. This is valuable because the computation time of vision-based approaches will be significantly reduced, and the system can operate more robustly with an expectation of as-is and as-planned workloads. Finally, the proposed vision-based algorithms can provide BIMs with supplementary information, enhancing their usefulness for indoor construction sites.
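As a hedged illustration of this comparison, progress states can be ranked and partitions flagged when the detected state trails the as-planned state drawn from the 4D BIM; the dictionaries below stand in for BIM queries, and no real IFC or BIM API is invoked.

    # Sketch of comparing detected states against a 4D BIM's as-planned states.
    # The dictionaries stand in for BIM queries; wall IDs and states are
    # illustrative assumptions.
    ORDER = {"stud": 0, "insulation": 1, "drywall_installed": 2,
             "drywall_plastered": 3, "drywall_painted": 4}

    def flag_behind_schedule(as_planned, as_detected):
        """Yield partitions whose detected state trails the planned state."""
        for wall_id, planned in as_planned.items():
            detected = as_detected.get(wall_id, "stud")
            if ORDER[detected] < ORDER[planned]:
                yield wall_id, planned, detected

    as_planned = {"W-101": "drywall_plastered", "W-102": "insulation"}
    as_detected = {"W-101": "drywall_installed", "W-102": "insulation"}
    for wall, plan, seen in flag_behind_schedule(as_planned, as_detected):
        print(f"{wall}: planned {plan}, detected {seen}")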

3.9 Acknowledgments

The authors would like to thank Adrienne De Francesco and Steve Miszuk from the University of Toronto; Tom Finan, principal of PMX Construction; Emily C. Penn, project manager at PMX Construction; and Steve Di Santo, project coordinator at Eastern Construction, for their great help and support during the data collection stage and site visits. The authors are also grateful for the financial support of the Natural Sciences and Engineering Research Council, grant number 203368-2012. Any opinions, findings, and conclusions expressed in this paper are those of the authors and do not necessarily reflect the views of the individuals mentioned above.
