
Electrical Engineering Department

ELEG490

Final Report

February 2016

Precise Landing of an Unmanned Aerial Vehicle

This project report (II) is prepared in partial fulfilment of the requirements for the

Bachelor of Electrical Engineering

Team Members:

Abdulla Abedrabbo Jaber ID: 5622

Fadi Al-Zir ID: 5261

Supervisors: Dr. Khaled Al-Wahedi & Dr. Mohammed Abou Khousa

Submission date: February 25th, 2016


Declaration

We hereby declare that this work has been done by our team and no portion of the

work contained in this report has been submitted in support of any

application for any other degree or qualification of this or any other university or

institute of learning.

We also declare that we have not engaged in any unauthorised act of copying or

reproducing or attempt to copy / reproduce or cause to copy / reproduce or permit the

copying / reproducing or the sharing and / or downloading of any copyrighted

material or an attempt to do so whether by use of the University’s facilities or outside

networks / facilities whether in hard copy or soft copy format.

We hereby further declare that in the event of any infringement of the provisions of

the Act whether knowingly or unknowingly the University shall not be liable for the

same in any manner whatsoever and undertake to indemnify and keep indemnified the

University against all such claims and actions.

Student1:

Signature: __________________________ Name:

Student ID: Date:

Student2:

Signature: __________________________ Name:

Student ID: Date:


Acknowledgement

We would like to express our gratitude, first and foremost, to our mentors, Dr. Khaled Al-Wahedi and Dr. Mohammed Abou Khousa, for their efforts in guiding and supervising us through this project. Additionally, we would like to thank Mr. Iman Prayudi for greatly contributing to all technical aspects of our work and for teaching us new, useful and practical skills which helped us complete this project. Finally, we want to thank both of our course instructors, Dr. Mahmoud Meribout and Dr. Khalid Alhammadi, for helping us deliver our ideas and achievements in the best way possible. We couldn't have done this without you all.


Executive Summary

This is the final report for a senior design project, namely, precise landing of an Unmanned Aerial Vehicle (UAV). The purpose of this project is to enable a commercial UAV platform to autonomously perform a complex landing on a predesignated slanted surface, by means of an on-board control system equipped with suitable sensors and aids.

This report provides a detailed overview of all the work done in order to complete

the project.

The project has been developed to include the design of several sub-systems, which are analyzed and discussed in this report with the use of various decision-making techniques. The selection of sub-systems and components is made based on this evaluation, representing the most effective means to realize the required functionality. A detailed description of the proposed design approach is provided. The process of design implementation, experiments and system calibration is thoroughly documented and explained. Finally, an analysis of system performance and suggestions for future development are provided.


Table of Contents

Declaration ................................................................................................................................. ii

Acknowledgement ..................................................................................................................... iii

Executive Summary .................................................................................................................. iv

Table of Contents ....................................................................................................................... v

List of Figures ........................................................................................................................... vi

List of Tables ............................................................................................................................ vii

1. Introduction ........................................................................................................................... 1

2. Theoretical Background ........................................................................................................ 3

3. Design Details ....................................................................................................................... 6

3.1 Choice of microcontroller .................................................................................................... 6

3.2 Design of the sensing system .............................................................................................. 7

3.2.1 Radar ................................................................................................................................ 7

3.2.2 Ultrasonic (UT) sensors .................................................................................................... 9

3.2.3 Laser pointers and camera .............................................................................................. 10

3.3 Design analysis .................................................................................................................. 11

3.3.1 Pairwise comparison chart .............................................................................................. 11

3.3.2 Decision matrix .............................................................................................. 12

3.4 Design of On-Board Module ............................................................................................. 13

3.5 Design of landing gear....................................................................................................... 15

3.6 Software design ................................................................................................................. 17

3.6.1 Image processing ............................................................................................................ 19

3.6.2 Flight .............................................................................................................................. 21

4. Conclusion ........................................................................................................................... 25

5. References ........................................................................................................................... 26

Appendices .............................................................................................................................. 27


List of Figures

Figure 1: IRIS+ Quad-copter ..................................................................................................... 3

Figure 2: Raspberry PI 2............................................................................................................ 3

Figure 3: Measuring slant angle using two distance measurements .......................................... 8

Figure 4: Transmitter Bandwidth vs. Resolution....................................................................... 8

Figure 5: Ultrasonic sensor system ............................................................................................ 9

Figure 6: Ultrasonic system Implementation............................................................................. 9

Figure 7: Laser Line generator ................................................................................................ 10

Figure 8: Raspberry Pi Camera ............................................................................................... 10

Figure 9: Principle of operation of optical system................................................................... 10

Figure 10: System casing (View 1) ......................................................................................... 14

Figure 11: System casing (View 2) ......................................................................................... 14

Figure 12: Assembled system casing (View 1) ....................................................................... 14

Figure 13: Assembled system casing (View 2) ....................................................................... 14

Figure 14: Generic landing gear .............................................................................................. 15

Figure 15: Stepper motor ......................................................................................................... 15

Figure 16: Motor driver board ................................................................................................ 16

Figure 17.1: Landing Gear ...................................................................................................... 16

Figure 17.2: Landing gear skids .............................................................................................. 16

Figure 17.3: Landing gear bearing bridges .............................................................................. 16

Figure 18: Static laser projection ............................................................................................. 16

Figure 19: Laser projection on a slanted surface ..................................................................... 16

Figure 20: Abstract connectivity of the landing system .......................................................... 17

Figure 21: Reference projection .............................................................................................. 18

Figure 22: +Y tilt projection .................................................................................................... 18

Figure 23: MATLAB Command Window .............................................................................. 18

Figure 24: LZ detection ........................................................................................................... 19

Figure 25: LZ detection in processing ..................................................................................... 19

Figure 26: Laser projection detection ...................................................................................... 20

Figure 27: Processing of laser projection ................................................................................ 20

Figure 28: Detection of black LZ ............................................................................................ 20

Figure 29: Processing of black LZ .......................................................................................... 20

Figure 30: UAV degrees of freedom ....................................................................................... 22

Figure 31: Raspberry Access Point ......................................................................................... 24

Figure 32: Monitoring the UAV wirelessly ............................................................................. 24


List of Tables

Table 1: Comparison of microcontroller specifications ............................................................ 6

Table 2: Pairwise comparison chart ....................................................................................... 11

Table 3: Basic decision matrix ................................................................................................ 12

Table 4: Final decision matrix ................................................................................................ 13


1. Introduction

Nowadays, people in a wide range of industries, covering civilian and military sectors alike, seek to bring as much automation as possible into all kinds of critical tasks and processes. This is due to the great advantages that automation brings to these industries. For instance, removing the human factor from the process completely (or at least partially) yields unmatched accuracy, precision, repeatability and speed, in addition to promoting the ability to operate on demand while complying with the highest process safety standards and generating very high quality products at lower operational costs.

The interest in autonomous mission critical systems is timeless. This interest

has been triggered by the significant need for robust robotic systems that are capable

of functioning independently in unknown environments. In our project we aim to

support this trend of automation, by designing an on-board control system for a

commercial Unmanned Aerial Vehicle (UAV), commonly referred to as a drone. The

UAV will be interfaced with suitable sensors and aids, which will enable the drone to

evaluate its surroundings, locate a predesignated marked zone and safely land on it.

The project objectives are:

• Enable the drone to safely and autonomously land on a static marked slanted surface.
• Evaluate sub-system design alternatives, which encompass:
  o Selecting the most suitable and effective controller device
  o Determining and implementing the best algorithm for the controller to tackle the task
  o Choosing the safest and most efficient landing sequence to be executed
  o Identifying the suitable sensors to be interfaced with the controller (including cameras, GPS etc.)
• Design and build a testing rig to evaluate the performance of the slant angle estimation algorithm (e.g., an adjustable-angle slanted surface)
• Design, build, and integrate with the drone's autopilot an add-on on-board module containing the selected controller and the sensing sub-system.

If time permits, the team will also attempt to enhance the system so that it would

be able to locate and land on a moving slanted surface1.

1 Initially, this was one of the project objectives when the team had three student members.

However, due to the fact that the team was reduced to two students after one member dropped

the course, this objective became optional for the team to tackle if time permits.


Furthermore, in the development of this project, the following constraints were considered:
• The landing system is required to be completely autonomous (i.e. with no human intervention)
• The slanted landing zone must have an absolute minimum inclination angle of 30 degrees in any direction
• The slanted landing zone must have a maximum area of 1.5 m x 1.5 m
• The UAV is required to recognize the uniquely marked landing zone from a minimum distance of 30 m
• The project budget is limited to 5000 AED


2. Theoretical Background

The Unmanned Aerial Vehicle (later referred to as UAV) was first developed and introduced at scale solely for military purposes, dating back to the late 1970s, in answer to the need for air reconnaissance that would not incur any unnecessary human losses [1]. Later on, because of the tremendous success of the endeavor, the focus of UAV development started to shift to real combat applications, where they could be equipped with guns, missiles etc., thus attempting to further reduce the human factor in aerial combat and reconnaissance. However, even then, UAVs still had to be controlled by a pilot from the ground, using on-board cameras.

As might have been expected, what was once a strictly military idea has nowadays come to be a widely used asset in all imaginable kinds of civil applications, ranging from the usual RC aircraft toys to NASA research drones. As a recent example, early in 2015 the well-known online retailer Amazon announced that it is developing a so-called "Prime Air" delivery service, where quad-rotor UAVs will be used to deliver goods in small packages to customers in less than 30 minutes, as the company claims. Finally, we come to the same issue discussed previously: the need for automation. Human-controlled UAVs are popular and widely used, and now is the time to take this device to the next step, essentially upgrading it into a robot that could complete certain assigned tasks autonomously (i.e. with minimal human input).

Of course, such upgrading must be done in steps. For example, some modern commercial UAVs are already fitted with GPS trackers (e.g. the 3DR IRIS+) and are able to fly along a path defined by the user on a handheld device. Furthermore, one of the recent developments is the so-called "Follow me" function (e.g. the Walkera Scout X4 quadcopter), which essentially makes the drone follow the user and focus its on-board camera on them. In this project, we will focus on developing a system that will allow such a UAV to automatically perform an accurate and safe complex landing on a slanted, uneven or moving surface, thus taking UAV automation a step further.

This project focuses on developing a control system and correctly interfacing it

with sensory systems, to enable the UAV to perform a complex landing. It is

important to mention that a commercial UAV has been provided by the supervisors: a quad-rotor drone, the IRIS+ (Figure 1), produced by 3D Robotics. Furthermore, a Raspberry PI controller (Figure 2) was provided to us as a prototyping and development processor to achieve our project's objectives.

Figure 1: IRIS+ Quad-copter
Figure 2: Raspberry PI 2

IRIS+ is equipped with an onboard 3DR Pixhawk microcontroller, as well as a 3DR uBlox GPS system (Appendix A-B); both of these are open-source hardware, meaning that the source code is available for user modification. Having this in mind, we will be able to set up communication between the drone's native control system and the one that is to be designed using the Raspberry PI 2. The latter runs Raspbian OS and, although a favorite choice of hobbyists, has the capacity to suit professional needs: it has an ARMv7 900 MHz quad-core processor, 1 GB of RAM, various I/O ports etc.; it is essentially a mini-computer [2] (Appendix I). Furthermore, it is capable of running the full range of ARM GNU/Linux distributions, enabling it to be programmed in the Python language. Finally, as per the IRIS+ specifications, the maximum payload of the UAV, excluding the Pixhawk and GPS, is 400 g, whereas our chosen controller, weighing slightly above 45 g and measuring roughly 80x50 mm, would fit and be easily carried by the IRIS+. Furthermore, it would leave plenty of space to arm it with all the required sensors and auxiliary systems.

We pick up from the most recent development of such a UAV landing system [3], where the drone is fitted with as many as eight laser modules and a CMOS camera, all interfaced to a Gumstix Overo Air COM controller and mounted underneath the quadrotor. The latter chip runs on a Texas Instruments processor, offers slightly lower capabilities in terms of speed, RAM and available ports, and costs $200 compared to the Raspberry's $35. Nevertheless, in the proposed approach, the drone centers itself on the predefined slant, the camera captures the positions of the laser beams, and the controller then calculates the distance to the slant and the required angle using nonlinear least squares estimation in 3D. Afterwards, the appropriate signals are fed to the motors, thus tilting the UAV and landing it on the surface. The constraint of that project is that the slanted surface must be inclined between 0 and 30 degrees [3], while our objective is to enable the drone to land on an incline higher than 30 degrees. In [4], on the other hand, the emphasis is on correct vision-based trajectory calculation ahead of time, which allows the quadcopter to accurately perform even the most extreme maneuvers, including landing on inclined surfaces. However, the latter was achieved partially by modifying the landing gear to perch on a slant using various "Velcro" materials.

In another exemplary approach [5], a Simultaneous Localization And Mapping

algorithm (SLAM) was applied on an RC helicopter, using only a monocular camera.

Furthermore, aerial gripping was introduced as a tool to stabilize the helicopter while

airborne. This was achieved by adding a secondary camera, used to estimate the 3D

location of the object, while an under-actuated and passively compliant manipulator

was designed for effective gripping under uncertainty.


Additionally, other types of sensing that may be used to measure the distance between the UAV and the landing zone include (but are not limited to) [6]:
• Ultrasonic - these sensors are sensitive to the sound absorption of the material of the targeted surface.
• Infrared - these sensors have limited range and are nearly useless outdoors under direct sunlight.
• Laser rangefinders - these send out a laser pulse and produce measurements based on the time-of-flight principle (measuring how long the pulse takes to reflect off the target and return to the sending end). These are by far the most applicable to our project, as their usage is not limited by the environment and LRFs normally operate at distances beyond the 30 m project constraint.

To summarize, there are many possible angles to approach the issue of implementing

a sensing system, and this, among other sub-systems, is discussed in the following

section.


3. Design Details

3.1 Choice of microcontroller

As mentioned before, a commercial “Raspberry PI 2” microcontroller was provided

for the design by the supervisors. This microcontroller suits our needs well; however, it is necessary to ensure that we are working with the best available option and to justify our choice.

In order to do that, we will compare the Raspberry with two other microcontrollers available on the market which are potentially able to meet the design criteria: the Arduino Mega2560 and the BeagleBone Black.

Table 1: Comparison of microcontroller specifications

Attribute       Raspberry PI 2          Arduino Mega2560     BeagleBone Black
Price           $35                     $37                  $60
CPU             ARM Cortex-A7           ATmega2560           ARM Cortex-A8
Cores           4                       1                    1
GPU             Broadcom VideoCore IV   N/A                  PowerVR SGX530
Clock Speed     900 MHz                 16 MHz               1 GHz
RAM             1 GB                    8 KB                 512 MB
USB Ports       4                       1                    1
Flash Memory    N/A                     256 KB               2 GB
Storage         MicroSD                 N/A                  MicroSD
GPIO            40                      54                   2x 46
Video output    HDMI/Composite          N/A                  Micro HDMI
Audio output    HDMI/3.5 mm             N/A                  HDMI/3.5 mm
Ethernet port   10/100                  N/A                  10/100
Power Supply    5 V                     5 V                  5 V
Size            85 mm x 56 mm           101 mm x 53 mm       86 mm x 53 mm

As we can see in Table 1, the BeagleBone Black is a close competitor to the Raspberry PI: its single-core 1 GHz CPU arguably surpasses the latter's quad-core 900 MHz CPU by our criteria. The reason is that, even though multi-core CPUs allow for extensive multitasking, they are not as effective at tackling single tasks which simply require more processing power. Additionally, exploiting the quad-core benefits would be an unnecessary complication for our project, as our algorithms can potentially be processed successfully by a single-core CPU. On the other hand, the BeagleBone Black has half as much RAM as the Raspberry PI 2, which may constrain our choice of algorithm. Moreover, the BeagleBone costs $25 more, and that point settles our choice here, because it is clearly not sensible to increase the cost of our project while potentially constraining our capabilities, even if it may appear safe to do so in these specification ranges. Finally, comparing the Arduino Mega2560 to the provided microcontroller, it is quite clear from the specifications that it is a far less capable IC at essentially the same price as the Raspberry. Hence, the Raspberry PI is ultimately the best choice of microcontroller in our circumstances, as all other ICs available in this range are either less powerful or cost over $100.

3.2 Design of the sensing system

3.2.1 Radar

One possible solution for informing the UAV about the surface underneath is equipping it with a radar system (i.e., a radar altimeter). A radar altimeter can be built

based on frequency modulated continuous wave (FMCW) or pulse radar concepts. For

illustration purposes, we will assume a pulse radar concept whereby the radar emits a

pulse towards the surface and subsequently receives the reflected pulse off the

surface. The unknown distance to the surface is determined from the time it takes the

pulse to reach the surface and come back (round-trip time) and speed of propagation

(e.g., speed of light). In general, the resolution in the distance measurement is related

to the pulse rise time, (which is related to the sounding bandwidth by

.

In our application, we need two distance measurements in order to estimate the slant angle of a 1D surface or plane (see Figure 3). The simplest realization option is to use two on-board radar altimeters interspaced by a distance $x$ in the UAV plane. With such a system, knowing the separation $x$ between the radars would allow us to calculate the height of the incline $\Delta z = d_2 - d_1$ from the two radar distance measurements $d_1$ and $d_2$ (Figure 3). In this case, the slant angle can be estimated as:

$\theta = \tan^{-1}\!\left(\dfrac{\Delta z}{x}\right) = \tan^{-1}\!\left(\dfrac{d_2 - d_1}{x}\right)$  (Eq. 1)

As mentioned above, the radar would operate by transmitting microwave signals to the surface through the transmitter and measuring the time it takes for the microwaves to be reflected back to the receiver. Keeping in mind that microwave signals travel at the speed of light $c$, we can calculate the distance to the surface as follows (Eq. 2):

$d = \dfrac{c \cdot t}{2}$  (Eq. 2)

where $t$ is the measured round-trip time.
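To make the geometry concrete, the short sketch below computes the distance from a round-trip time (Eq. 2), the slant angle from two altimeter readings (Eq. 1), and the transmitter bandwidth needed for a given range resolution (the relation introduced as Eq. 3 in the next paragraph). It is an illustrative example only; the sample values and function names are assumptions, not project data.

```python
import math

C = 3.0e8  # speed of light [m/s]

def radar_distance(round_trip_time_s):
    """Eq. 2: distance from the round-trip time of the reflected pulse."""
    return C * round_trip_time_s / 2.0

def slant_angle_deg(d1, d2, x):
    """Eq. 1: slant angle from two altimeter readings d1, d2 separated by x [m]."""
    return math.degrees(math.atan2(d2 - d1, x))

def required_bandwidth_hz(resolution_m):
    """Eq. 3 rearranged: pulse bandwidth needed for a given range resolution."""
    return C / (2.0 * resolution_m)

if __name__ == "__main__":
    # Example (assumed) values: altimeters 0.3 m apart reading 2.10 m and 2.27 m.
    print(radar_distance(13.3e-9))           # ~2.0 m for a 13.3 ns round trip
    print(slant_angle_deg(2.10, 2.27, 0.3))  # ~29.5 degrees
    print(required_bandwidth_hz(0.005))      # 3e10 Hz = 30 GHz, as noted in the text
```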


However, the issue with using radar as a range finder for small distances is the range resolution, which determines the radar's ability to distinguish between distances with small differences. The range resolution is calculated using the following equation (Eq. 3):

$\Delta R = \dfrac{c}{2B}$  (Eq. 3)

where $B$ is the pulse bandwidth. The graphical relation between the pulse bandwidth and the resolution can be seen in Figure 4. Thus, in order to get a reasonable resolution in millimeters, as per the requirements of the project, we will need a pulse with a bandwidth in the GHz range. For example, using Eq. 3, a resolution of 5 mm would require a pulse bandwidth of 30 GHz. This in itself makes the radar unsuitable for our application, because we can neither afford nor fit a radar with such a high bandwidth on our drone.

Figure 3: Measuring slant angle using two distance measurements
Figure 4: Transmitter Bandwidth vs. Resolution

Figure 5: Ultrasonic sensor system
Figure 6: Ultrasonic system Implementation

3.2.2 Ultrasonic (UT) sensors

Our second design option is to use multiple ultrasonic sensors. These would operate similarly to a radar, except that they emit ultrasound pulses and receive the reflections of these sound pulses from the surface below, measuring the time the sound waves take to be reflected back (Eq. 4).

This would enable us to calculate the difference in height of the incline (delta

z) at different points, provided that the distance between the sensors (x) is known

(Figure 5). However, this approach would have its difficulties. First, it is important to

know that ultrasound is highly attenuated when transmitted in air or any other low

density medium, and thus our receiver would most likely present a noisy signal.

Furthermore, as our landing surface is not parallel to the ultrasound emitter, the waves would not reflect back to the receiver at the correct angle (Figure 6), thus supplying us with useless information about the slanted surface. For these reasons, ultrasonic sensors are much more effective in liquid or solid media than in air, and it would be quite challenging, if possible at all, to set them up for use in our UAV landing system.

It is worth mentioning that, for both Radar and UT approaches, performing

accurate localized measurements such as the ones needed in this project would not be possible without narrow-beam sensing. This in turn requires additional hardware

(e.g., array) to be implemented which further complicates the design and increases the

payload and cost. However, since that is possible in theory, these two options were

considered and discussed.

$d = \dfrac{v \cdot t}{2}$  (Eq. 4)

where:
v = speed of sound [m/s]
t = measured running time [s]
d = distance to the surface [m]


Figure 7: Laser Line generator
Figure 8: Raspberry Pi Camera
Figure 9: Principle of operation of optical system

3.2.3 Laser pointers and camera

Finally, we approach our third possible sensing system, which would consist of two

components: four laser line generators (3 mW) (Figure 7) and a Raspberry PI camera

(Figure 8).

In this system (Figure 9), the laser line generators would be placed in a way so

as to intersect and generate a laser square directly below the quadrotor. The camera

will be video-streaming this image of a square projection on the surface, to the

microcontroller, where the distance (L), which will visually decrease as the angle of

the slant increases (when viewed from the camera, e.g. top-down view), will be

calculated using image processing algorithms. Here, we assume that the distance

between the laser generators (d) is known, and that when the surface is flat, the

distance (L) is the direct projection, and thus is equal to (d). On the other hand, when

we have an inclined surface, from the camera’s top-down point of view, the distance

(L) is going to change visually, decreasing and becoming less than the distance (d).

Hence, using (L) we can accurately calculate the angle (θ) of the slanted landing surface as

$\theta = \cos^{-1}\!\left(\dfrac{L}{d}\right)$  (Eq. 5)

and execute the landing sequence based on that information.
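As a quick numerical illustration of Eq. 5 (the spacing and projection values below are assumed examples, not measured project data):

```python
import math

def incline_angle_deg(L, d):
    """Eq. 5: incline angle from the apparent projection distance L and
    the known laser spacing d (L <= d)."""
    return math.degrees(math.acos(L / d))

# If the lasers are 0.40 m apart and the top-view projection shrinks to 0.28 m,
# the estimated incline is roughly 45.6 degrees.
print(incline_angle_deg(0.28, 0.40))
```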

This system seems to be the most feasible; however, due to the presence of laser pointers, issues may arise during experiments under direct sunlight, where the laser line traces on the surface would lose brightness and could become hard to locate during the image processing phase. Furthermore, this would constrain the whole system to landing safely only on surfaces that have a high contrast with the color of the laser (e.g. red, green or blue lasers are available).

Nonetheless, the laser approach would provide the narrow beam required for

the localized sensing needed in the project. This approach has been demonstrated in the past for a similar application using eight laser pointers [6]. In this project, we propose to

use only four laser line generators (i.e. a pointer with a lens/filter) which would

decrease the cost and computational complexity.


3.3 Design analysis

Finally, at this point, we are ready to evaluate the decisive part of our design, the

UAV sensing system, by comparing the three possible designs discussed above.

In order to do that, we’ve used a standardized concept evaluation approach using a

PWC chart, and consequently, a decision matrix. Detailed evaluation process is

presented in the following sections

3.3.1 Pairwise comparison chart

To initialize our comparison process, we needed to devise a PWC chart, with

distribution of weights for each of the design criteria. For this purpose, we chose the

following:

• Accuracy – a measure of how accurately the system will be able to determine the angle of the slanted surface and its orientation
• Safety – a measure of how safely the UAV will perform the landing, both for itself and for the user/bystanders/environment – this is somewhat dependent on the previous criterion
• Feasibility – a measure of how easy or convenient implementing such a system would be
• Cost – the measure of how expensive implementing such a system would be

Hence, all of the above are included in the PWC chart in Table 2.

Table 2: Pairwise comparison chart

             Safety   Accuracy   Feasibility   Cost   Geometric mean   Weights
Safety       1        0.5        3             4      2.45             0.29
Accuracy     2        1          3             5      5.48             0.65
Feasibility  0.33     0.33       1.00          0.50   0.236            0.0278
Cost         0.25     0.20       2.00          1.00   0.316            0.0373

The last two columns of Table 2 are calculated as per Eq. 6 and Eq. 7: Eq. 6 takes the geometric mean $GM_i$ of the entries in row $i$ of the chart, and Eq. 7 normalizes these means to obtain the weights,

$w_i = \dfrac{GM_i}{\sum_k GM_k}$  (Eq. 7)

In this fashion, the criterion in the left-most column is compared to the criterion in the upper row using a number. For example, accuracy is related to safety by a factor of 2,

meaning that we consider accuracy of our system twice as important as safety is,

mostly because the most accurate system will surely be safer in our case, while the

safest system may or may not be accurate.


Such distribution is justified by our project objectives and constraints. As we

aim to design a safe and autonomous system that would enable the UAV to land on a

1.5 meter by 1.5 meter landing zone, which is to be detected from a minimum

distance of 30 meters, we value accuracy above all. The second most important criterion is safety, which, as previously discussed, depends on the accuracy of our system. The cost criterion weight is then justified by our project budget constraint of 5000 AED. Finally, the feasibility of the system is the least important criterion, because it depends largely on our creativity and less on the actual complexity of the system implementation.

3.3.2 Decision matrix

Consequently, the next logical step is to compare our three designs based on the

chosen criteria (Table 3).

Table 3: Basic decision matrix

             Radar   Ultrasonic sensor   Optical system
Safety       5       3                   5
Accuracy     5       3                   5
Feasibility  2       3                   4
Cost         1       4                   5
Total        13      13                  19

The numbers in the table above represent approximate measures for each design:

• Safety (5 – safest, 0 – unsafe)
• Accuracy (5 – most accurate, 0 – not accurate)
• Feasibility (5 – easy and very simple to do, 0 – impossible to do)
• Cost (5 – very cheap, 0 – over the total budget)

Here, as follows from section 3.2, and assuming that we are able to implement

narrow beam sensing (NBS) for Radar and UT systems, we can state that UT would

be the least accurate system, because sound waves will still experience some

attenuation when transmitted through air, thus producing a noisy signal. The Radar,

however, will become as accurate as the optical system. Consequently, the same values are carried over to the safety criterion in Table 3. On the other hand, the radar system is a lot harder to implement, because of the considerable size required for a high-bandwidth transmitter. The ultrasonic system is similar to the optical one in terms of feasibility, but the design of the former is more complex because of the NBS. Finally, the cost of the radar system is the greatest because of the high bandwidth requirement, while the UT and optical systems can easily fit in our budget, with UT being slightly more expensive.


In order to reach the final decision, we now need to combine the two tables

by multiplying each cell from Table 3 by the corresponding weight of the criterion,

determined in the last column of Table 2. We will get the following:

Table 4: Final decision matrix

             Radar   Ultrasonic sensor   Optical system
Safety       1.44    0.87                1.44
Accuracy     3.23    1.94                3.23
Feasibility  0.06    0.08                0.11
Cost         0.04    0.15                0.19
Total        4.77    3.04                4.97
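As a quick cross-check of Table 4, the short snippet below multiplies the raw scores of Table 3 by the rounded weights of Table 2; it is illustrative only and not part of the project software.

```python
# Criterion weights from Table 2 (rounded) and raw scores from Table 3
# in the order (Radar, Ultrasonic sensor, Optical system).
weights = {"Safety": 0.29, "Accuracy": 0.65, "Feasibility": 0.0278, "Cost": 0.0373}
scores = {
    "Safety":      [5, 3, 5],
    "Accuracy":    [5, 3, 5],
    "Feasibility": [2, 3, 4],
    "Cost":        [1, 4, 5],
}

designs = ["Radar", "Ultrasonic sensor", "Optical system"]
totals = [0.0, 0.0, 0.0]
for criterion, row in scores.items():
    weighted = [weights[criterion] * s for s in row]
    totals = [t + w for t, w in zip(totals, weighted)]
    print(criterion, [round(w, 2) for w in weighted])

print(dict(zip(designs, [round(t, 2) for t in totals])))
# Totals come out to roughly 4.79, 3.05 and 5.00; Table 4 reports 4.77, 3.04 and
# 4.97 with unrounded weights. The optical system ranks highest either way.
```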

Finally, we can determine that our third design, the optical system made up of

laser line generators and a camera, is the most suitable approach for our project. It is

important to note that working with lasers requires protective glasses, as direct

exposure of eyes to laser light may be harmful.

3.4 Design of On-Board Module

As we were provided with a ready-made commercial UAV, we had to work within its existing structure to implement a physical casing which would be used to attach the control and sensory systems to the IRIS+.

This proved to be a challenge because the drone only had two holes in its

lower body part, which were meant for an optional gimbal accessory. These two holes are designed for standard M5 screws, so we designed a base which carries the microcontroller (fixed to it by M2 screws) and attaches to the UAV using two hands. The latter are shaped to account for the design of the UAV's case and have a variable fixing distance, because the actual distance between the two gimbal holes in the UAV body is not supplied by the manufacturer, and measuring it by hand resulted in uncertainties (Figure 10).

Going further, we needed to attach the laser pointers to the carrying base in

such a way that the laser projection would produce a full square on the surface

beneath the drone. In order to accomplish this task, we used hammer-like

construction, with the handle attached deep under the bottom of the casing by an M2

screw, and additionally fixed using a plastic rectangular protrusion, which fits into a

pre-designated hollow rectangle (with a +0.5m on all dimensions of the hollow as

compared to the protrusion, in order for the latter to fit) placed next to the screw-hole

on the bottom part of the base.

Finally, the bottom of the casing has an extension, a smaller base for the

Raspberry PI camera, designed in a way so as to center the camera exactly in the

middle between the laser pointers, in order for it to pick up the image from the surface correctly (Figure 11).

Figure 10: System casing (View 1)
Figure 11: System casing (View 2)

All of the above was designed and built using “SolidWorks” software, and

was recently manufactured (3D-printed) in PI Mechanical Workshop. At this stage,

the module is fully assembled together with the microcontroller, camera and lasers,

and is attachable to the UAV. Lightweight nylon screws were used for attaching the

parts together, to avoid decreasing the available payload of the UAV (Figure 12-13).

Furthermore, as a part of the hardware development phase, we are developing an adjustable landing gear module, thus modifying or replacing the existing UAV landing gear. Such an approach is necessary because of the aerodynamic features of the quad-copter. As we can see from previous developments in this area [6], in order to land the UAV on a slant, the UAV needs to tilt its body so as to become parallel to the landing surface. On the other hand, whenever a UAV tilts, it starts to move in the direction it is tilted in, making the landing inaccurate and the approach itself ineffective. Theoretically, and as described in [6], this issue can be tackled by first making the UAV descend on the landing zone vertically until the distance between the UAV and the landing zone becomes relatively small, and then tilting the drone and switching off the quad motors. This results in a short free-fall period and makes the UAV land further down the incline, at a considerable distance from the initial landing zone sector. Such an approach can be considered neither accurate nor safe as far as our project is concerned, which is why we need to develop an adjustable landing gear that will be adjusted by the microcontroller according to the determined incline angle. This will improve the safety of the landing and will diminish the importance of the drone aerodynamics issue.

Figure 12: Assembled system casing (View 1)
Figure 13: Assembled system casing (View 2)

Figure 14: Generic landing gear
Figure 15: Stepper motor

3.5 Design of landing gear

As our project involves a "variation" of a helicopter, we decided to draw inspiration from well-known designs (Figure 14), thus developing our own "variation" of a landing gear. However, we had certain design-specific challenges in coming up with a suitable design: the landing gear should be able to rotate in mid-air and firmly support the UAV during landing, while not obstructing the camera's field of view and not exceeding the maximum possible payload, of which 200 g remained at this point.

To tackle the latter, we designed our landing gear to be adjusted by a small 5V

stepper motor (Figure 15) which weighs 37g (Appendix E). Additionally, we will

need to upgrade our controller with a motor driver board (Appendix E), which will

enable Raspberry to operate the stepper motor and accurately adjust the landing gear

based on the determined slant angle.
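As an illustration of how the Raspberry could command such a motor, the sketch below assumes a generic unipolar 5 V stepper behind a simple four-input driver board; the GPIO pin numbers, half-step sequence and steps-per-degree figure are assumptions for illustration, not the project's actual wiring or calibration.

```python
import time
import RPi.GPIO as GPIO

# Hypothetical wiring: driver inputs IN1..IN4 on these BCM pins.
PINS = [17, 18, 27, 22]
# Half-step sequence for a generic unipolar stepper (assumed motor type).
SEQUENCE = [
    [1, 0, 0, 0], [1, 1, 0, 0], [0, 1, 0, 0], [0, 1, 1, 0],
    [0, 0, 1, 0], [0, 0, 1, 1], [0, 0, 0, 1], [1, 0, 0, 1],
]
STEPS_PER_DEGREE = 11.4  # assumed gearing; must be calibrated for the real gear


def setup():
    GPIO.setmode(GPIO.BCM)
    for pin in PINS:
        GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)


def rotate(degrees, delay=0.002):
    """Rotate the landing-gear shaft by the requested angle (sign = direction)."""
    steps = int(abs(degrees) * STEPS_PER_DEGREE)
    seq = SEQUENCE if degrees >= 0 else list(reversed(SEQUENCE))
    for i in range(steps):
        for pin, level in zip(PINS, seq[i % len(seq)]):
            GPIO.output(pin, level)
        time.sleep(delay)


if __name__ == "__main__":
    setup()
    rotate(30)      # e.g. tilt the skids to match a 30-degree incline
    GPIO.cleanup()
```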

We made use of the additional screw slots on the bottom of the UAV’s body

by designing two carrying bridges, which will provide a base for the system, and will

be connected to extensions, which will hold the motor and the end-bearing. The motor

will be placed on one side of the UAV, connected to a moving shaft using a motor

coupling. This coupling is of helical construction and is made of a flexible material,

which provides means for fully compensating for any shaft misalignments, which are,

in turn, inevitable in our system, where misalignments may result from motor

vibrations (including from the UAV rotors) or wind gusts (an important system-specific

effect). The central shaft runs through two linear bearings, placed on both bridges, to

provide suitable support. The shaft also has two shaft collars, which provide means

for connecting the landing skids to the shaft, and, consequently, moving the skids by

moving the shaft. Finally, a linear bearing cap is placed on the second bridge, to hold

the shaft in place. Hence, we have determined all the parts needed for the

implementation of this idea, and made a prototype 3D model (Figure 16). It is

important to note that all of the parts in the drawing, except the landing skids and the

bearing bridges, were acquired from the market, and are made of lightweight

materials. While the former was manufactured by 3-D printing, as it was done with

the system casing discussed before (Figures 17.1, 17.2, 17.3).

Figure 16: Landing gear
Figure 17.1: Assembled Landing Gear
Figure 17.2: Landing gear skids
Figure 17.3: Landing gear bearing bridges

Figure 18: Static laser projection
Figure 19: Laser projection on a slanted surface

Finally, with all our setup in place, we managed to take pictures of the laser projection

with the Raspberry camera. As can be seen from the results (Figure 18-19), tilting the

surface underneath the UAV changes the projection dimensions, proving our

theoretical assumptions. Here, we can clearly see that the static projection is a

rectangle, while a projection on a slanted surface becomes closer to a square. (The Python script used to take pictures with the lasers enabled is presented in Appendix G.)


Figure 20: Abstract connectivity of the landing system

3.6 Software design

As we have discussed previously, an optical sensing system was selected for the

purpose of analyzing the landing zone and providing means for calculating the incline

angle. In the principle of operation described in Figure 9, we know the distance

between the laser line generators, and we can find the projection distance (L) on the

inclined surface using image processing techniques, which will provide the means to

calculate the incline angle as per Eq.5.

Figure 20 depicts the abstract connectivity of our system. Here, a video stream

from the camera is passed to the Raspberry, where it is analyzed in order to determine

the location of the landing zone. After the landing zone is located, required commands

are communicated by the microcontroller to the Pixhawk autopilot, which in turn

operates the quadrotor motors in order to move the UAV to hover above the landing

zone. After the UAV is centered, Raspberry extracts the laser generator projection

from the video stream and calculates the incline angle of the landing surface. Based

on this angle, the landing gear receives a signal to adjust accordingly. Finally, the

autopilot receives the command to start landing the UAV.
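This control flow can be summarized in a short top-level script. The helper functions below are hypothetical placeholders standing in for the routines described in sections 3.5 and 3.6; the sketch only illustrates the order of operations and is not the project's actual code.

```python
# Hypothetical placeholder implementations; in the real system these wrap the
# image-processing routines (section 3.6.1), the landing-gear driver (section 3.5)
# and the Pixhawk API calls described in section 3.6.2.
def locate_landing_zone(camera):
    return 1.2, -0.4          # x/y offset to the LZ center in meters (dummy values)

def goto_offset(autopilot, dx, dy):
    print(f"autopilot: move by ({dx} m, {dy} m) and hover")

def measure_incline_angle(camera):
    return 32.0               # degrees (dummy value)

def adjust_landing_gear(gear, angle_deg):
    print(f"landing gear: tilt skids to {angle_deg} degrees")

def land(autopilot):
    print("autopilot: LAND")

def landing_sequence(camera, autopilot, gear):
    """Top-level sequence following Figure 20 (illustrative sketch)."""
    dx, dy = locate_landing_zone(camera)      # 1. find the marked LZ in the video
    goto_offset(autopilot, dx, dy)            # 2. center the UAV above it
    angle = measure_incline_angle(camera)     # 3. estimate the slant from the lasers
    adjust_landing_gear(gear, angle)          # 4. match the gear to the slope
    land(autopilot)                           # 5. start the landing

landing_sequence(camera=None, autopilot=None, gear=None)
```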

The Raspberry PI will be programmed in Python using the OpenCV library, because Python has an application-programming interface (API) developed by 3DRobotics, the company that created our drone. This API allows interfacing the Raspberry with the open-source Pixhawk autopilot and provides the means for communication between the two.

Our first step in developing the image processing algorithm which will be used

to determine the incline angle is simulating it in MATLAB (Appendix C). In order to

avoid unnecessary complexity at this stage, it is assumed that the camera has already

captured the red laser projection, and the controller has executed both grayscale

thresholding and edge detection algorithms. Furthermore, the input image is assumed to have a resolution of 100x100 pixels, thus producing the following top view:

Figure 21: Reference projection
Figure 22: +Y tilt projection
Figure 23: MATLAB Command Window

Here, Figure 21 is the default projection, produced when the landing surface is

flat (i.e. reference projection). If, for instance, we tilt the landing surface in positive y

direction, the projection will change as follows: the top-most horizontal line will

move closer to the bottom horizontal line, while the latter will not change its location

(i.e. distance A will decrease) (Figure 22). The MATLAB output for our algorithm

(Figure 23) displays one of the 4 possible directions of the incline along with the slant

angle. Similarly, in case the surface is tilted in either direction of x axis, one of the

vertical sides of the rectangle will move closer to the other, visually, and thus the

distance B will decrease, while distance A will stay constant.

It is important to mention that, since our project requires the UAV to be able to land on a surface inclined in any direction (i.e. including diagonally), we will have to further enhance our algorithm beyond the four principal incline directions. Nevertheless, this simulation proves that using the optical sensing system is reasonable and that it is able to produce very accurate results, especially considering that the resolution of the Raspberry camera can be set as high as 1920x1080 pixels, which is far larger than the 100x100 used in the simulation, promising even better results in practical execution of the algorithm.


Figure 24: LZ detection
Figure 25: LZ detection in processing

3.6.1 Image processing

This step is fundamental to our project: here we establish a software link

between the Raspberry PI microcontroller and the Pi Camera, enabling the former to

be able to extract essential information from a captured image.

The first step, where image processing is involved, is determining the location of the

landing zone (referred to as LZ further on), its dimensions and its center. In order to

do that, the image has to be prepared for analysis, i.e. “translated” to a representation,

more common to "computer vision". It should be mentioned that a solid red rectangle of 50x70 cm was used in these experiments (Figure 24). Initially, the captured image is converted from a general RGB (Red, Green, Blue) to an HSV (Hue, Saturation, Value) model.

Then, as we want to locate a solid red rectangle, we define a range of HSV

combinations, which encompass a range of possible shades of red. Afterwards, we

mask the image to only show segments of itself containing a variation of the red color

in the pre-defined HSV range. Then, a slight Gaussian blur mask is applied to get rid

of small artifacts (single pixels recognized to be of red color, which can cause

unnecessary errors) and then we threshold the image, converting the variations of red

to variations of white (different in intensity, corresponding to the intensity of red), and leaving the rest of the image black (Figure 25). Finally, we run a contour search, by

defining an approximate size of the contour we expect to find (an approximate size of

the LZ, corresponding to the total area captured by the camera, easily scale-able

variable) and then looping over all contours available in the image, and finding the

best match. Then the center of the selected contour is located, and a corresponding shape is drawn along the edges of the detected shape in a bright green color (Figure 24), also providing the user with the dimensions of the contour in meters. Note that, because the laser line generators are red, our final LZ will be a solid white rectangle, but the image processing routine will stay the same, except that in this case we will

mask the HSV image for white color. Afterwards, the aforementioned process enables

us to, for example, calculate the distance from the present UAV location, to the

location of the center of the contour. This distance is provided in a set of x and y

distances in meters, which can then be easily passed on to the autopilot. This specific

case of image processing will also be discussed in later sections.
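A condensed sketch of this detection routine, using standard OpenCV calls, is shown below; the HSV limits, minimum contour area and the test image path are assumed values for illustration, not the project's tuned parameters.

```python
import cv2

def find_landing_zone(frame_bgr, min_area_px=2000):
    """Return the center (cx, cy) and bounding box of the red LZ, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)

    # Assumed HSV ranges for "red"; red wraps around hue 0, so two masks are combined.
    lower = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))
    upper = cv2.inRange(hsv, (160, 100, 100), (179, 255, 255))
    mask = cv2.bitwise_or(lower, upper)

    # Light blur, then binary threshold, to remove single-pixel artifacts.
    mask = cv2.GaussianBlur(mask, (5, 5), 0)
    _, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

    # Contour search: keep the largest red region of a plausible size.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]  # version-agnostic
    candidates = [c for c in contours if cv2.contourArea(c) > min_area_px]
    if not candidates:
        return None
    best = max(candidates, key=cv2.contourArea)

    x, y, w, h = cv2.boundingRect(best)
    cx, cy = x + w // 2, y + h // 2
    cv2.rectangle(frame_bgr, (x, y), (x + w, y + h), (0, 255, 0), 2)  # green outline
    return (cx, cy), (x, y, w, h)

# Example usage with a single image (the file name is hypothetical):
# result = find_landing_zone(cv2.imread("lz_test.jpg"))
```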

Figure 26: Laser projection detection
Figure 27: Processing of laser projection

Figure 28: Detection of black LZ
Figure 29: Processing of black LZ

Furthermore, the same image processing algorithm is applicable in determining the

incline of a slanted LZ, except that in that case we will compare the dimensions of the

contour of the laser projection on the slanted surface, to the pre-loaded contour of this

projection on a flat LZ.

However, in this case we also add a function to compensate for projection

misalignment, by first determining the rectangular contour of the projection (green in

Figure 26), and then creating a rotated contour overlay (blue in Figure 26), to simplify

the calculations of the projection dimensions (Appendix G). Thus, the aforementioned function provides the dimensions of the rectangle projected on a slanted surface, which are in turn compared to the previously stored dimensions of a normal projection on a flat surface, and thus the incline angle is determined (refer to section 3.2.3).
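This compensation step maps naturally onto OpenCV's rotated-rectangle fit. The sketch below is an illustrative version of it; the flat-surface reference dimensions are assumed calibration values, not the ones stored by the project.

```python
import math
import cv2

# Assumed reference dimensions of the projection on a flat surface, in pixels.
FLAT_WIDTH_PX, FLAT_HEIGHT_PX = 320.0, 240.0

def incline_from_projection(binary_mask):
    """Fit a rotated rectangle to the laser projection and estimate the incline."""
    contours = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)

    # The rotated bounding rectangle compensates for in-plane misalignment.
    (_, _), (w, h), _ = cv2.minAreaRect(largest)

    # Compare each side with its flat-surface reference (Eq. 5): L = d*cos(theta).
    ratio_x = min(w / FLAT_WIDTH_PX, 1.0)
    ratio_y = min(h / FLAT_HEIGHT_PX, 1.0)
    angle_x = math.degrees(math.acos(ratio_x))
    angle_y = math.degrees(math.acos(ratio_y))
    return angle_x, angle_y
```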

Additionally, we were able to upgrade our image processing algorithm to reliably differentiate between specific shapes and colors of the landing zone. For example, if we use a black rectangle as the landing zone (Figure 28) and place our previously used red rectangle next to it, the UAV will detect the black landing zone and disregard any other findings. Additionally, as we can see from the figure, we were also able to mark the determined center of the landing zone with a black dot. Finally, the microcontroller then easily determines the distance from the center of the camera's field of view to the center of the landing zone (and also determines its dimensions, by correlating its own position in 3-D space to the image dimension properties).


3.6.2 Flight

As it was mentioned before, the Pixhawk API gives Raspberry full access to

the drone controls and GPS data, and one of the aims of this project is to create an

autonomous system. This can be implemented by developing several Python scripts,

which are then uploaded in the Raspberry. In detail, these scripts are a series of

commands, communicated to the Autopilot, in order for the latter to move the UAV.

These scripts are executed based on the feedback from the GPS and the sensory

systems, in essence allowing the Raspberry to choose the needed path of action

depending on the situation.

For the purpose of the project, a flight mission was defined and divided into

several phases (Appendix D). Successful implementation of this mission can be

considered as the most important milestone of the project, and would allow exhibiting

all important aspects of our work in one flight. Furthermore, all phases of the mission,

each representing a separate sub-script, will be executed in a defined sequence, which

may be adjusted in real-time by the microcontroller based on the feedback from the

sensing system (camera, GPS etc.). For example, once the UAV is up in surveillance

mode, locates a target and starts the approach, it may lose the target from its field of

view, in which case the sequence is aborted, the UAV returns to initial position,

recaptures the target and modifies its approach (slower speed, different approach

angle etc.) (For more details see Appendix D). In order to complete this mission, it is

important to conduct a series of experiments, where all unique movements and actions

from each step are tested separately, to study the UAV’s response to specific

commands and scripts. At this point, separate scripts are executed by the Raspberry,

and each successive command is passed to the autopilot manually (this way of

operation is discussed later in the report):

Mission I: Simple Movement

Here, we tested the ability of the UAV to recognize and respond to simple

commands, passed to it by the onboard controller we developed:
• Start (a preset API command that arms and starts the UAV's rotors)
• Ascend vertically by 1 meter
• Move forward by 2 meters
• Land (a preset API command that gradually draws power from the rotors)

Results: The UAV was able to fly in a straight line, and the GPS successfully

provides precise and accurate location data. Here, the resolution of the height

measurement is up to 1mm. Vertical movement accuracy was independently

measured to be ±10cm, however, the autopilot manages to correct for the uncertainty

and accurately stabilize itself at the initial input height in several seconds.

The script used to implement Mission I can be examined in Appendix G.
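For illustration, a minimal version of such a script is sketched below, assuming the 3DR-provided Python API is DroneKit-Python; the connection string and altitude threshold are assumptions, and the forward-movement step is omitted for brevity (the full script is in Appendix G).

```python
import time
from dronekit import connect, VehicleMode

# Assumed serial link between the Raspberry PI and the Pixhawk.
vehicle = connect("/dev/ttyAMA0", baud=57600, wait_ready=True)

# Start: switch to GUIDED mode and arm the motors.
vehicle.mode = VehicleMode("GUIDED")
vehicle.armed = True
while not vehicle.armed:
    time.sleep(1)

# Ascend vertically by 1 meter and wait until the altitude is reached.
vehicle.simple_takeoff(1.0)
while vehicle.location.global_relative_frame.alt < 0.95:
    time.sleep(0.5)

# (Forward movement would be commanded here via a position/velocity target.)

# Land: a preset command that gradually draws power from the rotors.
vehicle.mode = VehicleMode("LAND")
vehicle.close()
```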


Mission II: Yaw rotation

Here, we examined the UAV’s ability to respond to “turn by θ degrees about

z-axis” commands (see Figure 30 for reference):
• Start
• Ascend vertically by 1 meter
• Turn by 90 degrees
• Land (a preset API command that gradually draws power from the rotors)

Results: The UAV was able to turn about its z-axis.

Figure 30: UAV degrees of freedom

Mission III: Square Movement along prescribed closed path

In this experiment we examine the UAV's automated movement along a closed path, i.e. a square, in the x-y plane (note: movement orientation is given in top-down view):
• Start
• Ascend vertically by 2 meters
• Move forward by 2 meters
• Move right by 2 meters
• Move backward by 2 meters
• Move left by 2 meters
• Land

Results: The UAV can be programmed to change the direction of its movement.

Here, it was found that the autopilot is set up in such a way that the UAV turns in the direction of its movement before executing it (i.e. it always moves headfirst). Furthermore, it was found that the UAV needs 1-2 seconds to stabilize itself in the air between movements, so as to be parallel to the ground. Skipping this pause introduces uncertainty in its positioning. Finally, the horizontal positioning accuracy of the UAV was measured to be at most ±1 m, imposed by the resolution limits of the GPS and the aforementioned stabilization issue. This, however, is a topic for further development and will not be covered in our project, as it would require us to intrude into the autopilot structure. In the end, our landing zone, as mentioned before, is

limited to 1.5x1.5m, which is considerably greater than the imposed uncertainties, and

thus does not pose a threat to the successful completion of this project.


Mission IV: Locating a landing zone and landing

In this mission we examine the performance of the on-board module in

automatically finding a flat, marked landing zone using the Pi Camera, and

landing on it.

Start

Ascend vertically by 10 meters

Take a picture of landscape below the UAV

Perform Image processing routines (refer to 3.6.1)

Locate the landing zone (LZ) and its center

Determine the distance to the center of the LZ (as an x/y offset pair)

Move to the center of the LZ (based on determined distances)

Land

The script used to implement Mission IV can be examined in Appendix G.

Results: The UAV on-board module successfully performed an integral step of the final mission: it was able to locate the flat landing zone at different locations and distances from the starting point, move to it, and land on it. An experimental landing zone of 50 x 50 cm was used, and the UAV repeatedly landed within a circular area of 1 m radius around the center of the LZ, which is good accuracy relative to the final LZ dimensions. Even though our final objective states that the UAV should find the landing zone from a height of 30 m, meeting this is now simply a matter of passing a different height to the UAV and scaling the image accordingly. Additionally, we need to take into account that previous work on the 3D Robotics platform involving programmed and scripted control noted that this specific UAV, and more importantly the Pixhawk autopilot, is heavily affected by wind, may experience random drifts during flight, and that these effects worsen sharply as the vertical distance increases [7].
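The conversion from a pixel offset of the LZ centre to a metric x/y offset is the step that must be re-scaled with altitude; a sketch of that scaling is given below (the image width and horizontal field of view are assumed values, not calibrated figures from our camera).

# Sketch: convert the pixel offset of the LZ centre to metres using the current altitude
import math

def pixel_offset_to_metres(dx_px, dy_px, altitude_m,
                           image_width_px=640, hfov_deg=53.5):
    # Ground width covered by the image at this altitude (pinhole-camera approximation)
    ground_width_m = 2.0 * altitude_m * math.tan(math.radians(hfov_deg) / 2.0)
    metres_per_pixel = ground_width_m / image_width_px
    return dx_px * metres_per_pixel, dy_px * metres_per_pixel

# Example: LZ centre found 80 px right and 120 px ahead of the image centre at 10 m altitude
dx_m, dy_m = pixel_offset_to_metres(80, 120, 10.0)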

Mission V: Landing on a slanted surface

In this mission, we test the full capabilities of the on-board module, employing the camera, the lasers, and the landing gear to land the UAV on a slanted surface.

Start, ascend, find the LZ and move to hover over it

Descend to a height of 2 meters above ground

Switch on the laser-line generators

Take a picture of the projection

Perform Image processing routines (refer to 3.6.1)

Based on the determined angle, adjust the landing gear accordingly

Land

Results: The UAV was able to successfully determine the slant angle and land on the inclined landing zone, thus concluding the series of experimental missions. However, it is important to note that the implemented landing gear was measured to adjust its angle with an accuracy of ±5 degrees. This uncertainty is compensated by the fact that once the landing gear touches the inclined surface, even if it is set at either extremity of the uncertainty range, it re-adjusts itself to the landing-zone inclination under the weight of the UAV.²
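The script that drives the laser generators and captures the projection is included in Appendix G; as an illustration only, the sketch below shows one way the same step could look on the Raspberry Pi, assuming the laser-line generators are switched through a GPIO pin (the pin number is hypothetical) and the picamera library is used.

# Sketch: switch on the laser-line generators and photograph the projection (pin number hypothetical)
import time
import RPi.GPIO as GPIO
from picamera import PiCamera

LASER_PIN = 18                        # hypothetical GPIO pin driving the laser modules

GPIO.setmode(GPIO.BCM)
GPIO.setup(LASER_PIN, GPIO.OUT)
GPIO.output(LASER_PIN, GPIO.HIGH)     # lasers on
time.sleep(1)                         # allow the projection to settle

camera = PiCamera()
camera.resolution = (640, 480)
camera.capture('projection.jpg')      # image handed to the slant-angle routine
camera.close()

GPIO.output(LASER_PIN, GPIO.LOW)      # lasers off
GPIO.cleanup()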

It is important to mention that the Raspberry Pi is a mini-PC in its own right, and all the information, including the executable scripts, is stored on a small SD memory card inserted into the board. There are therefore two ways to execute a given script on the Raspberry Pi. The first is to set the script to run as soon as the Raspberry Pi boots, which could be used for our final demonstration. The other is to run the script remotely, which we chose to use during development because it is safer and gives us more control over the Raspberry Pi. To this end, we configured the Raspberry Pi to act as an access point, providing a means to connect to it through the SSH protocol (Secure Shell, which allows remote login and other network services to operate securely over an unsecured network), so that we can send commands to the on-board computer from any smart device. This allows us to monitor the execution of the scripts and the flight parameters in real time, and to control the Raspberry Pi remotely.

We set the Raspberry Pi up as an access point using a D-Link wireless transmitter connected to it. Hence, we can connect our PC to the Raspberry Pi access point over the local network and control the Raspberry Pi using the Remote Desktop Connection available on Windows PCs. In other words, we connect from any Windows device to the Raspberry Pi as we would connect to any Wi-Fi hotspot (“abdullaPI” is the Raspberry Pi access point in Figure 31), and use the screen of our device to operate the Raspberry Pi (Figure 32). This connection also provides a real-time video feed, so we are always able to see what the camera sees, which will be very helpful in later development of more advanced applications. Furthermore, this system will eventually need to be upgraded with a Wi-Fi range-extension module placed on the ground to ensure a good connection between the operator and the UAV over long distances; at present, a stable connection can only be maintained at distances of up to about 10 meters.

Figure 31: Raspberry Access Point
Figure 32: Monitoring the UAV wirelessly

² Some of the above experiments were filmed and can be viewed at any time by following the hyperlinks found in Appendix F.


4. Conclusion

To summarize, we have met all of the project objectives and our system does not violate any of the set constraints. All integral system components were designed, built and tested separately in the lab environment; hence, no apparent reason has been identified for the assembled system to fail. The full system performance will be documented and demonstrated.

Furthermore, we have laid a solid foundation for further development of the system's functionality. Specifically, enabling the system to land on a moving slanted surface is now largely a matter of adjusting several properties in the mission script. Additionally, the accuracy of the system can be improved by re-programming the functions of the autopilot itself, where the issue of random drifting during script execution can be addressed. Finally, the image processing functionality may be combined with the integrated GPS to improve the accuracy of the determined distances and dimensions. All in all, the UAV platform now offers wide-ranging possibilities for future work, because we have thoroughly learned how to make it operate autonomously based on an uploaded mission script.


5. References

[1] Wagner, William. "Lightning Bugs and other Reconnaissance Drones; The can-do

story of Ryan's unmanned spy planes". Armed Forces Journal International, in

cooperation with Aero Publishers, Inc., p.208, 1982.

[2] J. Brodkin, “Raspberry Pi 2 arrives with quad-core CPU, 1GB RAM, same $35 price”, Ars Technica, 2015. [Online]. Available: http://arstechnica.com/information-technology/2015/02/raspberry-pi-2-arrives-with-quad-core-cpu-1gb-ram-same-35-price/. [Accessed: 01-Oct-2015].

[3] J. Dougherty, “Laser-Guided Autonomous Landing of a Quadrotor UAV on an

Inclined Surface”, B.S., The George Washington University, May 2013.

[4] D. Mellinger, N. Michael, and V. Kumar, “Trajectory generation and control for

precise aggressive maneuvers with quadrotors", International Journal Of Robotics

Research, vol. 31, no. 5, pp. 664-674, 2012.

[5] V. Ghadiok, J. Goldin, and W. Ren, “On the design and development of attitude

stabilization, vision-based navigation, and aerial gripping for a low-cost quadrotor",

Autonomous Robots, vol. 33, pp. 41-68, 2012.

[6] J. Dougherty, D. Lee and T. Lee, “Laser-based Guidance of a Quadrotor UAV for

Precise Landing on an Inclined Surface”, The George Washington University, 2014.

[7] D. Apperloo, T. Ziebarth and J. Chow, “Autonomous Guide Drone”, University of Victoria, British Columbia, Canada, p. 31, 2015.


Appendices

Appendix A

Hardware specifications

IRIS+

Performance

Maximum payload: 400 g (0.8 lb) payload capacity
Flight time: 16-22 minutes depending on payload
Height: 100 mm
Motor-to-motor dimension: 550 mm
Weight with battery: 1282 g

Electronics

Autopilot hardware: Next-generation 32-bit Pixhawk with Cortex M4 processor
GPS: uBlox GPS with integrated magnetometer
Controller: Any PPM-compatible RC unit, preconfigured FlySky FS-TH9x RC
Telemetry: 3DR Radio 915 MHz or 433 MHz

Components

Propellers: (x2) 9.5 x 4.5 tiger motor multi-rotor self-tightening counterclockwise

rotation, (x2) 9.5 x 4.5 tiger motor multi-rotor self-tightening clockwise rotation

Motors: 950 KV

GPS: uBlox GPS with integrated magnetometer

Mounts: Integrated GoPro camera mount with vibration dampener

Optional: Tarot brushless gimbal with custom IRIS+ mounting kit

Battery: 5100 mAh 3S

Raspberry PI 2 Model B

• SoC: Broadcom BCM2836 (CPU, GPU,

DSP, SDRAM)

• CPU: 900 MHz quad-core ARM Cortex

A7 (ARMv7 instruction set)

• GPU: Broadcom VideoCore IV @ 250

MHz

• More GPU info: OpenGL ES 2.0 (24

GFLOPS); 1080p30 MPEG-2 and VC-1

decoder (with license); 1080p30

h.264/MPEG-4 AVC high-profile decoder

and encoder

• Memory: 1 GB (shared with GPU)

• USB ports: 4

• Video input: 15-pin MIPI camera

interface (CSI) connector

• Video outputs: HDMI, composite video

(PAL and NTSC) via 3.5 mm jack

• Audio input: I²S

• Audio outputs: Analog via 3.5 mm

jack; digital via HDMI and I²S

• Storage: MicroSD

• Network: 10/100Mbps Ethernet

• Peripherals: 17 GPIO plus specific

functions, and HAT ID bus

• Power rating: 800 mA (4.0 W)

• Power source: 5 V via MicroUSB or

GPIO header

• Size: 85.60mm × 56.5mm

• Weight: 45g (1.6 oz)


3DR Pixhawk

Microprocessor

• 32-bit STM32F427 Cortex M4 core with FPU
• 168 MHz / 256 KB RAM / 2 MB Flash
• 32-bit STM32F103 failsafe co-processor

Sensors

• ST Micro L3GD20 3-axis 16-bit gyroscope
• ST Micro LSM303D 3-axis 14-bit accelerometer / magnetometer
• Invensense MPU-6000 3-axis accelerometer / gyroscope
• MEAS MS5611 barometer

Interfaces

• 5x UART (serial ports), one high-power capable, 2x with HW flow control

• 2x CAN

• Spektrum DSM / DSM2 / DSM-X® Satellite compatible input up to DX8 (DX9 and

above not supported)

• Futaba S.BUS® compatible input and output

• PPM sum signal

• RSSI (PWM or voltage) input

• I2C®

• SPI

• 3.3 and 6.6V ADC inputs

• External microUSB port

Power System

• Ideal diode controller with automatic failover
• Servo rail high-power (7 V) and high-current ready
• All peripheral outputs over-current protected, all inputs ESD protected

Weight and dimensions

• Weight: 38 g (1.31 oz)
• Width: 50 mm (1.96")
• Thickness: 15.5 mm (0.613")
• Length: 81.5 mm (3.21")

3DR uBlox GPS

• u-blox NEO-7 module

• 5 Hz update rate

• 25 x 25 x 4 mm ceramic patch antenna

• LNA and SAW filter

• Rechargeable 3V lithium backup battery

• Low noise 3.3V regulator

• I2C EEPROM for configuration

storage

• Power and fix indicator LEDs

• Protective case

• APM compatible 6-pin DF13

connector

• Exposed RX, TX, 5V and GND pad

• 38 x 38 x 8.5 mm total size, 16.8

grams.


Appendix B

Pixhawk autopilot connectivity diagram


Appendix C

Matlab image processing code

% Written by: Fadi Al-Zir & Abdulla Jaber
% Electrical Engineering Department
% Petroleum Institute, Abu Dhabi, U.A.E.
% Simple simulation to confirm the theory of calculating the slant angle
% from the top-down shape changes of the surface

%% Create a possible projection of laser lines in a 100x100 blank image
blankimage = ones(100,100);
for n = 25:75
    blankimage(35,n) = 0;   % horizontal top line
    blankimage(75,n) = 0;   % horizontal bottom line
end
for n = 35:75
    blankimage(n,25) = 0;   % vertical left line
    blankimage(n,75) = 0;   % vertical right line
end
imshow(blankimage)          % show a square laser line projection for testing

%% Note: the image created above is built to suit the report example, but the
% line coordinates can be changed and the presented algorithm will still be
% applicable (the algorithm only recognizes the 4 possible incline
% orientations and will not work with diagonal inclines)

%% Determination of line coordinates
horlines  = find(blankimage(:,51) == 0);    % row coordinates of the horizontal lines
vertlines = find(blankimage(51,:) == 0);    % column coordinates of the vertical lines

Y1 = horlines(1,:);  Y2 = horlines(2,:);    % locations of the horizontal lines
X1 = vertlines(:,1); X2 = vertlines(:,2);   % locations of the vertical lines

VertDistance = abs(horlines(1,:) - horlines(2,:));    % distance between horizontal lines
HorDistance  = abs(vertlines(:,1) - vertlines(:,2));  % distance between vertical lines

basedist = 50;   % pre-determined reference distance, equal to the distance between the two lasers

% Conditional block which determines the orientation of the incline, based on
% the fact that when the surface is inclined one of the defining lines
% preserves its position
if VertDistance ~= 50
    InclineAngle = acos(VertDistance/basedist);
    if horlines(2,:) ~= 75
        disp('The landing zone is inclined in negative Y direction by')
    else
        disp('The landing zone is inclined in positive Y direction by')
    end
elseif HorDistance ~= 50
    InclineAngle = acos(HorDistance/basedist);
    if vertlines(:,2) ~= 75
        disp('The landing zone is inclined in negative X direction by')
    else
        disp('The landing zone is inclined in positive X direction by')
    end
else
    InclineAngle = 0;   % added guard: neither separation changed, so the surface is flat
    disp('The landing zone is flat; it is inclined by')
end

Angle = radtodeg(InclineAngle)   % convert the angle to degrees and display it


Appendix D

Mission Plan


Appendix E

Stepper motor specifications

Unipolar stepper with 0.1" spaced 5-pin cable connector

513 steps per revolution

1/16.032 geared down reduction

5V DC suggested operation

Weight: 37 g.

Dimensions: 28mm diameter, 20mm tall not including 9mm shaft with 5mm diameter

9" / 23 cm long cable

Holding Torque: 150 gram-force*cm, 15 N*mm/ 2 oz-force*in

Shaft: 5mm diameter flattened

Approx 42 ohm DC impedance per winding

Motor driver shield

4 H-Bridges: TB6612 chipset provides 1.2A per bridge (3A peak) with thermal

shutdown protection, internal kickback protection diodes. Can run motors on

4.5VDC to 13.5VDC.

Up to 4 bi-directional DC motors with individual 8-bit speed selection (so, about

0.5% resolution)

Up to 2 stepper motors (unipolar or bipolar) with single coil, double coil,

interleaved or micro-stepping.

Big terminal block connectors to easily hook up wires (18-26AWG) and power

Polarity protected 2-pin terminal block and jumper to connect external 5-12VDC

power

Appendix F

Mission I

https://youtu.be/YITlsMUt7TM

Mission II and III

https://youtu.be/WjX8Izljl4M


Appendix G

Mission I – Python script


Script to switch on the laser generators and take a picture of the projection using the Raspberry Pi camera


Mission IV – Python script