Sensor Fusion for Miniature Aerial Vehicles



Peter E. Madsen and Hjalte B. L. Nygaard

    University of Southern Denmark

    January 21, 2012

    Abstract

In the field of autonomous robotics, there is a desire to determine precise attitude and position utilizing cheap, and thus often noisy, sensors. This paper surveys a range of existing methods for Micro Air Vehicle sensor fusion aimed at this problem. Furthermore, a partial implementation of a previously proposed three-stage extended Kalman filter is conducted on a newly developed autopilot board, based on MEMS technology, GPS and other sensors. Promising results have been obtained for pitch and roll estimation by fusing gyroscope and accelerometer measurements. It is concluded that by investing time into sensor fusion, it is indeed possible to obtain high quality results, even when using cheap, noisy sensors.


Contents

1 Introduction
2 Theory & methods
  2.1 Filter types
  2.2 Bayes Filters
  2.3 Kalman Filter
  2.4 Extended Kalman Filter
3 Implementation
  3.1 Hardware
  3.2 Three-stage Extended Kalman Filter
    3.2.1 Stage 1 - Pitch & Roll
    3.2.2 Stage 2 - Heading
    3.2.3 Stage 3 - Position & Wind
4 Visualization
  4.1 Octave
  4.2 3D
5 Validation
6 Conclusion
7 References
A Notations
B Pitch-roll kinematics
C Measurements


    1 Introduction

This paper is the result of a 5 ECTS self study conducted by two master's students at the University of Southern Denmark, Odense. The paper aims to survey methods for fusing noisy sensor data, and to implement a method deemed suitable for the task of estimating the attitude and position of a Micro Air Vehicle (MAV). The task of optimizing robot attitude and position estimation is of increasing interest to a growing community of roboticists. Turnkey solutions, in the form of Inertial Measurement Unit (IMU) sensors, with or without global positioning, are available commercially [3, 4]. However, these sensor packages are relatively expensive. In this paper, it is shown that reliable results can be obtained using cheaper individual sensors, by the use of sensor fusion.

Throughout this paper, the reader can refer to Figure 1 for a summary of the measured and estimated variables. $a$ denotes accelerations measured by the on-board accelerometer, $m$ denotes magnetic field strengths measured by the on-board magnetometer, and $\omega$ denotes angular velocities measured by the on-board rate gyroscope. For a more complete list of mathematical notation, Appendix A contains a full list of notations and variables used throughout this paper. It should also be noted that the airframe is oriented nose-starboard-down, and that a right-handed coordinate system is used.

Figure 1: Illustration of symbols and axes.

The paper is split into four main parts: Section 2 covers the theory of the surveyed sensor fusion methods. Section 3 explains the method that has been implemented and describes the mathematics in detail, such that reimplementation can be conducted. Section 4 gives a brief overview of the tools that have been used to visualize experimental results. Finally, Appendices A through C contain material like derivations and graphs, considered too detailed to make it into the paper, but useful to the curious reader.

    2 Theory & methods

A fundamental challenge of mobile robotics is the effect of inaccurate measurements on position and attitude estimates. The robot's knowledge of its pose is vital for interacting with and maneuvering in its environment. Two simple approaches to localization and pose determination are feasible: the robot can either 1) use internal sensors to keep track of its movements and thus its pose, or 2) use global landmarks to periodically determine its pose. However, problems arise when using either of the two methods, as


the former (referred to as dead reckoning) will integrate even the slightest disturbance into significant, irreversible errors. The latter can work excellently in known, engineered environments containing landmarks of adequate quantity and quality (e.g. RFID tags, coloured markings, ultrasonic beacons, etc.). If, however, we want the robot to know its position in a less adapted environment, the position updates tend to have low update rates and imprecise measurements.

No matter how expensive and precise the sensing equipment built into the robot, these challenges can be reduced, but not eliminated. However, by combining both approaches, the difficulties can be overcome. This chapter presents a number of methods for such sensor fusion.

    2.1 Filter types

    Four types of filtering have been found suitable for MAV sensor fusion:

1. The Kalman filter, which is a recursive estimator. It uses parametrized Gaussian distributions to model the uncertainties in the system and a state space model to project the estimate forward in time. The filter uses a least-squares fit to optimize the state estimate.

2. The Extended Kalman filter, which uses a non-linear function to project the estimate forward, rather than the linear state space model used in the Kalman filter. The linear models, needed to project the covariance parameters, are found by linearizing the non-linear functions around the state estimate.

3. Particle filters, which use a cloud of particles in the state space to represent the probability distribution, rather than the parametric Gaussians used by the Kalman filter. This effectively makes the filter multi-modal, as it is not limited by the uni-modal nature of the Gaussian representation.

4. Complementary filters, the only non-Bayes filter discussed in this text. They work in the frequency domain rather than the time domain and do not interpret the probability of states or outputs. A complementary filter uses two inputs describing the state, assuming that one responds fast and is accurate but biased, while the other is bias free but possibly slower in response and noisy. By using a pair of complementary filters, one low-pass and the other high-pass, the two signals can be combined with maximum yield of both. See Figure 2 for a basic block diagram and the sketch below for a minimal implementation.

Figure 2: Basic complementary filter.

Particle filters have proven very strong in mobile robot localization (known in the field under the name Monte Carlo Localization [17, 18]), where multiple hypotheses are needed. This is especially true for indoor simultaneous localization and mapping, as


no good source of absolute position estimates is available. These filters are, however, fairly complex and computationally heavy. As an absolute position estimate can be acquired from GPS in the outdoor habitat of MAVs, the complexity of particle filters is not needed.

Both Kalman filters [6, 8, 13, 14] and complementary filters [10, 12] have been used for state estimation of MAVs. The complementary filter is computationally lighter than the Kalman filter, but seems to respond more slowly. When the gyroscope is at rest after a rapid change, it does not contribute to the state estimate. However, the accelerometer data still, to some extent, reflects the old position, due to the phase delay introduced by the low-pass filter. The Kalman filter, on the other hand, does not operate on old data and hence introduces no phase delay. In addition, the Kalman filter is capable of rejecting statistical outliers in the data stream.

The Extended Kalman filter (EKF) has been chosen for the implementation described later. The following sections give an overview of its theoretical background and formulation.

    2.2 Bayes Filters

The Kalman filter is a Bayes filter, which means that it works under the assumption that each observation is independent of previous observations. I.e., the true state $x_k$ is assumed to be a hidden Markov process, and the observations $z_k$ are derived from this process. In other words, the true state can only be observed indirectly. Because of the Markov assumption, the probability of the current state, $p(x_k)$, can be derived from just the previous one, $p(x_{k-1})$, and is independent of all prior states:

$$p(x_k \mid x_{k-1}, x_{k-2}, \ldots, x_0) = p(x_k \mid x_{k-1}) \tag{2.1}$$

Similarly, the probability of the current observation, $p(z_k)$, depends only on the current state, $x_k$, and is independent of all prior states:

$$p(z_k \mid x_k, x_{k-1}, \ldots, x_0) = p(z_k \mid x_k) \tag{2.2}$$

This allows Bayes filters to work recursively, as there is no need to keep track of the history of state estimates and observations, along with their probability distributions.

The filters are implemented in a two-step recursion:

First, the previous state probability distribution, $p(x_{k-1})$, is projected into the next time step with the system input, $u_k$. While this is done, uncertainty is added to the estimate to reflect the system noise. This new estimate is the a priori estimate, $p(x_k^-)$.

The second step includes an observation, $z_k$, which, based on Bayes' theorem (see Eq. (2.3)), can provide a probability distribution of the state if $p(x_k)$ and $p(z_k)$ are known. This sensor-based state estimate can then be combined with $x_k^-$ to give a better estimate, $\hat{x}_k$, of the state:

$$p(x_k \mid z_k) = \frac{p(z_k \mid x_k)\, p(x_k)}{p(z_k)} \tag{2.3}$$


We do not wish to go deeper into the mathematical background of Bayes filters, but refer curious readers to [11], [19], [7] and [5].

To get a conceptual idea of Bayes filters, imagine a simple robot, capable of moving in a straight line, as illustrated in Figure 3. The robot is capable of measuring its position along the track by measuring the distance to a wall at the end of the track. However, these measurements are noisy and occur at a slow rate. As the robot moves, it utilizes dead reckoning to keep track of its position, but due to integration errors and external disturbances, this estimate soon grows unreliable. By accepting the fact that both measurements include uncertainties, and modeling these, the two data sets can be combined in a beneficial way.

As the movements of the robot project the state estimate forward, the uncertainty of the estimate is increased. This estimate is called the a priori, as it is an estimate prior to the incorporation of an observation. When the robot receives an external measurement of its location (an observation), the certainty of this measurement is assessed and combined with the a priori to form the next location estimate. As the two are combined, the uncertainty is reduced, bringing forward a better estimate of the location than either of the two sensors can provide by themselves.

Figure 3: Conceptual model of the Kalman filter.

    2.3 Kalman Filter

Figure 4: Linear state space model.

Kalman filters, first proposed by R. E. Kálmán in 1960 [9], have proven very efficient in avionics when used in the extended version. The first use of the filter was in the Apollo


space program, where it was used for trajectory estimation on the lunar missions [15]. The Kalman filter uses the Gaussian distribution function for parametric probability representation and assumes zero-mean Gaussian noise models for both the process and observation noise. The filter is based around a linear state-space model (see Figure 4) of the system, which is used to project the old state estimate into the next one and to estimate the observation:

$$x_k = A\, x_{k-1} + B\, u_k + w_k \tag{2.4}$$
$$z_k = H\, x_k + v_k \tag{2.5}$$

where $w$ is the process noise and $v$ is the observation noise. The matrix $A$ projects the state estimate ahead, and $B$ relates the system input to the state space.

In the time update, the a prioris of the state estimate, $\hat{x}_k^-$, and the estimate covariance, $P_k^-$, are calculated. This is done by projecting the previous values with $A$ and adding the system input $B\,u_k$ and the process noise covariance $Q$, respectively:

$$\hat{x}_k^- = A\, \hat{x}_{k-1} + B\, u_k \tag{2.6}$$
$$P_k^- = A\, P_{k-1}\, A^T + Q \tag{2.7}$$

In the measurement update, $\hat{x}_k$ and $P_k$ are corrected, based on an observation $z_k$:

$$K_k = P_k^-\, H^T \left( H\, P_k^-\, H^T + R \right)^{-1} \tag{2.8}$$
$$\hat{x}_k = \hat{x}_k^- + K_k \left( z_k - H\, \hat{x}_k^- \right) \tag{2.9}$$
$$P_k = (I - K_k\, H)\, P_k^- \tag{2.10}$$

where $R$ is the measurement noise covariance and $H$ relates state to measurement. The matrix $K_k$ is the Kalman gain, used to mix the a priori estimate with the observation-based state estimate.
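For the one-dimensional robot of Figure 3, the recursion (2.6)-(2.10) collapses to scalars with A = B = H = 1. The sketch below (the noise values are assumed, for illustration only) shows how little code the filter itself requires:

```octave
% Scalar Kalman filter sketch for the 1-D robot of Figure 3 (A = B = H = 1).
% Q and R are assumed noise covariances; x and P hold the estimate and its
% variance between calls.
function [x, P] = kf_time_update(x, P, u, Q)
  x = x + u;             % project the state ahead with the odometry input (2.6)
  P = P + Q;             % a priori covariance grows by the process noise (2.7)
end

function [x, P] = kf_measurement_update(x, P, z, R)
  K = P / (P + R);       % Kalman gain (2.8)
  x = x + K * (z - x);   % blend the a priori estimate with the observation (2.9)
  P = (1 - K) * P;       % reduced a posteriori covariance (2.10)
end
```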

    2.4 Extended Kalman Filter

As the state space model is linear, the Kalman filter is limited to linear systems. In real-world applications, only very few such systems exist. The Kalman filter can, however, be extended [5, 7, 19] to work for non-linear systems. Rather than using the linear state space model, the two non-linear functions $f(x, u, w)$ and $h(x, v)$ are used to project the state and to estimate the sensor output, so that:

$$x_k = f(x_{k-1}, u_k, w_k) \tag{2.11}$$
$$z_k = h(x_k, v_k) \tag{2.12}$$

The Kalman filter still needs the linear model matrices $A$ and $H$, though. These can be derived by linearization of $f(x, u, w)$ and $h(x, v)$, as their partial derivatives with respect to the state vector:

$$A_{[i,j]}(x, u) = \frac{\partial f_{[i]}}{\partial x_{[j]}}(x, u, 0) \tag{2.13}$$
$$H_{[i,j]}(x) = \frac{\partial h_{[i]}}{\partial x_{[j]}}(x, 0) \tag{2.14}$$


Besides these, two new Jacobians are needed to project the system noise, $w$, and the sensor noise, $v$, into the state space and the measurement space, respectively. These can similarly be derived by linearization of $f(x, u, w)$ and $h(x, v)$, as the partial derivatives with respect to the noise vectors:

$$W_{[i,j]}(x) = \frac{\partial f_{[i]}}{\partial w_{[j]}}(x, u, w) \tag{2.15}$$
$$V_{[i,j]}(x) = \frac{\partial h_{[i]}}{\partial v_{[j]}}(x, v) \tag{2.16}$$

In the time update of the EKF, the projection of the estimated state is given by $f(x, u, w)$. The estimate covariance is projected much like in the original filter; $W_k$ terms are added to project the process noise covariance, $Q$, into the state space:

$$\hat{x}_k^- = \hat{x}_{k-1} + f(\hat{x}_{k-1}, u_k, 0) \tag{2.17}$$
$$P_k^- = A_k\, P_{k-1}\, A_k^T + W_k\, Q\, W_k^T \tag{2.18}$$

Similarly, the measurement update is upgraded with a projection of $R$ using $V_k$, and the sensor output is estimated by $h(x, v)$:

$$K_k = P_k^-\, H_k^T \left( H_k\, P_k^-\, H_k^T + V_k\, R\, V_k^T \right)^{-1} \tag{2.19}$$
$$\hat{x}_k = \hat{x}_k^- + K_k \left( z_k - h(\hat{x}_k^-, 0) \right) \tag{2.20}$$
$$P_k = (I - K_k\, H_k)\, P_k^- \tag{2.21}$$
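The recursion (2.17)-(2.21) maps directly to code. The following sketch is our own generic formulation; the function handles f, h and the four Jacobian functions are placeholders, to be supplied per filter stage as in Section 3.2:

```octave
% Generic EKF iteration sketch. f and h are the non-linear models; A_fun,
% W_fun, H_fun and V_fun return the Jacobians of Eqs. (2.13)-(2.16).
% f is assumed to return the state increment over one sample period.
function [x, P] = ekf_step(x, P, u, z, f, h, A_fun, W_fun, H_fun, V_fun, Q, R)
  % Time update, Eqs. (2.17)-(2.18)
  A = A_fun(x, u);
  W = W_fun(x);
  x = x + f(x, u);                          % project the state ahead
  P = A * P * A' + W * Q * W';              % project the covariance ahead

  % Measurement update, Eqs. (2.19)-(2.21)
  H = H_fun(x);
  V = V_fun(x);
  K = P * H' / (H * P * H' + V * R * V');   % Kalman gain
  x = x + K * (z - h(x));                   % correct with the observation
  P = (eye(length(x)) - K * H) * P;         % update the covariance
end
```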

    This concludes the theory covered in this paper.

    3 Implementation

This section discusses our implementation of a special three-stage EKF, tailored for MAVs. An overview of the hardware developed to realize the sensor fusion is provided. This hardware includes both the sensors used in this project and other sensors, not yet utilized, but deemed useful for making a completely autonomous MAV.

    3.1 Hardware

For this and another project, an autopilot board, see Figure 5, has been developed. The board is centered around a 600 MHz OMAP3-based Gumstix with 512 MB RAM. The purpose of the board is to sense, process and control everything needed to keep a MAV in the air, independent of any ground links. To do so, it has been equipped with a range of sensors, listed in Table 1. Some of the sensors work standalone, while others have to be fused to provide useful measurements. All of the sensors, except the GPS, are connected to a 400 kHz I2C bus of the Gumstix. The GPS has a serial interface, and is thus connected to a UART of the Gumstix. An overview of the sensors fused in Section 3.2 can be seen in Figure 6.

Linux drivers for the sensors had to be developed, as it was decided to run the Gumstix with a stripped-down Ubuntu distribution. This is, however, not the focus of this project, and thus interfacing details are not discussed in this paper.


    Figure 5: The developed Gumstix autopilot expansion board

Sensor         Name       Notes
Gyroscope      ITG3200    Triple axis. ±2000 °/s @ 0.0696 (°/s)/LSB
Accelerometer  ADXL345    Triple axis. ±16 g @ 3.9 mg/LSB
Magnetometer   HMC5883L   Triple axis. ±8 G @ 5 mG/LSB
GPS            D2523T     50-channel helical receiver, 4 Hz, u-blox chipset
Airspeed       MPXV7002   Differential pressure sensor w. pitot tube. ±2 kPa
Altitude       BMP085     Barometric pressure. 300-1100 hPa @ 0.03 hPa/LSB
Proximity      LV-EZ4     Ultrasonic ground distance. 0-7 m @ 2.5 cm/LSB

Table 1: Sensors available on the autopilot board

    3.2 Three-stage Extended Kalman Filter

In 2006, Andrew M. Eldredge wrote his Master's thesis, Improved state estimation for miniature air vehicles [8]. Here, Eldredge proposed a new use of the EKF: a three-stage filter, designed especially for Micro Air Vehicles (MAVs). This approach, as opposed to other implementations [14], separates the complete position and orientation filter into three cascaded estimation steps: 1) Pitch & Roll, 2) Heading, and 3) Position & Wind, as seen in Figure 7.


    Figure 6: Autopilot board block diagram

Figure 7: Three-stage state estimation, illustrating the three stages with their inputs and outputs. The diagram is a replication of [8, Fig. 2.9, p. 21].

By separating the filter into three steps, the implementation is greatly simplified and the conceptual understanding is easier to grasp. A complete state-space model contains seven parameters: roll, pitch, yaw, north/east position and north/east wind speed, yielding rather large matrices if all are handled at once. Furthermore, each step can be updated individually, which is very convenient, as our sensors have different update rates (the gyroscope has an internal sample rate of up to 8 kHz, whereas the GPS updates at a maximum of 4 Hz).

We have successfully implemented Stage 1 of the proposed filter. The following three sections explain our implementation and discuss how the last two stages will be implemented in the future.

    3.2.1 Stage 1 - Pitch & Roll

In this first of three steps, the goal is to determine the state vector, $x$, composed of pitch, $\theta$, and roll, $\phi$. This is done utilizing the gyroscope's angular rates as the system input, $u$, and the accelerometer's linear accelerations as the observations, $z$:


$$x = \begin{bmatrix} \phi & \theta \end{bmatrix}^T \qquad \text{State vector}$$
$$u = \begin{bmatrix} \omega_x & \omega_y & \omega_z \end{bmatrix}^T \qquad \text{System input vector}$$
$$z = \begin{bmatrix} a_x & a_y & a_z \end{bmatrix}^T \qquad \text{Observation vector}$$

Figure 8: Stage 1 components

Note that, compared to the original implementation by Eldredge, the total airspeed $v_{air}$ has been neglected for now. With $x$, $u$ and $z$ in place, we are ready to define the first Time-update, or Predict, stage.

Time-update of Pitch & Roll

Here we wish to propagate the gyro angular rates to estimate the change in roll and pitch. This is done using the estimated current state and the kinematics proposed in [16, p. 27], derived for the reader's convenience in Appendix B. Thus:

$$f(x, u) = \begin{bmatrix} \dot\phi \\ \dot\theta \end{bmatrix} = \begin{bmatrix} \omega_x + t_\theta\, (s_\phi\, \omega_y + c_\phi\, \omega_z) \\ c_\phi\, \omega_y - s_\phi\, \omega_z \end{bmatrix} \tag{3.1}$$

Linearizing by partial derivatives with respect to $x$ at the current estimate yields the Jacobian matrix $A(x, u)$; please refer to Equations (B.7) through (B.10) for a complete derivation:

$$A_{[i,j]}(x, u) = \frac{\partial f_{[i]}}{\partial x_{[j]}}(x, u) = \begin{bmatrix} t_\theta\, (\omega_y\, c_\phi - \omega_z\, s_\phi) & \frac{\omega_y\, s_\phi + \omega_z\, c_\phi}{c_\theta^2} \\ -\omega_y\, s_\phi - \omega_z\, c_\phi & 0 \end{bmatrix} \tag{3.2}$$

That concludes the functions needed to compute the Time-update of an EKF, as stated in Equations (2.17) and (2.18). The next goal is to calculate the Measurement-update, or Correction.

Measurement-update of Pitch & Roll

In order to complete the Measurement-update, we are required to calculate the Kalman gain. The gain depends on the accelerometer output prediction model, $h(x)$, and its Jacobian matrix, $H(x)$. The sensor prediction model simply maps the gravity vector into the airframe, as derived in Equation (B.16):

$$h(x) = \begin{bmatrix} \hat{a}_x \\ \hat{a}_y \\ \hat{a}_z \end{bmatrix} = \begin{bmatrix} -s_\theta \\ s_\phi\, c_\theta \\ c_\phi\, c_\theta \end{bmatrix} \tag{3.3}$$

The Jacobian matrix $H(x)$ is composed of the partial derivatives of $h(x)$ with respect to $x$; please refer to Equations (B.17) through (B.19) for a complete derivation:

$$H_{[i,j]}(x) = \frac{\partial h_{[i]}}{\partial x_{[j]}}(x) = \begin{bmatrix} 0 & -c_\theta \\ c_\phi\, c_\theta & -s_\phi\, s_\theta \\ -s_\phi\, c_\theta & -c_\phi\, s_\theta \end{bmatrix} \tag{3.4}$$

Finally, we only need to consider noise before we can proceed to the next stage.

Noise considerations of Pitch & Roll

$V$ relates the measurement (accelerometer) noise, $v$, to the sensor estimate, $\hat{z}$, as $V$ is the partial derivative of $h(x, v)$ (Eq. (3.3)) with respect to $v$. The complete derivation


of this Jacobian matrix can be found in Equations (B.20) through (B.22), where it is shown that $V$ is given by the identity matrix:

$$V(x) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{3.5}$$

The derivation simply shows that the noise on the accelerometers maps directly onto what we are measuring.

$R$ denotes the measurement noise covariance, and is hard to model; thus, it is usually found by trial-and-error tuning.

$W$ relates the system noise, $w$, to the state estimate, $\hat{x}$, as $W$ is the matrix of partial derivatives of $f(x, u, w)$ (Eq. (3.1)) with respect to $w$. The complete derivation of this Jacobian matrix can be found in Equations (B.11) through (B.14), where it is shown that $W$ is given by:

$$W(x) = \begin{bmatrix} 1 & t_\theta\, s_\phi & t_\theta\, c_\phi \\ 0 & c_\phi & -s_\phi \end{bmatrix} \tag{3.6}$$

This shows that the gyroscope noise needs to be mapped into the system model.

$Q$ denotes the process noise covariance and is, like $R$, usually found by trial-and-error tuning.

This concludes the measures necessary to compute an estimate of the roll and pitch angles of the airframe. Simulation results and 3D visualization of the implementation can be viewed in Sections 4.1 and 4.2, and a sketch of a complete filter iteration is given below. The following two sections then cover how heading, position and wind can be estimated in future work.
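Collecting Equations (3.1) through (3.6), one Stage 1 iteration can be sketched as follows. This is a sketch, not our exact implementation: the sample time scaling, the noise covariances and the assumption that the accelerometer output is normalized to 1 g are all illustrative choices.

```octave
% Stage 1 pitch/roll EKF sketch (phi = roll, theta = pitch).
% w_gyro: gyro rates [wx; wy; wz] in rad/s; a_acc: accelerometer output,
% normalized to 1 g (assumption); dt: sample period [s].
function [x, P] = stage1_step(x, P, w_gyro, a_acc, dt)
  Q = 1e-3 * eye(3);  R = 1e-1 * eye(3);  % assumed noise covariances (tuned)
  sp = sin(x(1));  cp = cos(x(1));  tt = tan(x(2));

  % Time update: kinematics (3.1) with Jacobians (3.2) and (3.6)
  f = [w_gyro(1) + tt * (sp * w_gyro(2) + cp * w_gyro(3));
       cp * w_gyro(2) - sp * w_gyro(3)];
  A = [tt * (w_gyro(2) * cp - w_gyro(3) * sp), ...
       (w_gyro(2) * sp + w_gyro(3) * cp) / cos(x(2))^2;
       -w_gyro(2) * sp - w_gyro(3) * cp, 0];
  W = [1, tt * sp, tt * cp;
       0, cp,      -sp];
  x = x + f * dt;                % integrate the angular rates over dt
  P = A * P * A' + W * Q * W';

  % Measurement update: gravity model (3.3), Jacobian (3.4), V = I (3.5)
  sp = sin(x(1));  cp = cos(x(1));  st = sin(x(2));  ct = cos(x(2));
  h = [-st; sp * ct; cp * ct];
  H = [0,        -ct;
       cp * ct,  -sp * st;
       -sp * ct, -cp * st];
  K = P * H' / (H * P * H' + R);   % V = I drops out of (2.19)
  x = x + K * (a_acc - h);
  P = (eye(2) - K * H) * P;
end
```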

    3.2.2 Stage 2 - Heading

In Stage 2, the goal is to determine the heading, $\psi$, from the pitch and roll angles found in Stage 1 and the gyroscope measurements, combined with the magnetometer measurements; thus:

$$x = \psi \qquad \text{State}$$
$$u = \begin{bmatrix} \phi & \theta & \omega_y & \omega_z \end{bmatrix}^T \qquad \text{System input vector}$$
$$z = \begin{bmatrix} m_x & m_y & m_z \end{bmatrix}^T \qquad \text{Observation vector}$$

Figure 9: Stage 2 components

The kinematics described in [8, Sec. 2.7] map the current pitch, roll and angular rates to $\dot\psi$. Furthermore, a sensor model [8, Eq. 2.6] maps the airframe magnetometer measurements into the Earth's magnetic field. With this data, and a map of the Earth's magnetic field, the procedure for obtaining the heading $\psi$ through Kalman filtering is similar to that described for Stage 1 in Section 3.2.1. A noteworthy property of this method for obtaining the heading is that, as opposed to similar implementations, it does not rely on GPS to find $\psi$. This is desirable, as GPS updates are generally relatively infrequent, and loss of satellite connection is not uncommon.


    3.2.3 Stage 3 - Position & Wind

Once the complete attitude $\begin{bmatrix} \phi & \theta & \psi \end{bmatrix}^T$ has been determined in Stages 1 and 2, it is time to estimate the absolute position, $\begin{bmatrix} p_n & p_e \end{bmatrix}^T$, and the wind, $\begin{bmatrix} w_n & w_e \end{bmatrix}^T$. This is accomplished using the GPS as our sensor, together with the previously estimated pitch, heading and airspeed:

$$x = \begin{bmatrix} p_n & p_e & w_n & w_e \end{bmatrix}^T \qquad \text{State vector}$$
$$u = \begin{bmatrix} \psi & \theta & v_{air} \end{bmatrix}^T \qquad \text{System input vector}$$
$$z = \begin{bmatrix} gps_n & gps_e \end{bmatrix}^T \qquad \text{Observation vector}$$

Figure 10: Stage 3 components

The kinematic procedure suggested in [8, Sec. 2.8] deals with the dynamics, and again the EKF approach is applied to the kinematic model. Finally, after three cascaded steps, the complete state vector is obtained:

$$x_{complete} = \begin{bmatrix} \phi & \theta & \psi & p_n & p_e & w_n & w_e \end{bmatrix}^T \tag{3.7}$$

    4 Visualization

This section gives a short introduction to the procedures used to test and visualize the sensor fusion algorithms.

    4.1 Octave

All code is initially written in Octave. Octave allows for quick development and does not have the same stiffness as a pure C/C++ implementation. Thus, all experimentation and development is done in Octave, and once the desired filter performance is obtained, the code is ported to C/C++. The C/C++ implementations are based on the open-source library KFilter [2], which provides a framework for the EKF algorithm.

Real-world sensor data has been logged to .csv files, which are imported and analyzed in Octave. An example of such an experiment is shown in Figures 13 and 14.
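As an example of this workflow (the file name and column layout below are hypothetical), a logged data set can be replayed through the Stage 1 sketch from Section 3.2.1 and plotted in a few lines:

```octave
% Replay a logged sensor file through the Stage 1 filter sketch.
% Hypothetical layout: columns = [t, wx, wy, wz, ax, ay, az].
log = dlmread('imu_log.csv', ',', 1, 0);   % skip one header row
x = [0; 0];  P = eye(2);
est = zeros(rows(log), 2);
for k = 2:rows(log)
  dt = log(k, 1) - log(k - 1, 1);
  [x, P] = stage1_step(x, P, log(k, 2:4)', log(k, 5:7)', dt);
  est(k, :) = x';
end
plot(log(:, 1), est);
legend('roll', 'pitch');   % compare against Figure 14
```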

    4.2 3D

In order to visualize the filter performance live, much like is seen with many commercial products [3, 4], live communication between the autopilot board and a computer with a monitor and a 3D renderer is necessary. For this communication, a middleware called Robot Operating System (ROS) is utilized. ROS allows for link-transparent communication between two or more TCP/IP enabled units; thus, we use the Gumstix WiFi to connect to a visualizing computer. ROS also has a built-in 3D renderer called RViz, which is used to visualize the filter output, as seen in Figure 11.


    Figure 11: The 3D model used in RViz for live visualization of filter output

RViz with ROS allows .dae graphics to be imported. Many 3D models are freely available in this format in the Google 3D Warehouse [1]; thus, an airplane model is easily found and imported into RViz. The visualization gives very intuitive feedback as to how well the filter performs. Furthermore, it allows for simultaneous comparison of different implementations, and real-world qualities such as vibrations are better visualized than in static plots.

    5 Validation

In order to assess the quality of the implemented filter, a reference experiment has been conducted. A commercial Attitude and Heading Reference System (AHRS) [3] was fixated to the autopilot board frame, and both systems estimated pitch and roll angles. In Figures 12a and 12b, the two estimates are plotted together. As seen from the figures, the estimated angles are very similar, though the implemented filter seems to respond slightly differently under g-forces (t = 15-20 s). This is expected to be a matter of filter tuning and g-force compensation.


    (a) Pitch estimates.

    (b) Roll estimates.

    Figure 12: Estimated pitch and roll angles from conducted reference experiment.

    6 Conclusion

Future work includes the implementation of the last two stages of the cascaded extended Kalman filter, described in Sections 3.2.2 and 3.2.3. Furthermore, these stages would need to be validated. The global position and wind estimates are hard to evaluate: position could be evaluated using a carefully defined path, or another sensor known to provide a certain accuracy. The wind estimate is not as interesting as its impact on the position estimate; thus, if it can be approximated well enough to aid the position observer, that would suffice.

An Extended Kalman filter has been implemented. Simulation, visualization and validation reflect that the implemented EKF is indeed capable of fusing gyroscope and accelerometer data into a viable state estimate. The mathematical and theoretical background of the filter has been described. We would like to thank Lars-Peter Ellekilde for his patience and guidance throughout this project.


    7 References

[1] Google 3D Warehouse, open source 3D models. http://sketchup.google.com/3dwarehouse/.

[2] KFilter - Free C++ Extended Kalman Filter library. http://kalman.sourceforge.net/.

[3] VectorNav homepage, inertial measurement systems. http://www.vectornav.com.

[4] Xsens homepage, inertial measurement systems. http://www.xsens.com.

[5] A. L. Barker, D. E. Brown, and W. N. Martin. Bayesian estimation and the Kalman filter. Computers Math. Applic., pages 55-77, 1994.

[6] R. Beard. State estimation for micro air vehicles. In Javaan Chahl, Lakhmi Jain, Akiko Mizutani, and Mika Sato-Ilic, editors, Innovations in Intelligent Machines - 1, volume 70 of Studies in Computational Intelligence, pages 173-199. Springer Berlin / Heidelberg, 2007. 10.1007/978-3-540-72696-8_7.

[7] W. Burgard, D. Fox, and S. Thrun. Probabilistic Robotics. Cambridge, MA: MIT Press, 2005.

[8] A. M. Eldredge. Improved state estimation for micro air vehicles. Master's thesis, Brigham Young University, Dec 2006.

[9] R. E. Kalman. A new approach to linear filtering and prediction problems. Transactions of the ASME - Journal of Basic Engineering, 82 (Series D):35-45, 1960.

[10] R. Mahony, M. Euston, P. Coote, J. Kim, and T. Hamel. A complementary filter for attitude estimation of a fixed-wing UAV. In Intelligent Robots and Systems, 2008. IROS 2008. IEEE/RSJ International Conference on, pages 340-345, Sept. 2008.

[11] E. C. Molina. Bayes' theorem - an expository presentation. Bell System Technical Journal, pages 273-283, 1931.

[12] W. Premerlani and P. Bizard. Direction cosine matrix IMU: Theory. DIY Drones community, May 2007.

[13] S. Riaz and A. B. Asghar. INS/GPS based state estimation of micro air vehicles using inertial sensors. In Innovative Systems Design and Engineering, Vol. 2, No. 5, 2011.

[14] S. Riaz and Dr. A. M. Malik. Single seven state discrete time extended Kalman filter for micro air vehicle. In Proceedings of the World Congress on Engineering 2010, Vol. II, 2010.

[15] S. F. Schmidt. Kalman filter: Its recognition and development for aerospace applications. Journal of Guidance and Control, 4(1):4-7, 1981.

[16] B. L. Stevens and F. L. Lewis. Aircraft Control and Simulation. John Wiley & Sons, 2nd edition, 2003. ISBN 978-0-471-37145-8.


[17] S. Thrun, F. Dellaert, D. Fox, and W. Burgard. Monte Carlo localization: Efficient position estimation for mobile robots. In Proceedings of the Sixteenth National Conference on Artificial Intelligence, pages 343-349, July 1999.

[18] S. Thrun, F. Dellaert, D. Fox, and W. Burgard. Robust Monte Carlo localization for mobile robots. In Artificial Intelligence, volume 128, pages 99-141, 2000.

[19] G. Welch and G. Bishop. An introduction to the Kalman filter, 2001.


    A Notations

Notation          Description
$A$               Matrix
$A_{[i,j]}$       $(i,j)$th element of $A$
$\mathbf{a}$      Vector
$a$               Scalar
$\hat{a}$         Estimate
$\dot{a}$         Rate of change / derivative
$a^-$             A priori
$\hat{a}^-$       A priori estimate
$p(a)$            The probability of $a$
$p(a|b)$          The probability of $a$, given $b$
$s_\alpha$        $\sin(\alpha)$
$c_\alpha$        $\cos(\alpha)$
$t_\alpha$        $\tan(\alpha)$
$C_x(\phi)$       Euler rotation matrix around the x-axis
$C_y(\theta)$     Euler rotation matrix around the y-axis
$C_z(\psi)$       Euler rotation matrix around the z-axis

Table 2: List of mathematical notations

Symbol        Description
$a_x$         Acceleration measured along airframe x-axis [m/s²]
$a_y$         Acceleration measured along airframe y-axis [m/s²]
$a_z$         Acceleration measured along airframe z-axis [m/s²]
$\omega_x$    Angular rate about airframe x-axis [rad/s]
$\omega_y$    Angular rate about airframe y-axis [rad/s]
$\omega_z$    Angular rate about airframe z-axis [rad/s]
$m_x$         Magnetic field strength measured along airframe x-axis [T]
$m_y$         Magnetic field strength measured along airframe y-axis [T]
$m_z$         Magnetic field strength measured along airframe z-axis [T]
$gps_n$       Position north, measured by GPS
$gps_e$       Position east, measured by GPS
$\phi$        Roll: Euler rotation around x-axis [rad]
$\theta$      Pitch: Euler rotation around y-axis [rad]
$\psi$        Yaw: Euler rotation around z-axis [rad]
$p_n$         Position north in UTM
$p_e$         Position east in UTM
$w_n$         Wind in north direction in UTM
$w_e$         Wind in east direction in UTM

Table 3: List of variables and measurements.


Variable                  Description
$x_k$                     Actual state vector at step k
$\hat{x}_k$               Estimated state vector at step k
$\hat{x}_k^-$             A priori estimated state vector at step k
$u_k$                     System input vector at step k
$z_k$                     Measurement vector at step k
$\hat{z}_k$               Estimated measurement vector at step k
$w$                       Process noise vector
$v$                       Measurement noise vector
$f(x, u, w) = \dot{x}$    State update function
$h(x, v) = \hat{z}$       Measurement prediction function
$A$                       Jacobian of partial derivatives of $f(x, u, w)$ with respect to $x$
$W$                       Jacobian of partial derivatives of $f(x, u, w)$ with respect to $w$
$H$                       Jacobian of partial derivatives of $h(x, v)$ with respect to $x$
$V$                       Jacobian of partial derivatives of $h(x, v)$ with respect to $v$
$P_k$                     Estimate covariance at step k
$P_k^-$                   A priori estimate covariance at step k
$Q$                       Process noise covariance
$R$                       Measurement noise covariance
$K$                       Kalman gain

Table 4: List of EKF related variables.


    B Pitch-roll kinematics

This appendix serves to explain the equations which govern the kinematics of the pitch/roll estimating Extended Kalman Filter (EKF). The EKF utilizes a three-axis rate gyroscope for odometry during the time update phase, and a three-axis accelerometer for correction during the measurement phase.

The following equation relates the gyroscope readings $\omega_x$, $\omega_y$ & $\omega_z$, and the associated noise, $w_x$, $w_y$ & $w_z$, to the change in Euler angles, $\dot\phi$, $\dot\theta$ & $\dot\psi$. Note that the changes in Euler angles are introduced in their respective intermediate rotation frames, as $C_x(\phi)$ denotes a rotation in the y-z plane and $C_y(\theta)$ in the x-z plane.

$$\begin{bmatrix} \omega_x \\ \omega_y \\ \omega_z \end{bmatrix} + \begin{bmatrix} w_x \\ w_y \\ w_z \end{bmatrix} = \begin{bmatrix} \dot\phi \\ 0 \\ 0 \end{bmatrix} + C_x(\phi) \begin{bmatrix} 0 \\ \dot\theta \\ 0 \end{bmatrix} + C_x(\phi)\, C_y(\theta) \begin{bmatrix} 0 \\ 0 \\ \dot\psi \end{bmatrix} \tag{B.1}$$

$$\begin{bmatrix} \omega_x + w_x \\ \omega_y + w_y \\ \omega_z + w_z \end{bmatrix} = \begin{bmatrix} \dot\phi \\ 0 \\ 0 \end{bmatrix} + \begin{bmatrix} 1 & 0 & 0 \\ 0 & c_\phi & s_\phi \\ 0 & -s_\phi & c_\phi \end{bmatrix} \begin{bmatrix} 0 \\ \dot\theta \\ 0 \end{bmatrix} + \begin{bmatrix} 1 & 0 & 0 \\ 0 & c_\phi & s_\phi \\ 0 & -s_\phi & c_\phi \end{bmatrix} \begin{bmatrix} c_\theta & 0 & -s_\theta \\ 0 & 1 & 0 \\ s_\theta & 0 & c_\theta \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ \dot\psi \end{bmatrix} \tag{B.2}$$

$$\begin{bmatrix} \omega_x + w_x \\ \omega_y + w_y \\ \omega_z + w_z \end{bmatrix} = \begin{bmatrix} \dot\phi - s_\theta\, \dot\psi \\ c_\phi\, \dot\theta + s_\phi\, c_\theta\, \dot\psi \\ -s_\phi\, \dot\theta + c_\phi\, c_\theta\, \dot\psi \end{bmatrix} = \begin{bmatrix} 1 & 0 & -s_\theta \\ 0 & c_\phi & s_\phi\, c_\theta \\ 0 & -s_\phi & c_\phi\, c_\theta \end{bmatrix} \begin{bmatrix} \dot\phi \\ \dot\theta \\ \dot\psi \end{bmatrix} \tag{B.3}$$

By inverse transformation, the airframe sensor values $\omega_x$, $\omega_y$ & $\omega_z$ are mapped into the world frame:

$$\begin{bmatrix} \dot\phi \\ \dot\theta \\ \dot\psi \end{bmatrix} = \begin{bmatrix} 1 & t_\theta\, s_\phi & t_\theta\, c_\phi \\ 0 & c_\phi & -s_\phi \\ 0 & \frac{s_\phi}{c_\theta} & \frac{c_\phi}{c_\theta} \end{bmatrix} \begin{bmatrix} \omega_x + w_x \\ \omega_y + w_y \\ \omega_z + w_z \end{bmatrix} \tag{B.4}$$

Thus, the a priori estimate is given by the function $f(x, u, w)$, which integrates $\omega_x$, $\omega_y$ & $\omega_z$ into the state space, $x$. The state vector, $x$, the input vector, $u$, and the noise vector, $w$, are defined as:

$$x = \begin{bmatrix} \phi \\ \theta \end{bmatrix} \qquad u = \begin{bmatrix} \omega_x \\ \omega_y \\ \omega_z \end{bmatrix} \qquad w = \begin{bmatrix} w_x \\ w_y \\ w_z \end{bmatrix}$$

$$f(x, u, w) = \dot{x} = \begin{bmatrix} 1 & t_\theta\, s_\phi & t_\theta\, c_\phi \\ 0 & c_\phi & -s_\phi \end{bmatrix} \begin{bmatrix} \omega_x + w_x \\ \omega_y + w_y \\ \omega_z + w_z \end{bmatrix} \tag{B.5}$$

The function used to project the state ahead does not add any noise and is thus given by:

$$f(x, u, 0) = \begin{bmatrix} \omega_x + t_\theta\, (s_\phi\, \omega_y + c_\phi\, \omega_z) \\ c_\phi\, \omega_y - s_\phi\, \omega_z \end{bmatrix} \tag{B.6}$$

The Jacobian $A(x, u)$ of $f(x, u, 0)$ needs to be computed in each time update. It can be described as the partial derivative of $f(x, u)$ with respect to $x$:


$$A_{[i,j]}(x, u) = \frac{\partial f_{[i]}}{\partial x_{[j]}}(x, u, 0) \tag{B.7}$$

$$A(x, u) = \begin{bmatrix} A_{[1,1]}(x, u) & A_{[1,2]}(x, u) \\ A_{[2,1]}(x, u) & A_{[2,2]}(x, u) \end{bmatrix} \tag{B.8}$$

$$\begin{aligned}
A_{[1,1]}(x, u) &= \frac{\partial}{\partial \phi} \left( \omega_x + t_\theta\, (s_\phi\, \omega_y + c_\phi\, \omega_z) \right) \\
A_{[1,2]}(x, u) &= \frac{\partial}{\partial \theta} \left( \omega_x + t_\theta\, (s_\phi\, \omega_y + c_\phi\, \omega_z) \right) \\
A_{[2,1]}(x, u) &= \frac{\partial}{\partial \phi} \left( c_\phi\, \omega_y - s_\phi\, \omega_z \right) \\
A_{[2,2]}(x, u) &= \frac{\partial}{\partial \theta} \left( c_\phi\, \omega_y - s_\phi\, \omega_z \right)
\end{aligned} \tag{B.9}$$

$$A(x, u) = \begin{bmatrix} t_\theta\, (\omega_y\, c_\phi - \omega_z\, s_\phi) & \frac{1}{c_\theta^2} (\omega_y\, s_\phi + \omega_z\, c_\phi) \\ -\omega_y\, s_\phi - \omega_z\, c_\phi & 0 \end{bmatrix} \tag{B.10}$$

The Jacobian $W(x)$ of $f(x, u, w)$ needs to be computed in each time update. It can be described as the partial derivative of $f(x, u, w)$ with respect to $w$:

$$W_{[i,j]}(x) = \frac{\partial f_{[i]}}{\partial w_{[j]}}(x, u, w) \tag{B.11}$$

$$W(x) = \begin{bmatrix} W_{[1,1]}(x) & W_{[1,2]}(x) & W_{[1,3]}(x) \\ W_{[2,1]}(x) & W_{[2,2]}(x) & W_{[2,3]}(x) \end{bmatrix} \tag{B.12}$$

$$\begin{aligned}
W_{[1,1]}(x) &= \frac{\partial}{\partial w_x} \left( (\omega_x + w_x) + t_\theta\, (s_\phi\, (\omega_y + w_y) + c_\phi\, (\omega_z + w_z)) \right) \\
W_{[1,2]}(x) &= \frac{\partial}{\partial w_y} \left( (\omega_x + w_x) + t_\theta\, (s_\phi\, (\omega_y + w_y) + c_\phi\, (\omega_z + w_z)) \right) \\
W_{[1,3]}(x) &= \frac{\partial}{\partial w_z} \left( (\omega_x + w_x) + t_\theta\, (s_\phi\, (\omega_y + w_y) + c_\phi\, (\omega_z + w_z)) \right) \\
W_{[2,1]}(x) &= \frac{\partial}{\partial w_x} \left( c_\phi\, (\omega_y + w_y) - s_\phi\, (\omega_z + w_z) \right) \\
W_{[2,2]}(x) &= \frac{\partial}{\partial w_y} \left( c_\phi\, (\omega_y + w_y) - s_\phi\, (\omega_z + w_z) \right) \\
W_{[2,3]}(x) &= \frac{\partial}{\partial w_z} \left( c_\phi\, (\omega_y + w_y) - s_\phi\, (\omega_z + w_z) \right)
\end{aligned} \tag{B.13}$$

$$W(x) = \begin{bmatrix} 1 & t_\theta\, s_\phi & t_\theta\, c_\phi \\ 0 & c_\phi & -s_\phi \end{bmatrix} \tag{B.14}$$

The sensor prediction model, $h(x, v)$, estimates the accelerometer output for a given state. In this case, it is simply the gravity vector, rotated into the airframe. The model incorporates the sensor noise vector, $v$:

$$h(x, v) = C_x(\phi)\, C_y(\theta) \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} + v, \qquad v = \begin{bmatrix} v_x \\ v_y \\ v_z \end{bmatrix} \tag{B.15}$$

The final sensor prediction model assumes $v = 0$:

$$h(x, 0) = \begin{bmatrix} -s_\theta \\ s_\phi\, c_\theta \\ c_\phi\, c_\theta \end{bmatrix} \tag{B.16}$$

The Jacobian $H(x)$ of $h(x, v)$ needs to be computed in each measurement update. It can be described as the partial derivative of $h(x, 0)$ with respect to $x$:


$$H_{[i,j]}(x) = \frac{\partial h_{[i]}}{\partial x_{[j]}}(x, 0) \tag{B.17}$$

$$H(x) = \begin{bmatrix} \frac{\partial}{\partial \phi}(-s_\theta) & \frac{\partial}{\partial \theta}(-s_\theta) \\ \frac{\partial}{\partial \phi}(s_\phi\, c_\theta) & \frac{\partial}{\partial \theta}(s_\phi\, c_\theta) \\ \frac{\partial}{\partial \phi}(c_\phi\, c_\theta) & \frac{\partial}{\partial \theta}(c_\phi\, c_\theta) \end{bmatrix} \tag{B.18}$$

$$H(x) = \begin{bmatrix} 0 & -c_\theta \\ c_\phi\, c_\theta & -s_\phi\, s_\theta \\ -s_\phi\, c_\theta & -c_\phi\, s_\theta \end{bmatrix} \tag{B.19}$$

The Jacobian $V(x)$ of $h(x, v)$ needs to be computed in each measurement update. It can be described as the partial derivative of $h(x, v)$ with respect to $v$:

$$V_{[i,j]}(x) = \frac{\partial h_{[i]}}{\partial v_{[j]}}(x, v) \tag{B.20}$$

$$V(x) = \begin{bmatrix} \frac{\partial}{\partial v_x}(-s_\theta + v_x) & \frac{\partial}{\partial v_y}(-s_\theta + v_x) & \frac{\partial}{\partial v_z}(-s_\theta + v_x) \\ \frac{\partial}{\partial v_x}(s_\phi\, c_\theta + v_y) & \frac{\partial}{\partial v_y}(s_\phi\, c_\theta + v_y) & \frac{\partial}{\partial v_z}(s_\phi\, c_\theta + v_y) \\ \frac{\partial}{\partial v_x}(c_\phi\, c_\theta + v_z) & \frac{\partial}{\partial v_y}(c_\phi\, c_\theta + v_z) & \frac{\partial}{\partial v_z}(c_\phi\, c_\theta + v_z) \end{bmatrix} \tag{B.21}$$

$$V(x) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{B.22}$$


    C Measurements

Sample measurements and visualization of the board being picked up, rolled back and forth, and put down; picked up, pitched back and forth, and put down; and finally picked up, moved in a figure-8, and put down.

Figure 13: Accelerometer x, y and z axes, respectively, with the roll, pitch and figure-8 segments annotated. Dots denote actual accelerometer output, whereas lines are predictions of the accelerometer output. The x-axis is time [s] and the y-axis is acceleration [g].

Figure 14: Estimated pitch (green) and roll (blue) angles, with angle [rad] on the y-axis and time [s] on the x-axis, and the roll, pitch and figure-8 segments annotated. Same data set as in Figure 13.
