

INVITED PAPER

The Impact of Human–Automation Collaboration in Decentralized Multiple Unmanned Vehicle Control

By M. L. Cummings, Senior Member IEEE, Jonathan P. How, Senior Member IEEE,

Andrew Whitten, and Olivier Toupet

ABSTRACT | For future systems that require one or a small

team of operators to supervise a network of automated agents,

automated planners are critical since they are faster than hu-

mans for path planning and resource allocation in multivariate,

dynamic, time-pressured environments. However, such plan-

ners can be brittle and unable to respond to emergent events.

Human operators can aid such systems by bringing their

knowledge-based reasoning and experience to bear. Given a

decentralized task planner and a goal-based operator interface

for a network of unmanned vehicles in a search, track, and

neutralize mission, we demonstrate with a human-on-the-loop

experiment that humans guiding these decentralized planners

improved system performance by up to 50%. However, those

tasks that required precise and rapid calculations were not

significantly improved with human aid. Thus, there is a shared

space in such complex missions for human–automation

collaboration.

KEYWORDS | Command and control; decentralized task plan-

ning; decision support systems; human–automation interac-

tion; human supervisory control; unmanned vehicles

I. INTRODUCTION

The use of unmanned vehicles (UVs) has recently revolu-

tionized U.S. military operations. In the past year, the U.S.

Air Force reached an important milestone in that it now

has more unmanned aerial vehicles (UAVs) than manned

aircraft. Other countries are following this lead, with re-

cent significant advances in UAV heavy-lift capacity by Israel and the United Kingdom. These dramatic advances

are not only limited to the military as the international

civil sector also looks to such unmanned technologies to

aid operations such as fighting forest fires, undersea ex-

ploration, monitoring wildlife, inspecting bridges, and

supporting first responders such as police and rescue

organizations. UAV expenditures alone are predicted to

more than double in the next ten years, and are expected to exceed $80 billion [1].

Accompanying these impressive advances in UV plat-

forms are equally dramatic leaps in intelligent UV control

Manuscript received July 16, 2010; revised June 7, 2011; accepted October 7, 2011.

This work was supported by a U.S. Office of Naval Research STTR under the

guidance of Aurora Flight Sciences.

M. L. Cummings and J. P. How are with the Massachusetts Institute of Technology,

Cambridge, MA 02139 USA (e-mail: [email protected]; [email protected]).

A. Whitten is with the United States Air Force, Edwards Air Force Base (AFB),

CA 93524 USA (e-mail: [email protected]).

O. Toupet was with the Research and Development Center, Aurora Flight Sciences,

Cambridge, MA 02142 USA (e-mail: [email protected]).

Digital Object Identifier: 10.1109/JPROC.2011.2174104


systems. Point-and-click UAV operations are routine such that operators only have to click a place on a map to com-

mand a vehicle, and the onboard autonomy determines the

most appropriate control actions. Such advances over

legacy systems that require typical stick-and-rudder skills

have revolutionized military operations in that traditional

pilots are no longer needed to control such systems.

Indeed, while not yet operational, research and develop-

ment across a number of laboratories has shown that networks of UAVs, unmanned ground vehicles, and even

other UV types such as unmanned underwater vehicles

and unmanned surface vessels can be controlled by a

single operator [2]–[5]. This single-operator, multiple-

unmanned-vehicle architecture is the crux of the Depart-

ment of Defense’s vision for network-centric operations,

where a network of intelligent UVs is able to semiauto-

nomously work with a small group of human operators to execute dynamic missions in time-critical scenarios [6].

While all of the previously mentioned studies exam-

ined the ability of underlying automation (in the form of

planning and control algorithms) to control a network of

heterogeneous unmanned vehicles (UxVs), a significant

limitation of this work is a lack of investigation of critical

human–automation collaboration issues. In all future vi-

sions of operating networks of UVs, humans are expected to assume some kind of supervisory role, through high-

level goal expressions which could include resource alloca-

tion, target designation, approval for weapons release, etc.

While research has shown that one operator can theoret-

ically control multiple UVs with varying levels of em-

bedded autonomy [7]–[10], there have been no previous

studies that examine how an operator interacts with de-

centralized UV planners. Moreover, whether such human interaction can improve, or possibly degrade, overall sys-

tem performance has not previously been addressed. This

paper fills this gap by detailing an experiment conducted to

investigate what impact a human operator has on the

overall performance of a decentralized UV network, and

how such interactions could be improved to enhance

overall human–system performance.

II. DECENTRALIZED PLANNING FOR MULTIPLE UV MANAGEMENT

We developed the Onboard Planning System for UxVs

Supporting Expeditionary Reconnaissance and Surveil-

lance (OPS-USERS) system to provide a planning frame-

work for a team of autonomous agents under human

supervisory control participating in expeditionary missions, which rely heavily on intelligence, surveillance, and

reconnaissance. The mission environment contains an

unknown number of mobile targets, each of which may be

friendly, hostile, or unknown. The mission scenario is

multiobjective, and includes finding as many targets as

possible, keeping accurate position estimates of hostile and

unknown targets, and neutralizing all hostile targets. It is

assumed that static features in the environment, such asterrain type, are known a priori, but dynamic features, such

as target locations, are not.

A. Architecture

The OPS-USERS system architecture is specifically designed to meet the challenges associated with an automated decision-making system integrated with a human

operator on the loop. Two key challenges are: 1) balancing

the roles and responsibilities of the human operator and

the automated planner, and 2) optimizing resource alloca-

tion to accomplish each of the mission objectives. The

system relies on the relative strengths of both humans and

automation in that a human operator provides valuable

judgment and field experience in recognizing patterns and defining new goals as environmental changes dictate. A

significant benefit of automation is its ability to provide

raw computational power and rapid optimization capabil-

ity on the order of seconds for resource allocation prob-

lems that take human decision makers minutes or hours to

solve.

In OPS-USERS, decision making responsibility is layered to promote goal-based reasoning such that the human guides the autonomy, but the automation assumes the

bulk of the computation. The automated planner is respon-

sible for decisions requiring rapid calculations or optimi-

zation, and the human operator supervises the planner for

high-level goals such as where to focus the search and

which tasks should be included in the overall plan, as well

as those tasks that require strict human approval, such as

weapons release. Fig. 1 depicts the role allocation balance between the human and automation.

In this goal-based system, the operator is responsible

for strategic decision making that includes prioritizing

which tasks should be performed at the current stage in the

mission. The operator interface that allows such interac-

tions is discussed in detail in Section II-F. The automated

planner is responsible for tactical planning by deciding

which UxVs perform which task, and forming a task schedule for each UxV. Together, the human operator and

the automation work jointly to define plans that meet

operational goals.

Fig. 1. Decision making allocation between human operator and automated planner.

For example, the automation helps the human by estimating when and where the UxVs should revisit targets based on observed target behavior and sens-

ing capabilities onboard the UxVs, such as present day

motion target indicators. The human helps the automation

by guiding it to areas the operator believes are more likely

to contain undetected targets. Thus, this overall frame-

work allows the human to make global decisions regarding

locations and targets of interest, while the automated

planner optimizes the local trajectory of each UxV to maximize searching and tracking.

The autonomy only manages uncertainty related to the

targets, which is modeled by spatial probability distribu-

tions and visually represented to operators as grayed out

regions on the interface. The operator aids in target uncer-

tainty management by reasoning about where unfound or

lost targets may hide and create search tasks to guide the

UxVs to these areas. The autonomy maintains these distributions over time, updating them as the UxVs gather new

information and locally optimizes the UxV paths (within a

limited planning horizon due to computational con-

straints). The uncertainty related to the predicted future

location of the targets is also managed by the autonomy

through periodic revisits. There is, however, no uncer-

tainty in the planning itself, i.e., the system does not

handle uncertainty in the UxVs' states, health, or capabilities, nor in the execution of tasks.

B. Mission Tasks

The first task type in OPS-USERS is to find targets,

called the search task, which is a collaborative effort be-

tween the human and the automation. In OPS-USERS, the

UVs are equipped with a variety of sensors for observing

and searching the environment, and a target may be detected if its position lies within an UxV's sensor footprint.

The automation and the human operator work in a com-

plementary manner to accomplish efficient searching. This

architecture gives the system flexibility such that the auto-

nomy makes search decisions even in the absence of ope-

rator input, but also leverages the operator's expertise to improve

the search process, as discussed in Section II-F.

The automation generates paths for the UxVs that maximize the likelihood of detecting new targets. How-

ever, the computational complexity associated with this

optimization process grows exponentially with the length

of the planned paths, which limits the automation to calculating localized trajectories. The human can guide the

lating localized trajectories. The human can guide the

automation and prevent myopic behaviors by creating

search tasks in remote, unsearched areas that are likely to

contain new targets. The automated planner then assigns UxVs to the search areas the operator has identified and

optimizes the local trajectories of these UxVs to efficiently

search the areas of interest.

The second task type is tracking a found target, called

the track task, which is a task supervised by the human

operator, but executed by the automation. Tracking a

target for a period of time decreases the uncertainty

associated with that target’s state (i.e., position and velo-city). However, targets are typically not tracked continu-

ously in order to allow UxVs the flexibility of performing

more tasks. When a track task ends, the tracking UxV

instantiates a new track task so that the target will be

revisited in the future. The revisit time is calculated based

on the expected growth rate of the target’s position uncer-

tainty and the UxVs’ sensing capabilities.
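As an illustration of this revisit calculation, the sketch below assumes a simple linear uncertainty-growth model in which position uncertainty grows at the target's last known speed, and a target is considered recoverable while that uncertainty stays inside a fraction of the sensor footprint. The function name, the margin parameter, and the growth model are illustrative assumptions, not the OPS-USERS implementation.

```python
def next_revisit_time(t_now, sigma_now, target_speed, sensor_radius, margin=0.8):
    """Schedule the next track-task revisit.

    Assumes position uncertainty (sigma, in meters) grows linearly at a rate
    equal to the target's last known speed, and that the target can still be
    reacquired while its uncertainty stays within a fraction of the sensor
    footprint radius. These modeling choices are assumptions for illustration.
    """
    growth_rate = target_speed                     # m of uncertainty per second (assumed)
    budget = margin * sensor_radius - sigma_now    # uncertainty growth we can still afford
    if budget <= 0.0:
        return t_now                               # already at risk of losing the target
    return t_now + budget / growth_rate

# Example: 20 m of uncertainty, 5 m/s target, 200 m sensor footprint.
print(next_revisit_time(t_now=0.0, sigma_now=20.0, target_speed=5.0, sensor_radius=200.0))
# -> 28.0 s until uncertainty reaches 80% of the footprint radius
```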

If a target being tracked is known to be hostile, a neutralize hostile task is also created. This third task type

involves engaging an enemy target, and can only be per-

formed by a weaponized UAV (WUAV). The operator must

verify the classification of the enemy target as hostile and

approve the weapon release.

The final task is the refuel task, which requires that

each UxV return to base to refuel before its fuel supply is

depleted. Refuel times are also scheduled by the task planner to maximize the overall mission performance, and

are treated as hard constraints. UxVs may decide to refuel

early if that enables them to accomplish more tasks on

time. This health management task is handled entirely by

the automation to reduce the operator’s workload.

In order to allow the human and the automation to

collaborate for task execution, the basic system architec-

ture is divided into two major components, as shown in Fig. 2. The first is the distributed tactical planner, which is

a network of onboard planning modules (OPMs) [11] that

provides coordinated autonomy between the UxVs. Each

UxV carries a processor that runs an instance of the OPM.

The second is the ground control station, which consists of

a centralized strategic planner called the central mission

manager (CMM), and the operator interface (OI). These

two components are discussed in turn.

C. The Distributed Tactical Planner

A decentralized implementation was chosen for the

tactical planner to allow rapid reaction to changes in the

environment. When appropriate, the decentralized task

planner may modify the task assignment without affecting

the overall plan quality (e.g., UxVs swap tasks), and it is

able to make these local repairs faster through inter-UxV communication than if it had to wait for the decision to

come from the human or a centralized planner. Further-

more, plans can be carried out even if the communication

link with the ground control station is intermittent or lost.

The architecture is scalable, since additional UxVs also add

computational capability, and the decentralized frame-

work is robust to a single point of failure, since no single

UxV is globally planning for the fleet.

The decentralized task planner used in OPS-USERS is

the consensus-based bundle algorithm (CBBA), a decen-

tralized, polynomial-time, market-based protocol [12].

CBBA consists of two phases that alternate until the as-

signment converges. In the first phase, task selection, UxVs select the set of tasks for which they receive the highest

reward. The UxVs place bids on the tasks they choose


where the bids represent the marginal improvement in the

score of their plan. Thus, task planning only requires the

exchange of bids, which each UxV makes based on its own

state only (i.e., the UxVs only need to know how much the

other UxVs value the tasks they compete for, namely the bids, and not the locations or trajectories of the other UxVs). In

the second phase, conflict resolution, plan information is

exchanged between neighbors and tasks go to the highest bidder. CBBA is guaranteed to reach a conflict-free assign-

ment, given a strongly connected network.

One key advantage of CBBA is its ability to solve the

multiple assignment problem where each UxV is assigned a

set of tasks (a plan), as opposed to solving the single

assignment problem, where each UxV is only assigned to

their next task. Planning several tasks into the future im-

proves effectiveness in complex missions. An additional advantage is that CBBA runs in polynomial time with re-

spect to the number of UxVs and the number of tasks.

Under the implementation used for this experiment,

plans could be generated for four UxVs and ten tasks in

less than 1 s.
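The sketch below is a deliberately simplified, centralized caricature of the two CBBA phases described above, intended only to make the bid-and-consensus idea concrete: each vehicle greedily bundles the tasks with the highest marginal score, and each task then goes to the highest bidder. The scoring rule, the 1-D task locations, and the synchronous loop are assumptions; the real algorithm in [12] exchanges bids over the network, prunes a vehicle's bundle after an outbid task, and carries convergence guarantees this toy version does not.

```python
def marginal_score(plan, task, pos, speed=10.0, reward=100.0, decay=1.0):
    """Time-discounted reward for appending `task` to `plan` (toy scoring rule)."""
    t, here = 0.0, pos
    for wp in plan + [task]:
        t += abs(wp - here) / speed
        here = wp
    return max(reward - decay * t, 0.0)


def cbba_round(vehicles, tasks, bundle_size=3, max_iters=20):
    """Simplified two-phase, bid-based allocation in the spirit of CBBA.

    Phase 1 (task selection): each vehicle greedily bundles the tasks that add
    the most to its plan score, bidding that marginal score. Phase 2 (conflict
    resolution): each task goes to the highest bidder and outbid vehicles drop
    it. Neighbor-only communication and proper bundle pruning are omitted.
    """
    winning_bid = {t: 0.0 for t in tasks}
    winning_agent = {t: None for t in tasks}
    for _ in range(max_iters):
        changed = False
        for name, pos in vehicles.items():
            plan = [t for t in tasks if winning_agent[t] == name]
            while len(plan) < bundle_size:
                best_task, best_bid = None, 0.0
                for t in tasks:
                    if winning_agent[t] == name:
                        continue
                    bid = marginal_score(plan, t, pos)
                    if bid > best_bid and bid > winning_bid[t]:
                        best_task, best_bid = t, bid
                if best_task is None:
                    break
                winning_bid[best_task], winning_agent[best_task] = best_bid, name
                plan.append(best_task)
                changed = True
        if not changed:   # no bids changed: the assignment has converged
            break
    return winning_agent


# Example: three vehicles at 1-D positions competing for five task locations.
print(cbba_round({"uav1": 0.0, "uav2": 50.0, "usv1": 100.0},
                 [5.0, 20.0, 55.0, 80.0, 95.0]))
```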

A successful distributed tactical planner must consider

both task assignment as well as path planning. In OPS-

USERS, the OPM generates paths in two ways. If the next task in an UxV's plan is scheduled such that the UxV must

travel directly to the task location, the UxV is said to have

no spare time. In the case where an UxV has no spare time,

it calculates the minimum-time path to the task location

using Dijkstra’s algorithm [13]. The UxVs avoid both static

obstacles and dynamic obstacles (i.e., other UxVs) along

the way. It is assumed that UxVs are aware of static ob-

stacles a priori, and they have an estimate of dynamic obstacles based on communicated trajectories between UxVs.
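A generic version of that minimum-time grid search is sketched below; the 4-connected grid, the uniform cell-crossing time, and the way dynamic obstacles are folded into a blocked-cell mask are assumptions rather than details of the OPM.

```python
import heapq

def min_time_path(grid, start, goal, cell_time=1.0):
    """Dijkstra's minimum-time path on a 4-connected grid.

    `grid[r][c]` is True where a cell is blocked, either by a static obstacle
    or by a cell claimed along another UxV's communicated trajectory. Edge
    cost is the (uniform) time to cross one cell. Returns a list of (row, col)
    cells from start to goal, or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = [(0.0, start)]
    came_from, cost = {start: None}, {start: 0.0}
    while frontier:
        t, cell = heapq.heappop(frontier)
        if t > cost.get(cell, float("inf")):
            continue                      # stale queue entry
        if cell == goal:                  # reconstruct the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                new_t = t + cell_time
                if new_t < cost.get((nr, nc), float("inf")):
                    cost[(nr, nc)] = new_t
                    came_from[(nr, nc)] = cell
                    heapq.heappush(frontier, (new_t, (nr, nc)))
    return None

# Example: a 4 x 5 grid with a partial wall of obstacles in the middle column.
world = [[False] * 5 for _ in range(4)]
for r in range(3):
    world[r][2] = True
print(min_time_path(world, start=(0, 0), goal=(0, 4)))
```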

In the case where the next task in an UxV’s plan is

scheduled such that the UxV has more than enough time to

travel to the task location, the UxV is said to have spare

time. In this case, the UxV implements a spare time strategy, which is chosen by the operator for each UxV before

the mission begins. If the spare time strategy is to loiter,

the UxV generates a trajectory to stay near its last task location until it runs out of spare time and needs to travel to its next task location. If the UxV's spare time strategy

is to search, it generates a search trajectory that maximizes

the expected probability of detecting new targets along thepath. The UxV builds a breadth-first, depth-limited search

tree over a receding horizon and determines which cells

are observable to its sensor along each path. The UxV se-

lects the path that has the highest cumulative probability of

revealing new targets. This trajectory generation algorithm

scales exponentially with the depth of the search (i.e.,

number of waypoints in the trajectory) and the size of the

environment (i.e., number of cells in the world). For this reason, the planning horizon must be chosen such that the

trajectory can be calculated at a rate at least as fast as the

planning loop. For this experiment, a planning horizon of

four waypoints was sufficient for the planning rate of 1 Hz.

The search effort is also coordinated across the fleet to

avoid redundant effort, which is known to improve search

performance [14]–[16]. UxVs communicate their trajecto-

ries to each other, and UxVs receive no additional reward for observing cells that will already be observable along

another UxV’s planned path.

D. Maintaining Situational Awareness

Each autonomous UxV maintains situational awareness

by keeping an estimate of several mission parameters.

Targets that are yet to be found are called search targets,

and there are an unknown number of such targets at a

given time. The uncertainty associated with the location of

search targets is represented by a probability map [17]–

[19], which is a discretized map of the environment. Each

cell i in the map has some probability p_i(t) of containing at least one search target at time t.

Fig. 2. OPS-USERS system architecture.

At each time step, the probabilities are propagated according to a target motion

model. The target model quantifies the expected proba-

bility that a target moves from one cell to another cell in

one time step. This transition probability is given by

$$p^{\mathrm{trans}}_{ij} = \frac{v_{\mathrm{tgt}}\, dt}{n_{\mathrm{free}}(i)\, e_{ij}}$$

The target model assumes that each target is nonevasive in that its motion is independent of the actions taken by the UxVs. It is also assumed that the probability that a target travels from a cell i to a cell j is proportional to v_tgt, the target's expected speed, and dt, the time between updates. It is assumed inversely proportional to n_free(i), the number of nonobstructed cells adjacent to cell i, and to e_ij, the length of the edge of cell i in the direction of cell j. v_tgt is estimated in advance of a mission by a mission planner, based on the expected target types, which can include heterogeneous speeds. When a cell i is observed by an UxV at time t, its probability of containing a new target is updated according to

$$p_i(t+1) = p_i(t)\left(1 - f_i(t)\right)$$

where f_i(t) is the fraction of cell i that was observed at time t.

When a new target is found, it is designated a track

time t.When a new target is found, it is designated a track

target. Position, velocity, and position uncertainty esti-

mates are kept for each track target in the environment. If

a track target is not observed by any UxV at time t, thetarget’s position uncertainty increases by a factor that is

proportional to the target’s last known velocity. The

growth rate of the position uncertainty of a given track

target is used by the planner to schedule the next desired

revisit time for that target. If the uncertainty for a given

target becomes too large, the target is considered lost, and

becomes a search target.
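A compact sketch of the two probability-map updates defined above is given below; the uniform square cells, the 4-connected neighborhood, and the cap that keeps the total outflow from a cell below one are assumptions layered on top of the stated model.

```python
def propagate(prob_map, blocked, v_tgt, dt, edge_len):
    """One step of the target-motion propagation from Section II-D.

    Probability mass leaves cell i for each unobstructed neighbor j with
    transition probability p_trans = v_tgt * dt / (n_free(i) * e_ij), where
    n_free(i) is the number of unobstructed neighbors of i and e_ij is the
    edge length between i and j (uniform square cells assumed here).
    """
    rows, cols = len(prob_map), len(prob_map[0])
    new_map = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if blocked[r][c]:
                continue
            nbrs = [(r + dr, c + dc)
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= r + dr < rows and 0 <= c + dc < cols
                    and not blocked[r + dr][c + dc]]
            p_stay = prob_map[r][c]
            if nbrs:
                p_trans = v_tgt * dt / (len(nbrs) * edge_len)
                p_trans = min(p_trans, 1.0 / len(nbrs))   # keep total outflow <= 1
                for nr, nc in nbrs:
                    new_map[nr][nc] += prob_map[r][c] * p_trans
                p_stay *= 1.0 - p_trans * len(nbrs)
            new_map[r][c] += p_stay
    return new_map


def observe(prob_map, r, c, f_observed):
    """Observation update: p_i(t+1) = p_i(t) * (1 - f_i(t)) for an observed cell."""
    prob_map[r][c] *= 1.0 - f_observed
    return prob_map


# Example: 3 x 3 map with a uniform prior, no obstacles, 5 m/s target, 50 m cells.
no_obstacles = [[False] * 3 for _ in range(3)]
pm = [[1.0 / 9.0] * 3 for _ in range(3)]
pm = propagate(pm, no_obstacles, v_tgt=5.0, dt=1.0, edge_len=50.0)
pm = observe(pm, 1, 1, f_observed=1.0)   # center cell fully swept by a sensor
```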

The distributed tactical planner runs at a rate of one cycle per second. Each time through the planning loop,

UxVs share information regarding their current position,

their current planned trajectory, and updates on target

position estimates. UxVs are able to update their own

situational awareness based on messages from other UxVs.

E. The Ground Control Station

The ground control station is remotely connected to

the UxV network and primarily consists of the CMM and

the OI. The CMM is a centralized computational aid, de-

signed to assist the operator in strategic decision making.

The CMM runs a local centralized version of the task

planner to determine the feasibility of assigning a given

subset of tasks. The operator is able to modify the set of

tasks under consideration, and investigate a variety of "what if" scenarios, i.e., the operator changes the assigned tasks and then evaluates the impact of the new schedule

before actually sending the new plan to the UxV team. The

operator must approve every automation-generated plan

before the tasks are released to the distributed tactical

planner.

This strategic level planning does not concern the ope-

rator with the mechanics of how the tasks will be carried out (i.e., which UxV will perform which task), but allows

the operator to decide which tasks should be included in

the plan. This goal-based approach, as compared to indi-

vidual vehicle control, is critical for single-operator control

of multiple UVs since it substantially reduces operator

workload, and does not require the operator to use valuable cog-

nitive skills that are best reserved for knowledge-based

decisions [7].

Quantitative information (i.e., UxV positions and fuel

levels, target position estimates) and candidate plans are

passed from the CMM to the OI, which converts this in-

formation into a visual form that is meaningful to the

operator (discussed in depth in the next section). The

operator can query the planner to determine the feasibility

of inserting a new task into the current task list. Once the

operator has decided on a plan, the approved task set is sent to the distributed tactical planner via the CMM for

execution.

F. The Operator Interface

In order to allow a single operator to control

multiple heterogeneous UVs given the automated planner

previously described, two interfaces were designed: the

map display and the schedule comparison tool, detailed next.

The map display (Fig. 3) represents the primary OI,

which shows both geospatial and temporal mission infor-

mation (i.e., a timeline of mission events). Icons represent

UxVs; low, medium, and high priority targets; and search

tasks. The symbology is consistent with MIL-STD 2525

[20]. This interface also supports an instant messaging

communication tool, which is a commonly used interface in military operations. This "chat" tool simulates an in-

stant messaging tool connected to a command center that

provides high-level direction and intelligence about targets

in a designated area. In this experiment, this chat tool was

used to communicate permission-to-fire orders and target priority changes, as well as to issue queries to operators for

mission status updates.

In the interface depicted in Fig. 3, operators' primary tasks are to identify targets found by the network of UxVs

and approve weapon launches. Once a target is found,

the operator is alerted to perform a target designation task

(i.e., hostile, unknown, or friendly), along with assigning

an associated priority level (i.e., high, medium, low). In

addition, while the UxVs are capable of determining and

negotiating their own search patterns, if operators are


unhappy with the UxV-determined search patterns, they

can create new search tasks. This occurs by operators right-

clicking on those areas they feel need more attention,

which brings up a menu that allows the operators to

prioritize the tasks and give windows of desired coverage.

This insertion of a search task by the operator in effect

forces the decentralized algorithms to reallocate the UxVs such that the operators' desired tasks will be added to the

task list. This human–automation interaction scheme is

one of high-level goal-based control, as opposed to more

low-level vehicle-based control. Unlike present-day opera-

tions where multiple operators command a single vehicle,

for one operator to effectively control multiple vehicles,

the operator must control the network via high-level tasks

as opposed to individual vehicle commands.

A performance plot in the lower left corner of Fig. 3

gives operators insight into the automated planner per-

formance as the graph shows expected performance (given

an a priori cost function) compared to actual performance.

When the automation generates a new plan with a pre-

dicted performance that is 10% higher than the current

plan’s performance, the replan button in Fig. 3 turns greenand flashes. This illuminated replan button indicates that a

new plan is ready for operator approval, and when the

replan button is selected, the operator is taken to the

schedule comparison tool (SCT, Fig. 4).

The three geometrical forms at the top of the SCT in

Fig. 4 are configural displays that enable the operator to

quickly compare three schedules: the current, working,

and proposed schedules.

Fig. 3. Map display.

Fig. 4. The schedule comparison tool.

Configural displays allow operators to utilize more efficient perceptual processes rather than cognitively demanding processes

that rely on memory and inference [21]. This reliance on

perceptual reasoning is known as direct perception, and

for this decision support display in Fig. 4, the goal is to

graphically depict to the operator how much area will be

searched given the plan, as well as how many tasks of the

differing priorities will be accomplished.

In Fig. 4, the left form (gray) is the current UxV schedule. The right form (green) is the latest automation

proposed schedule. The middle schedule (blue) is the

working schedule that results from the user modifying the

plan by querying the automation to assign particular tasks.

The rectangular grid on the upper half of each shape re-

presents the estimated area that the UxVs will search ac-

cording to the plan. The hierarchical priority ladders show

the percentage of tasks assigned in high (top), medium (middle), and low (bottom) priority levels. Thus, in keep-

ing with the direct-perception paradigm, the shape that is

filled with the most color depicts the best plan. In addition,

while the SCT button turns green whenever the automa-

tion determines a 10% better plan, operators can select this

button at anytime to rearrange the task list if desired.

When the operator first enters the SCT, the working

schedule is identical to the proposed schedule. The operator can conduct a "what if" query process by dragging the

desired unassigned tasks into the large center triangle in

Fig. 4 labeled "assign." This query forces the automation to

generate a new plan if possible, which then becomes the

working schedule. The configural display of the working

schedule alters to reflect the changes of newly assigned

tasks. However, due to possible resource shortages, all tasks

may not be assigned to the UxVs, which is representative of real-world constraints. When a task cannot be assigned, the

task that was dragged into the center triangle pops out,

representing the case where the automated planners could

not honor the operator’s requests due to a constraint viola-

tion in time, fuel required, or priority conflict (i.e., only

tasks with higher priority could be accommodated).

The working schedule configural display updates with

every individual query so that the operator can quickly compare the three schedules based on quantity of color.

This querying process represents a more collaborative

effort between the human and automation, which has been

shown to improve operator performance and situation

awareness in similar complex settings [22]. Ultimately, the

operator either accepts the working or proposed schedule

or can cancel to continue with the current schedule.

The system is designed so that either both screens can be displayed if the operator's ground control stations sup-

port such screen real estate, or the screens alternate if only

a single display is available. The single display paradigm

was used in the experiment detailed in the next section

since the expected environment was one of an expedi-

tionary controller, i.e., a human moving through the envi-

ronment with only a laptop to control the UxVs.

III. THE EXPERIMENT

In order to determine the impact of human interaction

with the decentralized vehicle network on mission per-

formance, a human-on-the-loop experiment was con-

ducted and the results compared to those of a perfectly

obedient human who always agreed with the automated

planner’s proposed schedule. For this experiment, parti-

cipants were responsible for one rotary-wing UAV, one

fixed wing UAV, one unmanned surface vehicle (USV),

and a WUAV. The UAVs and USVs were responsible for

searching for targets, which each had a unique ID internal

to the system so ground truth was always known to the

experimenters for later analysis. The targets continually

moved on predetermined paths (unknown to the UxVs and

operators), so there was no guarantee that the targets

would be found. Once designated, hostile targets were

tracked by one or more UxVs until the human operator

approved WUAV missile launches. For the purposes of this

experiment, it was assumed that the UxVs could not be

destroyed.

The experiment was conducted on a Dell Optiplex

GX280 with a Pentium 4 processor and an Appian

Jeronimo Pro 4-Port graphics card. The display’s resolution

was 1280 × 1024 pixels. To familiarize each subject with

the interfaces in Figs. 3 and 4, a self-paced, slide-based

tutorial was provided, as well as a practice session. The

training took approximately 30 min to complete. Practice

was followed by three 10-min test sessions, representing

each of the three possible replan intervals, discussed in detail in the next section, in a counterbalanced and ran-

domized order. Each scenario was different, but similar in

difficulty. The interface recorded all operator actions.

Because it has been previously established that the rate

at which an automated scheduling tool prompts the user

for intervention can negatively influence UAV operator

performance due to high workload [22], the subjects in this

experiment were all exposed to three different replanning

intervals of 30, 45, and 120 s. This means that the operator

was prompted by the green illumination of the replan

button and an aural replan alert every 30/45/120 s, indi-

cating an improved schedule was available. In actual use,

such automated planners would not likely generate plans

with such regularity. However, the rate of operator

prompting was fixed at these intervals (with a standard

deviation of ±5 s) for experimental control. OPS-USERS

can typically generate a new, theoretically improved plan

once every 10 s, so these new plans were simply sup-

pressed until the correct window for that session replan

interval was achieved.
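A minimal sketch of how such prompt suppression might be realized is shown below: the planner may propose a schedule roughly every 10 s, but the operator alert is withheld until the session's 30/45/120-s window (with the ±5-s jitter) has elapsed. The class and its jitter model are illustrative assumptions, not the experimental software.

```python
import random

class ReplanPrompter:
    """Withholds planner proposals until the session's replan interval elapses.

    The planner may propose a new schedule roughly every 10 s, but the
    operator is only alerted every `interval_s` seconds, plus a small jitter
    modeled here as uniform within +/- 5 s for experimental control.
    """
    def __init__(self, interval_s, jitter_s=5.0, seed=None):
        self.interval_s = interval_s
        self.jitter_s = jitter_s
        self.rng = random.Random(seed)
        self.next_prompt = self._draw(0.0)
        self.pending_plan = None

    def _draw(self, now):
        return now + self.interval_s + self.rng.uniform(-self.jitter_s, self.jitter_s)

    def on_new_plan(self, t, plan):
        """Called whenever the planner produces an improved schedule."""
        self.pending_plan = plan            # keep only the most recent proposal
        if t >= self.next_prompt:
            self.next_prompt = self._draw(t)
            prompt, self.pending_plan = self.pending_plan, None
            return prompt                   # illuminate the replan button with this plan
        return None                         # suppress until the window opens

# Example: a 45-s condition, with the planner proposing every 10 s.
prompter = ReplanPrompter(interval_s=45.0, seed=1)
for t in range(0, 121, 10):
    if prompter.on_new_plan(float(t), plan=f"plan@{t}s") is not None:
        print(f"operator prompted at t={t}s")
```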

The original subject population consisted of 31 subjects. However, analysis of the resulting data through

statistical clustering revealed that only approximately one-

third of the subjects (N = 9) actually followed instructions to collaborate with the automation and evaluate the auto-

mation’s plans at the 30/45/120 s intervals. These subjects


were labeled consenters, in that they consented to the replanning schedule, although they did not necessarily

agree to the proposed plan, i.e., the consenters changed

the automation’s proposed schedule about one-third of the

time. Consenters had statistically significantly improved

performance over the remaining 22 subjects who were

labeled dissenters (who essentially ignored the automa-

tion) or mixed consenters (who only sometimes agreed

with the automation) in terms of when to replan.

Characteristics of the consenters included higher than average video gaming experience and no military experience,

i.e., those participants with military experience had nega-

tive attitudes towards UVs. An in-depth analysis of the

consenter versus dissenter performance is detailed else-

where [23], but for the purposes of this paper which

assesses the added value of human–automation collabora-

tion, only the data from the consenter group will be considered since they were statistically superior performers as

compared to the remaining 22 participants. For the re-

mainder of this paper, this consenting group will be called

collaborators since they generally agreed to work with the

automation, with occasional input to redirect the auto-

mation as necessary. They did not necessarily agree with

the automation; on average, they disagreed with 30% of automation-generated plans and conducted "what if" queries to generate their own slightly modified plans.

After the human-on-the-loop experiment, the second

portion of this experiment was conducted during which a

single operator played the role of a perfectly obedient

operator. This operator completed a total of 27 counterbalanced trials (nine each at the 30/45/120 s intervals).

balanced trials (nine each at the 30/45/120 s intervals).

This operator always agreed with the replanning rates, i.e.,

always entered the SCT when prompted, and also agreed with the proposed schedule, i.e., the operator never at-

tempted to change the plan. In addition, this operator

never inserted new unprompted search tasks, in effect

always trusting that the automation was performing the

optimal search and tracking task allocation. The use of a

single operator reduced variability in response time, which

was generally less than 500 ms, typical for such point-and-

click behavior that requires no decision making on the part of the operator.

Performance-dependent variables for both phases of

the experiment included the percentage of area covered

(which is critical since one primary task of the UxVs was to

cover as much area as possible), number of targets found,

number of hostile targets neutralized, and the average

operator utilization for a given session. Utilization, a

workload measure, is the percent time an operator is busy over the entire mission. Operators were considered busy

when performing one or more of the following tasks:

creating search tasks, identifying and designating targets,

approving weapons launches, interacting via the chat box,

and replanning in the SCT. Monitoring time was not in-

cluded in percent busy time. Previous research has shown

that in single-operator goal-based control of multiple

entities, operator performance can significantly drop when tasked greater than 70% in terms of utilization [24], [25].
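As a concrete reading of this metric, the sketch below computes utilization from logged busy intervals, merging overlaps so that concurrent interactions are not double counted and leaving monitoring time out entirely; the interval log format is an assumption.

```python
def utilization(busy_intervals, mission_length_s):
    """Percent of mission time the operator was busy.

    `busy_intervals` is a list of (start_s, end_s) spans logged while the
    operator was creating search tasks, designating targets, approving
    weapon launches, chatting, or replanning in the SCT. Overlapping spans
    are merged so concurrent tasks are not double counted; monitoring time
    is excluded by construction.
    """
    merged = []
    for start, end in sorted(busy_intervals):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    busy = sum(end - start for start, end in merged)
    return 100.0 * busy / mission_length_s

# Example: a 600-s session with a few logged interaction spans.
spans = [(10, 40), (35, 60), (200, 230), (420, 480)]
print(round(utilization(spans, 600.0), 1))   # -> 23.3
```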

IV. RESULTS AND DISCUSSION

Since this mission was one of searching, tracking found targets, and neutralizing hostile targets, each of these

mission elements constitutes an effects-based performance

metric. These are now discussed in turn. For all statistical

tests reported, α = 0.05.

The first mission performance metric, percentage of

area searched (Fig. 5), illustrates the effectiveness of the

overall system when operators collaborated with the algo-

rithm (which meant changing the algorithm's plan approximately 30% of the time), as compared to when the

human was perfectly obedient and always executed the

automation’s plan. The overall mixed 3 � 2 ANOVA (with

repeated measures across the three replanning intervals)

shows a statistically significant difference between collab-

orative and obedient humans [Fð1; 24Þ ¼ 13:3; p G 0:001].There was no overall main effect for the replanning inter-

val [Fð2; 24Þ ¼ 1:7; p ¼ 0:171] or interaction between re-planning interval and obedient versus collaborative human

[Fð2; 24Þ ¼ 0:711; p ¼ 0:501]. Bonferroni pairwise com-

parisons reveal that for the 30- and 45-s conditions, the

collaborative human–automation team is superior to the

obedient human (p G 0:001 for both comparisons), but

that there was no statistical difference at the 120-s interval

ðp ¼ 0:411Þ. This indicates that for the faster replan in-

tervals, the collaborative human provided significant ad-vantage to the algorithm, increasing the overall area

covered by up to 30%, on average. However, this large gap

closed in the longer replan intervals, with the collaborative human not appreciably improving the percent area covered compared to what the automation essentially planned on its own.

Fig. 5. Search task performance.
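For readers who want to reproduce this style of analysis, the sketch below shows one way a mixed 3 × 2 ANOVA of this kind could be run in Python with pandas and pingouin; the library choice, column names, and the dummy numbers are illustrative assumptions and are not the study's data or tooling.

```python
import pandas as pd
import pingouin as pg

# Long-format data: one row per (participant, replan interval) cell. These are
# illustrative dummy values, not the study's data.
rows = []
for subject, group, scores in [
        (1, "collaborative", (62, 70, 71)),
        (2, "collaborative", (60, 68, 72)),
        (3, "collaborative", (64, 69, 70)),
        (4, "obedient",      (45, 48, 66)),
        (5, "obedient",      (44, 50, 68)),
        (6, "obedient",      (46, 52, 65))]:
    for interval, area in zip((30, 45, 120), scores):
        rows.append({"subject": subject, "group": group,
                     "interval": interval, "area_pct": area})
df = pd.DataFrame(rows)

# Mixed 3 x 2 ANOVA: replan interval is the within-subject factor and
# collaborative vs. obedient is the between-subjects factor.
aov = pg.mixed_anova(data=df, dv="area_pct", within="interval",
                     subject="subject", between="group")
print(aov[["Source", "F", "p-unc"]])
```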

It should also be noted that the collaborative human effort for searching in terms of area covered was statistically no different between the 45- and 120-s replanning intervals (p = 0.355), but significantly lower at the 30-s interval (p = 0.002, as compared to 120 s). However, the

automation’s performance with respect to search linearly

increased as replanning interval increased. The reason for

this improvement is likely due to the fact that the UxVs

spend a greater amount of time searching when the replanning interval is longer. Search is a spare time strategy,

and so if the replan interval is longer, other tasks such as

track or neutralize may be delayed because they are not

released to the distributed tactical planner until the next

replan. These results suggest that there was a plateau for

human effort, but due to the limited experimental condi-

tions, it is not evident in the data that the automation

reached a plateau. This will be discussed further in a subsequent section.

Related to the area searched metric is the targets found

metric, since simply searching the most space is not a

comprehensive performance metric, i.e., it represents a

subset of objectives. Since ultimately the goal was to des-

troy as many targets as possible, the number of targets

found can be a proxy measure for the quality of search.

Using the same 3 × 2 ANOVA model discussed previously, the metric of percentage of available targets found by par-

ticipants (Fig. 6) demonstrates that when the humans

worked with the planner, significantly more targets were

found across all replanning intervals (F(1,24) = 17.8, p < 0.001) than when the human perfectly obeyed automation directives. Again, there were no other main effects for replanning rate [F(2,24) = 0.470, p = 0.628] or interaction [F(2,24) = 1.85, p = 0.179].

With humans occasionally redirecting the automation

(i.e., changing the automation’s proposed schedule 30% of

the time), up to 50% improvement was seen in the number

of targets found, as compared to if the automation had

been left alone. Unlike the area-searched metric, there was no convergence of human–automation performance in

that the human–automation team always performed more

than 20% better for every replanning rate condition. Once

again, the collaborative human performance appeared to

plateau for the 45- and 120-s intervals in that they were

statistically no different (p = 0.891), while the 30-s condition was again statistically different from the 120-s condition (p = 0.046).

Another important mission performance metric was

the percentage of time targets were successfully tracked,

i.e., not lost. Recall that targets were not continually

tracked, in that UxVs could multitask, and attempt to track

multiple targets. In this experiment, we were not able to

detect any measurable human operator input that im-

proved upon the automation, which was able to effectively

track targets 79%–99% of the time. This is not to say that the distributed planner would perform this well in all

cases, but just for the numbers of targets and UxVs in this

short experimental scenario, the planner performed very

well and did not significantly lose target tracks. Moreover,

since human operators could only assist with finding

lost targets, they could only indirectly influence tracking

targets. However, how often operators inserted and reprio-

ritized search tasks could affect tracking performance, and we leave this as an area of future research.

For the last mission performance metric, hostiles neu-

tralized, there was no statistical difference between the

collaborative and obedient humans, meaning that active

human management of this process did not improve upon

the automation’s performance [Fð1; 24Þ ¼ 0:001; p ¼0:974]. In addition, there was no statistical difference in

the replanning intervals [Fð2; 24Þ ¼ 1:36; p ¼ 0:272], sothat regardless of the 30-, 45-, or 120-s interval, the hos-

tiles neutralized performance was consistent. There was

no significant interaction [Fð2; 24Þ ¼ 0:400; p ¼ 0:675].These results are not surprising, given that of the three

mission performance requirements, human intervention

for hostile neutralization was minimal, i.e., once a target

was designated as hostile, execution of the neutralization

task was very rule based, in that as soon as a target was marked hostile, it was shadowed by a WUAV until an order

was received to neutralize. Given the clearly established

rule set to manage this process, the automation was very

capable of positioning the WUAV correctly, without hu-

man intervention.

Given that workload is a significant concern in

single-operator control of multiple UxVs, operator utilization (i.e., percent busy time) was measured across the three replanning intervals.

Fig. 6. Target-finding task performance.

The results showed no significant difference in utilization across the three replanning levels [F(2,24) = 2.817, p = 0.080], which is not

surprising since the operators in this analysis represent the

top performers. The average percent utilization was 44%

(±7%), indicating that operators were not overloaded and

well below the threshold of 70%.

A. Function Allocation Analysis

The results from the experiment clearly show that

ultimately mission success was dependent on the collab-

oration between the operators and the automation, and

that mutually exclusive function allocation either within or

across tasks is not an effective solution. Given the mission

for this team of heterogeneous UVs to search, track, and neutralize enemy targets, and the results seen from this

experiment, Table 1 summarizes the function allocation

between the human and the automation for the various

mission components that promoted the best overall system

performance, as motivated by the statistical results in the

previous section. As Figs. 5 and 6 demonstrate, the best

search performance was obtained when the human guided

the automation, as opposed to leaving the search task to just the automation. These results are somewhat expected,

as the human ability to conduct relatively effortless effec-

tive visual searches as compared to computers has long

been recognized [26]. However, these results demonstrate

just how much advantage human guidance in the search

task can provide. In addition, these results suggest that

human performance could plateau as a function of

increased replan times, but the results are conflicting in terms of the capabilities of the distributed planners.

For this set of experimental conditions, human per-

formance in terms of the search task appeared to peak at

the 45-s replanning interval, with no statistically signifi-

cant improvement when replanning at the 120-s interval.

For the percent area covered, the automated planner did

worse than when augmented with human guidance at the

30- and 45-s intervals, but at the 120-s interval, it performed the same. This is possible evidence that there may

have been a replanning interval that would have promoted

better automated planning performance in the search task.

This is an area of current active research, as it is not ob-

vious how to determine, given an uncertain environment

with emergent targets, how often a distributed network of

UVs should communicate in order to maximize the search effort.

While the benefit of the human ability to aid in the

search task appeared to become on par with that of the

automation as the time intervals between planning grew

longer, this relationship did not hold true for the targets

found metric. In this case, the human–automation team

far exceeded the automation’s ability to find targets. While

percent of area searched is an important metric given the stated mission, number of targets found demonstrates the

effectiveness or quality of the search. Thus, humans, aided

by the automation, were better able to judge not just where

the automation should be searching in terms of areas left

uncovered, but also when areas should be revisited and

when to change a course of action when a current plan

appeared to be suboptimal. It is this combination of recog-

nizing both where and when to search that led to superior human–automation teaming in the target-finding task.

While some tasks benefitted from a human–automation

collaborative effort, there are some tasks that should be

primarily executed by either just the UxVs or humans. For

example, given the severe consequences of misidentifica-

tion of a target and the lack of technical progress that has

been made in automated target recognition, target identi-

fication is expected to remain a human endeavor for the near future. However, improved sensor

technology may be able to augment and aid the human in

this complex, high-risk task.

Alternatively, tracking performance did not improve

with human assistance, suggesting that this task was best

left to automated planners. However, one caveat is that a

large range of possible scenarios was not examined in this

experiment, so under different conditions, results could have varied. However, the tracking task is one of complex,

rapid computation in that several variables must be con-

sidered, such as locations and likely trajectories of targets, locations and movement tracks of UxVs, and the need to

revisit targets to maximize tracking coverage in a limited

resource environment. Given the inability of a single ope-

rator to be able to make such rapid, precise computations

in a dynamic environment, it is expected that automation would be able to execute this task assuming that the target

was clearly identified. Recent developments in automated

target tracking have been realized in operational settings

such that, for example, a designated target in the form of a

white truck can be automatically tracked as it moves. It remains to be determined how humans could augment or assist in these real-world tracking tasks.

Table 1 Agent(s) That Promoted the Best Task Performance in the Human-on-the-Loop Experiment

Last, it should be noted that the hostile neutralization

task was another that benefitted from a collaborative

human–automated planner effort. This task is a colla-

borative one because the human operator made the final

decision to release a weapon, which is a stressful and

information-gathering intensive task. The automation kept

the WUAV in place so that if and when a command for neutralization was received, the system was ready to

respond. Thus, the operator was able to offload lower level

position keeping tasks to the automation, while reserving

precious cognitive resources for an overall knowledge-

based task of firing a weapon that will likely have grave

consequences.

V. CONCLUSION

In the future concept of one operator supervising multiple

collaborative UxVs, the potential exists for high operator

workload and negative performance consequences. As a

result, significant autonomy is needed to aid the operator

in this multiple UxV control task. Due to the dynamic and

uncertain nature of the environment, control of collabo-

rative and decentralized UxVs requires rapid automated replanning. However, as demonstrated in this study, hu-

man management of the automated planners is critical, as

automated planners cannot always generate accurate solu-

tions for every combination of events. Though fast and able

to handle complex computation far better than humans,

computer optimization algorithms are often "brittle" in that they can only take into account those quantifiable variables identified as critical during the design stage [27], [28].

This research has shown, for an admittedly narrow set

of conditions, that given a decentralized UV network, hu-

man guidance can provide substantial benefit in search

tasks, with arguable little improvement in tracking tasks.

Most importantly, this research has demonstrated that

there is a shared space in such missions for human–

automation collaboration. While there may be some mis-

sion tasks that can be mutually assigned to either a humanor UxV (due primarily to inherent limitations), tasks exist

that improve substantially when humans and automation

work together. Similar results have been achieved in

human-guided optimization of heuristic search algorithms

in other scheduling domains [29], as well as in single-operator supervisory control of a swarm of UAVs for target

pursuit [30]. Current work is underway to identify char-

acteristics of scheduling and planning problems and

algorithms that make them particularly suitable for

human–algorithm collaboration.

This research also raises a set of new questions that deserve further consideration. While algorithms such as those embedded in OPS-USERS can typically generate new schedules in just seconds, often these schedules are not optimal, i.e., solutions generated by the predetermined static cost function may not reflect the true state of all variables in a dynamic command and control scenario. Such planners will consider even a solution that is 1% better than a previous solution to be a "better solution," but often such algorithms cannot account for "satisficing" behavior [31] that ultimately is just as "optimal" from the viewpoint of stakeholders. Reducing the gap between planner assumptions and human expectations is an important area of future research. Moreover, in this study, allowing more time between replans improved the search performance metrics, so future work is needed to objectively determine effective replanning intervals. These intervals are likely dynamic parameters, and more research is needed to determine whether some Pareto front exists that could predict a region of optimized performance.
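To illustrate the gap described above, the sketch below contrasts a planner that adopts any schedule with a marginally lower static cost against a satisficing rule that replans only when the projected gain is large enough to matter to stakeholders. The cost values, threshold, and function names are hypothetical and are not drawn from OPS-USERS.

# Hypothetical comparison of two replan-acceptance rules. A conventional
# optimizer swaps in any schedule whose static cost is even slightly lower;
# a satisficing rule replans only when the projected gain is large enough
# to matter, reducing churn between nearly equivalent plans.

def accepts_any_improvement(current_cost: float, candidate_cost: float) -> bool:
    """Accept a new schedule if it is better at all, even by 1%."""
    return candidate_cost < current_cost

def accepts_if_satisficing(current_cost: float, candidate_cost: float,
                           min_relative_gain: float = 0.10) -> bool:
    """Accept only if the relative improvement exceeds a meaningful threshold."""
    if current_cost <= 0.0:
        return candidate_cost < current_cost
    gain = (current_cost - candidate_cost) / current_cost
    return gain >= min_relative_gain

if __name__ == "__main__":
    current, candidate = 100.0, 99.0   # a schedule that is 1% "better"
    print(accepts_any_improvement(current, candidate))   # True: the planner churns
    print(accepts_if_satisficing(current, candidate))    # False: not worth a replan

How to set such a threshold, and whether it should vary with mission tempo, is closely related to the open question of effective replanning intervals raised above.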

However, changing the replan interval to better suit planner performance could introduce new, unintended consequences. It remains an open area of research to determine how to design such a system so that it achieves a balance between human guidance and human interference. Previous research has shown that operators who refuse to collaborate with the automation can actually degrade overall system performance and significantly increase their own workload [23]. Thus, designing such a system so that it reduces operator workload and engenders trust and collaboration is critical.

Acknowledgment

The authors would like to thank A. Clare, C. Hart, and D. Southern (Massachusetts Institute of Technology (MIT), Cambridge) and P. Maere (Delft University of Technology, Delft, The Netherlands), who were instrumental in gathering the data for this effort.

REFERENCES

[1] World Unmanned Aerial Vehicle Systems 2010, Teal Group Corporation, Manassas, VA, 2009.
[2] R. Teo, J. S. Jang, and C. Tomblin, "Automated multiple UAV flight: The Stanford DragonFly UAV program," in Proc. 43rd IEEE Conf. Decision Control, Nassau, Bahamas, 2004, DOI: 10.1109/CDC.2004.1429422.
[3] M. Alighanbari and J. How, "Decentralized task assignment for unmanned aerial vehicles," in Proc. IEEE Conf. Decision Control/Eur. Control Conf., Sevilla, Spain, 2005, pp. 5668–5673.
[4] J. Tisdale, A. Ryan, K. Zu, D. Tornqvist, and J. K. Hedrick, "A multiple UAV system for vision-based search and localization," in Proc. Amer. Control Conf., 2008, pp. 1985–1990.
[5] N. Nigam, S. Bieniawski, I. Kroo, and J. Vian, "Control of multiple UAVs for persistent surveillance: Algorithm description and hardware demonstration," in Proc. AIAA Infotech@Aerospace Conf., Seattle, WA, 2009.
[6] Network Centric Warfare: Department of Defense Report to Congress, Department of Defense, Office of the Secretary of Defense, Washington, DC, 2001.
[7] M. L. Cummings, S. Bruni, S. Mercier, and P. J. Mitchell, "Automation architecture for single operator-multiple UAV command and control," Int. Command Control J., vol. 1, pp. 1–24, 2007.
[8] M. L. Cummings and P. J. Mitchell, "Predicting controller capacity in supervisory control of multiple UAVs,"


IEEE Trans. Syst. Man Cybern. A, Syst. Humans, vol. 38, no. 2, pp. 451–460, Mar. 2008.
[9] S. Dixon, C. Wickens, and D. Chang, "Mission control of multiple unmanned aerial vehicles: A workload analysis," Human Factors, vol. 47, pp. 479–487, 2005.
[10] M. A. Goodrich, "On maximizing fan-out: Towards controlling multiple unmanned vehicles," in Human-Robot Interactions in Future Military Operations, M. Barnes and F. Jentsch, Eds. Surrey, U.K.: Ashgate, 2010.
[11] J. P. How, C. Fraser, K. C. Kulling, L. F. Bertuccelli, O. Toupet, L. Brunet, A. Bachrach, and N. Roy, "Increasing autonomy of UAVs: Decentralized CSAT mission management algorithm," IEEE Robot. Autom., vol. 16, no. 2, pp. 43–51, Jun. 2009.
[12] H. L. Choi, L. Brunet, and J. P. How, "Consensus-based decentralized auctions for robust task allocation," IEEE Trans. Robot., vol. 25, no. 4, pp. 912–926, Aug. 2009.
[13] E. W. Dijkstra, "A note on two problems in connexion with graphs," Numerische Mathematik, vol. 1, pp. 269–271, 1959.
[14] M. Flint, M. Polycarpou, and E. Fernandez-Gaucherand, "Cooperative path-planning for autonomous vehicles using dynamic programming," in Proc. 15th Triennial World Congr., Barcelona, Spain, 2002, DOI: 10.3182/20020721-6-ES-1901.01305.
[15] I. Maza and A. Ollero, "Multiple UAV cooperative searching operation using polygon area decomposition and efficient coverage algorithms," Distrib. Autonom. Robot. Syst., vol. 6, pp. 221–230, 2007.
[16] K. R. Guruprasad and D. Ghose, "Multi-agent search using Voronoi partitions," in Proc. Int. Conf. Adv. Control Optim. Dyn. Syst., Bangalore, India, 2007, pp. 380–383.
[17] F. Bourgault, T. Furukawa, and H. F. Durrant-Whyte, "Optimal search for a lost target in a Bayesian world," Field Service Robot., vol. 24, pp. 209–222, 2003.
[18] Y. Jin, A. Minai, and M. Polycarpou, "Cooperative real-time search and task allocation in UAV teams," in Proc. IEEE Conf. Decision Control, 2003, vol. 1, pp. 7–12.
[19] M. Flint, E. Fernandez-Gaucherand, and M. Polycarpou, "A probabilistic framework for passive cooperation among UAVs performing a search," in Proc. 16th Int. Symp. Math. Theory Netw. Syst., 2004.
[20] Interface Standard: Common Warfighting Symbology, MIL-STD-2525B, Department of Defense, Washington, DC, Jan. 1999.
[21] J. J. Gibson, The Ecological Approach to Visual Perception. Boston, MA: Houghton Mifflin, 1979.
[22] M. L. Cummings, A. S. Brzezinski, and J. D. Lee, "Operator performance and intelligent aiding in unmanned aerial vehicle scheduling," IEEE Intell. Syst., vol. 22, Special Issue on Interacting With Autonomy, no. 2, pp. 52–59, Mar./Apr. 2007.
[23] M. L. Cummings, A. Clare, and C. Hart, "The role of human-automation consensus in multiple unmanned vehicle scheduling," Human Factors, vol. 52, pp. 17–27, 2010.
[24] D. K. Schmidt, "A queuing analysis of the air traffic controller's workload," IEEE Trans. Syst. Man Cybern., vol. SMC-8, no. 6, pp. 492–498, Jun. 1978.
[25] M. L. Cummings and C. Nehme, "Modeling the impact of workload in network centric supervisory control settings," in Neurocognitive and Physiological Factors During High-Tempo Operations, S. Kornguth, Ed. Surrey, U.K.: Ashgate, 2010, pp. 23–40.
[26] P. M. Fitts, Human Engineering for an Effective Air Navigation and Traffic Control System. Washington, DC: Nat. Res. Council, 1951.
[27] P. Smith, E. McCoy, and C. Layton, "Brittleness in the design of cooperative problem-solving systems: The effects on user performance," IEEE Trans. Syst. Man Cybern. A, Syst. Humans, vol. 27, no. 3, pp. 360–371, May 1997.
[28] B. G. Silverman, "Human-computer collaboration," Human-Computer Interaction, vol. 7, pp. 165–196, 1992.
[29] G. W. Klau, N. Lesh, J. Marks, and M. Mitzenmacher, "Human-guided search," J. Heuristics, vol. 16, no. 3, pp. 289–310, 2010.
[30] F. Legras and G. Coppin, "Autonomy spectrum and performance perception issues in swarm supervisory control," Proc. IEEE, 2012, DOI: 10.1109/JPROC.2011.2174103.
[31] H. A. Simon, R. Hogarth, C. R. Plott, H. Raiffa, K. A. Schelling, R. Thaler, A. Tversky, and S. Winter, Decision Making and Problem Solving. Washington, DC: Nat. Acad. Press, 1986.

ABOUT THE AUTHORS

M. L. Cummings (Senior Member, IEEE) received the Ph.D. degree in systems engineering from the University of Virginia, Charlottesville, in 2004.

She is an Associate Professor in the Aeronautics and Astronautics Department and the Director of the Humans and Automation Laboratory at the Massachusetts Institute of Technology (MIT), Cambridge.

Dr. Cummings is an Associate Fellow of the American Institute of Aeronautics and Astronautics (AIAA), a member of the National Research Council Board on Human Systems Integration, and a member of the Naval Research Advisory Committee.

Jonathan P. How (Senior Member, IEEE) received the B.A.Sc. degree from the University of Toronto, Toronto, ON, Canada, in 1987 and the S.M. and Ph.D. degrees in aeronautics and astronautics from the Massachusetts Institute of Technology (MIT), Cambridge, in 1990 and 1993, respectively.

He then studied for two years at MIT as a Postdoctoral Associate for the Middeck Active Control Experiment (MACE) that flew onboard the Space Shuttle Endeavour in March 1995. Currently, he is the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics at MIT. Prior to joining MIT in 2000, he was an Assistant Professor in the Department of Aeronautics and Astronautics, Stanford University, Stanford, CA.

Prof. How was the planning and control lead for the MIT DARPA Urban Challenge team that placed fourth. He was the recipient of the 2002 Institute of Navigation Burka Award and a recipient of a Boeing Special Invention award in 2008. He is an Associate Fellow of the American Institute of Aeronautics and Astronautics (AIAA).

Andrew Whitten received the B.S.E. degree in aerospace engineering from the University of Michigan, Ann Arbor, in 2008 and the S.M. degree from the Department of Aeronautics and Astronautics, Massachusetts Institute of Technology (MIT), Cambridge, in 2010.

He is a Performance and Flying Qualities Engineer for the United States Air Force. He works for the Global Vigilance Combined Test Force, Edwards Air Force Base (AFB), CA, flight testing the RQ-4 Global Hawk UAV.

Olivier Toupet received the M.S. degree in aerospace engineering from SUPAERO, Toulouse, France, in 2004 and the M.S. degree in aeronautics and astronautics from the Massachusetts Institute of Technology (MIT), Cambridge, in 2006.

He is an expert in multivehicle command and control and optimization and, at the time of this research effort, was a lead Autonomy, Controls and Estimation (ACE) Research Engineer at Aurora's Research and Development Center, Cambridge, MA.
