Testing cooperative autonomous systems for unwanted emergent behaviour and dangerous self-adaptations
Jörg Denzinger, Department of Computer Science, University of Calgary [email protected]
Malik Atalla, Bernhard Bauer, Karel Bergmann, Michael Blackadar, Jeff Boyd, Tom Flanagan, Jonathan Hudson, Holger Kasinger, Jordan Kidney, Torsten Steiner, Chris Thornton, Jan‐Philipp Steghöfer
Testing for unwanted emergent behavior and dangerous self‐adaptations J. Denzinger
Awareness (www.aware‐project.eu/), a FET coordination action funded by the European Commission under FP7, which provides support for researchers interested in “Self‐Awareness in Autonomic Systems”
Jennifer Willies, Levent Gürgen, Klaus Moessner, Abdur Rahim Biswas and Fano Ramparany
IoT aims at a large number of autonomous entities working together, managing themselves to adapt to task and environment, and creating emergent properties. But what about:
Unwanted emergent behavior?
Dangerous adaptations?
How can we test for that and/or avoid it?
General idea: use learning to create event sequences for the tested system that reveal examples of unwanted emergent behavior and dangerous adaptations.
Use simulations (if necessary) to provide the feedback necessary for learning.
[Architecture diagram: a Learner controls the event-generating agents Ag_event,1 … Ag_event,n, which act on the environment Env containing the tested agents Ag_tested,1 … Ag_tested,m and the bystander agents Ag_byst,1 … Ag_byst,k.]
In theory, anything able to learn event sequences could be used.
In practice, evaluating complete event sequences is easier than trying to evaluate the potential of partial sequences or sequence skeletons, so evolutionary methods are advised.
But: use as much knowledge as possible, via targeted operators.
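The evolutionary search over complete event sequences can be sketched as below. This is a minimal illustrative loop, not the authors' implementation: `random_event`, `simulate`, and the mutation operator are hypothetical stand-ins, and a real test system would plug in the simulation of the tested system as the fitness oracle.

```python
import random

def random_event(rng):
    # An event here is just (x, y, announce_time); real events are application-specific.
    return (rng.uniform(0, 100), rng.uniform(0, 100), rng.randint(0, 50))

def simulate(sequence):
    # Stand-in for running the tested system on the event sequence and
    # measuring how close it came to the unwanted behavior (higher = closer).
    return sum(t for (_, _, t) in sequence)  # dummy fitness

def evolve(seq_len=10, pop_size=20, generations=30, seed=0):
    rng = random.Random(seed)
    pop = [[random_event(rng) for _ in range(seq_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=simulate, reverse=True)          # best sequences first
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = list(parent)
            child[rng.randrange(seq_len)] = random_event(rng)  # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=simulate)  # best attack sequence found

best = evolve()
```

A targeted operator would replace the blind point mutation with a domain-informed change (e.g. moving an event closer to a known bottleneck).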
We need to evaluate how near the simulation of a given event sequence came to creating a specific (unwanted) behavior:
We evaluate the simulation state after each event (the step-fitness) and sum up these fitness values.
The step-fitness depends on the application (although we see some common patterns across our applications).
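The summed step-fitness scheme can be sketched as follows; `apply_event` and `step_fitness` are illustrative stand-ins for the simulation step and the application-specific state evaluation.

```python
def apply_event(state, event):
    # Stand-in for advancing the simulation by one event.
    return state + event

def step_fitness(state):
    # Application-specific: how close is this state to the unwanted behavior?
    return state * state

def sequence_fitness(events, initial_state=0):
    state = initial_state
    total = 0
    for ev in events:
        state = apply_event(state, ev)
        total += step_fitness(state)   # evaluate the state after *each* event
    return total

# With these stand-ins, events [1, 2, 3] give states 1, 3, 6
# and fitness 1 + 9 + 36 = 46.
```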
The main influences on the complexity are the number of event generators and the length of the event sequences (these determine the search space of the learner).
The size of the tested system only plays a role in the simulations (so, hopefully, linear).
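A rough back-of-the-envelope for why these two factors dominate: assuming (as a simplification) that each position in a sequence can hold any of E discretized events produced by the event generators, the learner's search space has E^L points for sequences of length L.

```python
def search_space_size(num_events, seq_length):
    # |space| = E ** L under the simplifying discretization assumption above.
    return num_events ** seq_length

# e.g. 10 discretized events and sequences of length 8 already
# give 10**8 candidate sequences.
```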
Testing computer game AIs: FIFA 99 (CEC‐04, CIG‐05), ORTS (CIG‐09), StarCraft (AIIDE‐11)
Finding problems in student-written MAS for the rescue simulator ARES (ECAI‐06, IAT‐06)
Testing surveillance networks: harbor surveillance and interdiction (CISDA‐09), patrolling robots with stationary sensor platforms
Testing and evaluating self‐organizing, self‐adapting transportation systems (ADAPTIVE‐10, SASO‐12)
Testing agriculture sensor networks (and watering machinery) (on‐going work)
SASO‐08; Communications of SIWN 4, 2008; EASe‐09; ADAPTIVE‐10. Scenario: a group of agents performing dynamic pickup-and-delivery tasks. Objective: optimize performance (here: distance traveled).
Self‐organizing system based on infochemicals: tasks announce themselves via infochemicals that are propagated.
Transport agents follow infochemical trails while sending out infochemicals themselves.
Picked‐up tasks announce this via infochemicals, again.
Test goal: how inefficient can the tested system be?
Events: pickup and delivery coordinates plus the time when the event is announced to the tested system.
Learner: a GA learning a single event sequence. Fitness: one simulation, then comparing the emergent solution to the optimal solution quality, normalized by the optimal solution (maximizing this difference).
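The normalized fitness can be written down directly; the function name is illustrative, but the formula follows the description above: the difference between the emergent and the optimal distance, divided by the optimal distance.

```python
def inefficiency_fitness(emergent_distance, optimal_distance):
    # (emergent - optimal) / optimal:
    # 0.0 means the tested system was optimal;
    # 3.0 means it traveled 4 times the optimal distance.
    return (emergent_distance - optimal_distance) / optimal_distance
```

A value of 3.0 corresponds to the "4 times worse than optimum" sequences reported below.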
The test system consistently found event sequences that the tested system solved 4 times worse than the optimum (over varying event numbers).
EASe 2010, EASe 2011, ADAPTIVE‐10. Scenario: a group of agents performing dynamic pickup-and-delivery tasks, with quite a number of recurring tasks (every day). Objective: optimize performance (here: distance traveled).
Self‐organizing system based on infochemicals, with an advisor that: gets observations from all agents; determines recurring events; computes an optimal (good) solution for these; determines what individual agents do wrong and creates advice, i.e. exception rules, for them (several rule variants).
Test goal: how much damage to performance can adaptations by the advisor do (if exploited by an adversary)?
Events as before. Learner: a GA learning two event sequences, a setup sequence and a break sequence, with targeted operators for "twinning".
Fitness: the simulation performs the setup sequence often enough to trigger adaptation, then runs the break sequence, comparing
(1) the break-sequence emergent solution before and after adaptation, plus (2) the achieved adaptation, plus (3) the nearness of the break solution to the optimum, again normalized by the optimal break solution (main focus on maximizing (1)).
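A possible shape of this three-part fitness is sketched below. The weights and function names are assumptions for illustration, not the published formula; the point is that component (1), the degradation of the break solution caused by the adaptation, is weighted most heavily.

```python
def advisor_fitness(break_before, break_after, adapted, break_optimal,
                    w1=10.0, w2=1.0, w3=1.0):
    # (1) degradation of the break solution caused by the adaptation
    degradation = (break_after - break_before) / break_optimal
    # (2) bonus if the setup sequence actually triggered an adaptation
    adaptation_bonus = 1.0 if adapted else 0.0
    # (3) nearness of the (post-adaptation) break solution to the optimum
    nearness = (break_after - break_optimal) / break_optimal
    return w1 * degradation + w2 * adaptation_bonus + w3 * nearness
```

For example, a break sequence whose cost doubles from 100 to 200 after adaptation (optimum 100) scores 10*1.0 + 1.0 + 1.0 = 12.0 under these illustrative weights.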
The least intrusive advisor variant only made things less than twice as bad.
The test system allowed us to compare the danger potential of the different advisor variants.
The test system also found event sequences showing off the advantages of the variants.
CISDA‐09, CISDA‐11. Scenario: a group of mobile and stationary sensor platforms (with policies guiding the movement of the mobile platforms).
Settings: harbors (simulated) and an experimental robot setting (simulated and real).
Objective: protect a particular location.
Implementation of two patrol and interception policies for harbor security in a GIS-based harbor simulation.
Events: high-level waypoints for the attack agents, together with speeds for traveling between them. A standard path planner is used to navigate between waypoints (creating low-level waypoints around obstacles).
Learner: a particle swarm system; a particle represents an attack (i.e. waypoints and speeds).
Fitness: several evaluation functions for a particle, based on a simulation run, combined using goal ordering structures (<1, {<2, <3, <4}, … <n); among them: nearness to the target location, distance to the defense agents.
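One plausible reading of such a goal ordering structure is a tiered (lexicographic) comparison: particles are compared on the first objective, and only ties fall through to the next tier, where grouped objectives (the {<2, <3, <4} set) are combined. This is a sketch of that reading, not the exact published mechanism.

```python
def tiered_key(objective_values, tiers):
    # tiers: list of lists of indices into objective_values (higher = better).
    # Grouped objectives within a tier are summed as a simple combination rule.
    return tuple(sum(objective_values[i] for i in tier) for tier in tiers)

def better(a, b, tiers):
    # True if particle a beats particle b under the goal ordering
    # (Python tuples compare lexicographically, tier by tier).
    return tiered_key(a, tiers) > tiered_key(b, tiers)

# Example: objective 0 = nearness to target (primary tier),
# objectives 1-3 grouped into a second tier.
tiers = [[0], [1, 2, 3]]
```

With these tiers, a particle that is closer to the target wins regardless of the second-tier objectives; only equally close particles are ranked by the grouped tier.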
Many of the goal ordering structures are able to find weaknesses for various policies and various numbers of defenders and attackers.
Some ordering structures find more time-based attacks; others favor sacrifice attacks.
Some of the found attacks were not very intuitive and would most probably be overlooked.
The simulation reflects the real world well.
Look at the advisor concept! Before giving advice, test it:
using Monte-Carlo simulations (SASO‐11);
in an adversary situation, use the testing described before.
General guidelines for the fitness measure and the targeted operators.
How can we tell the approach to find a new problem?
For self-awareness: run-times are an issue. What if agents need a more global view for using exception rules? A "distributed" trigger?