
Applications of Synthetic Vision in Agent-Based Simulation

COMP 768 Final Project

Glenn Elliott

Outline

Motivations

Previous Work in Synthetic Vision

Agent Framework

Vision Algorithms

ClearPath Modifications

Demos

Future Work

References


Motivations

Many agent simulators use “spheres of awareness” to gather information about their environment.

This is not a very accurate representation of how most people sense their environment.

A sense similar to vision would more accurately describe what an agent is aware of.


Motivations (2)

Goals of this project:

1. Implement a reusable and modifiable agent simulator.

2. Implement vision-based sensors in an efficient manner.

3. Implement agent behaviors that can only be done (or only easily done) with vision-based sensors.


Previous Work in Synthetic Vision

Some research has already been done to use synthetic vision to drive agent awareness.

[Penn01] used visibility information to model the movement of pedestrians. Showed a strong correlation between visibility and real-life movement. Raster grid approach. Not multi-agent.

[Noser95] and others have used false-color renderings from an agent’s perspective to generate lists of visible obstacles and agents. Also exploited the z-buffer for path planning. Natural support for 3D vision. Did not scale to a large number of agents due to image renderings and bandwidth limitations.

[Shao05] used ray casting on a raster grid to detect obstacles. Agents were sensed using standard radial awareness searches.


Agent Framework

Implementing a reusable and modifiable agent framework was a major goal of this project.

Inspired by the work of [Niederberger03], the “agent” is broken into several separate base components (see the sketch below):

Agent
A very basic class: nothing more than current location, velocity, and destination. Sensors and Behaviors are attached to Agent instances.

Sensor
Base class that allows Behaviors to query for information about their environment.

Behavior
Base class for implementing “intelligent” actions. Example behaviors: local collision avoidance, global path planning, exploration, playing tag, etc. May be of any complexity.


Agent Framework (2)

Vision is implemented as a Sensor. Several different implementations of vision sensors are available.

Local collision avoidance is implemented as a Behavior. ClearPath code from S. Guy [Guy09] was adapted and encapsulated as a Behavior. Other behaviors could easily be plugged in: Social Forces [Helbing95], traditional RVO [Berg08], etc.


Agent Framework (3)

Again following the work of [Niederberger03], the simulation event loop is broken up into three sequential steps (sketched below):

1. Sense()
Sensors are instructed to gather information about their environment. In the case of vision, this may mean recomputing the visibility polygon.

2. Think()
Behaviors evaluate simulation state (potentially by querying sensors) and update behavior.

3. Act()
Decisions made in the Think() stage are applied to the agent’s velocity and destination here.

Computation within each step may be done in parallel.
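A plausible sketch of this loop, continuing the class sketch above. The three-phase split is from the slides; the integration step and the hard-coded period are assumptions:

```cpp
// Each phase runs to completion over all agents before the next begins,
// which is what makes per-phase parallelism straightforward.
void SimulationStep(std::vector<Agent>& agents) {
    for (Agent& a : agents)                  // 1. Sense()
        for (auto& s : a.sensors) s->Sense(a);

    for (Agent& a : agents)                  // 2. Think()
        for (auto& b : a.behaviors) b->Think(a);

    const float dt = 1.0f / 30.0f;           // simulation period from the slides
    for (Agent& a : agents) {                // 3. Act()
        a.position.x += a.velocity.x * dt;   // apply the chosen velocity
        a.position.y += a.velocity.y * dt;
    }
}
```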


Vision Algorithms

This project explores 2D vision algorithms that compute the “visibility polygon”.

Visibility Polygon: The region of space visible from an observer in an environment.


Vision Algorithms (2)


Visibility polygon of a moving agent.

Vision Algorithms (3)

The standard visibility polygon algorithm takes O(n lg n) time, where n is the number of environment edges. It is based upon a radial sweep-line algorithm and does not exploit coherence of motion.

Several advanced algorithms [Hall02] [Hornus02], building upon the Visibility Complex [Pocchiola93] and Kinetic Data Structures [Basch99] techniques, exploit coherence of motion and allow logarithmic visibility polygon updates. They are very difficult to implement; I was unable to devise an acceptable implementation.
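For illustration, here is a minimal sketch of the simpler ray-casting formulation of the visibility polygon (the O(n²) cousin of the radial sweep named above): cast a ray toward each obstacle endpoint, plus small angular offsets to see past corners, and keep the nearest hit per ray. All names are illustrative, not the project’s code:

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <optional>
#include <vector>

struct Pt  { double x, y; };
struct Seg { Pt a, b; };  // one obstacle/environment edge

// Distance t along the ray o + t*d to segment s, if they intersect.
std::optional<double> RayHit(Pt o, Pt d, const Seg& s) {
    const double ex = s.b.x - s.a.x, ey = s.b.y - s.a.y;
    const double den = d.x * ey - d.y * ex;
    if (std::fabs(den) < 1e-12) return std::nullopt;   // parallel
    const double t = ((s.a.x - o.x) * ey - (s.a.y - o.y) * ex) / den;
    const double u = ((s.a.x - o.x) * d.y - (s.a.y - o.y) * d.x) / den;
    if (t < 0.0 || u < 0.0 || u > 1.0) return std::nullopt;
    return t;
}

// Visibility polygon vertices around observer o, in angular order.
// Assumes the observer is enclosed by the environment edges.
std::vector<Pt> VisibilityPolygon(Pt o, const std::vector<Seg>& segs) {
    std::vector<double> angles;
    for (const Seg& s : segs)
        for (Pt p : {s.a, s.b}) {
            const double a = std::atan2(p.y - o.y, p.x - o.x);
            angles.insert(angles.end(), {a - 1e-4, a, a + 1e-4});
        }
    std::sort(angles.begin(), angles.end());

    std::vector<Pt> poly;
    for (double a : angles) {
        const Pt d{std::cos(a), std::sin(a)};
        double best = std::numeric_limits<double>::infinity();
        for (const Seg& s : segs)
            if (auto t = RayHit(o, d, s)) best = std::min(best, *t);
        if (std::isfinite(best))
            poly.push_back({o.x + best * d.x, o.y + best * d.y});
    }
    return poly;
}
```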


Vision Algorithms (4)

Unable to implement the advanced vision algorithms, I opted to use the standard visibility polygon algorithm, but update it at a different frequency than the simulation:

Simulation period: 1/30 second.

Visibility update period: 1/4 second.

Worked very well to reduce computational cost.

Does not affect pedestrian-scale simulations in most cases: though the visibility polygon is not updated every step, Behaviors still see changes within the visible region between visibility polygon updates.
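A minimal sketch of this decoupling, with the two periods from the slide; the struct and method names are assumptions:

```cpp
// The sensor recomputes its visibility polygon only every 1/4 s while the
// simulation itself steps at 1/30 s; between updates, behaviors keep
// querying the stale polygon but see fresh agent positions inside it.
struct VisionSensorClock {
    double sinceUpdate = 0.0;
    static constexpr double kVisibilityPeriod = 0.25;  // 1/4 second

    // Returns true when the (expensive) polygon recomputation should run.
    bool ShouldRecompute(double dt) {  // dt = 1/30 s per simulation step
        sinceUpdate += dt;
        if (sinceUpdate < kVisibilityPeriod) return false;
        sinceUpdate = 0.0;
        return true;
    }
};
```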


Vision Algorithms (5)

Computing the visibility polygon is only half of the vision sensing problem.

The vision sensor must be able to identify obstacles and agents within the visible region.

Obstacle identification is easy, since visible obstacles make up the edges of the visibility polygon.

Agent identification is hard: a naïve algorithm takes O(n²) time to identify all agent-agent visibility relations, where n is the number of agents.
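A sketch of that naïve pairwise test, reusing the Pt/Seg types from the visibility sketch above; the helper names are hypothetical:

```cpp
#include <utility>
#include <vector>

static double Cross(Pt o, Pt a, Pt b) {
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

// True if segments ab and cd strictly cross each other.
static bool SegmentsCross(Pt a, Pt b, Pt c, Pt d) {
    return Cross(a, b, c) * Cross(a, b, d) < 0.0 &&
           Cross(c, d, a) * Cross(c, d, b) < 0.0;
}

// O(n^2) agent pairs, each checked against every obstacle edge.
std::vector<std::pair<int, int>> NaiveVisiblePairs(
        const std::vector<Pt>& agents, const std::vector<Seg>& obstacles) {
    std::vector<std::pair<int, int>> visible;
    for (size_t i = 0; i < agents.size(); ++i)
        for (size_t j = i + 1; j < agents.size(); ++j) {
            bool blocked = false;
            for (const Seg& s : obstacles)
                if (SegmentsCross(agents[i], agents[j], s.a, s.b)) {
                    blocked = true;
                    break;
                }
            if (!blocked) visible.push_back({int(i), int(j)});
        }
    return visible;
}
```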


Vision Algorithms (6)

Introduced culling into the naïve algorithm to improve performance (sketched below):

Vision sensors register with a “database” of what obstacles they can see. Sensors that have no visible obstacles in common cannot see each other.

These obstacle registrations are iterated over and visibility tests are made. A history is kept to avoid redundant tests.
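A hedged sketch of this culling pass; the container choices and names are assumptions:

```cpp
#include <algorithm>
#include <set>
#include <unordered_map>
#include <unordered_set>
#include <utility>
#include <vector>

using AgentId = int;
using EdgeId  = int;

// Only agents that share at least one visible obstacle edge are candidate
// pairs; a history set plays the role of the slide's redundant-test guard.
std::vector<std::pair<AgentId, AgentId>> CandidatePairs(
        const std::unordered_map<AgentId,
                                 std::unordered_set<EdgeId>>& visibleEdges) {
    // Invert the registrations: edge -> agents that can see it.
    std::unordered_map<EdgeId, std::vector<AgentId>> byEdge;
    for (const auto& [agent, edges] : visibleEdges)
        for (EdgeId e : edges) byEdge[e].push_back(agent);

    std::set<std::pair<AgentId, AgentId>> history;
    std::vector<std::pair<AgentId, AgentId>> out;
    for (const auto& [edge, agents] : byEdge)
        for (size_t i = 0; i < agents.size(); ++i)
            for (size_t j = i + 1; j < agents.size(); ++j) {
                auto p = std::make_pair(std::min(agents[i], agents[j]),
                                        std::max(agents[i], agents[j]));
                if (history.insert(p).second)
                    out.push_back(p);  // still needs the line-of-sight test
            }
    return out;
}
```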


ClearPath Modifications

S. Guy’s ClearPath code was encapsulated into a local collision avoidance behavior.

Tied to a vision sensor, ClearPath can make collision avoidance decisions earlier than was possible with radial spheres of awareness.


ClearPath Modifications (2)

In order to better simulate “human” collision avoidance, a minor change to ClearPath was made.

Scenario: Two people on collision paths approach each other from a distance in an empty hallway.

Realistic resolution: Anticipating the collision long before it is imminent, each person moves to an opposite side of the hallway to avoid the other.

Traditional ClearPath (w/ vision, w/o clipped RVOs) resolution: Agents move aside only enough to avoid the collision, and still come very close to each other.



ClearPath Modifications (3)

To better simulate the Realistic Resolution, a new ClearPath parameter, “personal space,” was added.

This parameter is used to artificially increase the size of the reciprocal velocity obstacles (RVOs).

Effect: Agents in the hallway scenario show more collision anticipation and give each other more space in passing.

RVOs are only increased in size when agents are sufficiently far apart. Current threshold: > 2 × personal space. Normal RVO computation is used once personal space has been violated. This makes the personal space parameter a soft constraint on motion.
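A minimal sketch of how such an inflation rule could look. The threshold is from the slide; the exact formula and names are assumptions:

```cpp
// Combined radius used when constructing the RVO cone for a pair of
// agents. Inflated only while the pair is farther apart than twice the
// personal-space distance, so personal space acts as a soft constraint.
float EffectiveRVORadius(float radiusA, float radiusB,
                         float personalSpace, float distance) {
    if (distance > 2.0f * personalSpace)
        return radiusA + radiusB + personalSpace;  // larger RVO: earlier, wider avoidance
    return radiusA + radiusB;  // normal RVO once personal space is violated
}
```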


ClearPath Modifications (4)

Figure from [Berg08].

ClearPath Modifications (5)

Personal Space:

Larger RVO with the Personal Space parameter.

Demos

Hallway Scenario

Traditional ClearPath w/ Radial Sensor

ClearPath w/ Vision and Personal Space

The ClearPath w/ Vision and Personal Space agents, anticipating the collision, quickly avoid each other AND give each other a good clearance distance.


Corner Scenario

Traditional ClearPath w/ Radial Sensor

ClearPath w/ Vision and Personal Space

Radial Sensor Agents begin to react to each other even while they are occluded. This is not a realistic behavior.


Wall Scenario #1

Traditional ClearPath w/ Radial Sensor and Personal Space

ClearPath w/ Vision and Personal Space

Radial Sensor agents react to agents on the opposite side of obstacles. This is not a realistic behavior.


Wall Scenario #2

Traditional ClearPath w/ Radial Sensor

ClearPath w/ Vision and Personal Space

Radial Sensor agents react to agents on the opposite side of obstacles.

Notice how the Radial Sensor Agents fall into a staggered movement pattern.


Tag Scenario

Agents play tag using vision to sense nearby agents.

This can only easily be done using vision-based sensors.

Red: Chasing
Green: Exploring
Blue: Fleeing
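An illustrative sketch of how a behavior might switch among these three states on top of a vision sensor. The transition rules here are assumptions, not the project’s actual logic:

```cpp
enum class TagState { Chasing /*red*/, Exploring /*green*/, Fleeing /*blue*/ };

// "It" chases whoever it can see; everyone else flees when the "it" agent
// is visible; with nothing relevant in view, agents explore.
TagState NextState(bool isIt, bool targetVisible) {
    if (isIt) return targetVisible ? TagState::Chasing : TagState::Exploring;
    return targetVisible ? TagState::Fleeing : TagState::Exploring;
}
```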


Performance

Tag Scenario:
100 Agents
200 Obstacle Edges
Visibility Update Period: 1/4 second
Simulation Period: 1/30 second

Operating Frame Rates:

~30fps: MacOS X, Single Thread, gcc 4.2.1 (Apple), Core 2 @ 2.16 GHz.

~20fps: Linux (2.6.28), Single Thread, gcc 4.3.3, Pentium M @ 1.7 GHz.


Performance (2)

Tag Scenario CPU Allocation (MacOS X)


Future Work

A more realistic vision sensor would update every time step.

Explore use of the GPU to speed up this computation.

Clipped visibility polygons to better model real vision.

Benchmarks clearly show that visible-agent identification is a major bottleneck. More advanced culling algorithms are needed.

The ideal algorithm will partition agents in relation to the obstacle environment. This is probably more important than fast nearest-neighbor queries (kd-tree or Voronoi diagram of agents). BSP? Axial Map [Turner05]?


References

[Basch99] Basch J.: Kinetic Data Structures. PhD thesis, Stanford University, June 1999.

[Berg08] van den Berg J., Patil S., Sewall J., Manocha D., Lin M.: Interactive navigation of multiple agents in crowded environments. In Proceedings of the 2008 Symposium on Interactive 3D Graphics and Games (I3D '08, Redwood City, California, February 15-17, 2008), ACM, New York, NY, pp. 139-147. 2008.

[Guy09] Guy S. J., Chhugani J., Kim C., Satish N., Dubey P., Lin M., Manocha D.: ClearPath: Highly Parallel Collision Avoidance for Multi-Agent Simulation. Technical Report TR009-006, January 2009.

[Hall02] Hall-Holt O.: Kinetic Visibility. PhD thesis, Stanford University, August 2002.

[Helbing95] Helbing D., Molnar P.: Social force model for pedestrian dynamics. Phys. Rev. E 51, pp. 4282-4286. 1995.

[Hornus02] Hornus S., Puech C.: A Simple Kinetic Visibility Polygon. In 18th European Workshop on Computational Geometry, pp. 27-30. 2002.

[Niederberger03] Niederberger C., Gross M.: Hierarchical and heterogeneous reactive agents for real-time applications. Computer Graphics Forum 22(3), Proc. Eurographics 2003. 2003.

[Noser95] Noser H., Thalmann D.: Synthetic vision and audition for digital actors. In Proc. Eurographics '95, Maastricht, pp. 325-336. 1995.

[Penn01] Penn A., Turner A.: Space Syntax Based Agent Simulation. In Proceedings of the First Pedestrian and Evacuation Dynamics Conference, pp. 99-114. 2001.

[Pocchiola93] Pocchiola M., Vegter G.: The visibility complex. In Proceedings of the Ninth Annual Symposium on Computational Geometry (SCG '93, San Diego, California, May 18-21, 1993), ACM, New York, NY, pp. 328-337. 1993.

[Shao05] Shao W., Terzopoulos D.: Autonomous pedestrians. In Proceedings of the 2005 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA '05, Los Angeles, California, July 29-31, 2005), ACM, New York, NY, pp. 19-28. 2005.

[Tu94] Tu X., Terzopoulos D.: Artificial fishes: physics, locomotion, perception, behavior. In Proc. SIGGRAPH '94, pp. 43-50. 1994.

[Turner05] Turner A., Penn A., Hillier B.: An algorithmic definition of the axial map. Environment and Planning B: Planning and Design 32(3), pp. 425-444. 2005.
