Last update: January 26, 2010
Intelligent Agents
CMSC 421: Chapter 2
CMSC 421: Chapter 2 1
Agents and environments
[Figure: an agent perceives its environment through sensors (receiving percepts) and acts on it through actuators (performing actions).]
Russell & Norvig’s book is organized around the concept of intelligent agents
♦ humans, robots, softbots, thermostats, etc.
The agent function maps from percept histories to actions:
f : P∗ → A
The agent program runs on the physical architecture to produce f
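As a toy illustration (not from the slides), the distinction can be sketched in Python: the agent function f maps an entire percept history in P* to an action, while the agent program is the concrete code that realizes f one percept at a time. The names and the example policy here are illustrative.

```python
# The agent function f: P* -> A maps a whole percept history to an action.
def agent_function(percept_history):
    """Conceptual object: a mapping from percept histories to actions."""
    # Example policy: act only on the most recent percept
    location, status = percept_history[-1]
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

# The agent program realizes f incrementally, one percept per step.
history = []

def agent_program(percept):
    history.append(percept)
    return agent_function(tuple(history))

print(agent_program(("A", "Dirty")))   # -> Suck
print(agent_program(("A", "Clean")))   # -> Right
```

Note that the agent function is a mathematical abstraction (it could be an infinite table), while the agent program must run on finite hardware.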
Chapter 2 - Purpose and Outline
Purpose of Chapter 2: basic concepts relating to agents
♦ Agents and environments
♦ Rationality
♦ The PEAS model of an environment
♦ Environment types
♦ Agent types
Vacuum-cleaner world
A B
Percepts: location and contents, e.g., [A, Dirty]
Actions: Left, Right, Suck, NoOp
A vacuum-cleaner agent
Percept sequence          Action
[A, Clean]                Right
[A, Dirty]                Suck
[B, Clean]                Left
[B, Dirty]                Suck
[A, Clean], [A, Clean]    Right
[A, Clean], [A, Dirty]    Suck
...                       ...
function Reflex-Vacuum-Agent( [location,status]) returns an action
if status = Dirty then return Suck
else if location = A then return Right
else if location = B then return Left
What is the right function?
Can it be implemented in a small agent program?
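The pseudocode above translates almost line for line into Python (a sketch; the percept is the [location, status] pair from the slides):

```python
def reflex_vacuum_agent(percept):
    """Simple reflex agent for the two-square vacuum world."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
print(reflex_vacuum_agent(("B", "Clean")))  # -> Left
```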
Rationality
Fixed performance measure evaluates the environment sequence:
– one point per square cleaned up in time T?
– one point per clean square per time step, minus one per move?
– penalize for > k dirty squares?
A rational agent chooses whichever action maximizes the expected value of the performance measure, given the percept sequence to date
Rational ≠ omniscient
– percepts may not supply all relevant information
Rational ≠ clairvoyant
– action outcomes may not be as expected
Hence, rational ≠ successful
Rational ⇒ exploration, learning, autonomy
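A small numeric sketch of "maximize expected value" (the outcome model and numbers here are invented for illustration, not from the slides): each action has a distribution over performance-measure values, and the rational choice is the action with the highest expectation.

```python
# Hypothetical outcome model: action -> list of (probability, value) pairs.
outcomes = {
    "Suck":  [(0.9, 10), (0.1, 0)],   # usually cleans the square
    "Right": [(1.0, -1)],             # moving costs one point
    "NoOp":  [(1.0, 0)],
}

def expected_value(action):
    """Expectation of the performance measure for one action."""
    return sum(p * v for p, v in outcomes[action])

best = max(outcomes, key=expected_value)
print(best)  # -> Suck (expected value 0.9 * 10 + 0.1 * 0 = 9.0)
```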
PEAS
To design a rational agent, we must specify the task environment
Consider, e.g., the task of designing an automated taxi:
Performance measure?
Environment?
Actuators?
Sensors?
PEAS
To design a rational agent, we must specify the task environment
Consider, e.g., the task of designing an automated taxi:
Performance measure? safety, destination, profits, legality, comfort, . . .
Environment? streets/freeways, traffic, pedestrians, weather, . . .
Actuators? steering, accelerator, brake, horn, speaker/display, . . .
Sensors? video, accelerometers, gauges, engine sensors, keyboard, GPS, . . .
Internet shopping agent
Performance measure?
Environment?
Actuators?
Sensors?
Internet shopping agent
Performance measure? price, quality, appropriateness, efficiency
Environment? current and future WWW sites, vendors, shippers
Actuators? display to user, follow URL, fill in form
Sensors? HTML pages (text, graphics, scripts)
Environment types
Episodic: the task is divided into atomic episodes, each to be considered by itself
Sequential: the current decision may affect all future decisions
Static: the world does not change while the agent is thinking
                    Solitaire   Backgammon   Internet shopping   Taxi
Fully observable?   No          Yes          No                  No
Deterministic?      Yes*        No           Partly              No
Episodic?           No          No           No                  No
Static?             Yes         Semi         Semi                No
Discrete?           Yes         Yes          Yes                 No
Single-agent?       Yes         No           No                  No
*After the cards have been dealt
The environment type largely determines the agent design
Real world: partially observable, stochastic, sequential, dynamic, continuous, multi-agent
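One way to make the classification concrete (an illustrative sketch, not part of the slides) is to record each environment's properties as data and query them:

```python
# Properties taken from the table above; "Semi" and "Partly" kept as strings.
environments = {
    "Solitaire":         {"observable": "No",  "deterministic": "Yes",
                          "episodic": "No", "static": "Yes",
                          "discrete": "Yes", "single_agent": "Yes"},
    "Backgammon":        {"observable": "Yes", "deterministic": "No",
                          "episodic": "No", "static": "Semi",
                          "discrete": "Yes", "single_agent": "No"},
    "Internet shopping": {"observable": "No",  "deterministic": "Partly",
                          "episodic": "No", "static": "Semi",
                          "discrete": "Yes", "single_agent": "No"},
    "Taxi":              {"observable": "No",  "deterministic": "No",
                          "episodic": "No", "static": "No",
                          "discrete": "No",  "single_agent": "No"},
}

# Example query: which environments are discrete?
discrete = [name for name, props in environments.items()
            if props["discrete"] == "Yes"]
print(discrete)  # -> ['Solitaire', 'Backgammon', 'Internet shopping']
```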
Agent types
Four basic types, in order of increasing generality:
– simple reflex agents
– reflex agents with state
– goal-based agents
– utility-based agents
All of these can be turned into learning agents
Simple reflex agents
[Figure: a simple reflex agent. Sensors report what the world is like now; condition-action rules determine what action to do now; actuators act on the environment.]
Example
function Reflex-Vacuum-Agent( [location,status]) returns an action
if status = Dirty then return Suck
else if location = A then return Right
else if location = B then return Left
Reflex agents with state
[Figure: a reflex agent with state. The agent combines the current percept with its internal state, a model of how the world evolves, and a model of what its actions do, to infer what the world is like now; condition-action rules then determine what action to do now.]
Example
Suppose the percept didn’t tell the agent what room it’s in. Then the agent could remember its location:
function Vacuum-Agent-With-State( [status]) returns an action
static: location, initially A
if status = Dirty then return Suck
else if location = A then
location ← B
return Right
else if location = B then
location ← A
return Left
Above, I’ve assumed we know the agent always starts out in room A. What if we didn’t know this?
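A Python sketch of the stateful agent above (assuming, as the slide does, that the agent starts in room A):

```python
class VacuumAgentWithState:
    """Reflex agent with state: it remembers its own location because
    the percept now contains only the dirt status."""

    def __init__(self, initial_location="A"):
        self.location = initial_location  # internal state

    def act(self, status):
        if status == "Dirty":
            return "Suck"
        elif self.location == "A":
            self.location = "B"   # predict the effect of moving
            return "Right"
        else:                     # location B
            self.location = "A"
            return "Left"

agent = VacuumAgentWithState()
print(agent.act("Dirty"))  # -> Suck  (still believes it is in A)
print(agent.act("Clean"))  # -> Right (now believes it is in B)
print(agent.act("Clean"))  # -> Left  (back to A)
```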
Goal-based agents
[Figure: a goal-based agent. Using its state, a model of how the world evolves, and a model of what its actions do, the agent predicts what it will be like if it does action A, and compares the result against its goals to decide what action to do now.]
Utility-based agents
[Figure: a utility-based agent. Like the goal-based agent, it predicts what it will be like if it does action A, but it scores each predicted state with a utility function (how happy it will be in such a state) and chooses the action leading to the best score.]
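A minimal sketch of the utility-based selection loop in the vacuum world (the model, utility function, and names are illustrative, not from the slides): predict the state each action leads to, score it with a utility function, and pick the best.

```python
# Hypothetical world model: a state is (location, {square: status}).
def result(state, action):
    """Predicts the next state ("what my actions do")."""
    location, dirt = state
    dirt = dict(dirt)  # copy so the original state is untouched
    if action == "Suck":
        dirt[location] = "Clean"
    elif action == "Right":
        location = "B"
    elif action == "Left":
        location = "A"
    return (location, dirt)

def utility(state):
    """How happy the agent would be: one point per clean square."""
    _, dirt = state
    return sum(1 for s in dirt.values() if s == "Clean")

def utility_based_agent(state, actions=("Suck", "Left", "Right", "NoOp")):
    """Choose the action whose predicted result has the highest utility."""
    return max(actions, key=lambda a: utility(result(state, a)))

state = ("A", {"A": "Dirty", "B": "Clean"})
print(utility_based_agent(state))  # -> Suck
```

A goal-based agent would instead test each predicted state against a binary goal; the utility function generalizes this by grading states, which lets the agent trade off conflicting objectives.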
Learning agents

[Figure: a learning agent. The performance element chooses actions as before; a critic compares the agent's behavior against a performance standard and sends feedback to the learning element, which makes changes to the performance element's knowledge and sets learning goals; a problem generator suggests exploratory actions.]
Summary
Agents interact with environments through actuators and sensors
The agent function describes what the agent does in all circumstances
The performance measure evaluates the environment sequence
A perfectly rational agent maximizes expected performance
Agent programs implement (some) agent functions
PEAS descriptions define task environments
Environments are categorized along several dimensions:
observable? deterministic? episodic? static? discrete? single-agent?
Some basic agent architectures:
reflex, reflex with state, goal-based, utility-based