2 Intelligent Agents



Page 1: 2 Intelligent Agents

Intelligent Agents

Chapter 2, Team Teaching AI (created by Dewi Liliana)

PTIIK 2012

Page 2: 2 Intelligent Agents

Outline
• Agents and environments
• Rationality
• PEAS (Performance measure, Environment, Actuators, Sensors)
• Environment types
• Agent types

Page 3: 2 Intelligent Agents

Agents

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators

Robotic agent: cameras and infrared range finders for sensors; various motors for actuators

Page 4: 2 Intelligent Agents

Agents and environments

The agent function maps from percept histories to actions:

f: P* → A

The agent program runs on the physical architecture to produce f:

agent = architecture + program

Page 5: 2 Intelligent Agents

Vacuum-cleaner world

Percepts: location and contents, e.g., [A,Dirty]

Actions: Left, Right, Suck, NoOp

Page 6: 2 Intelligent Agents

A vacuum-cleaner agent
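The slide shows a partial tabulation of the vacuum-cleaner agent function. As a minimal sketch, it can be written as a Python dictionary from percepts to actions; the dictionary entries follow the two-square A/B world from the previous slide, while the variable and function names are illustrative, not from the slides:

```python
# Partial tabulation of the vacuum agent function: percept -> action.
# A percept is (location, status) as defined on the previous slide.
agent_table = {
    ("A", "Clean"): "Right",
    ("A", "Dirty"): "Suck",
    ("B", "Clean"): "Left",
    ("B", "Dirty"): "Suck",
}

def vacuum_agent(percept):
    # Look the current percept up in the tabulated agent function.
    return agent_table[percept]
```

This already hints at the table-lookup drawback discussed later: one entry is needed per possible percept (or percept sequence).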

Page 7: 2 Intelligent Agents

Rational agents

A rational agent does the right thing, based on what it can perceive and the actions it can perform.

The right action is the one that will cause the agent to be most successful

Performance measure: An objective criterion for success of an agent's behavior

E.g., performance measure of a vacuum-cleaner agent: amount of dirt cleaned up, amount of time taken, amount of electricity consumed, amount of noise generated, etc.

Goal: among many objectives, pick one objective to become the main focus of the agent.

Page 8: 2 Intelligent Agents

Example Goals and Performance Measures

• Graduating from college → GPA
• Becoming rich → monthly salary
• Winning the football league → position in the league table
• Being happy → level of happiness

Page 9: 2 Intelligent Agents

Rational agents

Rational Agent: For each possible percept

sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

Page 10: 2 Intelligent Agents

Rational agents

• Rationality is distinct from omniscience (all-knowing with infinite knowledge)

• Agents can perform actions in order to modify future percepts so as to obtain useful information (information gathering, exploration)

An agent is autonomous if its behavior is determined by its own experience (with ability to learn and adapt)

Page 11: 2 Intelligent Agents

PEAS

PEAS: Performance measure, Environment, Actuators, Sensors

Must first specify the setting for intelligent agent design.

Consider, e.g., the task of designing an automated taxi driver:
• Performance measure
• Environment
• Actuators
• Sensors

Page 12: 2 Intelligent Agents

PEAS

PEAS: Performance measure, Environment, Actuators, Sensors

Must first specify the setting for intelligent agent design.

Consider, e.g., the task of designing an automated taxi driver:
• Performance measure: Safe, fast, legal, comfortable trip, maximize profits
• Environment: Roads, other traffic, pedestrians, customers
• Actuators: Steering wheel, accelerator, brake, signal, horn
• Sensors: Cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard

Page 13: 2 Intelligent Agents

PEAS

• Agent: Medical diagnosis system
• Performance measure: Healthy patient, minimize costs, lawsuits
• Environment: Patient, hospital, staff
• Actuators: Screen display (questions, tests, diagnoses, treatments, referrals)
• Sensors: Keyboard (entry of symptoms, findings, patient's answers)

Page 14: 2 Intelligent Agents

PEAS

• Agent: Part-picking robot
• Performance measure: Percentage of parts in correct bins
• Environment: Conveyor belt with parts, bins
• Actuators: Jointed arm and hand
• Sensors: Camera, joint angle sensors

Page 15: 2 Intelligent Agents

PEAS

• Agent: Interactive English tutor
• Performance measure: Maximize student's score on test
• Environment: Set of students
• Actuators: Screen display (exercises, suggestions, corrections)
• Sensors: Keyboard

Page 16: 2 Intelligent Agents

Environment types

• Fully observable vs. Partially observable: An agent's sensors give it access to the complete state of the environment at each point in time. An environment is fully observable if the sensors detect all aspects that are relevant to the choice of action. An environment might be partially observable because of noisy and inaccurate sensors.

• Deterministic vs. Stochastic: The next state of the environment is completely determined by the current state and the action executed by the agent. (If the environment is deterministic except for the actions of other agents, then the environment is strategic.)

Page 17: 2 Intelligent Agents

Environment types

• Episodic vs. Sequential: The agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself. Ex. classification (episodic); chess and taxi driving (sequential).

• Static (time plays no role) vs. Dynamic (time matters): A static environment is unchanged while the agent deliberates, so the agent doesn't need to keep watching the environment to decide. Dynamic environments, by contrast, are continuously asking the agent what it wants to do. (The environment is semidynamic if the environment itself does not change with the passage of time but the agent's performance score does.)

Page 18: 2 Intelligent Agents

• Discrete vs. Continuous: A limited number of distinct states, with clearly defined percepts and actions (e.g., chess – discrete; taxi driving – continuous).

• Single agent vs. Multi agent: An agent operating by itself in an environment. Multi-agent settings split into competitive vs. cooperative agents.

Page 19: 2 Intelligent Agents

Environment types

                   Chess with a clock   Chess without a clock   Taxi driving
Fully observable   Yes                  Yes                     No
Deterministic      Strategic            Strategic               No
Episodic           No                   No                      No
Static             Semi                 Yes                     No
Discrete           Yes                  Yes                     No
Single agent       No                   No                      No

• The environment type largely determines the agent design
• The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, multi-agent

Page 20: 2 Intelligent Agents

The Structure of Agents

An agent is completely specified by the agent function mapping percept sequences to actions

One agent function (or a small equivalence class) is rational

Aim: find a way to implement the rational agent function concisely

Page 21: 2 Intelligent Agents

Table-lookup agent

Keeps track of the percept sequence and indexes it into a table of actions to decide what to do.

Drawbacks:
• Huge table
• Takes a long time to build the table
• No autonomy
• Even with learning, it takes a long time to learn the table entries
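The lookup scheme above can be sketched in a few lines of Python. This is a hedged illustration, not code from the slides: the agent closes over its percept history and indexes the whole sequence into the table, which is exactly why the table must contain one entry per possible percept sequence (the "huge table" drawback):

```python
def make_table_driven_agent(table):
    # table maps tuples of percepts (the full history) to actions.
    percepts = []  # percept sequence observed so far

    def agent(percept):
        percepts.append(percept)
        # Unknown histories fall back to NoOp (an assumption here).
        return table.get(tuple(percepts), "NoOp")

    return agent
```

For example, a two-entry table for the vacuum world drives the first two steps, after which the history falls outside the table:

```python
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}
agent = make_table_driven_agent(table)
```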

Page 22: 2 Intelligent Agents

Agent program for a vacuum-cleaner agent
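The program shown on this slide can be sketched in Python as follows; the percept format (location, status) and the action names come from the earlier vacuum-world slide, while the function name is ours:

```python
def reflex_vacuum_agent(percept):
    # Percept is (location, status); actions are Suck, Right, Left.
    location, status = percept
    if status == "Dirty":
        return "Suck"        # always clean a dirty square first
    elif location == "A":
        return "Right"       # square A is clean: move to B
    else:                    # location == "B"
        return "Left"        # square B is clean: move to A
```

Note how much smaller this program is than the equivalent lookup table: it decides from the current percept alone.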

Page 23: 2 Intelligent Agents

Agent types

Four basic types in order of increasing generality:

• Simple reflex agents: select an action based on the current percept, ignoring the rest of the percept history.

• Model-based reflex agents: maintain internal state that depends on the percept history and thereby reflects some unobserved aspects of the current state.

• Goal-based agents: have information about the goal and select actions to achieve the goal.

• Utility-based agents: assign a quantitative measure to states of the environment.

• Learning agents: learn from experience, improving performance.

Page 24: 2 Intelligent Agents

Simple reflex agents

Page 25: 2 Intelligent Agents

Condition-action rule: a condition that triggers some action. Ex. if the car in front is braking (condition), then initiate braking (action).

• Connects percepts directly to actions
• A simple reflex agent is simple but has very limited intelligence
• Correct decisions can be made only in a fully observable environment

Page 26: 2 Intelligent Agents

Simple reflex agents

function SIMPLE-REFLEX-AGENT(percept) returns an action
  static: rules, a set of condition-action rules

  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action

o The INTERPRET-INPUT function generates an abstracted description of the current state from the percept

o The RULE-MATCH function returns the first rule in the set of rules that matches the given state description
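A minimal Python sketch of this pseudocode, under the assumption that rules are (condition, action) pairs whose conditions are predicates on the state, and that unmatched states yield NoOp (both are our assumptions, not from the slides):

```python
def make_simple_reflex_agent(rules, interpret_input):
    # rules: list of (condition, action); condition is a predicate on the state.

    def rule_match(state):
        # Return the action of the first rule matching the state.
        for condition, action in rules:
            if condition(state):
                return action
        return "NoOp"  # fallback when no rule matches (an assumption)

    def agent(percept):
        state = interpret_input(percept)  # abstract the raw percept
        return rule_match(state)

    return agent
```

The braking rule from the previous slide would then look like:

```python
rules = [(lambda s: s == "car-in-front-is-braking", "initiate-braking")]
agent = make_simple_reflex_agent(rules, interpret_input=lambda p: p)
```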

Page 27: 2 Intelligent Agents

Model-based reflex agents

Page 28: 2 Intelligent Agents

The most effective way to handle partial observability is to keep track of the part of the world the agent can't see now.

The agent maintains internal state that depends on the percept history and thereby reflects some unobserved aspects of the current state.

Updating internal state information requires two kinds of knowledge:
• How the world evolves independently of the agent
• How the agent's own actions affect the world

Knowledge about “how the world works” is called a model

Page 29: 2 Intelligent Agents

Model-based reflex agents

function REFLEX-AGENT-WITH-STATE(percept) returns an action
  static: state, a description of the current world state
          rules, a set of condition-action rules
          action, the most recent action, initially none

  state ← UPDATE-STATE(state, action, percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action

o UPDATE-STATE creates the new internal state
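The same pseudocode can be sketched in Python. Everything here is illustrative: UPDATE-STATE is passed in as a function, the state is a plain dict, and rules are (condition, action) pairs as in the simple-reflex sketch. The key difference from the simple reflex agent is the persistent state and the remembered last action:

```python
def make_model_based_agent(update_state, rules, initial_state):
    # Persistent internal state, mirroring the "static" variables
    # in REFLEX-AGENT-WITH-STATE.
    memory = {"world": initial_state, "last_action": None}

    def agent(percept):
        # state <- UPDATE-STATE(state, action, percept)
        memory["world"] = update_state(memory["world"],
                                       memory["last_action"], percept)
        for condition, action in rules:      # RULE-MATCH
            if condition(memory["world"]):
                memory["last_action"] = action
                return action
        memory["last_action"] = "NoOp"
        return "NoOp"

    return agent
```

For instance, a vacuum-world model that simply remembers the last reported status of each square:

```python
def update_state(world, action, percept):
    location, status = percept
    world = dict(world)       # copy, then record what we just saw
    world[location] = status
    return world
```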

Page 30: 2 Intelligent Agents

Goal-based agents

Page 31: 2 Intelligent Agents

Knowing about the current state is not always enough to decide what to do.

The agent needs goal information that describes situations that are desirable

Page 32: 2 Intelligent Agents

Utility-based agents

Page 33: 2 Intelligent Agents

Achieving the goal alone is not enough to generate high-quality behavior in most environments.

How quickly? How safely? How cheaply? These qualities must be measured.

A utility function maps a state onto a real number (an internal version of the performance measure)
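As a minimal sketch of this idea: a utility function scores states, and the agent picks the action whose predicted resulting state scores highest. The state fields and weights below are illustrative assumptions for a taxi-like task, not values from the slides:

```python
def utility(state):
    # Illustrative trade-off: reward safety and speed, penalize cost.
    # The weights are arbitrary assumptions for this sketch.
    return 2.0 * state["safety"] + 1.0 * state["speed"] - 0.5 * state["cost"]

def best_action(actions, result):
    # result(a) predicts the state reached by taking action a;
    # choose the action with the highest-utility outcome.
    return max(actions, key=lambda a: utility(result(a)))
```

A goal-based agent would accept any route that reaches the destination; the utility function is what lets the agent prefer the faster, safer, cheaper one.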

Page 34: 2 Intelligent Agents

Learning agents

Page 35: 2 Intelligent Agents

Learning element is responsible for making improvements

Performance element is responsible for selecting external actions

Critic gives feedback on how the agent is doing
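The three components above can be sketched as a tiny Python agent. This is only an illustration of the decomposition, under our own assumptions: the learned knowledge is a table of action values, the critic returns a numeric reward, and a small epsilon makes the agent occasionally try other actions:

```python
import random

def make_learning_agent(actions, critic, epsilon=0.1):
    values = {a: 0.0 for a in actions}  # knowledge learned so far

    def performance_element(percept):
        # Performance element: select an external action.
        if random.random() < epsilon:
            return random.choice(actions)   # occasional exploration
        return max(values, key=values.get)  # exploit best-known action

    def learning_element(action, reward, lr=0.5):
        # Learning element: improve using the critic's feedback.
        values[action] += lr * (reward - values[action])

    def agent(percept):
        action = performance_element(percept)
        learning_element(action, critic(percept, action))
        return action

    return agent
```

With a critic that rewards Suck on dirty squares, the agent quickly settles on Suck even though nothing was hard-coded about dirt.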

Page 36: 2 Intelligent Agents

To Sum Up

• Agents interact with environments through actuators and sensors
• The agent function describes what the agent does in all circumstances
• The performance measure evaluates the environment sequence
• A perfectly rational agent maximizes expected performance
• Agent programs implement (some) agent functions
• PEAS descriptions define task environments
• Environments are categorized along several dimensions: observable? deterministic? episodic? static? discrete? single-agent?
• Several basic agent architectures exist: reflex, reflex with state, goal-based, utility-based