
Page 1: Software Agents

Software Agents

CS 486
January 29, 2003

Page 2: Software Agents

What is an agent?

• “Agent” is one of the more ubiquitous buzzwords in computer science today.
– It’s used for almost any piece of software.

• “I know an agent when I see one” (and the paperclip is not one.)

Page 3: Software Agents

Examples

• News-filtering agents
• Shopbots/price comparison agents
• Bidding agents
• Recommender agents
• Personal assistants
• Middle agents/brokers
• Etc.

Page 4: Software Agents

Real-world agents

• Secret agents
• Travel agents
• Real estate agents
• Sports/showbiz agents
• Purchasing agents

• What do these jobs have in common?

Page 5: Software Agents

What is an agent?

An agent is a program with:
• Sensors (inputs)
• Effectors (outputs)
• An environment
• The ability to map inputs to outputs

– But what program isn’t an agent, then?
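As a rough illustration of the sensors/effectors view above, here is a minimal sketch (not from the lecture; the names Environment, Agent, sense, act, and apply are assumptions) of a program that repeatedly maps inputs to outputs in an environment:

    class Environment:
        """Toy environment: a thermostat-style world with one temperature reading."""
        def __init__(self, temperature=18.0):
            self.temperature = temperature

        def sense(self):
            # what the agent's sensors report
            return {"temperature": self.temperature}

        def apply(self, action):
            # what the agent's effectors change
            if action == "heat":
                self.temperature += 1.0
            elif action == "cool":
                self.temperature -= 1.0


    class Agent:
        """Maps inputs (percepts) to outputs (actions) with a fixed policy."""
        def act(self, percept):
            if percept["temperature"] < 20.0:
                return "heat"
            if percept["temperature"] > 22.0:
                return "cool"
            return "idle"


    env, agent = Environment(), Agent()
    for _ in range(5):
        action = agent.act(env.sense())   # sensors -> decision
        env.apply(action)                 # decision -> effectors
        print(action, env.temperature)

Under this definition almost any control loop qualifies, which is exactly the objection raised on this slide.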

Page 6: Software Agents

What is an agent?

• Can perform domain-oriented reasoning.
– Domain-oriented: a program has some specific knowledge about a particular area.
– Not completely general.

• Reasoning – What does this mean?
– Does the program have to plan ahead?
– Can it be reactive?
– Must it be declarative?

Page 7: Software Agents

What is an agent?

• Agents must be able to:
– Communicate
– Negotiate

But what do these terms mean? Language?
Are pictures, GUIs communication?
How sophisticated is “negotiation”?
Communication should “degrade gracefully.”

Page 8: Software Agents

What is an agent?

• Lives in a complex, dynamic environment
– Getting at the notion of a complicated problem

• Has a set of goals
– An agent must have something it intends to do. (We’ll return to this idea.)

• Persistent state
– Distinguishes agents from subroutines, servlets, etc.

Page 9: Software Agents

What is an agent?

• Autonomy/autonomous execution

• Webster’s:
– Autonomy: “The quality or state of being self-governing”

• More generally, being able to make decisions without direct guidance.
– Authority, responsibility

Page 10: Software Agents

Autonomy

• Autonomy is typically limited or restricted to a particular area.
– “Locus of decision making”

• Within a prescribed range, an agent is able to decide for itself what to do.
– “Find me a flight from SF to NYC on Monday.”

• Note: I didn’t say what to optimize – I’m allowing the agent to make tradeoffs.

Page 11: Software Agents

What is an agent?

• Not black and white
• Like “object,” it’s more a useful characterization than a strict category

• It makes sense to refer to something as an agent if it helps the designer to understand it.
– Some general characteristics:
• Autonomous, goal-oriented, flexible, adaptive, communicative, self-starting

Page 12: Software Agents

Objects vs. Agents

• So how are agents different from objects?
• Objects: passive, noun-oriented, receivers of action.
• Agents: active, task-oriented, able to take action without receiving a message.
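A minimal sketch (assumed, not from the slides) of this contrast: the Counter object only changes state when someone calls a method on it, while the TickerAgent thread initiates actions on its own schedule; all class and method names here are illustrative.

    import threading, time

    class Counter:
        """Object: passive, only acts when it receives a message (method call)."""
        def __init__(self):
            self.n = 0
        def increment(self):
            self.n += 1

    class TickerAgent(threading.Thread):
        """Agent: active, takes action without being asked."""
        def __init__(self, counter):
            super().__init__(daemon=True)
            self.counter = counter
        def run(self):
            for _ in range(3):
                self.counter.increment()   # acts on its own initiative
                time.sleep(0.1)

    c = Counter()
    TickerAgent(c).start()
    time.sleep(0.5)
    print(c.n)  # 3 – the counter changed without the main program calling increment()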

Page 13: Software Agents

Examples of agent technology, revisited.

• eBay bidding agents
– Very simple – can watch an auction and increment the price for you.

• Shopping agents (Dealtime, evenBetter)
– Take a description of an item and search shopping sites.
– Are these agents?

• Recommender systems (Firefly, Amazon, Launch, Netflix)
– Users rate some movies/music/things, and the agent suggests things they might like.
– Are these agents?

Page 14: Software Agents

More examples of agents

• Brokering
– Constructing links between
• Merchants
• Certificate Authorities
• Customers
• Agents

• Auction agents
– Negotiate payment and terms.

• Conversational/NPC agents (Julia)
• Remote Agent (NASA)

Page 15: Software Agents

The Intentional Stance

• We often speak of programs as if they are intelligent, sentient beings:
– The compiler can’t find the linker.
– The database wants the schema to be in a different format.
– My program doesn’t like that input. It expects the last name first.

• Treating a program as if it is intelligent is called the intentional stance.
• It doesn’t matter whether the program really is intelligent; it’s helpful to us as programmers to think as if it is.

Page 16: Software Agents

The Knowledge Level

• The intentional stance leads us to program agents at the knowledge level (Newell).
– Reasoning about programs in terms of:
• Facts
• Goals
• Desires/needs/wants/preferences
• Beliefs

• This is often referred to as declarative programming.

• We can think of this as an abstraction, just like object-oriented programming.
– Agent-oriented programming

Page 17: Software Agents

Example

• Consider an agent that will find books for me that I’m interested in.

• States: a declarative representation of outcomes.
– hasBook(“Moby Dick”)

• Facts: categories of books, bookseller websites, etc.

• Preferences – a ranking over states
– hasBook(“Neuromancer”) > hasBook(“Moby Dick”)
– hasBook(B) & category(B) == SciFi > hasBook(B2) & category(B2) == Mystery

Page 18: Software Agents

Example

• Goals: find a book that satisfies my preferences.

• Take actions that improve the world state.

• Beliefs: used to deal with uncertainty
– May(likes(Chris, “Harry Potter”))
– Prob(likes(Chris, “Harry Potter”)) == 0.10
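As a rough sketch of what this knowledge-level (declarative) content might look like for the book agent, here is some illustrative Python data; the structures and the prefers helper are assumptions, not part of the lecture.

    # Facts: categories of books, bookseller sites, etc.
    category = {"Neuromancer": "SciFi", "Moby Dick": "Classic"}

    # Preferences: a ranking over outcome states, encoded here as a score per
    # category (higher = more preferred; SciFi above Mystery, per the slide).
    preference_rank = {"SciFi": 2, "Mystery": 1}

    # Beliefs: used to deal with uncertainty,
    # e.g. Prob(likes(Chris, "Harry Potter")) == 0.10
    belief = {("likes", "Chris", "Harry Potter"): 0.10}

    def prefers(book_a, book_b):
        """True if the state hasBook(book_a) is ranked above hasBook(book_b)."""
        rank_a = preference_rank.get(category.get(book_a), 0)
        rank_b = preference_rank.get(category.get(book_b), 0)
        return rank_a > rank_b

    print(prefers("Neuromancer", "Moby Dick"))  # True: SciFi outranks an unranked category

The point of the abstraction is that the agent is described by what it knows and wants, not by the procedure it runs.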

Page 19: Software Agents

Rational Machines

• How do we determine the right thing for an agent to do?

• If the agent’s internal state can be described at the knowledge level, we can describe the relationship between its knowledge and its goals.

• Newell’s Principle of Rationality:
– If an agent has the knowledge that an action will lead to the accomplishment of one of its goals, then it will select that action.

Page 20: Software Agents

Preferences and Utility

• Agents will typically have preferences
– This is declarative knowledge about the relative value of different states of the world.
– “I prefer ice cream to spinach.”

• Often, the value of an outcome can be quantified (perhaps in monetary terms).
• This allows the agent to compare the utility (or expected utility) of different actions.
• A rational agent is one that maximizes expected utility.

Page 21: Software Agents

Example

• Again, consider our book agent.
• If I can tell it how much value I place on different books, it can use this to decide what actions to take.

• prefer(SciFi, Fantasy)
• prefer(SciFi, Mystery)
• like(Fantasy), like(Mystery)
• like(book) & not_buying(otherBook) -> buy(book)

• How do we choose whether to buy Fantasy or Mystery?

Page 22: Software Agents

Example

• If my agent knows the value I assign to each book, it can pick the one that will maximize my utility (value – price).

• V(fantasy) = $10, p(fantasy) = $7; V(mystery) = $6, p(mystery) = $4
– Net $3 vs. $2 – buy fantasy.

• V(fantasy) = $10, p(fantasy) = $7; V(mystery) = $6, p(mystery) = $1
– Net $3 vs. $5 – buy mystery.
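A minimal sketch of the value-minus-price rule this slide applies (the choose_book function and dictionary layout are illustrative, not from the lecture):

    def choose_book(value, price):
        """Return the book with the largest net surplus (value - price), or None if none is positive."""
        best, best_net = None, 0.0
        for book in value:
            net = value[book] - price[book]
            if net > best_net:
                best, best_net = book, net
        return best

    value = {"fantasy": 10, "mystery": 6}

    print(choose_book(value, {"fantasy": 7, "mystery": 4}))  # fantasy: net 3 > 2
    print(choose_book(value, {"fantasy": 7, "mystery": 1}))  # mystery: net 5 > 3

With the slide’s numbers this reproduces the two conclusions: buy fantasy in the first case, mystery in the second.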

Page 23: Software Agents

Utility example

• Game costs $1 to play.
– Choose “Red”, win $2.
– Choose “Black”, win $3.
• A utility-maximizing agent will pick Black.

• Game costs $1 to play.
– Choose Red, win 50 cents.
– Choose Black, win 25 cents.
• A utility-maximizing agent will choose not to play. (If it must play, it picks Red.)

Page 24: Software Agents

Utility example

• But actions are rarely certain.
• Game costs $1.
– Red, win $1.
– Black: 30% chance of winning $10, 70% chance of winning $0.
• A risk-neutral agent will pick Black.
• What if the amounts are in millions?
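A minimal sketch (assumed; expected_net and the lottery dictionaries are illustrative names) of comparing these two actions by expected utility, net of the $1 cost to play:

    def expected_net(lottery, cost):
        """Expected payoff of a lottery {payoff: probability} minus the cost to play."""
        return sum(payoff * prob for payoff, prob in lottery.items()) - cost

    red   = {1.0: 1.0}              # win $1 for sure
    black = {10.0: 0.3, 0.0: 0.7}   # 30% chance of $10, otherwise nothing

    print(expected_net(red, 1.0))    # 0.0
    print(expected_net(black, 1.0))  # 2.0 -> a risk-neutral agent picks Black

When the stakes are in millions, a human would likely be risk-averse rather than risk-neutral, which is the point of the closing question.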

Page 25: Software Agents

Rationality as a Design Principle

• Provides an abstract description of behavior.

• Declarative: avoids specifying how a decision is reached.
– Leaves flexibility

• Doesn’t enumerate inputs and outputs.
– Scales, allows for diverse environments.

• Doesn’t specify “failure” or “unsafe” states.
– Leads to accomplishment of goals, not avoidance of failure.

Page 26: Software Agents

Agents in open systems

• Open systems are those in which no one implements all the participants.
– E.g. the Internet.

• System designers construct a protocol
– Anyone who follows this protocol can participate (e.g. HTTP, TCP).

• How to build a protocol that leads to “desirable” behavior?
– What is “desirable”?

Page 27: Software Agents

Protocol design

• By treating participants as rational agents, we can exploit techniques from game theory and economics.

• “Assume everyone will act to maximize their own payoff – how do we change the rules of the game so that this behavior leads to a desired outcome?”

• We’ll return to this idea when we talk about auctions.

Page 28: Software Agents

Agents and Mechanisms

• System designers can treat external programs as if they were rational agents.
• That is, treat external programs as if they have their own beliefs, goals and agenda to achieve.
– For example: an auction can treat bidding agents as if they are actually trying to maximize their own profit.

Page 29: Software Agents

Agents and Mechanisms

• In many cases, a system designer cannot directly control agent behavior.
– In an auction, the auctioneer can’t tell people what to bid.

• The auctioneer can control the mechanism
– The “rules of the game”

• Design goal: construct mechanisms that lead to self-interested agents doing “the right thing.”

Page 30: Software Agents

Mechanism Example

• Imagine a communication network G with two special nodes x and y.
– Edges between nodes are agents that can forward messages.
– Each agent has a private cost t to pass a message along its edge.
– If agents will reveal their t’s truthfully, we can compute the shortest path between x and y.

• How to get a self-interested agent to reveal its t?

Page 31: Software Agents

Solution

• Each agent reveals a t and the shortest path is computed.
• Costs are accumulated
– If an agent is not on the path, it is paid 0.
– If an agent is on the path, it is paid the cost of the cheapest path that doesn’t include it, minus the cost of the chosen path excluding its own t.
– P = g_next – (g_best – t)

• For example, if I bid 10 and am in a path with cost 40, and the best solution without me is 60, I get paid 60 – (40 – 10) = 30.

• The agent is compensated for its contribution to the solution.

Page 32: Software Agents

Analysis

• If an agent lies:

– Was on the shortest path, still on the shortest path.
• Payment lower – no benefit to lying.

– Was on the shortest path, now not on the shortest path.
• This means the lie was greater than g_next – g_best.
• But I would rather get a positive amount than 0!

– Not on the shortest path, but now are.
• Underbidding leads to being paid at the lower amount, but still incurring higher cost.
• Truth would be better!

Page 33: Software Agents

Example

• Cost = 2, SP = 5, NextSP = 8
• My payment if I bid truthfully: 8 – (5 – 2) = 5. Net: 5 – 2 = 3.
• If I underbid, my payment will be lower and net cost higher.
• If I overbid, I either get 0, or the same utility.
– E.g. if I bid 3, I get 8 – (5 – 3) = 6, but my net is 6 – 3 = 3.
• Therefore, truthtelling is a dominant strategy.
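A minimal sketch of the payment rule from the last three slides, P = g_next – (g_best – t), reproducing the numbers above (the payment function and argument names are illustrative, not from the lecture):

    def payment(bid, best_path_cost, best_path_without_agent):
        """Payment to an agent whose edge lies on the selected shortest path."""
        return best_path_without_agent - (best_path_cost - bid)

    # Slide 31 example: bid 10, selected path costs 40, best path without me costs 60.
    print(payment(10, 40, 60))   # 60 - (40 - 10) = 30

    # Slide 33 example: true cost 2, SP = 5, NextSP = 8.
    p = payment(2, 5, 8)         # 8 - (5 - 2) = 5
    print(p, p - 2)              # payment 5, net utility 5 - 2 = 3

Since the payment depends only on the other edges’ costs once the agent is on the path, misreporting cannot raise the agent’s net utility, which is the dominant-strategy claim on this slide.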

Page 34: Software Agents

Adaptation and Learning

• Often, it’s not possible to program an agent to deal with every contingency
– Uncertainty
– Changing domain
– Too complicated

• Agents often need to adapt to changing environments or learn more about their environment.

Page 35: Software Agents

Adaptation

• Adaptation involves changing an agent’s model/behavior in response to a perceived change in the world.

• Reactive
• Agents don’t anticipate the future, just update.

Page 36: Software Agents

Learning

• Learning involves constructing and updating a hypothesis

• An agent typically tries to build and improve some representation of the world.

• Proactive
• Try to anticipate the future.
• Most agents will use both learning and adaptation.

Page 37: Software Agents

Agents in e-commerce

• Agents play a fairly limited role in today’s e-commerce.
– Mostly still in research labs.

• Large potential both in B2C and B2B
– Assisting in personalization
– Automating payment

Page 38: Software Agents

Challenges for agents

• Uniform protocol/language
– Web services? XML?

• Lightweight, simple to use, robust.
– Always a challenge

• Critical mass
– Enough people need to adopt

• “Killer app”
– What will the agent e-mail/IM be?