Transcript of: Human-Aware AI (aka Darned Humans: Can’t Live with them. Can’t Live without them), by Subbarao Kambhampati, Arizona State University. Given at U. Washington on 11/2/2007.

Page 1:

Subbarao Kambhampati

Arizona State University

Human-Aware AI (aka Darned Humans: Can’t Live with them. Can’t Live without them)

Given at U. Washington on 11/2/2007

Page 2:

51-year-old field of unknown gender; Birth date unclear; Mother unknown; Many purported fathers; Constantly confuses holy grail with daily progress; Morose disposition

Page 3:

So let’s see if the future is going in quite the right direction…

Page 4:

A: A Unified Brand-name-Free Introduction to Planning (Subbarao Kambhampati)

What is missing in this picture?

[Figure: the standard agent view. An agent is coupled to its Environment through perception and action, is driven by Goals, and repeatedly asks “What action next?” (the $$$$$ Question). Annotated dimensions: Static vs. Dynamic; Observable vs. Partially Observable; perception Perfect vs. Imperfect; Deterministic vs. Stochastic; Instantaneous vs. Durative actions; Full vs. Partial goal satisfaction.]

Page 5:

What action next?

[Figure: the agent view elaborated. Percepts and Actions connect the agent to the Environment, with the dimensions spelled out: Static vs. Dynamic; Fully vs. Partially Observable; Perfect vs. Noisy percepts; Deterministic vs. Stochastic; Instantaneous vs. Durative; Sequential vs. Concurrent; Discrete vs. Continuous; Predictable vs. Unpredictable Outcomes; Full vs. Partial satisfaction. Inset: the agent picture from the previous slide (“A: A Unified Brand-name-Free Introduction to Planning,” Subbarao Kambhampati), again asking “What is missing in this picture?” next to the $$$$$ Question.]

Page 6:

AI’s Curious Ambivalence to humans..

You want to help humanity; it is the people that you just can’t stand…

• Our systems seem happiest
  – either far away from humans
  – or in an adversarial stance with humans

Page 7:

What happened to Co-existence?

• Whither McCarthy’s advice taker?
• ..or Janet Kolodner’s housewife?
• …or even Dave’s HAL?
  • (with hopefully a less sinister voice)

Page 8:

Why aren’t we doing HAAI?

• ..to some extent we are
  – Assistive technology; Intelligent user interfaces; Augmented cognition; Human-Robot Interaction
  – But it is mostly smuggled under the radar..
  – And certainly doesn’t get no respect..
    • Rodney Dangerfield of AI?
• Is it time to bring it to center stage?
  – Having these as the applied side of AI makes them seem peripheral, and little formal attention gets paid to them by the “main-stream”

Page 9:

(Some) Challenges of HAAI

• Communication
  – Human-level communication/interfacing
    • Need to understand what makes natural interfaces..
  – Explanations
    • Humans want explanations (even if fake..)
• Teachability
  – Advice Taking (without lobotomy)
    • Elaboration tolerance
  – Dealing with evolving models
    • You rarely tell everything at once to your secretary..
    • Need to operate in an “any-knowledge” mode
• Recognizing the Human’s state
  – Recognizing intent; activity
  – Detecting/handling emotions/affect

Human-aware AI may necessitate acting human (which is not necessary for non-HAAI)

Page 10:

Caveats & Worries about HAAI

• Are any of these challenges really new?
• HAAI vs. HDAI (human-dependent AI)
  – Human-dependent AI can be enormously lucrative if you find the right sweet spot..
  – But will it hamper eventual progress to (HA)AI?
• Advice taking can degenerate to advice-needing..
• Designing HAAI agents may need competence beyond computer science..

Page 11:

Are the challenges really new? Are they too hard?

• Isn’t any kind of feedback “advice giving”? Isn’t reinforcement learning already foregrounding “evolving domain models”?
  – A question of granularity. There is no need to keep the interactions monosyllabic..
• Won’t communication require NLP and thus become AI-complete?
  – There could well be a spectrum of communication modalities that could be tried
• Doesn’t recognition of human activity/emotional state really require AI?
  – ..it does if we want HAAI (you want to work with humans, you need to have some idea of their state..)

Page 12:

HDAI: Finding “Sweet Spots” in Computer-Mediated Cooperative Work

• It is possible to get by with techniques blithely ignorant of semantics when you have humans in the loop
  – All you need is to find the right sweet spot, where the computer plays a pre-processing role and presents “potential solutions”
  – …and the human very gratefully does the in-depth analysis on those few potential solutions
• Examples:
  – The incredible success of the “Bag of Words” model! (see the sketch after this slide)
    • Bag of letters would be a disaster ;-)
    • Bag of sentences and/or NLP would be good
      – ..but only to your discriminating and irascible searchers ;-)
• Concern:
  – Will pursuit of HDAI inhibit progress towards eventual AI?
    • By inducing perpetual dependence on (rather than awareness of) the human in the loop?
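To make the “sweet spot” concrete, here is a minimal, hypothetical sketch of the bag-of-words idea in a human-in-the-loop setting: the program knows nothing about semantics, it just scores documents by word overlap (cosine over term counts) and hands the top few candidates to a human for the real judgment. The corpus, query, and function names are illustrative, not from the talk.

```python
from collections import Counter
from math import sqrt

def bag_of_words(text):
    """Represent a document as unordered word counts: no semantics at all."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bags of words."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def shortlist(query, corpus, k=3):
    """The computer's pre-processing role: rank by word overlap and return a few
    'potential solutions' for the human to actually read and judge."""
    q = bag_of_words(query)
    ranked = sorted(corpus, key=lambda doc: cosine(q, bag_of_words(doc)), reverse=True)
    return ranked[:k]

# Illustrative usage: the human in the loop reads only the shortlist.
corpus = [
    "planning with incomplete domain models",
    "stochastic gradient descent for deep networks",
    "human aware planning for human robot teaming",
]
for doc in shortlist("human aware planning", corpus, k=2):
    print(doc)
```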

Page 13:

Delusions of Advice Taking: Give me Advice that I can easily use

• Planners that expect “advice” that is expressed in terms of their internal choice points
  – HSTS, a NASA planner, depended on this type of knowledge..
• Learners that expect “advice” that can be easily included into their current algorithm
  – “Must-link”/“Must-not-link” constraints used in “semi-supervised” clustering algorithms (see the sketch after this slide)

Moral: It is wishful to expect advice that will be tailored to your program internals. Operationalizing high-level advice is your (AI program’s) responsibility.
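As an illustration of advice that fits a learner's internals, here is a minimal, hypothetical sketch of the assignment step in a COP-KMeans-style constrained clustering loop: the must-link/must-not-link pairs are usable precisely because they can be checked inside the algorithm's own cluster-assignment choice point. Function and variable names are mine, not from the talk or from any specific system.

```python
def violates(point, cluster_id, assignment, must_link, cannot_link):
    """Check the 'advice' inside the algorithm's own choice point:
    would putting `point` in `cluster_id` break a pairwise constraint?"""
    for a, b in must_link:
        other = b if a == point else a if b == point else None
        if other is not None and other in assignment and assignment[other] != cluster_id:
            return True
    for a, b in cannot_link:
        other = b if a == point else a if b == point else None
        if other is not None and assignment.get(other) == cluster_id:
            return True
    return False

def assign(point, clusters_by_distance, assignment, must_link, cannot_link):
    """Try clusters in order of distance, skipping any that violate the advice."""
    for cluster_id in clusters_by_distance:
        if not violates(point, cluster_id, assignment, must_link, cannot_link):
            assignment[point] = cluster_id
            return cluster_id
    raise ValueError("No constraint-respecting assignment exists for %r" % (point,))

# Illustrative usage with toy point ids and two clusters:
assignment = {"p1": 0}
assign("p2", [0, 1], assignment, must_link=[("p1", "p2")], cannot_link=[("p2", "p3")])
print(assignment)   # p2 is forced into cluster 0 by the must-link with p1
```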

Page 14:

HAAI pushes us beyond CS…

• By dubbing “acting rationally” the definition of AI, we carefully separated the AI enterprise from “psychology”, “cognitive science”, etc.
• But pursuit of HAAI pushes us right back into these disciplines (and more)
  – Making an interface that improves interaction with humans requires an understanding of human psychology..
    • E.g. studies showing how programs that have even a rudimentary understanding of human emotions fare much better in interactions with humans
• Are we ready to do HAAI despite this push beyond our comfort zone?

Page 15:

How are sub-areas doing on HAAI?

• Automated Planning
  – Full autonomy through complete domain models
  – Can take prior knowledge in the form of
    • Domain physics
    • Control knowledge
  – ..but seems to need it
• Machine Learning..
  – Full autonomy through tabula rasa learning over gazillion samples
  – Seems incapable of taking much prior knowledge
    • Unless sneaked in through features and kernels..

I’ll focus on the “teachability” aspect in two areas that I know something about


Page 16:

[Recap: the “(Some) Challenges of HAAI” slide from Page 9 is shown again: Communication; Teachability; Recognizing the human’s state. Human-aware AI may necessitate acting human (which is not necessary for non-HAAI).]

What’s Rao doing in HAAI?

• Model-lite planning
• Planning in HRI scenarios
• Human-aware information integration

Page 17:

Motivations for Model-lite

• There are many scenarios where domain modeling is the biggest obstacle
  – Web Service Composition
    • Most services have very little formal models attached
  – Workflow management
    • Most workflows are provided with little information about the underlying causal models
  – Learning to plan from demonstrations
    • We will have to contend with incomplete and evolving domain models..
• ..but our approaches assume complete and correct models..

Is the only way to get more applications to tackle more and more expressive domains?

Page 18:

From “Any Time” to “Any Model” Planning

Model-Lite Planning is planning with incomplete models
• ..where “incomplete” means “not enough domain knowledge to verify correctness/optimality”
• How incomplete is incomplete?
  • Missing a couple of preconditions/effects or user preferences?
  • Knowing no more than I/O types?

Page 19:

Page 20:

Challenges in Realizing Model-Lite Planning

1. Planning support for shallow domain models [ICAC 2005]

2. Plan creation with approximate domain models [IJCAI 2007, ICAPS Wkshp 2007]

3. Learning to improve completeness of domain models [ICAPS Wkshp 2007]

Page 21:

Challenge: Planning Support for Shallow Domain Models

• Provide planning support that exploits the shallow model available
• Idea: Explore a wider variety of domain knowledge that can either be easily specified interactively or learned/mined. E.g.
  – I/O type specifications (e.g. Woogle)
  – Task dependencies (e.g. workflow specifications)
  – Qn: Can these be compiled down to a common substrate?
• Types of planning support that can be provided with such knowledge (a minimal sketch follows this slide)
  – Critiquing plans in mixed-initiative scenarios
  – Detecting incorrectness (as against verifying correctness)
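As one way to picture “detecting incorrectness” with only shallow knowledge, here is a minimal, hypothetical sketch that critiques a plan (a sequence of steps) using nothing but I/O type specifications: it flags any step that consumes a type no earlier step (or the initial inputs) produces. The step names and types are invented for illustration; this is not the system described on the slide.

```python
def critique(plan, io_types, initial_types):
    """Flag steps whose declared input types are not produced earlier in the plan.
    io_types maps a step name to (input_types, output_types); this is the only
    'domain model' available -- shallow, but enough to detect some incorrectness."""
    available = set(initial_types)
    problems = []
    for i, step in enumerate(plan):
        inputs, outputs = io_types[step]
        missing = set(inputs) - available
        if missing:
            problems.append((i, step, sorted(missing)))
        available |= set(outputs)
    return problems   # empty list means "no incorrectness detected", not "verified correct"

# Illustrative usage with made-up web-service-like steps:
io_types = {
    "geocode":   (["address"], ["lat_long"]),
    "weather":   (["lat_long"], ["forecast"]),
    "book_taxi": (["lat_long", "time"], ["booking"]),
}
plan = ["weather", "geocode", "book_taxi"]   # 'weather' runs before anything produces lat_long
print(critique(plan, io_types, initial_types=["address", "time"]))
# -> [(0, 'weather', ['lat_long'])]
```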

Page 22:

Challenge: Plan Creation with Approximate Domain Models

• Support plan creation despite missing details in the model. The missing details may be in (1) action models or (2) cost/utility models
• Example: Generate robust “line” plans in the face of incompleteness of the action description
  – View model incompleteness as a form of uncertainty (e.g. work by Amir et al.)
• Example: Generate diverse/multi-option plans in the face of incompleteness of the cost model (a minimal sketch follows this slide)
  – Our IJCAI-2007 work can be viewed as being motivated this way..

Note: Model-lite planning aims to reduce the modeling burden; the planning itself may actually be harder
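To make “diverse/multi-option plans” concrete under an uncertain cost model, here is a minimal, hypothetical greedy sketch: from a pool of candidate plans it keeps adding the plan that is farthest (by action-set difference) from those already chosen, so the user gets genuinely different options to judge with the preferences the planner does not have. The distance measure and candidate pool are illustrative simplifications, not the IJCAI-2007 algorithm itself.

```python
def plan_distance(p, q):
    """A crude plan distance: fraction of actions the two plans do not share."""
    union = set(p) | set(q)
    if not union:
        return 0.0
    return 1.0 - len(set(p) & set(q)) / len(union)

def diverse_subset(candidates, k):
    """Greedily pick k plans, each maximizing its minimum distance to the picks so far,
    so the options shown to the user differ from each other as much as possible."""
    chosen = [candidates[0]]
    while len(chosen) < k and len(chosen) < len(candidates):
        best = max(
            (p for p in candidates if p not in chosen),
            key=lambda p: min(plan_distance(p, q) for q in chosen),
        )
        chosen.append(best)
    return chosen

# Illustrative usage with toy plans (lists of action names):
candidates = [
    ["drive", "park", "walk"],
    ["drive", "park", "shuttle"],
    ["train", "walk"],
    ["fly", "taxi"],
]
for plan in diverse_subset(candidates, k=2):
    print(plan)   # the drive plan plus a maximally different alternative
```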

Page 23:

Imprecise Intent & Diversity

Page 24:

Challenge: Learning to Improve Completeness of Domain Models

• In traditional “model-intensive” planning, learning is mostly motivated by speedup
  – ..and it has gradually become less and less important with the advent of fast heuristic planners
• In model-lite planning, learning (also) helps in model acquisition and model refinement
  – Learning from a variety of sources
    • Textual descriptions; plan traces; expert demonstrations
  – Learning in the presence of background knowledge
    • The current model serves as background knowledge for additional refinements during learning
• Example efforts
  – Much of the DARPA IL program (including our LSP system); PLOW etc.
  – Stochastic Explanation-Based Learning (ICAPS 2007 workshop)

Make planning model-lite; make learning knowledge- (model-) rich (a toy weight-refinement sketch follows this slide)
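As a toy picture of how learning can refine an incomplete model, here is a minimal, hypothetical sketch that updates the weight of a candidate effect axiom (such as “Pickup(x) -> holding(x)”) from observed plan traces, simply by counting how often the effect actually holds after the action. The axiom format mirrors the weighted axioms on the next slide; the update rule is my simplification, not the LSP/DARPA IL machinery.

```python
def refine_effect_weights(traces, candidate_effects):
    """Estimate a weight for each candidate effect axiom (action -> fact) as the
    empirical frequency with which the fact holds in the state following the action.
    `traces` is a list of (action, state_before, state_after) observations."""
    counts = {axiom: [0, 0] for axiom in candidate_effects}   # axiom -> [seen, confirmed]
    for action, _before, after in traces:
        for axiom in candidate_effects:
            ax_action, fact = axiom
            if ax_action == action:
                counts[axiom][0] += 1
                if fact in after:
                    counts[axiom][1] += 1
    return {axiom: confirmed / seen
            for axiom, (seen, confirmed) in counts.items() if seen > 0}

# Illustrative usage with two toy Blocksworld-style observations:
traces = [
    ("pickup_a", {"ontable_a", "clear_a", "armempty"}, {"holding_a"}),
    ("pickup_a", {"ontable_a", "clear_a", "armempty"}, {"ontable_a", "clear_a", "armempty"}),  # action failed
]
weights = refine_effect_weights(traces, candidate_effects=[("pickup_a", "holding_a")])
print(weights)   # {('pickup_a', 'holding_a'): 0.5} -- the effect held in half the observations
```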

Page 25:

Learning & Planning with incomplete models: A proposal..

• Represent the incomplete domain with (relational) probabilistic logic
  – Weighted precondition axioms
  – Weighted effect axioms
  – Weighted static-property axioms
• Address the learning and planning problems
  – Learning involves
    • Updating the prior weights on the axioms
    • Finding new axioms
  – Planning involves
    • Probabilistic planning in the presence of precondition uncertainty
• Consider using MaxSAT to solve problems in the proposed formulation

Towards Model-lite Planning - Sungwook Yoon

Domain Model - Blocksworld

• 0.9, Pickup(x) -> armempty()
• 1, Pickup(x) -> clear(x)
• 1, Pickup(x) -> ontable(x)
• 0.8, Pickup(x) -> holding(x)
• 0.8, Pickup(x) -> not armempty()
• 0.8, Pickup(x) -> not ontable(x)
• 1, Holding(x) -> not armempty()
• 1, Holding(x) -> not ontable(x)

Precondition Axiom: relates an action to current-state facts
Effect Axiom: relates an action to next-state facts
Static Property: relates facts within a state
(A scoring sketch follows.)
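To connect these weighted axioms to the MaxSAT idea above, here is a minimal, hypothetical sketch that represents a few of the Blocksworld axioms as weighted ground clauses and scores a one-step plan trace by the total weight of the axioms it satisfies, which is the quantity a weighted MaxSAT/MPE solver would be maximizing over candidate plans and state sequences. The encoding and scoring are deliberately simplified; they are not Sungwook Yoon's actual formulation.

```python
# Each ground axiom: (weight, kind, action, fact), where kind is 'pre' (fact must hold
# in the state the action is applied in) or 'eff' (fact must hold in the next state).
# A leading '-' on a fact means negation, e.g. '-armempty'.
AXIOMS = [
    (0.9, "pre", "pickup_a", "armempty"),
    (1.0, "pre", "pickup_a", "clear_a"),
    (1.0, "pre", "pickup_a", "ontable_a"),
    (0.8, "eff", "pickup_a", "holding_a"),
    (0.8, "eff", "pickup_a", "-armempty"),
    (0.8, "eff", "pickup_a", "-ontable_a"),
]

def holds(fact, state):
    """A positive literal must be in the state; a negated literal must be absent."""
    return fact[1:] not in state if fact.startswith("-") else fact in state

def score(trace, axioms=AXIOMS):
    """Sum the weights of satisfied axioms over a trace of (state, action, next_state)
    steps -- the objective a weighted MaxSAT/MPE solver would maximize."""
    total = 0.0
    for state, action, next_state in trace:
        for weight, kind, ax_action, fact in axioms:
            if ax_action != action:
                continue
            check_state = state if kind == "pre" else next_state
            if holds(fact, check_state):
                total += weight
    return total

# Illustrative usage: applying pickup_a in the expected initial state.
trace = [({"armempty", "clear_a", "ontable_a", "clear_b", "ontable_b"},
          "pickup_a",
          {"holding_a", "clear_b", "ontable_b"})]
print(score(trace))   # 5.3 = 0.9 + 1 + 1 + 0.8 + 0.8 + 0.8 -- all listed axioms satisfied
```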


[Figure: a probabilistic planning graph for a two-block world (blocks A and B). Proposition level 0: clear_a, clear_b, armempty, ontable_a, ontable_b. Action level 1: pickup_a, pickup_b, plus noops for each level-0 proposition. Proposition level 1 adds holding_a and holding_b; action level 2 adds stack_a_b and stack_b_a; proposition level 2 adds on_a_b and on_b_a. Edges are annotated with axiom weights such as 0.8.]

Can we view the probabilistic plangraph as a Bayes net?
• Certain fact nodes serve as evidence variables; the edge weights shown in the figure (e.g. 0.8, 0.5) come from the weighted axioms, and domain static properties can be asserted too (e.g. with weight 0.9)
• How do we find a solution? MPE (most probable explanation); there are solvers out there

DARPA Integrated Learning Project

Page 26:

MURI 2007: Effective Human-Robot Interaction under Time Pressure

Indiana Univ; ASU; Stanford; Notre Dame

Page 27:

QUIC: Handling Query Imprecision & Data Incompleteness in Autonomous Databases

Challenges in Querying Autonomous Databases

Imprecise Queries
• User’s needs are not clearly defined; hence:
  – Queries may be too general
  – Queries may be too specific
• General Solution: “Expected Relevance Ranking” (combining a Relevance Function and a Density Function)
• Challenge: Automated & non-intrusive assessment of the Relevance and Density functions (a minimal sketch follows this slide)

Incomplete Data
• Databases are often populated by:
  – Lay users entering data
  – Automated extraction
• Challenge: Rewriting a user’s query to retrieve highly relevant similar/incomplete tuples
  – However, how can we retrieve similar/incomplete tuples in the first place?
• Challenge: Provide explanations for the uncertain answers in order to gain the user’s trust
  – Once the similar/incomplete tuples have been retrieved, why should users believe them?

[CIDR 07; VLDB 07]
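As a toy illustration of “expected relevance ranking” over incomplete tuples, here is a minimal, hypothetical sketch: a tuple with a missing attribute is scored by summing, over the possible values of that attribute, the estimated density (probability) of the value times the relevance of the completed tuple to the query. The density table, relevance function, and car-like schema are invented for illustration; this is not the QUIC system's implementation.

```python
def expected_relevance(tuple_row, query, density, relevance):
    """Expected relevance of a possibly-incomplete tuple:
    sum over completions of P(missing value) * relevance(completed tuple, query)."""
    missing = [attr for attr, val in tuple_row.items() if val is None]
    if not missing:
        return relevance(tuple_row, query)
    attr = missing[0]                      # toy version: handle one missing attribute
    total = 0.0
    for value, prob in density[attr].items():
        completed = dict(tuple_row, **{attr: value})
        total += prob * expected_relevance(completed, query, density, relevance)
    return total

# Illustrative usage: a car listing with a missing body_style, query asks for a sedan.
density = {"body_style": {"sedan": 0.6, "coupe": 0.3, "wagon": 0.1}}   # estimated from the data
def relevance(row, query):
    return 1.0 if row["body_style"] == query["body_style"] else 0.2

rows = [
    {"model": "Civic",  "body_style": "sedan"},
    {"model": "Accord", "body_style": None},     # incomplete tuple
    {"model": "Miata",  "body_style": "coupe"},
]
query = {"body_style": "sedan"}
ranked = sorted(rows, key=lambda r: expected_relevance(r, query, density, relevance), reverse=True)
for r in ranked:
    print(r["model"], round(expected_relevance(r, query, density, relevance), 2))
# Civic 1.0, Accord 0.68 (= 0.6*1.0 + 0.4*0.2), Miata 0.2
```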

Page 28:

Summary: Say Hi to HAAI

• We may want to take HAAI as seriously as we take autonomous agency
  – My argument is not that everybody should do it, but rather that it should be seen as “main-stream” rather than as something merely applied
• HAAI does emphasize specific technical challenges: Communication; Teachability; Human state recognition
• Pursuit of HAAI involves pitfalls (e.g. the need to differentiate HDAI from HAAI) as well as a broadening of focus (e.g. the need to take interface issues seriously)
• Some steps towards HAAI in planning

Page 29:

Points to Ponder..

• Do we (you) agree that we might need human-aware AI?
• Do you think anything needs to change in your current area of interest as a consequence?
• (What) (Are there) foundational problems in human-aware AI?
  – Is HAAI moot without full NLP?
• How do we make progress towards HAAI?
  – Is IUI considered progress towards HAAI?
  – Is model-lite planning?
  – Is learning by X (X = “demonstrations”; “being told”…)?
  – Is elicitation of utility models/recognition of intent?

[Recap: the “(Some) Challenges of HAAI” slide from Page 9 is shown again alongside these questions: Communication; Teachability; Recognizing the human’s state; and the note that human-aware AI may necessitate acting human (which is not necessary for non-HAAI).]

Page 30:

Epilogue: HAAI is Hard but Needed..

• The challenges posed by HAAI may take us out of the carefully circumscribed goals of AI

• Given a choice, we computer scientists would rather not think about messy human interactions..

• But, do we really have a choice?