
Can NLP Systems be a Cognitive Black Box?

March 2006

Jerry T. Ball, Senior Research Psychologist, Air Force Research Laboratory

Mesa, AZ

Can NLP Systems be a Cognitive Black Box?

• Is Cognitive Science Relevant to AI Hard Problems? Yes

• Is it sufficient to model input/output behavior using computational techniques that bear little resemblance to how humans process language? No

• Is it necessary to model the internals of human language processing behavior in NLP systems? Yes

– To some level of abstraction below input/output behavior

Theoretical Arguments for Looking Inside the Black Box


Green’s Argument that AI is not THE Right Method for Cognitive Science

• Green, C. (2000). Is AI the Right Method for Cognitive Science? Psycoloquy 11 (061)

• Fodor’s “Disneyland” view of AI

– AI is to psychology as Disneyland is to physics

• Denying the importance of looking inside the black box harks back to behaviorism

• But we don’t know enough about psychology to be able to program psychologically well-founded systems

• Need to get our cognitive ontology straightened out before we are likely to make much progress

– What is a thought?

• AI is the attempt to simulate something that is not, at present, at all well understood


Green Backtracks Considerably

• Vicente, K. & Burns, C. (2000). Overcoming the Conceptual Muddle: a Little Help from Systems Theory. Psycoloquy 11 (070).

• In conclusion, AI may provide a rich source of models and techniques, but until these models are tested against psychological evidence and under realistic psychological constraints, they cannot claim to have any relevance for cognitive scientists. Therefore, AI alone cannot be the method for cognitive science.

– The simple response is to do just that!

• Green’s response: I am in basic accord with their conclusion that “AI ALONE cannot be the method for cognitive science”… I simply think there are some basic problems in psychology that programming is not equipped to solve…


Harnad’s Argument for a Convergence between Cognitive Science and AI

• Harnad, S. (2000). The Convergence Argument in Mind-Modelling. Psycoloquy 11 (078)

• There are multiple ways to model small, arbitrary subsets of human functional capacity; arbitrarily many ways to capture:

– Calculating skills

– Chess playing skills

– Scene describing skills

• “Many ways to skin a cat”


Harnad’s Argument for a Convergence between Cognitive Science and AI

• There are fewer and fewer ways to capture all these skills in the same system

– The degrees of freedom for skinning ALL possible cats with the SAME resources are much narrower than those for skinning just one with ANY resources

• As we scale up from toy tasks to our full performance capacity…the degree of underdetermination of our models will shrink to the normal levels of underdetermination of scientific theories by empirical data


Relevance for this Workshop

• If Harnad is right, then AI and Cognitive Science converge on Hard Problems

AI + Hard Problems + White Box = Cognitive Science

[Diagram labels: Human-Centric, Computational, HLI (human-level intelligence)]


Relevance for this Workshop

• NLP is the quintessential AI Hard Problem

• But Harnad suggests that the Turing Test, which is a purely symbol-manipulation NLP task, isn’t hard enough

– It’s too underdetermined, allowing for multiple possible solutions

– It’s ungrounded

• Harnad’s Total Turing Test requires a robot to interact with an environment and communicate with humans over the timescale of a human life

– The symbolic representations the system acquires become grounded in experience

– Robot Functionalism


My Big Theoretical Claim

• Ungrounded symbol systems are inadequate to represent the meaning of linguistic expressions

– “pilot” does not mean PILOT

– “pilot” does not mean pilot_n_1

– “pilot” means something like: [image of a pilot] (as interpreted by our visual system)

• Language must be grounded in non-linguistic representations of the objects, events and states of affairs the language describes and/or references


My Big Theoretical Claim

[Diagram: two contrasting panels. The Wrong Approach: “pilot” (Real World) → perception → PILOT, a symbol in a Language of Thought (Mental Box). A Better Approach: “pilot” (Real World) → perception → grounding in a perceptual representation (Mental Box).]



My Big Theoretical Claim

• Full-scale NLP needs perceptual grounding whether or not NLP systems must actually function in the world

• Barsalou’s Perceptual Symbol Systems hypothesis provides a cognitive basis for grounding linguistic expressions

• Zwaan’s Situation Model experiments show close interaction between language and the experienced world

• We may not need the Total Turing Test—Robot Functionalism—for AI and Cognitive Science to converge

• We do need to agree to apply Cognitive Science principles to the solution of AI Hard Problems like NLP


Practical Arguments for Looking Inside the Black Box


Sentences (Normal) Humans Can’t Process (Normally)

• The horse raced past the barn fell

– Many normal humans can’t make sense of this sentence

– Humans don’t appear to use exhaustive search and algorithmic backtracking to understand Garden Path sentences

• The mouse the cat the dog chased bit ate the cheese

– Humans are unable to understand multiply center-embedded sentences, despite the fact that a simple stack mechanism makes them easy for parsers

– Humans are very bad at processing truly recursive structures (despite the claims of Chomsky and collaborators)

• While Mary dressed the baby spit up on the bed

– 40% of humans conclude in a post-test that Mary dressed the baby

– Humans often can’t ignore locally coherent meanings, despite their global incoherence

19

Sentences (Normal) Humans Can Process (Normally)

• The horse that was ridden past the barn, fell

– Given enough grammatical cues, humans have little difficulty making sense of linguistic expressions

• The dog chased the cat, that bit the mouse, that ate the cheese

– Humans can process “right embedded” expressions

– These sentences appear to be processed iteratively rather than recursively

– AI/Computer Science provides a model for converting recursive processes into iterative processes that require less memory (see the sketch after this list)

– Inability of humans to process recursive structures is likely due to short-term working memory limitations

• Does not appear to be a perceptual limitation!
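A minimal sketch of this recursion-to-iteration conversion (hypothetical data structures, not from the original presentation): both functions build the right-embedded structure of “the cat, that bit the mouse, that ate the cheese”, but the iterative version needs no growing call stack.

```python
def attach_recursive(heads):
    """Recursive: each relative clause embeds the next via the call stack."""
    if len(heads) == 1:
        return {"head": heads[0], "relative": None}
    return {"head": heads[0], "relative": attach_recursive(heads[1:])}

def attach_iterative(heads):
    """Iterative: one loop walks down the structure; no call stack grows."""
    root = node = {"head": heads[0], "relative": None}
    for head in heads[1:]:
        node["relative"] = {"head": head, "relative": None}
        node = node["relative"]
    return root

heads = ["cat", "mouse", "cheese"]  # "...the cat, that bit the mouse, that ate the cheese"
assert attach_recursive(heads) == attach_iterative(heads)
```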


Practical Arguments for Looking Inside the Black Box

• Impact of Short-Term Working Memory (STWM) Limitations

• Lack of evidence for Algorithmic Backtracking in humans

• Lots of evidence that human procedural skills are directional in nature

– The direction is forward

• Inability of humans to process recursive structures

• Inability of humans to ignore locally coherent structures

– Humans can’t retract knowledge

• Lots of evidence that humans combine high-level symbolic knowledge and serial processing with low-level subsymbolic (statistical) knowledge and massively parallel processing

– Symbolic Focus of Attention and STWM

– Subsymbolic (statistical) Spreading Activation (sketched below)

– Does this help solve the Frame Problem?
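A minimal sketch of this combination, under loose assumptions (a hypothetical associative network, multiplicative decay, a fixed-size focus of attention standing in for STWM): subsymbolic spreading activation proposes candidates in parallel, and the symbolic focus admits only the most active few.

```python
# Hypothetical associative network (link strengths are made up).
network = {
    "pilot": {"plane": 0.8, "cockpit": 0.6},
    "plane": {"wing": 0.7, "runway": 0.5},
    "cockpit": {"instrument": 0.4},
}

def spread(source, decay=0.5, threshold=0.1):
    """Subsymbolic pass: propagate activation outward, attenuated by decay."""
    activation = {source: 1.0}
    frontier = [source]
    while frontier:
        node = frontier.pop()
        for neighbor, strength in network.get(node, {}).items():
            a = activation[node] * strength * decay
            if a > threshold and a > activation.get(neighbor, 0.0):
                activation[neighbor] = a
                frontier.append(neighbor)
    return activation

def focus_of_attention(activation, k=3):
    """Symbolic pass: only the k most active items enter the focus (STWM)."""
    return sorted(activation, key=activation.get, reverse=True)[:k]

print(focus_of_attention(spread("pilot")))  # ['pilot', 'plane', 'cockpit']
```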


My Big Practical Claim

• Paying attention to cognitive science principles may actually facilitate, not hinder, the development of functional NLP systems

• Full-scale NLP systems which fail to consider human language representation and processing capabilities in sufficient detail are unlikely to be successful


Speech Recognition: A Counterexample?

• Current speech recognition systems have achieved phenomenal success

• Word error rates have been reduced an average of 10% per year for the last decade, and improvements are likely to continue

• Systems use Hidden Markov Models (HMMs) and algorithms like Viterbi decoding (see the sketch below)

• No claims of cognitive plausibility for the mechanisms being used
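For concreteness, a generic sketch of Viterbi decoding over an HMM (toy probabilities; not any particular recognizer’s API): states play the role of phones, observations the role of acoustic frames.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most probable state sequence for an observation sequence."""
    # best[t][s]: probability of the best path ending in state s at time t
    best = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        best.append({})
        back.append({})
        for s in states:
            prev, p = max(((r, best[t - 1][r] * trans_p[r][s]) for r in states),
                          key=lambda x: x[1])
            best[t][s] = p * emit_p[s][obs[t]]
            back[t][s] = prev
    state = max(best[-1], key=best[-1].get)
    path = [state]
    for t in range(len(obs) - 1, 0, -1):  # trace back the best path
        state = back[t][state]
        path.append(state)
    return list(reversed(path))

# Toy HMM: two "phones" and two acoustic observation symbols.
states = ["/p/", "/b/"]
start_p = {"/p/": 0.6, "/b/": 0.4}
trans_p = {"/p/": {"/p/": 0.7, "/b/": 0.3}, "/b/": {"/p/": 0.4, "/b/": 0.6}}
emit_p = {"/p/": {"hi": 0.9, "lo": 0.1}, "/b/": {"hi": 0.2, "lo": 0.8}}
print(viterbi(["hi", "lo", "lo"], states, start_p, trans_p, emit_p))
```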


Speech Recognition: A Counterexample?

• Current performance of speech recognition systems is still well below human performance in large vocabulary, speaker independent recognition in noisy environments

• Some researchers think performance is asymptoting

• Cognitive scientists claim that various assumptions limit the ultimate performance of such systems

– Acoustic model views speech as a sequence of phones

– Language models extremely limited

• Simple bi-gram or tri-gram co-occurrence (sketched after this list)

• Finite state grammars must be fully expanded to integrate with acoustic model

• Yet to be seen if such systems will eventually attain (or exceed) human capabilities
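A toy sketch of the kind of “extremely limited” bi-gram model meant above (hypothetical corpus): probabilities come only from counts of adjacent word pairs.

```python
from collections import Counter

corpus = "the pilot flew the plane the pilot landed the plane".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def bigram_prob(w1, w2):
    """Maximum-likelihood P(w2 | w1); no smoothing."""
    return bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0

print(bigram_prob("the", "pilot"))  # 0.5 in this toy corpus
```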


Speech Recognition: A Counterexample?

• Within Cognitive Science, efforts are underway to improve performance of speech recognition systems by adopting cognitive principles

– Adding syllables and other perceptually salient units

– Integrating higher-level linguistic knowledge

• Efforts are also underway to integrate speech recognition front-ends into theoretically motivated computational systems which heretofore overlooked the raw acoustic signal

• Currently, the performance of these cognitively motivated systems is well below that of AI systems

• Within AI, efforts are also underway to add higher-level language capabilities

– Microsoft wants to combine their speech recognition system with their NLP system

• Unfortunately, the NLP system processes input right to left!


Speech Recognition Systems Viewed Cognitively

• Highly interactive

– Recent psycholinguistic evidence supports interactivity

• Probabilistic with discrete symbols

– Hybrid symbolic, probabilistic systems are now the norm

• Feedforward only

– No feedback loops: some cognitive scientists argue that feedback can’t improve perception

• Systems sum evidence without competition

– Not clear (to me) what competition gives you

• Beam search limits the number of competing alternatives (sketched below)

– Integration of parallel-like processing and memory limitations

– Not cognitively plausible in its current implementation
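A generic sketch of beam search (hypothetical expand and score interfaces): only the best beam_width hypotheses survive each step, which loosely parallels a limited short-term working memory.

```python
import heapq

def beam_search(initial, expand, score, steps, beam_width=3):
    """Grow hypotheses step by step, pruning to the top beam_width each time."""
    beam = [initial]
    for _ in range(steps):
        candidates = [new for hyp in beam for new in expand(hyp)]
        beam = heapq.nlargest(beam_width, candidates, key=score)
    return beam

# Toy usage: grow word sequences over a tiny vocabulary with a made-up scorer.
vocab = ["the", "pilot", "plane"]
expand = lambda seq: [seq + [w] for w in vocab]
score = lambda seq: seq.count("pilot")  # hypothetical preference
print(beam_search([], expand, score, steps=2))
```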


Wrap-Up


Converging AI and Computational Cognitive Science

• Cognitive Architectures

– Development environment of choice for building computational cognitive models

– Need to scale up these architectures

• DARPA BICA is leading the way

– Need to build larger-scale models less tied to small empirical data sets

– Need to get AI researchers interested in using Cognitive Architectures

• Reusable components

• Better development tools

• Need more symposia like this one and more opportunities for publishing research


Conclusions

• Full-scale, functional NLP systems developed as Cognitive Black Boxes are unlikely to be successful

• Human capabilities and limitations are too important to be ignored in this highly human-centric domain

• Need to consider the internals of how humans process language to some level of abstraction below input/output behavior

• What that level is, is an important topic for discussion…

– Neuronal – I hope not!

– Hybrid Symbolic/Subsymbolic – My bets are here!

– Symbolic – Already tried that!


The End


Arguments From Connectionism

• Parallel Distributed Processing (PDP) approaches look inside the black box

• PDP approaches have highlighted many of the shortcomings of symbolic AI, arguing that these shortcomings result from an inappropriate cognitive architecture

• PDP systems attempt to specify what a cognitive system would be composed of at a level of abstraction somewhere above the neuronal level, but definitely inside the black box

• PDP systems capture important elements of perception and cognition

• Many AI systems are now hybrid symbolic/subsymbolic systems


Arguments For

• NLP systems must deal with

– Noisy input

– Lexical and Grammatical Ambiguity

– Non-Literal use of language

• These aspects call out for adoption of techniques explored in connectionist and statistical approaches

– Latent Semantic Analysis (LSA)


Latent Semantic Analysis (LSA)

• Statistical technique for determining similarity of the meaning of words

– Based on co-occurrence of words in texts

– The word-by-document co-occurrence matrix is submitted to Singular Value Decomposition (SVD)

– Identifies latent semantic relationships between words, even words that never co-occur (see the sketch after this list)

• Possibility of dealing with intractable problems in meaning representation

• Determine similarity of meaning without requiring discrete word senses

• If successful, an avalanche of NLP research on word sense disambiguation will need to be revisited
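A toy sketch of the LSA pipeline just described (hypothetical three-document corpus; k = 2 latent dimensions): counts form a word-by-document matrix, SVD projects words into a latent space, and cosine similarity then relates words that never co-occur.

```python
import numpy as np

docs = ["the pilot flew the plane",
        "the plane left the runway",
        "the chef cooked the meal"]
vocab = sorted({w for d in docs for w in d.split()})

# Word-by-document co-occurrence (count) matrix.
X = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

# Truncated SVD projects each word into a k-dimensional latent space.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
word_vecs = U[:, :k] * s[:k]

def similarity(w1, w2):
    """Cosine similarity between latent word vectors."""
    a, b = word_vecs[vocab.index(w1)], word_vecs[vocab.index(w2)]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# "pilot" and "runway" never co-occur, yet are related via "plane".
print(similarity("pilot", "runway"))
```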


WordNet

• Psycholinguistically motivated network of word associations

• Words grouped into synonym sets (synsets)

• Various associations between synsets identified

– Hypernyms (type – subtype)

– Meronyms (part – whole)

• AI researchers use WordNet as a resource for NLP without buying into its psycholinguistic underpinnings (queried in the sketch below)
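A minimal sketch of querying these relations through NLTK’s WordNet interface (assumes the nltk package is installed and the wordnet corpus has been downloaded):

```python
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

for synset in wn.synsets("pilot"):
    print(synset.name(), "-", synset.definition())
    print("  hypernyms:", [h.name() for h in synset.hypernyms()])     # type - subtype
    print("  meronyms:", [m.name() for m in synset.part_meronyms()])  # part - whole
```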


Autonomy of Symbolism

• Non-linguistic representations are analogical representations of experience

• They are symbols in the sense that they are in the brain, not the external world

• Wilks’ “Autonomy of Symbolism” is adopted in this sense


My Big Claim

• Attempts to solve AI hard problems without applying Cognitive Science principles are likely to fail

– Especially in AI systems that mimic human cognitive capabilities

– Especially in full-scale NLP systems


Practical Implications

• Language generation systems should avoid producing linguistic expressions that humans will have difficulty understanding

• One way of achieving this is not to rely on processing mechanisms like stacks, recursion and algorithmic backtracking, for which humans show no evidence