
The Knowledge Level

Presidential Address, American Association for Artificial Intelligence
AAAI-80, Stanford University, 19 Aug 1980

Allen Newell
Department of Computer Science
Carnegie-Mellon University
Pittsburgh, Pennsylvania 15213

I. Introduction

This is the first presidential address of AAAI, the American Association for Artificial Intelligence. In the grand scheme of history, even the history of artificial intelligence (AI), this is surely a minor event. The field this scientific society represents has been thriving for quite some time. No doubt the society itself will make solid contributions to the health of our field. But it is too much to expect a presidential address to have a major impact.

So what is the role of the presidential address and what is the significance of the first one? I believe its role is to set a tone, to provide an emphasis. I think the role of the first address is to take a stand about what that tone and emphasis should be - to set expectations for future addresses and to communicate to my fellow presidents.

Only two foci are really possible for a presidential address: the state of the society or the state of the science. I believe the latter to be the correct focus. AAAI itself, its nature and its relationship to the larger society that surrounds it, are surely important.* However, our main business is to help AI become a science - albeit a science with a strong engineering flavor. Thus, though a president's address cannot be narrow or highly technical, it can certainly address a substantive issue. That is what I propose to do.

I wish to address the question of knowledge and representation. That is a little like a physicist wishing to address the question of radiation and matter. Such broad terms designate a whole arena of phenomena and a whole armada of questions.

I am grateful for extensive comments on an earlier draft provided by Jon Bentley, Danny Bobrow, H. T. Kung, John McCarthy, John McDermott, Greg Harris, Zenon Pylyshyn, Mike Rychener and Herbert Simon. They all tried in their several (not necessarily compatible) ways to keep me from error.
*I have already provided some comments, as president, on such matters (Newell, 1980a).

But comprehensive treatment is neither possible nor intended. Rather, such broad phrasing indicates an intent to deal with the subject in some basic way. Thus, the first task is to make clear the aspect of knowledge and representation of concern, namely, what is the nature of knowledge. The second task will be to outline an answer to this question. As the title indicates, I will propose the existence of something called the knowledge level. The third task will be to describe the knowledge level in as much detail as time and my own understanding permit. The final task will be to indicate some consequences of the existence of a knowledge level for various aspects of AI.

II. The Problem of Representation and Knowledge

The Standard View

Two orthogonal and compatible basic views of the enterprise of AI serve our field, beyond all theoretical quibbles. The first is a geography of task areas. There is puzzle solving, theorem proving, game-playing, induction, natural language, medical diagnosis, and on and on, with subdivisions of each major territory. AI, in this view, is an exploration, in breadth and in depth, of new territories of tasks with their new patterns of intellectual demands. The second view is the functional components that comprise an intelligent system. There is a perceptual system, a memory system, a processing system, a motor system, and so on.

This research was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 3597, monitored by the Air Force Avionics Laboratory under Contract F33615-78-C-1551. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the US Government.


It is this second view that we need to address the role of representation and knowledge.

Figure 2-1 shows one version of the functional view, taken from Newell & Simon (1972), neither better nor worse than many others. An intelligent agent is embedded in a task environment; a task statement enters via a perceptual component and is encoded in an initial representation. Whence starts a cycle of activity in which a recognition occurs (as indicated by the eyes) of a method to use to attempt the problem. The method draws upon a memory of general world knowledge. In the course of such cycles, new methods and new representations may occur, as the agent attempts to solve the problem. The goal structure, a component we all believe to be important, does not receive its due in this figure, but no matter. Such a picture represents a convenient and stable decomposition of the functions to be performed by an intelligent agent, quite independent of particular implementations and anatomical arrangements. It also provides a convenient and stable decomposition of the entire scientific field into subfields: scientists specialize in perception, or problem solving methods, or representation, etc.

It is clear to us all what representation is in this picture. It is the data structures that hold the problem and will be processed into a form that makes the solution available. Additionally, it is the data structures that hold the world knowledge and will be processed to acquire parts of the solution or to obtain guidance in constructing it. The first data structures represent the problem, the second represent the world knowledge.

Figure 2-1. Functional diagram of general intelligent agent (after Newell & Simon, 1972).

A data structure by itself is impotent, of course. We have learned to take the representation to include the basic operations of reading and writing, of access and construction. Indeed, as we know, it is possible to take a pure process view of the representation and work entirely in terms of the inputs and outputs to the read and write processes, letting the data structure itself fade into a mythical story we tell ourselves to make the memory-dependent behavior of the read and write processes coherent.

We also understand, though not so transparently, why the representation represents. It is because of the totality of procedures that process the data structure. They transform it in ways consistent with an interpretation of the data structure as representing something. We often express this by saying that a data structure requires an interpreter, including in that term much more than just the basic read/write processes, namely, the whole of the active system that uses the data structure.

The term representation is used clearly (almost technically) in AI and computer science. In contrast, the term knowledge is used informally, despite its prevalence in such phrases as knowledge engineering and knowledge sources. It seems mostly a way of referring to whatever it is that a representation has. If a system has (and can use) a data structure which can be said to represent something (an object, a procedure, ... whatever), then the system itself can also be said to have knowledge, namely the knowledge embodied in that representation about that thing.

Why Is There A Problem?

This seems to be a reasonable picture, which is serving us well. Why then is there a problem? Let me assemble some indicators from our current scene.

A first indicator comes from our continually giving representation a somewhat magical role.* It is a cliche of AI that representation is the real issue we face. Though we have programs that search, it is said, we do not have programs that determine their own representations or invent new representations. There is of course some substance to such statements. What is indicative of underlying difficulties is our inclination to treat representation like a homunculus, as the locus of real intelligence.

A good example is our fascination with problems such as the mutilated checkerboard problem (Newell, 1965). The task is to cover a checkerboard with two-square dominoes. This is easy enough to do with the regular board and clearly impossible to do if a single square is removed, say from the upper right corner. The problem is to do it on a (mutilated) board which has two squares removed, one from each of two opposite corners. This task also turns out to be impossible. The actual task, then, is to show the impossibility.

*Representation is not the only aspect of intelligent systems that has a magical quality; learning is another. But that is a different story for a different time.


This goes from apparently intractable combinatorially, if the task is represented as all ways of laying down dominoes, to transparently easy, if the task is represented as just the numbers of black and white squares that remain to be covered. Now, the crux for AI is that no one has been able to formulate in a reasonable way the problem of finding the good representation, so it can be tackled by an AI system. By implication - so goes this view - the capability to invent such appropriate representations requires intelligence of some new and different kind.
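To make the representational shift concrete, here is a minimal sketch (my own illustration; the board coordinates and function names are not from the paper) of the counting representation, under which the impossibility is immediate: each domino covers one black and one white square, so unequal counts rule out any covering.

    # Sketch of the "count the colors" representation of the mutilated
    # checkerboard (illustrative only; coordinates and names are mine).
    def color_counts(removed):
        """Return (black, white) counts for an 8x8 board minus the removed squares."""
        black = white = 0
        for row in range(8):
            for col in range(8):
                if (row, col) in removed:
                    continue
                if (row + col) % 2 == 0:
                    black += 1
                else:
                    white += 1
        return black, white

    # Each domino covers exactly one black and one white square, so equal counts
    # are a necessary condition for any covering to exist.
    print(color_counts(set()))              # (32, 32): the full board passes the test
    print(color_counts({(0, 0), (7, 7)}))   # (30, 32): opposite corners share a color,
                                            # so the mutilated board cannot be covered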

A second indicator is the great theorem-proving controversy of the late sixties and early seventies. Everyone in AI has some knowledge of it, no doubt, for its residue is still very much with us. It needs only brief recounting.

Early work in theorem proving programs for quantified logics culminated in 1965 with Alan Robinson's development of a machine-oriented formulation of first-order logic called Resolution (Robinson, 1965). There followed an immensely productive period of exploration of resolution-based theorem-proving. This was fueled, not only by technical advances, which occurred rapidly and on a broad front (Loveland, 1978), but also by the view that we had a general purpose reasoning engine in hand and that doing logic (and doing it well) was a foundation stone of all intelligent action. Within about five years, however, it became clear that this basic engine was not going to be powerful enough to prove theorems that are hard on a human scale, or to move beyond logic to mathematics, or to serve other sorts of problem solving, such as robot planning.

A reaction set in, whose slogan was "uniform procedures will not work." This reaction itself had an immensely positive outcome in driving forward the development of the second generation of AI languages: Planner, Microplanner, QA4, Conniver, POP2, etc (Bobrow & Raphael, 1974). These unified some of the basic mechanisms in problem solving - goals, search, pattern matching, and global data bases - into a programming language framework, with its attendant gains of involution.

However, this reaction also had a negative residue, which still exists today, well after these new AI languages have come and mostly gone, leaving their own lessons. The residue in its most stereotyped form is that logic is a bad thing for AI. The stereotype is not usually met with in pure form, of course. But the mat of opinion is woven from a series of strands that amount to as much: uniform proof techniques have been proven grossly inadequate; the failure of resolution theorem proving implicates logic generally; logic is permeated with a static view; and logic does not permit control. Any doubts about the reality of this residual reaction can be stilled by reading Pat Hayes's attempt to counteract it in his In Defence of Logic (Hayes, 1977).

A third indicator is the recent SIGART Special Issue on Knowledge Representation (Brachman & Smith, 1980). This consisted of the answers (plus analysis) to an elaborate questionnaire developed by Ron Brachman of BBN and Brian Smith of MIT, which was sent to the AI research community working on knowledge representation. In practice, this meant work in natural language, semantic nets, logical formalisms for representing knowledge, and the third generation of programming and representation systems, such as AIMDS, KRL, and KL-ONE. The questionnaire not only covered the basic demography of the projects and systems, but also the position of the respondent (and his system) on many critical issues of representation - quantification, quotation, self-description, evaluation vs reference-finding, and so on.

The responses were massive, thoughtful and thorough, which was impressive given that the questionnaire took well over an hour just to read, and that answers were of the order of ten single-spaced pages. A substantial fraction of the field received coverage in the 80 odd returns, since many of them represented entire projects. Although the questionnaire left much to be desired in terms of the precision of its questions, the Special Issue still provides an extremely interesting glimpse of how AI sees the issues of knowledge representation.

The main result was overwhelming diversity - a veritable jungle of opinions. There is no consensus on any question of substance. Brachman and Smith themselves highlight this throughout the issue, for it came as a major surprise to them. Many (but of course not all!) respondents themselves felt the same way. As one said, "Standard practice in the representation of knowledge is the scandal of AI."

What is so overwhelming about the diversity is that it defies characterization. The roles of logic and theorem proving, just described above, are in evidence, but there is much else besides. There is no tidy space of underlying issues in which respondents, hence the field, can be plotted to reveal a pattern of concerns or issues. Not that Brachman and Smith could see. Not that this reader could see.

A Formulation of the Problem

These three items - mystification of the role of representation, the residue of the theorem-proving controversy, and the conflicting webwork of opinions on knowledge representation - are sufficient to indicate that our views on representation and knowledge are not in satisfactory shape. However, they hardly indicate a crisis, much less a scandal. At least not to me. Science easily inhabits periods of diversity; it tolerates bad lessons from the past in concert with good ones. The chief signal these three send is that we must redouble our efforts to bring some clarity to the area. Work on knowledge and representation should be a priority item on the agenda of our science.

No one should have any illusions that clarity and progress will be easy to achieve. The diversity that is represented in the SIGART Special Issue is highly articulate and often highly principled. Viewed from afar, any attempt to clarify the issues is simply one more entry into the cacophony - possibly treble, possibly bass, but in any case a note whose first effect will be to increase dissonance, not diminish it.

Actually, these indicators send an ambiguous signal. An alternative view of such situations in science is that effort is premature. Only muddling can happen for the next while - until more evidence accumulates or conceptions ripen elsewhere


in AI to make evident patterns that now seem only one possibility among many. Work should be left to those already committed to the area; the rest of us should make progress where progress can clearly be made.

Still, though not compelled, I wish to have a go at this problem.

I wish to focus attention on the question: What is knowledge? In fact, knowledge gets very little play in the three indicators just presented. Representation occupies center stage, with logic in the main supporting role. I could claim that this is already the key - that the conception of knowledge is logically prior to that of representation, and until a clear conception of the former exists, the latter will remain confused. In fact, this is not so. Knowledge is simply one particular entry point to the whole tangled knot. Ultimately, clarity will be attained on all these notions together. The path through which this is achieved will be grist for those interested in the history of science, but is unlikely to affect our final understanding.

To reiterate: What is the nature of knowledge? How is it related to representation? What is it that a system has, when it has knowledge? Are we simply dealing with redundant terminology, not unusual in natural language, which is better replaced by building on the notions of data structures, interpreters, models (in the strict sense used in logic), and the like? I think not. I think knowledge is a distinct notion, with its own part to play in the nature of intelligence.

The Solution Follows From Practice

Before starting on matters of substance, I wish to make a methodological point. The solution I will propose follows from the practice of AI. Although the formulation I present may have some novelty, it should be basically familiar to you, for it arises from how we in AI treat knowledge in our work with intelligent systems. Thus, your reaction may (perhaps even should) be "But that is just the way I have been thinking about knowledge all along. What is this man giving me?" On the first part, you are right. This is indeed the way AI has come to use the concept of knowledge. However, this is not the way the rest of the world uses the concept. On the second part, what I am giving you is a directive that your practice represents an important source of knowledge about the nature of intelligent systems. It is to be taken seriously.

This point can use expansion. Every science develops its own ways of finding out about its subject matter. These get tidied up in meta-models about scientific activity, eg, the so-called scientific method in the experimental sciences. But these are only models; in reality, there is immense diversity in how scientific progress is made.

For instance, in computer science many fundamental conceptual advances occur by (scientifically) uncontrolled experiments in our own style of computing.* Three excellent examples are the developments of time-sharing, packet-switched networks, and locally-networked personal computing. These are major conceptual advances that have broadened our view of the nature of computing. Their primary validation is entirely informal. Scientific activity of a more traditional kind certainly takes place - theoretical development with careful controlled testing and evaluation of results. But it happens on the details, not on the main conceptions. Not everyone understands the necessary scientific role of such experiments in computational living, nor that standard experimental techniques cannot provide the same information. How else to explain, for example, the calls for controlled experimental validation that speech understanding will be useful to computer science? When that experiment of style is finally performed there will be no doubt at all. No standard experiment will be necessary. Indeed, none could have sufficed.

As an example related to the present paper, I have spent some effort recently in describing what Herb Simon and I have called the physical symbol system hypothesis (Newell & Simon, 1976; Newell, 1980b). This hypothesis identifies a class of systems as embodying the essential nature of symbols and as being the necessary and sufficient condition for a generally intelligent agent. Symbol systems turn out to be universal computational systems, viewed from a different angle. For my point here, the important feature of this hypothesis is that it grew out of the practice in AI - out of the development of list processing languages and Lisp, and out of the structure adopted in one AI program after another. We in AI were led to an adequate notion of a symbol by our practice. In the standard catechism of science, this is not how great ideas develop. Major ideas occur because great scientists discover (or invent) them, introducing them to the scientific community for testing and elaboration. But here, working scientists have evolved a new major scientific concept, under partial and alternative guises. Only gradually has it acquired its proper name.

The notions of knowledge and representation about to be presented also grow out of our practice. At least, so I assert. That does not give them immunity from criticism, for in listening for these lessons I may have a tin ear. But in so far as they are wanting, the solution lies in more practice and more attention to what emerges there as pragmatically successful. Of course, the message will be distorted by many things, eg, peculiar twists in the evolution of computer science hardware and software, our own limitations of view, etc. But our practice remains a source of knowledge that cannot be obtained from anywhere else. Indeed, AI as a field is committed to it. If it is fundamentally flawed, that will just be too bad for us. Then, other paths will have to be found from elsewhere to discover the nature of intelligence.

*Computer science is not unique in having modes of progress that don't fit easily into the standard frames. In the heyday of paleontology, major conceptual advances occurred by stumbling across the bones of immense beasties. Neither controlled experimentation nor theoretical prediction played appreciable roles.

III. The Knowledge Level


Figure 3-1. Computer system levels:
  Configuration (PMS) Level
  Program (Symbol) Level
  Register-Transfer Sublevel
  Logic Level
  Logic Circuit Sublevel
  Circuit Level
  Device Level

I am about to propose the existence of something called the knowledge level, within which knowledge is to be defined. To state this clearly requires first reviewing the notion of computer systems levels.

Computer Systems Levels

Figure 3-1 shows the standard hierarchy, familiar to everyone in computer science. Conventionally, it starts at the bottom with the device level, then up to the circuit level, then the logic level, with its two sublevels, combinatorial and sequential circuits, and the register-transfer level, then the program level (referred to also as the symbolic level) and finally, at the top, the configuration level (also called the PMS or Processor-Memory-Switch level). We have drawn the configuration level to one side, since it lies directly above both the symbol level and the register-transfer level.

The notion of levels occurs repeatedly throughout science and philosophy, with varying degrees of utility and precision. In computer science, the notion is quite precise and highly operational. Figure 3-2 summarizes its essential attributes. A level consists of a medium that is to be processed, components that provide primitive processing, laws of composition that permit components to be assembled into systems, and laws of behavior that determine how system behavior depends on the component behavior and the structure of the system. There are many variant instantiations of a given level, eg, many programming systems and machine languages and many register-transfer systems.*

Each level is defined in two ways. First, it can be defined autonomously, without reference to any other level. To an amazing degree, programmers need not know logic circuits, logic designers need not know electrical circuits, managers can operate at the configuration level with no knowledge of programming, and so forth. Second, each level can be reduced to the level below. Each aspect of a level - medium, components, laws of composition and behavior - can be defined in terms of systems at the level next below.

*Though currently dominated by electrical circuits, variant circuit-level instantiations also exist, eg, fluidic circuits.

The architecture is the name we give to the register-transfer level system that defines a symbol (programming) level, creating a machine language and making it run as described in the programmer's manual for the machine. Neither of these two definitions of a level is the more fundamental. It is essential that they both exist and agree.

Some intricate relations exist between and within levels: any instantiation of a level can be used to create an instantiation of the next higher level. Within each level, systems hierarchies are possible, as in the subroutine hierarchy at the programming level. Normally, these do not add anything special in terms of the computer system hierarchy itself. However, as we all know, at the program level it is possible to construct any instantiation within any other instantiation (modulo some rigidity in encoding one data structure into another), as in creating new programming languages.

There is no need to spin out the details of each level. We live with them every day and they are the stuff of architecture textbooks (Bell & Newell, 1971), machine manuals and digital component catalogues, not research papers. However, it is noteworthy how radically the levels differ. The medium changes from electrons and magnetic domains at the device level, to current and voltage at the circuit level, to bits at the logic level (either single bits at the logic circuit level or bit vectors at the register-transfer level), to symbolic expressions at the symbol level, to amounts of data (measured in data bits) at the configuration level. System characteristics change from continuous to discrete processing, from parallel to serial operation, and so on.

Aspects             Register-Transfer Level      Symbol Level
Systems             Digital systems              Computers
Medium              Bit vectors                  Symbols, expressions
Components          Registers                    Memories
                    Functional units             Operations
Composition laws    Transfer path                Designation, association
Behavior laws       Logical operations           Sequential interpretation

Figure 3-2. Defining aspects of a computer system level.
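Read as a schema, the aspects in Figure 3-2 say that specifying a level means filling in the same handful of slots. The sketch below (my own illustration; the class and field names are invented, not the paper's) simply records the two columns of the figure:

    # Sketch: the defining aspects of a system level, recorded as a data structure.
    # Field names are illustrative; the values come from Figure 3-2.
    from dataclasses import dataclass

    @dataclass
    class SystemLevel:
        name: str
        systems: str             # what assembled systems are called
        medium: str              # what gets processed
        components: list         # primitive processing elements
        composition_laws: str    # how components assemble into systems
        behavior_laws: str       # how system behavior follows from components

    register_transfer_level = SystemLevel(
        name="Register-transfer level",
        systems="Digital systems",
        medium="Bit vectors",
        components=["Registers", "Functional units"],
        composition_laws="Transfer path",
        behavior_laws="Logical operations",
    )

    symbol_level = SystemLevel(
        name="Symbol level",
        systems="Computers",
        medium="Symbols, expressions",
        components=["Memories", "Operations"],
        composition_laws="Designation, association",
        behavior_laws="Sequential interpretation",
    )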

Despite this variety, all levels share some common features. Four of these, though transparently obvious, are important to us:

1) Specification of a system at a level always determines completely a definite behavior for the system at that level (given initial and boundary conditions).

2) The behavior of the total system results from the local effects of each component of the system processing the medium at its input to produce its output.

3) The immense variety of behavior is obtained by system structure, ie, by the variety of ways of assembling a small number of component types (though perhaps a large number of instances of each type).

4) The medium is realized by state-like properties of matter, which remain passive until changed by the components.


Computer systems levels are not simply levels of abstraction. That a system has a description at a given level does not necessarily imply it has a description at higher levels. There is no way to abstract from an arbitrary electronic circuit to obtain a logic-level system. This contrasts with many types of abstraction which can be uniformly applied, and thus have a certain optional character (as in abstracting away from the visual appearance of objects to their masses). Each computer system level is a specialization of the class of systems capable of being described at the next level. Thus, it is a priori open whether a given level has any physical realizations.

In fact, computer systems at all levels are realizable,

reflecting indirectly the structure of the physical world. But more holds than this. Computer systems levels are realized by technologies. The notion of a technology has not received the conceptual attention it deserves. But roughly: given a specification of a particular system at a level, it is possible to construct by routine means a physical system that realizes that specification. Thus, systems can be obtained to specification within limits of time and cost. It is not possible to invent arbitrarily additional computer system levels that nestle between existing levels. Potential levels do not become technologies just by being thought up. Nature has a say in whether a technology can exist.

Computer system levels are approximations. All of the above notions are realized in the real world only to various degrees. Errors at lower levels propagate to higher ones, producing behavior that is not explicable within the higher level itself. Technologies are imperfect, with constraints that limit the size and complexity of systems that can actually be fabricated. These constraints are often captured in design rules (eg, fan-out limits, stack-depth limits, etc), which transform system design from routine to problem solving. If the complexities become too great, the means of system creation no longer constitute a technology, but an arena of creative invention.

We live quite comfortably with imperfect system levels, especially at the extremes of the hierarchy. At the bottom, the device level is not complete, being used only to devise components at the circuit level. Likewise, at the top, the configuration level is incomplete, not providing a full set of behavioral laws. In fact, it is more nearly a pure level of abstraction than a true system level. This accounts for both symbol level and register-transfer level systems having configuration (PMS) level abstractions.*

These levels provide ways of describing computer systems; they do not provide ways of describing their environments. This may seem somewhat unsatisfactory, because a level does not then provide a general closed description of an entire universe, which is what we generally expect (and get) from a level of scientific description in physics or chemistry. However, the situation is understandable enough.

*See Bell, Grason & Newell (1972) for a PMS approach to the register-transfer level.

System design and analysis requires only that the interface between the environment and the system (ie, the inner side of the transducers) be adequately described in terms of each level, eg, as electrical signals, bits, symbols or whatever. Almost never does the universe of system plus environment have to be modeled in toto, with the structure and dynamics of the environment described in the same terms as the system itself. Indeed, in general no such description of the environment in the terms of a given computer level exists. For instance, no register-transfer level description exists of the airplane in which an airborne computer resides. Computer system levels describe the internal structure of a particular class of systems, not the structure of a total world.

To sum up, computer system levels are a reflection of the nature of the physical world. They are not just a point of view that exists solely in the eye of the beholder. This reality comes from computer system levels being genuine specializations, rather than being just abstractions that can be applied uniformly.

A New Level

I now propose that there does exist yet another system level, which I will call the knowledge level. It is a true systems level in the sense we have just reviewed. The thrust of this paper is that distinguishing this level leads to a simple and satisfactory view of knowledge and representation. It dissolves some of the difficulties and confusions we have about this aspect of artificial intelligence.

A quick overview of the knowledge level, with an indication of some of its immediate consequences, is useful before entering into details.

The system at the knowledge level is the agent. The components at the knowledge level are goals, actions and bodies. Thus, an agent is composed of a set of actions, a set of goals and a body. The medium at the knowledge level is knowledge (as might be suspected). Thus, the agent processes its knowledge to determine the actions to take. Finally, the behavior law is the principle of rationality: actions are selected to attain the agent's goals.
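A minimal sketch of this structural description (illustrative only; the class and field names are mine, since the paper gives no formal notation):

    # Sketch: the structure of an agent as described at the knowledge level.
    # Class and field names are illustrative; the paper gives no formal notation.
    from dataclasses import dataclass

    @dataclass
    class KnowledgeLevelAgent:
        body: object      # an arbitrary physical system; its complexity is external
        actions: set      # the behaviors that body can be evoked to perform
        goals: set        # states of affairs the agent strives to realize
        knowledge: set    # the medium: no capacity, encoding or access constraints
        # Behavior law: the principle of rationality - knowledge is processed to
        # select actions that attain the goals (made precise in Section IV).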

To treat a system at the knowledge level is to treat it as having some knowledge and some goals, and believing it will do whatever is within its power to attain its goals, in so far as its knowledge indicates. For example:

• "She knows where this restaurant is and said she'd meet me here. I don't know why she hasn't arrived."
• "Sure, he'll fix it. He knows about cars."
• "If you know that 2 + 2 = 4, why did you write 5?"

The knowledge level sits in the hierarchy of systems levels immediately above the symbol level, as Figure 3-3 shows. Its components (actions, goals, body) and its medium (knowledge) can be defined in terms of systems at the symbol level, just as any level can be defined by systems at the level one below. The knowledge level has been placed side by side with the configuration level.


The gross anatomical description of a knowledge-level system is simply (and only) that the agent has as parts bodies of knowledge, goals and actions. They are all connected together in that they enter into the determination of what actions to take. This structure carries essentially no information; the specification at the knowledge level is provided entirely by the content of the knowledge and the goals, not by any structural way they are connected together. In effect, the knowledge level can be taken to have a degenerate configuration level.

As is true of any level, although the knowledge level can be constructed from the level below (ie, the symbol level), it also has an autonomous formulation as an independent level. Thus, knowledge can be defined independent of the symbol level, but can also be reduced to symbol systems.

As the figure makes evident, the knowledge level is a separate level from the symbol level. Relative to the existing view of the computer systems hierarchy, the symbol level has been split in two, one aspect remaining as the symbol level proper and another aspect becoming the knowledge level. The description of a system as exhibiting intelligent behavior still requires both levels, as we shall see. Intelligent systems are not to be described exclusively in terms of the knowledge level.

Figure 3-3. Computer system levels, including the knowledge level:
  Knowledge Level
  Configuration (PMS) Level
  Program (Symbol) Level
  Register-Transfer Sublevel
  Logic Level
  Logic Circuit Sublevel
  Circuit Level
  Device Level

To repeat the final remark of the prior section: Computer system levels really exist, as much as anything exists. They are not just a point of view. Thus, to claim that the knowledge level exists is to make a scientific claim, which can range from dead wrong to slightly askew, in the manner of all scientific claims. Thus, the matter needs to be put as a hypothesis.

The Knowledge Level Hypothesis. There exists a distinct computer systems level, lying immediately above the symbol level, which is characterized by knowledge as the medium and the principle of rationality as the law of behavior.

Some preliminary feeling for the nature of knowledge according to this hypothesis can be gained from the following remarks:

• Knowledge is intimately linked with rationality. Systems of which rationality can be posited can be said to have knowledge. It is unclear in what sense other systems can be said to have knowledge.
• Knowledge is a competence-like notion, being a potential for generating action.*
• The knowledge level is an approximation. Nothing guarantees how much of a system's behavior can be viewed as occurring at the knowledge level. Although extremely useful, the approximation is quite imperfect, not just in degree but in scope.
• Representations exist at the symbol level, being systems (data structures and processes) that realize a body of knowledge at the knowledge level.
• Knowledge serves as the specification of what a symbol structure should be able to do.
• Logics are simply one class of representations among many, though uniquely fitted to the analysis of knowledge and representation.

IV. The Details of the Knowledge Level

We begin by defining the knowledge level autonomously, independently of lower system levels. Figure 3-2 lists what is required, though we will take up the various aspects in a different order: first, the structure, consisting of the system components and laws for composing systems; second, the law of behavior (the law of rationality); and third, the medium (knowledge). After this we will describe how the knowledge level reduces to the symbol level.

Against the background of the common features of the familiar computer system levels listed earlier, there will be four surprises in how the knowledge level is defined. These will stretch the notion of system level somewhat, but will not break it.

The Structure of the Knowledge Level

An agent (the system at the knowledge level) has an extremely simple structure, so simple there is no need even to picture it.

First, the agent has some physical body with which it can act in the environment (and be acted upon). We talk about the body as if it consists of a set of actions, but that is only for simplicity. It can be an arbitrary physical system with arbitrary modes of interaction with its environment. Though this body can be arbitrarily complex, its complexity is external to the system described at the knowledge level, which simply has the power to evoke the behavior of this physical system.

Second, the agent has a body of knowledge. This body is like a memory. Whatever the agent knows at some time, it continues to know. Actions can add knowledge to the existing body of knowledge.

*How almost interchangeable the two notions might be can be seen from a quotation from Chomsky (1975, p. 315): "In the past I tried to avoid, or perhaps evade the problem of explicating the notion 'knowledge of language' by using an invented technical term, namely the term 'competence' in place of 'knowledge'."


However, in terms of structure, a body of knowledge is extremely simple compared to a memory, as defined at lower computer system levels. There are no structural constraints to the knowledge in a body, either in capacity (ie, the amount of knowledge) or in how the knowledge is held in the body. Indeed, there is no notion of how knowledge is held (encoding is a notion at the symbol level, not the knowledge level). Also, there are no well-defined structural properties associated with access and augmentation. Thus, it seems preferable to avoid calling the body of knowledge a memory. In fact, referring to a "body of knowledge," rather than just to "knowledge," is hardly more than a manner of speaking, since this body has no function except to be the physical component which has the knowledge.

Third, and finally, the agent has a set of goals. A goal is a body of knowledge of a state of affairs in the environment. Goals are structurally distinguished from the main body of knowledge. This permits them to enter into the behavior of the agent in a distinct way, namely, that which the organism strives to realize. But, except for this distinction, goal components are structurally identical to bodies of knowledge. Relationships exist between goals, of course, but these are not realized in the structure of the system, but in knowledge.

There are no laws of composition for building a knowledge level system out of these components. An agent always has just these components. They all enter directly into the laws of behavior. There is no way to build up complex agents from them.

This complete absence of significant structure in the agent is the first surprise, running counter to the common feature at all levels that variety of behavior is realized by variety of system structure (point 3 above). This is not fortuitous, but is an essential feature of the knowledge level. The focus for determining the behavior of the system rests with the knowledge, ie, with the content of what is known. The internal structure of the system is defined exactly so that nothing need be known about it to predict the agent's behavior. The behavior is to depend only on what the agent knows, what it wants and what means it has for interacting physically with its environment.

The Principle of Rationality

The behavioral law that governs an agent, and permits prediction of its behavior, is the rational principle that knowledge will be used in the service of goals.* This can be formulated more precisely as follows:

Principle of rationality: If an agent has knowledge that one of its actions will lead to one of its goals, then the agent will select that action.

*This principle is not intended to be directly responsive to all the extensive philosophical discussion on rationality, eg, the notion that rationality implies the ability for an agent to give reasons for what it does.

This principle asserts a connection between knowledge and goals, on the one hand, and the selection of actions on the other, without specification of any mechanism through which this connection is made. It connects all the components of the agent together directly.

This direct determination of behavior by a global principle is the second surprise, running counter to the common feature at all levels that behavior is determined bottom-up through the local processing of components (point 2 above). Such global principles are not incompatible with systems whose behavior is also describable by mechanistic causal laws, as testified by various global principles in physics, eg, Fermat's principle of least time in geometrical optics, or the principle of least effort in mechanics. The principles in physics are usually optimization (ie, extremum) principles. However, the principle of rationality does not have built into it any notion of optimal or best, only the automatic connection of actions to goals according to knowledge.

Under certain simple conditions, the principle as stated permits the calculation of a system's trajectory, given the requisite initial and boundary conditions (ie, goals, actions, initial knowledge, and acquired knowledge as determined by actions). However, the principle is not sufficient to determine the behavior in many situations. Some of these situations can be covered by adding auxiliary principles. Thus, we can think of an extended principle of rationality, building out from the central or main principle given above.

To formulate additional principles, the phrase selecting an action is taken to mean that the action becomes a member of a candidate set of actions, the selected set, rather than being the action that actually occurs. When all principles are taken into account, if only one action remains in the selected set, the action actually taken is determined; if several candidates remain, then the action actually taken is limited to those possibilities. An action can actually be taken only if it is physically possible, given the situation of the agent's body and the resources available. Such limits to action will affect the actions selected only if the agent has knowledge of them.

The main principle is silent about what happens if the principle applies for more than one action for a given goal. This can be covered by the following auxiliary principle:

Equipotence of acceptable actions: For given knowledge, if action A1 and action A2 both lead to goal G, then both actions are selected.*

This principle simply asserts that all ways of attaining the goal are equally acceptable from the standpoint of the goal itself. There is no implicit optimality principle that selects among such candidates.

The main principle is also silent about what happens if the principle applies to several goals in a given situation.

*For simplicity, in this principle and others, no explicit mention is made of the agent whose goals, knowledge and actions are under discussion.


A simple auxiliary principle is the following:

Preference of joint goal satisfaction: For given knowledge, if goal G1 has the set of selected actions {A1i} and goal G2 has the set of selected actions {A2j}, then the effective set of selected actions is the intersection of {A1i} and {A2j}.

It is better to achieve both of two goals than either alone. This principle determines behavior in many otherwise ambiguous situations. If the agent has general goals of minimizing effort, minimizing cost, or doing things in a simple way, these general goals select out a specific action from a set of otherwise equipotent task-specific actions.
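As a sketch of how the main and auxiliary principles compose (my own illustration; the leads_to predicate and the function names are not the paper's notation), selection can be computed per goal and then intersected. The case of an empty intersection is exactly the difficulty taken up next.

    # Sketch: action selection under the principle of rationality plus the two
    # auxiliary principles above. leads_to(action, goal) stands in for the
    # agent's knowledge that an action leads to a goal; names are illustrative.

    def selected_for_goal(actions, goal, leads_to):
        # Main principle + equipotence: every action known to lead to the goal
        # enters the selected set, with no preference among them.
        return {a for a in actions if leads_to(a, goal)}

    def effective_selected_set(actions, goals, leads_to):
        # Preference of joint goal satisfaction: intersect the selected sets of
        # the goals that select anything at all.
        per_goal = [selected_for_goal(actions, g, leads_to) for g in goals]
        per_goal = [s for s in per_goal if s]
        return set.intersection(*per_goal) if per_goal else set()

    # Example: a general goal ("save money") narrows the otherwise equipotent
    # actions for the task goal ("arrive on time") down to one.
    acts = {"walk", "bus", "taxi"}
    knows = {("walk", "save money"), ("bus", "save money"),
             ("bus", "arrive on time"), ("taxi", "arrive on time")}
    print(effective_selected_set(acts, {"arrive on time", "save money"},
                                 lambda a, g: (a, g) in knows))   # {'bus'}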

However, this principle of joint satisfaction still goes only a little ways further towards obtaining a principle that will determine behavior in all situations. What if the intersection of selected action sets is null? What if there are several mutually exclusive actions leading to several goals? What if the attainment of two goals is mutually exclusive, no matter through what actions attained? These types of situations too can be dealt with by extending the concept of a goal to include goal preferences, that is, specific preferences for one state of affairs over another that are not grounded in knowledge of how these states differentially aid in reaching some common superordinate goal.

Even this extended principle of rationality does not cover all situations. The central principle refers to an action leading to a goal. In the real world it is often not clear whether an action will attain a specific goal. The difficulty is not just one of the possibility of error. The actual outcome may be truly probabilistic, or the action may be only the first step in a sequence that depends on other agents' moves, etc. Again, extensions exist to deal with uncertainty and risk, such as adopting expected value calculations and principles of minimizing maximum loss.

In proposing each of these solutions, I am not inventing anything.

Rather, this growing extension of rational behavior moves along a well-explored path, created mostly by modern day game theory, econometrics and decision theory (Von Neumann & Morgenstern, 1947; Luce & Raiffa, 1957). It need not be retraced here. The exhibition of the first few elementary extensions can serve to indicate the total development to be taken in search of a principle of rationality that always determines the action to be taken.

Complete retracing is not necessary because the path does not lead, even ultimately, to the desired principle. No such principle exists.*

*An adequate critical recounting of this intellectual development is not possible here. Formal systems (utility fields) can be constructed that appear to have the right property. But they work, not by connecting actions with goals via knowledge of the task environment, but by positing of the agent a complete set of goals (actually preferences) that directly specify all action selections over all combinations of task states (or probabilistic options over task states). Such a move actually abandons the enterprise.

Given an agent in the real world, there is no guarantee at all that his knowledge will determine which actions realize which goals. Indeed, there is no guarantee even that the difficulty resides in the incompleteness of the agent's knowledge. There need not exist any state of knowledge that would determine the action.

The point can be brought home by recalling Frank Stockton's famous short story, The Lady, or the Tiger? The upshot has the lover of a beautiful princess caught by the king. He now faces the traditional ordeal of judgement in this ancient and barbaric land: In the public arena he must choose to open one of two utterly indistinguishable doors. Behind one is a ferocious tiger and death; behind the other a lovely lady, life and marriage. Guilt or innocence is determined by fate. The princess alone, spurred by her love, finds out the secret of the doors on the night before the trial. To her consternation she finds that the lady behind the door will be her chief rival in loveliness. On the judgement day, from her seat in the arena beside the king, she indicates by a barely perceptible nod which door her lover should open. He, in unhesitating faithfulness, goes directly to that door. The story ends with a question. Did the princess send her lover to the lady or the tiger?

Our knowledge-level model of the princess, even if it were to include her complete knowledge, would not tell us which door she chose. But the failure of determinacy is not the model's. Nothing says that multiple goals need be compatible, nor that the incompatibility be resolvable by any higher principles. The dilemma belongs to the princess, not to the scientist attempting to formulate an adequate concept of the knowledge level. That she resolved it is clear, but that her behavior in doing so was describable at the knowledge level does not follow.

This failure to determine behavior uniquely is the third surprise, running counter to the common feature at all levels that a system is a determinate machine (point 1 above). A complete description of a system at the program, logic or circuit level yields the trajectory of the system's behavior over time, given initial and boundary conditions. This is taken to be one of its important properties, consistent with being the description of a deterministic (macro) physical system. Yet radical incompleteness characterizes the knowledge level. Sometimes behavior can be predicted by the knowledge level description; often it cannot. The incompleteness is not just a failure in certain special situations or in some small departures. The term radical is used to indicate that entire ranges of behavior may not be describable at the knowledge level, but only in terms of systems at a lower level (namely, the symbol level). However, the necessity of accepting this incompleteness is an essential aspect of this level.

The Nature of Knowledge

The ground is now laid to provide the definition of knowledge. As formulated so far, knowledge is defined to be the medium at the knowledge level, something to be processed according to the principle of rationality to yield behavior. We wish to elevate this into a complete definition:


Knowledge: Whatever can be ascribed to an agent, such that its behavior can be computed according to the principle of rationality.

Knowledge is to be characterized entirely functionally, in terms of what it does, not structurally, in terms of physical objects with particular properties and relations. This still leaves open the requirement for a physical structure for knowledge that can fill the functional role. In fact, that key role is never filled directly. Instead, it is filled only indirectly and approximately by symbol systems at the next lower level. These are total systems, not just symbol structures. Thus, knowledge, though a medium, is embodied in no medium-like passive physical structure.

This failure to have the medium at the knowledge level be a state-like physical structure is the fourth surprise, running counter to the common feature at all levels of a passive medium (point 4 above). Again, it is an essential feature of the knowledge level, giving knowledge its special abstract and competence-like character. One can see on the blackboard a symbolic expression (say a list of theorem names). Though the actual seeing is a tad harder, yet there is still no difficulty seeing this same expression residing in a definite set of memory cells in a computer. The same is true at lower levels - the bit vectors at the register-transfer level - marked on the blackboard or residing in a register as voltages. Moreover, the medium at one level plus additional static structure defines the medium at the next level up. The bit plus its organization into registers provides the bit vector; collections of bit vectors plus functional specialization to link fields, type fields, etc, defines the symbolic expression. All this fails at the knowledge level. The knowledge cannot so easily be seen, only imagined as the result of interpretive processes operating on symbolic expressions. Moreover, knowledge is not just a collection of symbolic expressions plus some static organization; it requires both processes and data structures.

The definition above may seem like a reverse, even perverse, way of defining knowledge. To understand it - to see how it works, why it is necessary, why it is useful, and why it is effective - we need to back off and examine the situation in which knowledge is used.

Figure 4-1. Observer and agent.

How it works. Figure 4-1 shows the situation, which involves an observer and an agent. The observer treats the agent as a system at the knowledge level, ie, ascribes knowledge and goals to it. Thus, the observer knows the agent's knowledge (K) and goals (G), along with his possible actions and his environment (these latter by direct observation, say). In consequence, the observer can make predictions of the agent's actions using the principle of rationality.

Assume the observer ascribes correctly, ie, the agent behaves as if he has knowledge K and goals G. What the agent really has is a symbol system, S, that permits it to carry out the calculations of what actions it will take, because it has K and G (with the given actions in the given environment).

The observer is itself an agent, ie, is describable at the knowledge level. There is no way in this theory of knowledge to have a system that is not an agent have knowledge or ascribe knowledge to another system. Hence, the observer has all this knowledge (ie, K', consisting of knowledge of K, knowledge that K is the agent's knowledge, knowledge of G, knowledge that these are the agent's goals, knowledge of the agent's actions, etc). But what the observer really has, of course, is a symbol system, S', that lets it calculate actions on the basis of K, goals, etc, ie, calculate what the agent would do if it had K and G.

Thus, as the figure shows, each agent (the observer and the observed) has knowledge by virtue of a symbol system that provides the ability to act as if it had the knowledge. The total system (ie, the dyad of the observing and observed agents) runs without there being any physical structure that is the knowledge.
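
To make the scheme concrete, here is a minimal sketch of my own (not anything in the original formulation): the observer ascribes a body of knowledge K and goals G to the agent, and predicts the agent's action by applying the principle of rationality to that ascription. Knowledge is caricatured as a set of means-ends relations; all names and content are invented for illustration.

    # Illustrative sketch only: knowledge caricatured as means-ends relations
    # mapping (goal, state) to an action.  The principle of rationality then reads:
    # if the agent has knowledge that one of its actions leads to one of its goals,
    # it selects that action.
    def principle_of_rationality(knowledge, goals, state):
        """Select an action that, according to the ascribed knowledge, attains a goal."""
        for (goal, condition), action in knowledge.items():
            if goal in goals and condition == state:
                return action
        return None   # no prediction: the knowledge level is radically incomplete

    # The observer's ascription of K and G to the agent (invented content).
    K = {("have_coffee", "in_kitchen"): "brew_coffee",
         ("have_coffee", "in_office"): "walk_to_kitchen"}
    G = {"have_coffee"}

    # The observer predicts behavior with no model of the agent's internal processing.
    print(principle_of_rationality(K, G, state="in_office"))    # -> walk_to_kitchen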

Whr it is rzecessarl’. Even granting the scheme of Figure 4to be possible , why cannot there simply be some physicstructure that corresponds to knowledge? Why are not S aS’ simply the knowledge? The diagram seems similar to wcould be drawn between any two adjacent system levels . a

in these other cases corresponding structures do exist at eaclevel For example, the medium at the symbol level (symbolic expressions) corresponds to the medium at tregister-transfer level (the bit vectors). The higher mediumsimply a specia lized use of the lower medium, but both physical structures The same holds when we descend from tregister-transfer level (bit vectors) to the logic circuit l(single bits); the relationship is simply one of aggregatioDescending to the circuit level again involves specializationwhich some feature of the circu it level medium (eg, voltagetaken to define the bit (eg, a high and a low voltage lev

The answer in a nutshell is that knowledge of the world cannot be captured in a finite structure. The world is too rich and agents have too great a capability for responding.* Knowledge is about the world. Insofar as an agent can select actions based on some truth about the world, any candidate structure for knowledge must contain that truth. Thus knowledge as a structure must contain at least as much variety as the set of all truths (ie, propositions) that the agent can respond to.

*Agents with finite knowledge are certainly possible, but would be extraordinarily limited.


A representation of any fragment of the world (whether abstract or concrete) reveals immediately that the knowledge is not finite. Consider our old friend, the chess position. How many propositions can be stated about the position? For a given agent, only propositions need be considered that could materially affect an action of the agent relative to its goal. But given a reasonably intelligent agent who desires to win, the list of aspects of the position is unbounded. Can the Queen take the Pawn? Can it take the Pawn next move? Can it if the Bishop moves? Can it if the Rook is ever taken? Is a pin possible that will unblock the Rook attacker before a King side attack can be mounted? And so on.

A seemingly appropriate objection is: (1) the agent is finite, so can't actually have an unbounded anything; and (2) the chess position is also just a finite structure. However, the objection fails, because it is not possible to state from afar (ie, from the observer's viewpoint) a bound on the set of propositions about the position that will be available. Of course, if the observer had a model of the agent at the symbolic process level, then a (possibly accurate) prediction might be made. But the knowledge level is exactly the level that abstracts away from symbolic processes. Indeed, the way the observer determines what the agent might know is to consider what he (the observer) can find out. And this he cannot determine in advance.

The situation here is not really strange. The underlying phenomenon is the generative ability of computational systems, which involves an active process working on an initially given data structure. Knowledge is the posited extensive form of all that can be obtained potentially from the process. This potential is unbounded when the details of processing are unknown and the gap is closed by assuming (from the principle of rationality) the processing to be whatever makes the correct selection. Of course, there are limits to this processing, namely, that it cannot go beyond the knowledge given.

What the computational system generates are selections of actions for goals, conditioned on states of the world. Each such basic means-ends relation may be taken as an element of knowledge. To have the knowledge available in extension would be to have all these possible knowledge elements for all the goals, actions and states of the world discriminable to the agent at the given moment. The knowledge could then be thought of as a giant table full of these knowledge elements, but it would have to be an infinite table. Consequently, this knowledge (ie, these elements) can only be created dynamically in time. If generated by some simple procedure, only relatively uninteresting knowledge can be found. Interesting knowledge requires generating only what is relevant to the task at hand, ie, generating intelligently.
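
The contrast between knowledge in extension and its dynamic generation can be caricatured as follows (a sketch of my own, with invented names): the infinite table of knowledge elements can exist only as a generator, and an intelligent symbol system produces just the finite, goal-relevant fragment.

    # Illustrative sketch only: a knowledge element as a (state, goal, action) triple.
    # The table of all such elements is unbounded, so it can only be produced lazily,
    # and only the fragment relevant to the current goal is ever generated.
    from itertools import islice

    def knowledge_in_extension(goal):
        """Stand-in for the unbounded table of knowledge elements."""
        n = 0
        while True:                     # there is no last element
            state = f"state_{n}"
            yield (state, goal, f"action_suited_to_{state}")
            n += 1

    # A finite, goal-relevant fragment, generated on demand.
    for element in islice(knowledge_in_extension("win_the_game"), 3):
        print(element)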

Why it is useful. The knowledge level permits predicting and understanding behavior without having an operational model of the processing that is actually being done by the agent. The utility of such a level would seem clear, given the widespread need in life's affairs for distal prediction, and also the paucity of knowledge about the internal workings of humans.* The utility is also clear in designing AI systems, where the internal mechanisms are still to be specified. To the extent that AI systems successfully approximate rational agents, it is also useful for predicting and understanding them. Indeed, the usefulness extends beyond AI systems to all computer programs.

Prediction of behavior is not possible without knowing something about the agent. However, what is known is not the processing structure, but the agent's knowledge of its external environment, which is also accessible to the observer directly (to some extent). The agent's goals must also be known, of course, and these certainly are internal to the agent. But they are relatively stable characteristics that can be inferred from behavior and (for human adults) can sometimes be conveyed by language. One way of viewing the knowledge level is as the attempt to build as good a model of an agent's behavior as possible based on information external to the agent, hence permitting distal prediction. This standpoint makes understandable why such a level might exist even though it is radically incomplete. If such incompleteness is the best that can be done, it must be tolerated.

Why it is effective. The knowledge level, being indirect and abstract, might be thought to be excessively complex, even abstruse. It could hardly have arisen naturally. On the contrary, given the situation of Figure 4-1, it arises as day follows night. The knowledge attributed by the observer to the agent is knowledge about the external world. If the observer takes to itself the role of the other (ie, the agent), assuming the agent's goals and attending to the common external environment, then the actions it determines for itself will be those that the agent should take. Its own computational mechanisms (ie, its symbolic system) will produce the predictions that flow from attributing the appropriate knowledge to the agent.

This scheme works for the observer without requiring the development of any explicit theory of the agent or the construction of any computational model. To be more accurate, all that is needed is a single general purpose computational mechanism, namely, creating an embedding context that posits the agent's goals and symbolizes (rather than executes) the resulting actions.† Thus, simulation turns out to be a central mechanism that enables the knowledge level and makes it useful.

Solutions to the Representation of Knowledge

The principle of rationality provides, in effect, a general functional equation for knowledge. The problem for agents is to find systems at the symbol level that are solutions to this functional equation, and hence can serve as representations of knowledge.

*And also animals, some of which are usefully described at the knowledge level.

†However, obtaining this is still not a completely trivial accomplishment, as indicated by the emphasis on egocentrism in the early work of Piaget (1923) and the significance of taking the role of the other in the work of the social philosopher, George Herbert Mead (1934), who is generally credited with originating the phrase.


That, of course, is also our own problem, as scientists of the intelligent. If we wish to study the knowledge of agents we must use representations of that knowledge. Little progress can be made given only abstract characterizations. The solutions that agents have found for their own purposes are also potentially useful solutions for scientific purposes, quite independently of their interest because they are used by agents whose intelligent processing we wish to study.

Knowledge, in the principle of rationality, is defined entirely in terms of the environment of the agent, for it is the environment that is the object of the agent's goals, and whose features therefore bear on the way actions can attain goals. This is true even if the agent's goals have to do with the agent itself as a physical system. Therefore, the solutions are ways to say things about the environment, not ways to say things about reasoning, internal information processing states, and the like.*

Logics are obvious candidates. They are, exactly, a refined means for saying things about environments. Logics certainly provide solutions to the functional equation. One can find many situations in which the agent's knowledge can be characterized by an expression in a logic, and in which one can go through in mechanical detail all the steps implied in the principle, deriving ultimately the actions to take and linking them up via a direct semantics so the actions actually occur. Examples abound. They do not even have to be limited to mathematics and logic, given the work in AI to use predicate logic (eg, resolution) as the symbol level structures for robot tasks of spatial manipulation and movement, as well as for many other sorts of tasks (Nilsson, 1980).

A logic is just a representation of knowledge. It is not the knowledge itself, but a structure at the symbol level. If we are given a set of logical expressions, say {Li}, of which we are willing to say that the agent "knows {Li}", then the knowledge K that we ascribe to the agent is:

The agent knows all that can be inferred from the conjunction of {Li}.

This statement simply expresses for logic what has been set out more generally above. There exists a symbol system in the agent that is able to bring any inference from {Li} to bear to select the actions of the agent as appropriate (ie, in the service of the agent's goals). If this symbol system uses the clauses themselves as a representation, then presumably the active processes would consist of a theorem prover on the logic, along with sundry heuristic devices to aid the agent in arriving at the implications in time to perform the requisite action selections.
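
For a propositional Horn fragment the statement can be made concrete. The toy sketch below (my own, not a claim about any particular theorem prover) computes the deductive closure of a set of expressions {Li}; the knowledge ascribed to the agent is everything in that closure, though a real agent's processes would derive only the parts its goals make relevant.

    # Toy forward chaining over propositional Horn clauses, for illustration only.
    # A rule is (set_of_premises, conclusion); the initial facts play the role of {Li}.
    def closure(facts, rules):
        """Return the deductive closure: all that can be inferred from the conjunction of {Li}."""
        known = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= known and conclusion not in known:
                    known.add(conclusion)
                    changed = True
        return known

    L = {"queen_attacks_pawn", "pawn_is_undefended"}
    rules = [({"queen_attacks_pawn", "pawn_is_undefended"}, "queen_can_take_pawn"),
             ({"queen_can_take_pawn"}, "material_gain_available")]

    # The knowledge level ascribes the whole closure; a finite agent computes only a part.
    print(closure(L, rules))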

This statement should bring to prominence an important question, which, if not already at the forefront of concern, should be.

*However, control over internal processing does require symbols that designate internal processing states and structures.

Given that a human cannot know all the implications of an (arbitrary) set of axioms, how can such a formulation of knowledge be either correct or useful?

Philosophy has many times explicitly confronted the proposition that knowledge is the logical closure of a set of axioms. It has seemed so obvious. Yet the proposal has always come to grief on the rocky question above. It is trivial to generate counterexamples that show a person cannot possibly know all that is implied. In a field where counterexamples are a primary method for making progress, this has proved fatal and the proposal has little standing.* Yet, the theory of knowledge being presented here embraces that the knowledge to be associated with a conjunction of logical expressions is its logical closure. How can that be?

The answer is straightforward. The knowledge level is only an approximation, and a relatively poor one on many occasions - we called it radically incomplete. It is poor for predicting whether a person remembers a telephone number just looked up. It is poor for predicting what a person knows given a new set of mathematical axioms with only a short time to study them. And so on, through whole meadows of counterexamples. Equally, it is a good approximation in many other cases. It is good for predicting that a person can find his way to the bedroom of his own house, for predicting that a person who knows arithmetic will be able to add a column of numbers. And so on, through much of what is called common sense knowledge.

This move to appeal to approximation (as the philosophers are wont to call such proposals) seems weak, because declaring something an approximation seems a general purpose dodge, applicable to dissolving every difficulty, but clearly dispelling none. However, an essential part of the current proposal is the existence of the second level of approximation, namely, the symbol level. We now have models of the symbol level that describe how information processing agents arrive at actions by means of search - search of problem spaces and search of global data bases - and how they map structures that are representations of given knowledge to structures that are representations of task-state knowledge in order to create representations of solutions. The discovery, development and elaboration of this second level of approximation to describing and predicting the behavior of an intelligent agent has been what AI has been all about in the quarter century of its existence. In sum, given a theory of the symbol level, we can finally see that the knowledge level is just about what seemed obvious all along.

Returning to the search for solutions to the functional equation expressed by the principle of rationality, logics are only one candidate.

*For example, both the beautiful formal treatment of knowledge by Hintikka (1962) and the work in AI by Moore (1980) continue to insist that knowing P and knowing that P implies Q need not lead to knowing Q.


They are in no way privileged.* There are many other systems (ie, combinations of symbol structures and processes) that can yield useful solutions. To be useful an observer need only use it to make predictions according to the principle of rationality. If we consider the problem from the point of view of agents, rather than of AI scientists, then the fundamental principle of the observer must be:

To ascribe to an agent the knowledge associated with structure S is to ascribe whatever the observer can know from structure S.

Theories, models, pictures, physical views, remembered scenes, linguistic texts and utterances, etc, etc - all these are entirely appropriate structures for ascribing knowledge. They are appropriate because for an observer to have these structures is also for it to have means (ie, the symbolic processes) for extracting knowledge from them.

Not only are logics not privileged, there are difficulties with them. One, already mentioned in connection with the resolution controversy, is processing inefficiency. Another is the problem of contradiction. From an inconsistent conjunction of propositions, any proposition follows. Further, in general, the contradiction cannot be detected or extirpated by any finite amount of effort. One response to this latter difficulty takes the form of developing new logics or logic-like representations, such as non-monotonic logics (Bobrow, 1980). Another is to treat the logic as only an approximation, with a limited scope, embedding its use in a larger symbol processing system. In any event, the existence of difficulties does not distinguish logics from other candidate representations. It just makes them one of the crowd.

Knowledge Level              Symbol Level
Agent                        Total symbol system
Actions                      Symbol systems with transducers
Knowledge                    Symbol structure plus its processes
Goals                        (Knowledge of goals)
Principle of rationality     Total problem solving process

Figure 4-2. Reduction of the Knowledge Level to the Symbol Level.

When we turn from the agents themselves and consider representations from the viewpoint of the AI scientist, many candidates become problems rather than solutions. If the data structures alone are known (the pictures, language expressions, and so on), and not the procedures used to generate the knowledge from them, they cannot be used in engineered AI systems or in theoretical analyses of knowledge.

*Though the contrary might seem the case. The principle of rationality might seem to presuppose logic, or at least its formulation might. Untangling this knot requires more care than can be spent here. Note only that it is we, the observers, who formulate the principle of rationality (in some representation). Agents only use it; indeed, only approximate it.

The difficulty follows simply from the fact that we do not know the entire representational system.* As a result, representations, such as natural language, speech and vision, become arenas for research, not tools to be used by the AI scientist to characterize the knowledge of agents under study. Here, logics have the virtue that the entire system - data structures and processes (ie, rules of inference) - has been externalized and is well understood.

The development of AI is the story of constructing many other systems that are not logics, but can be used as representations of knowledge. Furthermore, the development of mathematics, science and technology is in part the story of bringing representational structures to a degree of explicitness very close to what is needed by AI. Often only relatively modest effort has been needed to extend such representations to be useful for AI systems. Good examples are algebraic mathematics and chemical notations.

Relation of the Knowledge Level to the Symbol Level

Making clear the nature of knowledge has already required discussing the central core of the reduction of the knowledge level to the symbol level. Hence the matter can be summarized briefly.

Figure 4-2 lists the aspects of the knowledge level and shows to what each corresponds at the symbol level. Starting at the top, the agent corresponds to the total system at the symbol level. Next, the actions correspond to systems that include external transducers, both input and output. An arbitrary amount of programmed system can surround and integrate the operations that are the actual primitive transducers at the symbolic level.

As we have seen, knowledge, the medium at the knowledge level, corresponds at the symbol level to data structures plus the processes that extract from these structures the knowledge they contain. To "extract knowledge" is to participate with symbolic processes in executing actions, just to the extent that the knowledge leads to the selection of these actions at the knowledge level. The total body of knowledge corresponds, then, to the sum total of the memory structure devoted to such data and processes.

A goal is simply more knowledge, hence corresponds at the symbol level to data structures and processes, just as does any body of knowledge. Three sorts of knowledge are involved: knowledge of the desired state of affairs; knowledge that the state of affairs is desired; and knowledge of associated concerns, such as useful methods, prior attempts to attain the goals, etc. It is of little moment whether these latter items are taken as part of the goal or as part of the body of knowledge of the world.

The principle of rationality corresponds at the symbol level to the processes (and associated data structures) that attempt to carry out problem solving to attain the agent's goals.

*It is irrelevant that each of us, as agents rather than AI scientists, happens to embody some of the requisite procedures, so long as we, as AI scientists, cannot get them externalized appropriately.


There is more to the total system than just the separate symbol systems that correspond to the various bodies of knowledge. As repeatedly emphasized, the agent cannot generate at any instant all the knowledge that it has encoded in its symbol systems that correspond to its bodies of knowledge. It must generate and bring to bear the knowledge that, in fact, is relevant to its goals in the current environment.

At the knowledge level, the principle of rationality and knowledge present a seamless surface: a uniform principle to be applied uniformly to the content of what is known (ie, to whatever is the case about the world). There is no reason to expect this to carry down seamlessly to the symbolic level, with (say) separate subsystems for each aspect and a uniform encoding of knowledge. Decomposition must occur, of course, but the separation into processes and data structures is entirely a creation of the symbolic level, which is governed by processing and encoding considerations that have no existence at the knowledge level. The interface between the problem solving processes and the knowledge extraction processes is as diverse as the potential ways of designing intelligent systems. A look at existing AI programs will give some idea of the diversity, though no doubt we still are only at the beginnings of exploration of potential mechanisms. In sum, the seamless surface at the knowledge level is most likely a pastiche of interlocked intricate structures when seen from below, much like the smooth skin of a baby when seen under a microscope.

The theory of the knowledge level provides a definition of representation, namely, a symbol system that encodes a body of knowledge. It does not provide a theory of representation, which properly exists only at the symbol level and which tells how to create representations with particular properties, how to analyse their efficiency, etc. It does suggest that a useful way of thinking about representation is according to the slogan equation:

Representation = Knowledge + Access

The representation consists of a system for providing access to a body of knowledge, ie, to the knowledge in a form that can be used to make selections of actions in the service of goals. The access function is not a simple generator, producing one knowledge element (ie, means-ends relation) after another. Rather, it is a system for delivering the knowledge encoded in a data structure that can be used by the larger system that represents the knowledge about goals, actions, etc. Access is a computational process, hence has associated costs. Thus, a representation imposes a profile of computational costs on delivering different parts of the total knowledge encoded in the representation.
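
The slogan equation can be caricatured in code. In the sketch below (all names invented for illustration) a representation couples a symbol-level data structure with an access process; two encodings of the same body of knowledge impose different cost profiles on delivering particular knowledge elements.

    # Illustrative sketch of Representation = Knowledge + Access: the same body of
    # knowledge under two encodings whose access processes have different costs
    # (linear scan vs. binary search).
    from bisect import bisect_left

    class Representation:
        def __init__(self, data, access):
            self.data = data        # symbol-level structure encoding the knowledge
            self.access = access    # process that extracts knowledge elements from it

        def lookup(self, key):
            return self.access(self.data, key)

    def linear_access(pairs, key):
        return next((v for k, v in pairs if k == key), None)

    def sorted_access(pairs, key):
        keys = [k for k, _ in pairs]
        i = bisect_left(keys, key)
        return pairs[i][1] if i < len(pairs) and keys[i] == key else None

    phone_knowledge = [("Mike", "555-0101"), ("Mary", "555-0101"), ("Pat", "555-0199")]
    rep_a = Representation(phone_knowledge, linear_access)
    rep_b = Representation(sorted(phone_knowledge), sorted_access)
    print(rep_a.lookup("Mike"), rep_b.lookup("Mike"))   # same knowledge, different costs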

Mixed systems. The classic relationship between computer system levels is that, once a level is adopted, useful analysis and synthesis can proceed exclusively in terms of that level, with only side studies on how lower level behavior might show up as errors or how lower level constraints might condition the types of structures that are efficient. The radical incompleteness of the knowledge level leads to a different relationship between it and the symbol level. Mixed systems are often considered, even becoming the norm on occasions.

One way this happens is in the distal prediction of human behavior, in which it often pays to mix a few processing notions along with the pure knowledge considerations. This is what is often called man-in-the-street psychology. We recognize that forgetting is possible, and so we do not assume that knowledge once obtained is forever. We know that inferences are available only if the person thinks it through, so we do not assume that knowing X means knowing all the remote consequences of X, though we have no good way of determining exactly what inferences will be known. We know that people can only do a little processing in a short time, and do less processing when under stress. Having only crude models of the processing at the symbol level, these mixed models are neither very tidy nor uniformly effective. The main tool is the use of self as simulator for the agent. But mixed models are often better than pure knowledge-level models.

Another important case of mixed models - especially for AI and computer science - is the use of the knowledge level to characterize components in a symbol-level description of a system. Memories are described as having a given body of knowledge and messages are described as transferring knowledge from one memory to another. This carries all the way to design philosophies that work in terms of a "society of mind" (Minsky, 1977) and to executive systems that oversee and analyse the operation of other internal processing. The utility of working with such mixed-level systems is evident for both design and analysis. For design, it permits specifications of the behavior of components to be given prior to specifying their internal structure. For analysis, it lets complex behavior be summarized in terms of the external environments of the components (which comprise the other internal parts of the system).

Describing a component at the knowledge level treats it as an intelligent agent. The danger in this is well known, and is called the problem of the homunculus. If an actual system is produced, all knowledge-level descriptions must ultimately be replaced by symbol-level descriptions and there is no problem. As such replacement proceeds, the internal structure of the components becomes simpler, thus moving further away from a structure that could possibly realize an intelligent agent. Thus, the interesting question is how the knowledge-level description can be a good approximation even though the subsystem being so described is quite simple. The answer turns on the limited nature of the goals and environments of such agents, whose specification is also under the control of the system designer.

V. Consequences and Relationships

We have set out the theory in sufficient outline to express its general nature. Here follows some discussion of its consequences, its relations to other aspects of AI, and its relation to conceptions in other fields.


The Practice of AI

At the beginning I claimed that this theory of knowledge derived from our practice in AI. Some of its formal aspects clearly do not, especially, positing a distinct systems level, which splits apart the symbol and the knowledge level. Thus, it is worth exploring the matter of our practice briefly.

When we say, as we often do in explaining an action of a program, that "the program knows K" (eg, "the theorem prover knows the distributive law"), we mean that there is some structure in the program that we view as holding K and also that this structure was involved in selecting the action in exactly the manner claimed by the principle of rationality, namely, the encoding of that knowledge is related to the goal the action is to serve, etc.

More revealing, when we talk, as we often do during the design of a program, about a proposed data structure having or holding knowledge K (eg, "this table holds the knowledge of co-articulation effects"), we imply that some processes must exist that take that data structure as input and make selections of which we can say, "The program did action A because it knew K." Those processes may not be known to the system's designer yet, but the belief exists that they can be found. They may not be usable when found, because they take too much time or space. Such considerations do not affect whether "knowledge K is there," only whether it can be extracted usefully. Thus, our notion of knowledge has precisely a competence-like character. Indeed, one of its main uses is to let us talk about what can be done, before we have found out how to do it.

Most revealingly of all, perhaps, when we say, as we often do, that a program "can't do action A, because it doesn't have knowledge K," we mean that no amount of processing by the processes now in the program on the data structures now in the program can yield the selection of A. (Eg, "This chess program can't avoid the tie, because it doesn't know about repetition of positions.") Such a statement presupposes that the principle of rationality would lead to A given K, and that no way exists to get A selected other than having K satisfy the principle of rationality. If in fact some rearrangement of the processing did lead to selecting A, then additional knowledge can be expected to have been imported, eg, from the mind of the programmer who did the rearranging (though accident can confound expectation on occasion, of course).

The Hearsay II speech understanding system (Erman & Lesser, 1980) provides a concrete example of how the concept of knowledge is used. Hearsay has helped to make widespread a notion of a system composed of numerous sources of knowledge, each of which is associated with a separate module of the program (called, naturally enough, a knowledge source), and all of which act cooperatively and concurrently to attain a solution. What makes this idea so attractive - indeed, seductive - is that such a system seems a close approximation to a system that operates purely at the knowledge level: one identifies the knowledge in the abstract - eg, syntax, phonology, co-articulation, etc - and then designs representations and processes that encode that knowledge and provide for its extraction against a common representation of the task in the blackboard (the working memory).
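
As a hedged illustration of this style of design (invented names throughout, not the Hearsay architecture itself): a component can be specified at the knowledge level, by the body of knowledge it is to hold and deliver, with its symbol-level realization supplied separately.

    # Illustrative sketch only: a knowledge source specified by what it knows and must
    # deliver (knowledge level), with one possible symbol-level realization below it.
    from abc import ABC, abstractmethod

    class KnowledgeSource(ABC):
        """Knowledge-level specification: holds knowledge of co-articulation effects."""
        @abstractmethod
        def suggest(self, hypothesis):
            """Deliver knowledge relevant to the given hypothesis on the blackboard."""

    class CoarticulationTable(KnowledgeSource):
        """One symbol-level realization: a lookup table plus a trivial extraction process."""
        def __init__(self):
            self.table = {"gonna": ["going", "to"]}
        def suggest(self, hypothesis):
            return self.table.get(hypothesis, [])

    print(CoarticulationTable().suggest("gonna"))    # -> ['going', 'to']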

This theory of knowledge has not arisen sui generis from unarticulated practice. On the contrary, the fundamental insights on which the theory draws have been well articulated and are part of existing theoretical notions, not only in AI but well beyond, in psychology and the social sciences. That an adaptive organism, by the very act of adapting, conceals its internal structure from view and makes its behavior solely a function of the task environment, has been a major theme of the artificial sciences. In the work of my colleague, Herb Simon, it stretches back to the forties (Simon, 1947), with the concern for the nature of administrative man versus economic man. In our book on Human Problem Solving (Newell & Simon, 1972), we devoted an entire chapter to an analysis of the task environment, which turned on precisely this point. And Herb Simon devoted his recent talk at the formative meeting of the Cognitive Science Society to a review of this same topic (Simon, 1980), to which I refer you for a wider discussion and references. In sum, the present theory is to be seen as a refinement of this existing view of adaptive systems, not as a new theory.

Contributions to the Knowledge Level vs the Symbol Level

By distinguishing sharply between the knowledge level and the symbol level the theory implies an equally sharp distinction between the knowledge required to solve a problem and the processing required to bring that knowledge to bear in real time and real space. Contributions to AI may be of either flavor, ie, either to the knowledge level or to the symbol level. Both aspects always occur in particular studies, because experimentation always occurs in total AI systems. But looking at the major contribution to science, it is usually toward one pole or the other; only rarely is a piece of research innovative enough to make both types of contributions. For instance, the major thrust of the work on MYCIN (Shortliffe, 1976) was fundamentally to the knowledge level, in capturing the knowledge used by medical experts. The processing, an adaptation of well understood notions of backward chaining, played a much smaller role. Similarly, the SNAC procedure used by Berliner (1980) to improve radically his Backgammon program was primarily a contribution to our understanding of the symbol level, since it discovered (and ameliorated) the effects of discontinuities in global evaluation functions patched together from many local ones. It did not add to our formulation of our knowledge about Backgammon.

This proposed separation immediately recalls the well known distinction of John McCarthy and Pat Hayes (1969) between epistemological adequacy and heuristic adequacy. Indeed, they would seem likely to be steadfast supporters of the theory presented here, perhaps even claiming much of it to be at most a refinement on their own position. I am not completely against such an interpretation, for I find considerable merit in their position. In fact, a recent essay by McCarthy on ascribing mental qualities to machines (McCarthy, 1979a) makes many points similar to those of the present paper (though without embracing the notion of a new computer systems level).


When the program is told:
    "Mary has the same telephone number as Mike."
    "Pat knows Mike's telephone number."
    "Pat dialed Mike's telephone number."

The program should assert:
    "Pat dialed Mary's telephone number."
    "I do not know if Pat knows Mary's telephone number."

Figure 5-1. Example from McCarthy (1977).

However, all is not quite so simple. I once publicly put to John McCarthy (Bobrow, 1977) the proposition that the role of logic was as a tool for the analysis of knowledge, not for reasoning by intelligent agents, and he denied it flat out.

The matter is worth exploring briefly. It appears to be a prime plank in McCarthy's research program that the appropriate representation of knowledge is with a logic. The use of other forms plays little role in his analyses. Thus, the fundamental question of epistemological adequacy, namely, whether there exists an adequate explicit representation of some knowledge, is conflated with how to represent the knowledge in a logic. As observed earlier, there are many other forms in which knowledge can be represented, even setting entirely to one side forms whose semantics are not yet well enough understood scientifically, such as natural language and visual images.

Let us consider a simple example (McCarthy, 1977, p. 987), shown in Figure 5-1, one of several that McCarthy has used to epitomize various problems in representation within logic. This one is built to show difficulties in transparency of reference. In obvious formulations, having identified Mary's and Mike's telephone numbers, it is difficult to retain the distinction between what Pat knows and what is true in fact.

However, the difficulty simply does not exist if the situation is represented by an appropriate model. Let Pat and the program be modeled as agents who have knowledge, with the knowledge localized inside the agent and associated with definite data structures, with appropriate input and output actions, etc - ie, a simple version of an information processing system. Then, there is no difficulty in keeping knowledge straight, as well as what can be and cannot be inferred.

The example exhibits a couple of wrinkles, but they do not cause conceptual problems. The program must model itself, if it is to talk about what it doesn't know. That is, it must circumscribe the data structures that are the source of its knowledge about Pat. Once done, however, its problem of ascertaining its own knowledge is no different from its problem of ascertaining Pat's knowledge. Also, one of its utterances depends on whether from its representation of the knowledge of Pat it can infer some other knowledge. But the difficulties of deciding whether a fact cannot be inferred from a given finite base seem no different from deciding any issue of failure to infer from given premises.

To be sure, my treatment here is a little cavalier, given existing analyses. It has been pointed out that difficulties emerge in the model-based solution, if it is known only that either Pat knows Mike's telephone number or his address; and that even worse difficulties ensue from knowing that Pat does not know Mike's telephone number (eg, see Moore, 1980). Without pretending to an adequate discussion, it does not seem to me that the difficulties here are other than those of dealing with knowing only that a box is painted red or blue inside, or knowing that your hen will not lay a golden egg. In both cases, the representation of this knowledge by the observer must be in terms of descriptions of the model, not an instance of the model. But knowledge does not pose special difficulties.*

As one more example of differences between contributions to the knowledge level and the symbol level, consider a well known, albeit somewhat controversial case, namely, Schank's work on conceptual dependency structures. I believe its main contribution to AI has been at the knowledge level. That such a view is not completely obvious can be seen by the contrary stance of Pat Hayes in his already mentioned piece, "In Defence of Logic" (Hayes, 1977). Though not at much variance from the present paper in its basic theme, his paper exhibits Figure 5-2, classifies it as pretend-it's-English, and argues that there is no effective way to know its meaning.

[Figure 5-2. A conceptual dependency diagram, from Hayes (1977).]

On the contrary, I claim that conceptual dependency structures made a real contribution to our understanding of the knowledge level. The content of this contribution lies in the model indicated in Figure 5-3, taken rather directly from Schank & Abelson (1977). The major claim of conceptual dependency is that this simplest causal model is adequate to give a first approximation to the semantics of a certain fragment of natural language.

*Indeed, two additional recent attempts on this problem, though cast as logical systems, seem essentially to adopt a model view. One is by McCarthy himself (McCarthy, 1979b), who introduces the concept of a number as distinct from the number. Concepts seem just to be a way to get data structures to talk about. The other attempt (Konolige, 1980) uses expressions in two languages, again with what seems the same effect.


This model, though really only sketched in the figure, is not in itself very problematical, though as with all formalization it is an important act to reduce it to a finite apparatus. There is a world of states filled with objects that have attributes and whose dynamics occur through actions. Some objects, called actors, have a mentality, which is to say they have representations in a long term memory and are capable of mental acts. The elementary dynamics of this world, in terms of what can produce or inhibit what, are indicated in the figure.

The claim in full is that if sentences are mapped into a model of this sort in the obvious way, an interpretation of these sentences is obtained that captures a good deal of their meaning. The program Margie (Schank, 1975) provided the initial demonstration, using paraphrase and inference as devices to provide explicit evidence. The continued use of these mechanisms as part of the many programs that have followed Margie has added to the evidence implicitly, as it gradually has become part of the practice in AI.

World: states, actions, objects, attributes

Actors: objects with mentality (a central processor plus a long-term memory)

Cause:
    An act results in a state
    A state enables an act
    A state or act initiates a mental state
    A mental act is the reason for a physical act
    A state disables an act

Figure 5-3. Conceptual Dependency model (after Schank & Abelson, 1977).
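
A data-structure sketch of such a causal model might look like the following (the class names and the example sentence are mine, chosen only to mirror Figure 5-3, not Schank's notation):

    # Illustrative sketch of the simple causal model behind conceptual dependency:
    # a world of states and acts, with actors as objects that have a mentality.
    from dataclasses import dataclass, field

    @dataclass
    class State:
        description: str

    @dataclass
    class Act:
        actor: str
        act_type: str                                   # a primitive act, eg PTRANS
        obj: str
        results: list = field(default_factory=list)     # an act results in states

    @dataclass
    class Actor:
        name: str
        long_term_memory: list = field(default_factory=list)   # mentality: memory of acts

    # "John pushed the ball to Mary", mapped into the model in the obvious way.
    push = Act(actor="John", act_type="PTRANS", obj="ball",
               results=[State("the ball is moving toward Mary")])
    john = Actor("John", long_term_memory=[push])
    print(push.results[0].description)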

Providing a simple model such as this constitutes a contribution to the knowledge level - to how to encode knowledge of the world in a representation. It is a quite general contribution, expressed in a way that makes it adaptable to a wide variety of intelligent systems.* On the other hand, this work made relatively little contribution to the symbol level, ie, to our notions of symbolic processing. The techniques that were used were essentially state of the art. This can be seen in the relative lack of emphasis or discussion of the internal mechanics of the program. For many of us, the meaning of conceptual dependency seemed undefined without a process that created conceptual dependency structures from sentences. Yet, when this was finally forthcoming (in Margie), there was nothing there except a large AI program containing the usual sorts of things, eg, various ad hoc mechanisms within a familiar framework (a parser, etc). What was missed was that the program was simply the implementation of the model in the obvious, ie, straightforward, way. The program was not supposed to add significantly to the specification of the mapping. There would have been trouble if additions had been required, just as a computer program for partial differential equations is not supposed to add to the mathematics.

Laying to Rest the Predicate Calculus Phobia

Let us review the theorem proving controversy of the sixties. From all that has been said earlier, it is clear that the residue can be swept aside. Logic is the appropriate tool for analyzing the knowledge level, though it is often not the preferred representation to use for a given domain. Indeed, given a representation - eg, a semantic net, a data structure for a chess position, a symbolic structure for some abstract problem space, a program, or whatever - to determine exactly what knowledge is in the representation and to characterize it requires the use of logic. Whatever the detailed differences represented in my discussion in the last section of the Pat and Mike example, the types of analysis being performed by McCarthy, Moore and Konolige (to mention only the names that arose there) are exactly appropriate. Just as talking of programmerless programming violates truth in packaging, so does talking of a non-logical analysis of knowledge.

Logic is of course a representation (actually, a family of them), and is therefore a candidate (along with its theorem proving engine) for the representation to be used by an intelligent agent. Its use in such a role depends strongly on computational considerations, for the agent must gain access to very large amounts of encoded knowledge, swiftly and reliably. The lessons of the sixties taught us some things about the limitations of using logics for this role. However, the lessons do not touch the role of logic as the essential language of analysis at the knowledge level.

Let me apply this view to Nilsson’s new textbook in (Nilsson, 1980). Now, I am an admirer of Nilsson’s bo(Newell, 1981).* It is the first attempt to transform textbooin AI into the mold of basic texts in science and engineerinNilsson’s book uses the first order predicate calculus as thlingua franca for presenting and discussing representationthroughout the book. The theory developed here says that just right; I think it is an important position for the bookhave adopted. However, the book also introduces logic intimate connection with resolution theorem proving, tasserting to the reader that logic i s to be seen as representation for problem solving systems. That seems to mjust wrong. Logic as a representation for problem solvirates only a minor theme in our textbooks. One consequencof the present’theory of knowledge wil l be to help assign loits proper role in AI.

*This interpretation of conceptual dependency in terms of a model is my own; Schank and Abelson (1977) prefer to cast it as a causal syntax. This latter may mildly obscure its true nature, for it seems to beg for a causal semantics as well.

*Especially so, because Nils and I both set out at the same time to write textbooks of AI. His is now in print and I am silent about mine.


The Relationship with Philosophy

The present theory bears some close and curious relationships with philosophy, though only a few preliminary remarks are possible here. The nature of mind and the nature of knowledge have been classical concerns in philosophy, forming major continents of its geography. There has been increasing contact between philosophy and AI in the last decade, focussed primarily in the area of knowledge representation and natural language. Indeed, the respondents to the Brachman and Smith questionnaire, reflecting exactly this group, opined that philosophy was more relevant to AI than psychology.

Philosophy’s concern with knowledge centers on the issueof certainty. When can knowledge by trusted? Does a personhave privileged access to his subjective awareness, so hisknowledge of it is infallible? This is ensconced in thedistinct ion in philosophy between knoM4edg-e and he&f, asindicated in the slogan phrase, knorclledge is justified truebelief: AI, taking all knowledge to be errorful, has seen fit tocall all such systems knoulledge swtetns It uses the term beliefonly informally when the lack of veridicality is paramount, as

in political belief systems. From philosophy’s standpoint, Aldeals only in belief systems. Thus, the present theory ofknowledge, sharing ‘as it does AI’s view of general indifferenceto the problems of absolute certainty , is simply inattentive tosome central philosophical concerns

An important connection appears to be with the notion of intentional systems. Starting in the work of Brentano (1874), the notion was of a class of systems capable of having desires, expectations, etc - all things that were about the external world. The major function of the formulation was to provide a way of distinguishing the physical from the mental, ie, of providing a characterization of the mental. A key result of this analysis was to open an unbridgeable gap between the physical and the mental. Viewing a system as physical precluded being able to ascribe intentionality to it. The enterprise seems at opposite poles from work in AI, which is devoted precisely to realizing mental functions in physical systems.

In the hands of Daniel Dennett (1978), a philosopher who has concerned himself rather deeply with AI, the doctrine of intentional systems has taken a form that corresponds closely to the notion of the knowledge level, as developed here. He takes pains to lay out an intentional stance, and to relate it to what he calls the subpersonal stance. His notion of stance is a system level, but ascribed entirely to the observer, ie, to he who takes the stance. The subpersonal stance corresponds to the symbolic or programming level, being illustrated repeatedly by Dennett with the gross flow diagrams of AI programs. The intentional stance corresponds to the knowledge level. In particular, Dennett takes the important step of jettisoning the major result-cum-assumption of the original doctrine, to wit, that the intentional is unalterably separated from the physical (ie, subpersonal).

Dennett’s formulation differs in many details from thepresent one It does not mention knowledge at all, but focuses

on intentions It does not provide (in the papers I have seen)technical analysis of intentions-one understands this class osystems more by intent than by characterization. It does nodeal with the details of the reduction. It does not, as noteassign reality to the different system levels, but keeps them the eye of the reduction. Withal, there is little doubt that boDennett and myself are reaching for the characterizat ion exactly the same class of systems. In particular, the role rationality is central to both, and in the same way. A detaileexamination of the relation of Dennett’s theory with present one is in order, though it cannot be accomplishedhere.

However, it should at least be noted that the knowledge level does not itself explain the notion of aboutness; rather, it assumes it. The explanation occurs partly at the symbol level, in terms of what it means for symbols to designate external situations, and partly at lower levels, in terms of the mechanisms that permit designation to actually occur (Newell, 1980b).

A Generative Space of Rational Systems

My talk on physical symbol systems to the La Jolla Cognitive Science Conference last year (Newell, 1980b) employed a frame story that decomposed the attempt to understand the nature of mind into a large number of constraints - universal flexibility of response, use of symbols, use of language, development, real time response, and so on. The importance of physical symbol systems was underscored by its being a single class of systems that embodied two distinct constraints, universality of functional response and symbolic behavior, and was intimately tied to a third, goal-directed behavior, as indicated by the experience in AI.

An additional indicator that AI is on the right track to understanding mind came from the notion of a generative class of systems, in analogy with the use of the term in generate and test. Designing a system is a problem precisely because there is in general no way simply to generate a system with specified properties. Always the designer must back off to some class of encompassing systems that can be generated, and then test (intelligently) whether generated candidate systems have desirable properties. The game, as we all know, is to embody as many constraints as possible in the generator, leaving as few as possible to be dealt with by testing.

Now, the remarkable property about universal, symbolic systems is that they are generative in this sense. We have fashioned a technology that lets us take for granted that whatever system we construct will be universal and will have full symbolic capability. Anyone who works in Lisp, or other similar systems, gets these constraints satisfied automatically. Effort is then devoted to contriving designs to satisfy additional constraints - real time, or learning, or whatnot.

For most constraints we do not have generative classes of systems - for real-time, for development, for goal-directedness, etc. There is no way to explore spaces of systems that automatically satisfy these constraints, looking for instances with additional important properties. An interesting question is whether the present theory offers some hope of building a generative class of rational goal-directed systems.


It would perforce also need to be universal and symbolic, but that can be taken for granted.

It seems to me possible to glimpse what such a class might be like, though the idea is fairly speculative. First, implicit in all that has gone before, the class of rational goal-directed systems is the class of systems that has a knowledge level. Second, though systems only approximate a knowledge level description, they are rational systems precisely to the extent that they do. Thus, the design form for all intelligent systems is in terms of the body of knowledge that they contain and the approximation they provide to being a system describable at the knowledge level. If the technology of symbol systems can be developed in this factored form, then it may be possible to remain always within the domain of rational systems, while exploring variants that meet the additional constraints of real time, learnability, and so forth.

Conclusion

I have presented a theory of the nature of knowledge and representation. Knowledge is the medium of a systems level that resides immediately above the symbol level. A representation is the structure at the symbol level that realizes knowledge, ie, it is the reduction of knowledge to the next lower computer systems level. The nature of the approximation is such that the representation at the symbol level can be seen as knowledge plus the access structure to that knowledge.


This new level fits into the existing concept of computer systems levels. However, it has several surprising features: (1) a complete absence of structure, as characterized at the configuration level; (2) no specification of processing mechanisms, only a global principle to be satisfied by the system behavior; (3) a radical degree of approximation that does not guarantee a deterministic machine; and (4) a medium that is not realized in physical space in a passive way, but only in an active process of selection.

Both little and much flow from this theory. This notion of knowledge and representation corresponds to how we in AI already use these terms in our (evolving) everyday scientific practice. Also, it is a refinement of some fundamental features of adaptive systems that have been well articulated and it has nothing that is incompatible with foundation work in logic. To this extent not much change will occur, especially in the short run. We already have assimilated these notions and use them instinctively. In this respect, my role in this paper is merely that of reporter.

However, as I emphasized at the beginning, I take this close association with current practice as a source of strength, not an indication that the theory is not worthwhile because it is not novel enough. Observing our own practice - that is, seeing what the computer implicitly tells us about the nature of intelligence as we struggle to synthesize intelligent systems - is a fundamental source of scientific knowledge for us. It must be used wisely and with acumen, but no other source of knowledge comes close to it in value.

Making the theory explicit will have many consequences. I have tried to point out some of these, ranging from fundamental issues to how some of my colleagues should do their business. Reiteration of that entire list would take too long. Let me just emphasize those that seem most important, in my current view.

• Knowledge is that which makes the principle of rationality work as a law of behavior. Thus knowledge and rationality are intimately tied together.

• Splitting what was a single level (symbol) into two (knowledge plus symbol) has immense long-term implications for the development of AI. It permits each of the separate aspects to be adequately developed technically.

• Knowledge is not representable by a structure at the symbol level. It requires both structures and processes. Knowledge remains forever abstract and can never be actually in hand.

• Knowledge is a radical approximation, failing on many occasions to be an adequate model of an agent. It must be coupled with some symbol level representation to make a viable view.

• Logic is fundamentally a tool for analysis at the knowledge level. Logical formalisms with theorem proving can certainly be used as a representation in an intelligent agent, but that is an entirely separate issue (though one we already know much about, thanks to the investigations in AI and mechanical mathematics over the last fifteen years).

• The separate knowledge level may lead to constructing a generative class of rational systems, although this is still mostly hope.

As stated at the beginning, I have no illusions that yet one more view on the nature of knowledge and representation will serve to quiet the cacophony revealed by the noble surveying efforts of Brachman and Smith. Indeed, amid the din, it may not even be possible to hear another note being played. However, I know of no other way to proceed. Of greater concern is how to determine whether this theory of knowledge is correct in its essentials, how to find the bugs in it, how to shake them out, and how to turn it to technical use.

References

Bell, C. G &Newell, A.Computer St~uctures~ Readings anExampler New York McGraw-Hill 1971

Bell, C. G , Grason, J & Newell, A Designing Computersand Digital System s using PDP16 Register Trarwfer dules Maynard, MA: Digital Press 1972.


Berliner, H. J. Backgamm on computer program beats worldchamp ion Artificial Intelligence , 1980, 14, 205-220

Bobrow, D. A panel on knowledge representation InProceedings qf the Ftfth International Joint Conference onAlttficial Jntelligence Available from: Pittsburgh, PA:Computer Science Department, Carnegie-Mellon University,1977

Bobrow, D. G. (Ed.). Special Issue on Non-Monotonic Logic. Artificial Intelligence, 1980, 13, 1-174.

Bobrow, D. & Raphael, B. New programming languages for AI research. Computing Surveys, 1974, 6, 153-174.

Brachman, R. J. & Smith, B. C. Special Issue on Knowledge Representation. SIGART Newsletter, February 1980, (70), 1-138.

Brentano, F. Psychology from an Empirical Standpoint. Leipzig: Duncker & Humblot, 1874. (New York: Humanities Press, 1973.)

Chomsky, N. Knowledge of language. In Gunderson, K. (Ed.), Language, Mind and Knowledge. Minneapolis, MN: University of Minnesota Press, 1975.

Dennett, D. C. Brainstorms: Philosophical Essays on Mind and Psychology. Montgomery, VT: Bradford Books, 1978.

Erman, L. D., Hayes-Roth, F., Lesser, V. R. & Reddy, D. R. The Hearsay-II speech-understanding system: Integrating knowledge to resolve uncertainty. Computing Surveys, June 1980, 12(2), 213-253.

Hayes, P. In defence of logic. In Proceedings of the Fifth International Joint Conference on Artificial Intelligence. Available from: Pittsburgh, PA: Computer Science Department, Carnegie-Mellon University, 1977.

Hintikka, J. Knowledge and Belief. Ithaca, NY: Cornell University Press, 1962.

Konolige, K. A First-Order Formalization of Knowledge and Action for a Multiagent Planning System. Technical Note 232, SRI International, Dec. 1980.

Loveland, D. W. Automated Theorem Proving: A Logical Basis. Amsterdam: North-Holland, 1978.

Luce, R. D. & Raiffa, H. Games and Decisions. New York: Wiley, 1957.

McCarthy, J. Predicate calculus. In Proceedings of the Fifth International Joint Conference on Artificial Intelligence. Available from: Pittsburgh, PA: Computer Science Department, Carnegie-Mellon University, 1977.

McCarthy, J. Ascribing mental qualities to machines. In Ringle, M. (Ed.), Philosophical Perspectives in Artificial Intelligence. Harvester Press, 1979.

McCarthy, J. First order theories of individual concepts and propositions. In Michie, D. (Ed.), Machine Intelligence. Edinburgh: Edinburgh University Press, 1979.

McCarthy, J. & Hayes, P. J. Some philosophical problems from the standpoint of artificial intelligence. In Meltzer, B. & Michie, D. (Eds.), Machine Intelligence 4. Edinburgh: Edinburgh University Press, 1969.

Mead, G. H. Mind, Self and Society from the Standpoint of a Social Behaviorist. Chicago: University of Chicago Press, 1934.

Minsky, M. Plain talk about neurodevelopmental epistemology. In Proceedings of the Fifth International Joint Conference on Artificial Intelligence. Available from: Pittsburgh, PA: Computer Science Department, Carnegie-Mellon University, 1977.

Moore, R. C. Reasoning about Knowledge and Action. Technical Note 191, SRI International, Oct. 1980.

Newell, A. Limitations of the current stock of ideas about problem solving. In Kent, A. & Taulbee, O. (Eds.), Conference on Electronic Information Handling. Washington, DC: Spartan, 1965.

Newell, A. AAAI President's Message. AI Magazine, 1980, 1, 1-4.

Newell, A. Physical symbol systems. Cognitive Science, 1980, 4, 135-183.

Newell, A. Review of Nils Nilsson, Principles of Artificial Intelligence. Contemporary Psychology, 1981, 26, 50.

Newell, A. & Simon, H. A. Computer science as empirical inquiry: Symbols and search. Communications of the ACM, 1976, 19(3), 113-126.

Nilsson, N. Principles of Artificial Intelligence. Palo Alto, CA: Tioga, 1980.

Piaget, J. The Language and Thought of the Child. Atlantic Highlands, NJ: Humanities Press, 1923.

Robinson, J. A. A machine-oriented logic based on the resolution principle. Journal of the ACM, 1965, 12, 23-41.

Schank, R. C. Conceptual Information Processing. Amsterdam: North-Holland, 1975.

(continued on page 33)

(P-QR4) is a thing of beauty, getting rid of a weak pawn and forcing the issue on the Q-side, whereas the more obvious QR-Q1, 17. Q-K2 leaves Black with a less dynamic position.

(U) It would have been better to play 17. RxP, PxP, 18. PxP, BxNP, when Black has clearly the better of it, but White does not appreciate that he is getting in deeper and deeper.

(V) White probably counted on only 18. -- BxRP, 19. RxP, when he should be able to defend himself. The text threatens R-Q7 with the double threat of RxB and QxPch with mate next move. To defend against this by 19. QR-Q1 is not pleasant, as BxRP threatens B-KN5, which would force White to give up control of the Q-file and permit new intrusions into the White position.

(W) Again White had probably counted on 19. -- BxRP, when 20. P-QB4, BxP, 21. BxP gives White chances.

Black's actual move is his best of the game. B-B5 is positionally desirable, as the B exercises more control from this point than anywhere else on the board, and thus cramps White's whole position. However, to forego winning a pawn by BxRP is something that a program is unlikely to do unless it can see more elsewhere. This was, in fact, the case (see next note).

(W) Commenting on the game in front of the audience, I suddenly realized that Black would never have allowed this position when it could have won a P by 19. -- BxRP. There had to be a reason why White could not get away with this. Armed with this information, it was easy to point out that now Black's next move (B-N4) was coming and wins everything. Later we learned that CHESS 4.9 thought the main line was 21. R/2-B1, P-K5, when it considered itself to be .7 pawns ahead. Agreed; except that the positional advantage is much greater, since White cannot rally his pieces to the defense of the king, which is sitting there waiting to be attacked by moves such as Q-R3, followed by P-KB4-B5. It would be too much to ask 4.9 to understand all that; its judgment was fine indeed.

(X) Q (or B) xB is insufficient, as RxR threatens another piece and also mate. Black now makes short shrift of White.

(Y) White is trying to set up a defense against the advancing Q-side pawns along the diagonal QR6-KB1. However, this move, which threatens the B and also Q-K8ch winning the rook, forces the White pieces into cramped positions where they cannot achieve this goal.

(Z) This is directed against possible back rank mates. Note that 4.9 correctly moves the pawn that frees a square that cannot be covered by the opposing B (I am not sure whether it did this by accident or not).

(AA) The final combination begins. Very pretty. We had it all figured out in the demonstration room and wondered if 4.9 would be able to see the whole thing. Apparently it did. White's moves are all forced now.

(BB) At this point, the communication operator to the closed room where the game was being played told the audience that Benjamin had just said that "he had never played a game with 3 queens before" (apparently believing that 34. -- P-Q7, 35. P-R7, P-Q8(Q), 36. P-R8(Q)ch was going to happen). I commented that "he wasn't going to have that experience tonight either."

(CC) Now he sees what CHESS 4.9 saw at move 33. 36. KxQ, P-Q8(Q)ch, 37. K-N2, Q-Q4ch wins the rook and also prevents the pawn from queening. ■

The Knowledge Level (continued from page 20)

References (continued)

Schank, R. & Abelson, R. Scripts, Plans, Goals and Understanding. Hillsdale, NJ: Lawrence Erlbaum, 1977.

Shortliffe, E. H. Computer-Based Medical Consultations: MYCIN. New York: American Elsevier, 1976.

Simon, H. A. Administrative Behavior. New York: Macmillan, 1947.

Simon, H. A. Cognitive science: The newest science of the artificial. Cognitive Science, 1980, 4, 33-46.

Stockton, F. R. The Lady or the Tiger? In A Chosen Few: Short Stories. New York: Charles Scribner's Sons, 1895.

von Neumann, J. & Morgenstern, O. The Theory of Games and Economic Behavior. Princeton, NJ: Princeton University Press, 1947.
