Being there: why implementation matters to cognitive science


Artificial Intelligence Review (1987) 1, 231-244


Andy Clark, Cognitive Studies Research Programme, University of Sussex, Falmer, Brighton, United Kingdom

Abstract: It is widely believed that the mind can be studied in isolation from the details of its physical embodiment and environmental surroundings. This is a form of residual Cartesianism which cognitive science can ill afford. Arguments are presented to show that cognitive powers should not be treated in isolation from the motor and object manipulation skills conferred upon us by our physical bodies. A new model is needed in which inner computational processes are seen to co-operate with external (physical and social) structures to produce the phenomena of natural cognition. The scope of investigations required by such a model is examined, and suggestions made concerning its practical implications for workers in the field.

"Well, what do you think you understand with? With your head? Bah!"

(Kazantzakis, 1959).

Cognitive science is heir to an unfortunate tradition. This tradition, firmly rooted in the philosophical doctrine of 'machine functionalism', asserts the independence of the study of mind from the details of its physical realization, or implementation, in a human body. We may call this the thesis of implementation neutrality. It is a thesis which one leading commentator in the field has described as "approaching the status of a dogma" (Boden, 1984). I wish to argue that this dogma has had its day. This is not to deny the well-established implementation-neutrality of particular algorithms; a single algorithm may be rendered in many programming languages and physically realized in many ways by different machines. But where 'cognitive content' is concerned, both the perceptual and motor capacities of the system in which implementation occurs are crucial. Indeed, so tight are the links between physical activity and cognitive content that it is fair to describe the physical and social environment, and our actions upon it, as part and parcel of the cognitive process itself. Cognitive science, if it is at all concerned with shedding light on human cognition, cannot afford to ignore the extended and active aspects of our thought. Implementation-neutrality, used as an excuse to investigate program structures in isolation from their role in an active human life, is thus a recipe for distortion and omission. It is obviously tempting to hope for a circumscribed science of the mind. But politics aside, neatness is no substitute for truth.

The history and methodology of implementation-neutral cognitivism

Implementation-neutral cognitivism (INC) has well-defined (and well-motivated) philosophical roots. INC grew out of the functionalist project which aimed to preserve the insights of identity theory while avoiding the charge of parochialism. The identity theory (Borst, 1970) (mental states are brain processes) was perceived to be an improvement on both the behaviourists' denial of the importance of inner states (Ryle, 1949) and the dualists' mystery-mongering about non-physical matter. The trouble with identity theory is that it makes the tie between a given mental state and a brain state too close. Surely, it is argued (Putnam, 1980), there cannot be any type-type identity between mental states and neurophysiological details. This would mean that no creature who lacks a brain just like ours (even assuming our own brains are suitably similar to one another!) could share our beliefs, or feel pain as we do, etc.

Thus functionalism was born. The functionalist (Putnam, 1980) claims that what is essential to a mental state can be captured by a description of the internal state's abstract functional role. What this amounts to is a joint function of (a) the role of the state in mediating between system input and system output (the 'behaviourist' element) and (b) the role of the state in activating or otherwise affecting other internal states of the system (the 'inner' element). The digital computer is at hand to provide the perfect paradigm of such a functional specification. Mental states are to be viewed as akin to program states. Any machine which takes a certain input, goes through a certain series of internal state-transitions and generates a certain output can be seen to be running the same program in a suitably abstract sense. Just how the machine physically carries out the steps of the algorithm is unimportant. Here, then, we seem to have the ideal solution to the problem of parochialism which faced the identity-theorists. Internal states do matter, but only at a level of description suitably removed from the nitty-gritty of physical implementation. This picture of the mind (sometimes termed 'machine functionalism') (Lycan, 1981) proved to be a very convivial one for cognitive science. Cognitive science is largely an agglomeration around the central idea that the mind can be studied as an 'automatic formal system' using concepts developed in dealing with digital computers (see Pylyshyn, 1984; Haugeland, 1985; Sloman, 1978). To quote one recent commentator:

"The guiding inspiration of a cognitive science is that, at a suitable level of abstraction, a theory of 'natural intelligence' should have the same basic form as the theories that explain sophisticated computer systems."

(Haugeland, 1981)

For our purposes, the crucial point is that high among the theories which 'explain sophisticated computer systems' is the theory of virtual machines. It is this element of computer lore which has, I shall argue, misled cognitive science into the distorting perspective of INC. A virtual machine (Sloman, 1984) is a 'machine' which owes its existence solely to a program which runs (perhaps with other intervening stages) on the real physical machine and causes it to imitate the more complex machine to which we address our instructions. The virtual machine, however, is every bit as real as the functionalist requires it to be. It takes input, runs through sequences of internal state changes (realized by the underlying machine) and generates output. The physical machine does not matter to the (high-level) programmer or to the functionally-inclined philosopher of mind. What counts is the abstract structure of the virtual machine -- a structure which can be realized on any number of different physical machines. INC, therefore, amounts to the claim that we can study the mind as we would a virtual machine, i.e. by attending only to program-level descriptions of cognitive processes and not worrying about the physical realization of the program in a particular body-brain-machine. In short, the prospect on offer is nothing less than a self-contained science of cognitive activity; a study of mind "fully abstracted in principle from both biological and phenomenological foundations" (Pylyshyn, 1974, p. 68).
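
The virtual-machine idea is easy to make concrete. The following fragment (an editorial illustration in Python, not part of the original text) defines an abstract machine purely by its state-transition table and then 'realizes' it on two deliberately different underlying substrates; the high-level behaviour is identical in each case, which is all the program-level description attends to.

```python
# A minimal sketch of the virtual-machine idea: the same abstract machine,
# defined purely by its state-transition table, can be 'realized' on top of
# arbitrarily different underlying substrates.

# The virtual machine: a toggle that emits 'tick'/'tock' on each input.
TRANSITIONS = {
    ("even", "step"): ("odd", "tick"),
    ("odd", "step"): ("even", "tock"),
}

def run_virtual_machine(inputs, apply_transition):
    """Run the abstract machine; how each transition is physically
    carried out is delegated to apply_transition (the 'implementation')."""
    state, outputs = "even", []
    for symbol in inputs:
        state, out = apply_transition(state, symbol)
        outputs.append(out)
    return outputs

# Two different 'physical' realizations of the very same transition.
def substrate_a(state, symbol):          # a direct table lookup
    return TRANSITIONS[(state, symbol)]

def substrate_b(state, symbol):          # a clumsy search over the table
    for (s, sym), result in TRANSITIONS.items():
        if s == state and sym == symbol:
            return result
    raise ValueError("no transition")

# Both realizations implement the same virtual machine:
assert (run_virtual_machine(["step"] * 4, substrate_a)
        == run_virtual_machine(["step"] * 4, substrate_b)
        == ["tick", "tock", "tick", "tock"])
```

Everything the program-level theorist cares about is fixed before a substrate is chosen; that, in miniature, is the licence INC claims for ignoring implementation.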

By entertaining this prospect INC has incorporated many elements of a general class of reductionist heuristics. These heuristics, according to Wimsatt (1980), tend to issue in a systematic distortion of the subject matter to which they are applied. Wimsatt makes his points in the course of a discussion concerning the units of selection controversy in evolutionary theory. But they are intended to be quite general and I think we can see them clearly at work in the INC paradigm just described. Thus he notes that given the reductionist goal of "understanding the behaviour of a system in terms of the interaction of its parts" it is natural to focus on entities and interrelations internal to the system under study and to simplify and standardize environmental factors to facilitate this study of internal structure (see Wimsatt, 1980, p. 159). The negative aspect is that this kind of procedure tends to give rise to a related set of heuristic strategies which both systematically bias the explanations produced and at the same time make that bias harder to detect and prove. Such heuristics include:

1. Descriptive localization -- "Describe a relational property as if it were monadic."
2. Modelling localization -- "Look for an intrasystemic mechanism to explain a systemic property, rather than an intersystemic one."
3. Simplification -- "... in model building, simplify environment before simplifying system."
4. Testing -- "Make sure that a theory works out only locally ... rather than testing it in appropriate natural environments." (from Wimsatt, 1980, p. 160)

The mutual support and disguise afforded by such heuristics are clear enough. Problems are incompletely specified (descriptive localization), thus making incomplete models look acceptable (modelling localization and simplification), a conclusion reinforced by the unnaturally restricted test conditions to which such models are subjected. This, it seems to me, is precisely the pattern exhibited by much of INC-dominated cognitive science. We shall later see many examples of cases where cognitive processes involve complex interdependencies upon action in the world. Yet such processes, with their complicated interplay of internal and environmental variables, are seldom treated by cognitive science. The general strategy, common to all cognitive science, of dividing human intellectual achievements into a series of localized 'micro-worlds' for purposes of study and testing looks like a clear case of heuristics 3 and 4 above in operation. I have argued the case for suspicion of the evolutionary and biological soundness of such a strategy elsewhere (Clark, 1986), and therefore shall not pursue this issue any further. The point is simply that the INC paradigm may be seen as a consequence both of historical forces and general heuristic strategies deployed to make complex problems more tractable. In the case of INC, however, the 'simplification' may be one that nobody seriously interested in natural intelligence can ultimately afford. In the next section I review some reasons for mistrusting INC as a means of understanding the mind.

INC in crisis?

The attempt to model human cognitive contents in an implementation-neutral style has run into a number of serious difficulties. These difficulties are frequently cited, but all too frequently misconstrued. I shall (very briefly) rehearse the major problems and then distinguish the conclusions which they might seem to support from those (rather less spectacular) conclusions which they in fact warrant.

The major problems encountered so far seem to be the following:

I. The problem of background knowledge.
II. The problem of content.
III. The 'allocation of complexity' issue.

I. The problem of background knowledge is an old favourite. It concerns what Dreyfus calls "the organization of world knowledge" (Dreyfus, 1979, p. 203). The point is that our ordinary understanding of some situation (ordering a hamburger seems to be the entrenched example) is virtually inexhaustible. We know to be surprised if a Martian serves us, to run if a terrorist attacks the shop, that the hamburger should endure in space and time until we consume it, and a million other things besides. How on earth can we write a program with all that in it and, worse, how can we say that a system which does not know that hamburgers cannot fly unaided (or any other 'obvious' truth you care to name) knows anything about hamburgers at all? Dreyfus' claim is that there is a fullness (he calls it a "thickness") to our understanding of the world which any point-by-point programmed system is bound to lack.

II. The problem of content involves a continuing philosophical worry to the effect that gross physical activity may be a necessary pre-condition of the identification of the states of any system as contentful ones. The point is made forcibly by Tennant (1985). Intelligent thought, he argues, cannot be a matter of mere "symbol shunting" alone. Rather, it must involve some other activity on the part of the organism. It is impossible, he thinks, to identify any states as contentful, or to pick out their specific contents, unless we are able to "see the programme 'running' in an agent". Tennant would therefore withhold ascriptions of intelligence to a machine in the absence of rich non-verbal behaviour. "Intelligence, rationality, thought and consciousness", he suggests, may all "supervene in an emergent rush on the proper prerequisites: the demands, namely, that the agent perceive and act so as to penetrate our own life-form by being one among us -- sharing our table, so to speak, and not just the main frame." (Tennant, 1985, p. 73.)

I suspect that underlying this objection are ideas about radical translation (how to determine the meanings of utterances of alien speakers) and the need to link speech and situation via a principle of charity which assumes that most of a being's utterances express truths about its actual environment. The problem, in short, is how could a radical interpreter assign meaning to the output of an immobile computer system? Nor is the problem solved if the output happens (seems) to be in English. The question is, do the words as used by the system mean anything at all, and if so, what? If complex 'non-verbal' activity is needed to fix meaning, then the system's output, in itself, must remain strictly meaningless. Representation and cognitive content, according to this objection, depend on activity; an immobile system understands nothing at all.

III. The last area of concern involves what Pylyshyn (1974, p. 68) calls the "allocation of complexity". An ant walking on rugged terrain is seen to follow a complex path. But its behaviour need not be the result of a complex program. Rather, it may have a simple program which, in response to a complex environment, yields complex behaviour. Gibson and his followers have accused cognitive science of allocating far too much complexity to the software program and too little to the environment. It has recently been suggested by Don Norman (in a lecture at Sussex University) that most human daily problem-solving involves only very shallow computational operations on our part; operations which prove effective thanks to the structure of the environment to which we apply them. Thus, for example, there may be only a few options open to us when we seek to open a door (push, pull, slide?).
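
By way of illustration (my sketch, not anything in the original text or in Norman's lecture), here is the ant case in miniature: a trivial movement rule, applied to randomly rugged terrain, yields winding paths whose complexity belongs entirely to the environment.

```python
import random

random.seed(1)
WIDTH, HEIGHT = 30, 10
# A complex environment: randomly scattered rocks.
rocks = {(x, y) for x in range(WIDTH) for y in range(HEIGHT)
         if random.random() < 0.3}
rocks -= {(0, y) for y in range(HEIGHT)}   # keep the starting column clear

def walk(start_y):
    """The whole 'program': prefer moving east; sidestep around rocks;
    never revisit a cell (so the walk always halts)."""
    x, y = 0, start_y
    path, visited = [(x, y)], {(x, y)}
    while x < WIDTH - 1:
        candidates = [(x + 1, y), (x, y - 1), (x, y + 1)]
        open_cells = [c for c in candidates
                      if c not in rocks and c not in visited
                      and 0 <= c[1] < HEIGHT]
        if not open_cells:
            break                       # boxed in; give up
        x, y = open_cells[0]            # fixed preference: east, then sideways
        path.append((x, y))
        visited.add((x, y))
    return path

# One trivial rule, three quite different complex paths:
for start in (2, 5, 8):
    print(walk(start))
```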

What conclusions can we reasonably draw from such considerations? One radical set of conclusions (R*, one for each of the above arguments) would be the following:

R*1. No system which is not fully situated (i.e. moving, acting and responding) in the world can really understand real-world events and concepts (e.g. 'ordering a hamburger').

R*2. Nor -- given the intimacy of the link between interpretation and non-verbal activity -- can we ascribe to an immobile system any understanding of anything whatsoever (not even of events in a restricted, hallucinated micro-world).

R*3. Moreover, a great deal of our own so-called 'cognitive' activity is not, after all, a matter of manipulating internal (potentially implementation-neutral) representations. Rather, we respond and act appropriately because the actual environment is richly structured and we are able to exploit that structure to augment (or even replace) our internal symbol manipulations. Much of what we normally regard as our cognitive character is thus a feature not of the functional organization of the mind but of the physical organization of the world.


It is not clear that conclusions such as these are properly warranted by the corresponding arguments. Argument I points to the depth of our ordinary understanding and suggests that we achieve this depth by being embodied and situated in the world. As a comment about the developmental nature of human understanding this seems reasonable enough. The question remains: what does that ongoing engagement do for us which is not, in principle at least, capable of being copied into an immobile system? Dreyfus at one point suggests that we recognize chairs in part by relying on our bodily and cultural intuitions about what may be sat on (Dreyfus, 1979, pp. 184-185). But whatever we have learnt by such means, the resultant knowledge seems to be available in principle without the bodily or cultural experience. After all, we cope with chairs in our dreams, so we have some idea of a chair which is independent of actual current physical relations to chairs. Could we give this knowledge (whatever it is) to an immobile machine?

Argument I thus seems to warrant only the weaker conclusion: W*(i) Human understanding of real-world events and concepts arises out of and is sustained by our actual engagement (bodily and cultural) with the world.

Argument II involves a somewhat suspect inference to the effect that our (alleged) failure to ascribe specific mental states to a system is fatal to the claim that such states could be had by the system nevertheless. This worry aside, it is still not clear why verbal behaviour should not, in principle, be sufficient to motivate a scheme of interpretation which ascribes original semantic content to system outputs. Suppose a person, paralysed from birth, began to signal (via small eye movements) in a consistently interpretable morse code. Would we not treat that person as an intentional system with beliefs and desires? I believe we would. In so doing, we commit ourselves, at most, to the claim that if he could move and act then his movements and actions would be largely compatible with the system of beliefs we have assigned to him. If he were subsequently cured and yet his activity made no sense given our interpretations of his (putative) beliefs and desires, then we would have to think again. We might question the alleged 'cure' or even revise our interpretation of the verbal output. All this shows is: W*(ii) Our interpretations of verbal output are answerable to any observations of non-verbal behaviour which we may be in a position to make.

Argument III and the associated conclusion R*3 seem acceptable. Evolution is unlikely to favour detailed internal representations where a cheaper solution (use the external world) is available instead. This means that cognitive science could, by mistake, generate internal representational models of some ability which is in fact heavily dependent on the exploitation of the structure of the real world (see the jigsaw puzzle example in the section on 'Prospects for an engaged cognitivism'). This need not imply that a standard cognitivist story can never be told (e.g. for mental arithmetic) nor that environment-exploiting strategies will not have important components which involve the manipulation of internal representations. The best way to express the real meaning of argument III is therefore: W*(iii) Some human cognitive achievements (e.g. creating a sculpture, planning a garden, solving a jigsaw puzzle) may involve both the manipulation of internal representations and the manipulation of real objects. The object manipulations may yield states of the world required as inputs to later stages of the cognitive process. Therefore some computations may essentially involve the attempted manipulation of a real or hallucinated environment.

W*(i)-(iii), though undoubtedly weaker than their R* counterparts, still raise serious difficulties for INC as a practical research strategy in cognitive science. The move from R* to W*, i.e. from radical to weak conclusions, concedes that, in principle, the cognitive (internal, representational) components of human thought could be modelled in an immobile system. But we have also stressed (and more arguments are adduced below) that in practice our thoughts are very heavily influenced by our engagement in the world, both (a) developmentally, because we build up a rich repertoire of knowledge by actively investigating our world, and (b) occurrently, because we often use the world as a cheap and convenient 'model' of itself for data storage, planning and other kinds of problem solving. (There is also a weak constitutive relation insofar as ascriptions of mental content seem to be answerable to physical behaviour, though they do not necessarily depend on it.)

Given (a) and (b), and given that we do not have the expertise to simulate the full richness and responsiveness of the real world, it may be that the practical strategy for cognitive science is to push its systems out into the world, equipped to exploit its riches as best they can. We would not expect to teach someone to swim without letting them find out about the hydrodynamics of water at first hand. The only alternative would be a very sophisticated simulation of a watery environment (like a trainee fighter pilot's simulated air experience). If this were practical, it would be perfectly acceptable. But it may well be that the real world is too richly structured to model in the detail required to support the illusion of active engagement. (To paraphrase Luria, if the world were so simple that we could simulate it, we would be so simple that we could not write the program!)

In sum, the practical goal of inducing a machine to understand may best be served by giving it the advantages which accrue to us by virtue of our embodiment and embeddedness. We are embodied, therefore we can exploit real-world structures to reduce our cognitive load. We are embedded, therefore we can base our ideas on a history of interactions with the richest and most responsive database we know -- the external world. Insofar as INC seeks to model cognitive contents without trying to recapitulate these advantages, it sets itself a task which is as pragmatically Herculean as it is theoretically ill-motivated.

Prospects for an engaged cognitivism

The key cognitivist claim, according to Haugeland (1978), is that the mind is an information processing system and that

"intelligent behavior is to be explained by appeal to internal 'cognitive processes' - - meaning, essentially, processes interpretable as working out a rationale."

(Haugeland, 1978, pp. 260-261.)

A more enlightened cognitivism may still abide by the spirit of this definition. Suppose we drop the requirement that the processes to be interpreted as working out the rationale be all and only internal ones, and accept that some of the 'working-out' may involve non-cognitive operations. This amounts to abandoning the reliance on heuristics 1-4 cited in the section 'INC in crisis?' above. A good example of the kind of process I have in mind is the solution to a jigsaw puzzle. This activity combines purely internal cogitations (e.g. "this section has half a bird on it so I need a piece with a wing and a piece with a foot") with physical operations on a real object (the actual puzzle). These physical operations are essential to our problem-solving activity; we seldom represent the shape of a piece to ourselves well enough to know for sure that it will fit in advance of trying it in a plausible location. The physical operation of trying the piece out may result in a fit, in which case the ordinary ('non-extended') cognitive process may again commence ('what will the next piece have to look like given the shape and pictorial content of this piece?'). Solving a jigsaw puzzle (or, at least, solving human jigsaw puzzles) is not to be explained purely by appeal to a set of internal processes interpreted as working out the steps of the solution. Rather, the internal processes must be such that they tie in with real operations on the world to test hypotheses and to generate new states of information. Imagine trying to devise a model of human jigsaw-puzzle-solving capacities which took no account of our ability to manipulate the real puzzle! Any such model would achieve its goal only by constructing a complete internal representation of the shape of each piece. This may do the job, but it would hardly be a model of human cognition in the domain. It would instead be an example of what Dennett (1984) calls a "cognitive wheel"; an elegant but unnatural solution to a problem of natural design.
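
The structure of such an extended process can be sketched schematically. In the toy Python model below (an editorial illustration; the World class and its try_piece operation are invented stand-ins), the solver's inner representation of shape is too coarse to settle fit in advance, so each cycle interleaves an internal proposal with a physical try-it-and-see operation whose outcome feeds the next internal step.

```python
import random

random.seed(0)

class World:
    """Stands in for the real puzzle: only the world 'knows' exact shapes."""
    def __init__(self, n_pieces):
        self.solution = list(range(n_pieces))   # true piece for each slot
    def try_piece(self, piece, slot):
        """The physical operation: offer the piece up to the slot."""
        return self.solution[slot] == piece     # it either fits or it doesn't

def solve(world, pieces):
    placed = {}
    unplaced = set(pieces)
    for slot in range(len(pieces)):
        # Internal step: a coarse hypothesis ranking (here just a guessed
        # order; a person would use picture content, rough shape, etc.).
        hypotheses = sorted(unplaced, key=lambda p: random.random())
        # External step: the world, not the inner model, delivers the
        # verdict, and each failed try informs the next cycle.
        for piece in hypotheses:
            if world.try_piece(piece, slot):
                placed[slot] = piece
                unplaced.remove(piece)
                break
    return placed

world = World(6)
print(solve(world, list(range(6))))   # completed by inner guesses + outer tests
```

Delete the try_piece call and the solver must instead carry a complete internal model of every shape -- exactly the 'cognitive wheel' described above.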

It is, on reflection, amazing that INC could have exerted such a hold on cognitive science as to allow us even to contemplate treating human cognition in isolation from the motor and manipulative capacities of the human body. It seems almost trivially true to say that beings who evolved to cognize as an aid to survival will exhibit extensive interdependencies between cognitive content and activity. A simplistic model of cognition as preceding and independent of action is quite absurd in the case of natural intelligence, though the precise nature and extent of such inter-dependencies is far from completely understood. Certainly, the practical importance of activity is not confined, for example, to the generation of a rich, multi-modal set of inputs. It is not just the range of inputs which is cognitively significant, but also how they are produced. This is evident in Held and Hein's well-known, though somewhat controversial, work (1963) on the role of self-directed activity in developmental processes. They showed that self-directed activity has cognitive consequences which movement alone does not. Very briefly, two kittens were placed in an artificial environment in which one kitten was allowed free self-directed movement. The other kitten was moved around in precisely the same way as the first, except that its movement was passively determined by the movement of the self-directed kitten. Under these contrived circumstances, it was found that the self-directed kitten appeared to gain an appreciably greater understanding of its environment than did its passively moving counterpart. Held and Hein concluded that there must be some developmental process in operation which depends crucially on "stimulus variation concurrent with and systematically dependent upon self-produced movement" (Held & Hein, 1963, p. 186). Similar processes seem to operate in human beings. It is said, for example, that people can only learn to see underwater if they are able to move themselves around in the new environment. Likewise, it is argued (Hacking, 1983, pp. 186-209) that it is only by practical interference (poking, moving, prodding) with the parts of the microscopic 'scene' that an aspiring biologist can learn to see through a microscope. I conjecture that in all these cases the role of the self-directed activity is somehow to compare visual inputs with respect to a finely-tuned proprioceptive awareness of our own self-motivated movements in order to generate a set of heuristics. Such heuristics make the task of scene-recognition tractable by ruling out many logically possible but physically unlikely interpretations of subsequent sensory data. Whether this is right or not, the point we need to notice is that there may be many different ways in which activity and cognitive content are bound together in naturally evolved intelligences. For example, it may often be desirable to reduce internal processing loads by engaging in active investigations of the world. Interpretative tasks whose rapid performance is vital to survival may be quite intractable by other methods. Natural selection would thus favour cognizers who obtained fast results by incorporating physical shortcuts into their cognitive development programs. In the case of natural intelligence, action and cognition may need to be treated as inextricably inter-related parts of complex systems which understand their environment. Perhaps what cognitive science ought to claim to study, in Haugeland's terms, is internal processing systems which, in action in a broader setting, can be seen to work out a rationale (e.g. solve a problem by carrying out a series of steps). This is what I mean by an engaged cognitivism.

The notion of an engaged cognitivism is by no means an original one. Hints of such a position are to be found in Cummins (1982), Boden (1983) and Pylyshyn (1974). Sinha (1985) is quite explicit, and notes that one effect of recognizing the 'social-ecological matrix' of human cognition is to erode any sharp competence-performance distinction and to undermine the neo-Cartesian idea of a pure cognitive subject. Probably the clearest statement of this type of idea, however, is given by Rutkowska (1984). Rutkowska is concerned with the question "How should we conceptualize the abilities of young infants?". Her aim is to preserve the insights of the ecological psychologist's emphasis on action and environmental sources of complexity without subscribing to their out-and-out rejection of all computational-cognitivist styles of explanation. To this end she suggests an extended notion of computation in which:

"The notion of computation as rule-governed structure manipulation must be taken to include environmental as well as intra-subject structures."

Rutkowska is also led (as is inevitable once we allow manipulation of external structures into our picture of cognitive process) to criticize the idea of a completely implementation-independent study of natural cognition. She notes that:

"Physical structure - - including not only neurophysiology but also body form - - will limit the processes which the program level can control."


and argues that this fact is especially important in understanding the development of infant cognition. Rutkowska's brand of 'engaged cognitivism' is thus dominated by the type of extended computational process we described earlier in our example of the jigsaw puzzle, i.e. by the extension of possibilities provided by our body's ability to manipulate external structures. Engaged cognitivism, however, need not restrict itself to that extension of our cognitive powers provided by the body's embedding in a physical environment. It can also treat the extension of cognitive power provided by our location and participation in a social environment. This indeed may prove to be the single most important and most neglected aspect of 'situate' intelligence. The issue divides into two main areas. First, there is the matter of how we acquire social knowledge; the knowledge which enables us to represent others as being motivated by beliefs and desires. The acquisition of such knowledge must surely depend on the form of our interactions with others. There is some evidence to suggest that, at the level of internal cognitive processing, the acquisition of social knowledge depends on specially adapted cognitive sub-systems. Such sub-systems may be 'designed' (by evolution) to be especially adept at meta-representation, i.e. at representing the representational system of other agents. (It has been suggested (Baron-Cohen et al., 1985) that childhood autism is a dysfunction of this dedicated mechanism.) Secondly, and most interesting from our present point of view, there is the question of the social acquisition of knowledge. Knowledge acquired socially need not be just social knowledge. Indeed, it is extremely difficult, if not impossible, to draw a firm boundary between social knowledge and knowledge of the rest of the physical world. As one current writer on social cognitive development puts it:

"Categories of the world - - whether social or physical - - are not derived by the child in social isolation, but are worked out in the course of innumerable social exchanges . . . (in wh ich ) . . , the child's attention is drawn to particular aspects of subject-object interactions that have special social and cultural meaning (El Konen, 1972). In this manner, the child's cognitive development is continually guided by the social context in which all knowledge is presented and created."

(Damon, 1981, p. 163.)

Perhaps it is the social dimension of our acquisition of ordinary knowledge which is responsible for what Dreyfus termed the 'thickness' of our presence in the world. If so, cognitive content and its social origins cannot be studied in isolation from one another. Cognitive science will need to learn to model cognitive processes which feed off a complex co-ordination with other processes running in other molar agents. The difference between this and the 'normal' picture of the mind as a collection of processes merely waiting to feed off a set of raw inputs is as great as that between symbiotic and predator-prey relationships between organisms.

A related speculation would be that some aspects of human cognition are best understood as emergent properties of groups of molar agents. According to this hypothesis, some regularities in the behaviour of molar agents should be explained not by appeal to the individual's cognitive capacities and contents but by appeal to properties possessed by the individual only qua member of a social group. This is a highly involved and often confused issue which I shall not attempt to address here (but see, for example, the papers of Mandelbaum & Lukes in Ryan, 1973). I note merely that cognitive science has the beginnings of some new conceptual apparatus which may help address such issues. I have in mind the recent connectionist work of Rumelhart and McClelland (1986), in which groups of individual units, acting together in a network linked by excitatory and inhibitory connections, succeed in 'working out a rationale' (e.g. simultaneously satisfying a number of inter-related constraints on the interpretation of some visual information) without that rationale being explicitly coded as a discrete symbolic representation anywhere in the system. The subject is complex and I cannot do it justice here (see McClelland & Rumelhart, 1985; Rumelhart & McClelland, 1986; Draper, 1986; Feldman & Ballard, 1982; Hinton, 1984; Hinton & Anderson, 1981). But the idea is that the phenomenon which the connectionists study in micro is visible in macro in whole groups of human agents. (There are hints of such a line of thought in McClelland and Rumelhart's conjectures about a relation between the process of regularization in distributed models of memory and patterns of regularization in the language of a whole human community (McClelland & Rumelhart, 1985, pp. 185-186).)
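
A toy version of such constraint satisfaction (my illustration, loosely in the style of the relaxation models cited above, not Rumelhart and McClelland's own code) shows the point: units linked by excitatory and inhibitory weights settle into a mutually consistent interpretation without any explicit symbolic statement of the rationale.

```python
import random

# Units stand for rival interpretations of some visual input; inhibitory
# links connect incompatible readings, excitatory links mutually
# supporting ones. Repeated local updates relax the network into a
# globally consistent state, yet the 'rationale' is nowhere coded as a
# discrete symbolic rule.

random.seed(42)

units = ["edge-left", "edge-right", "object-A", "object-B"]
# Symmetric weights: positive = mutual support, negative = incompatibility.
weights = {
    ("edge-left", "object-A"): +1.0,   # a left edge supports reading A
    ("edge-right", "object-B"): +1.0,  # a right edge supports reading B
    ("object-A", "object-B"): -2.0,    # the two readings exclude each other
}

def w(a, b):
    return weights.get((a, b)) or weights.get((b, a)) or 0.0

def settle(evidence, steps=100):
    """Start from perceptual evidence, then let the units relax."""
    state = {u: evidence.get(u, 0) for u in units}
    for _ in range(steps):
        u = random.choice(units)                 # asynchronous update
        net_input = sum(w(u, v) * state[v] for v in units if v != u)
        net_input += evidence.get(u, 0)          # clamp in the external input
        state[u] = 1 if net_input > 0 else 0     # simple threshold unit
    return state

# Weak evidence for a left edge is enough to tip the whole network
# into the consistent interpretation {edge-left, object-A}.
print(settle({"edge-left": 1}))
```

On the speculation above, whole human agents would play the role these individual units play here.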

I have tried to present a brief picture of the kinds of phenomena which might be studied by an engaged cognitivism, i.e. one freed from the distorting heuristics of INC. Such a cognitivism would study the practical interdependencies of thought and action, with regard to both the physical and the social environment. It need not commit itself to the more radical claim that immobile, non-socially situate systems could not have mental states. It might reasonably be objected, however, that the practical interpenetration of action and thought in human cognition does not imply that it is practical to try to model cognition in an interpenetrating 'engaged' way. That is to say, there may be a real methodological worry to the effect that the scope of the new enquiry would be too wide, and that the heuristics of INC, though acknowledged to be distorting, are forced upon us as a matter of practical necessity. We may conclude by making a few brief comments concerning this worry.

Practical implications

If engaged cognitivism is to be anything more than a philosophical fantasy then some strategic decomposition and simplification of the puzzles of cognition must be found. To live up to its own standards it must resist the unrealistic simplification of the environment and the obsession with internal processes characteristic of INC. There seem to be three main ways to satisfy both these goals at once:

(i) concentrate on robotics,
(ii) create rich virtual environments in which to run AI programs*,
(iii) study internal mechanisms with an extended context always in mind.

* Thanks are due to A. Sloman for bringing this option to my attention.

The easy option is, of course, (iii). Following (iii) the cognitive scientist keeps abreast of developmental and social psychological theorizing, and devises mechanisms which (it is hoped) generate the right outputs if they are embodied in an appropriately endowed system situated in an appropriate physical and social context. This would, in a sense, be to do 'cognitive science without representations', insofar as the mechanisms could not normally be implemented and run and hence would fail ever to instantiate the cognitive contents they aim to explain. The trouble with (iii) is that the criteria of success are consequently vague. Ordinarily, it is the great advantage of computer modelling that a straightforward sufficiency test is available for any proposed model: all we have to do is run the program and see if it gives the desired result. Option (iii) deprives us of this enviable editorship because it makes the desired output depend on an implementation that we do not in fact possess.

Option (ii) tries to overcome this in an allegedly economical way. Instead of actually implementing the program structure in a robot in a physical-social setting, we create a virtual environment within the computer so that it appears that the program drives a robot body in the real world. The problems here, as hinted earlier, are enormous. Indeed, it would take a Cartesian demon to solve them. What is required is at least (a) a sufficiently rich virtual environment to obey the laws of physics and perspective, and (b) the inclusion (if our closing observations in 'Prospects for an engaged cognitivism' are correct) within that environment of full models of other cognitive agents. The trouble here is that (a) is no mean feat and (b) effectively demands that the problems of cognitive psychology be solved already! The virtual representation of other agents, if it is to have the proper interactive potential, must depend on the very theory of human psychology which we hope to use the virtual environment system to help us achieve. For all but the simplest cases (the modelling, say, of the cognitive capacities of simple animals in restricted niches), (ii) is a hopeless option. As I have argued elsewhere (Clark, 1986), it is option (i) which, I suspect, will ultimately prove most fruitful. Why waste time programming in a meagre virtual environment when we are surrounded by the incomparable riches of the real thing? Even so, (i) is not unproblematic. Given our current state of knowledge, no robot we build has any hope of participating in the full range of physical activities of a human being (let alone the social ones!). One answer might be to concentrate instead on the robotic modelling of the cognitive capacities of our non-human predecessors, especially simple animals in restricted niches. The value of this for a cognitive science interested in human cognition should not be underestimated. We have cause to believe in a stepwise evolution of human cognitive capacities in which our high-level capacities are a function of increasingly complex combinations of, and minor additions to, old cognitive strategies. Increased focus on animal intelligence is thus compatible with an interest in human psychology. It is also desirable insofar as a cognitive science which concerns itself with simple animal intelligences first, and complex human achievements later, may stand the best chance of avoiding the "unbiological solutions to natural problems of design" against which Dennett has recently warned (see Clark, 1986; Dennett, 1984).

The best methodology for an engaged cognitivism would no doubt involve a judicious use of all the strategies mentioned. Given the much-neglected possibility of a sustained computational investigation of animal intelligences, the holistic modelling required by the engaged approach may be far more tractable than it at first appears.

Conclusions

I have tried to suggest that INC has a distorting influence on cognitive science. Its isolationist heuristics have blinded researchers to the deep role which embodiment and active intervention play in human cognition. The result is a cognitive science able to illuminate only a tiny fraction of the constellation of powers which together we term human intelligence. I have begun to sketch the outlines of an alternative approach which involves the careful study of embodied and embedded intelligent behaviours. The approach is not new, but I have attempted to give it some systematic form and to locate the historical and practical forces against which it must be understood and measured. In the end there remains an undeniable tidiness about the INC paradigm which must be lost with the alternative approach. Putting the mind back into the world is a messy business; but it may be that where there's mess there's mentality!

Acknowledgment

I would like to thank an anonymous referee for this journal whose comments helped to clarify the arguments presented in the section on 'INC in crisis?'.

References

Baron-Cohen, S., Leslie, A. & Frith, U. (1985) Does the autistic child have a theory of mind? Cognition, 21, 37-46.

Boden, M. (1983) Artificial intelligence and animal psychology. New Ideas in Psychology, 1, 1-33.

Boden, M. (1984) Methodological links between AI and other disciplines. In: The Mind and The Machine (ed. S. Torrance). Ellis Horwood, Sussex.

Borst, C. V. (ed.) (1970) The Mind-Brain Identity Question. Macmillan, London. (See, e.g., U. T. Place, 'Is consciousness a brain process?' and J. Smart, 'Sensations and brain processes', therein.)

Clark, A. (1986) A biological metaphor. Mind and Language, 1, no. 1.

Cummins, R. (1982) What can be learned from Brainstorms? In: Mind, Brain and Function (eds J. Biro & R. Shahan). Harvester Press, Sussex.

Damon, W. (1981) Exploring children's social cognition on two fronts. In: Social Cognitive Development (eds J. Flavell & L. Ross). CUP, New York.

Dennett, D. (1984) Cognitive wheels: the frame problem of AI. In: Minds, Machines and Evolution (ed. C. Hookway). CUP, Cambridge.

Draper, S. (1986) Does connectionism constitute a paradigm revolution? Unpublished draft, University of Sussex.

Dreyfus, H. (1979) From micro-worlds to knowledge representation: AI at an impasse. In: Mind Design (ed. J. Haugeland). MIT Press, Cambridge, MA.

Feldman, J. & Ballard, D. (1982) Connectionist models and their properties. Cognitive Science, 6, 205-255.

Flavell, J. & Ross, L. (eds) (1981) Social Cognitive Development. CUP, New York.

Fodor, J. (1981) Representations. Harvester Press, Sussex.

Hacking, I. (1983) Representing and Intervening. CUP, Cambridge.

Haugeland, J. (1978) The nature and plausibility of cognitivism. In: Mind Design (ed. J. Haugeland). MIT Press, Cambridge, MA.

Haugeland, J. (1981) Semantic engines: an introduction to mind design. In: Mind Design (ed. J. Haugeland). MIT Press, Cambridge, MA.

Haugeland, J. (ed.) (1981a) Mind Design. MIT Press, Cambridge, MA.

Haugeland, J. (1985) Artificial Intelligence: The Very Idea. MIT Press, Cambridge, MA.

Held, R. & Hein, A. (1963) Movement-produced stimulation in the development of visually guided behaviour. Journal of Comparative and Physiological Psychology, 56, 872-876 (reprinted in Bennett, T. (ed.) (1978) Perception: An Adaptive Process. MSS, New York).

Hinton, G. (1984) Distributed representations. CMU Technical Report CMU-CS-84-157.

Hinton, G. & Anderson, J. (eds) (1981) Parallel Models of Associative Memory. Erlbaum, New Jersey.

Kazantzakis, N. (1959) Zorba The Greek. Bruno Cassirer, Oxford.

Lycan, W. (1981) Form, function and feel. Journal of Philosophy, LXXVIII, 24-50.

McClelland, J. L. & Rumelhart, D. E. (1985) Distributed memory and the representation of general and specific information. Journal of Experimental Psychology: General, 114.

Putnam, H. (1980) The nature of mental states. In: Readings in the Philosophy of Psychology, Vol. 1 (ed. N. Block). Methuen & Co., London.

Pylyshyn, Z. (1974) Complexity and the study of artificial and human intelligence. In: Mind Design (ed. J. Haugeland). MIT Press, Cambridge, MA.

Pylyshyn, Z. (1984) Computation and Cognition. MIT Press, Cambridge, MA.

Rumelhart, D. & McClelland, J. (1986) Parallel Distributed Processing: Explorations in the Microstructure of Cognition. MIT Press, Cambridge, MA.

Rutkowska, J. (1984) Explaining infant perception: insights from Artificial Intelligence. Cognitive Studies Research Paper 005, University of Sussex.

Ryan, A. (ed.) (1973) The Philosophy of Social Explanation. CUP, Cambridge.

Ryle, G. (1949) The Concept of Mind. Hutchinson, London.

Sinha, C. (1985) A socio-naturalistic approach to human development. In: Evolution and Developmental Psychology (eds G. Butterworth, J. Rutkowska & M. Scaife). Harvester Press, Sussex.

Sloman, A. (1978) The Computer Revolution in Philosophy. Harvester Press, Sussex.

Sloman, A. (1984) The structure of the space of possible minds. In: The Mind and The Machine (ed. S. Torrance). Ellis Horwood, Sussex.

Tennant, N. (1985) How is meaning possible? Philosophical Books, XXVI, no. 2.

Wimsatt, W. (1980) Reductionist research strategies and their biases in the units of selection controversy. In: Scientific Discovery (ed. T. Nickles). Reidel, Holland.