
Dædalus Summer 2006, p. 86

One could make a case that the history of cognitive science, insofar as it's been any sort of success, has consisted largely of finding more and more things about cognition that we didn't know and didn't know that we didn't. Throwing some light on how much dark there is, as I've put it elsewhere. The professional cognitive scientist has a lot of perplexity to endure, but he can be pretty sure that he's gotten in on the ground floor.

For example, we don't know what makes some cognitive states conscious. (Indeed, we don't know what makes any mental state, cognitive or otherwise, conscious, or why any mental state, cognitive or otherwise, bothers with being conscious.) Also, we don't know much about how cognitive states and processes are implemented by neural states and processes. We don't even know whether they are (though many of us are prepared to assume so faut de mieux). And we don't know how cognition develops (if it does) or how it evolved (if it did), and so forth, very extensively.

In fact, we have every reason to expect that there are many things about cognition that we don't even know that we don't know, such is our benighted condition.

In what follows, I will describe briefly how the notions of mental process and mental representation have developed over the last fifty years or so in cognitive science (or 'cogsci' for short): where we started, where we are now, and what aspects of our current views are most likely to be in need of serious alteration. My opinions sometimes differ from the mainstream, and where they do, I will stress that fact; those are, no doubt, the parts of my sketch that are least likely to be true.

Jerry Fodor

How the mind works: what we still don't know

Jerry Fodor is State of New Jersey Professor of Philosophy at Rutgers University. He is the author of several publications in philosophy and cognitive science, including Modularity of Mind (1983), A Theory of Content and Other Essays (1990), and The Mind Doesn't Work That Way: The Scope and Limits of Computational Psychology (2000).

© 2006 by the American Academy of Arts & Sciences

The 1950s paradigm shift in theories of the cognitive mind, initiated largely by Noam Chomsky's famous review of B. F. Skinner's book Verbal Behavior, is usually described in terms of a conflict between behaviorism and mentalism, from which the latter emerged victorious. Behaviorists thought something was methodologically or ontologically controversial about the claim that we (and, presumably, other advanced kinds of primates) often do the things we do because we believe and desire the things we do. Chomsky's reply was, in essence: Don't be silly. Our behavior is characteristically caused by our mental states; therefore, a serious psychology must be a theory about what mental states exist and what roles they play in causing our behavior. You put gas in the tank because you believe that, if you don't, the car will grind to a stop, and you don't want the car to do so. How could anyone sane believe otherwise?

That was, to put it mildly, all to the good. Behaviorism never was a plausible view of the methodology of psychology, any more than instrumentalism was a plausible view of the methodology of physics. Unsurprisingly, the two died of much the same causes. Many of the arguments Chomsky brought against the proposed reduction of the mind to behavior recall arguments that Carl Hempel and Hilary Putnam brought against the proposed reduction of electrons (to say nothing of tables and chairs) to fictions or logical constructions out of sensory experience. Don't be silly, they said. Sensations and the like are mind-dependent; tables and chairs are not. You can sit on chairs but not on sensations; a fortiori, chairs can't be sensations. Chomsky's realism about the mental was thus part of a wider realist agenda in the philosophy of science. But it's important to distinguish (as many of us did not back in those days) Chomsky's objections to Skinner's behaviorism from the ones he raised against Skinner's associationism. In retrospect, the latter seem the more important.

Behaviorism was and remains an aberration in the history of psychology. In fact, the mainstream of theorizing about the mind (including both philosophical empiricists and philosophical rationalists, and the sensationist tradition of psychologists like Wilhelm Wundt and Edward Titchener) wasn't behavioristic. Rather, it was a mentalistic form of associationism that took the existence of mental representations (what were then often called 'Ideas') and their causal powers entirely for granted. What associationism mainly cared about was discovering the psychological laws that Ideas fall under. And the central thesis (which, Hume said, was to psychology what gravitation was to Newtonian physics) was that Ideas succeed one another in cognitive processes according to the laws of association.

For nearly three hundred years, associationism was the consensus theory of cognition among Anglophone philosophers and psychologists. (It's still the view assumed by advocates of connectionism, a movement in cognitive science that hopes to explain human intellectual abilities by reference to associations among 'nodes' in 'neural networks,' the former corresponding, more or less, to Ideas and the latter corresponding, more or less, to the minds that contain them. If, in fact, you take away the loose talk about neurological plausibility, the connectionists' account of cognition is practically indistinguishable from Hume's.) Associationism was widely believed to hold, not just for thought but for language and brain processes as well: thoughts are chains of associated concepts, sentences are chains of associated words, and brain processes are chains of associated neuron firings. In all three cases, transitions from one link in such a chain to the next were supposed to be probabilistic, with past experience determining the probabilities according to whatever Laws of Association happened to be in fashion.




    Jerry Fodor on body in mind

These Anglophone theorists notwithstanding, it's been clear, at least since Kant, that the associationist picture can't be right. Thoughts aren't mere sequences of ideas; at a minimum, they are structured sequences of ideas. To think there's a red door isn't to think first about red and then about a door; rather, it's to think about a door that it is red. This is, I suppose, a truism, though perhaps not one that the cogsci community has fully assimilated. Likewise, sentences aren't just lists of words. Instead, they have a kind of internal structure such that some of their parts are grouped together in ways that others of their parts are not. Intuitively, the grouping of the parts of 'the red door opened' is: [(the) (red door)] (opened), not (the red) (door opened).

So sentences have not just lexical contents but also constituent structures, consisting of their semantically interpretable parts ('the red' doesn't mean anything in 'the red door opened,' but 'the red door' does). One of the main things wrong with associationism was thus its failure to distinguish between two quite different (in fact, orthogonal) relations that Ideas can enter into: association (a kind of causal relation) and constituency (a hierarchical kind of geometrical relation). Ironically, as far as anybody knows, the first isn't of much theoretical interest. But the second, the constituency relation, does a lot of the heaviest lifting in our current accounts of cognition.
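The contrast between the two relations can be sketched in code. The toy structures below are my own illustration, not anything from the essay: an associative chain is flat, keyed by transition strength, while a constituent tree is hierarchical, and only the tree distinguishes semantically interpretable groupings like 'red door' from mere parts like 'the red.'

```python
# Toy illustration (my own, not Fodor's): association vs. constituency
# for "the red door opened".

# Associationism: a flat chain of Ideas, each link a causal/probabilistic
# transition whose strength reflects past co-occurrence.
chain = [("the", "red", 0.6), ("red", "door", 0.8), ("door", "opened", 0.5)]

# Constituency: the hierarchical grouping [(the) (red door)] (opened),
# encoded as nested tuples of (label, children...).
tree = ("S",
        ("NP", ("Det", "the"),
               ("Nom", ("Adj", "red"), ("N", "door"))),
        ("VP", ("V", "opened")))

def spans(node):
    """Return (words, constituent_spans) for a nested-tuple tree."""
    if isinstance(node, str):
        return [node], []
    words, found = [], []
    for child in node[1:]:
        w, f = spans(child)
        words.extend(w)
        found.extend(f)
    found.append(" ".join(words))  # every subtree is a constituent
    return words, found

_, constituents = spans(tree)
# "red door" is a constituent (a semantically interpretable part);
# "the red" is a mere part, since no subtree spans exactly those words.
print(constituents)
```

The point of the sketch is that constituency is read off the tree's internal geometry alone, while the chain's links record nothing but pairwise causal history.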

For example, as Chomsky famously pointed out, sentences are productive. The processes that construct sentences out of their parts must be recursive: they must be able to apply to their own outputs, thereby generating infinite sets. It turns out that these recursions are defined over the constituent structures of the expressions they apply to. Typically, they work by embedding a constituent of a certain type in another constituent of the same type, like a sentence within a sentence. (For example, the sentence 'John met the guy from Chicago' is some sort of construction out of the sentences 'John met the guy' and 'the guy is from Chicago,' with the second sentence embedded in the first.) The same sort of story goes for mental representations, since they are also productive: if there weren't boundlessly many thoughts to express, we wouldn't need boundlessly many sentences to express them.
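A minimal sketch of the recursive point, using a toy embedding rule of my own (relative-clause embedding rather than the essay's 'from Chicago' construction): because the rule applies to its own output, each application yields a new well-formed sentence, so the generated set is unbounded.

```python
def embed(depth):
    """Embed a constituent of a given type inside another of the same type."""
    if depth == 0:
        return "John met the guy"
    # The rule applies to its own output: recursion over constituent structure.
    return embed(depth - 1) + " who met the guy"

for d in range(3):
    print(embed(d))
# Each depth gives a distinct grammatical sentence; there is no largest one,
# which is the productivity Chomsky pointed to.
```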

Their potential for productivity isn't, of course, the only thing that distinguishes constituent structures from associative structures. The strength of the association between Ideas is traditionally supposed to depend largely on the frequency and spatiotemporal contiguity of their tokenings. In contrast, as Chomsky also pointed out, the structural relations among the constituents of a complex representation hold for novel representations as well as for previously tokened ones, and are typically independent of the propinquity of the relata.

In sum, by far the most important difference between the traditional theories of mind and the ones those of us who aren't connectionists endorse is the shift from an associationist to a constituent, or computational, view of cognition.

Here, then, are the two basic hypotheses on which the current computational theory of cognition rests:

First, mental representations are sentence-like rather than picture-like. This stands in sharp contrast to the traditional view in which Ideas are some kind of images. In sentences, there's a distinction between mere parts and constituents, of which the latter are the semantically interpretable parts. By contrast, every part of a picture has an interpretation: it shows part of what the picture shows.

Second, whereas associations are operations on parts of mental representations, computations are operations defined on their constituent structures.

So much for a brief (but, I think, reasonably accurate) summary of what most cognitive scientists now hold as a working hypothesis about mental representations and mental processes (except, to repeat, connectionists, who somehow never got beyond Hume). Now, the question of interest is: for how much of cognition is this hypothesis likely to be true? The available options are 'none of it,' 'some of it,' and 'all of it.'

My guess is: at best, not very much of it. This brings me to the heart of this essay.

Here's what I'm worried about. As we've been seeing, constituent structure is a species of the part/whole relation: all constituents are parts, though not vice versa. It follows that constituency is a local relation: to specify the parts of a thing you don't need to mention anything that's outside of the thing. (To specify the part/whole relation between a cow and its left leg, you don't have to talk about anything outside the cow. Even if this cow were the only thing in the whole world, it would bear the same relation to its left leg that it bears to its left leg in this world. Only, in that world, the cow would be lonelier.) Likewise for representations, mental representations included. Since constituents are parts of representations, operations defined on constituents apply solely in virtue of the internal structure of the representations.

The question thus arises: are there mental structures, with mental processes defined on them, that aren't local in this sense? If there are, then we are in trouble because, association having perished, computation is the only notion of a mental process that we have; and, as we've just seen, computations are defined over local properties of the representations that they apply to.

Well, I think there's pretty good reason to suppose that many of the mental processes crucial to cognition are indeed not local. So I guess we're in trouble. It wouldn't be the first time.

There are at least two pervasive characteristics of cognitive processes that strongly suggest their nonlocality. One is their sensitivity to considerations of relevance; the other is their sensitivity to the global properties of one's cognitive commitments. It is very easy to run the two together, and it's a common practice in the cogsci literature to do so. For polemical purposes, perhaps nothing much is lost by that. But some differences between them are worth exploring, so I'll take them one at a time.

Consider the kind of thinking that goes on in deciding what one ought to believe or what one ought to do (the same considerations apply both to pure and to practical reason). In both cases, reasoning is typically isotropic. In other words, any of one's cognitive commitments (including, of course, currently available experiential data) is relevant, in principle, to accepting or rejecting the options; there is no way to determine, just by inspecting an empirical hypothesis, what will be germane to accepting or rejecting it. Relevance isn't like constituency: it's not a local property of thoughts.

So how does one figure out what's relevant to deciding on a new belief or plan? That question turns out to be very hard to answer. There is an infinite corpus of prior cognitive commitments that might prove germane, but one can actually visit only some relatively small, finite subset of them in the real time during which problems get solved. Relevance is long, but life is short. Something, somehow, must filter what one actually thinks about when one considers what next to believe or what next to do.

Hence the infamous 'frame problem' in theories of artificial intelligence: how do I decide what I should take to be relevant when I compute the level of confidence I should invest in a hypothesis or a plan? Any substantive criterion of relevance I employ will inevitably risk omitting something that is, in fact, germane; and one of the things I want my estimate to do (all else equal) is minimize this risk. How on earth am I to arrange that?

I think the frame problem arises because we have to use intrinsically local operations (computations, as cogsci currently understands that notion) to calculate an intrinsically nonlocal relation (relevance). If that's right, the frame problem is a symptom of something deeply inadequate about our current theory of mind.

By contrast, it's a widely prevalent view among cognitive scientists that the frame problem can be circumvented by resorting to heuristic cognitive strategies. This suggestion sounds interesting, but it is, in a certain sense, empty because the notion of a heuristic procedure is negatively defined: a heuristic is just a procedure that only works from time to time. Therefore, everything depends on which heuristic procedure is alleged to circumvent the frame problem, and about this the canonical literature tends to be, to put it mildly, pretty casual.

Here, for example, is Steven Pinker, in a recent article, explaining what heuristics investors use when they play the stock market: 'Real people tend to base investment decisions on, among other things, what they hear that everyone else is doing, what their brother-in-law advises, what a cold-calling stranger with a confident tone of voice tells them, and what the slick brochures from large investing firms recommend. People, in other words, use heuristics.'1

Pinker provides no evidence that this is, in fact, the way that investors work; it's a story he's made up out of whole cloth. At best, it's hard to see why, if it's true, some investors make lots more money than others. But never mind; what's really striking about Pinker's list is that he never considers that thinking about the stock market (or paying somebody else to think about it for you, if you're lazy like me) might be one of the heuristics that investors employ when they try to figure out whether to buy or sell. Ironically, thinking seems largely to have dropped out of heuristic accounts of how the mind works. Skinner would have been greatly amused.

There have been, to be sure, cases when cognitive scientists have tried to tell a story about the use of heuristic strategies in cognition that amounts to more than the mere waving of hands. To my knowledge, the heuristic most often said to guide decisions about what action to perform or belief to adopt is some version of 'if things went all right with what you did last time, do the same again this time.' We owe a rather opaque formulation to the philosopher Eric Lormand: 'A system should assume by default that a fact persists, unless there is an axiom specifying that it is changed by an occurring event . . . . [G]iven that an event E occurs in situation S, the system can use axioms to infer new facts existing in S+1, and then simply copy the remainder of its beliefs about S over to S+1.'2 Likewise, Peter Carruthers says

1 Steven Pinker, 'So How Does the Mind Work?' Mind and Language 20 (1) (February 2005): 1-24.

2 Zenon W. Pylyshyn, ed., The Robot's Dilemma: The Frame Problem in Artificial Intelligence (Norwood, N.J.: Ablex, 1987), 66.

that 'there's no reason why the choices [about what to do next] couldn't be made by higher-order heuristics, such as "use the one which worked last time."'3 The idea, then, is to adopt whichever plan was successful when this situation last arose. The cogsci literature refers to this heuristic as the 'sleeping dog' strategy. Last time, I tiptoed past the sleeping dog, and I didn't get bitten. So if I tiptoe past the sleeping dog again now, I probably won't get bitten this time either. So the plan I'll adopt is: tiptoe past the sleeping dog. What could be more reasonable? What could be less problematic?

But, on second thought, this suggestion is no help since it depends crucially on how one individuates situations, and how one individuates situations depends on what one takes as relevant to deciding when situations are of the same kind. Consider: What was it, precisely, that did happen last time? Was it that I tiptoed past a sleeping dog? Or was it that I tiptoed past a sleeping brown dog? Or that I tiptoed past a sleeping pet of Farmer Jones? Or that I tiptoed past that sleeping pet of Farmer Jones? Or that I tiptoed past a creature that Farmer Jones had thoughtfully sedated so that I could safely tiptoe past it? It could well be that these are all true of what I did last time. Nor, in the general case, is there any reason to suppose that I know, or have ever known, what it is about what I did last time that accounts for my success. So, when I try to apply the sleeping dog heuristic, I'm faced with figuring out which of the true descriptions of the situation last time is relevant to deciding what I ought to do this time. Keeping that in mind is crucial. If the dog I tiptoed past last time was sedated, I've got no grounds at all for thinking that tipping my toe will get me past it now. Philosophers have gotten bitten that way from time to time.4
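The point about individuating situations can be made concrete with a toy case matcher, entirely my own construction: whether the past case counts as 'the same situation' depends on which features the matcher treats as relevant, and the heuristic gives opposite advice under different choices.

```python
# Toy sketch (my own): "do what worked last time" presupposes a notion of
# "same situation", and that depends on which features count as relevant.
past_case = {"kind": "dog", "color": "brown", "state": "sedated"}
now_case  = {"kind": "dog", "color": "black", "state": "asleep"}

def same_situation(a, b, relevant):
    """Match two cases on a chosen subset of relevant features."""
    return all(a.get(k) == b.get(k) for k in relevant)

# Coarse description: the precedent applies, so tiptoe again.
print(same_situation(past_case, now_case, ["kind"]))            # True
# Finer description: no precedent, and the heuristic falls silent.
print(same_situation(past_case, now_case, ["kind", "state"]))   # False
```

Nothing in the heuristic itself chooses between the two feature sets; making that choice just is the relevance problem over again.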

So I'm back where I started: I want to figure out what action my previous experience recommends. What I need, in order to do so, is to discern what about my previous action was relevant to its success. But relevance is a nonlocal relation, and I have only local operations at hand with which to compute it. So the sleeping dog strategy doesn't solve my relevance problem; it only begs it. You might as well say: 'Well, you decided on an action that was successful last time; so just decide on a successful action this time too.' My stockbroker tells me he has a surefire investment heuristic: buy low and sell high. It sounds all right, but somehow it keeps not making me rich. It's well-nigh useless to propose that heuristic processing is the solution to the problem of inductive relevance because deciding how to choose, and how to apply, a heuristic itself typically involves estimating degrees of inductive relevance.

3 Peter Carruthers, 'Keep Taking the Modules Out,' Times Literary Supplement 5140 (October 5, 2001): 30.

4 Another way to put the same point: What counts as 'the last time this situation arose' depends on how I describe this situation. Was the last time this happened the last time that I tiptoed past a sleeping dog? Or was it the last time that I tiptoed past a brown sleeping dog? Or was it the last time that I tiptoed past a sedated brown sleeping dog? What determines which heuristic I should use in this situation thus depends on what kind of situation I take it to be. And what kind of situation I take it to be depends on what about my successful attempts at dog-passing was relevant to their succeeding. We're very close here to Nelson Goodman's famous point that, in inductive inference, how you generalize your data depends crucially on what you take them to have in common, what their relevant similarity is. The frame problem is thus an instance of a perfectly general problem about the role of relevant similarity in empirical inferences. Lacking an account of that, the advice to 'do the same as you did last time' is, quite simply, empty.

It's remarkable, and more than a bit depressing, how regularly what is taken to be a solution of the frame problem proves to be simply one of its formulations. The rule of thumb for reading the literature is: if someone thinks that he has solved the frame problem, he doesn't understand it; and if someone even thinks that he understands the frame problem, he doesn't understand it. But it does seem clear that, whatever the solution of the frame problem turns out to be, it isn't going to be computationally local. Its constituent structure is all of a mental representation that a mental process can see. But you can't tell from just the constituent structure of a thought what tends to (dis)confirm it. Clearly, you have to look at a lot else as well. The frame problem is how you tell what else you have to look at. I wish I knew. If you know, I wish you'd tell me.

The frame problem concerns the size of the field of cognitive commitments that one has to search in order to make a successful decision. But there are also cases where the shape of the field is the problem. Many systems of beliefs that are germane to estimating confirmation levels have global parameters; that is, they are defined over the whole system of prior cognitive commitments, so computations that are sensitive to such parameters are nonlocal on the face of them.

Suppose I have a set of beliefs that I'm considering altering in one way or another under the pressure of experience. Clearly, I would prefer that, all else equal, the alteration I settle on is the simplest of the available ways to accommodate the recalcitrant data. The globality problem, however, is that I can't evaluate the overall simplicity of a belief system by summing the intrinsic simplicities of each of the beliefs that belong to it. There is, on the face of it, no such thing as the intrinsic simplicity of a belief (just as there is no such thing as the intrinsic relevance of a datum). Nothing local about a representation (nothing about the relations between the representation and its constituent parts, for example) determines how much it would complicate my current cognitive commitments if I were to endorse it.

Notice that, unlike the problems about relevance, this sort of worry about locality holds even for very small systems of belief. It holds even for punctate systems of belief (if, indeed, there can be such things). Suppose that all that I believe is P, but that I am now considering also adopting either belief Q or belief R. What I therefore want to evaluate, if I'm to maximize overall simplicity, is whether the belief P&Q is simpler than the belief P&R. But I can't do that by considering P, Q, and R severally: the complexity of P&Q isn't a function of the complexity of P and the complexity of Q taken separately. So it appears that the operations whereby I compute the simplicity of P&Q can't be local.
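One crude way to see why simplicity resists pointwise computation is to use compressed length as a stand-in for complexity. This is my illustration; the essay proposes no such measure. Two candidate beliefs can be comparable taken severally, yet yield conjunctions of very different complexity, because what matters is how much a candidate overlaps what is already believed.

```python
import zlib

def cost(claim):
    """Compressed length as a crude, illustrative complexity proxy."""
    return len(zlib.compress(claim.encode()))

P = "all the ravens in the garden are black"
Q = "all the ravens in the garden are black birds"  # overlaps P heavily
R = "the price of copper rose sharply this year"    # unrelated to P

# Taken severally, Q and R are claims of broadly similar size...
print(cost(Q), cost(R))
# ...but the conjunctions come apart: P&Q shares almost all of P, so it
# adds little to the compressed whole, while P&R shares nothing with it.
print(cost(P + " and " + Q), cost(P + " and " + R))
```

The asymmetry is the point: the cost of a conjunction is a property of the pair as a whole, not something computable from the conjuncts one at a time.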

The same goes for other parameters that anyone rational would like to maximize, all else being equal. Take, for example, the relative conservatism of such commitments. Nobody wants to change his mind unless he has to; and if one has to, one prefers to opt for the bare minimum of change. The trouble is, once again, that conservatism is a global property of belief systems. On the face of it, you can't estimate how much adding P would alter the set of commitments C by considering P and C separately; on the face of it, conservatism (unlike, for example, consistency) isn't a property that beliefs have taken severally.

In short, it appears that many of the principles that control (what philosophers call) the nondemonstrative fixation of beliefs have to be sensitive to parameters of whole systems of cognitive commitments.5 Computational applications of these principles have to be nonlocal. As a result, they can't literally be computations in the sense of that term that our current cognitive science has in mind.

If you suppose (as I'm inclined to) that nondemonstrative inference is always a species of argument to the best available explanation, this sort of consideration will be seen to apply very broadly indeed: what's the best available explanation always depends on what alternative explanations are available; and, by definition, the presence or absence of alternatives to a hypothesis isn't a local property of that hypothesis.

I should, however, enter a caveat. Suppose that something you want to measure is a property of complex beliefs but not of their parts; for example, suppose that you want to assess the simplicity of P&Q relative to that of P&R. My point has been that, prima facie, the computations you have to perform aren't local; they must be sensitive to properties that the belief that P&Q has as such. One could, however, make the computations local by brute force. So while the complexity of P&Q isn't determined by local properties of P together with the local properties of Q, it is determined, trivially, by local properties of the representation P&Q. In effect, you can always preserve the locality of computations by inflating the size of the units of computation. The distance between Washington and Texas is a nonlocal property of these states, but it's a local property of the Northern Hemisphere.

That sort of forced reduction of global problems to local problems is, however, cheating since it offers no clue about how to solve the local problems that the global ones are reduced to. In fact, you would think that nobody sensible would even consider it. To the contrary: recent discussions of confirmation (from, say, Duhem forward) have increasingly emphasized the holism of nondemonstrative inferences by claiming that, in the limiting case, whole theories are the proper units for their evaluation. This saves the locality of the required computations by fiat, but only at the cost of making them wildly intractable. More to the point: even if we could somehow take whole theories as the units in computing the confirmation of our hypotheses, the patent fact is that we don't. Though we don't alter our cognitive commitments one by one, it's also not true that everything we believe is up for grabs all of the time, which is what the idea that the units of confirmation are whole theories claims if it is taken literally. The long and short is: in science and elsewhere, it appears that the processes by which we evaluate nondemonstrative inferences for simplicity, coherence, conservatism, and the like are both sensitive to global properties of our cognitive commitments and tractable. What cognitive science would like to understand, but doesn't, is how on earth that could be so.

5 It's very striking how regularly the problems cognitive psychologists have when they try to provide an explicit account of the nondemonstrative fixation of belief exactly parallel the ones that inductive logicians have when they try to understand the (dis)confirmation of empirical theories. Pinker, among many others, objects to conceptualizing individual cognition as, in effect, scientific theorizing writ small: 'Granted that several millennia of Western science have given us nonobvious truths involving circuitous connections among ideas; why should theories of a single human mind be held to the same standard?' Pinker, 'So How Does the Mind Work?' In fact, however, it's increasingly apparent that the philosophy of science and the psychology of cognition are beating their heads against the same wall. It is, after all, a truism that, by and large, scientists think much the same way that we do.

The similarity between the two literatures can be quite creepy given that they seem largely unaware of one another's existence. Thus, Arthur Fine raises the question: how can someone who is not a realist about the ontological commitments of scientific theories explain the convergence of the scientific community on quite a small number of explanatory options? He says it's in part because we all follow the instrumentally justified rule 'if it worked well last time, try it again.' David Papineau, ed., The Philosophy of Science, Oxford Readings in Philosophy (New York: Oxford University Press, 1996), 78. He doesn't, however, even try to explain the consensus about what 'it' is.

There is quite possibly something deeply wrong with the cognitive psychology that we currently have available, just as there was something deeply wrong with the associative cognitive psychology that couldn't acknowledge recursion or the constituent structure of linguistic and mental representations. What, then, are we to do?

Actually, I don't know. One possibility is to continue to try for an unvacuous heuristic account of how we might compute relevance and globality. I haven't heard of any, but it's perfectly possible that there are some out there somewhere, or that there will be tomorrow, or the day after. I don't believe it, but hope is notorious for springing eternal.

Alternatively, what our cognitive psychology needs may be a new notion of computation, one that doesn't have locality built into it. That is, of course, a lot easier to say than to provide. The current computational account of mental processes is at the core of our cognitive science. The notion of a computation is what connects our theory of mental representations to our theory of mental processes; it does for our cognitive science what the laws of association promised (but failed) to do for our empiricist forebears. As things stand, we have no idea at all how to do without it. But at least we may be starting to understand its intrinsic limitations. In the long run, that could lead to revising it, or rejecting it, or, best of all, replacing it with some theory that transcends its limitations: a consummation devoutly to be wished.

So the good news is that our notions of mental representation and mental process are much better than Hume's.

The bad news is that they aren't nearly good enough.

Steven Pinker recently wrote a book called How the Mind Works. It is a long book. In fact, it is a very long book. For all that, my view is that he doesn't actually know how the mind works. Nor do I. Nor does anybody else. And I suspect, such is the state of the art, that if God were to tell us how it works, none of us would understand Him.

