Andy Clark - Mindware: An Introduction to the Philosophy of Cognitive Science (2001)

MINDWARE: AN INTRODUCTION TO THE PHILOSOPHY OF COGNITIVE SCIENCE
Andy Clark, University of Sussex
New York / Oxford: Oxford University Press, 2001


Transcript of Andy Clark - Mindware: An Introduction to the Philosophy of Cognitive Science (2001)


  • CONTENTS

    Preface: About Mindware viii
    Acknowledgments x
    Resources xii
    Introduction: (Not) Like a Rock 1
    Meat Machines: Mindware as Software 7
    Symbol Systems 28
    Patterns, Contents, and Causes 43
    Connectionism 62
    Perception, Action, and the Brain 84
    Robots and Artificial Life 103
    Dynamics 120
    Cognitive Technology: Beyond the Naked Brain 140
    (Not Really a) Conclusion 160
    APPENDIX I: Some Backdrop: Dualism, Behaviorism, Functionalism, and Beyond 162
    APPENDIX II: Consciousness and the Meta-Hard Problem 171
    References 189
    Index 203

  • "Mindw

    are" (the term) is just a convenient label for that u

    nruly rag-bag of stuff

    we intuitively co

    unt as m

    ental. Beliefs, hopes, fears, thoughts, reasoning, im

    agery, feelings-the

    list is long and the puzzle is deep. The puzzle is, just what is all this

    stuff with w

    hich we populate o

    ur m

    inds? What are beliefs, thoughts, and reaso

    ns,

    and how

    do they take their place among the other things that m

    ake up the natural

    world?

    Mindware (the book) is w

    ritten with these three aim

    s in (of course) m

    ind: To introduce so

    me of the research program

    s that are trying (successfully, I believe) to locate the place of m

    indfulness in nature. To do so

    briefly, by sketching the major

    elements of key research program

    s, and then prom

    pting the reader to accessible original so

    urces for the full flesh and fire. A

    nd, above all, to do so challengingly, by

    devoting the bulk of the treatment to short, substantive critical discussions that try

    to touch som

    e deep and tender n

    erves and that reach o

    ut to include front-line re- search in both cognitive science and philosophy.

    The idea, in short, is to provide just enough of a sketch of the central research

    programs to then initiate and pursue a w

    ide range of critical discussions of the con

    -

    ceptual terrain. These discussions do not pretend to be u

    nbiased, exhaustive, or

    even

    to cover all the ground of a standard introductory text (although the m

    ater- ial in the tw

    o appendices goes a little way tow

    ard filling in som

    e gaps). Instead, the goal is to highlight challenging o

    r problematic issues in a w

    ay likely to engage the reader in active debate. Each chapter opens w

    ith a brief sketch of a research tradi- tion o

    r perspective, followed by short critical discussions of sev

    eral key issues. Ar-

    eas covered include artificial intelligence (A.I.), co

    nnectionism

    , neu

    roscience, ro

    -

    botics, dynamics, and artificial life, w

    hile discussion ranges across both standard

    philosophical territory (levels of description, types of explanation, mental cau

    sa-

    tion, the nature and the status of folk psychology) a

    nd the just-visible conceptual

    landscape of cutting edge cognitive science (emergence, the interplay between per-

    ception, cognition, and action, the relation betw

    een life and mind, m

    ind as an in-

    viii

    Preface ix

    trinsically embodied and en

    vironmentally em

    bedded phenomena). If these term

    s seem

    alien and empty, don't w

    orry. They are just placeholders for the discussions

    to com

    e.

    The text has, deliberately, a rather strong narrative structure. I am

    telling a story about the last three o

    r four decades of research into the nature of m

    ind. It is a story told from

    a specific perspective, that of a philosopher, actively engaged in w

    ork and co

    nversation w

    ith cognitive scientists, and especially engaged with w

    ork

    in artificial neu

    ral netw

    orks, cognitive neu

    roscience, robotics, and em

    bodied, sit- uated cognition. The n

    arrative reflects these engagements and is thus dense w

    here m

    any are skimpy and (at tim

    es) skimpy w

    here others are dense. I embrace this

    consequence, because I hope that m

    y peculiar com

    bination of interests affords a useful a

    nd perhaps less frequently enco

    untered ro

    ute into many of the central top-

    ics and discussions. I hope that the text will be u

    seful in both basic and m

    ore ad-

    van

    ced level courses both in philosophy of m

    ind and in the v

    arious cognitive sciences.

    The project is clearly ambitious, taking the reader all the w

    ay from the first

    waves of artificial intelligence through to co

    ntemporary n

    euro

    science, robotics, and

    the coadaptive dance of m

    ind, culture, and technology. In pushing an introduc-

    tory text to these outer lim

    its, I am betting o

    n o

    ne thing: that a good w

    ay to in- troduce people to a living discussion is to m

    ake them a part of it and n

    ot hide the dirty laundry. There is m

    uch that is u

    nclear, m

    uch that is ill u

    nderstood, and m

    uch

    that will, n

    o doubt, so

    on prove to be m

    istaken. There are places where it is n

    ot yet clear

    what

    the right questions

    are, let alone the

    answ

    ers. But the goal is

    worthy-a

    better understanding of o

    urselves and of the place of hum

    an thought in the n

    atural order. The m

    odest hope is just to engage the new

    reader in an o

    n-

    going quest and to make her part of this frustrating, fascinating, m

    ultivoiced con

    -

    versation.

    A word of caution in closing. Philosophy of cognitive science has so

    mething

    of the flavor of a random w

    alk on

    a rubber landscape. No o

    ne know

    s quite where

    they are going, and every step anyone takes threatens to change the w

    hole of the su

    rrou

    nding scenery. There is, shall w

    e say, flux. So if you find these topics inter- esting, do, do check o

    ut the current editions of the journals, and visit so

    me w

    eb sites.' Y

    ou'll be amazed how

    things change.

    Andy Clark St. Louis

    'Sites change rapidly, so it is u

    nwise to give lists. A

    better bet is to search using key w

    ords such as phi-

    losophy, cognitive science, and con

    nectionism

    . Or ask your tutor for his or her favorite sites. Useful journals include M

    inds and Machines, Cognitive Science, Behavioral and Brain Sciences (hard), M

    ind and Language (rather philosophical), Philosophical Psychology, Connection Science (technical], and Journal of Consciousness Studies. Also m

    ainstream philosophy journals su

    ch as Mind, Journal ofPhilosophy, and

    Synthese. The journal Trends in Cognitive Sciences is a particularly useful sou

    rce of user-friendly review

    articles, albeit on

    e in which explicitly philosophical treatm

    ents are the exception rather than the rule.

  • ACKNOWLEDGMENTS

    This book grew out of a variety of undergraduate classes taught in both England and the United States. In England, I am indebted to students and colleagues in philosophy and in the School of Cognitive and Computing Sciences at the University of Sussex. In the United States, I am indebted to students and colleagues in Philosophy, in the Philosophy/Neuroscience/Psychology program, and in the Hewlett freshman Mind/Brain program, all at Washington University in St. Louis. Various friends, colleagues, and mentors, both at these institutions and elsewhere, deserve very special thanks. Their views and criticisms have helped shape everything in this book (though, as is customary, they are not to be blamed for the faults and lapses). I am thinking of (in no particular order) Daniel Dennett, Paul and Pat Churchland, Margaret Boden, Brian Cantwell Smith, Tim Van Gelder, Michael Morris, Bill Bechtel, Michael Wheeler, David Chalmers, Rick Grush, Aaron Sloman, Susan Hurley, Peter Carruthers, John Haugeland, Jesse Prinz, Ron Chrisley, Brian Keeley, Chris Peacocke, and Martin Davies. I owe a special debt to friends and colleagues working in neuroscience, robotics, psychology, artificial life, cognitive anthropology, economics, and beyond, especially David Van Essen, Charles Anderson, Douglass North, Ed Hutchins, Randy Beer, Barbara Webb, Lynn Andrea Stein, Maja Mataric, Melanie Mitchell, David Cliff, Chris Thornton, Esther Thelen, Julie Rutkowski, and Linda Smith.

    Most of the present text is new, but a few chapters draw on material from published articles:

    Chapter 4, Section 4.2(c), incorporates some material from "The world, the flesh and the artificial neural network," to appear in J. Campbell and G. Oliveri (eds.), Language, Mind and Machines (Oxford, England: Oxford University Press).

    Chapter 5, Section 5.1, and Chapter 8, Section 8.1, include material from "Where brain, body and world collide." Daedalus, 127(2), 257-280, 1998.

    Chapter 6, Section 6.1, draws on my entry "Embodied, situated and distributed cognition." In W. Bechtel and G. Graham (eds.), A Companion to Cognitive Science (Oxford, England: Blackwell, 1998).

    Chapter 7, Section 7.1, reproduces case studies originally presented in two papers: "The dynamical challenge." Cognitive Science, 21(4), 451-481, 1997, and "Time and mind." Journal of Philosophy, 95(7), 354-376, 1998.

    Chapter 8 includes some material from "Magic words: How language augments human computation." In P. Carruthers and J. Boucher (eds.), Language and Thought (Cambridge, England: Cambridge University Press, 1998).

    Sincere thanks to the editors and publishers for permission to use this material here. Sources of figures are credited in the legends.

    Thanks to Beth Stufflebeam, Tamara Casanova, Katherine McCabe, and Kimberly Mount for invaluable help in preparing the manuscript. And to Lolo, the cat, for sitting on it during all stages of production.

    Thanks also to George Graham and a bevy of anonymous referees, whose comments and suggestions have made an enormous difference to the finished product.

    And finally, essentially, but so very inadequately, thanks beyond measure to my wife and colleague, Josefa Toribio, and to my parents, Christine and James Clark. As always, your love and support meant the world.

  • RESOURCES

    Each chapter ends with specific suggestions for further reading. But it is also worth highlighting a number of basic resources and collections:

    Bechtel, W., and Graham, G. (1998). A Companion to Cognitive Science. Oxford, England: Blackwell. (Encyclopedia-style entries on all the important topics, with a useful historical introduction by Bechtel, Abrahamsen, and Graham.)

    Boden, M. (1990). The Philosophy of Artificial Intelligence. Oxford, England: Oxford University Press. (Seminal papers by Turing, Searle, Newell and Simon, and Marr, with some newer contributions by Dennett, Dreyfus and Dreyfus, P.M. Churchland, and others.)

    Boden, M. (1996). The Philosophy of Artificial Life. Oxford, England: Oxford University Press. (Nice introductory essay by Langton, and a useful window on some early debates in this area.)

    Haugeland, J. (1997). Mind Design II. Cambridge, MA: MIT Press. (Fantastic collection, including a fine introduction by Haugeland; seminal papers by Turing, Dennett, Newell and Simon, Minsky, Dreyfus, and Searle; a comprehensive introduction to connectionism in papers by Rumelhart, Smolensky, Churchland, Rosenberg, and Clark; seminal critiques by Fodor and Pylyshyn, Ramsey, Stich, and Garon; and a hint of new frontiers from Brooks and Van Gelder. Quite indispensable.)

    Lycan, W. (1990). Mind and Cognition: A Reader. Cambridge, MA: Blackwell. (Great value: a large and well-chosen collection concentrating on the earlier debates over functionalism, instrumentalism, eliminativism, and the language of thought, with a useful section on consciousness and qualia.)

    MacDonald, C., and MacDonald, G. (1995). Connectionism: Debates on Psychological Explanation. Oxford, England: Blackwell. (A comprehensive sampling of the debates between connectionism and classicism, with contributions by Smolensky, Fodor and Pylyshyn (and replies by each), Ramsey et al., Stich and Warfield, and many others.)

    Two recent textbooks have contents that nicely complement the present, cognitive scientifically oriented, perspective:

    Braddon-Mitchell, D., and Jackson, F. (1996). Philosophy of Mind and Cognition. Oxford, England: Blackwell. (Excellent introductory text covering the more traditionally philosophical territory of identity theory, functionalism, and debates about content.)

    Kim, J. (1996). Philosophy of Mind. Boulder, CO: Westview. (A truly excellent text, covering behaviorism, identity theory, machine functionalism, and debates about consciousness and content.)

  • INTRODUCTION

    (Not) Like a Rock

    Here's how January 21, 2000 panned out for three different elements of the natural order.

    Element 1: A Rock

    Here is a day in the life of a small, gray-white rock nestling amidst the ivy in my St. Louis backyard. It stayed put. Some things happened to it: there was rain, and it became wet and shiny; there was wind, and it was subtly eroded; my cat chased a squirrel nearby, and this made the rock sway. That's about it, really. There is no reason to believe the rock had any thoughts, or that any of this felt like anything to the rock. Stuff happened, but that was all.

    Element 2: A Cat

    Lolo, my cat, had a rather different kind of day. About 80% of it was spent, as usual, asleep. But there were forays into the waking, wider world. Around 7 A.M. some inner stirring led Lolo to exit the house, making straight for the catflap from the warm perch of the living room sofa. Outside, bodily functions doubtless dominated, at least at first. Later, following a brief trip back inside (unerringly routed via the catflap and the food tray), squirrels were chased and dangers avoided. Other cats were dealt with in ways appropriate to their rank, station, girth, and meanness. There was a great deal of further sleeping.

    Element 3: Myself

    My day was (I think) rather more like Lolo's than like the rock's. We both (Lolo and I) pursued food and warmth. But my day included, I suspect, rather more outright contemplation. The kind of spiraling meta-contemplation, in fact, that has sometimes gotten philosophy a bad name. Martin Amis captured the spirit well:

    I experienced thrilling self-pity. "What will that mind of yours get up to next?" I said, recognizing the self-congratulation behind this thought and the self-congratulation behind that recognition, and the self-congratulation behind recognizing that recognition. Steady on. (Martin Amis, The Rachel Papers, p. 96)

    I certainly did some of that. I had thoughts, even "trains of thought" (reasonable sequences of thinkings such as "It's 1 P.M. Time to eat. What's in the fridge?" and so on). But there were also thoughts about thoughts, as I sat back and observed my own trains of thought, alert for colorful examples to import into this text.

    What, then, distinguishes cat from rock, and (perhaps) person from cat? What are the mechanisms that make thought and feeling possible? And what further tricks or artifices give my own kind of mindfulness its peculiar self-aware tinge? Such questions seem to focus attention on three different types of phenomena:

    1. The feelings that characterize daily experience (hunger, sadness, desire, and so on)
    2. The flow of thoughts and reasons
    3. The meta-flow of thoughts about thoughts (and thoughts about feelings), of reflection on reasons, and so on.

    Most of the research programs covered in this text have concentrated on the middle option. They have tried to explain how my thought that it is 1 P.M. could lead to my thought about lunch, and how it could cause my subsequent lunch-seeking actions. All three types of phenomena are, however, the subject of what philosophers call "mentalistic discourse." A typical example of mentalistic discourse is the appeal to beliefs (and desires) to explain actions. The more technical phrase "propositional attitude psychology" highlights the standard shape of such explanations: such explanations pair mental attitudes (believing, hoping, fearing, etc.) with specific propositions ("that it is raining," "that the coffee is in the kitchen," "that the squirrel is up the tree," etc.) so as to explain intelligent action. Thus in a sentence such as "Pepa hopes that the wine is chilled," the that-construction introduces a proposition ("the wine is chilled") toward which the agent is supposed to exhibit some attitude (in this case, hoping). Other attitudes (such as believing, desiring, fearing, and so on) may, of course, be taken to the same proposition. Our everyday understandings of each other's behavior involve hefty doses of propositional attitude ascription: for example, I may explain Pepa's reluctance to open the wine by saying "Pepa believes that the wine is not yet chilled and desires that it remain in the fridge for a few more minutes."

    Such ways of speaking (and thinking) pay huge dividends. They support a surprising degree of predictive success, and are the common currency of many of our social and practical projects. In this vein, the philosopher Jerry Fodor suggests that commonsense psychology is ubiquitous, almost invisible (because it works so well), and practically indispensable. For example, it enables us to make precise plans on the basis of someone's 2-month-old statement that they will arrive on flight 594 on Friday, November 20, 1999. Such plans often work out, a truly amazing fact given the number of physical variables involved. They work out (when they do) because the statement reflects an intention (to arrive that day, on that flight) that is somehow an active shaper of my behavior. I desire that I should arrive on time. You know that I so desire. And on that basis, with a little cooperation from the world at large, miracles of coordination can occur. Or as Fodor more colorfully puts it:

    If you want to know where my physical body will be next Thursday, mechanics (our best science of middle-sized objects after all, and reputed to be pretty good in its field) is no use to you at all. Far the best way to find out (usually in practice, the only way to find out) is: ask me! (Fodor, 1987, p. 6, original emphasis)

    Commonsense psychology thus works, and with a vengeance. But why? Why is it that treating each other as having beliefs, hopes, intentions, and the like allows us successfully to explain, predict, and understand so much daily behavior? Beliefs, desires, and so on are, after all, invisible. We see (what we take to be) their effects. But no one has ever actually seen a belief. Such things are (currently? permanently?) unobservable. Commonsense psychology posits these unobservables, and looks to be committed to a body of law-like relations involving them. For example, we explain Fred's jumping up and down by saying that he is happy because his sister just won the Nobel Prize. Behind this explanation lurks an implicit belief in a law-like regularity, viz. "if someone desires x and x occurs, then (all other things being equal) they feel happy." All this makes commonsense psychology look like a theory about the invisible, but causally potent, roots of intelligent behavior. What, then, can be making the theory true (assuming that it is)? What is a belief (or a hope, or a fear) such that it can cause a human being (or perhaps a cat, dog, etc.) to act in an appropriate way?

    Once upon a time, perhaps, it would have been reasonable to respond to the challenge by citing a special kind of spirit-substance: the immaterial but causally empowered seat of the mental [for some critical discussion, see Churchland (1984), pp. 7-22, and Appendix I of the present text]. Our concerns, however, lie squarely with attempts that posit nothing extra: nothing beyond the properties and organization of the material brain, body, and world. The goal is a fully materialistic story in which mindware emerges as nothing but the playing out of ordinary physical states and processes in the familiar physical world. Insofar as the mental is in any way special, according to these views, it is special because it depends on some particular and unusual ways in which ordinary physical stuff can be built, arranged, and organized.

    Views of this latter kind are broadly speaking monistic: that is to say, they posit only one basic kind of stuff (the material stuff) and attempt to explain the distinctive properties of mental phenomena in terms that are continuous with, or at least appropriately grounded in, our best understanding of the workings of the nonmental universe. A common, but still informative, comparison is with the once-lively (sic) debate between vitalists and nonvitalists. The vitalist held that living things were quite fundamentally different from the rest of inanimate nature, courtesy of a special extra force or ingredient (the "vital spark") that was missing elsewhere. This is itself a kind of dualism. The demonstration of the fundamental unity of organic and inorganic chemistry (and the absence, in that fundament, of anything resembling a vital spark) was thus a victory, as far as we can tell, for a kind of monism. The animate world, it seems, is the result of nothing but the fancy combination of the same kinds of ingredients and forces responsible for inanimate nature. As it was with the animate, so materialists (which is to say, nearly all those working in contemporary cognitive science, the present author included) believe it must be with the mental. The mental world, it is anticipated, must prove to depend on nothing but the fancy combination and organization of ordinary physical states and processes.

    Notice, then, the problem. The mental certainly seems special, unusual, and different. Indeed, as we saw, it is special, unusual, and different: thoughts give way to other thoughts and actions in a way that respects reasons: the thought that the forecast was sun (to adapt the famous but less upbeat example) causes me to apply sunscreen, to don a Panama hat, and to think "just another day in paradise." And there is a qualitative feel, a "something it is like" to have a certain kind of mental life: I experience the stabbings of pain, the stirrings of desire, the variety of tastes, colors, and sounds. It is the burden of materialism to somehow get to grips with these various special features in a way that is continuous with, or appropriately grounded in, the way we get to grips with the rest of the physical world: by some understanding of material structure, organization, and causal flow. This is a tall order, indeed. But, as Jerry Fodor is especially fond of pointing out, there is at least one good idea floating around, albeit one that targets just one of the two special properties just mentioned: reason-respecting flow. The idea, in a supercompressed nutshell, is that the power of a thought (e.g., that the forecast is sun) to cause further thoughts and actions (to apply sunscreen, to think "another day in paradise") is fully explained by what are broadly speaking structural properties of the system in which the thought occurs. By a structural property I here mean simply a physical or organizational property: something whose nature is explicable without invoking the specific thought-content involved.

    An example will help. Consider the way a pocket calculator outputs the sum of two numbers given a sequence of button pushings that we interpret as inputting "2" "+" "2." The calculator need not (and does not) understand anything about numbers for this trick to work. It is simply structured so that those button pushings will typically lead to the output "4" as surely as a river will typically find the path of least resistance down a mountain. It is just that in the former case, but not the latter, there has been a process of design such that the physical stuff became organized so that its physical unfoldings would reflect the arithmetical constraints governing sensible (arithmetic-respecting) transitions in number space.
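
    A minimal sketch may help fix the idea. The following Python fragment (my illustration; nothing here is a claim about how real calculators are engineered, which use binary adders) maps button pushings to a display token by brute structure, with no grasp of number anywhere in the mechanism:

        # A toy "calculator": button pushings in, display token out. The
        # mechanism just concatenates meaningless marks; it covers sums up
        # to 4. Illustrative sketch only.
        MARKS = {"0": "", "1": "|", "2": "||", "3": "|||", "4": "||||"}
        DISPLAY = {marks: digit for digit, marks in MARKS.items()}

        def press(buttons):
            """buttons: e.g., ["2", "+", "2"] -> the display token "4"."""
            tally = ""
            for b in buttons:
                if b != "+":
                    tally += MARKS[b]   # a purely shape-based move
            return DISPLAY[tally]       # read a token off the final shape

        print(press(["2", "+", "2"]))   # -> 4

    The unfoldings (here, mere string concatenations) have been set up so that they happen to track the arithmetical constraints; that, and not any inner appreciation of arithmetic, is why the outputs come out right.
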

    Natural selection and lifetime learning, to complete the (supercompressed) picture, are then imagined to have sculpted our brains so that certain structure-based physical unfoldings respect the constraints on sensible sequences of thoughts and sensible thought-action transitions. Recognition of the predator thus causes running, hiding, and thoughts of escape, whereas recognition of the food causes eating, vigilance, and thoughts of where to find more. Our whole reason-respecting mental life, so the story goes, is just the unfolding of what is, at bottom, a physical and structural story. Mindfulness is just matter, nicely orchestrated.

    (As to that other distinctive property, "qualitative feel," let's just say, and see Appendix II, that it's a problem. Maybe that too is just a property of matter, nicely orchestrated. But how the orchestration yields the property is in this case much less clear, even in outline. So we'll be looking where the light is.)

    In the next eight chapters, I shall expand and pursue that simple idea of mindware (selected aspects!) as matter, nicely orchestrated. The chase begins with a notion of mind as a kind of souped-up pocket calculator (mind as a familiar kind of computer, but built out of meat rather than silicon). It proceeds to the vision of mind as dependent on the operation of a radically different kind of computational device (the kind known as artificial neural networks). And it culminates in the contemporary (and contentious) research programs that highlight the complex interactions among brains, bodies, and environmental surroundings (work on robotics, artificial life, dynamics, and situated cognition).

    The narrative is, let it be said, biased. It reflects my own view of what we have learned in the past 30 or 40 years of cognitive scientific research. What we have learned, I suggest, is that there are many deeply different ways to put flesh onto that broad, materialistic framework, and that some once-promising incarnations face deep and unexpected difficulties. In particular, the simple notion of the brain as a kind of symbol-crunching computer is probably too simple, and too far removed from the neural and ecological realities of complex, time-critical interaction that sculpted animal minds. The story I tell is thus a story of (a kind of) inner symbol flight. But it is a story of progress, refinement, and renewal, not one of abandonment and decay. The sciences of the mind are, in fact, in a state of rude health, of exuberant flux. Time, then, to start the story, to seek the origins of mind in the whirr and buzz of well-orchestrated matter.

  • MEAT MACHINES

    Mindware as Software

    1.1 Sketches

    The computer scientist Marvin Minsky once described the human brain as a meat machine: no more, no less. It is, to be sure, an ugly phrase. But it is also a striking image, a compact expression of both the genuine scientific excitement and the rather gung-ho materialism that tended to characterize the early years of cognitive scientific research. Mindware, our thoughts, feelings, hopes, fears, beliefs, and intellect, is cast as nothing but the operation of the biological brain, the meat machine in our head.

    This notion of the brain as a meat machine is interesting, for it immediately invites us to focus not so much on the material (the meat) as on the machine: the way the material is organized and the kinds of operation it supports. The same machine (see Box 1.1) can, after all, often be made of iron, or steel, or tungsten, or whatever. What we confront is thus both a rejection of the idea of mind as immaterial spirit-stuff and an affirmation that mind is best studied from a kind of engineering perspective that reveals the nature of the machine that all that wet, white, gray, and sticky stuff happens to build.

    What exactly is meant by casting the brain as a machine, albeit one made out of meat? There exists a historical trend, to be sure, of trying to understand the workings of the brain by analogy with various currently fashionable technologies: the telegraph, the steam engine, and the telephone switchboard are all said to have had their day in the sun. But the "meat machine" phrase is intended, it should now be clear, to do more than hint at some rough analogy. For with regard to the very special class of machines known as computers, the claim is that the brain (and, by not unproblematic extension, the mind) actually is some such device. It is not that the brain is somehow like a computer: everything is like everything else in some respect or other. It is that neural tissues, synapses, cell assemblies, and all the rest are just nature's rather wet and sticky way of building a hunk of honest-to-God computing machinery. Mindware, it is then claimed, is found "in" the brain in just the way that software is found "in" the computing system that is running it.

    The attractions of such a view can hardly be overstated. It makes the mental special without making it ghostly. It makes the mental depend on the physical, but in a rather complex and (as we shall see) liberating way. And it provides a ready-made answer to a profound puzzle: how to get sensible, reason-respecting behavior out of a hunk of physical matter. To flesh out this idea of nonmysterious reason-respecting behavior, we next review some crucial developments[1] in the history (and prehistory) of artificial intelligence.

    [1] The next few paragraphs draw on Newell and Simon's (1976) discussion of the development of the Physical Symbol System Hypothesis (see Chapter 2 following), on John Haugeland's (1981a), and on Glymour, Ford, and Hayes' (1995).

    One key development was the appreciation of the power and scope of formal logics. A decent historical account of this development would take us too far afield, touching perhaps on the pioneering efforts in the seventeenth century by Pascal and Leibniz, as well as on the twentieth-century contributions of Boole, Frege, Russell, Whitehead, and others. A useful historical account can be found in Glymour, Ford, and Hayes (1995). The idea that shines through the history, however, is the idea of finding and describing "laws of reason," an idea whose clearest expression emerged first in the arena of formal logics. Formal logics are systems comprising sets of symbols, ways of joining the symbols so as to express complex propositions, and rules specifying how to legally derive new symbol complexes from old ones. The beauty of formal logics is that the steadfast application of the rules guarantees that you will never legally infer a false conclusion from true premises, even if you have no idea what, if anything, the strings of symbols actually mean. Just follow the rules and truth will be preserved. The situation is thus a little (just a little) like a person, incompetent in practical matters, who is nonetheless able to successfully build a cabinet or bookshelf by following written instructions for the manipulation of a set of preprovided pieces. Such building behavior can look as if it is rooted in a deep appreciation of the principles and laws of woodworking: but in fact, the person is just blindly making the moves allowed or dictated by the instruction set.

    Formal logics show us how to preserve at least one kind of semantic (meaning-involving: see Box 1.2) property without relying on anyone's actually appreciating the meanings (if any) of the symbol strings involved. The seemingly ghostly and ephemeral world of meanings and logical implications is respected, and in a certain sense recreated, in a realm whose operating procedures do not rely on meanings at all! It is recreated as a realm of marks or "tokens," recognized by their physical ("syntactic") characteristics alone and manipulated according to rules that refer only to those physical characteristics (characteristics such as the shape of the symbol; see Box 1.2). As Newell and Simon comment:

    Logic . . . was a game played with meaningless tokens according to certain purely syntactic rules. Thus progress was first made by walking away from all that seemed relevant to meaning and human symbols. (Newell and Simon, 1976, p. 43)

    Or, to put it in the more famous words of the philosopher John Haugeland:

    If you take care of the syntax, the semantics will take care of itself. (Haugeland, 1981a, p. 23, original emphasis)

    This shift from meaning to form (from semantics to syntax, if you will) also begins to suggest an attractive liberalism concerning actual physical structure. For what matters, as far as the identity of these formal systems is concerned, is not, e.g., the precise shape of the symbol for "and." The shape could be "AND" or "and" or "&" or "∧" or whatever. All that matters is that the shape is used consistently and that the rules are set up so as to specify how to treat strings of symbols joined by that shape: to allow, for example, the derivation of "A" from the string "A and B." Logics are thus first-rate examples of formal systems in the sense of Haugeland (1981a, 1997). They are systems whose essence lies not in the precise physical details but in the web of legal moves and transitions. Most games, Haugeland notes, are formal systems in exactly this sense. You can play chess on a board of wood or marble, using pieces shaped like animals, movie stars, or the crew of the starship Enterprise. You could even, Haugeland suggests, play chess using helicopters as pieces and a grid of helipads on top of tall buildings as the board. All that matters is again the web of legal moves and the physical distinguishability of the tokens.

    Thinking about formal systems thus liberates us in two very powerful ways at a single stroke. Semantic relations (such as truth preservation: if "A and B" is true, "A" is true) are seen to be respected in virtue of procedures that make no intrinsic reference to meanings. And the specific physical details of any such system are seen to be unimportant, since what matters is the golden web of moves and transitions. Semantics is thus made unmysterious without making it brute physical. Who says you can't have your cake and eat it?
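
    The point can be made concrete with a micro formal system. The sketch below (Python; the rule set and the premise are invented examples, not anything from the text) implements a single rule, and-elimination, stated purely in terms of token shapes:

        # A micro formal system. The lone rule mentions only the shape of a
        # string (the token " and " occurring within it); meanings are never
        # consulted. Invented illustration.
        def and_elimination(s):
            """From a string of shape 'X and Y', legally derive 'X' and 'Y'."""
            left, sep, right = s.partition(" and ")
            return [left, right] if sep else []

        derived = {"it is raining and the streets are wet"}
        for premise in list(derived):
            derived.update(and_elimination(premise))

        print(derived)
        # e.g.: {'it is raining and the streets are wet', 'it is raining',
        #        'the streets are wet'}  (set order varies)

    If the premise is true, everything derived is true as well, although the procedure knows nothing of rain or streets; and since only the web of legal moves matters, swapping the token " and " for "&" throughout would leave the system essentially unchanged.
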

    The next big development was the formalization (Turing, 1936) of the notion of computation itself. Turing's work, which predates the development of the digital computer, introduced the foundational notion of (what has since come to be known as) the Turing machine. This is an imaginary device consisting of an infinite tape, a simple processor (a "finite state machine"), and a read/write head. The tape acts as data store, using some fixed set of symbols. The read/write head can read a symbol off the tape, move itself one square backward or forward on the tape, and write onto the tape. The finite state machine (a kind of central processor) has enough memory to recall what symbol was just read and what state it (the finite state machine) was in. These two facts together determine the next action, which is carried out by the read/write head, and determine also the next state of the finite state machine. What Turing showed was that some such device, performing a sequence of simple computations governed by the symbols on the tape, could compute the answer to any sufficiently well-specified problem (see Box 1.3).
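
    Such a device is easily sketched in code. The interpreter below is a minimal illustration in Python (the tape is grown on demand to mimic an infinite tape, and the machine table, which increments a binary number, is an invented example):

        # A minimal Turing machine: a finite-state controller plus a
        # read/write head stepping along a tape of symbols.
        def run(tape, state, table, halt="HALT"):
            cells, head = list(tape), len(tape) - 1    # start at rightmost cell
            while state != halt:
                write, move, state = table[(state, cells[head])]  # one step
                cells[head] = write
                head += {"L": -1, "R": +1}[move]
                if head < 0:                           # grow the "infinite" tape
                    cells.insert(0, "_")
                    head = 0
            return "".join(cells).strip("_")

        INCREMENT = {  # (state, symbol read) -> (write, move, next state)
            ("carry", "1"): ("0", "L", "carry"),       # 1 plus carry: write 0, carry on
            ("carry", "0"): ("1", "L", "done"),        # 0 plus carry: write 1, finished
            ("carry", "_"): ("1", "R", "HALT"),        # carry ran off the left edge
            ("done",  "0"): ("0", "L", "done"),        # sweep left, copying
            ("done",  "1"): ("1", "L", "done"),
            ("done",  "_"): ("_", "R", "HALT"),
        }

        print(run("1011", "carry", INCREMENT))         # -> 1100

    Everything the device does is fixed by the finite table of (state, symbol) pairs; what Turing showed, in effect, is that one such table can be built that reads and executes arbitrary other tables stored on its own tape.
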

    We thus confront a quite marvelous confluence of ideas. Turing's work clearly suggested the notion of a physical machine whose syntax-following properties would enable it to solve any well-specified problem. Set alongside the earlier work on logics and formal systems, this amounted to nothing less than

    . . . the emergence of a new level of analysis, independent of physics yet mechanistic in spirit . . . a science of structure and function divorced from material substance. (Pylyshyn, 1986, p. 68)

    Thus was classical cognitive science conceived. The vision finally became flesh, however, only because of a third (and final) innovation: the actual construction of general purpose electronic computing machinery and the development of flexible, high-level programming techniques. The bedrock machinery (the digital computer) was designed by John von Neumann in the 1940s, and with its advent all the pieces seemed to fall finally into place. For it was now clear that once realized in the physical medium of an electronic computer, a formal system could run on its own, without a human being sitting there deciding how and when to apply the rules to initiate the legal transformations. The well-programmed electronic computer, as John Haugeland nicely points out, is really just an automatic ("self-moving") formal system:

    It is like a chess set that sits there and plays chess by itself, without any intervention from the players, or an automatic formal system that writes out its own proofs and theorems without any help from the mathematician. (Haugeland, 1981a, p. 10; also Haugeland, 1997, pp. 11-12)

    Of course, the machine needs a program. And programs were, in those days (but see Chapter 4), written by good old-fashioned human beings. But once the program was in place, and the power on, the machine took care of the rest. The transitions between legal syntactic states (states that also, under interpretation, meant something) no longer required a human operator. The physical world suddenly included clear, nonevolved, nonorganic examples of what Daniel Dennett would later dub "syntactic engines": quasiautonomous systems whose sheer physical makeup ensured (under interpretation) some kind of ongoing reason-respecting behavior. No wonder the early researchers were jubilant! Newell and Simon nicely capture the mood:

    It is not my aim to surprise or shock you. . . . But the simplest way I can summarize is to say that there are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until, in a visible future, the range of problems they can handle will be coextensive with the range to which the human mind has been applied. (Newell and Simon, 1958, p. 6, quoted in Dreyfus and Dreyfus, 1990, p. 312)

    This jubilant mood deepened as advanced programming techniques[2] brought forth impressive problem-solving displays, while the broader theoretical and philosophical implications (see Box 1.4) of these early successes could hardly have been more striking. The once-mysterious realm of mindware (represented, admittedly, by just two of its many denizens: truth preservation and abstract problem solving) looked ripe for conquest and understanding. Mind was not ghostly stuff, but the operation of a formal, computational system implemented in the meatware of the brain. Such is the heart of the matter. Mindware, it was claimed, is to the neural meat machine as software is to the computer. The brain may be the standard (local, earthly, biological) implementation, but cognition is a program-level thing.

    [2] For example, list-processing languages, as pioneered in Newell and Simon's Logic Theorist program in 1956 and perfected in McCarthy's LISP around 1960, encouraged the use of more complex "recursive programming" strategies in which symbols point to data structures that contain symbols pointing to further data structures and so on. They also made full use of the fact that the same electronic memory could store both program and data, a feature that allowed programs to be modified and operated on in the same ways as data. LISP even boasted a universal function, EVAL, that made it as powerful, modulo finite memory limitations, as a Universal Turing Machine.

    Mind is thus ghostly enough to float fairly free of the gory neuroscientific details. But it is not so ghostly as to escape the nets of more abstract (formal, computational) scientific investigation. This is an appealing story. But is it correct? Let's worry.

    1.2 Discussion

    (A brief note of reassurance: many of the topics treated below recur again and again in subsequent chapters. At this point, we lack much of the detailed background needed to really do them justice. But it is time to test the waters.)

    A. WHY TREAT THOUGHT AS COMPUTATION?

    Why treat thought as computation? The principal reason (apart from the fact that it seems to work!) is that thinkers are physical devices whose behavior patterns are reason respecting. Thinkers act in ways that are usefully understood as sensitively guided by reasons, ideas, and beliefs. Electronic computing devices show us one way in which this strange "dual profile" (of physical substance and reason-respecting behavior) can actually come about.

    The notion of reason-respecting behavior, however, bears immediate amplification. A nice example of this kind of behavior is given by Zenon Pylyshyn. Pylyshyn (1986) describes the case of the pedestrian who witnesses a car crash, runs to a telephone, and punches out 911. We could, as Pylyshyn notes, try to explain this behavior by telling a purely physical story (maybe involving specific neurons, or even quantum events, whatever). But such a story, Pylyshyn argues, will not help us understand the behavior in its reason-guided aspects. For example, suppose we ask: what would happen if the phone was dead, or if it was a dial phone instead of a touch-tone phone, or if the accident occurred in England instead of the United States? The neural story underlying the behavioral response will differ widely if the agent dials 999 (the emergency code in England) and not 911, or must run to find a working phone. Yet commonsense psychological talk makes sense of all these options at a stroke by depicting the agent as seeing a crash and wanting to get help.

    What we need, Pylyshyn powerfully suggests, is a scientific story that remains in touch with this more abstract and reason-involving characterization. And the simplest way to provide one is to imagine that the agent's brain contains states ("symbols") that represent the event as a car crash and that the computational state-transitions occurring inside the system (realized as physical events in the brain) then lead to new sets of states (more symbols) whose proper interpretation is, e.g., "seek help," "find a telephone," and so on. The interpretations thus glue inner states to sensible real-world behaviors. Cognizers, it is claimed, "instantiate . . . representation physically as cognitive codes and . . . their behavior is a causal consequence of operations carried out on those codes" (Pylyshyn, 1986, p. xiii).

    The same argument can be found in, e.g., Fodor (1987), couched as a point about content-determined transitions in trains of thought, as when the thought "it is raining" leads to the thought "let's go indoors." This, for Fodor (but see Chapters 4 onward), is the essence of human rationality. How is such rationality mechanically possible? A good empirical hypothesis, Fodor suggests, is that there are neural symbols (inner states apt for interpretation) that mean, e.g., "it is raining" and whose physical properties lead in context to the generation of other symbols that mean "let's go indoors." If that is how the brain works, then the brain is indeed a computer in exactly the sense displayed earlier. And if such were the case, then the mystery concerning reason-guided (content-determined) transitions in thought is resolved:

    If the mind is a sort of computer, we begin to see how . . . there could be non-arbitrary content-relations among causally related thoughts. (Fodor, 1987, p. 19)
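
    A crude sketch shows the shape of the proposal (Python; the two-rule "mind" and all token names are my invention, not Fodor's). The transitions are keyed to the shapes of inner tokens alone, yet under the intended interpretation they come out looking reason-respecting:

        # A toy syntax-driven engine. Transitions fire on the mere shape of
        # an inner token; the interpretation table is what makes them look
        # rational. Invented illustration only.
        TRANSITIONS = {
            "TOK_RAIN": "TOK_GO_INDOORS",
            "TOK_CRASH_SEEN": "TOK_FIND_PHONE",
        }

        INTERPRETATION = {
            "TOK_RAIN": "it is raining",
            "TOK_GO_INDOORS": "let's go indoors",
            "TOK_CRASH_SEEN": "there has been a car crash",
            "TOK_FIND_PHONE": "find a telephone and summon help",
        }

        def step(token):
            return TRANSITIONS[token]        # purely shape-keyed lookup

        t = "TOK_RAIN"
        print(INTERPRETATION[t], "->", INTERPRETATION[step(t)])
        # it is raining -> let's go indoors

    Nothing in step consults what the tokens mean; the sensible content-relations hold because the table was built (by design here; by selection and learning in the biological case, on the story just told) so that shape-driven transitions mirror them.
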

    Such arguments aim

    to show that the m

    ind must be understood as a kind of

    com

    puter implem

    ented in the wetw

    are of the brain, on

    pain of failing empirically

    to account for rational transitions am

    ong thoughts. Reason-guided action, it seem

    s,

    makes good scientific sen

    se if we im

    agine a neu

    ral econom

    y organized as a syntax-

    driven engine that tracks the shape of sem

    antic space (see, e.g., Fodor, 1987, pp. 19-20).

    The mindw

    arelsoftware equation is as beguiling as it is, at tim

    es, distortive. One

    imm

    ediate concern

    is that all this emphasis o

    n algorithm

    s, symbols, a

    nd programs

    tends to promote a so

    mew

    hat misleading vision of crisp level distinctions in nature.

    The impact of the theoretical independence of algorithm

    s from hardw

    are is an ar-

    tifact of the long-term n

    eglect of issues concerning real-w

    orld action taking and

    the time co

    urse of co

    mputations. For an

    algorithm o

    r program as su

    ch is just a se- quence of steps w

    ith no

    inbuilt relation to real-world tim

    ing. Such timing depends

    crucially o

    n the particular w

    ay in which the algorithm

    is implem

    ented on a real

    device. Given this basic fact, the theoretical independence of algorithm

    from hard-

    ware is u

    nlikely to have made m

    uch of an

    impact o

    n N

    ature. We m

    ust expect to

    find biological com

    putational strategies closely tailored to getting useful real-tim

    e results from

    available, slow

    , wetw

    are com

    ponents. In practice, it is thus unlikely

    that we will be able to fully appreciate the form

    al organization of n

    atural systems

    without so

    me quite detailed reference to the n

    ature of the neu

    ral hardware that

    provides the supporting implem

    entation. In general, attention to the nature of real

    biological hardware looks likely to provide both im

    portant clues about and con-

    straints on

    the kinds of com

    putational strategy used by real brains. This topic is

    explored in more depth in Chapters 4 through 6.

Furthermore, the claim that mindware is software is, to say the least, merely schematic. For the space of possible types of explanatory story, all broadly computational (but see Box 1.5), is very large indeed. The comments by Fodor and by Pylyshyn do, it is true, suggest a rather specific kind of computational story (one pursued in detail in the next chapter). But the bare explanatory schema, in which semantic patterns emerge from an underlying syntactic, computational organization, covers a staggeringly wide range of cases. The range includes, for example, standard artificial intelligence (A.I.) approaches involving symbols and rules, "connectionist" approaches that mimic something of the behavior of neural assemblies (see Chapter 4), and even Heath Robinsonesque devices involving liquids, pulleys, and analog computations. Taken very liberally, the commitment to understanding mind as the operation of a syntactic engine can amount to little more than a bare assertion of physicalism: the denial of spirit-stuff.3

[Box 1.5: WHAT IS COMPUTATION? (box contents not reproduced in this transcript)]

3. Given our notion of computation (see Box 1.5), the claim is just a little stronger, since it also requires the presence of systematically interpretable inner states, i.e., internal representations.

To make matters worse, a variety of different computational stories may be told about one and the same physical device. Depending on the grain of analysis used, a single device may be depicted as carrying out a complex parallel search or as serially transforming an input x into an output y. Clearly, what grain we choose will be determined by what questions we hope to answer. Seeing the transition as involving a nested episode of parallel search may help explain specific error profiles or why certain problems take longer to solve than others, yet treating the process as a simple unstructured transformation of x to y may be the best choice for understanding the larger scale organization of the system. There will thus be a constant interaction between our choice of explanatory targets and our choice of grain and level of computational description. In general, there seems little reason to expect a single type or level of description to do all the work we require. Explaining the relative speed at which we solve different problems, and the kinds of interference effects we experience when trying to solve several problems at once (e.g., remembering two closely similar telephone numbers), may well require explanations that involve very specific details about how inner representations are stored and structured, whereas merely accounting for, e.g., the bare facts about rational transitions between content-related thoughts may require only a coarser grained computational gloss. [It is for precisely this reason that connectionists (see Chapter 4) describe themselves as exploring the microstructure of cognition.] The explanatory aspirations of psychology and cognitive science, it seems clear, are sufficiently wide and various as to require the provision of explanations at a variety of different levels of grain and type.

In sum, the image of mindware as software gains its most fundamental appeal from the need to accommodate reason-guided transitions in a world of merely physical flux. At the most schematic level, this equation of mindware and software is useful and revealing. But we should not be misled into believing either (1) that "software" names a single, clearly understood level of neural organization or (2) that the equation of mindware and software provides any deep warrant for cognitive science to ignore facts about the biological brain.

C. MIMICKING, MODELING, AND BEHAVIOR

Computer programs, it often seems, offer only shallow and brittle simulacra of the kind of understanding that humans (and other animals) manage to display. Are these just teething troubles, or do the repeated shortfalls indicate some fundamental problem with the computational approach itself? The worry is a good one. There are, alas, all too many ways in which a given computer program may merely mimic, but not illuminate, various aspects of our mental life. There is, for example, a symbolic A.I. program that does a very fine job of mimicking the verbal responses of a paranoid schizophrenic. The program ("PARRY," Colby, 1975; Boden, 1977, Chapter 5) uses tricks such as scanning input sentences for key words (such as "mother") and responding with canned, defensive outbursts. It is capable, at times, of fooling experienced psychoanalysts. But no one would claim that it is a useful psychological model of paranoid schizophrenia, still less that it is (when up and running on a computer) a paranoid schizophrenic itself!
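The keyword trick is easy to make vivid. Here is a deliberately crude sketch in the same spirit as the text's description of PARRY; it is emphatically not Colby's actual program, and the trigger words and replies are invented.

```python
# A minimal sketch of keyword scanning plus canned outbursts, in the
# style the text ascribes to PARRY. Purely illustrative.

CANNED = {
    "mother": "Don't bring my mother into this.",
    "police": "The police? They have always watched me.",
    "why":    "You ask too many questions.",
}
DEFAULT = "I don't trust you."

def respond(sentence):
    """Scan the input for trigger words; fire the first canned reply."""
    for word in sentence.lower().split():
        reply = CANNED.get(word.strip("?.,!"))
        if reply:
            return reply
    return DEFAULT

print(respond("Tell me about your mother."))  # canned, defensive output
```

A handful of such tricks can sustain surprisingly lifelike exchanges, which is exactly why surface performance is so poor a guide to inner organization.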

Or consider a chess computer such as Deep Blue. Deep Blue, although capable of outstanding play, relies heavily on the brute-force technique of using its superfast computing resources to examine all potential outcomes for up to seven moves ahead. This strategy differs markedly from that of human grandmasters, who seem to rely much more on stored knowledge and skilled pattern recognition (see Chapter 4). Yet, viewed from a certain height, Deep Blue is not a bad simulation of human chess competence. Deep Blue and the human grandmaster are, after all, more likely to agree on a particular move (as a response to a given board state) than are the human grandmaster and the human novice! At the level of gross input-output profiles, the human grandmaster and Deep Blue are thus clearly similar (not identical, as the difference in underlying strategy, brute force versus pattern recognition, sometimes shines through). Yet once again, it is hard to avoid the impression that all that the machine is achieving is top-level mimicking: that there is something amiss with the underlying strategy that either renders it unfit as a substrate for a real intelligence, or else reveals it as a kind of intelligence very alien to our own.
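The brute-force strategy itself has a simple shape. The sketch below shows fixed-depth exhaustive look-ahead (the familiar minimax pattern) applied to a trivial stand-in game; Deep Blue's real search was enormously more sophisticated, so treat this only as the bare outline of the technique.

```python
# Fixed-depth exhaustive look-ahead: score every line of play to a
# given depth and back the scores up the tree. Only the shape of the
# search matters here; the "game" is a trivial stand-in.

def minimax(state, depth, maximizing, moves, evaluate, apply_move):
    """Examine every outcome to `depth` plies and back up the scores."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)
    scores = [minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, evaluate, apply_move) for m in options]
    return max(scores) if maximizing else min(scores)

# Toy stand-in game: a state is a number, a move adds 1, 2, or 3,
# and the evaluation simply favors larger numbers.
best = max([1, 2, 3],
           key=lambda m: minimax(m, 5, False,
                                 lambda s: [1, 2, 3],   # legal moves
                                 lambda s: s,           # evaluation
                                 lambda s, mv: s + mv)) # apply a move
print(best)  # the move chosen by sheer exhaustive enumeration
```

Every line of play to the given depth is examined, however unpromising: no stored knowledge, no pattern recognition, just enumeration.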

This last caveat is important. For we must be careful to distinguish the question of whether such and such a program constitutes a good model of human intelligence from the question of whether the program (when up and running) displays some kind of real, but perhaps nonhuman, form of intelligence and understanding. PARRY and Deep Blue, one feels, fail on both counts. Clearly, neither constitutes a faithful psychological model of the inner states that underlie human performance. And something about the basic style of these two computational solutions (canned sentences activated by key words, and brute-force look-ahead) even makes us uneasy with the (otherwise charitable) thought that they might nonetheless display real, albeit alien, kinds of intelligence and awareness.

How, though, are we to decide what kinds of computational substructure might be appropriate? Lacking, as we must, first-person knowledge of what (if anything) it is like to be PARRY or Deep Blue, we have only a few options. We could insist that all real thinkers must solve problems using exactly the same kinds of computational strategy as human brains (too anthropocentric, surely). We could hope, optimistically, for some future scientific understanding of the fundamentals of cognition that will allow us to recognize (on broad theoretical grounds) the shape of alternative, but genuine, ways in which various computational organizations might support cognition. Or we could look to the gross behavior of the systems in question, insisting, for example, on a broad and flexible range of responses to a multiplicity of environmental demands and situations. Deep Blue and PARRY would then fail to make the grade not merely because their inner organizations looked alien to us (an ethically dangerous move) but because the behavioral repertoire they support is too limited. Deep Blue cannot recognize a mate (well, only a checkmate!), nor cook an omelette. PARRY cannot decide to become a hermit or take up the harmonica, and so on.

This move to behavior is not without its own problems and dangers, as we will see in Chapter 3. But it should now be clearer why some influential theorists (especially Turing, 1950) argued that a sufficient degree of behavioral success should be allowed to settle the issue and to establish once and for all that a candidate system is a genuine thinker (albeit one whose inner workings may differ greatly from our own). Turing proposed a test (now known as the Turing Test) that involved a human interrogator trying to spot (from verbal responses) whether a hidden conversant was a human or a machine. Any system capable of fooling the interrogator in ongoing, open-ended conversation, Turing proposed, should be counted as an intelligent agent. Sustained, top-level verbal behavior, if this is right, is a sufficient test for the presence of real intelligence. The Turing Test invites consideration of a wealth of issues that we cannot dwell on here (several surface in Chapter 3). It may be, for example, that Turing's original restriction to a verbal test leaves too much scope for "tricks and cheats" and that a better test would focus more heavily on real-world activity (see Harnad, 1994).

It thus remains unclear whether we should allow that surface behaviors (however complex) are sufficient to distinguish (beyond all theoretical doubt) real thinking from mere mimicry. Practically speaking, however, it seems less morally dangerous to allow behavioral profiles to lead the way (imagine that it is discovered that you and you alone have a mutant brain that uses brute-force, Deep Blue-like strategies where others use quite different techniques: has science discovered that you are not a conscious, thinking, reasoning being after all?).

D. CONSCIOUSNESS, INFORMATION, AND PIZZA

    "If on

    e had to describe the deepest motivation for m

    aterialism, o

    ne m

    ight say that it is sim

    ply a terror of consciousness" (Searle, 1992, p. 55). O

    h dear. If I had my

    way, I w

    ould give in to the terror and just n

    ot mention co

    nsciousness at all. But it

    is worth a w

    ord o

    r two n

    ow

    (and see Appendix 11) for tw

    o reasons. O

    ne is because it is all too easy to see the facts about co

    nscious experience (the "seco

    nd aspect of the problem

    of m

    indfulness" described in the Introduction) as co

    nstituting a

    knock-down refutation of the strongest v

    ersion of the com

    putationalist hypothe- sis. The other is because co

    nsideration of these issues helps to highlight im

    portant differences betw

    een informational a

    nd "merely physical" phenom

    ena. So here goes. H

    ow co

    uld a device made of silicon be co

    nscious? H

    OW

    could it feel pain, joy,

    fear, pleasure, and foreboding? It certainly seems u

    nlikely that such ex

    otic capac- ities should flourish in su

    ch an u

    nusu

    al (silicon) setting. But a mom

    ent's reflec- tion should co

    nvince you that it is equally am

    azing that such capacities should

    show up in, of all things, m

    eat (for a sustained reflection o

    n this them

    e, see the skit in Section 1.3). It is true, of co

    urse, that the o

    nly known cases of co

    nscious

  • 22

    CHAPTER i

    /

    MEAT M

    ACHINES

    awaren

    ess on

    this planet are cases of consciousness in carbon-based o

    rganic life form

    s. But this fact is rendered som

    ewhat less im

    pressive on

    ce we realize that all

    earthly life forms share a c

    om

    mo

    n chem

    ical ancestry a

    nd lines of descent. In any case, the question, at least as far as the central thesis of the present chapter is co

    n-

    cerned, is n

    ot whether o

    ur local carbon-based o

    rganic structure is crucial to all

    possible versions of co

    nscious aw

    areness (though it so

    unds anthropocentric in the

    extreme to believe that it is), but w

    hether meeting a certain abstract co

    mputational

    specification is eno

    ugh to guarantee such co

    nscious

    awaren

    ess. Thus even

    the philosopher John Searle, w

    ho is famous for his attacks o

    n the equation of m

    ind- w

    are with softw

    are, allows that

    "consciousness m

    ight have been evolved in system

    s that are n

    ot carbon-based, but use so

    me other so

    rt of chemistry altogether" (Searle,

    1992, p. 91). What is at issue, it is w

    orth repeating, is n

    ot whether other kinds of

    stuff and substance m

    ight support con

    scious awaren

    ess but whether the fact that

    a system exhibits a certain co

    mputational profile is en

    ough (is

    "sufficient") to en

    -

    sure that it has thoughts, feelings, a

    nd conscious experiences. For it is cru

    cial to the strongest v

    ersion of the com

    putationalist hypothesis that where o

    ur m

    ental life is co

    ncern

    ed, the stuff doesn't matter. That is to say, m

    ental states depend solely on

    the program-level, co

    mputational profile of the system

    . If con

    scious awaren

    ess were

    to turn out to depend m

    uch m

    ore

    closely than this on

    the nature of the actual

    physical stuff out of w

    hich the system is built, then this global thesis w

    ould be ei-

    ther false or (depending o

    n the details) sev

    erely com

    promised.

Matters are complicated by the fact that the term "conscious awareness" is something of a weasel word, covering a variety of different phenomena. Some use it to mean the high-level capacity to reflect on the contents of one's own thoughts. Others have no more in mind than the distinction between being awake and being asleep! But the relevant sense for the present discussion (see Block, 1997; Chalmers, 1996) is the one in which to be conscious is to be a subject of experience: to feel the toothache, to taste the bananas, to smell the croissant, and so on. To experience some x is thus to do more than just register, recognize, or respond to x. Electronic detectors can register the presence of Semtex and other plastic explosives. But, I hope, they have no experiences of so doing. A sniffer dog, however, may be a different kettle of fish. Perhaps the dog, like us, is a subject of experience; a haven of what philosophers call "qualia": the qualitative sensations that make life rich, interesting, or intolerable. Some theorists (notably John Searle) believe that computational accounts fall down at precisely this point, and that as far as we can tell it is the implementation, not the program, that explains the presence of such qualitative awareness. Searle's direct attack on computationalism is treated in the next chapter. For now, let us just look at two popular, but flawed, reasons for endorsing such a skeptical conclusion.

The first is the observation that "simulation is not the same as instantiation." A rainstorm, simulated in a computational medium, does not make anything actually wet. Likewise, it may seem obvious that a simulation, in a computational medium, of the brain states involved in a bout of black depression will not add one single iota (thank heaven) to the sum of real sadness in the world.

The second worry (related to, but not identical to, the first) is that many feelings and emotions look to have a clear chemical or hormonal basis and hence (hence?) may be resistant to reproduction in any merely electronic medium. Sure, a silicon-based agent can play chess and stack crates, but can it get drunk, get an adrenaline high, experience the effects of ecstasy and acid, and so on?

The (genuine) intuitive appeal of these considerations notwithstanding, they by no means constitute the knock-down arguments they may at first appear. For everything here depends on what kind of phenomenon consciousness turns out to be. Thus suppose the skeptic argues as follows: "even if you get the overall inner computational profile just right, and the system behaves just like you and I, it will still be lacking the inner baths of chemicals, hormones, and neurotransmitters, etc., that flood our brains and bodies. Maybe without these all is darkness within; it just looks like the 'agent' has feelings, emotions, etc., but really it is just [what Haugeland (1981a) terms] a 'hollow shell.'" This possibility is vividly expressed in John Searle's example of the person who, hoping to cure a degenerative brain disease, allows parts of her brain to be gradually replaced by silicon chips. The chips preserve the input-output functions of the real brain components. One logical possibility here, Searle suggests, is that "as the silicon is progressively implanted into your dwindling brain, you find that the area of your conscious experience is shrinking, but that this shows no effect on your external behavior" (Searle, 1992, p. 66). In this scenario (which is merely one of several that Searle considers), your actions and words continue to be generated as usual. Your loved ones are glad that the operation is a success! But from the inside, you experience a growing darkness until, one day, nothing is left. There is no consciousness there. You are a zombie.

The imaginary case is problematic, to say the least. It is not even clear that we here confront a genuine logical possibility. [For detailed discussion see Chalmers (1996) and Dennett (1991a); just look up zombies in the indexes!] Certainly the alternative scenario in which you continue your conscious mental life with no ill effects from the silicon surgery strikes many cognitive scientists (myself included) as the more plausible outcome. But the "shrinking consciousness" nightmare does help to focus our attention on the right question. The question is, just what is the role of all the hormones, chemicals, and organic matter that build normal human brains?

There are two very different possibilities here and, so far, no one knows which is correct. One is that the chemicals, etc. affect our conscious experiences only by affecting the way information flows and is processed in the brain. If that were the case, the same kinds of modulation may be achieved in other media by other means. Simplistically, if some chemical's effect is, e.g., to speed up the processing in some areas, slow it down in others, and allow more information leakage between adjacent sites, then perhaps the same effect may be achieved in a purely electronic medium, by some series of modulations and modifications of current flow. Mind-altering "drugs," for silicon-based thinkers, may thus take the form of black-market software packages: packages that temporarily induce a new pattern of flow and functionality in the old hardware.
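This first possibility admits of a toy rendering. In the hypothetical sketch below (my illustration; nothing here is drawn from actual neuroscience), a "drug" is just a pair of parameters that reshape how information flows through otherwise unaltered machinery.

```python
# Toy illustration of "drugs as software": a modulator changes how
# information flows (gain, leakage between neighboring units) without
# touching the hardware at all. Entirely hypothetical.

def process(signal, gain=1.0, leak=0.0):
    """Each unit scales its input; `leak` mixes in a neighbor's output."""
    out = [gain * x for x in signal]
    return [x + leak * out[(i + 1) % len(out)] for i, x in enumerate(out)]

sober   = process([1.0, 2.0, 3.0])                       # baseline flow
altered = process([1.0, 2.0, 3.0], gain=1.5, leak=0.3)   # "black-market package"
print(sober, altered)  # same machinery, new pattern of flow
```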

There remains, however, a second possibility: perhaps the experienced nature of our mental life is not (or is not just) a function of the flow of information. Perhaps it is to some degree a direct effect of some still-to-be-discovered physical cause or even a kind of basic property of some types of matter (for extended discussion of these and other possibilities, see Chalmers, 1996). If this were true, then getting the information-processing profile exactly right would still fail to guarantee the presence of conscious experience.

The frog at the bottom of the beer glass is thus revealed. The bedrock, unsolved problem is whether conscious awareness is an informational phenomenon. Consider the difference. A lunch order is certainly an informational phenomenon. You can phone it, fax it, E-mail it; whatever the medium, it is the same lunch order. But no one ever faxes you your lunch. There is, of course, the infamous Internet Pizza Server. You specify size, consistency, and toppings and await the on-screen arrival of the feast. But as James Gleick recently commented, "By the time a heavily engineered software engine delivers the final product, you begin to suspect that they've actually forgotten the difference between a pizza and a picture of a pizza" (Gleick, 1995, p. 44). This, indeed, is Searle's accusation in a nutshell. Searle believes that the conscious mind, like pizza, just ain't an informational phenomenon. The stuff, like the topping, really counts. This could be the case, notice, even if many of the other central characteristics of mindware reward an understanding that is indeed more informational than physical. Fodor's focus on reason-guided state transitions, for example, is especially well designed to focus attention away from qualitative experience and onto capacities (such as deciding to stay indoors when it is raining) that can be visibly guaranteed once a suitable formal, functional profile is fixed.

We are now eyeball to eyeball with the frog. To the extent that mind is an informational phenomenon, we may be confident that a good enough computational simulation will yield an actual instance of mindfulness. A good simulation of a calculator is an instance of a calculator. It adds, subtracts, does all the things we expect a calculator to do. Maybe it even follows the same hidden procedures as the original calculator, in which case we have what Pylyshyn (1986) terms "strong equivalence": equivalence at the level of an underlying program. If a phenomenon is informational, strong equivalence is surely sufficient4 to guarantee that we confront not just a model (simulation) of something, but a new exemplar (instantiation) of that very thing. For noninformational phenomena, such as "being a pizza," the rules are different, and the flesh comes into its own. Is consciousness like calculation, or is it more like pizza? The jury is still out.

4. Sufficient, but probably not necessary. x is sufficient for y if, when x obtains, y always follows. Being a banana is thus a sufficient condition for being a fruit. x is necessary for y if, should x fail to obtain, y cannot be the case. Being a banana is thus not a necessary condition for being a fruit; being an apple will do just as well.
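The calculator point can even be run. Assuming that calculation is a purely informational phenomenon, the "simulation" below is not merely a picture of addition; executed, it actually adds.

```python
# A "simulated" calculator that thereby is a calculator: for an
# informational phenomenon, a good enough simulation is simply
# another instance of the thing simulated. Minimal sketch.

def simulated_calculator(expression):
    """Evaluate a string of the form 'a + b' or 'a - b'."""
    a, op, b = expression.split()
    a, b = int(a), int(b)
    return a + b if op == "+" else a - b

print(simulated_calculator("2 + 2"))  # 4: real addition, not a picture of it
```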

1.3 A Diversion

[This is extracted from a story by Terry Bisson called "Alien/Nation," first published in Omni (1991). Reproduced by kind permission of the author.]

    "They're made o

    ut of meat."

    "Meat?"

    "Meat. They're m

    ade out of m

    eat." "M

    eat?" "There's n

    o doubt about it. W

    e picked several from

    different parts of the planet, took them

    aboard ou

    r recon v

    essels, probed them all the w

    ay through. They're co

    mpletely m

    eat." "That's im

    possible. What about the radio signals? The m

    essages to the stars." "They u

    se the radio waves to talk, but the signals don't co

    me from

    them. The

    signals com

    e from m

    achines." "So w

    ho made the m

    achines? That's who w

    e want to co

    ntact." "They m

    ade the machines. That's w

    hat I'm trying to tell you. M

    eat made the

    machines."

    "That's ridiculous. How

    can m

    eat make a m

    achine? You're asking m

    e to be- lieve in sentient m

    eat." " I , m

    not asking you, I'm

    telling you. These creatures are the only sentient race

    in the sector and they're made o

    ut of meat."

    "Maybe they're like the O

    rfolei. You know

    , a carbon-based intelligence that goes through a m

    eat stage." "N

    ope. They're born meat and they die m

    eat. We studied them

    for several of

    their life spans, which didn't take too long. D

    o you have any idea of the life span of m

    eat?" "Spare m

    e. Okay, m

    aybe they're only part m

    eat. You know

    , like the Weddilei.

    A meat head w

    ith an electron plasm

    a brain inside." "N

    ope. We thought of that, since they do have m

    eat heads like the Weddilei.

    But I told you, we probed them

    . They're meat all the w

    ay through." "N

    o brain?" "O

    h, there is a brain all right. It's just that the brain is made o

    ut of meat!"

    "So . . . w

    hat does the thinking?" "Y

    ou're n

    ot understanding, are you? The brain does the thinking. The m

    eat." "Thinking m

    eat! You're asking m

    e to believe in thinking meat!"

    "Yes, thinking m

    eat! Conscious meat! Loving m

    eat. Dream

    ing meat. The m

    eat is the w

    hole deal! Are you getting the picture?"

    "Om

    igod. You're serious then. They're m

    ade out of m

    eat."

  • 26 CHAPTER i

    / M

    EAT MACHINES

    "Finally, Yes. They are indeed made o

    ut of meat. A

    nd they've been trying to get in touch w

    ith us for alm

    ost a hundred of their years." "So w

    hat does the meat have in m

    ind?" "First it w

    ants to talk to us. Then I im

    agine it wants to explore the u

    niverse, co

    ntact other sentients, swap ideas a

    nd information. The u

    sual."

    "We're supposed to talk to m

    eat?" "That's the idea. That's the m

    essage they're sending out by radio. H

    ello. Any-

    on

    e o

    ut there? Anyone hom

    e? That sort of thing."

    "They actually do talk, then. They use w

    ords, ideas, co

    ncepts?"

    "Oh, yes. Except they do it w

    ith meat."

    "I thought you just told me

    they used radio."

    "They do, but what do you think is o

    n the radio? M

    eat sou

    nds. You know

    how

    when you slap o

    r flap meat it m

    akes a noise? They talk by flapping their m

    eat at each other. They can

    even

    sing by squirting air through their meat."

    "Om

    igod. Singing meat. This is altogether too m

    uch. So w

    hat do you advise?" "O

    fficially or u

    nofficially?"

    "Both." "O

    fficially, we are required to co

    ntact, welcom

    e, and log in a

    ny and all sen

    -

    tient races or m

    ulti beings in the quadrant, without prejudice, fear, o

    r favor. Un-

    officially, I advise that we erase the reco

    rds and forget the w

    hole thing." "I w

    as hoping you wo

    uld say that." "It seem