Conscious Realism and the Mind-Body Problem

© 2008 Imprint Academic, Mind & Matter Vol. 6(1), pp. 87–121

Donald D. Hoffman
Department of Cognitive Science

University of California at Irvine, USA

Abstract

Despite substantial efforts by many researchers, we still have no scientific theory of how brain activity can create, or be, conscious experience. This is troubling, since we have a large body of correlations between brain activity and consciousness, correlations normally assumed to entail that brain activity creates conscious experience. Here I explore a solution to the mind-body problem that starts with the converse assumption: these correlations arise because consciousness creates brain activity, and indeed creates all objects and properties of the physical world. To this end, I develop two theses. The multimodal user interface theory of perception states that perceptual experiences do not match or approximate properties of the objective world, but instead provide a simplified, species-specific, user interface to that world. Conscious realism states that the objective world consists of conscious agents and their experiences; these can be mathematically modeled and empirically explored in the normal scientific manner.

1. Introduction

What is the relationship between consciousness and biology? This question, a version of the classic mind-body problem, has in some form troubled philosophers at least since the time of Plato, and now troubles scientists. Indeed, a list of the top 125 open questions in Science puts the mind-body problem at number two, just behind the question (Miller 2005): What is the universe made of? The mind-body problem, as Science formulates it, is the question: What is the biological basis of consciousness?

One reason for this formulation is the large body of empirical correlations between consciousness and brain activity. For instance, damage to cortical area V1 is correlated with the loss of conscious visual perception (Celesia et al. 1991). If V1 is intact but certain extrastriate cortical regions are damaged, there is again a loss of conscious visual perception (Horton and Hoyt 1991). Damage to the lingual and fusiform gyri is correlated with achromatopsia, a loss of color sensation (Collins 1925, Critchley 1965), and magnetic stimulation of these areas is correlated with chromatophenes, conscious experiences of unusual colors (Sacks 1995, p. 28; Zeki 1993, p. 279). Damage to area V5 is correlated with akinetopsia, a loss of motion sensation (Zihl et al. 1983, 1991; Rizzo et al. 1995); magnetic inhibition of V5 is also correlated with akinetopsia (Zeki et al. 1991). In many tasks in which subjects view a display inducing binocular rivalry, so that they consciously perceive the stimulus presented to one eye and then periodically switch to consciously perceive the stimulus presented to the other eye, there are changes in cortical activity precisely correlated with changes in conscious perception (Alais and Blake 2004), changes that can be measured with fMRI (Lumer et al. 1998, Tong et al. 1998), EEG (Brown and Norcia 1997), MEG (Tononi et al. 1998), and single unit recording (Leopold and Logothetis 1996). Such correlated activity can be found in ventral extrastriate, parietal, and prefrontal cortices (Rees et al. 2002).

Such correlations, and many more not mentioned here, persuade most researchers that brain activity causes, or is somehow the basis for, consciousness. As Edelman (2004, p. 5) puts it: “There is now a vast amount of empirical evidence to support the idea that consciousness emerges from the organization and operation of the brain.” Similarly, Koch (2004, pp. 1–2) argues:

The fundamental question at the heart of the mind-body problem is, what is the relation between the conscious mind and the electro-chemical interactions in the body that give rise to it? How do [conscious experiences] emerge from networks of neurons?

Consensus on this point shapes the current scientific statement of the mind-body problem. It is not the neutral statement that opened this section, viz.: What is the relationship between consciousness and biology? Instead, as Science makes clear, it is a statement that indicates the expected nature of the solution: What is the biological basis of consciousness? Given this consensus, one would expect that there are promising theories about the biological basis of consciousness, and that research is proceeding to cull and refine them. Indeed such theories are numerous, both philosophical and scientific, and the volume of empirical work, briefly highlighted above, is large and growing.

For instance, following the demise of behaviorism in the 1950s, there have been many philosophical theories. Type physicalist theories assert that mental state types are numerically identical to certain neural state types (Place 1956, Smart 1959); token physicalist theories assert instead that each mental state token is numerically identical to some neural state token (Fodor 1974). Reductive functionalist theories assert that the type identity conditions for mental states refer only to relations, typically causal relations, between inputs, outputs, and each other (Block and Fodor 1972). Non-reductive functionalist theories make the weaker claim that functional relations between inputs, outputs and internal system states give rise to mental states but are not identical with such states (Chalmers 1996). Representationalist theories (e.g., Tye 1996, 2000) identify conscious experiences with certain tracking relationships, i.e., with certain causal covariations, between brain states and states of the physical world. The “biological naturalism” theory of Searle (1992, 2004) claims that consciousness can be causally reduced to neural processes, but cannot be eliminated and replaced by neural processes.

This brief overview does not, of course, begin to explore these theories, and it omits important positions, such as the emergentism of Broad (1925), the anomalous monism of Davidson (1970), and the supervenience theory of Kim (1993). However it is adequate to make one obvious point. The philosophical theories of the mind-body problem are, as they advertise, philosophical and not scientific. They explore the conceptual possibilities where one might eventually formulate a scientific theory, but they do not themselves formulate scientific theories. The token identity theories, for instance, do not state precisely which neural state tokens are identical to which mental state tokens. The non-reductive functionalist theories do not state precisely which functional relations give rise, say, to the smell of garlic versus the smell of a rose, and do not give principled reasons why, reasons that lead to novel, quantitative predictions. These comments are not, of course, intended as criticisms of these theories, but simply as observations about their intended scope and limits.

It is from the scientists that we expect theories that go beyond statements of conceptual possibilities, theories that predict, from first principles and with quantitative precision, which neural activities or which functional relations cause which conscious experiences. Scientists have produced several theories of consciousness.

For instance, Crick and Koch (1990, cf. Crick 1994) proposed that certain 35–75 Hz neural oscillations in cerebral cortex are the biological basis of consciousness. Subsequently Crick and Koch (2005) proposed that the claustrum may be responsible for the unified nature of conscious experience. Edelman and Tononi (2000, p. 144; cf. Tononi and Sporns 2003) proposed that “a group of neurons can contribute directly to conscious experience only if it is part of a distributed functional cluster that, through reentrant interactions in the thalamocortical system, achieves high integration in hundreds of milliseconds.” Baars (1988) proposed that consciousness arises from the contents of a global workspace, a sort of blackboard by which various unconscious processors communicate information to the rest of the system. Hameroff and Penrose (1996, cf. Penrose 1994) proposed that quantum coherence and quantum-gravity-induced collapses of wave functions are essential for consciousness. Stapp (1993, 1996) proposed that the brain evolves a superposition of action templates, and the collapse of this superposition gives rise to conscious experience.

Again, this brief overview does not begin to explore these theories and, for brevity, omits some. But the pattern that emerges is clear. The theories so far proposed by scientists are, at best, hints about where to look for a genuine scientific theory. None of them remotely approaches the minimal explanatory power, quantitative precision, and novel predictive capacity expected from a genuine scientific theory. We would expect, for instance, that such a theory could explain, in principle, the difference in experience between, e.g., the smell of a rose and the taste of garlic. How, precisely, is the smell of a rose generated by a 40 Hz oscillation, a reentrant thalamocortical circuit, information integration, a global-workspace entry, the quantum state of microtubules, or the collapse of evolving templates? What precise changes in these would transform experience from the smell of a rose to the taste of garlic? What quantitative principles account for such transformations? We are not asking about advanced features of consciousness, such as self-consciousness, that are perhaps available to few species. We are asking about an elementary feature available, presumably, to a rat. But no current theory has tools to answer these questions and none gives guidance to build such tools. None begins to dispel the mystery of conscious experience. As Pinker (1997, p. 564) points out, “... how a red-sensitive neuron gives rise to the subjective feel of redness is not a whit less mysterious than how the whole brain gives rise to the entire stream of consciousness.”

In short, the scientific study of consciousness is in the embarrassing position of having no scientific theory of consciousness. This remarkable situation provokes several responses. The first concludes that, although consciousness arises naturalistically from brain activity, humans lack the cognitive capacities required to formulate a scientific theory. As McGinn (1989) puts it, “we know that brains are the de facto causal basis of consciousness, but we have, it seems, no understanding whatever of how this can be so.” Pinker (1997) agrees. After asking how conscious experience arises from physical systems he answers (Pinker 1997, pp. 146–147):

Beats the heck out of me. I have some prejudices, but no idea of how to begin to look for a defensible answer. And neither does anyone else. The computational theory of mind offers no insight; neither does any finding in neuroscience, once you clear up the usual confusion of sentience with access and self-knowledge.

A second response concludes that we must keep experimenting until we find the empirical fact that leads to a theoretical breakthrough. This is a defensible position and, indeed, the position of most researchers in the field.

A third response claims there is no mind-body problem, on at least two different grounds: There is no mind to reduce to body, or no body to which mind can be reduced. The first of these two arguments is sometimes asserted by eliminative materialists, who claim that nothing in reality corresponds to our folk psychological notions of consciousness (Churchland 1981, Churchland 1986, Dennett 1978). As neuroscience progresses we will not reduce such notions to neural activity; we will abandon them, much as we abandoned phlogiston. We will instead adopt the language of neurophysiology.

The second argument, that there is no body to which mind can be reduced, is made most notably by Chomsky (1980, 2000), who argues that there has been no coherent formulation of the mind-body problem since Newton introduced action-at-a-distance and, thereby, destroyed any principled demarcation between the physical and non-physical. Chomsky concludes that consciousness is a property of organized matter, no more reducible than rocks or electromagnetism (Chomsky 2000, p. 86). However, what counts as matter is no clearer than what counts as physical. And why should we expect, in the non-dualistic setting that Chomsky endorses, that consciousness is a property of matter rather than vice versa?

This is a natural point of departure for the theory developed here. The dualistic formulation of the mind-body problem, in which consciousness arises from non-conscious neurobiology or physics, has failed to produce a scientific theory. But the search space of scientific theories is large, and it is reasonable, given the failure of explorations in the dualistic region, to explore elsewhere. That is the intent here: to explore a non-dualistic, but mathematically rigorous, theory of the mind-body problem, one that does not assume consciousness is a property of organized matter. To this end, we first develop a non-dualistic theory of perception that questions a key assumption of current perceptual theories.

2. Perception as Faithful Depiction

Current scientific theories of perception fall into two main classes: direct and indirect (see, e.g., Fodor and Pylyshyn 1981, Hoffman 1998, Palmer 1999).

Indirect theories, which trace their lineage through Helmholtz (1910/1962) and Alhazen (965–1039; cf. Sabra 1978), typically claim that a goal of perception is to match, or at least approximate, useful properties of an objective physical world (Marr 1982). The physical world is taken to be objective in the sense that it does not depend on the perceiver for its existence. According to indirect theories, the information transduced at sensory receptors is not sufficiently rich, by itself, to determine a unique and correct match or approximation. Therefore the perceiver must infer properties of the world using constraining assumptions. For instance, in the perception of a three-dimensional shape from visual motion, the perceiver might use a rigidity assumption: If the image data could have arisen, in principle, by projection of the motion of a rigid three-dimensional body, then the visual system infers that the image data are, in fact, the projection of that rigid body (Ullman 1979). This inference might be couched in the mathematical framework of regularization theory (Poggio et al. 1985) or Bayesian inference (Knill and Richards 1996).
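
As a toy illustration of that Bayesian framing (a minimal sketch with invented numbers, not a model from any of the works cited above), the perceiver can be thought of as weighting candidate scene interpretations by a prior, such as a rigidity prior, times the likelihood of the image data under each interpretation:

    # Minimal sketch of Bayesian perceptual inference; the priors and
    # likelihoods below are invented purely for illustration.
    priors = {"rigid_3d_body": 0.9, "nonrigid_coincidence": 0.1}       # P(scene)
    likelihoods = {"rigid_3d_body": 0.8, "nonrigid_coincidence": 0.2}  # P(image | scene)

    unnormalized = {s: priors[s] * likelihoods[s] for s in priors}
    total = sum(unnormalized.values())
    posterior = {s: p / total for s, p in unnormalized.items()}
    print(posterior)  # the rigid interpretation dominates: about 0.97 vs 0.03

Regularization theory plays an analogous role: the constraining assumption enters as a penalty term rather than as a prior.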

Direct theories, which trace their origin to Gibson (1950, 1966, 1979/1986), agree with indirect theories that a goal of perception is to match an objective physical world, but argue that the sensory data are sufficiently rich that perceivers can, without inference, pick up true properties of the world, especially affordances, directly from these data.

The debate between direct and indirect theories raises interesting issues (Fodor and Pylyshyn 1981, Ullman 1980). But what is pertinent here is that both agree on this: A goal of perception is to match or approximate true properties of an objective physical environment. We can call this the hypothesis of faithful depiction (HFD). This hypothesis is widespread and rarely questioned in the scientific study of perception.

For instance, Stoffregen and Bardy (2001) state:

We analyze three hypotheses about relations between ambient arrays and physical reality: (1) that there is an ambiguous relation between ambient energy arrays and physical reality, (2) that there is a unique relation between individual energy arrays and physical reality, and (3) that there is a redundant but unambiguous relation, within or across arrays, between energy arrays and physical reality.

The first hypothesis is endorsed by indirect theories, and the second by some direct theories. They conclude in favor of the third hypothesis, viewing it as an extension of standard direct theories. Nowhere do they question the assumption of faithful depiction that is shared by all three; nor do any of the more than 30 commentaries on their article.

Yuille and Buelthoff (1996, p. 123) endorse HFD: “We define vision as perceptual inference, the estimation of scene properties from an image or sequence of images.” The commitment to HFD is clear in such terms as ‘estimate’, ‘recover’, and ‘reconstruct’, which appear repeatedly throughout the literature of computational vision.

Lehar (2003, p. 375) endorses HFD: “The perceptual modeling approach reveals the primary function of perception as that of generating a fully spatial virtual-reality replica of the external world in an internal representation.”

Searle (2004, p. 171) endorses HFD: “In visual perception, for example, if I see that the cat is on the mat, I see how things really are (and thus achieve mind-to-world direction of fit) only if the cat’s being on the mat causes me to see the situation that way (world-to-mind direction of causation).”

Purves and Lotto (2003) appear, on first reading, to reject HFD. They reject, for instance, “the seemingly sensible idea that the purpose of vision is to perceive the world as it is...” (p. 5). They suggest instead that (p. 287)

what observers actually experience in response to any visual stimulus is its accumulated statistical meaning (i.e., what the stimulus has turned out to signify in the past) rather than the structure of the stimulus in the image plane or its actual source in the present.

Thus Purves and Lotto do not, in fact, recommend rejection of HFD tout court. They simply recommend rejecting a version of the hypothesis that focuses exclusively on the present stimulus and the present state of the physical world. The purpose of vision is to perceive the world, not just as it is, but as it has been.

Noe and Regan (2002) also appear, on first reading, to reject HFD. They reject, for instance, the position that “... the visual system builds up a detailed internal representation of the three-dimensional environment on the basis of successive snapshot-like fixations of the scene ...” (p. 575). They propose instead that “what one sees is the aspect of the scene to which one is attending – with which one is currently interacting...” (p. 575). Thus Noe and Regan also do not reject HFD tout court. They claim that “perceivers are right to take themselves to have access to environmental detail and to learn that the environment is detailed” (p. 576) and that “the environmental detail is present, lodged, as it is, right there before individuals and that they therefore have access to that detail by the mere movement of their eyes or bodies” (p. 578). Thus they support a version of HFD that is careful to observe the limits of perceptual attention and the critical role of sensorimotor interactions.

HFD is so universally accepted that it appears in textbooks. For instance, Palmer (1999, p. 6, his italics) endorses HFD as follows:

Evolutionarily speaking, visual perception is useful only if it is reasonably accurate ... Indeed, vision is useful precisely because it is so accurate. By and large, what you see is what you get. When this is true, we have what is called veridical perception ... perception that is consistent with the actual state of affairs in the environment. This is almost always the case with vision...

I, too, endorsed HFD, describing the central questions about visual perception as follows (Hoffman 1983, p. 154): “First, why does the visual system need to organize and interpret the images formed on the retinas? Second, how does it remain true to the real world in the process? Third, what rules of inference does it follow?” But I now think HFD is false. Our perceptual systems do not try to approximate properties of an objective physical world. Moreover evolutionary considerations, properly understood, do not support HFD, but require its rejection.

I propose that perception is like a multimodal user interface (Hoffman 1998, 2003). A successful user interface does not, in general, resemble what it represents. Instead it dumbs down and reformats in a manner useful to the user. Because it simplifies, rather than resembles, a user interface usefully and swiftly informs the actions of the user. The features in an interface usually differ from those in the represented domain, with no loss of effectiveness. A perceptual user interface, simplifying and reformatting for the niche of an organism, gives that organism an adaptive advantage over one encumbered with constructing a complex approximation to the objective world. The race is to the swift; a user interface makes one swift by not resembling the world.

This is not what textbooks or most perceptual experts say and therefore invites spelling out. I begin by discussing user interfaces and virtual worlds.

3. User Interfaces

Suppose you wish to delete a file on your PC. You find the icon for the file, click on it with your mouse, drag it to the recycle-bin icon, and release. Quick and easy. The file icon might be blue and square. The recycle bin might be shaped like a trash can. All for ease of use. Of course what goes on behind the icons is quite complex: A central processor containing millions of transistors executes binary commands encoded as voltages in megabytes of memory, and directs the head on a hard drive to change the magnetic structure of a disk revolving thousands of times per minute. Fortunately, to delete a file you do not need to know anything about this complexity. You just need to know how to move colorful icons.

The icons, and the entire graphical-windows interface, are designed to help the user by hiding the complexity of the computer (see, e.g., Schneiderman 1998). This is accomplished, in part, by friendly formatting. The windows interface and its contents are designed not to resemble the actual complexity of the computer and its inner workings, but instead to present needed information to the user in a format that is friendly, i.e., that is easy and natural to use. Although the actual file in the computer is a complex array of voltages and magnetic fields with no simple geometry, the file icon is a rectangle because this is a simple symbol easily interpreted by human users. Nothing about the shape of the file icon resembles the shape of the file itself. This is no failure of the icon, no misrepresentation of reality. It is, instead, what makes the icon useful.

Few souls delight to search the guts of a computer with voltmeter and magnetometer to find a file. We prefer to find a rectangular blue icon in a pretty display. But nothing about the file itself, the voltages and magnetic fields inside the computer, is blue. Is this a gross misrepresentation by the icon? Of course not. The color of the icon is not intended to resemble anything about the file but simply to indicate, say, what kind of file it is or how recently it was modified. The icon sits at some spot on the display, perhaps in the upper right. But this does not mean that the file itself is in the upper right of the computer. The location of an icon on the display is, in part, simply a convenient way to keep track of it. There is, in short, no resemblance between properties of the icon and properties of the file. This is no problem, no failure of veridicality. It is the intended consequence of friendly formatting.

The interface also helps the user by means of concealed causality. Not only is the structural complexity of the computer hidden behind icons, but also its causal complexity. When you drag the file icon to the recycle bin and release, does moving the file icon to the recycle bin icon cause deletion of the file? No. Icons have no causal powers within the computer. They are patterns of pixels on the display, and send no signals back to the computer. The complex causal chain within the computer that deletes the file is hidden, behind the interface, from the user. And nothing in the movement of the file icon to the recycle-bin icon resembles anything in this causal chain. Is this a failure or misrepresentation of the interface? To the contrary, it is the reason for the interface. Hiding causal complexity helps the user to quickly and easily delete a file, create a new one, modify an illustration, or format a disk, without distraction by a myriad of causal details.

Although the icons of the interface have no causal powers they are nonetheless useful by providing clued conduct. The icons effectively inform actions of the user, allowing the user to trigger the appropriate, but hidden, causal chains.[1] In the case of deleting a file, the icon of the file informs the user how to click the mouse, and the icon of the recycle bin informs the user how to release the mouse, so that appropriate causal chains are triggered inside the computer, resulting in deletion of the file. Icons inform an effective perception-action loop, without themselves having causal powers in the computer.

To the extent that a user interface succeeds in providing friendly formatting, concealed causality, and clued conduct, it will also offer ostensible objectivity. Usually the user can act as if the interface is the total reality of the computer. Indeed some users are fooled; we hear humorous stories of a child or grandparent who wondered why an unwieldy box was attached to the screen. Only for more sophisticated purposes, such as debugging a program or repairing hardware, does dissolution of this illusion become essential.

[1] Here, and throughout the paper, the verb trigger means “to initiate a sequence of actions, typically causal and complex.” To say, for instance, that stress triggers cardiovascular disease means that stress initiates a complex causal sequence of biochemical interactions that eventuate in the disease.

4. Virtual Worlds

Suppose you and a friend play virtual tennis at an arcade. You don your helmet and bodysuit, and find yourself in Roland-Garros stadium, home of the French Open. After admiring the clay court and stadium, you serve to open the first set, and are soon immersed in play. The stadium, court, net, ball, and racquet that you experience are all, of course, part of a sophisticated user interface, one that exhibits the four qualities described in the last section. First, it sports friendly formatting. You see red clay, a yellow ball, a graphite tennis racquet, and a green stadium. These are much easier to interpret and use than the complex supercomputer and megabytes of software that control the game.

It conceals causality and clues conduct. When you hit a killer drop volley, it might appear that the head of the racquet caused the ball to sneak across the net. But of course the racquet and ball are just pixels in the user interface, and send no signals back to the supercomputer. The racquet and ball serve only to inform your actions and these, transmitted back via the body suit, trigger a complex but hidden causal sequence within the supercomputer, resulting in the proper updating of registers corresponding to the positions of racquet and ball. A good programmer could update these registers directly. But this would be so slow and cumbersome that even the deftest coder would lose the match to a modestly talented player who simply acted on the user interface. That is the power, and purpose, of the interface.

Finally, the commercial success of the game depends, in large part, on its ostensible objectivity. Customers want to play tennis, blissfully ignorant of the supercomputer and software hard at work in a back room. Tennis is, for them, the reality. Nothing in their tennis reality resembles the hidden supercomputer, the true causal nexus that makes the game possible. Customers can play as if the tennis ball and racquet had causal powers, even though this is merely a convenient, and entertaining, fiction.

5. Perception as a Multimodal User Interface

I reject HFD, the hypothesis that a goal of perception is to match or approximate properties of an objective physical world. Instead I propose the hypothesis of multimodal user interfaces (MUI): The conscious perceptual experiences of an agent are a multimodal user interface between that agent and an objective world.

To say that a world is objective means that the world’s existence does not depend on the agent. MUI theory claims nothing about the ontology of that objective world. It requires no resemblance between properties of the interface and the world. As virtual tennis illustrates, they can be as dissimilar as tennis balls and integrated circuits. MUI is a weaker hypothesis than HFD: Both say perception represents an objective world; but HFD claims, in addition, that perception resembles that objective world. MUI theory makes no such claim.

For instance, if you experience a rock or tree, HFD claims that, barring illusion, there must be a rock or tree in the objective world whose properties approximate those of your experience. MUI theory is not committed to this claim. It allows countless possibilities for what in the objective world triggered your experience. Chances are there is no match between properties of experience and the objective world. Instead perceptual experiences are, in the typical case, much less complex and differently formatted than the objective properties that trigger them. This failure to match, due to adaptive simplification and reformatting, is key to the usefulness of perceptual experiences. Concern about veridicality of perception is a category error. The proper concern is whether perception usefully informs action.

According to MUI theory, the objects of everyday experience – tables, chairs, mountains, moon – are not public. If, for instance, I hand you a glass of water, it is natural, but false, to assume that the glass I once held is the same glass you now hold. Instead, according to MUI theory, the glass I held was, when I observed it, an icon of my MUI, and the glass you now hold is, when you observe it, an icon of your MUI, and they are numerically distinct. There are two glasses of water, not one. And if a third person watches the transaction, there are three glasses.

This claim seems, to most, absurd, and straightforwardly refuted. Searle (2004, pp. 275ff), for instance, argues against the denial of public physical objects as follows: First, we all assume, quite naturally, that we sometimes communicate successfully. This requires that we have public meanings in a public language, so that we can both mean, or intend, the same thing by utterances such as “this glass of water”. But this requires that we have publicly available objects of reference, e.g., a publicly available glass of water, so that when I say “this glass of water” I am referring to the same object as you do when you say “this glass of water”. This implies that we share perceptual access to the same object, which makes it a public object. Thus, concludes Searle, there are public physical objects and the correct philosophy of perception is direct realism.

This argument is seen to be false by counterexample. Bob and Tom, playing virtual tennis, can talk meaningfully about “the tennis ball” they hit; they can agree that Tom hit “the tennis ball” out of court, thus losing a point. There is, patently, no public tennis ball. Instead, a supercomputer in the back room feeds signals to the helmet displays of Bob and Tom and each, in consequence, constructs his own tennis-ball experience. But Bob’s tennis-ball experience is numerically distinct from Tom’s. And there is no other tennis ball around to serve the role of public tennis ball. Thus public physical objects are not required for meaningful communication.

This counterexample is instructive, for it shows why Searle’s argument fails. Bob and Tom can speak meaningfully about “the tennis ball” because their experiences are properly coordinated. Searle assumes that such coordination requires a public tennis ball. But this assumption is false: the coordination in the counterexample is accomplished not by a public tennis ball, but by a hidden supercomputer.

According to MUI theory, everyday objects such as tables, chairs and the moon exist only as experiences of conscious observers. The chair I experience only exists when I look, and the chair you experience only exists when you look. We never see the same chair. We only see the chair icons we each construct each time we look.

There are several arguments for the absurdity of this claim. First, that chair cannot exist only when I look at it. For I can look away and still touch it. So it still exists. Or I can look away and you can look at it, and confirm to me that it is still there. So again it still exists.

But this argument is easily refuted by the virtual-tennis counterexample. Bob can claim that the tennis ball he and Tom are hitting exists even when he does not look at it. After all, he can look away and still touch the tennis ball. Or he can look away and Tom can look at it. So, Bob can claim, the tennis ball still exists even when he does not look at it. But Bob’s claim is patently false.

A second argument: If you think that this train thundering down the tracks is just an icon of your user interface, and does not exist when you do not perceive it, then why don’t you step in front of it? You will soon find out that it is more than an icon. And I will see, after you are gone, that it still exists.

This argument confuses taking something literally and taking it seriously. If your MUI functions properly, you should take its icons seriously, but not literally. The point of the icons is to inform your behavior in your niche. Creatures that do not take their well-adapted icons seriously have a pathetic habit of going extinct. The train icon usefully informs your behaviors, including such laudable behaviors as staying off of train-track icons. The MUI theorist is careful about stepping before trains for the same reason that computer users are careful about dragging file icons to the recycle bin.

A third argument: Look, if that wall is just an icon I construct, why can’t I walk through it? Shouldn’t it do what I want?

Not at all. You construct the subjective Necker cube that you see in Figure 1. But it doesn’t do everything you want. For instance, sometimes you see a cube with corner A in front and sometimes a different cube with corner B in front. But try to make yourself switch, at will and instantly, between the two cubes and you will find that your cube constructions are stubborn (for a model of this, see Atmanspacher et al. 2004). Or try to see the edges of the cube as wiggly rather than straight. No chance. The fact that we construct our icons does not entail that they do whatever we wish. We are triggered to construct icons by our interactions with the objective world (whatever its nature might be) and, once so triggered, we construct our icons according to certain probabilistic rules (see, e.g., Hoffman 1998). The objective world and our rules for icon construction make the icons stubborn. Still, these icons exist only in our conscious perceptions.

Figure 1: The subjective Necker cube, with corners labeled A and B (reproduced from Bradley and Petry 1977).

A fourth argument: Of course tables, chairs and the moon are just our icons, and exist only in our conscious experiences. But what’s new? Physicists have long told us that the apparent solidity of a table is an illusion. It is mostly empty space with quarks and leptons darting about. Our perception of a table’s surface approximates the envelope of this activity, and in this sense HFD is correct: There are no objective tables, just objective particles.

The mistake here is analogous to that of a computer user who admits that file icons on the display are just conventional symbols, not the actual files, but then puts a magnifying glass over an icon, sees its pixels, and concludes that these pixels are the actual file. File icons are indeed composed of pixels, but these pixels are part of the interface, not elements of the file. Similarly, tables are indeed composed of quarks and leptons, but quarks and leptons are part of the MUI, not elements of the objective world. The MUI may be hierarchically organized, but different levels of this hierarchy are part of the MUI, not of the objective world.

Placing subatomic particles in the MUI rather than in the objective world is compatible with quantum theory. Indeed, the Copenhagen interpretation of quantum theory asserts that the dynamical properties of such particles have real values only in the act of observation (see, e.g., Albert 1992, Wheeler and Zurek 1983, Zurek 1989). That is, they are part of the observer’s MUI. Quantum physics does not contradict MUI theory.

A fifth argument: Ideas similar to MUI theory are found in various forms of idealism. But, as Searle (2004, p. 48) says,

idealism had a prodigious influence in philosophy, literally for centuries, but as far as I can tell it has been as dead as a doornail among nearly all the philosophers whose opinions I respect, for many decades, so I will not say much about it.

This is a simple misunderstanding. MUI theory is not idealism. It does not claim that all that exists are conscious perceptions. It claims that our conscious perceptions need not resemble the objective world, whatever its nature is.

A sixth objection runs as follows: MUI theory implausibly claims that everything we see is not real, but created by an interface between us and the world.

This objection highlights an ambiguity of the word real. To say that something is real can mean either that it exists, or that it exists independent of any observers. A headache is real in the first sense, but not in the second: If I have a headache, then I am inclined to say that the headache is real, and to feel cross with anyone who says otherwise; however I would not claim that the headache exists independent of me, nor that anyone else could experience my headache, nor that I could experience the headache of anyone else. Each of us has our own private headaches, and each such headache is real, but entirely dependent for its existence on the observer who has it. I typically have little idea what causes a headache, and therefore little reason to assert that my headache resembles these unknown causes. Indeed, it almost surely does not. But the headache is not, thereby, a mystical veil between me and its unknown causes; instead, it is a simple guide to useful behavior, such as taking an aspirin, and spares me the further headache of ascertaining the complex causes of its genesis.

MUI theory does not claim that everything we see is unreal, but says instead that all sensory perceptions are real in the sense that headaches are real: They exist and are observer dependent. They exist so long as they are experienced.

This sixth objection also highlights a similar ambiguity of the word world: This word can refer to a sensory world or to an observer-independent world. When we speak of the visual world, we use world in the first sense. The visual world is observer-dependent; it disappears, for instance, when we close our eyes. Similarly, our auditory worlds are silenced if we plug our ears, and our olfactory worlds cease if we pinch the nose. The word world can also refer to entities hypothesized to be objective, i.e., to exist independent of any observation. HFD asserts that our sensory worlds resemble or approximate an objective world. MUI theory rejects this assertion.

MUI theory does not claim that our sensory perceptions are created by an interface between us and the world, as in the old sense datum theories. Instead, MUI theory simply acknowledges that our sensory worlds of space and time, objects, motions, colors, sounds, touches, tastes, smells and pains are observer-dependent and are not likely, on evolutionary grounds, to resemble the objective world, whatever form that world might have. This point is simple, but can be counterintuitive since we habitually assume, from early childhood, that the objective world resembles our sensory worlds.

A seventh objection is that MUI theory is logically faulty, because it is simply not true that real user interfaces do not imitate the physical world; on the contrary, they do their best to reproduce a physical-like world.

This objection is correct in noting that the user interface on a typical computer employs icons that imitate shapes and colors familiar from everyday sensory perception. However, these icons do not imitate the diodes, resistors, voltages and magnetic fields inside the computer that they represent. The icons purposely hide all this complexity, so that computer users can get on with their work.

The idea that our sensory perceptions in everyday life are useful precisely because they do not resemble what they represent is, for most people, counterintuitive. Fortunately, the recent introduction and widespread popularity of user interfaces on personal computers gives a ready-to-hand metaphor that most can grasp: the typical computer user understands that icons of the interface are useful precisely because they simplify, and in no way resemble, the complex world of hardware and software they represent.

An eighth objection focuses on the notion of resemblance, as follows: MUI theory recognizes that a virtual replica of the world must share some causality with its target (a virtual tennis ball must behave causally like the real one, more or less). However MUI theory does not see that this is a kind of isomorphism between the world and the user interface. It seems to consider only pictorial isomorphisms as relevant. This is not the case.

This objection is correct in noting that a tennis ball in a realistic virtual-reality game behaves much like a normal tennis ball. But the point of the virtual-reality example is not the relation between virtual tennis balls and normal tennis balls, but rather the relation between virtual tennis balls and supercomputers. The point is that the virtual tennis ball in no way resembles, pictorially or otherwise, the structural or causal properties of the supercomputer that is running the virtual tennis game. Then, by analogy, the reader is invited to envision the possibility that a normal tennis ball might in no way resemble, pictorially or otherwise, the structural or causal properties of whatever observer-independent entities it represents.

So the analogy offered here is as follows: Virtual tennis ball is to supercomputer as normal tennis ball is to the observer-independent world. The supercomputer is vastly more complex, structurally and causally, than a virtual tennis ball; the observer-independent world is, in all likelihood, vastly more complex, structurally and causally, than a normal tennis ball. In mathematical terms, the functions relating the supercomputer to the virtual tennis ball, or the observer-independent world to the normal tennis ball, are not isomorphisms or bijections, but are instead many-to-one maps that lose much information.
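
The point can be stated compactly (a sketch in notation of my own choosing, not a formalism taken from the paper): let W be the set of states of the observer-independent domain and I the set of interface states, with the interface acting as a map

    \[
      f : W \to I, \qquad |W| > |I| \;\Longrightarrow\; \text{there exists } i \in I \text{ with } \bigl| f^{-1}(i) \bigr| \ge 2 ,
    \]

so f cannot be injective, hence is neither a bijection nor an isomorphism; which particular world state in the preimage f^{-1}(i) lies behind a given icon i is not recoverable from the interface.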

A ninth objection questions the entire metaphor of virtual reality: The whole issue of virtual reality is dependent on the creation of real stimuli (for instance, a head-mounted display projects real lights and real colors to the subject’s head). There is no evidence about the possibility of creating a super virtual reality world (like that in the Matrix movie). There is no empirical ground on which an argument can be built.

The evidence that our sensory worlds might be virtual worlds that in no way resemble an observer-independent world comes from quantum physics. There are many interpretations of quantum theory, and this is no place to enumerate them. Suffice it to say that proponents of the standard interpretation, the Copenhagen interpretation, often respond to the empirical evidence for quantum entanglement and violation of Bell’s inequalities by rejecting local realism, and in particular by claiming that definite physical properties of a system do not exist prior to being observed; what does exist in observer-independent reality is, on their view, unknown. Which definite physical properties are instantiated at any instant depends entirely on how and what we choose to observe, i.e., on the particular observables we choose. If we choose to observe momentum, we get a value of momentum. But this value did not exist before we observed, and ceases to exist if we next choose to measure, say, position.
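
The formal core of this claim is the existence of incompatible, i.e., non-commuting, observables. A small illustrative check, using standard quantum mechanics with the Pauli spin operators standing in for position and momentum (this example is mine, not the paper’s):

    # Non-commuting observables: standard quantum mechanics, offered only
    # as an illustration of the incompatibility described in the text.
    import numpy as np

    sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
    sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

    commutator = sigma_x @ sigma_z - sigma_z @ sigma_x
    print(np.allclose(commutator, 0))   # False: the two observables do not commute

    # A state with a definite sigma_z value is an equal superposition of
    # sigma_x eigenstates, so it has no definite sigma_x value before measurement.
    up_z = np.array([1, 0], dtype=complex)
    evals, evecs = np.linalg.eigh(sigma_x)
    probs = np.abs(evecs.conj().T @ up_z) ** 2
    print(probs)                         # [0.5, 0.5]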

Thus the possibility that our sensory worlds might be virtual worlds, akin to a user interface, comports well with the empirical evidence of quantum physics, and is endorsed by some physicists. This is not to say, of course, that quantum theory requires this interpretation. Proponents of decoherence approaches, for instance, reject this interpretation. And most proponents of the Copenhagen interpretation embrace it only for the microscopic realm, not the macroscopic; but this saddles them with the unsolved problem of providing a principled distinction between microscopic and macroscopic.

6. Conscious Realism

MUI theory, we have seen, makes no claim about the nature of the objective world. In this section I propose a theory that does: conscious realism. One could accept MUI theory and reject conscious realism. But they fit well, and together provide a novel solution to the mind-body problem. Conscious realism is a proposed answer to the question of what the universe is made of. Conscious realism asserts that the objective world, i.e., the world whose existence does not depend on the perceptions of a particular observer, consists entirely of conscious agents.

Conscious realism is a non-physicalist monism. What exists in the objective world, independent of my perceptions, is a world of conscious agents, not a world of unconscious particles and fields. Those particles and fields are icons in the MUIs of conscious agents, but are not themselves fundamental denizens of the objective world. Consciousness is fundamental. It is not a latecomer in the evolutionary history of the universe, arising from complex interactions of unconscious matter and fields. Consciousness is first; matter and fields depend on it for their very existence. So the terms “matter” and “consciousness” function differently for the conscious realist than they do for the physicalist. For the physicalist, matter and other physical properties are ontologically fundamental; consciousness is derivative, arising from or identified with complex interactions of matter. For the conscious realist, consciousness is ontologically fundamental; matter is derivative, and among the symbols constructed by conscious agents.

According to conscious realism, when I see a table, I interact with a system, or systems, of conscious agents, and represent that interaction in my conscious experience as a table icon. Admittedly, the table gives me little insight into those conscious agents and their dynamics. The table is a dumbed-down icon, adapted to my needs as a member of a species in a particular niche, but not necessarily adapted to give me insight into the true nature of the objective world that triggers my construction of the table icon. When, however, I see you, I again interact with a conscious agent, or a system of conscious agents. And here my icons give deeper insight into the objective world: they convey that I am, in fact, interacting with a conscious agent, namely you.

Conscious realism is not panpsychism nor does it entail panpsychism. Panpsychism claims that all objects, from tables and chairs to the sun and moon, are themselves conscious (Hartshorne 1937/1968, Whitehead 1929/1979), or that many objects, such as trees and atoms, but perhaps not tables and chairs, are conscious (Griffin 1998). Conscious realism, together with MUI theory, claims that tables and chairs are icons in the MUIs of conscious agents, and thus that they are conscious experiences of those agents. It does not claim, nor entail, that tables and chairs are conscious or conscious agents. By comparison, to claim, in the virtual-tennis example, that a supercomputer is the objective reality behind a tennis-ball icon is not to claim that the tennis-ball icon is itself a supercomputer. The former claim is, for purposes of the example, true but the latter is clearly false.

Conscious realism is not the transcendental idealism of Kant (1781/2003). Exegesis of Kant is notoriously difficult and controversial. The standard interpretation has him claiming, as Strawson (1966, p. 38) puts it, that “reality is supersensible and that we can have no knowledge of it”. We cannot know or describe objects as they are in themselves, the noumenal objects; we can only know objects as they appear to us, the phenomenal objects (see also Prichard 1909). This interpretation of Kant precludes any science of the noumenal, for if we cannot describe the noumenal then we cannot build scientific theories of it. Conscious realism, by contrast, offers a scientific theory of the noumenal, viz., a mathematical formulation of conscious agents and their dynamical interactions. This difference between Kant and conscious realism is, for the scientist, fundamental. It is the difference between doing science and not doing science. This fundamental difference also holds for other interpretations of Kant, such as that of Allison (1983).

Many interpretations of Kant have him claiming that the sun and planets, tables and chairs, are not mind-independent, but depend for their existence on our perception. With this claim of Kant, conscious realism and MUI theory agree. Of course many current theorists disagree. For instance, Stroud (2000, p. 196), discussing Kant, says:

It is not easy to accept, or even to understand, this philosophical theory. Accepting it presumably means believing that the sun and the planets and the mountains on earth and everything else that has been here so much longer than we have are nonetheless in some way or other dependent on the possibility of human thought and experience. What we thought was an independent world would turn out on this view not to be fully independent after all. It is difficult, to say the least, to understand a way in which that could be true.

But it is straightforward to understand a way in which that could be true. There is indeed something that has been here so much longer than we have. But that something is not the sun and the planets and the mountains on earth. It is dynamical systems of interacting conscious agents. The sun and planets and mountains are simply icons of our MUI that we are triggered to construct when we interact with these dynamical systems. The sun you see is a momentary icon, constructed on the fly each time you experience it. Your sun icon does not match or approximate the objective reality that triggers you to construct a sun icon. It is a species-specific adaptation, a quick and dirty guide, not an insight into the objective nature of the world.

One reader commented that conscious realism and MUI theory entail not just that the objects of our experience are created by subjects, but also that particles and all the rest are so created. Eventually the theory will claim that natural selection and time are a creation of the user interface. It is more noumenic than Kant.

This comment is correct, pace Kant. Space, time, particles, and therefore natural selection are all within the user interface. But this claim comports well with recent attempts in physics to construct a theory of everything – including space, time and particles – from more fundamental constituents, such as quantum information and quantum computing (e.g., Lloyd 2006), loop quantum gravity (Smolin 2006), and others (e.g., Callender and Huggett 2001). Space-time, classically conceived as a smooth manifold, appears untenable at the Planck scale. Instead there appear to be “pixels” of space and time. The intuition that space-time is a fundamental constituent of an observer-independent reality seems destined to be overturned by theories of quantum gravity.

The ontology of conscious realism proposed here rests crucially on the notion of conscious agents. This notion can be made mathematically precise and yields experimental predictions (Bennett et al. 1989, 1991; Bennett et al. 1993a,b; Bennett et al. 1996). Space precludes presenting the mathematics here, but a few implications of the definition of conscious agent should be made explicit. First, a conscious agent is not necessarily a person. All persons are conscious agents, or heterarchies of conscious agents, but not all conscious agents are persons. Second, the experiences of a given conscious agent might be utterly alien to us; they may constitute a modality of experience no human has imagined, much less experienced. Third, the dynamics of conscious agents does not, in general, take place in ordinary four-dimensional space-time. It takes place in state spaces of conscious observers, and for these state spaces the notion of dimension might not even be well-defined. Certain conscious agents might employ a four-dimensional space-time as part of their MUI. But again, this is not necessary.

From these comments it should be clear that the definition of a conscious agent is quite broad in scope. Indeed, it plays the same role for the field of consciousness that the notion of a Turing machine plays for the field of computation (Bennett et al. 1989).
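
Since that mathematics is not reproduced here, the following is only a hypothetical toy rendering of the kind of structure such a definition might take (my own illustration, not the formalism of Bennett et al.): an agent specified by a space of possible experiences together with maps that perceive and decide.

    # Hypothetical toy structure, NOT the definition of Bennett et al. (1989):
    # an agent as a space of experiences plus perception and decision maps.
    from dataclasses import dataclass
    from typing import Callable, FrozenSet, Hashable

    @dataclass(frozen=True)
    class ConsciousAgent:
        experiences: FrozenSet[Hashable]         # possible experiences of the agent
        perceive: Callable[[float], Hashable]    # objective state -> experience
        decide: Callable[[Hashable], Hashable]   # experience -> action

    # Illustrative usage; the names and numbers are invented.
    agent = ConsciousAgent(
        experiences=frozenset({"bright", "dark"}),
        perceive=lambda w: "bright" if w > 0.5 else "dark",
        decide=lambda x: "approach" if x == "bright" else "avoid",
    )
    print(agent.decide(agent.perceive(0.9)))     # approach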

7. The Mind-Body Problem

We now use MUI theory and conscious realism to sketch a solution to the mind-body problem. Exactly what that problem is depends, of course, on one's assumptions. If one adopts physicalism, then the central scientific problem is to describe precisely how conscious experience arises from, or is identical to, certain types of physical systems.

As we discussed before, there are no scientific theories of the physicalist mind-body problem. If one adopts conscious realism, then the central mind-body problem is to describe precisely how conscious agents construct physical objects and their properties.


Here there is good news. We have substantial progress on the mind-body problem under conscious realism, and there are real scientific theories. We now have mathematically precise theories about how one type of conscious agent, namely human observers, might construct the visual shapes, colors, textures, and motions of objects (see, e.g., Hoffman 1998, Knill and Richards 1996, Palmer 1999).

One example is Ullman's (1979) theory of the construction of three-dimensional objects from image motion. This theory is mathematically precise and allows one to build computer-vision systems that simulate the construction of such objects. There are many other mathematically precise theories and algorithms for how human observers could, in principle, construct three-dimensional objects from various types of image motions (e.g., Faugeras and Maybank 1990, Hoffman and Bennett 1986, Hoffman and Flinchbaugh 1982, Huang and Lee 1989, Koenderink and van Doorn 1991, Longuet-Higgins and Prazdny 1980). We also have precise theories for constructing three-dimensional objects from stereo (Geiger et al. 1995, Grimson 1981, Marr and Poggio 1979), shading (Horn and Brooks 1989), and texture (Aloimonos and Swain 1988, Witkin 1981). Researchers debate the empirical adequacy of each such theory as a model of human perception, but this is just normal science.
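To give a flavor of what such a construction algorithm looks like, here is a small sketch. It is not Ullman's 1979 algorithm; it is a rank-3 factorization of tracked image points under orthographic projection, in the spirit of the orthographic approaches cited above, and the function name, point tracks, and toy data are hypothetical illustrations.

```python
# Illustrative sketch only (not Ullman's 1979 method): recover 3-D structure,
# up to an affine ambiguity, from 2-D point tracks under orthographic
# projection via a rank-3 factorization of the measurement matrix.
import numpy as np

def construct_structure(tracks: np.ndarray) -> np.ndarray:
    """tracks: (frames, points, 2) image coordinates of tracked points."""
    # Stack x-rows over y-rows into a 2F x P matrix, centering each row
    # to remove the unknown image translations.
    W = np.concatenate([tracks[..., 0], tracks[..., 1]], axis=0)
    W -= W.mean(axis=1, keepdims=True)
    # For rigid motion and orthographic projection, the noise-free W has
    # rank 3: W = M @ S, with M (2F x 3) camera rows and S (3 x P) shape.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    S = np.diag(s[:3]) @ Vt[:3]          # 3 x P shape, in an affine frame
    return S.T                           # one constructed 3-D point per row

# Hypothetical usage: image a rigid point cloud from two viewpoints, then
# "construct" its 3-D shape from the images alone.
rng = np.random.default_rng(0)
points_3d = rng.normal(size=(8, 3))
theta = 0.3
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
tracks = np.stack([points_3d[:, :2], (points_3d @ R.T)[:, :2]])
print(construct_structure(tracks).shape)  # (8, 3)
```

The output can be read either as a recovered pre-existing shape or as a constructed one; the factorization itself is silent on which interpretation is correct, which is the point developed next.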

Almost without exception the authors of these perceptual theories are physicalists who accept HFD and conceive of their theories as specifying methods by which human observers can reconstruct or approximate the true properties of physical objects that, they assume, exist objectively, i.e., independently of the observer (a claim about physical objects that is explicitly denied by conscious realism). But each of these perceptual theories can equally well be reinterpreted simply as specifying a method of object construction, not reconstruction. The mathematics is indifferent between the two interpretations. It does not require the hypothesis of independently existing physical objects. It is perfectly compatible with the hypothesis of conscious realism, and the mind-dependence of all objects. So interpreted, the large and growing literature in computational vision, and computational perception more generally, is concrete scientific progress on the mind-body problem, as this problem is posed by conscious realism. It gives mathematically precise theories about how certain conscious agents construct their physical worlds. The relationship between the conscious and the physical is thus not a mystery, but the subject of systematic scientific investigation and genuine scientific theories.
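A toy Bayesian observer, in the spirit of the Bayesian frameworks cited above (Knill and Richards 1996, Bennett et al. 1996), makes this indifference concrete in miniature: the computation maps priors and likelihoods over candidate interpretations to a posterior, and nowhere does it require that those interpretations denote observer-independent objects. The labels, prior, and likelihoods below are hypothetical.

```python
# Minimal Bayesian-observer sketch: a posterior over candidate scene
# interpretations given an image measurement. The arithmetic is identical
# whether the hypotheses are read as reconstructions of mind-independent
# objects or as constructions within an observer's interface.
# Hypothetical labels, prior, and likelihoods for illustration only.

def posterior(prior: dict[str, float], likelihood: dict[str, float]) -> dict[str, float]:
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnormalized.values())
    return {h: p / z for h, p in unnormalized.items()}

# An ambiguous shaded patch: is the surface convex or concave?
prior = {"convex": 0.5, "concave": 0.5}
likelihood = {"convex": 0.8, "concave": 0.2}   # P(image | interpretation)
print(posterior(prior, likelihood))            # {'convex': 0.8, 'concave': 0.2}
```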

What one gives up in this framework of thinking is the belief that physical objects and their properties exist independently of the conscious agents that perceive them. Piaget claimed that children, at about nine months of age, acquire object permanence, the belief that physical objects exist even when they are not observed (Piaget 1954; but see Baillargeon 1987). Conscious realism claims that object permanence is an illusion. It is a useful fiction that substitutes for a situation which, for the child, is too subtle to grasp: Something continues to exist when the child stops observing, but that something is not the physical object that the child sees when it observes. That something is, instead, a complex dynamical system of conscious agents that triggers the child to create a physical-object icon when the child interacts with that system. For the child it is much simpler, and rarely problematic, to simply assume that the physical object it perceives is what continues to exist when it does not observe. Indeed, only when one faces the subtleties of, e.g., quantum theory or the mind-body problem, does the utility of the illusion of object permanence finally break down, and a more sophisticated, and comprehensive, ontology become necessary.

With physicalist approaches to the mind-body problem, one faces a difficult question of causality: If conscious experience arises somehow from brain activity, and if the physical world is causally closed, then how, precisely, does conscious experience cause anything ? It seems, for instance, that I eat pistachio ice cream because I feel hungry and I like the taste of pistachio. Do my conscious experiences in fact cause my eating behaviors ? No, say non-reductive functionalists, such as Chalmers (1996), who claim that functional properties of the brain give rise to, but are not identical with, conscious experiences. Instead they often endorse epiphenomenalism: Brain activity gives rise to conscious experiences but, since the physical realm is causally closed, conscious experiences themselves have no causal consequences. It seems like I eat pistachio because it tastes good, but this is an illusion. Moreover, I believe that I consciously experience the taste of pistachio, but I would believe this whether or not I in fact consciously experience this taste. This is a radical claim and close to an outright reductio of the position. Reductive functionalists, by contrast, do not endorse epiphenomenalism, since they claim that conscious experiences are identical to certain functional states of the brain, and conscious experiences therefore possess the causal properties of those functional states. However, reductive functionalism has recently been disproved by the "scrambling theorem" which shows that, if one grants that conscious experiences can be represented mathematically, then conscious experiences and functional relations are not numerically identical (Hoffman 2006).

Conscious realism leads to a different view of causality, a view I call epiphysicalism: Conscious agents are the only locus of causality, and such agents construct physical objects as elements of their MUIs; but physical objects have no causal interactions among themselves, nor any other causal powers. Physical objects, as icons of a conscious agent's MUI, can inform, but do not cause, the choices and actions of a conscious agent. When a cue ball hits an eight ball and sends it careening to the corner pocket, the cue ball does not cause the movement of the eight ball any more than the movement of a file icon to the recycle bin causes the bin to open or a file to be deleted. A useful user interface offers, as discussed above, concealed causality and ostensible objectivity. It allows one to act, in all but the most sophisticated situations, as if the icons had causal powers, and in complete ignorance of the true causal chains. The perceptual conclusions of one conscious observer might be among the premises of a second conscious observer and, thereby, inform but not cause the perceptions of the second (Bennett et al. 1989). Attractors in the asymptotic stochastic behavior of a system of conscious agents might be among the premises of other conscious agents and thereby inform, but not cause, their behavior (Bennett et al. 1989).

So, in particular, epiphysicalism entails that the brain has no causal powers. The brain does not cause conscious experience; instead, certain conscious agents, when so triggered by interactions with certain other systems of conscious agents, construct brains (and the rest of human anatomy) as complex icons of their MUIs. The neural correlates of consciousness are many and systematic not because brains cause consciousness, but because brains are useful icons in the MUIs of certain conscious agents. According to conscious realism, you are not just one conscious agent, but a complex heterarchy of interacting conscious agents, which can be called your instantiation (Bennett et al. 1989 give a mathematical treatment). One complex symbol, created when certain conscious agents within this instantiation observe the instantiation, is a brain.

Does this view entail that we should stop the scientific study of neural correlates of consciousness ? No. If we wish to understand the complex heterarchy of conscious agents in human instantiations, we must use the data that our MUIs provide, and that data takes the form of brain icons. Brains do not create consciousness; consciousness creates brains as dramatically simplified icons for a realm far more complex, a realm of interacting conscious agents. When, for instance, we stimulate primary visual cortex and see phosphenes, the cortex does not cause the phosphenes. Instead, certain interactions between conscious agents cause the phosphenes, and these interactions we represent, in greatly simplified icons, as electrodes stimulating brains.

One objection to conscious realism and MUI theory runs as follows: It is completely obscure how this user interface could present its content. If the physical world is not accessible and completely out of reach, where is the user interface creating its virtual world ? On which mental screen ? What is the stuff its content is made of ?

The key to this objection is the concept "the physical world". The objection assumes a physicalist ontology, in which the physical world is an observer-independent world comprising, inter alia, space-time, matter and fields. If one assumes a physicalist ontology, then it is indeed obscure how our sensory experiences, which constitute our user interface, can be understood. This is just the classic, physicalist, mind-body problem: Is there a Cartesian theater in the brain that mysteriously displays our experiences, or are there multiple drafts in multiple brain areas that can mysteriously turn into experiences ? What stuff are these experiences made of, if the fundamental constituents of the universe are mindless and physical ? This physicalist mind-body problem is still a mystery, awaiting its first genuine scientific theory.

Conscious realism, in direct contradiction to physicalism, takes our conscious experiences as ontologically fundamental. If experiences are ontologically fundamental, then the question simply does not arise of what screen they are painted on or what stuff they are made of. Compare: If space-time and leptons are taken to be ontologically fundamental, as some physicalists do, then the question simply does not arise of what screen space-time is painted on or what stuff leptons are made of. To ask the question is to miss the point that these entities are taken to be ontologically fundamental. Something fundamental does not need to be displayed on, or made of, anything else; if it did, it would not be fundamental. Every scientific theory must take something as fundamental; no theory explains everything. Conscious realism takes conscious experiences as fundamental. This might be counterintuitive to a physicalist, but it is not ipso facto a logical error.

A related objection is as follows: MUI theory claims that the conscious perceptual experiences of an agent are a multimodal user interface between that agent and an objective world. If the user interface is providing a completely independent world, how should it be multimodal ? Where are the different sensory modalities coming from ? Are they created internally ? Internally to what ? MUI theory claims that there is no brain or body since they are just placeholders inside the user interface.

The answer here, again, is that conscious experiences, in all their qualitative varieties, are fundamental. Because they are fundamental, they are not existentially dependent on the brain, or any other physical system. Different qualitative modalities of conscious experience are part of the basic furniture of the universe.

Is this a flight from science to mysticism ? Not if we give a mathematically precise theory of conscious experiences, conscious agents, and their dynamics, and then make empirically testable predictions. This is the reason for the previous references to mathematical models of conscious agents. Science is a methodology, not an ontology. The methodology of science is just as applicable to the ontology of conscious realism as to that of physicalism.

Another objection notes that there seems to be a difference when I meet an object and when I meet someone else. If I meet an object (or whatever it is, since by the MUI hypothesis, we cannot know), a simplified version of it is created by my super-user interface. If I meet another conscious agent, we both see each other and we both interact together. However, the other conscious agent should be equally inaccessible to me, like the noumenic object. How do we get outside of our epistemic jail, the super-user interface ?

To answer this, consider what you see when you look into a mirror. All you see is skin, hair, eyes, lips. But as you stand there, looking at yourself, you know first hand that the face you see in the mirror shows little of who you really are. It does not show your hopes, fears, beliefs, or desires. It does not show your consciousness. It does not show that you are suffering a migraine or savoring a melody. All you see, and all that the user interfaces of others can see, is literally skin deep. Other people see a face, not the conscious agent that is your deeper reality. They can, of course, infer properties of you as a conscious agent from your facial expressions and your words; a smile and a laugh suggest certain conscious states, a frown and a cry others. Such inferences are the way we avoid an epistemic jail, but all such inferences are unavoidably fallible. When we look at a rock, rather than a face, we get much less information about the conscious agents that triggered us to construct the rock. This is no surprise. The universe is complex, perhaps infinitely so. Thus our user interfaces, with their endogenous limits, necessarily give us less insight into some interactions with that universe, and more into others. When we look at ourselves in the mirror, we see first hand the limitations of our user interface and the presence, behind that interface, of a conscious agent.

8. Evolution

One major objection to conscious realism invokes evolution. We now know, the argument goes, that the universe existed for billions of years before the first forms of life, and probably many millions more before the first flickers of consciousness. Natural selection, and other evolutionary processes first described by Darwin, then shaped life and consciousness into "endless forms, most beautiful and most wonderful". This contradicts the claim of conscious realism, viz., that consciousness is fundamental and that matter is simply a property of certain icons of conscious agents. There are four responses to this objection.

First, although it is true that evolutionary theory has been interpreted, almost exclusively, within the framework of a physicalist ontology, the mathematical models of evolution do not require this ontology. They can be applied equally well to systems of conscious agents and, indeed, such an application of evolutionary game theory (Maynard-Smith 1982, Skyrms 2000) is quite natural. Systems of conscious agents can undergo stochastic evolution, and conscious agents can be synthesized or destroyed in the process (Bennett et al. 1989, 2002). There is simply no principled reason why evolution requires physicalism. Evolutionary changes in genes and body morphology can be modeled by evolution whether those genes and bodies are viewed as mind-dependent or mind-independent. The mathematics does not care. Nor does the fossil evidence. A dinosaur bone dated to the Jurassic can be interpreted along physicalist lines as a mind-independent object or, with equal ease, as a mind-dependent icon that we construct whenever we interact with a certain long-existing system of conscious agents.
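As a concrete illustration of this ontological neutrality, here is a minimal replicator-dynamics sketch from evolutionary game theory (Maynard-Smith 1982): the update rule mentions only strategy frequencies and payoffs, and nothing in it says whether the players are mind-independent organisms or systems of conscious agents. The payoff matrix and starting frequencies are hypothetical.

```python
# Minimal replicator-dynamics sketch (evolutionary game theory). The update
# depends only on strategy frequencies and payoffs; the ontology of the
# "players" never enters the mathematics. Payoffs and labels are hypothetical.
import numpy as np

def replicator_step(x: np.ndarray, payoff: np.ndarray, dt: float = 0.1) -> np.ndarray:
    """One Euler step of dx_i/dt = x_i * (f_i(x) - mean fitness)."""
    fitness = payoff @ x                  # expected payoff of each strategy
    mean_fitness = x @ fitness
    x = x + dt * x * (fitness - mean_fitness)
    return x / x.sum()                    # guard against numerical drift

# Hypothetical Hawk-Dove-style game with an interior equilibrium at (0.5, 0.5).
payoff = np.array([[0.0, 3.0],
                   [1.0, 2.0]])
x = np.array([0.9, 0.1])
for _ in range(200):
    x = replicator_step(x, payoff)
print(x)   # frequencies converge toward the stable mixture (about 0.5, 0.5)
```

The same update applies whatever the strategies are taken to be strategies of; that is the sense in which the mathematics "does not care" about the underlying ontology.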

For the conscious realist there is, no doubt, interesting and fundamental work to be done here: We want a rigorous mathematical theory of the evolution of conscious agents which has the property that, when this evolution is projected onto the relevant MUIs, it gives us back the current physicalist model of evolution. That is, we must exhibit physicalist evolutionary models as special cases, in fact projections, of a richer and more comprehensive evolutionary theory. But this is nothing special about evolution. We want the same for all branches of science. For instance we want, where possible, to exhibit current laws of physics as projections of more general laws or dynamics of conscious agents. Some current laws of physics, or of other sciences, might be superseded or discarded as the science of conscious realism advances, but those that survive should be exhibited as limiting cases or projections of the more complete laws governing conscious agents and their MUIs.

Second, according to conscious realism it simply is not true that consciousness is a latecomer in the history of the universe. Consciousness has always been fundamental, and matter derivative. The picture of an evolving unconscious universe of space-time, matter and fields that, over billions of years, fitfully gives birth first to life, then to consciousness, is false. The great psychological plausibility of this false picture derives from our penchant to commit a reification fallacy, to assume that the icons we create are in fact objects independent of us and fundamental in the universe. We embrace this fallacy because our MUI successfully informs our behavior and has ostensible objectivity, because we construct the icons of our MUI so quickly and efficiently that most of us never discover that we in fact construct them, and because we first commit the fallacy in infancy and are rarely, if ever, encouraged to challenge it. The illusion of object permanence starts by nine months, and does not go easy.

Third, standard evolutionary theory itself undercuts the reification fallacy that underlies HFD. Natural selection prunes perceptual systems that do not usefully guide behavior for survival, but natural selection does not prune perceptual systems because they do not approximate objective reality (see, e.g., Radnitzky and Bartley 1987). The perceptual systems of roaches, we suspect, give little insight into the complexities of objective reality. The same for lice, maggots, nematodes and an endless list of creatures that thrived long before the first hominoid appeared and will probably endure long after the last expires. Perceptual systems arise without justification from random mutations and, for 99 percent of all species that have sojourned the earth, without justification they have disappeared in extinction. The perceptual icons of a creature must quickly and successfully guide its behavior in its niche, but they need not give truth. The race is to the swift, not to the correct. As Pinker (1997, p. 561) puts it:

We are organisms, not angels, and our minds are organs, not pipelines to the truth. Our minds evolved by natural selection to solve problems that were life-and-death matters to our ancestors, not to commune with correctness...

Shepard (2001, p. 601) hopes otherwise:

Possibly we can aspire to a science of mind that, by virtue of the evolutionary internalization of universal regularities in the world, partakes of some of the mathematical elegance and generality of theories of that world.

It is, one must admit, logically possible that the perceptual icons of Homo sapiens, shaped by natural selection to permit survival in a niche, might also just happen to faithfully represent some true objects and properties of the objective world. But this would be a probabilistic miracle, a cosmic jackpot against odds dwarfing those of the state lottery. The smart money is on humble icons with no pretense to objectivity.

But this last response might not go far enough, for it grants that natural selection, understood within a physicalist framework, can shape conscious experience. Perhaps it cannot. Natural selection prunes functional propensities of an organism relevant to its reproductive success. But the scrambling theorem proves that conscious experiences are not identical with functional propensities (Hoffman 2006). Thus natural selection acting on functional propensities does not, ipso facto, act as well on conscious experiences. A non-reductive functionalist might counter that, although conscious experiences are not identical to functional properties, nevertheless conscious experiences are caused by functional properties, and thus are subject to shaping by natural selection. The problem with this, as we have discussed, is that no one has turned the idea of non-reductive functionalism into a genuine scientific theory, and the failure appears to be principled. Thus the burden of proof is clearly on those who wish to claim that natural selection, understood within a physicalist framework, can shape conscious experience. Understood within the framework of conscious realism, natural selection has no such obstructions to shaping conscious experiences.

A second evolutionary objection raised against MUI theory and conscious realism finds it strange that criteria of efficiency should control the user interface. Efficiency with respect to what if, as MUI theory claims, there is no way to access the real world ? The logic here is a little bit like that of Descartes. Where he suggested that the mental world is similar to the physical one, MUI theory suggests that the mental world is built in such a way as to be a useful schema of the physical one. Useful with respect to what ? And why should we need a simplified version ?

In answering this objection, we must again be careful how we use our terms. In particular, as discussed before, the phrase real world could mean the real worlds of our sensory perceptions, whose existence is observer-dependent. Or it could mean a world that is objective, in the sense that it is observer-independent. It is the latter interpretation that is probably intended by the objection. If so, then MUI theory does not claim there is no access to the real world, but rather that our access is via sensory systems that radically simplify, and probably in no way resemble, that real world. There is access, just no resemblance.

Similarly, when this objection speaks of the physical world, it presumably assumes a physicalist ontology, with physical objects and properties that are observer-independent. If so, MUI theory and conscious realism together do not claim that our sensory worlds are built to be a useful schema of the physical world, for they reject the ontology of physicalism. If there is no observer-independent physical world, then there is no reason to build schemas of it. MUI theory asserts, instead, that the physical world, the world of space-time, objects, matter and so on, is itself a sensory user interface that is observer-dependent. This might be counterintuitive to a physicalist, but it is not logically self-contradictory. It can be made mathematically precise, and is consistent with quantum theory.

With these provisos, we can now address the main question of this objection, which is why criteria of efficiency and usefulness should control the user interface. The reason is that, according to conscious realism, there is a reality independent of any particular observer, and to interact intelligently or appropriately with that reality one's sensory perceptions must be a useful and efficient guide to that reality. Conscious realism is not solipsism. There is a reality independent of my perceptions, and my perceptions must be a useful guide to that reality. This reality consists of dynamical systems of conscious agents, not dynamical systems of unconscious matter. Moreover, this reality is quite complex. So if my sensory systems are to be efficient, they must dramatically simplify this complexity, and yet still provide a useful guide.

A third objection to MUI theory runs as follows: Inexplicably, the table I see is created by my personal user interface, but your table is created in a way that is coherent with my own. An ironic reader would ask whether they are using the same operating system.

To answer this, it is important to note that MUI theory does not require that your user interface be functionally identical to mine. Evolutionary considerations suggest that they might be functionally similar, since we are of the same species. This is the reason this paper sometimes employs the phrase "species-specific user interface". But evolutionary considerations also suggest that our interfaces will differ slightly in function, since random variations are essential for the operation of natural selection. Functional coherence, then, between our user interfaces is not unexpected. However, the scrambling theorem establishes that functional coherence, or even functional identity, does not logically entail identity, or even similarity, between our conscious experiences (Hoffman 2006).

9. Conclusion

Abraham Pais, describing his interactions with Einstein, wrote (Pais 1979, p. 907):

Einstein never ceased to ponder the meaning of the quantum theory... We often discussed his notions on objective reality. I recall that during one walk Einstein suddenly stopped, turned to me and asked whether I really believed that the moon exists only when I look at it.

MUI theory says that the moon you see is, like any physical object you see, an icon constructed by your visual system. Perception is not objective reporting but active construction. A perceptual construction lasts only so long as you look, and then is replaced by new constructions as you look elsewhere. Thus the answer to Einstein's question, according to MUI theory, is that the moon you see only exists when you look at it. Of course the moon Jack sees might continue to exist even when the moon Jill sees ceases to exist because she closes her eyes. But the moon Jack sees is not numerically identical to the moon Jill sees. Jack sees his moon, Jill sees hers. There is no public moon.

Something does exist whether or not you look at the moon, and that something triggers your visual system to construct a moon icon. But that something that exists independent of you is not the moon. The moon is an icon of your MUI, and therefore depends on your perception for its existence. The something that exists independent of your perceptions is always, according to conscious realism, systems of conscious agents. Consciousness is fundamental in the universe, not a fitfully emerging latecomer.

The mind-body problem is, for the physicalist, the problem of getting consciousness to arise from biology. So far no one can build a scientific theory of how this might happen. This failure is so striking that it leads some to wonder if Homo sapiens lacks the necessary conceptual apparatus. For the conscious realist, the mind-body problem is how, precisely, conscious agents create physical objects and properties. Here we have a vast and mathematically precise scientific literature, with successful implementations in computer vision systems.

To a physicalist, the conscious-realist mind-body problem might appear to be a bait and switch that dodges hard and interesting questions: What is consciousness for ? When and how did it arise in evolution ? How does it now arise from brain activity ? Now, the switch from the ontology of physicalism to the ontology of conscious realism changes the relevant questions. Consciousness is fundamental. So to ask what consciousness is for is to ask why something exists rather than nothing. To ask how consciousness arose in a physicalist evolution is mistaken. Instead we ask how the dynamics of conscious agents, when projected onto appropriate MUIs, yields current evolutionary theory as a special case. To ask how consciousness arises from brain activity is also mistaken. Brains are complex icons representing heterarchies of interacting conscious agents. So instead we ask how neurobiology serves as a user interface to such heterarchies. Conscious realism, it is true, dodges some tough mysteries posed by physicalism, but it replaces them with new, and equally engaging, scientific problems.

Nobody explains everything. If you want to solve the mind-body problem you can take the physical as given and explain the genesis of conscious experience, or take conscious experience as given and explain the genesis of the physical. Explaining the genesis of conscious experience from the physical has proved, so far, intractable. Explaining the genesis of the physical from conscious experience has proved quite feasible. This is good news: We do not need a mutation that endows a new conceptual apparatus to transform the mind-body problem from a mystery to a routine scientific subject; we just need a change in the direction in which we seek an explanation. We can start with a mathematically precise theory of conscious agents and their interactions. We can, according to the norms of methodological naturalism, devise and test theories of how conscious agents construct physical objects and their properties, even space and time themselves. In the process we need relinquish no method or result of physicalist science, but instead we aim to exhibit each such result as a special case in a more comprehensive, conscious realist, framework.

Acknowledgments

For helpful comments on previous drafts I thank Mike Braunstein, Mike D'Zmura, Michael Gokhale, Perry Hoberman, David Hoffman, Julie Kwak, John Pyles, Barron Ramos, Whitman Richards, Kim Romney, Jay Saul, Ian Scofield, Carol Skrenes, and Ken Vaden. I also thank two anonymous reviewers whose comments helped greatly to clarify the paper. The mathematical formulation of conscious realism was developed in collaboration with Bruce Bennett and Chetan Prakash. To both I am most grateful. Sole responsibility for any errors here is, of course, mine. This research was supported in part by a grant from the US National Science Foundation.

References

Alais D. and Blake R., eds. (2004): Binocular Rivalry, MIT Press, Cambridge.

Albert D. (1992): Quantum Mechanics and Experience, Harvard University Press, Cambridge.

Allison H.E. (1983): Kant's Transcendental Idealism: An Interpretation and Defense, Yale University Press, New Haven.

Aloimonos Y. and Swain M.J. (1988): Shape from texture. Biological Cybernetics 58, 345–360.

Atmanspacher H., Filk T., and Romer H. (2004): Quantum Zeno features of bistable perception. Biological Cybernetics 90, 33–40.

Baars B. (1988): A Cognitive Theory of Consciousness, Cambridge University Press, Cambridge.

Baillargeon R. (1987): Object permanence in 3 and 4 month-old infants. Developmental Psychology 23, 655–664.

Bennett B.M., Hoffman D.D., and Kakarala R. (1993a): Modeling performance in observer theory. Journal of Mathematical Psychology 37, 220–240.

Bennett B.M., Hoffman D.D., and Murthy P. (1993b): Lebesgue logic for probabilistic reasoning, and some applications to perception. Journal of Mathematical Psychology 37, 63–103.

Bennett B.M., Hoffman D.D., and Prakash C. (1989): Observer Mechanics: A Formal Theory of Perception, Academic Press, San Diego.

Bennett B.M., Hoffman D.D., and Prakash C. (1991): Unity of perception. Cognition 38, 295–334.

Bennett B.M., Hoffman D.D., and Prakash C. (2002): Perception and evolution. In Perception and the Physical World: Psychological and Philosophical Issues in Perception, ed. by D. Heyer and R. Mausfeld, Wiley, New York, pp. 229–245.

Bennett B.M., Hoffman D.D., Prakash C., and Richman S. (1996): Observer theory, Bayes theory, and psychophysics. In Perception as Bayesian Inference, ed. by D. Knill and W. Richards, Cambridge University Press, Cambridge, pp. 163–212.

Block N.J. and Fodor J.A. (1972): What psychological states are not. Philosophical Review 81, 159–181.

Bradley M.M. and Petry M.C. (1977): Organizational determinants of subjective contour: The subjective Necker cube. American Journal of Psychology 90, 253–262.

Broad C.D. (1925): Mind and Its Place in Nature, Routledge and Kegan Paul, London.


Brown R.J. and Norcia A.M. (1997): A method for investigating binocular rivalry in real-time with the steady-state VEP. Vision Research 37, 2401–2408.

Callender C. and Huggett N. (2001): Physics Meets Philosophy at the Planck Scale, Cambridge University Press, Cambridge.

Celesia G.G., Bushnell D., Cone-Toleikis S., and Brigell M.G. (1991): Cortical blindness and residual vision: Is the second visual system in humans capable of more than rudimentary visual perception ? Neurology 41, 862–869.

Chalmers D.J. (1996): The Conscious Mind: In Search of a Fundamental Theory, Oxford University Press, Oxford.

Chomsky N. (1980): Rules and Representations, Columbia University Press, New York.

Chomsky N. (2000): New Horizons in the Study of Language and Mind, Cambridge University Press, Cambridge.

Churchland P.M. (1981): Eliminative materialism and the propositional attitudes. Journal of Philosophy 78, 67–90.

Churchland P.S. (1986): Neurophilosophy: Toward a Unified Science of the Mind/Brain, MIT Press, Cambridge.

Collins M. (1925): Colour-Blindness, Harcourt, Brace & Co, New York.

Crick F. (1994): The Astonishing Hypothesis: The Scientific Search for the Soul, Scribners, New York.

Crick F. and Koch C. (1990): Toward a neurobiological theory of consciousness. Seminars in the Neurosciences 2, 263–275.

Crick F.C. and Koch C. (2005): What is the function of the claustrum ? Philosophical Transactions of the Royal Society B 360, 1271–1279.

Critchley M. (1965): Acquired anomalies of colour perception of central origin. Brain 88, 711–724.

Davidson D. (1970): Mental events. In Experience and Theory, ed. by L. Foster and J. Swanson, Humanities Press, New York, pp. 79–101.

Dennett D. (1978): Why you can't make a computer that feels pain. In Brainstorms, MIT Press, Cambridge, pp. 190–229.

Edelman G.M. (2004): Wider Than the Sky: The Phenomenal Gift of Consciousness, Yale University Press, New Haven.

Edelman G.M. and Tononi G. (2000): A Universe of Consciousness: How Matter Becomes Imagination, Basic Books, New York.

Faugeras O.D. and Maybank S. (1990): Motion from point matches: Multiplicity of solutions. International Journal of Computer Vision 4, 225–246.

Fodor J.A. (1974): Special sciences: Or, the disunity of science as a working hypothesis. Synthese 28, 97–115.

Fodor J.A. and Pylyshyn Z. (1981): How direct is visual perception ? Some reflections on Gibson's 'Ecological Approach'. Cognition 9, 139–196.

Geiger D., Ladendorf B., and Yuille A. (1995): Occlusions and binocular stereo. International Journal of Computer Vision 14, 211–226.


Gibson J.J. (1950): The Perception of the Visual World, Houghton-Mifflin, New York.

Gibson J.J. (1966): The Senses Considered as Perceptual Systems, Houghton-Mifflin, New York.

Gibson J.J. (1979/1986): The Ecological Approach to Visual Perception, Lawrence Erlbaum, Hillsdale.

Griffin D.R. (1998): Unsnarling the World Knot: Consciousness, Freedom, and the Mind-Body Problem, University of California Press, Berkeley.

Grimson W.E.L. (1981): From Images to Surfaces, MIT Press, Cambridge.

Hameroff S.R. and Penrose R. (1996): Conscious events as orchestrated space-time selections. Journal of Consciousness Studies 3(1), 36–53.

Hartshorne C. (1937/1968): Beyond Humanism: Essays in the Philosophy of Nature, University of Nebraska Press, Lincoln.

Helmholtz H.L.F. von (1910/1962): Handbook of Physiological Optics, Volume III, Dover, New York.

Hoffman D.D. (1983): The interpretation of visual illusions. Scientific American 249, 154–162.

Hoffman D.D. (1998): Visual Intelligence: How We Create What We See, W.W. Norton, New York.

Hoffman D.D. (2003): Does perception replicate the external world ? Behavioral and Brain Sciences 26, 415–416.

Hoffman D.D. (2006): The scrambling theorem: A simple proof of the logical possibility of spectrum inversion. Consciousness and Cognition 15, 31–45.

Hoffman D.D. and Bennett B.M. (1986): The computation of structure from fixed-axis motion: rigid structures. Biological Cybernetics 54, 71–83.

Hoffman D.D. and Flinchbaugh B.E. (1982): The interpretation of biological motion. Biological Cybernetics 42, 197–204.

Horn B.K.P. and Brooks M., eds. (1989): Shape from Shading, MIT Press, Cambridge.

Horton J.C. and Hoyt W.F. (1991): Quadrantic visual field defects: A hallmark of lesions in extrastriate (V2/V3) cortex. Brain 114, 1703–1718.

Huang T. and Lee C. (1989): Motion and structure from orthographic projections. IEEE Transactions on Pattern Analysis and Machine Intelligence 11, 536–540.

Kant I. (1781/2003): Critique of Pure Reason, Dover, New York.

Kim J. (1993): Supervenience and Mind, Cambridge University Press, Cambridge.

Knill D. and Richards W., eds. (1996): Perception as Bayesian Inference, Cambridge University Press, Cambridge.

Koch C. (2004): The Quest for Consciousness: A Neurobiological Approach, Roberts & Company, Englewood.

Koenderink J.J. and Doorn A.J. van (1991): Affine structure from motion. Journal of the Optical Society of America A 8, 377–385.


Lehar S. (2003): Gestalt isomorphism and the primacy of subjective conscious experience: A Gestalt bubble model. Behavioral and Brain Sciences 26, 375–444.

Leopold D.A. and Logothetis N.K. (1996): Activity changes in early visual cortex reflect monkeys' percepts during binocular rivalry. Nature 379, 549–553.

Lloyd S. (2006): Programming the Universe, Knopf, New York.

Longuet-Higgins H.C. and Prazdny K. (1980): The interpretation of a moving retinal image. Proceedings of the Royal Society of London B 208, 385–397.

Lumer E.D., Friston K.J., and Rees G. (1998): Neural correlates of perceptual rivalry in the human brain. Science 280, 1930–1934.

Marr D. (1982): Vision, Freeman, San Francisco.

Marr D. and Poggio T. (1979): A computational theory of human stereo vision. Proceedings of the Royal Society of London B 204, 301–328.

Maynard-Smith J. (1982): Evolution and the Theory of Games, Cambridge University Press, Cambridge.

McGinn C. (1989): Can we solve the mind-body problem ? Mind 98, 349–366.

Miller G. (2005): What is the biological basis of consciousness ? Science 309, 79.

Noe A. and Regan J.K. (2002): On the brain-basis of visual consciousness: A sensorimotor account. In Vision and Mind: Selected Readings in the Philosophy of Perception, ed. by A. Noe and E. Thompson, MIT Press, Cambridge, pp. 567–598.

Pais A. (1979): Einstein and the quantum theory. Reviews of Modern Physics 51, 863–914.

Palmer S.E. (1999): Vision Science: Photons to Phenomenology, MIT Press, Cambridge.

Place U.T. (1956): Is consciousness a brain process ? British Journal of Psychology 45, 243–255.

Penrose R. (1994): Shadows of the Mind, Oxford University Press, Oxford.

Piaget J. (1954): The Construction of Reality in the Child, Basic, New York.

Pinker S. (1997): How the Mind Works, W.W. Norton & Co, New York.

Poggio T., Torre V., and Koch C. (1985): Computational vision and regularization theory. Nature 317, 314–319.

Prichard H.A. (1909): Kant's Theory of Knowledge, W.W. Norton & Co, Oxford.

Purves D. and Lotto R.B. (2003): Why We See What We Do: An Empirical Theory of Vision, Sinauer, Sunderland.

Radnitzky G. and Bartley W.W., eds. (1987): Evolutionary Epistemology, Rationality, and the Sociology of Knowledge, Open Court, La Salle.

Rees G., Kreiman G., and Koch C. (2002): Neural correlates of consciousness in humans. Nature Reviews Neuroscience 3, 261–270.


Rizzo M., Nawrot M., and Zihl J. (1995): Motion and shape perception in cerebral akinetopsia. Brain 118, 1105–1127.

Sabra A.I. (1978): Sensation and inference in Alhazen's theory of visual perception. In Studies in Perception: Interrelations in the History of Philosophy and Science, ed. by P.K. Machamer and R.G. Turnbull, Ohio State University Press, Columbus, pp. 160–185.

Sacks O. (1995): An Anthropologist on Mars, Vintage Books, New York.

Searle J.R. (1992): The Rediscovery of the Mind, MIT Press, Cambridge.

Searle J.R. (2004): Mind: A Brief Introduction, Oxford University Press, Oxford.

Schneiderman B. (1998): Designing the User Interface: Strategies for Effective Human-Computer Interaction, Addison-Wesley, Reading.

Shepard R. (2001): Perceptual-cognitive universals as reflections of the world. Behavioral and Brain Sciences 24, 581–601.

Skyrms B. (2000): Game theory, rationality, and evolution of the social contract. Journal of Consciousness Studies 7(1/2), 269–284.

Smart J.J.C. (1959): Sensations and brain processes. Philosophical Review 68, 141–156.

Smolin L. (2006): The Trouble with Physics, Houghton Mifflin, Boston.

Stapp H.P. (1993): Mind, Matter, and Quantum Mechanics, Springer, Berlin.

Stapp H.P. (1996): The hard problem: A quantum approach. Journal of Consciousness Studies 3(3), 194–210.

Stoffregen T.A. and Bardy B.G. (2001): On specification and the senses. Behavioral and Brain Sciences 24, 195–261.

Strawson P.F. (1966): The Bounds of Sense, an Essay on Kant's Critique of Pure Reason, Methuen, London.

Stroud B. (2000): The Quest for Reality: Subjectivism and the Metaphysics of Colour, Oxford University Press, Oxford.

Tong F., Nakayama K., Vaughan J.T., and Kanwisher N. (1998): Binocular rivalry and visual awareness in human extrastriate cortex. Neuron 21, 753–759.

Tononi G. and Sporns O. (2003): Measuring information integration. BMC Neuroscience 4, 31–50.

Tononi G., Srinivasan R., Russell D.P., and Edelman G.M. (1998): Investigating neural correlates of conscious perception by frequency-tagged neuromagnetic responses. Proceedings of the National Academy of Sciences of the USA 95, 3198–3203.

Tye M. (1996): Ten Problems of Consciousness, MIT Press, Cambridge.

Tye M. (2000): Consciousness, Color, and Content, MIT Press, Cambridge.

Ullman S. (1979): The Interpretation of Visual Motion, MIT Press, Cambridge.

Ullman S. (1980): Against direct perception. Behavioral and Brain Sciences 3, 373–415.


Wheeler J.A. and Zurek W.H., eds. (1983): Quantum Theory and Measurement, Princeton University Press, Princeton.

Whitehead A.N. (1929/1979): Process and Reality: An Essay in Cosmology, Free Press, New York.

Witkin A.P. (1981): Recovering surface shape and orientation from texture. Artificial Intelligence 17, 17–45.

Yuille A. and Buelthoff H. (1996): Bayesian decision theory and psychophysics. In Perception as Bayesian Inference, ed. by D. Knill and W. Richards, Cambridge University Press, Cambridge, pp. 123–161.

Zeki S. (1993): A Vision of the Brain, Blackwell, Oxford.

Zeki S., Watson J.D.G., Lueck C.J., Friston K.J., Kennard C., and Frackowiak R.S.J. (1991): A direct demonstration of functional specialization in human visual cortex. Journal of Neuroscience 11, 641–649.

Zihl J., Cramon D. von, and Mai N. (1983): Selective disturbance of movement vision after bilateral brain damage. Brain 106, 313–340.

Zihl J., Cramon D. von, Mai N., and Schmid C.H. (1991): Disturbance of movement vision after bilateral posterior brain damage: Further evidence and follow up observations. Brain 114, 2235–2252.

Zurek W.H., ed. (1989): Complexity, Entropy and the Physics of Information, Addison-Wesley, Reading.

Received: 08 January 2007
Revised: 15 May 2007
Accepted: 27 July 2007

Reviewed by two anonymous referees.
