
Counterfactual computational vehicles of consciousness

Ron Chrisley
COGS/Dept. of Informatics
University of Sussex

Toward a Science of Consciousness, Tucson, April 7th 2006

Outline

• Bishop's argument against computational explanations of consciousness

• My response: Acknowledge the counterfactual nature of physical states

• Segue: Use emphasis on counterfactual properties as motivation for a specific form of computationalism/representationalism

• Provides a plausible yet non-trivial enactivist model of perceptual experience: Imagination-Based Architecture

Bishop: "Dancing with Pixies"

• Poses a dilemma for computational explanations of consciousness

• Horns based on two notions of computation:

1. Non- or weakly-causal construal of computation

2. Strong, counterfactual causal construal of computation

Bishop's dilemma

Either notion has problems:

• Horn 1: Weak causality:

– Implies every computation is realised in every physical system

– So any claim that a given computation is sufficient for consciousness implies panpsychism

– Phenomenal "pixies" everywhere!

• Horn 2: Strong causality:

– Violates naturalism by appealing to non-physical aspects of a state

Rejecting the first horn

• Yes, weakly causal construal of computation implies panpsychism

• But:

– What's wrong with panpsychism anyway?

• Actually, a lot…

– Better (cf. Chalmers 1994, 1996; Chrisley 1994):

• Weak construal not really a causal construal of computation at all

• Thus does not capture what is meant by computation

Embracing the second horn

• Strong, counterfactual causal construal of computation:

– Identity of a computational state depends not only on actual causal relations…

– …but also on the causal effects (output, successor state) a state would have had, had different input been received

• Bishop: Subject to variants of Chalmers' Fading Qualia and Suddenly Disappearing Qualia arguments

Bishop's thought experiment

• Consider the operation of two robots:

– R1: "controlled by a program replicating the fine-grained functional organisation of a system known to have phenomenal states"

• A particular run of R1 with input I results in an actual sequence of behaviours B

– R2: any open physical system that generates B, given the same input I

More on R1 and R2

• For example, R1 might be controlled by an AI program that enables it to output classifications of objects presented to its cameras

– When given the input of a particular object, this results in a particular sequence of output classifications B:

– "The colour of this triangle is a bit more red than the square I just saw"

• R2, meanwhile, can just have a hard-wired circuit that happens to output B (regardless of input!)

Branching FSA

• We can conceive of R1 and R2 as finite-state automata with branching and non-branching states, respectively:

Non-branching FSA

(Diagrams from Bishop 2002)
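
To make the contrast concrete, here is a minimal sketch of the two machines (my own illustration in Python; the state names, inputs, and output strings are invented, not from Bishop 2002). R1's next state depends on its input; R2 steps through a hard-wired chain no matter what arrives:

```python
# Hypothetical sketch: R1 as a branching FSA, R2 as a non-branching one.

# R1: next state and output depend on both current state and input.
R1_TRANSITIONS = {
    ("s0", "triangle"): ("s1", "saw a triangle"),
    ("s0", "square"):   ("s2", "saw a square"),
    ("s1", "triangle"): ("s1", "another triangle"),
    ("s1", "square"):   ("s2", "now a square"),
    ("s2", "triangle"): ("s1", "back to a triangle"),
    ("s2", "square"):   ("s2", "another square"),
}

# R2: a hard-wired chain that emits the behaviour sequence B regardless of input.
R2_CHAIN = ["saw a triangle", "now a square"]

def run_R1(inputs, start="s0"):
    state, outputs = start, []
    for inp in inputs:
        state, out = R1_TRANSITIONS[(state, inp)]
        outputs.append(out)
    return outputs

def run_R2(inputs):
    # Input is received but never consulted: no branching states.
    return list(R2_CHAIN[:len(inputs)])

# On the actual run with input I, external behaviour is identical...
I = ["triangle", "square"]
assert run_R1(I) == run_R2(I)
# ...but on any counterfactual input, only R1 would respond differently.
```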

The computationalist's view

• Bishop: "Hence, although the external behaviour of the two systems over the time interval is identical [viz, B], for [a computational theory of consciousness], only R1 would experience genuine phenomenal states."

• What's wrong with that?

Transforming R1 into R2

• One by one, delete one of the N state transition sequences of R1 that are not actually used in the case under consideration, transforming R1 into R1_1, R1_1 into R1_2, … and finally into R1_N

• R1_N will be formally (computationally) identical with R2

• So for a computationalist, R1_N, like R2, has no conscious experience
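
A sketch of this transformation, continuing the illustrative transition table above (the deletion order and the value of N are artifacts of my example):

```python
# Continuing the R1 sketch: delete, one at a time, the transitions R1
# never uses on the actual run with input I.

def used_transitions(transitions, inputs, start="s0"):
    """Return the set of transitions actually exercised on a given run."""
    state, used = start, set()
    for inp in inputs:
        used.add((state, inp))
        state = transitions[(state, inp)][0]
    return used

used = used_transitions(R1_TRANSITIONS, I)
unused = [key for key in R1_TRANSITIONS if key not in used]  # N of these

# R1_1, R1_2, ..., R1_N: each step deletes one unused transition.
table = dict(R1_TRANSITIONS)
for key in unused:
    del table[key]  # each deletion is a physical change to the machine

# R1_N: no branching remains, so it is formally identical to R2 on this run.
assert set(table) == used
```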

R1, R2 & another dilemma

Bishop:

• "What happens to the phenomenological experience of R1 as it incrementally undergoes the above transformation?"

• "Either its experience of phenomenal states must gradually fade (Fading Qualia) or it must switch abruptly at some point (Suddenly Disappearing Qualia)."

Me:

• Not necessarily: there might be several, spaced, discrete transitions.

• But let that pass…

No SDQ?

Bishop:

• Rule out first horn: Suddenly Disappearing Qualia

• It would "imply that the removal of one such privileged branching state transition instruction would result in the complete loss of the robot’s phenomenal experience"

Me:

• Not convinced this is a problem

• But agree to rule it out for the sake of argument

A general argument?

• Bishop presents an argument not against the second horn (Fading Qualia), but against computationalism in general:

• The computationalist's position implies the existence of "a system, whose phenomenal experience is contingent upon non-physical interactions with sections of its control program that are not executed – a form of dualism."

• "Hence, if phenomenal states are purely physical phenomena, the phenomenal experience of the two robot systems, R1 and R2, must be the same."

Warning sign: Too strong

• An indication that Bishop's argument can't be right:

– It proves too much

• If right, it would imply that there could never be a physicalist computational explanation of anything…

– …not even computers!

Misunderstanding the physical

• Bishop's main mistake: claiming that differences in counterfactual behaviour do not constitute physical differences

• Presumably, it is by virtue of some physical difference between a state of R1_n and the corresponding state of R1_(n+1) that the former has a counterfactual property the latter lacks

Misunderstanding the physical

• Note that to delete the nth transition, one would have to physically alter R1_(n-1)

• So despite Bishop's claim, if R1 and R2 differ in their counterfactual formal properties, they must differ in their physical properties

• Causal properties (even counterfactual ones) supervene on physical properties

Counterfactuals are key

• So much for Bishop's argument against computational accounts of consciousness

• But although Bishop's argument does not touch computationalism as such, it inspires a relevant critique of the form computationalism usually takes

• That is, standard computationalist theories of consciousness neglect the importance of counterfactual properties

Segue…

"Actualist" computationalism

• Typically, computationalist (or functionalist) theories attempt to map:

– A perceptual phenomenal content

– To a computational (functional) state

– By virtue of the latter's actual causal origins (and perhaps its actual causal effects)

The Grand Illusion?

• For example, some argue:

– Change blindness data show that only foveal information has an effect on our perceptual state

– Thus, our perceptual experience is only of the foveated world

– Any appearance that anything else is experienced is incorrect


Being counterfactual

• But a computationalist theory that places explicit emphasis on the role of counterfactual states can avoid the Grand Illusion result

• E.g.: The phenomenological state corresponding to a given computational state includes not just current foveal input

• But also the foveal input the computational system would expect to have if it were to engage in certain kinds of movement

• "Imagination-based architecture" (IBA)

More on IBA

• These expectations can be realized in, e.g., a forward model, such as a feed-forward neural network

• The model is updated only in response to foveal information

– E.g., it learns: "If I were to move my eyes back there, I would see that (the current foveal content)"
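
A toy realization of this update rule (my own sketch; the slides mention a feed-forward network, but a simple lookup table, with invented names, is enough to show the principle):

```python
# Toy IBA forward model: expectations about foveal content, revised only
# when the system actually foveates. All names here are illustrative.

class ForwardModel:
    def __init__(self):
        # Maps an eye movement (here, simply a target location) to the
        # foveal content the system expects to see after making it.
        self.expected = {}

    def foveate(self, location, world):
        """Actually look somewhere: only here is the model updated."""
        content = world[location]
        self.expected[location] = content  # "If I looked back, I'd see that"
        return content

    def expectation(self, location):
        """What the system would expect to see, without looking."""
        return self.expected.get(location)
```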

The IBA explanation

• Thus, change blindness can be explained without denying peripheral experience

• Consider the system after an element of the scene has changed, but before the system foveates on that part of the scene

• The expectations of the forward model for what would be seen if one were to, say, foveate on that area, have not been updated
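
Using the toy model above, the change-blindness situation looks like this (scene contents invented):

```python
# The world changes off-fovea, but the expectation is only corrected
# on the next foveation of that location.
world = {"left": "red triangle", "right": "blue square"}
m = ForwardModel()
m.foveate("left", world)
m.foveate("right", world)

world["left"] = "green triangle"                  # unattended change
assert m.expectation("left") == "red triangle"    # outdated expectation persists
m.foveate("left", world)                          # foveating there updates it
assert m.expectation("left") == "green triangle"
```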

No Grand Illusion

• According to IBA, the (outdated) expectation is a part of current experience

• Thus no change is detected or experienced

• So our experience is just what it seems


Elaborations to IBA

• Only a simplistic version of IBA presented here

• Can be elaborated to include change not instigated by the system itself

– E.g., expectations of what foveal information one would receive if the world were to change in a particular way

More elaborations to IBA

• Weighted contributions to experience

– Current foveal info strongest of all

– Expected foveal input after a simple movement a little less

– Contribution of expected results of complex movements/sequences inversely proportional to their complexity
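
The slides give only the ordering, not a formula; one way to cash it out (purely an assumption on my part) is a weight inversely proportional to the complexity of the movement sequence, with complexity 0 for the current foveal input:

```python
# Hypothetical weighting: contribution falls off with movement complexity.
def contribution_weight(complexity, k=1.0):
    # Maximal for actual foveal input (complexity 0), then decreasing.
    return k / (1 + complexity)

# Ordering matches the slide: fovea > simple movement > complex sequences.
assert contribution_weight(0) > contribution_weight(1) > contribution_weight(5)
```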

Open questions for IBA

• E.g., what is the experience at a non-foveated part of the visual field if one has different expectations for what one would see depending on the motor "route" one takes to foveate there?

– Some "average" of the different expectations?

– Winner take all?

– Necker-like shift between top n winners?

– No experience at that part of the field at all, as coherence (systematicity, agreement) at a time is a requirement for perceptual experience?
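
Some of these candidate answers can be stated as combination rules over the (possibly conflicting) expectations delivered by different motor routes; a hypothetical sketch of two of them:

```python
# Two of the candidate rules, over route-dependent expectations.
from collections import Counter

def winner_take_all(expectations):
    # The most common route-dependent expectation wins outright.
    return Counter(expectations).most_common(1)[0][0]

def coherence_required(expectations):
    # No experience at that part of the field unless all routes agree.
    return expectations[0] if len(set(expectations)) == 1 else None

assert winner_take_all(["red", "red", "green"]) == "red"
assert coherence_required(["red", "green"]) is None
```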

Announcements

• For philosophers of Cognitive Science/AI:

– My department, Informatics at the University of Sussex/COGS, is hiring

– Tell your friends/colleagues!

• Those interested in Machine Consciousness:

– The conference I am chairing, BICS 2006 (Greek Islands, October), is still accepting submissions until the end of April

• Email me about either/both: [email protected]


Thank you!

• Thanks to Mark Bishop and Rob Clowes for helpful discussions