
Ethics and Consciousness in Artificial Agents

Steve Torrance

School of Health and Social Sciences, Middlesex University, Enfield, EN3 4SA, UK.

and

Centre for Research in Cognitive Science (COGS), University of Sussex, Falmer, Brighton BN1 9QH, UK

[email protected]

Abstract

In what ways should we include future humanoid robots, and other kinds of artificial agents, in our moral universe? We consider the Organic view, which maintains that artificial humanoid agents, based on current computational technologies, could not count as full-blooded moral agents, nor as appropriate targets of intrinsic moral concern. On this view, artificial humanoids lack certain key properties of biological organisms, which preclude them from having full moral status. Computationally-controlled systems, however advanced in their cognitive or informational capacities, are, it is proposed, unlikely to possess sentience and hence will be unable to exercise the kind of empathic rationality that is a prerequisite for being a moral agent. The Organic view also argues that sentience and teleology require biologically-based forms of self-organization and autonomous self-maintenance. The Organic view may not be correct, but at least it needs to be taken seriously in the future development of the field of Machine Ethics.

Keywords: machine ethics, humanoid robotics, organism, sentience, moral status, rationality, empathy, autopoiesis, enaction.

1. Humanoid Ethics: The Challenge

The emerging field of Machine Ethics deals with the ways in which human-made devices might

perform actions that have ethical significance. This may involve, for example, an online advisory

system commenting on the ethical implications of a business investment, or a human-like robot making


ethical demands upon us for energy resources that it requires to continue to survive. Over recent

decades we have seen many developments in the area of humanoid robotics, with many more projected

in the near and medium future. In what follows I will examine some of the implications of such

developments, and particularly issues concerning the entry of human-like artificial agents into our

moral universe.

We humans display a variety of morally flavoured attitudes – often affectively intense – towards

inanimate mechanisms and other entities which play special roles in our lives. Children treat play-

objects as mock-allies or as enemies: many nascent moral attitudes and interactions are rehearsed in

such play. We fabricate similar patterns of second-hand moral play in adult life, for example, feeling

pride at a newly purchased appliance, vehicle, etc.; and we vent our anger at them when they

‘misbehave’. However, in our more reflective moods we readily affirm that such affective states have

no rational validity – we observe a clear distinction between the agency of machines and morally

significant agency. The moral universe that we inhabit in our most reflectively sanitized moments has

very different contours from the one shaped by our fantasy-lives.

Documents such as the UN Universal Declaration of Human Rights (1948) provide secular

frameworks for moral agency and attitudes. Article 2 in that document affirms, for instance, that

‘Everyone is entitled to all the rights and freedoms set forth in this Declaration, without distinction of

any kind, such as race, colour, sex, language, religion, etc.’ ‘Everyone’ here is interpreted to mean all

human beings: the Declaration does not cater for the rights of non-human animals, let alone possible

future artificial agents, even ones that may exhibit rich human-like properties. Clearly a lot of the

issues concerning the extension of the moral universe to admit artificial humanoids or androids will be

the same as those concerning the moral status of some of the more advanced animal species, but there

are also important differences. (See Calverley 2005b for some useful discussion.)

I wish to focus this discussion specifically on artificial humanoids (taking the term ‘humanoid’

quite widely). This limitation of the scope of the discussion involves leaving out a lot of possible kinds

of artificial systems that are often considered in the field of machine or information ethics, such as

screen-based knowledge-processing systems, computer animated virtual agents, non-humanoid robots

of various sorts, and organism-machine hybrids – (for instance humans and other biological creatures

who may, in the future, receive massive brain implants with no loss of, and possibly highly enhanced,

functionality, not to mention full brain-to-silicon uploads).


Also, we will be considering here only those artificially created humanoid agents, current and

future, that are, because of their physical makeup, clearly agreed to be machines rather than

organisms. The case of artificially produced creatures displaying biological or quasi-biological

properties that are so rich that they no longer merit being called machines (or which merit such a

description only in the rather stilted sense in which we humans might be said to be machines), is a somewhat

separate one, and will not be at the foreground of our discussion here.

The similarities between humans and even the most advanced kinds of computationally based humanoid robots likely to be practicable for some time are, in one way, very limited, simply because of the enormous design-gaps between electronic technologies and naturally-occurring human physiology.

However, in other ways the similarities are, at least potentially, very great. Thanks to mainstream AI

techniques, as well as more innovative research advances, humanoid robots may soon have a variety of

linguistic, cognitive and other functional capacities that are far closer to us than those possessed by any

non-human animals. Admittedly, there could also be areas of marked underperformance in robots,

which may stubbornly resist improvement, despite massive R&D effort. All the same, the cognitive

and physical powers of humanoid robots are increasing rapidly, and it appears very likely that the

market for humanoid robots will expand quite rapidly, especially as their costs come down, and as

functional niches are more clearly defined.

There are a number of ways in which such artificial humanoids may enter into the ethical domain.

One set of issues concerns how we might use AI techniques to simulate or replicate ethical judgment

and action in robots and other agents, where there is no implication that such agents are to be

considered as themselves moral agents, for instance as having moral responsibilities or moral claims –

nor, indeed, mental properties of any sort. This is an important area of work, as artificial agents

acquire the ability to engage in forms of autonomous action and ‘decision-making’ (the term is put in

quotes to indicate that no direct psychological attributions are necessarily being intended here). This

area is part of the more general field of the social responsibilities of technology producers to ensure

that their products conform to general requirements of safety, legality, respect for privacy, and so on.

Much of the current attention given to ethics in AI applications research involves seeking to build

explicit models of ethics into systems that are being developed, so as to ensure such systems have more

reliability and ‘scrutability’, given the progressively increasing complexity and autonomy of such

systems. This is a vitally important area. As some people in the field of Machine Ethics are pointing


out, the low general level of awareness of these issues within the AI research community (not to

mention the public) is something of a scandal.

Our discussion has crucial consequences for this important area (some all-too-brief comments on

some of the issues are made in the penultimate section of this paper). However it is not the direct

concern of the present discussion. My main focus is more long-term and perhaps more theoretical –

yet with, I suggest, important practical reverberations. This is the question of artificial agents as

potential bearers, in themselves, of inherent ethical duties and/or rights – that is, as agents which (or

who?), on the one hand, (1) are considered as themselves having moral obligations to act in certain

ways towards us and each other; and who, on the other, (2) might themselves be fit targets of moral

concern by us (that is, as beings with their own moral interests or rights, as well as moral

responsibilities).

Clearly this field raises questions which are continuous with the more short- and medium-term

questions mentioned above. These concern, for instance, how to model ethics as a domain of rules or

knowledge within intelligent systems. However one of the things that makes the topic of the present

paper distinctive is that it deals primarily with artificial agents to which (it is assumed) it makes sense

to attribute, in some literal sense, various kinds of psychological or personal properties – attributes

such as intelligence, intentionality, decision-making capabilities, complex and fluent modes of

sensorimotor interaction with the environment, and also forms of social engagement, emotion and

perhaps even sentience or consciousness. In other words we are primarily concerned with the idea of

agents which are, in a deep sense, ethical agents in their own right (rather than simply as working

simulations or models of ethical agency). This area of concern of course directly relates to a

longstanding philosophical area of debate: under what conditions should we consider our machines to

have genuine minds?

As we will see, questions of mental reality and of the reality of moral status are inextricably

linked.

The discussion in this paper will, I suggest, have crucial implications for human life in future

generations, given certain reasonable assumptions about how AI and robotics technologies may

develop. We could compare the current stage of research into artificial agents to that of the production

of automobiles about a century ago. Considerable effort was put by the founding fathers of the

automotive industry into making cars work efficiently and safely, and subsequently into large-scale

production. Little serious thought, if any, was put into the social and environmental implications of


their spread across the world, with consequences of which we are only too aware now. The

development of artificial people may now require the kind of deep and fearless thinking that, had it

taken place in relation to automobile production and related industries at an earlier stage, might have

moderated some of the environmental and social threats of today. The uninhibited, unregulated global

proliferation of human-like artificial agents may also carry environmental threats. But there would

also clearly be deep moral questions involved (not to mention sociological, political, legal, and

economic, ones). Just as we now regard ourselves as having responsibilities towards all humanity

(both in terms of what they ask of us and in terms of what we ask from them), so we may in the future

feel obliged (appropriately or inappropriately)i to extend those responsibilities towards myriad artificial

people. (To get a grip on possible numbers, imagine, for example, one advanced functioning

humanoid per natural human household in developed societies.) I will not engage in too much crystal

ball gazing here. However I will discuss some of the foundational issues which will, it is to be hoped,

enable effective crystal ball gazing to be done by others.

2. Humanoids and Ethical Status: The Presence and Absence of Consciousness

The main question to be discussed here, then, is how artificial humanoid agents might acquire

intrinsic moral status – either as ethical ‘consumers’ (beings towards whom we might have moral

obligations), or as ethical ‘producers’ (beings who may have moral obligations and responsibilities

themselves). This question is likely to raise a number of sceptical doubts. Many might think that it’s

unlikely that there will ever be circumstances in which it will be appropriate for humanoid

machines – if they are developed using only currently known computationally-based technologies, as

employed in research in AI, robotics and related fields – to be treated as full-blooded ethical subjects

who intrinsically merit ethical regard from us. It might be similarly doubted whether such

technologies will be sufficient to enable the production of agents who do not just behave in accordance

with ethical norms, but are genuinely morally responsible for their ethical choices.

A number of reasons could be given for such scepticism. One is that the gaps in performance

(‘mental’ and ‘physical’) between such artificial humanoids and humanity proper will always remain

too great, however advanced the technical developments. However I believe it is unwise to pre-judge

how nearly indistinguishable from us humanoids might become in terms of outward performance.

Another reason for such scepticism is this: independently of the question of performance, there is the

matter of ‘internal’ state. A sceptic about AI-based moral agents may say that it makes sense for


someone to count as a moral agent, or as a target of moral concern, strictly speaking, only if they enjoy

the kind of sentience that biological humans (and other biological creatures) enjoy. Thus it might be

said that my ethical attitude towards another human is strongly conditioned by my sense of that

human’s consciousness: that I would not be so likely to feel moral concern for a person who behaved

as if in great distress (for example) if I came to believe that the individual had no capacity for

consciously feeling distress, but was simply exhibiting the ‘outward’ behavioural signs of distress

without the ‘inner’ sentient states. Similarly it might be said that I would be less prone to judge you as

morally responsible, as having moral obligations, if I came to believe that you were a non-sentient

‘zombie’. So there may be a kind of internal relationship between having moral status and having

qualitative conscious states. And, it could be argued, AI-based agents, even ones which are

intelligent, autonomous, linguistically adept, and capable of fluent social interaction, will always be

just non-conscious behavers if such capacities are fundamentally based only on computational

architectures implemented in electronic components.ii

Could future humanoid robots have qualitative conscious or sentient states? (We limit our

consideration to robots whose actions are generated by computational control systems somewhat in the

way present-day robots’ actions are.) An affirmative answer would be challenged, I think, at least among wide sections

of the population. So if being a genuine target of moral concern or being a genuine moral agent does

require having consciousness or sentience, then the common view that robots cannot possess such

conscious states would have an important implication. On this view artificial humanoids would never

count as being moral agents in the full sense (although perhaps they could in some secondary sense).

Maybe this latter view is wrong and will one day dissolve (as many AI researchers from Turing

onwards have proposed).iii Perhaps one day the technology of computationally-based humanoid

robots may get to the stage where everyone agrees that they DO have such first-person qualitative

states. There is scope for considerable debate about this, and I make some comments on it later on in

this paper. For much of the discussion, however, I will be more concerned with the moral status of

humanoids that are assumed NOT to possess phenomenal consciousness (even though those

humanoids may have other mental properties – for instance intelligence, purpose, etc.). Even

optimists about Machine Consciousness would agree that genuinely conscious, AI-based robots are

unlikely to be around for some time. But perhaps we may see robots sooner which, while not admitted

to be conscious in a genuine, phenomenal or qualitative sense, do exhibit apparently intelligent,

purposive, autonomous forms of behaviour. Moreover such agents – even while not being considered


to be conscious – may be capable of performing actions of a sort which, if performed by humans,

would be considered to have ethical import of some kind or another.

Some people talk of ‘functional’ or ‘cognitive’ consciousness and attribute it to robots and other

systems even where they do not possess ‘phenomenal’ consciousness (see Franklin, 2003). Some

people might additionally argue that any mechanism that successfully fulfils conditions for functional

consciousness will thereby also merit being treated as qualitatively or phenomenally conscious. I

believe that such a view operates with a seriously impoverished conception of what it is to be

phenomenally conscious (Torrance, 2005). However here I will for the main part simply limit myself

to considering humanoids based on computational technology which are assumed NOT to instantiate

phenomenal consciousness, or sentience, in the way that biological creatures, including humans, can

(even if they might be thought to possess functional consciousness).

My central concern, then, will be with the question of what moral status should be assigned to

robots or humanoids which do not have qualitative awareness - which are to be classed as, in the

philosophical sense, ‘zombies’. Many of these humanoids may behave in ways that convince a lot of

people that they are sentient or qualitatively aware; however, ex hypothesi such people would be wrong

in that belief concerning the artificial agents being considered here, given that we will be considering

only agents in which phenomenal consciousness is taken to be absent.

Leaving the presence or absence of consciousness aside for now, let me point out that there are a

great many ways in which artificial humanoids could, arguably, be implicated in our moral

universe of discourse and moral relationships. Here are two examples.

1. Humanoids (and other devices which have information-processing functions built into their

way of working) could communicate with us on matters of moral concern, without necessarily having

any status themselves as full moral participants (as holders of literal moral rights or duties). Thus they

could assist us with our own moral decision-making, perhaps by abstracting principles from corpora of

previous moral decisions made by ourselves or others; again they could collate factual data concerning

moral decisions which we need to make in ways that aid our decision-making; they could participate in

ethical training activities; and so on.

2. Humanoids might be able to engage in acts which, if done by humans, would be considered to

have moral dimensions of various sorts (for example, acts which might be appraised as kind,

dangerous, brave, dishonest, cruel, etc.). There are various other ways in which humanoids and other


machines may come to instantiate properties which have an ethical or quasi-ethical character. Some of

these will be explored further towards the end of this paper.

Returning to the UN Declaration of Rights, are there any circumstances under which it might be

morally appropriate for us to consider extending such rights specifically to humanoids that are taken to

be devoid of phenomenal consciousness? And should any such list of rights for such artificial agents

be accompanied by a list of duties for them - including, no doubt, the duty to respect human rights? In

considering these questions we will need to reflect further on how to account for the enormous

potential variability in these artificial robotic agents – variations in appearance, behaviour and

intelligence – even when keeping our discussion within the boundaries of the broadly humanoid

paradigm. Such an issue may seem remote now, when humanoid robots are for the most part

curiosities in U.S., European and Japanese research labs. However as pointed out earlier, they could,

within decades, become successful and widely available consumer product lines, possibly spreading

rapidly across the planet, and bringing, as do all mass-technologies, a host of peculiar problems

(including ones concerning competition with us for energy and other resources, and so on). If so, the

moral status of such beings will become a matter of considerable social concern and debate.

3. The Organic View of Ethical Status

Consider an intelligent humanoid robot of a relatively primitive kind that

displays some ‘delinquent’ behaviour – let’s say it performs the job of a security

guard at a factory site, and it fails to respond with sufficient vigilance towards

potential intruders, and a burglary takes place on its watch. We might well hold

the humanoid ‘responsible’ or ‘accountable’ for its lack of care. (See Floridi and

Sanders, 2004.) We would also be likely to blame the manufacturers or suppliers

of the bot for defects in its design or testing (or maybe the owners of the bot

would be blamed for failing to follow the manufacturer’s instructions in how it was

to be set up or trained).

But these two responsibility-attributions – one against an artificial agent and

the other against its human creators or users – are of rather different standing.

The ‘fault’ that might be attributed to the bot would seem to be of a metaphorical

or indirect kind compared to the ‘fault’ attributed to the designers or owners. For


want of better terms, we might call the latter kind of fault (applied to the relevant

humans) ‘primary’ moral responsibility and the former (applied to the artificial

agent) ‘secondary’ responsibility. It would be at best misleading to say of the artificial agent (at least if it is constructed on anything like current technical lines) that it

carried moral responsibility sans phrase. A similar demarcation can be drawn in

the case of moral deserts or claims: those which people have on other people (for

example children on parents) would seem to be much more strongly embedded

than, say, any deserts or claims that a humanoid might have towards its human

owner.iv

There is a fairly obvious, no-nonsense, way that the contrasts just alluded to

might be conceived. This is to say that morality is primarily a domain of organic

human persons – and possibly of other non-human organic beings to which

personhood might be usefully attributed. Robots and other possible

computationally-based agents of the present or future need not, it could be said,

apply for ‘primary’ ethical status: they do not have the appropriate qualifications

of natural, biological creatures, and in particular they don’t have the relevant

properties of personhood (no matter that they might be very sophisticated in purely cognitive terms). We might call this the Organic view of ethical status. In fact the Organic view can come in many versions, and might best be seen as a cluster of component positions, perhaps fitting together somewhat loosely, and maybe not all of them needing to be held together. In what follows I am going to examine a few of these components; others will be considered only briefly, or not at all, for the sake of a manageable discussion.

i Future decision-makers may wrongly view certain kinds of artificial beings as having ethical status when they don’t merit this (for instance as being sentient creatures with needs for certain kinds of benevolent treatment when they actually don’t have sentient states). If there are vast numbers of such beings this could involve a massive diversion of resources from humans who could have greatly benefited from them. The converse may also be the case: conscious, feeling, artificial beings may be mistakenly dismissed as non-sentient tools, so that their genuine needs (which, arguably, in this scenario, ethically ought to be given consideration) may be disregarded. In either scenario a great moral wrong could be committed, in the one case against humans, and in the other against humanoids – and more so as the numbers of such artificial beings are imagined to increase.

ii See Torrance, 2000 for a defence of the position that a strong distinction should be drawn between cognitive and phenomenological properties of mind. It is argued that mentality may not be a composite field: some types of mental property may be computational, while others may require an inherently biological basis. Thus AI agents could be the subjects of genuine psychological attributions of a cognitive kind, while it may be that psychological attributions requiring subjective or conscious states could not apply to such agents merely in virtue of their computational features.

iii Many within the AI community would strongly support the view that computational agents could in principle be fully conscious. See the papers assembled in Holland 2003, for example. For some doubts on that view see Torrance, 2005.

iv The use of the term ‘owner’ in the last example is itself a telling sign of the difference in ethical status between humans and humanoids. The UN Declaration of Human Rights is designed, among other things, to outlaw ownership by one human of another. A prohibition of ownership of humanoid robots by humans is unlikely to be agreed upon for a considerable time, if ever.

The Organic view, as I will articulate it, proposes at least these three things.

(1) There is a crucial dichotomy between beings that possess organic or

biological characteristics, on the one hand, and ‘mere’ machines on the other;

and, further, that

(2) It is appropriate to consider only a genuine organism (whether human or

animal; whether naturally-occurring or artificially synthesized) as being a

candidate for intrinsic moral status – so that nothing that is clearly on the

machine side of the machine-organism divide can coherently be considered as

having any intrinsic moral status.

(3) Moral thinking, feeling and action arise organically out of the biological history of the human species, and perhaps out of that of many more primitive species, which may have certain forms of moral status, at least in prototypical or embryonic form.

The Organic view, as we have said, may come in many forms, but in what

follows I propose to consider a central variant of it that takes as pivotal the notion

of sentience (or phenomenal consciousness) discussed earlier.v Sentience is seen

by many people as a base-line for a certain sort of intrinsic moral status. A dog

may not be considered as having any moral duties or responsibilities (it is surely,

so it would be said, not morally to blame if it messes the house); but for a great

number of people it is at least a ‘receptive’ moral agent in the sense that such

people believe we have a moral duty not to treat it cruelly or take its life for no

reason, etc.

Bearing in mind this notion of sentience or phenomenal consciousness, we can

add another couple of components to the Organic position. These are:


(4) Only beings which are capable of sentient feeling or phenomenal

awareness could be genuine subjects of either moral concern or moral appraisal.

(5) Only biological organisms have the ability to be genuinely sentient or

conscious.

Taking these various propositions together, the Organic view would seem to definitely exclude

robots from having full moral status – at least if we take the term ‘robots’ to include artificial

humanoids and related sensorimotor mechanisms designed and built using current computational and

electronic technologies. Be they ever so human-like or ‘person’-like in outward form and

performance, it would be very unlikely that they could count as genuine moral subjects, on the Organic

view. This is because on that view only beings whose inner constitution enables genuine sentience or

feeling to be attributed to them deserve to be considered as moral subjects in either the sense of targets

of moral concern or that of sources of moral expectation, and electronic ‘innards’ aren’t, as a matter of

fact, able to generate real sentience.

The final element of the Organic view listed above would seem to discount the possibility of

electronic-based robots possessing sentience. A strong version of that claim would say that only

naturally-occurring biological creatures could possibly be part of the moral realm. However a more

permissive variant could admit creatures which possess an artificial biology – perhaps one produced

through some development of nanotechnological construction from the molecular level up, or from

some other process of which we currently have not even the least glimmer of an idea. But that

concession would still seem to be likely to exclude almost every kind of robotic or virtual humanoid

agent envisaged using current design and production techniques in the field. At the very least, then,

unless and until the technology of creating artificial biological organisms progresses to a stage where

genuine sentience can be physiologically supported, the Organic view would claim that no ‘mere’

machine, however human-like, intelligent and behaviorally rich its functionality allows it to be, can be

seriously taken as having genuine moral status – either as a source or as a recipient of moral respect.

v The notion of sentience should be distinguished from that of self-consciousness: many beings which possess the former may not possess the latter. Arguably, many mammals possess sentience, or phenomenal consciousness – they are capable of feeling pain, fear, sensuous pleasure, and so on. Nevertheless it is usually taken that such mammals don’t standardly possess the ability to articulate or be aware, in a higher-order way, of such sentient states, and so lack self-consciousness.


Supporters of the Organic view are likely to see those who dissent from the view as taking over-

seriously the sentimentalist, fantasist proclivities that we all have – our tendencies, that is, towards

over-indulgence in affective responses towards objects which do not intrinsically merit such responses.

Such responses to machines may, the Organic view accepts, be all too natural, and no doubt they will

need to be taken seriously in robot design and in planning the practicalities of human-robot interaction.

Perhaps ‘quasi-moral relationships’ may need to be defined between computer-run robots and their

human users, to make it easier for us, the human controllers, to modulate our relations with them.

(After all, quasi-moral attitudes abound in our relations to animals, particularly domestic pets: we

praise them extravagantly as being ‘faithful’ or ‘obedient’ companions, and we fall easily into attitudes

of moral censure when they are ‘bad’, ‘lazy’, etc.) But on the Organic view, whatever the case with

animals (which after all are considered to be sentient, and therefore have at least a foot within the

moral domain) any attributions of praise or blame made towards non-sentient robots, or any stance of

ethical consideration or concern, would be appropriate at best for pragmatic reasons only. To treat

artificial humanoids as moral agents (as either having responsibilities to us or as having moral claims

on us) could not be based on the objective moral status of such robots, which would remain

simply implements, of merely instrumental value, having at root only functional, rather than personal,

status.vi

4. Fleshing Out the Organic View

Some time ago P.F. Strawson introduced a distinction between two different kinds of attitudes that

people may display to other human or non-human agents (Strawson 1974). On the one hand there are

reactive attitudes, typified by emotions such as resentment, gratitude, censure, admiration, and other

affective responses implying an attribution of responsibility to the agent of whom the attitude is held.

On the other hand there are objective attitudes, displayed by us towards small children, animals, and

humans who suffer from various kinds of mental deficit: in such cases we withhold attributions of

responsibility and hence praise and blame. (To adopt an objective attitude to an individual, when this is based on the attribution of diminished responsibility, in no way implies a diminution of moral response to such an individual in other respects – indeed it may in some cases imply an increased sense of moral responsibility towards such individuals.) We may find ourselves, in future scenarios, unable to refrain from displaying reactive attitudes towards certain kinds of human-like robots in many situations. All the same, according to the Organic view such attitudes, however unavoidable, would be as rationally inappropriate as the praise or blame that we bestow on animals when they act in ways that would be extolled or censured if done by responsible humans; or indeed as inappropriate as the feelings of accusation and rage we currently unleash on our washing-machines, TVs, etc. when they fail to match our expectations.

vi There are even more extreme versions of the Organic view, which assert one or both of the following: (A) only organic beings could be subjects of any psychological states; (B) only naturally occurring organic beings could be subjects of sentient (or other) psychological states. I have resisted taking either of these more extreme positions. Reasons for doubting (A), and for allowing that some types of cognitive mental states could be applied in a full psychological sense to computational systems, will be found in Torrance, 2000. As for (B), this rules out the possibility of creating complete artificial replications of biological organisms via any means whatsoever – including via massive molecular-level engineering.

In what follows I wish to provide a little more flesh to the skeleton of the position I have been

calling the Organic view. I believe that such a position deserves careful consideration – even though it

goes somewhat against the grain of much of the dominant culture in current research in autonomous

agents and robotics. The position may well be wrong, or at least in need of careful qualification, but

that can become clear only after it has been carefully evaluated in debate. There are, as I will try to

show, some substantial theoretical considerations which count in its favour – although there may also

be reasons for questioning it, at least when expressed in a very stark form.

My discussion will centre on two key properties which feature in the string of component

propositions listed in the last section as elements in the Organic view. These are sentience (or

phenomenal consciousness) and biological constitution. The first of these notions will play a central

role in the next two sections, and the second will occupy us in the section after that. First, however, it

is necessary to make more explicit a key distinction which has come up in discussion already, but

which needs to be brought into focus in order to clarify the discussion somewhat. This is the

distinction between moral ‘agency’ and moral ‘patiency’ – or, put differently, between being a moral

‘producer’ and being a moral ‘consumer’.

It seems true of at least some mental properties – for instance susceptibility to undergoing

experiences of different sorts – that, in attributing such properties to an agent A, I commit myself to

viewing A in a certain moral light, to treating A as potentially a subject of moral concern.vii On the one

hand A may be treated as a target of morally-framed action or concern – that is, as a being whose

states of experience, well-being, etc., may set moral constraints on my actions or attitudes towards A.

But on the other hand, A may also be treated by me as a possible source of morally-framed action or

concern – that is, as an agent whose actions and attitudes may intelligibly be appraised in moral terms,


and who may be expected to regard itself as being subject to moral constraints upon its treatment of, and stance towards, other (natural or artificial) beings. Being a moral target and being a moral source (a moral patient and a moral agent) represent two major and complementaryviii ways in which one may

qualify as a moral being.ix

Do the classes of moral targets and moral sources (moral patients and agents) coincide? (See

Floridi and Sanders, 2004.) Probably not: we think of many animals as moral targets without seeing

them as sources. That is, many believe themselves to have moral commitments towards various classes

of animals without thinking that those animals have any converse moral commitments to them. And it

would seem as though an artificial agent could sensibly be considered as a source of moral action or

appraisal, without necessarily being considered a target of moral action from us. Thus an agent A’s

control system might enable it to make decisions which to us are of moral consequence, simply on the

basis of a collection of decision-rules that enable A to generate various judgments and actions in given

circumstances, as an ‘uncomprehending’ computer program. For example, the owners of a ski resort

may delegate to an IT-based system the task of pronouncing whether a piste should be open on a given

day, in the light of the various safety-critical factors that may be relevant to the decision. Again, an

artificial agent may, while acting for a human client whose interests it has been given autonomous

powers to represent, commit a moral and legal infringement, such as stealing internet bank account

details of an arbitrary Amazon customer in order to make an online purchase on behalf of its client (see Calverley 2005a). In each of such cases – and there may be many others – we may be inclined to say that the artificial agent is in some sense responsible for the action, even while recognizing that the agent is, strictly speaking, a non-sentient machine, albeit one which is capable of intelligent deliberation of some sort. Whether it would be coherent to ascribe blame to such an agent, or indeed to impose some form of ‘punishment’, is a separate matter, and there is no doubt room for several shades of opinion here.

vii Many would argue that this commits the ‘fact-value’ fallacy – by attempting to derive moral values (concerning how someone’s experiential states are to be treated) from morally neutral factual statements (about the existence of those experiences themselves). To this I would reply that the assertion of a fact-value dichotomy has been widely challenged; and if there is any area where it seems most susceptible to challenge it is this – the relation between statements about experience and moral commitments towards such experiences. In any case the alleged fact-value dichotomy concerns a supposed logical or deductive gap between the two; and the kind of commitment in question here might be of some other kind – for instance a fundamental moral commitment or a non-deductive commitment of rationality. Also, the presuppositions about the structure of thought and language that underlie the fact-value distinction (a neat division into ‘factual’ statements and ‘evaluative’ prescriptions), while perhaps reasonable as a theoretical construct, may be quite inadequate for characterizing the concrete reality of moral and psychological thinking.

viii To describe you as a moral source with respect to me is to describe you as having a moral commitment to assist me. If you are a moral target with respect to me, then I have a moral commitment to assist you.

ix It should be noted that there is another sense of ‘moral target’, in which someone could be a target of moral praise, recrimination, etc. In this sense of ‘target’, an agent could be a target of negative appraisal by doing wrong things or by failing to fulfil their responsibilities. Similarly an agent could be a target of positive moral appraisal if it does good or right things. However, the sense of ‘target’ we are using in the discussion above is different: in that sense, one is a moral target if one is potentially an object of moral concern because of how one is, not because of what one has done or failed to do.

Such an artificial agent may thus be accorded a degree of moral responsibility (or, using the nicely

hedged term of Floridi and Sanders (2004), ‘accountability’). This might be true even if we would also

be inclined to place moral and/or legal responsibility on designers of such agents if their behaviour had

particularly untoward results. Also we might be inclined to attribute moral accountability to such an

artificial agent even where we (humans) do not view ourselves as being required to take moral

responsibility for that agent, that is, for its ‘welfare’, or for its education, or whatever.

This kind of case is perhaps where the Organic position is likely to be under considerable strain. In

the ski piste example, for instance, we may regard the agent as being potentially accountable for

actions that may affect skiers’ welfare, in a way that we would not regard a faulty cable-car as being

accountable – even though we see the artificial agent as having no more inherent interests than we do

the cable-car. Again, in the internet stealing example mentioned above, our willingness to attribute

some kind of responsibility (even a form of moral or legal sanction) to the artificial agent may not in

any way depend on seeing the agent as having its own sentient point of view or welfare, i.e. as being a

target of moral concern in its own right. So it looks as though a reasonable case can be made for saying

that moral responsibility or accountability of some sort may be attributable to an agent even where

no correlative moral rights or interests are deemed attributable to that agent. However, we will see that

there is a different position on this, which the defender of the Organic view can take.

5. Sentience and Being a Moral Patient

On the Organic view, an agent A can be legitimately treated as a proper subject for moral action or

concern only if A has certain more basic properties, which include sentience or phenomenal

consciousness, and, as an extension to those, the capacity for experiencing well- or ill-being. So there

may be a strong connection between being a living, sentient creature (and thus being able to suffer and

enjoy moments or situations in that sentient life, even if not in an articulate, self-conscious way) and


being a moral patient, in the sense of being at least to a significant degree within the threshold of moral

concern. On the Organic view, unless you have a kind of lived sentience in this sense, you don’t have

a call on the moral concern of others, a moral stake. However this view may be questioned by some

people who are working in the area of designing and developing artificial agents. It might be argued

that we may come to see ourselves as having moral responsibilities towards highly intelligent and

functionally sophisticated humanoid robots (to take the clearest kind of case). Such robots may come

to participate in our society in many sorts of ways – for example as companions to the very young or

very old human members of our society. That is, we may develop hybrid human-artificial social

groupings, in which we see ourselves as obliged to take moral responsibility for certain kinds of robots

even when we are not sure whether they are sentient in the ways we are (or, more strongly, are sure

they are not). So, for instance, if some such robot breaks down we may believe that we should fix it,

not just to render it once more useful to us, but for its own sake – because of its needs, not ours. And

this despite the fact that we recognize it as a non-sentient entity.

I am not sure if there is any way to arbitrate clearly on this issue. There may just be a kind of

fundamental difference of intuition between people who would take that view and those who would

insist that a being has to have at least some kind of sentient life in order for it to be appropriate to treat

it, not just instrumentally, but as having its own intrinsic moral interests. It is not clear how widely

held such a view would be, when set out clearly. It should be noticed that it has some uncomfortable

consequences. If we take non-sentient robots as having intrinsic moral claims, then we have to weigh

up those moral claims along with others in circumstances when they may come into conflict. Suppose,

for example that there were to be an explosion or an earthquake, with many humans and humanoid

robots trapped in the wreckage, and with limited time and resources available for rescuing those who

are trapped. The view under consideration seems to commit its supporters to saying that equal time

and resources should be devoted to rescuing the non-sentient robots as rescuing the humans, even if

that meant needlessly losing some human lives, or increasing human injury and suffering. Even if it is

conceded that the sentient injured should be prioritized, then so long as some resources are diverted to trying to rescue the non-sentient injured, that could involve some avoidable loss of sentient lives

or some avoidable prolongation of acute sentient suffering.x

There is possibly a certain kind of marginal case here that should be considered: this is the case

where moral rights or interests may be correctly attributable to non-sentient humanoids on the basis of

such humanoids being granted rights in law – for example rights of ownership. We may change our


laws so that some kinds of humanoid robots may come to be recognized as legal owners of property, as

legal possessors of bank accounts, domiciles, etc. To take a science-fiction case which offers one

possible scenario in which this might occur, Andrew, the robot hero of the movie Bicentennial Manxi

develops the skill of creating exquisite, finely-worked objets d’art out of wood; these become highly

sought after among the well-to-do friends of Andrew’s owner. Sales of these items produce a more-than-

healthy income, and the robot’s owner decides it is only ‘fair’ for Andrew to be enabled to become the

legal owner of the proceeds of the sale of his craft work. This scenario seems not totally implausible,

in outline, if not in detail. Moreover it does not seem to require that the robot is considered to be

sentient or conscious in order for us to find it plausible.xii It may be that legally sanctioned ownership

of money and other kinds of property could become widespread among future artificial humanoids –

even ones which are taken without question to be non-sentient. Further, in such a situation, we might

well be forced to concede that such humanoids had, not just legal rights – for example rights not to

have their property arbitrarily taken from them – but also moral rights (indeed that their legal rights

formed the basis of their moral rights). So, for example, if such a property-owning robot were the

victim of a vicious investment scam we may think that a moral wrong to Andrew, as well as an

infringement of his legal rights, had taken place.

It is possible, then, that even non-sentient humanoids may be intelligibly taken as having certain

sorts of intrinsic moral rights – at least where these moral rights are closely associated with fulfilling

certain criteria for having legal rights. But one could perhaps concede this (and even concede that it

could be an important feature in a future society where such non-sentient humanoids existed in large

numbers co-existing with humans) while at the same time arguing that these are rather special sorts of

rights (again, precisely because of their close linkage with the legal rights on which they depend).

After all, if a non-sentient property-owning humanoid falls upon hard times and becomes destitute, no suffering or hardship is consciously experienced, precisely because of the humanoid’s lack of sentience. Conceding this point, then, does not, I would suggest, militate unduly against the broad claim of the Organic approach: that is, that being considered as having intrinsic moral claims (other than in these special, rather marginal, kinds of cases) requires having the capacity for sentience. So it looks as though, notwithstanding the above kind of case, there is a fairly strong connection between being a moral patient, i.e. having intrinsic moral rights or claims, and living a sentient life which contains scope for experiencing satisfactions and discomforts of various kinds.

x It should be noted that the case under consideration is not that where we have to prioritize between rescuing humans and retrieving valuable or irreplaceable equipment. Here there may be difficult choices to be made – for instance we may need to divert time from attending to human need in order to rescue an important piece of medical equipment which can save many lives if retained intact. The case under consideration is, rather, that the non-sentient creatures are given some priority, not because of their utility to sentient creatures, but because, despite being recognized as non-sentient, they have calls on our moral concern. It is not clear how many people would seriously take that as a nettle worth grasping.

xi For an extended discussion of the short story by Isaac Asimov on which this film is based, see Anderson, this issue.

xii In the movie, it is left rather indeterminate as to whether Andrew is to be considered as being phenomenally conscious or just functionally so – at least at the stage in the story where he acquires property-owning status. The force of the example is not unduly reduced by imagining a definitely non-sentient robot in Andrew’s situation.

6. Moral Agency, Moral Patiency and Empathic Rationality

A key question that needs to be considered now is: what is the relation between being a moral

patient in this sense (having a moral stake, if you will) and being a moral agent? The crucial claim to

be considered is this: that only beings which are capable of being moral patients – of having full-blooded moral claims in the ways outlined earlier – could be counted as being moral agents, that is, as

capable of having moral responsibility or accountability or obligations in the fullest sense. (We here

leave aside any further discussion of the kinds of moral claims just examined, based on property-

owning status and other possible legally-generated kinds of moral claims.) To accept the claim that

being a moral agent, capable of carrying intrinsic moral responsibility for actions and decisions,

requires being a moral patient would have the effect of cutting electronic humanoid robots out from

any possibility of being considered fully morally responsible for their actions, if sentience is required

in order to qualify as a moral patient (as argued above) and only biologically-based beings could have

sentience.

So does one need sentience to be an intrinsic moral agent? One thing that seems certain is that the

dependency doesn’t go in the reverse direction. Many of us consider non-human animals, infants, etc.

to be sentient, and to have the status of moral patients, without our ascribing any intrinsic moral

agency to them. Could this be a problem for the Organic view? A sceptic about the Organic view

may well say that, if being a moral patient does not require moral agency, then maybe the dependency

also fails to hold in the other direction. Maybe an agent could be a fitting subject of moral

responsibility attributions while not being a target for moral concern.

Moreover, it might be said that this could well be true in the case, for example, of an artificially

intelligent agent which has a certain minimum level of rationality, and has the cognitive capacity to

recognize that certain beings have sentient states, and thus moral interests, and to reason about the


effects of different courses of actions on those states of sentience, etc. Such agents, it may be argued,

may be seen as having fully-fledged moral responsibilities in virtue of fulfilling such rationality

conditions (in ways that non-human animals don’t) even if they are not themselves sentient in ways

that humans are (and, arguably, at least some non-human animals are).

This argument seems to me to be far from irresistible. We can agree that cases like that of non-

human animals, infants, etc. do indeed show that having the status of being a moral patient does not

entail that one has the status of being a morally responsible agent. Also we’ve already noted that what

is missing in the case of animals, infants, etc. is, in part, a certain level of rational reflection. But it may

be too simple to assume that rationality is just a cognitive or intellectual matter, which could be

developed in progressively increasing degrees in AI-based agents.

Let us grant that AI agents can embody certain kinds of rationality. Nevertheless, the kind of

rationality which is required to have the status of a full moral agent (the kind of rationality which is

involved in progressing from an infant human to a mature adult moral agent) may be different from, or may

require other elements besides, AI-achievable rationality.

One way to develop this view is to propose that the type of rationality that is associated with being

a morally responsible, morally reflective agent in the human case, may be seen as being integrally

bound up with that human being’s sentience. Indeed one could say that having moral agency requires

that one’s sentience enters into the heart of one’s rationality in a certain way: that it is a form of

rationality which involves the capacity for a kind of affective or empathic identification with the

experiential states of others, where such empathic identification is integrally available to the agent as a

component in its moral deliberations. It might be said, indeed, that this is a kind of rationality that is

integrally connected with the capacity to feel certain kinds of emotions – for example the emotion of

concerned identification with a being which appears to be in great distress or to be experiencing pain.

We might call this notion of rationality empathic rationality, contrasting it with the purely

cognitive or intellectual rationality which might be attributable to (among other things) intelligent

computationally-based artificial agents.xiii The suggestion, then, is that empathic rationality is crucial

to moral agency in the full-blooded sense. This kind of empathic rationality may arguably involve

strong affective elements, in addition to cognitive ones. Moreover, empathic rationality may arguably

be found only in a rational agent who is also a sentient agent. That is, it may be that this kind of

affective, empathic element is just what would be missing in the rationality of an intelligent agent which

lacked sentience.


On this view, then, artificial, informationally-based humanoids that lack sentience would not be

capable of having the necessary kind of rationality, even though they might have rationality in a purely

intellectual sense. Sentience and certain kinds of affective responses that are entailed by sentience, or

which are at least natural concomitants of sentience, would be required for the kind of morally

reflective rationality that, so it would be claimed, is bound up with full-blooded ethical agency. Thus

moral agency would require the ability, not merely to make rationally-based judgments and decisions

in the purely intellectual way that one could expect information-processing mechanisms to do, but to

engage in the kind of empathic rational reflection on the experiential states of others in a way

that seems to be at least partly constitutive of taking a moral point of view. Moreover, this form of

empathic rationality would in turn seem to require sentience. For, so the argument goes, how could a

being empathically grasp the experiential state of another if it did not have the capacity for experiential

states in its own right?

It is difficult to evaluate this position, since it clearly involves a number of assumptions –

assumptions about the nature of what it is to be a responsible moral agent, assumptions about the role

of empathic responses and related affective states in such moral agency, and assumptions about

how the capacity to experience sentient states fits in with such empathic responses. Of course all these

assumptions (and others, no doubt) are open to challenge. But it does, I believe, provide a picture of

moral agency that has considerable plausibility, and that would command support from many people

working in the field of ethical theory. For this reason, it needs to be taken seriously when reflecting on

the ways in which artificial agents may have moral attitudes, and engage in moral action.

Someone working in the field of artificial agents may say, of the discussion to date: “We have

been asked to agree that informationally- or computationally-based artificial agents, of the sort that we

might see in the reasonably near future, could not be considered to be sentient or phenomenally

conscious beings. But why should we concede this point? Why should there not be, in the reasonably near

future, robots created along computational lines that stem from current research, which achieve degrees

of intelligence and functionality sufficient to allow us to judge them to be genuinely conscious or

sentient in the sense understood in the foregoing arguments? Such robots would fulfil all the criteria

for both moral agency and moral patiency that have been at issue in the preceding discussion.”

xiii It might be said that all rationality involves at least some affective components, and that this strongly limits the kinds of rationality – or indeed intelligence – that could be implemented using current methods in AI. (For key discussion of this, see Picard, 1997.) I am not taking a view on this – I am concerned to concede as much as possible to the viewpoint being opposed – including that there can be purely cognitive forms of rationality.


It is certainly true that the scepticism which is voiced by the Organic position about the possibility

of full moral agency in informationally-based humanoids is founded on an assumption that such

artificial agents are not likely to be genuinely sentient beings, and that this is so because, whatever

their functionality, they are not constituted in the right way. The main proposed basis for this

assumption is pretty well known: that sentient consciousness in humans and other creatures derives in

some deep way from the biological make-up of such creatures. It is important to provide some kind of

justification for this assumption, so as to strengthen the credentials of the Organic view.

7. Organism, Moral Status and Autopoiesis

The Organic position, as developed above, claims that there is a strong link between moral

categories and categories of organism, and proposes that sentience plays a central role in mediating this

relation. In this section we will try to shed some light on the notion of organism in a way that makes

clearer the linkage with sentience, and indeed with moral agency. Clearly this is a large field, and we

can only sketch some of the key themes.

A central idea is this. Morality is fundamentally a domain of creatures that have an internally-

organized existence rather than an externally-organized one – that is, creatures that exist not merely as

artefacts whose components have been assembled together by external designers, but which exist in a

stronger, more autonomous sense. On this view the moral universe can contain only, or primarily, self-

organizing, self-maintaining entities that have an inherent, active telos to survive and thrive – entities

whose sole current instances are natural biological organisms. Moreover, consciousness is an

emergent property of the physiological structures of these naturally-occurring self-maintaining

organisms. So, in this view, both moral normativity – the expectations and requirements that we take

to be central to ethics – and the facts of lived, sentient experience, are seen as integrally bound up with

the forms of self-organisation and self-maintenance which are essential features of biological

organisms.

The Organic view, as developed here, thus asserts a strong linkage between (a) conscious, lived

experience; (b) autonomous self-maintenance of biological organisms; and (c) moral norms. It is the

strong association between these elements which, according to the Organic view, renders the project of

artificial ethical agency so problematic. According to the Organic view there is an important sense in

which all biological organisms have to actively engage in the maintenance of their bodily systems. In

this respect they seem to be different from (e.g.) purely electrically powered and computationally


guided mechanisms. The latter cannot be seen, except in rather superficial senses, as having an

inherent motivation to realize the conditions of their self-maintenance: rather it is their external

makers that realize the conditions of their existence.

Is such a view of things well-grounded, or does it simply reflect a set of arbitrary prejudices or

conceptual associations? I would claim that there are good theoretical grounds which are available to

inform the view. To support such a picture of how organisms differ from artefacts, one may draw on

certain strands in the philosophy of biology. Of particular relevance here are the ‘enactive’ approach to

agency (Varela et al., 1991; Thompson, 2005; forthcoming), autopoietic theory (Maturana & Varela,

1980), and the philosophy of biology of Hans Jonas (1966).xiv However I will suggest some ways of

developing certain moral implications of these ideas which have not previously been drawn.

In contrast to the computational view of mind generally accepted by those working in the field of

IT-based artificial agents, the enactive approach to mind centres around the idea of ‘lived

embodiment’. Of the many philosophical strands which can be used to explicate the notion of lived

embodiment a central one concerns the idea of what it is to be an autopoietic, or self-recreating,

individual. An autopoietic system – whether a unicellular or a more complex creature – acts to further

its existence within its environment, through the appropriate exchange of its internal components with

that environment, and via the maintenance of a boundary with it. It has to be said,

however, that there has been a certain evolution in the notion of autopoiesis. In original versions of

autopoietic theory, an autopoietic system was seen as a special kind of machine – one which was

defined functionally as being in continuous activity to maintain its own existence. In recent

developments of the notion (Weber & Varela, 2002, Thompson, 2004), autopoiesis is closely tied to

the notions of sense-making and teleology: that is, autopoietic self-maintenance is a source or ground

of meaning and purpose for that organism (where that meaning or purpose is intrinsic to the organism,

rather than something which is merely the product of a pragmatically useful interpretive attribution on

the part of an observer). On the former view of autopoiesis, it is perhaps left open as to how readily

one might produce artefacts which fully satisfy the conditions of autopoiesis. On the latter version of

autopoiesis, however, autopoietic entities are radically distinguished from ‘mere’ mechanisms, since,

unlike the latter, they enact their own continued existence, and their own purpose or point of view.

All the same, even on the later conception of autopoiesis it is an open question as to whether the

defining properties of autopoiesis can be found outside the realm of the truly biological, and in


particular whether there is any sense in which IT-based constructs could ever be seen as being

assimilable to an autopoietic framework – that is as original self-enacting loci of meaning and purpose

(or indeed of consciousness). However, it can be argued that any programme of producing enactive

artificial agents would have to involve a fairly radical shift in design philosophy from that which

prevails today in most AI or computing science circles. (Di Paolo, 2003; Letelier et al., 2003).

Moreover any successful programme of artificial autopoiesis would perhaps result in entities that were

no longer clearly describable as being machines in contrast to organisms. They would therefore not

fall within the scope of the present discussion, which we have limited to the consideration of entities

whose status is that of non-organic machine.

xiv See additionally the recent synthesizing discussions by Weber and Varela, 2002; also Thompson, 2004, and Di Paolo, 2005.

It is possible to extract from mature enactive-autopoietic theory (see particularly Thompson, 2005;

forthcoming) a certain view of how consciousness emerges from organism. On this view many basic

notions of psychological being – intrinsic teleology, sense-making, cognition, adaptation and

phenomenal consciousness – are all integrally bound up with organism as such. It follows from this

that all these psychological properties exist in some embryonic or primordial way in even the most

primitive unicellular creatures. This does not mean that bacteria literally think or feel or have

conscious experiences in the ways that we take ourselves and our fellow humans to do. However it

does mean that we can see such mental properties as having a lineage that stretches back indefinitely

through biological time, and as being derived, not from some relatively complex structure of brain

organization and function, but rather from the very basis of the autonomous self-maintenance of free-

standing organisms.

The enactive-autopoietic picture thus provides a way of helping us to situate consciousness as a

biological phenomenon – though this perhaps needs spelling out a little. Autopoiesis (in its more

recent reformulation) provides, it has been suggested, the basis for original teleology in organisms, and

thus for a world which includes sense or significance for that organism. On this view teleology and

significance are fundamental to autopoietic systems. So, it may be said, to exist or live as a centre of

purpose, as a centre of significance, is to live as being which experiences or lives significance. Thus

autopoietic significance is the living of a world which is taken to have meaning for the creature

because it is felt as significant. On this view, sentient consciousness can be seen to be as integral to

our lived biological creaturehood as is the intrinsic teleology and meaning-generation of such

creatures. Of course – to reiterate – this is not to say that primitive organisms such as bacteria, not to

mention snails or sea-urchins, have a conscious life anything like ours. Nor, of course, is it to say that


the primordial sentience that is being ascribed to such creatures in virtue of their autopoietic agency

generates any moral demands upon us. (We are not here advocating ‘Justice and equal rights for

paramecia’!) However what is being proposed is that such organisms have, in virtue of their organic

constitution, primitive features which, as they have evolved through progressively more complex

forms of organization in other species, present themselves as sentient, lived experience. They emerge,

in more complex creatures, as capacities to be phenomenally aware of the various forms of suffering

and satisfaction which affect organic life, and to consciously shun the former and seek the latter.

Moreover, these capacities of phenomenal awareness in an organism of what is good or bad for

that organism are the roots of value for such creatures. Here we are talking both of what may be called

creature-based (self-referential) value; and of the externalized (other-referential) value which involves

identification with the goals and needs of fellow creatures. Value of the latter kind provides one

primitive foundation for ethics which, in the intellectualized and culturally ramified forms that humans

have devised throughout a variety of historical settings, has become articulated as explicit moral

norms and codes.

This, then, is the way that the Organic view can be developed, so as to provide a (hopefully!)

coherent account of the foundations of moral normativity, and its roots in the sentient, lived existence

of biological organisms. The Organic view, as thus developed, arguably has important consequences

for how one views the moral status of individuals, and for how that view helps to provide a perspective

on the question of the moral status of artificially-created individuals. In particular the view suggests a

rather distinctive response to the coming challenge in the ethics of human-like machines. Autopoiesis

applies to self-maintaining agents of even the most primitive kind, yet it appears to provide a plausible

way of determining what might be involved in autonomous moral agency. From the autopoietic

perspective, as developed here, an important ingredient of our moral world is the feature whereby the

participants of that world are viewed as beings which are autonomous centres of meaning and purpose,

and further, as living and embodied conscious agents that enact their own sentient existence.xv This

conception of moral status should, it is suggested, apply equally to those we wish to create in our moral

image. On this picture, an agent will be seen as an appropriate source of moral agency only because of

that agent’s status as a self-enacting being that has its own intrinsic purposes, goals and experienced

interests – and hence, using the terminology introduced earlier, because it is as much a moral target as a

moral source. Crucial to this conception of a self-maintaining, enactive being is the idea that such a

being involves, not just lived embodiment and lived meaning, but also lived experience or sentience.


Further, it is the close association between these various elements that helps to explain, not just how

consciousness emerges in embodied creatures, but the ultimate basis for at least some of our moral

norms.

The view that has been built up is complex, and has several aspects. It is a view which regards

with some scepticism the possibility that robotic humanoid creatures, built using current technologies,

could be moral agents or moral patients in the full-blooded sense. The Organic view says that such

robotic agents, while possibly capable of behaviour informed by excellent cognitively rational

reflection, and while no doubt capable in some way of articulating moral categories in their thought

and communication, would nevertheless not be capable of being taken as moral beings of the first

order. A number of reasons have been offered for withholding full moral status from such beings:

these are based on certain key properties which such agents, as long as they are viewable merely as

mechanisms, must lack. It is claimed that such agents, as non-organic mechanisms, must lack

sentience, and thus, for all the rational capabilities that may emerge from their computational design,

will not have the capacities for empathic reflection that are thought to be integral to full moral

agency. Further, it is suggested, until a way is found to generate artificial beings that have the

necessary autonomous self-organization and self-maintenance, in the ways characterized above, we

won’t have produced artificial agents that have genuine moral claims and that have the kind of

autonomy in action that is a precondition for full moral agenthood.

8. Beyond the Organic View: Broader Aspects of Machine Ethics

I have tried to avoid dogmatically asserting the truth of the Organic View in the above. My

intention has rather been to explore its ramifications, and to display some reasons why it might seem

an attractive option. However, if the Organic view were true, then artificial agents with full-blooded

moral status would be much further off than they would seem to be if it were false. It would

seem to be necessary to develop an artificial biology rather than simply to develop artificial agents as

an offshoot of computational and robotic science.

xv This discussion prompts the question of where one draws the line between those creatures which have a sufficient degree of consciousness to be taken as having moral significance and those which don’t. To this there is, it seems, no clear answer if, as the above view seems to imply, consciousness emerges by the tiniest increments along the spectrum of evolutionary development. Just as, when standing on a very flat shore – such as the beaches of Normandy, for example – there is no clear division between land and sea, so there may be a very broad tideline between morally significant and nonsignificant creatures. Whether this is seen as a strength or a weakness of the position is left to the reader to decide.


As we’ve seen, a key element of the Organic view is the claim that consciousness, or sentience, is

at the root of moral status – both status as a moral patient (or target) and as a moral agent (or source).

This latter point is, in fact, independent of the issues to do with organism itself. An opponent may

hold, for example, that a (non-Organic) computationally-based humanoid could in principle be

conscious – something the Organic view denies. The opponent could claim additionally that such a

humanoid could be realized at some time in the future, but perhaps that the conditions necessary for the

realization of artificial, computationally-based consciousness are so demanding that it is unlikely to be

achieved for a great many generations. On such a view there may be a long period when artificial

agents are produced which have none of the features of phenomenological conscious awareness, but

which nevertheless exhibit rich, complex behaviours that raise some of the issues concerning moral

status that have been discussed in the current paper.

A supporter of this position could even agree broadly with that part of the Organic view, as

developed above, which holds that sentience is a crucial precondition for full moral status. So some of

the claims considered here are perhaps not tied inseparably to the whole package of the Organic view –

particularly the ones concerning the roles played by sentience and by empathic rationality as being

constitutive of full moral status. These could be accepted by a developer of artificial agents, who

might nevertheless claim (in opposition to the Organic view) that sentience and empathy are indeed

capable of being developed within computational, non-organic agents, at least in principle.

Further, even on the Organic view, as developed here, it can be readily agreed that artificial

informationally-based agents may be produced, and indeed in large numbers, with certain key

characteristics of moral agency, even if they lack other important properties that would qualify them as

full moral agents. For example, it may be appropriate, and indeed unavoidable in some circumstances,

to adopt emotional and practical attitudes towards such artificial agents – attitudes which have some of

the facets of the full moral stance. A supporter of the Organic position could for instance agree that

such attitudes may perhaps be taken up for pragmatic reasons, rather than for reasons to do with the

inherent moral status of such agents.

The realization of artificial agents that are fully-fledged moral beings would seem to be remote on

the Organic View (if that requires producing artificial organisms satisfying the complex and

demanding requirements of self-enacting agency discussed in the previous section). Nevertheless, the

proliferation of artificial ‘quasi-moral’ agents might be considered to be relatively close at hand – even

on that view – when judged in terms of ease of technological implementation. This may have enormous


social reverberations. Indeed, it is compatible with accepting all the elements of the Organic view

given above that one should agree on the necessity and urgency of building (quasi-)ethical

behavioural constraints of some kind into our artificial agents. For example, it might be agreed that something like

Asimov’s three Laws should be hardwired into all robots which might be in a position to harm humans

(or themselves). One might even develop decision-procedures for robots which enabled them to

generate ethical constraints on their conduct in a more ‘autonomous’ way – that is, which enabled them

to ‘choose’ actions conforming to the constraints that we, as designers, thought it morally incumbent

upon ourselves to place on how robots select their actions and decisions. Such robots might

communicate with us in moral terms, for example offering ethically-couched justifications of their

decisions and actions.
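
To make the idea of such a decision-procedure a little more concrete, the following minimal sketch (in Python, and purely illustrative: the constraint names, numerical thresholds and data structures are assumptions introduced here, not anything specified in the foregoing discussion) shows how designer-imposed constraints might filter a robot’s candidate actions and yield an ethically-couched justification of the kind just described:

# Purely illustrative sketch of a designer-imposed constraint filter for
# candidate actions. All names and thresholds here are hypothetical.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Action:
    name: str
    harm_to_humans: float   # estimated risk of harm to humans (0.0 = none)
    harm_to_self: float     # estimated risk of harm to the robot itself
    mission_value: float    # how strongly the action furthers assigned goals

# A constraint maps a candidate action to (permitted?, reason).
Constraint = Callable[[Action], Tuple[bool, str]]

def no_harm_to_humans(a: Action) -> Tuple[bool, str]:
    ok = a.harm_to_humans == 0.0
    return ok, "poses no risk to humans" if ok else "would risk harm to humans"

def limited_self_risk(a: Action) -> Tuple[bool, str]:
    ok = a.harm_to_self <= 0.5
    return ok, "keeps self-risk within bounds" if ok else "exceeds acceptable self-risk"

def choose_action(candidates: List[Action], constraints: List[Constraint]):
    """Pick the highest-value action satisfying every constraint, returning it
    together with a justification couched in the terms of the constraints."""
    permitted = []
    for a in candidates:
        verdicts = [c(a) for c in constraints]
        if all(ok for ok, _ in verdicts):
            permitted.append((a, [reason for _, reason in verdicts]))
    if not permitted:
        return None, "no candidate action satisfies the imposed constraints"
    best, reasons = max(permitted, key=lambda pair: pair[0].mission_value)
    return best, "chose '" + best.name + "' because it " + " and ".join(reasons)

# Hypothetical usage, loosely echoing the scenario below:
candidates = [
    Action("repair_outer_plates", harm_to_humans=0.0, harm_to_self=0.4, mission_value=0.9),
    Action("skip_inspection",     harm_to_humans=0.0, harm_to_self=0.0, mission_value=0.2),
]
action, justification = choose_action(candidates, [no_harm_to_humans, limited_self_risk])
print(justification)

On the Organic view, of course, the justification string that such a procedure emits would remain a designed, quasi-moral artefact rather than an exercise of primary moral responsibility.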

Consider, for example, a functionally and linguistically sophisticated (but, let us suppose, non-

sentient) robot aboard a space-craft (this is the stuff of too many science-fiction potboilers to mention,

but it’s the kind of artificial agent that should be in existence in a few decades if the claims of many in

the AI community are to be believed). She might have the following dialogue with a human crew-

member.

Crew-member: “Dana, I saw you were working outside the craft – isn’t that a bit reckless of you,

given the proximity of those meteor showers?”

Dana: “Yes, I’m sorry about that, but I thought that I ought to check the safety of the re-entry

module one last time before we began our descent, even though there was little time left and I was putting

my own safety at risk. I made repairs to some of the outer protection plates; it didn’t take too long. I

hope you can see why I thought I should do that.”

Crew-member: “Yes, ok – you probably did the right thing under the circumstances. Thanks, and

well done.”

The moral dimensions of Dana’s decision-making, and of her interaction with the crew member,

are quite clear to see. So it’s perfectly possible to imagine some of the decisions and actions of even

quite conventionally-conceived robots as being represented in moral terms, and a defender of the

Organic view has to accept this. But a supporter of that view would not agree that a robot making such

decisions was engaging in action for which she was morally responsible in a primary sense. (Any

more than Dana might be considered a ‘she’ in a primary sense.)


Of course it remains to be seen if the Organic view survives deeper scrutiny – this is left as an

open question here: all I am doing in the current discussion is identifying it as a view that deserves

careful consideration, and showing one way that it may be developed so that it taps into some important

aspects of current thinking within enactive and autopoietic theory. It may seem as though much of the

argument set out here has been somewhat negative and limitative, putting rather tight reins on the idea

that a rich field of Machine Ethics is on the verge of development, heralding a swiftly-dawning new

age of human-machine moral relationships. If the Organic view is correct, it would seem as if the

inherent artefactual and non-organismal nature of currently envisaged machine agent technology must

be seen as imposing important limitations on the capacity of such machines either to be recipients or

originators of genuine moral concern. But this is not really the case. Even if the Organic view is

correct, it still leaves many possibilities available to be opened up in order to develop a domain of

machine ethics conceived in a broad way. We have already had some glimpses of how this might be so

in the foregoing discussion. For completeness, and in order further to correct the impression that the

position developed here must inevitably be negative about the field of Machine Ethics, I will briefly

outline some of the possibilities that seem to be left open by the Organic view. However space does

not allow me to enter into any proper consideration of them – they are the briefest of sketches only.

(1) Ethical models. First, and most obviously, it will be possible, for all that has been said above,

to develop rich AI-based models of ethical reasoning, appraisal and deliberation – and, indeed, of ethical

emotion. These models will range from screen-based systems to robotic agents with which we can

interact experimentally in a variety of ways. The development of such models may teach us a lot about

human moral attitudes and experiences, and may indeed deepen and ramify our conception of the

human ethical domain. Further, it may be desirable to embed the best of such models into humanoids,

so as to make them act in ways that serve both our and their interests as far as possible.

(2) Advisors. Such models may also serve as useful and instructive moral advisors, assisting us

to unravel the complexities of particular moral dilemmas. Just as expert chess programs, such as Deep

Blue, may provide powerful new tools for analysing chess play (and indeed may ‘beat’ world class

chess masters) even while not fully participating in, nor understanding, the overall human dimensions

of chess as a social interaction, so such moral expert systems may provide useful input into real

decision-making, even while being non-players in the full moral ‘game’. Moral deliberation involves

many complex non-moral elements (such as assembling as many of the contextual facts as possible


which are relevant to an argument, and such as ensuring consistency between different component

judgments in a complex nest of moral evaluations; a minimal, purely illustrative sketch of such a consistency check is given after (7) below). AI-based agents may vastly outstrip humans in

terms of working with the purely informational aspects of decision-making. Further, such

computational agents may offer us novel, and genuinely instructive, moral perspectives – precisely

because they are not subject to many of the ills that flesh is heir to.

(3) Quasi-personal interaction. Humanoid robots will also function as ‘para-persons’ – that is, as

agents which lend themselves to being treated, if only constructively, as sources of morally enhancing

input – for example as ‘carers’ for the elderly, as playmates for the young, and as worthwhile

companions in all the years in between. The behaviours displayed by such humanoid agents will need

to display ‘morally appropriate’ features. As such agents become more and more natural targets for our reactive

attitudes, the divisions in moral status between human and (certain kinds of) artificial agent may

become increasingly blurred.

(4) Robot ‘responsibilities’. Further, we will inevitably have to impose constraints upon the

autonomy of action and decision which will be designed into our moral agents: our robots will perforce

be required to recognize duties of care and other responsibilities, even if, as the Organic position

suggests, these might not be moral responsibilities in a full-blown or primary moral sense. (For

example, as with Dana in the foregoing dialogue, they are likely to figure as ‘morally sensitive’

assistants capable of taking autonomous and ‘responsible’ decisions.)

(5) ‘Rights’. Together with such responsibilities will also, perhaps, come rights of various sorts –

including, for example, rights to be considered, at least in some sense, as owners of property or wealth,

as well as rights to be given the liberty to pursue goals that such agents have been assigned or have

come to adopt for themselves, by appropriate means, and so on. As we have seen, such rights may

well come in tandem with certain innovations and extensions of the legal system –

although it is perhaps not necessary that the attribution of rights and related properties to artificial

humanoid agents has to be limited to taking place within a legal framework.

(6) ‘Moral’ competition for resources. As users, and, in senses to be determined, owners of

certain kinds of resources, machines will compete with humans for access to such resources. Complex

questions of distributive justice as between human and machine agents will thus arise, and for many

such questions, notwithstanding the Organic view, the right decisions to be made may perhaps need to

involve the kind of impartiality, as between biological and non-biological participants, that we now

expect to be adopted when deciding between the competing claims of different kinds of human


participants. Further, humanoids may well be required to enter into various kinds of commitments,

whether or not recognized as legally binding, and it may well be that such humanoids will need to be

designed in such a way that they understand the nature of such commitments and prioritize their

maintenance.

(7) Inter-robot moral constraints. Moreover machines will, of course, interact with other

machines, and here, if in no other area, it will be inevitable that constraints will have to be instituted

which are strongly analogous to the contours of familiar inter-human moral relationships. Again,

however, such constraints are likely to resemble ‘genuine’ moral rules only partially – perhaps closely

or perhaps only faintly.
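
As a small, purely hypothetical illustration of the kind of non-moral, informational book-keeping mentioned under (2) above – checking consistency between component judgments in a nest of moral evaluations – here is a minimal sketch in Python (the outcome names and the pairwise-preference representation are assumptions made only for this example):

# Purely illustrative sketch: check a nest of pairwise judgments of the form
# "outcome A is morally preferable to outcome B" for a cycle, i.e. a set of
# judgments that cannot all be jointly respected. All names are hypothetical.
from collections import defaultdict
from typing import List, Optional, Tuple

def find_inconsistency(judgments: List[Tuple[str, str]]) -> Optional[List[str]]:
    """Return a cyclic chain of outcomes if the judgments are inconsistent, else None."""
    graph = defaultdict(list)
    for better, worse in judgments:
        graph[better].append(worse)

    def dfs(node, path, visited):
        visited.add(node)
        path.append(node)
        for nxt in graph[node]:
            if nxt in path:                      # nxt is already on the current chain: a cycle
                return path[path.index(nxt):] + [nxt]
            if nxt not in visited:
                cycle = dfs(nxt, path, visited)
                if cycle:
                    return cycle
        path.pop()
        return None

    visited = set()
    for node in list(graph):
        if node not in visited:
            cycle = dfs(node, [], visited)
            if cycle:
                return cycle
    return None

# Hypothetical component judgments that turn out to be jointly unsatisfiable:
judgments = [("save_crew", "save_cargo"),
             ("save_cargo", "save_schedule"),
             ("save_schedule", "save_crew")]
print(find_inconsistency(judgments))
# -> ['save_crew', 'save_cargo', 'save_schedule', 'save_crew']

Detecting such a cycle is precisely the sort of informational assistance an advisor system could supply while remaining, in the sense discussed above, a non-player in the full moral ‘game’.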

9. Concluding Remarks

Here, then, are some categories for a future scheme of human-machine (and machine-machine)

ethical interaction, which fall short of the full or primary sense of machine ethics which the Organic

view excludes. This list is no doubt incomplete, and the categories on this list no doubt blend into each

other in various ways. Thus even if (non-Organic) humanoid machines never achieve a fundamental

moral status equivalent to that of humans, for the reasons that supporters of the Organic view say they

won’t, it looks as though there will nevertheless be many ways in which machines will be seen as fit

holders of certain kinds of moral status. And the fact that they might have to be given this ‘lesser’

grade of moral status is something which itself may raise important dilemmas. It would be likely to

produce a kind of stratified society – one which differentiated between natural humans as full moral

beings and artificial humanoids as ‘courtesy’ moral beings. Many might find this morally disturbing or

even morally repugnant.

This is, perhaps, a distant problem, but it may be closer than people are currently inclined to

believe. In any case, technical research programmes for advanced robotic development are already

concerned with the need to build in increased flexibility of operation, robustness, safety, scrutability,

trustability, etc. in robot design, as well as increased communicative transparency between robots and

human users, and increased reliability in robot-robot interaction.

All of these current-day design requirements have moral or quasi-moral aspects to them, so they

need to be considered as part of a wider machine ethics programme. But I would strongly contend that


we will develop an appropriate ethical framework for robots and other systems of the present and near-

future only if we situate such a framework within the larger context which it has been the purpose of

this paper to investigate.

The Organic view may of course itself turn out to be wrong: for example it may depend on an

incomplete or distorted view of what intrinsic moral relations between humans and machines might be

like. Or it may be that it seriously underestimates the rich layers of moral interaction, responsibility,

etc. that will emerge from the complexities of a future human-robot society. A fuller assessment of the

view developed above must be left for another occasion – here all I have been concerned to do is to

argue for its being a position that merits serious consideration in any discussion on the foundations of

Machine Ethics.

Acknowledgements. This paper is the result of long-standing dialogues that the author has had with various members of the Machine Consciousness and Machine Ethics communities. It is a much revised and expanded version of ‘A Robust View of Machine Ethics’, delivered at the AAAI Fall 2005 Symposium on Machine Ethics, Arlington, VA (Anderson et al., 2005). I am grateful, for helpful discussions on aspects of the above paper, to Igor Aleksander, Colin Allen, Michael Anderson, Susan Anderson, Selmer Bringsjord, David Calverley, Ron Chrisley, Robert Clowes, Ruth Crocket, Hanne De Jaegher, Ezequiel Di Paolo, Kathleen Richardson, Aaron Sloman, Iva Smit, Wendell Wallach and Blay Whitby; and also members of the ETHICBOTS group at Middlesex and the PAICS group at Sussex. However these people may not easily identify, or identify with, the ways their inputs have been taken up and used.

References

Anderson, M., Anderson, S.L., Armen, C., eds., 2005. Machine Ethics. Papers from the AAAI Fall Symposium. Technical Report FS-05-06. Menlo Park, CA: AAAI Press.

Calverley, D. 2005a. Towards a Method for Determining the Legal Status of a Conscious Machine. In Chrisley, R.; Clowes, R.; and Torrance, S. eds. Next Generation Approaches to Machine Consciousness: Imagination, Development, Intersubjectivity, and Embodiment, (Proceedings of an AISB05 Symposium), 75-84. Hertfordshire, UK: Univ. Hertfordshire.

Calverley, D. 2005b. Android Science and the Animals Rights Movement: Are There Analogies? Toward Social Mechanisms of Android Science, (Proceedings of CogSci-2005 Workshop), 127-136. Stresa, Italy: Cognitive Science Society.

Di Paolo, E. 2003. Organismically-Inspired Robotics: Homeostatic Adaptation and Natural Teleology Beyond the Closed Sensorimotor Loop. In Murase, K. and Asakura, T. eds. Dynamical Systems Approach to Embodiment and Sociality. 19-42. Adelaide: Advanced Knowledge International.


Di Paolo, E. 2005. Autopoiesis, adaptivity, teleology, agency. Phenomenology and the Cognitive Sciences, 4(4). 429 - 452.

Floridi, L. and Sanders, J. 2004. On the Morality of Artificial Agents, Minds and Machines, 14(3): 349-379.

Franklin, S. 2003. IDA: A Conscious Artefact? Journal of Consciousness Studies. 10(4-5): 47-66.

Holland, O, (ed). 2003. Machine Consciousness. Exeter: Imprint Academic. (Also published as special issue of Journal of Consciousness Studies. 10(4-5))

Jonas, H. 1966. The Phenomenon of Life: Towards a Philosophical Biology. Evanston, Illinois: Northwestern U.P.

Letelier, J., Marin, G., Mpodozis, J. 2003. Autopoietic and (M,R) systems. Journal of Theoretical Biology, 222(2): 261-72.

Maturana, H. & Varela, F. 1980. Autopoiesis and Cognition: The Realization of the Living. Dordrecht, Holland: D. Reidel.

Picard, R. 1997. Affective Computing. Cambridge, MA: MIT Press.

Strawson, P.F. 1974. Freedom and Resentment. In Strawson, P.F., Freedom and Resentment and Other Essays. London: Methuen.

Thompson, E. 2004. Life and Mind: From Autopoiesis to Neurophenomenology, A Tribute to Francisco Varela. Phenomenology and the Cognitive Sciences, 3(4): 381-398.

Thompson, E. 2005. Sensorimotor Subjectivity and the Enactive Approach to Experience. Phenomenology and the Cognitive Sciences, 4(4). Forthcoming.

Thompson, E. forthcoming. Mind in Life: Biology, Phenomenology, and the Sciences of Mind. Harvard University Press, 2007.

Torrance, S.B. 1986. 'Ethics, Mind and Artifice.' in K.S.Gill (ed), AI for Society. Chichester: John Wiley, 55-72.

Torrance, S.B. 1994. 'The Mentality of Robots.' Proceedings of the Aristotelian Society, Supplementary Volume LXVIII, 229 - 262.

Torrance, S.B. 2000. ‘Producing Mind’. Journal of Experimental and Theoretical Artificial Intelligence. xxxx

Torrance, S.B. 2004. Us and Them: Living with Self-Aware Systems, in Smit, I.; Wallach, W.; and Lasker, G. eds. Cognitive, Emotive And Ethical Aspects Of Decision Making In Humans And In Artificial Intelligence, Vol. III, 7-14. Windsor, Ont: IIAS.


Torrance, S.B. 2005. Thin Phenomenality and Machine Consciousness, in Chrisley, R., Clowes, R., and Torrance, S. eds. Next Generation Approaches to Machine Consciousness: Imagination, Development, Intersubjectivity, and Embodiment, (Proceedings of an AISB05 Symposium), Hertfordshire, UK: Univ. Hertfordshire, 59-66.

United Nations. 1948. U.N. Universal Declaration of Human Rights. http://www.unhchr.ch/udhr/index.htm

Varela, F.; Thompson, E.; and Rosch, E. 1991. The Embodied Mind: Cognitive Science and Human Experience. Cambridge, MA: MIT Press.

Weber, A., & Varela, F. 2002. Life after Kant: Natural purposes and the autopoietic foundations of biological individuality. Phenomenology and the Cognitive Sciences, 1(2): 97-125.
