Multi Modal Conversations 09-11-2010
DRAFT 17/11/10
A cross-disciplinary approach to the analysis of language learners' online
conversations
Abstract
Online conversations are now ubiquitous, and language learning researchers are one community with a particular need for a better understanding of data collected from such exchanges. This paper addresses the lack of formalised methodology for analysing research data created in technologically-mediated conversations. First we show the importance of conversations in language learning and establish that appropriate methodologies for investigating them have as yet emerged neither from the field of computer-assisted language learning nor from conversation analysis (CA). We analyse three brief text- and voice-based conversations among language learners engaged in online tutorials via a multimodal platform. From this we conclude that the analysis and interpretation of such exchanges can be improved by a cross-disciplinary approach which consists of augmenting constructs drawn from CA with selected constructs from social semiotics.
Keywords
Multimodality, computer-mediated communication, second language acquisition, social semiotics, conversational analysis.
Introduction
This paper reports on research into interactions undertaken in technology-enhanced environments
by learners seeking to enhance their oral and written fluency in their second or foreign language
(L2). The aim of the study is to establish methodological principles to address a gap in the analysis
of online multimodal data. The paper, which focuses on such phenomena as turn-taking and face-
saving in an online course, takes Conversational Analysis (CA) as its theoretical starting point but
adopts a multimodal perspective that appears to shed greater light on the meaning-making processes
involved in interactive digital environments.
To this end, we will start by establishing the need for an understanding of what learners are
doing when they converse via multimodal electronic systems. We will then discuss the applicability
of CA to multimodal conversations online. Finally, drawing from communication theory and
multimodal social semiotics, we will identify three frameworks which may provide guiding
principles for a methodology with the potential to answer questions raised by our data.
The role played by online conversation in language teaching and learning
Conversations are seen as beneficial to L2 learners for three reasons. Firstly they have been part of
the communicative model of language teaching for half a century, as conversation classes in
which learners, in the safety of the classroom, are invited to experience 'the pressure of conversation' (Cook, 1996 [1991]: 61) that it is assumed they will face when called upon to talk to
native speakers in the target country. Secondly, work carried out since the mid-80s within Second
Language Acquisition theory has established that, providing they are structured so as to require
participants to negotiate meaning, conversations promote socio-cognitive progress (Gass
and Varonis 1995; Long 1998; Gass, Mackey and Pica 1998; Chapelle 2004). Thirdly, for
researchers such as Belz and Thorne (2005) and O'Dowd (2006), who have investigated
intercultural aspects of telecollaborative transnational online projects, conversations are the means
whereby learners from different cultures construct their knowledge of the conversational strategies
of the target culture while developing their skills as interactants.
To attain these learning outcomes, research has shown that conversations have to satisfy
certain criteria. They need to be part of a constraining task (for example a group of learners might
be asked to address a problem then reach a negotiated consensus). However, to help learners cope
with the spontaneity of speech, conversations also have to offer them sufficient time and freedom to
develop new topics or to attend to unexpected conversational moves made by their discourse
partners. Tutorial software can offer such opportunities: it may allow learners to negotiate in groups
online (through the use of grouping tools and breakaway rooms), to respond fast (by textchat or
audio) or more reflectively (with written documents co-created during the interaction). Such
software may also allow the sharing of visual objects or sound files, that act as stimuli for talk.
Multimodal voice-enabled platforms are thus of great interest to language learning researchers.
Some preliminary thinking about how to approach the study of conversations in these
environments was nourished by reflections on our earlier piece of work (Author, 2006: 260),
focused on a course involving both written text (asynchronous and synchronous) and recorded
speech (asynchronous only). We had observed that prescribing the use of specific communication
channels for particular discursive goals was ineffective, because users re-appropriated the environment in their own way, successfully developing conversations in phases of the educational
task, and in spaces of the environment, that were not originally dedicated to this purpose. For
example, although we had structured the asynchronous forum into two separate threads, one for
task-completion and one for social interaction, users exchanged messages about both topics
everywhere in the environment, making no attempt to select the prescribed thread. Building on this
experience, we based the design of the course from which the data below is extracted on the
hypothesis that observations of learners' unconstrained conversational behaviours across the
environment's modalities would provide us with opportunities to understand how they achieved
their discourse aims. We also expected that learners would reach some of these aims by creative
re-assignment of the tools' functions, or 'catachresis' as Rabardel (1996) terms it.
Central to our project was a focus on multimodality, defined for the purpose of this study as
the co-availability of several modalities, this term itself being understood as the material through
which the conditions for interaction are created when humans use an online platform. For example
any technological feature seen in its relationship with a semiotic resource is a modality, so that a
semiotic resource may be associated with a single modality (as when spoken language is associated
solely with the output of an audio channel), or with several modalities (as when written language is
associated both with user-generated messages (real-time and asynchronous) and with system-generated ones).
Analysing conversations in technology-mediated environments
One approach to research into online conversations is computer-mediated discourse analysis.
Introduced by Herring in 2004, it has been influential within the computer-assisted language
learning research community but its application is limited to text-based exchanges. To account for
synchronous voice-enabled interaction, CA appears to be a more promising methodological
approach. Drawing on Goffmans (1967) work on everyday mother-tongue face-to-faceinteractions, Sacks, Schegloff and Jefferson (1974) used CA to analyse everyday conversations in a
technologically-mediated setting, the telephone. Hutchby (2001) further extended the field of
application of CA to Internet-mediated (including video-enhanced) everyday interaction. Mondada
has used CA to explore technology-mediated professional conversations, for example telesurgery
involving surgical teams using both mother tongue and L2 (Mondada, 2003). There has been some
debate about the usefulness of CA in researching face-to-face L2 conversations, with Wagner
(1996) querying it and Seedhouse (1998, 2004) convincingly countering his views. Garcia and
Jacobs (1999) demonstrated CA's appropriateness for analysing online L2 conversations in a paper
on turn-taking in written chat, as long as the research instruments included videos of the screen of
each learner (which today we would call 'dynamic screen capture') in order that hesitations and
self-corrections could be evidenced in users' messages prior to their being sent. Also focused on
textchat, Payne and Ross (2005) used elements of CA to argue that there is a relationship between
written chat proficiency and oral proficiency in L2. But to date there has been no use of CA for
investigations that integrate the multimodality of online settings and human interactivity in an L2
learning project, with the exception of our own work Author (2004) and Author and Another
(forthcoming), where we draw selectively from CA to account for aspects of learning on
audiographic platforms and desk-videoconferencing environments respectively.
The nature of the research gap: CA and synchronous multimodality online
Among the tenets of CA, two are pertinent to the data to be introduced shortly. Like all CA
principles, they can account for conversational material both when participants abide by them and
equally when they breach them. In a simplified form they can be summed up as follows: (a) turns of
speech alternate and interlink, since the basic principle of conditional relevance asserts that given
a question, you may expect an answer; given an apology, expect an acknowledgement; given a
topic, expect that it will be pursued; (b) face-saving is a major preoccupation in any conversation, because conversants are in a relationship of perpetually converging and diverging interests with
their fellow-conversants - you may try to save your own face or to protect others', and it is also
possible to threaten one's own or others' faces for strategic reasons.
In the computer-assisted multimodal communication context under study, discourse partners
use many semiotic systems, including linguistic (written and spoken), iconic and symbolic systems.
They may use these systems in rapid succession (type then draw), quasi-simultaneously (speak
while hitting the send button to post a written message) or they may choose among systems to
make meaning in particular ways. For example, a user may close a conversation by typing 'Bye for
now', by clicking a specific button, or by announcing their withdrawal orally. Different semiotic
systems are also involved in responding to such a move. For example, if users of a synchronous
environment signal their impending departure orally, their discourse partners will receive aural
input, made up of linguistic and non-linguistic information, e.g. a phone heard ringing in the
departing person's home will suggest reasons for their terminating the conversation. Or a user may
type a valedictory message into a box and send it. In this case, their message will be displayed on
the shared screens and will remain a part of the ongoing conversation even after they have
disappeared from the environment. Or again users might avail themselves of the system's
telepresence indicators, e.g. icons of different colours signalling that users are 'offline', 'online',
'on standby' etc., or display symbolic changes to the objects on the screen, such as the greying-out
of a name, or fading-out of a photo. Choosing to leave the conversation by activating such a tool is
a strategy likely to influence the direction of the conversation in a way that is distinct from the
effect produced by disconnecting altogether (in which case the name or image would vanish), or
from the effect of leaving all connections ostensibly active but walking away from the computer for
the rest of the session.
Thus in environments with multiple meaning-making devices, the combinations and
interlinking of semiotic systems and effects can become extremely complex, and there is scope for
observing conditional relevance and face-saving at work, through studying the users' conventional
as well as catachrestic use of the tools. In the rest of this paper we seek theoretical support when
attempting to grasp this complex reality, by studying three brief conversations carried out through
the Lyceum environment as part of an English-for-Special-Purposes project. The next section briefly
describes the system and the pedagogical setting, and then introduces the three data sets.
The Copas project: environment, population and learning context
Lyceum is a synchronous audiographic groupware system used at the University of XXXX between
1995 and 2009, designed to facilitate social learning in distance tutorials. Figure 1 shows the
characteristic tools and spaces of this tutorial platform: a facility for creating rooms for small-
group work, a space for textual and graphic displays (whiteboards or other shared documents), a
space representing the participants' conversational status (speaking, listening quietly or seeking
permission to take a turn), a space for text-chatting and icons for indicating agreement or
disagreement non-vocally and non-textually (voting or Tick icons).
[INSERT Fig 1. Screenshot from a Lyceum session]
Fig. 1. Screenshot from a Lyceum session
While technically the University's servers may accommodate hundreds of connections at a
time, the logistics of turn-taking on this platform are such that the optimal number of participants is
10 to 15 per tutorial room. However the subdividing of plenary groups into separate sub-groups of 3
to 5 students, and back again to plenary spaces, is easy and almost instantaneous. Within each room,
different tools and shared spaces can be displayed and collaboratively used. A turn-taking
management system based on clickable icons is another feature of such platforms that has relevance for our study. As shown in Figure 1, a raised hand icon symbolises an intention to take a turn, while
a loudhailer or a microphone icon indicates who is speaking. However, there is no technical
obstacle to ignoring the turn-taking tool altogether (although in practice no more than three people
are recommended to speak at once, because of aural overload on listeners).
The extracts discussed below originate from a project concerned with teacher-learner
communication in an audio-synchronous environment. The project, known as Copas, was a
partnership between the University of XXXX and the University of YYYY. Sixteen French-
speaking students studying for a Professional Masters in Open and Distance Teaching (ODT)
worked in two groups of eight, connecting from their homes in various parts of France. Each group
had an English native speaker tutor connecting from the United Kingdom. The groups met during
10 sessions of over an hour each. The course had a dual objective, linguistic and professional,
which was the development of competences in ODT through spoken and written English. The less
proficient group (false beginners with wide variations in knowledge) provided the extracts
discussed below.
The Lyceum environment and the Copas project are presented in detail in Vetter and
Chanier (2006). The environment supports semiotic resources for constructing discourse in
interaction, such as natural language in its written and spoken forms, as well as visual resources
such as icons, images, colours and shapes.
The data: three multimodal conversations
The three extracts occur within the last seven minutes of the last session in the course. Both Tables 1
and 2 give simplified versions of a tabular transcription of the screen videos. Reading the top row of
Table 1 from left to right, the transcripts show: participants' initials (User), the speech turns (T,
characterising all inputs, rather than simply those in speech), chronological sequences (Time) and
data collected from the four modalities (in the four rightmost columns).
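The tabular transcript format just described can be sketched as a simple data structure. The following is a hypothetical illustration only, not the authors' transcription tool: the field names, times and utterance contents are invented, and only the turn numbers (T1, T3, T6, T9) echo Extract 1 as discussed below.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of one row of the tabular transcript: one column per
# modality, with T numbering ALL inputs, not only speech. Field names are
# our labels, not the authors' transcription scheme.
@dataclass
class TranscriptRow:
    user: str                       # participant's initials
    turn: int                       # T: the turn number across all modalities
    time: str                       # chronological position, e.g. "00:12"
    audio: Optional[str] = None     # transcribed speech, if any
    text_chat: Optional[str] = None
    shared_doc: Optional[str] = None
    icon: Optional[str] = None      # e.g. "Tick", "raised hand"

def trace_user(rows, user):
    """Follow one participant's inputs across modalities, in turn order."""
    return [(r.turn, m, getattr(r, m))
            for r in sorted(rows, key=lambda r: r.turn) if r.user == user
            for m in ("audio", "text_chat", "shared_doc", "icon")
            if getattr(r, m) is not None]

# A miniature of Extract 1: H pursues one topic across three modalities.
rows = [
    TranscriptRow("H", 1, "00:01", audio="technical voc- voca-"),
    TranscriptRow("C", 3, "00:09", shared_doc="vocabulary learning"),
    TranscriptRow("H", 6, "00:20", text_chat="technical vocabulary"),
    TranscriptRow("H", 9, "00:31", icon="Tick"),
]
print(trace_user(rows, "H"))
```

Tracing a single user in this way makes cross-modal topic-maintenance visible as a sequence, which a purely vertical (column-by-column) reading of the table would hide.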
Table 1
Re-interpreting sequentiality (Extract 1)
[INSERT TABLE 1 EXTRACT 1]
When the first extract begins, the learners are engaged in an evaluation-type conversation.
Three students, H, C and A, are communicating orally, negotiating an agreed heading summing up
what they think they mostly learned during the course. The heading then needs to be typed on the
shared document screen by one of the group. In Extract 1 (Table 1), H twice tries to pronounce the
phrase 'technical vocabulary' (at T1 and T7) but stutters and hesitates each time. As C attempts to
settle the answer by typing 'vocabulary learning' in the shared document (T3) and eliciting H's
approval of this formulation (T5), H holds to his original wording by writing it in the chat window
(T6). This results in C modifying (T8) what he had typed on the shared document, which H finally
approves (T9) by clicking the Tick icon. Arrows show topic-maintenance by H across modalities.
To compensate for his articulatory problems in English, H deploys an alternative conversational
strategy that takes advantage of the environment's multimodality. His strategy cannot be made
visible through conversation analysis conducted in its classical form since CA relies on the
sequentiality principle, and the conversation, if read vertically down the second column of Table 1,
has no sequencing that could be sensibly interpreted. Sequentiality is in fact present, but can only be
detected by analysing the four rightmost columns.
Table 2
Re-interpreting the face-saving principle (Extract 2)
[INSERT TABLE 2 EXTRACT 2]
Just before the start of Extract 2, the tutor had launched the conversation by asking the
members of the group (A, P, G and C) to introduce themselves orally. The extract in Table 2 shows
the conversation proceeding orally, with some input typed into the text chat window (highlighted in
grey-blue), while two of the learners (A and P) carry out a completely different conversation in text
chat (highlighted in green), concerned with an auditory difficulty. In the recording which was used
for the transcription, A speaks quite audibly but, possibly due to her home setup or for server-
related reasons, P does not hear her. So A and P run a conversation in parallel with the main
conversation initiated by the tutor. They use chat to construct a dialogue on a different theme,
without apparently disturbing the main conversation which is proceeding in the audio modality.
Thus, their attitude may be seen as non-transgressive in terms of saving the tutor's face. In a
non-virtual classroom, such a move would probably have been seen as face-threatening for the tutor,
who may have taken steps to stop it, had it persisted for more than a few seconds. The face-saving
principle helps explain A and P's conversational moves. However, the example shows how this
principle needs to be re-interpreted for this online multimodal communication scenario.
In the last example, Figure 2, we see how learner C uses the four modalities at his disposal:
audio, text-chat, Tick icon and shared document.
[INSERT FIGURE 2]
Fig. 2. Use of modalities by participants in Extract 3
The chart in Figure 2 highlights discrepancies in the ways each participant (H, C and A) uses
these modalities: the horizontal axis shows the modalities available while the vertical axis gives the
number of speech turns per individual. Learner C's inputs are mainly via the audio channel (45
audio speech turns, 4 text-chat turns). Of the group, C has the highest level of proficiency, as
evidenced by his score in the pre-experiment test and by his ease with target structures such as
'would you repeat?', 'we must choose', 'we can't answer', 'do you want to add something?', 'don't
you think so?', 'are you OK with what I write?' etc. Content analysis of his input shows that in over
half his spoken turns (28 out of 44) he is asking for others' opinions. Yet all his text-chat inputs are
language accuracy checks. Additionally, C makes much more use of the shared document than do
his two colleagues. In data collected from the parts of the session that precede and follow Extract 3,
90% of C's turns in the shared document are directly preceded by a turn elsewhere (mostly in the
audio channel) seeking confirmation of the group's approval of what he has written.
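The per-participant, per-modality turn counts that underlie a chart like Figure 2 can be tallied in a few lines. This is a hypothetical sketch: the (user, modality) pairs below are invented for illustration and are not the Copas project's data.

```python
from collections import Counter

# Hypothetical sketch: tallying turns per participant per modality, the
# kind of count a chart like Figure 2 plots. The data here are invented.
turns = [
    ("C", "audio"), ("C", "audio"), ("C", "text_chat"),
    ("H", "text_chat"), ("H", "icon"),
    ("A", "audio"),
]
counts = Counter(turns)

# Regroup into {user: {modality: n}} for plotting or inspection.
by_user = {}
for (user, modality), n in counts.items():
    by_user.setdefault(user, {})[modality] = n

print(by_user)
```

Comparing these per-user profiles is what reveals the kind of modality specialisation attributed to learner C above.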
We hypothesize that C uses the text-chat modality when he is checking the accuracy of the
lexicogrammatical forms he is using rather than moving through the conversational agenda of the
class. He prefers to progress through the class agenda via the shared document modality, but only
after obtaining consensus from his peers orally. This pattern of use could relate to two different yet
converging factors: C may be specialising particular discourse aims to particular screen elements,
and his representation of himself as a communicator may also play a role in which of these he uses.
For example, in relation to the elements in the screen layout, analysts of his conversational
strategies would need to assess the degree of salience of the text-chat window, tucked away
unobtrusively (and indicated in the bottom right-hand corner of Figure 1 by an arrow pointing to
'chat'), in contrast to the intrusiveness of the sound of his voice coming through each group
member's headset. As for self-representation, we may posit that face-saving issues are involved: for
example C might be prepared to give an image of himself as a confident language speaker in the
audio channel while specialising the chat window for more risk-taking face-threatening activities,
such as asking for help with English forms. A similar explanation might be offered for his use of the
shared document: the visual organisation of the screen when that document is uploaded to it (see
Figure 3 further down), the central disposition of that document and its status as the 'official record'
of the group's collaboratively negotiated view may explain both his self-appointed guardianship
of it and his diffidence in committing material to it, unsupported by his peers.
Discussion
In this section and on the basis of the findings from the extracts presented earlier we suggest that
while CA remains a useful approach to the understanding of sense-making in real-time online
multimodal settings, it needs to be rearticulated. Our suggestion is that such reframing can be done
in the light of three theoretical frameworks drawing from affordance theory as developed by Gibson
(1979), multimodal social semiotics and geosemiotics respectively. Our reading of the data showed
that two of the principles of traditional CA, sequentiality (Extract 1) and face management (Extract
2) could be used to characterise conversational moves within electronically-mediated multimodal
conversations, albeit redefined, because when more than one modality was available, both
sequentiality and face management operated across modalities. Hutchby's (2001) work is, we
suggest, relevant in this respect.
1. CA re-interpreted in the light of communicative affordances
Hutchby (2001) reminds us through the example of the early history of telephone communication
that technology is in a reciprocal relationship with its users. The telephone was originally sold as an
instrument for transacting business or for exchanging practical information. Early marketing
stressed this functional use to the extent of advising subscribers to wait until late at night if they
really needed to use the machine for the purpose of personal chat. However users were not
persuaded and appropriated the telephone as a medium for family or other intimate conversations,
forcing the telephone companies to review their marketing strategies.
[...] while designers may be said to have some control over the features they design into an
artefact, and while they may have some idea about the range of uses to which the artefact
should be put, they have little control over the artefact's communicative affordances - over
the range of things it turns out to enable people to do (Hutchby, 2001: 123).
We witnessed this mechanism at work in the project extracts, where learners appropriated
modalities in diverse ways. For example the text-chat was re-appropriated in the contexts of
different conversations in order to support control over content (Extract 1), to provide technical
assistance (Extract 2) and to enact face management (Extract 3). We also observed that individual
participants engaged with different communicative affordances to satisfy identical communicative
2. The influence of design on discourse in multimodal environments
The materiality of the environment impacts on the dynamics of conversations. In Extract 2, two
conversations were progressing in parallel. It is not accidental that the tutor-initiated one was
carried out orally while the students' exchange about sound levels was conducted in text-chat: all
we need to do to become persuaded of this is to mentally invert the modal choices and imagine that
the tutor led his tutorial via postings in the text-chat while students talked about other topics in the
audio channel. It is unlikely that the group would accept such a position for the tutor, and we draw
from multimodal social semiotics to help explain why.
Kress and van Leeuwen's (2001) work on the semiotics of multimodal pages in newspapers
and books identifies the four dimensions of structuring for such artefacts: discourse, production,
dissemination and design. While we believe that all four are relevant to understanding meaning-
making in electronic learning environments (see for example a discussion of the impact of
dissemination on learner progress in Author, 2004: 523-524), for the purpose of the current
discussion we concentrate on one of them, the dimension of design. The designers of the software
that our learners used can appositely be compared with the architect in Kress and van Leeuwen's
explanation of the relationship between design and discourse:
An architect, for instance, designs (but does not build) a house or a block of apartments. The
discourse provides a certain view of how houses are lived in the way they do, and arguments
which critique or defend this way of life. The design of the house then conceptualises how to
give shape to this discourse in the form of a house, or a type of apartment (Kress and van
Leeuwen, 2001: 6).
The relevance of this architectural example will become clear as we look at two very
different designs of environments that have been used for language learning, Lyceum and Traveler,
through comments made by learners. Lyceum (Figure 3), as we have seen, is an academic tool
designed to look like a university campus building, which is a traditional design choice for
electronic learning environments as pointed out by Bayne (2008). Traveler (Figure 4) is an
avatar-based system with a fantasy feel inherited from the world of games.
[INSERT FIGURE 3]
Fig. 3: The design of a Lyceum screen, with whiteboard displaying learners' work
[INSERT FIGURE 4]
Fig. 4. The design of a Traveler screen
Just as the architect has provided a shape for the cultural discourses of human habitats, so
the designers of electronic environments and virtual worlds conceptualise the form that the
environments take on the screen into interpretable signs. For example here are responses from two
of our users (for Lyceum) and two of Örnberg Berglund's (2005) users (for Traveler), upon being
asked about their feelings of presence when online:
- Lyceum user 1: Quand le prof rentre dans la salle, cela ne dérange pas. Je sais pas comment l'expliquer. (When the teacher enters the room, it's not intrusive. I don't know how to
explain it).
- Lyceum user 2: Le style du prof joue, mais le fait qu'il est invisible, il ne peut pas s'imposer de la même façon qu'en présentiel. (The teacher's style is a factor but the fact that he's
invisible, he can't impose himself in the same way as in face-to-face).
- Traveler user 1: It took me to another world and was a real adrenaline buzz. It was on my screen and I was conscious of it always, but I was definitely virtually gone from my usual
habitat.
- Traveler user 2: I'm always immersed. [...] It doesn't matter that the environment is artificial. [...] I think of the place as real.
Whereas these two Traveler users produce a discourse of emotions and escapism, the
discourse of Lyceum users reflects school-like representations of a particular type: teacher-led
classes. We make two comments here. Firstly, although these differences in perception may not be
surprising given the strongly contrasted visual identities of Figures 3 and 4, the question is whether
two groups using these environments for achieving the same language-learning objectives might
have very different types of conversations in each environment. The second observation relates to
designers and the uses made of their designs. Lyceum's design was underpinned by a democratic
and participative pedagogical posture: 'We have imposed minimal technical constraints on floor
control: anyone can speak anytime' (Buckingham-Shum, Marshall, Brier and Evans, 2001: 4). Yet
the users' comments show a preoccupation with teacher control. It is likely that this is part of their
pre-existing non-virtual educational culture, in which case the question can be asked: to what extent
and in what ways can the design features of interactive learning environments transform the users'
representations of self? The answer to this question is another determiner of sense-making in these
environments. Finally, regarding the device which introduced this section, i.e. the proposal that the
Copas tutor could conduct core tutorial business in the chat box while the students conversed
orally, evidence from Lyceum users' perceptions supports the view that the system's design
provides a shape for the cultural discourses of traditional teacher-centered classrooms. But based on
multimodal social semiotics' understanding of design, there is no in-principle reason why other
types of design could not work to support other cultural discourses, producing distinct types of
conversations.
3. The role of the body in relation to space in multimodal environments
While Scollon and Scollon (2003) acknowledge the importance of Kress and van Leeuwen's design
dimension, they re-interpret and re-inforce it in order to account for interaction. We have found
their insights useful for understanding the way that our users perform conversations by choosing
among the different spaces offered to them within the interface. Scollon et al. stress the spatial
dimension of social semiotic resources, presenting a framework which they call geosemiotics:
Geosemiotics is the study of meaning systems by which language is located in the material
world. This includes not just the location of words on the page you are reading now but also
the location of the book in your hands and your location as you stand or sit reading this
(Scollon et al, 2003: x-xi).
The authors structure geosemiotics into three sub-sets: the interaction order, visual semiotics
(on which we will not elaborate here, as this concept comes close to Kress and van Leeuwen's
notion of design mentioned earlier) and space semiotics. The understanding of space that is of
interest to us in our study of learners using a virtual environment on a computer is predicated on
each of these three sub-sets.
The interaction order provides a construct for understanding how individuals perceive the
interactional value of the space they choose to use. In their description of the interaction order, the
authors include perceptual spaces and interpersonal distances. Dominant perceptual spaces are
visual and auditory (less noticed ones are olfactory, thermal and tactile). The addition of the
construct of interpersonal distances, as a scale of values inspired by Hall's (1969) work on
proxemics, allows geosemiotics to ask questions about the relationship between space, sound,
embodiment and socialisation. For example, the auditory space which I perceive and my perceived
intimacy with, or distance from, the individual vocalising the sound that I am hearing together
create the semiotic resource through which I embody meanings. Applying this framework to the
Copéas participants, in particular to the parallel conversation mechanism in Extract 2 and to the
multimodal preferences of the learner in Extract 3, the question becomes: how do they co-construct
interpersonal values (intimate, personal, social, public) into conversations which proceed
simultaneously through visual spaces of varying salience and through an auditory space defined by
the spatially and tactilely intimate device of an earpiece or headset?
Space semiotics, in Scollon and Scollon's words, is the most fundamental part of geosemiotics,
because it asks 'Where in the world is the sign or image located?' and because it aims to account for
any aspect of the meaning that is predicated on the placement of the sign in the material world
(2003: 146, our italics). What distinguishes the well-rehearsed debates within visual semiotics on
the subversive placement of images for artistic purposes (e.g. Warhol's soup cans) from the focus of
space semiotics is that the latter looks at the material world as a whole, and not simply at
materials used for the bearing of signs, such as paper, canvas or brick. In terms of multimodal
electronic environments, space semiotics provides the basis for asking questions such as: how do
users decode and encode meanings in a material situation involving their computer and its various
peripherals (keyboard, mouse or keypad, webcam) as well as other stimuli around them (possibly
another computer, a video screen, a person physically present who is talking, writing or drawing,
etc.)?
Conclusion
References
Author (2004) Oral Conversations Online: Redefining Oral Competence in Synchronous
Environments, ReCALL, 16(2): 520-538.
Author (2006) Interactive Task Design: Metachat and the Whole Learner, in M. del Pilar García
Mayo (ed.), Investigating Tasks in Formal Language Learning. Clevedon: Multilingual
Matters, 242-264.
Author and Another (forthcoming) The Multimodal Mediation of Semiotic Resources in an Online
Conversation, in Develotte, C., Kern, R. and Author (eds), Décrire la communication en ligne:
le face-à-face distanciel. Lyon: ENS Éditions.
Bayne, S. (2008) Higher Education as a Visual Practice: Seeing Through the Virtual Learning
Environment, Teaching in Higher Education, 13(4): 395-410.
Belz, J. and Thorne, S. (eds) (2005). Internet-Mediated Intercultural Foreign Language Education.
Boston: Thomson Heinle.
Buckingham-Shum, S., Marshall, S. and Evans, T. (2001) Lyceum: Internet Voice Groupware for
Distance Learning, in Proceedings of the First European Conference on Computer-
Supported Collaborative Learning. Maastricht, The Netherlands.
Chapelle, C. (2004). Technology and Second Language Learning: Expanding Methods and
Agendas, System, 32: 593-601.
Cook, V. (1996 [1991]). Second Language Teaching and Language Learning. London: Arnold.
Garcia, A. and Jacobs, J. (1999). The Eyes of the Beholder: Understanding the Turn-Taking System
in Quasi-Synchronous Computer-Mediated Communication, Research on Language and
Social Interaction, 32(4): 337-367.
Gass, S., Mackey, A. and Pica, T. (1998). The Role of Input and Interaction in Second Language
Acquisition, Modern Language Journal, 82: 299-307.
Gass, S. and Varonis, E. (1994). Input, Interaction, and Second Language Production, Studies in
Second Language Acquisition, 16(3): 283-302.
Gibson, J. J. (1979). The Ecological Approach to Visual Perception. Boston: Houghton Mifflin.
Goffman, E. (1967) Interaction Ritual: Essays on Face-to-Face Behavior. New York: Random
House.
Hall, E. (1969). The Hidden Dimension. New York: Doubleday.
Herring, S. (2004). Computer-Mediated Discourse Analysis: An Approach to Researching Online
Behavior, in S. Barab, R. Kling and J. Gray (eds), Designing for Virtual Communities in the
Service of Learning. New York: Cambridge University Press, 338-376.
Huang, A. W. and Chuang, T. R. (2009) Social Tagging, Online Communication, and Peircean
Semiotics: A Conceptual Framework, Journal of Information Science, 35(3): 340-357.
Hutchby, I. (2001). Conversation and Technology: From the Telephone to the Internet. Cambridge:
Polity.
Jewitt, C. and Kress, G. (2003). Multimodal Literacy. Bern: Peter Lang Publishing.
Jewitt, C. (2004). Multimodality and New Communication Technologies, in P. LeVine and
R. Scollon (eds), Discourse and Technology: Multimodal Discourse Analysis. Washington
DC: Georgetown University Press, 184-195.
Kress, G. and van Leeuwen, T. (2001). Multimodal Discourse: The Modes and Media of
Contemporary Communication. London: Arnold.
Lemke, J. (2006). Towards Critical Multimedia Literacy, in M. McKenna, L. Labbo, R. Kieffer
and D. Reinking (eds), International Handbook of Literacy and Technology, Volume III.
Mahwah, New Jersey: Lawrence Erlbaum Associates.
Long, M. (1983). Linguistic and Conversational Adjustments to Non-Native Speakers, Studies in
Second Language Acquisition, 5(2): 177-193.
Mondada, L. (2003). Describing Surgical Gestures: The View from Researchers' and Surgeons'
Video Recordings, in Proceedings of the First International Gesture Conference, Austin, July
2002.
O'Dowd, R. (2006). Telecollaboration and the Development of Intercultural Communicative
Competence. Berlin: Langenscheidt.
Örnberg Berglund, T. (2005) Multimodality in a Three-Dimensional Voice Chat, in Proceedings of
the 3rd Conference on Multimodal Communication, Papers in Theoretical Linguistics.
Göteborg: Göteborg University.
Payne, J. S. and Ross, B. M. (2005) Synchronous CMC, Working Memory, and L2 Oral
Proficiency Development, Language Learning and Technology, 9(3): 35-54.
Rabardel, P. (1995) Les Hommes et les Technologies: Approche Cognitive des Instruments
Contemporains. Paris: Armand Colin.
Sacks, H., Schegloff, E. and Jefferson, G. (1974). A Simplest Systematics for the Organisation of
Turn-Taking for Conversation, Language, 50: 696-735.
Scollon, R. and Scollon, S. W. (2003). Discourses in Place: Language in the Material World.
London and New York: Routledge.
Seedhouse, P. (1998) CA and the Analysis of Foreign Language Interaction: A Reply to Wagner,
Journal of Pragmatics, 30: 85-102.
Seedhouse, P. (2004) The Interactional Architecture of the Language Classroom: A Conversation
Analysis Perspective. Oxford: Blackwell.
van Leeuwen, T. and Jewitt, C. (2001) A Handbook of Visual Analysis. London: Sage.
Vetter, A. and Chanier, T. (2006). Supporting Oral Production for Professional Purposes in
Synchronous Communication with Heterogeneous Learners, ReCALL, 18(1): 5-23.
Wagner, J. (1996) Language Acquisition through Foreign Language Interaction: A Critical Review
of Studies in Second Language Acquisition, Journal of Pragmatics, 26: 215-235.
Online reference
Traveler: http://dipaola.org. Accessed 23.10.2010.